Arainach 8 hours ago

This is a rational take, which is why it is wrong.

I agree that we're not about to be all replaced with AGI, that there is still a need for junior eng, and with several of these points.

None of those arguments matter if the C-suite doesn't care and keeps doing crazy layoffs so they can buy more GPUs, and among that group, intelligence and rationality are far less common than cargo-cult following.

  • simonw 8 hours ago

    The C suite may make some dumb short-term mistakes - just like twenty years ago when they tried to outsource all software development to cheaper countries - but they'll either course-correct when they spot those mistakes or will be out-competed by other, smarter companies.

    • reactordev 8 hours ago

      The market will always respond. When it happened before, younger, smarter, more efficient competitors emerged.

    • jimbob45 7 hours ago

      HR can remain irrational longer than I can remain solvent.

  • DanielHB 3 hours ago

    It takes years, if not decades, but that kind of attitude eventually leads to IBM.

  • lloeki 4 hours ago

    It's not just the C-suite.

    I keep seeing and talking to people who are completely high, guzzling Kool-Aid straight from a pressure tap.

    The level of confirmation bias is absolutely unhinged: "I told $AGENTICFOOLLM to do that and WOW, this PR is 90% AI!", _ignoring every previous failed attempt_ and the 10% of human work needed for what is ultimately a 10-line diff, and handwaving any counterpoint away as "you're holding it wrong".

    I'm essentially seeing two groups emerge: those all aboard the high-velocity hype train, and those who are curious about the technology but absolutely appalled by the former's irrational attitude, which is tantamount to completely winging it and diving head first off a cliff, deeply convinced that of course the tide will rise at just the right time to keep skulls from being cracked open on very big, sharp rocks.

    • zombot 3 hours ago

      I bet there are also a good number of paid shills among them. If you look at how much money goes into the tech, it's not too far-fetched to invest a tiny fraction of that in human "agents" to push the narrative. The endless repetition will produce some believers, even if the story is a complete lie.

      • guappa 3 hours ago

        Not necessarily paid shills, but it's a good way to get a promotion, and by the time it's revealed that it doesn't actually work, they'll already have the promotion and will jump on the next hype to get the next one.

      • globalnode 3 hours ago

        Don't forget the useful idiots.

        • bluefirebrand 9 minutes ago

          In tech we seem to have a tendency to believe people are rational and behave predictably, but the truth is there are still plenty of useful idiots, even in software and other "high intelligence" fields.

  • paulddraper 6 hours ago

    Well that’s the nice thing about capitalism.

    If it doesn’t work, it eventually dies.

    • hermitcrab 2 hours ago

      >If it doesn’t work, it eventually dies.

      Unless it is a bank, in which case it gets bailed out by tax payers.

    • Arainach 6 hours ago

      And the not nice thing about capitalism is that it can keep not working longer than most of us can pay for rent and food.

    • zombot 3 hours ago

      The belief in a rational market approaches religion.

      • rightbyte 2 hours ago

        The dissolution of the Soviet Union gave the neoliberals hubris or something, and as their numbers dwindle the ones left seem to get crazier.

    • asimovfan 2 hours ago

      Can you please define "does not work", and give some examples of things that died because they didn't work?

    • guappa 3 hours ago

      Capitalism failed in 1929 but its corpse is still here…

      • zombot 2 hours ago

        Depends on who you ask. Today's billionaires would disagree.

billy99k 9 hours ago

"What I see is a future where AI handles the grunt work, freeing us up to focus on the truly human part of creation: the next step, the novel idea, the new invention. If we don’t have to spend five years of our early careers doing repetitive tasks, that isn’t a threat, it’s a massive opportunity. It’s an acceleration of our potential."

The problem is that only a fraction of software developers have the ability/skills to work on the hard problems. A much larger percentage will only be able to work on things like CRUD apps and grunt work.

When these jobs are eliminated, many developers will be out of work.

  • anilgulecha 8 hours ago

    IMO, if this ends up occurring, it will follow how other practitioner roles have evolved. Take medicine and doctors, for example: there's a high bar to reach before you can perform specialist surgery, and until then you hone your skills and practice. Compensation isn't lucrative from the get-go, but it can become so once you reach the specialist level. At that point you are liable for the work done, hence such roles are typically licensed (CAs, lawyers, etc.).

    So if I have to make a few 5 year predictions:

    1. The key human engineering skill will be taking liability for the output produced by agents. You will be responsible for the signoff, and for any good/bad that comes from it.

    2. Some engineering roles/areas will become a "licensed" play, the way Canada treats other engineering disciplines.

    3. Compensation at the entry level will be lower, and the expected time to ramp up to a productive level will be longer.

    4. Careers will meaningfully start only at the senior level. At the junior level, your focus is to learn enough of the fundamentals, patterns and design principles so you reach the senior level and be a net positive in the team.

    • chii 8 hours ago

      > At the junior level, your focus is to learn enough of the fundamentals, patterns and design principles so you reach the senior level and be a net positive in the team.

      I suspect that juniors will not want to do this, because the end result of becoming a senior is not lucrative enough given the pace of LLM advancement.

    • turbofreak 8 hours ago

      Canada?? They can't build a subway station in 5 years, never mind restructure a massive job sector like this lmao

      • bluefirebrand 4 minutes ago

        You're being downvoted but you're actually spot on

        Calgary was supposed to have a new train line; planning has been in motion for years. Back in 2019, when I bought my house, the new train was supposed to open in 2025. As far as I know, not a single piece of track has been laid yet. So... yes.

  • csomar 6 hours ago

    This. There are millions of software developers, but only hundreds (thousands?) working on the cutting edge of things. Think of popular open source projects used by the masses: usually one developer, or a handful, does most of the work. If the other side of the puzzle (integration) becomes automated, 95% or more of software developers are redundant.

  • chii 8 hours ago

    > A much larger percentage will only be able to work on things like CRUD apps and grunt work.

    which is lower valued, and thus it is economically "correct" to have them be replaced when an appropriate automation method is found.

    > When these jobs are eliminated, many developers will be out of work.

    like in the past, those developers will need to either move up the value chain, or move out into a different career. This has happened before, and will continue to happen until the day humanity reaches some sort of singularity or post-scarcity.

    • bugglebeetle 7 hours ago

      > which is lower valued, and thus it is economically "correct" to have them be replaced when an appropriate automation method is found.

      Textbook example of why this “economic” form of analysis is naive, stupid, and short-sighted (as is almost always the case).

      AI models will never obtain the ability to completely replace “low value work” (as that is not perfectly definable, or able to be defined in advance, for all cases), so in a scenario where all engineers devoted to these tasks are let go, what you would end up with is engineers higher up the value chain being tasked with resolving the problems that result when the AI fails, underperforms, or the assessment of a task’s value turns out to be incorrect. The cumulative effect would be a massive drain on the effectiveness of said engineers, as they’re now forced to context-switch from creative, high-value work to troubleshooting opaque AI code slop.

      • blackbear_ an hour ago

        > AI models will never obtain the ability to completely replace “low value work”

        Maybe, but this is not the meaning of replacement in this context and it need not hold for the "economic" reasoning to work.

        All that matters is that AI makes developers more productive, as measured by the number of (CRUD or whatever) apps per developer per unit of time. If this is true, then the current supply of apps can be provided by fewer developers, meaning that some of the current developers aren't needed anymore to sustain the current production level. In this scenario lower-level engineers still exist; they are just able to do more in the same time by using AI. A rough sketch of the arithmetic is below.
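
        A back-of-the-envelope illustration of that argument (an editorial sketch: the numbers are invented, and the fixed-demand assumption is doing all the work):

          demand = 100           # apps the market absorbs per year (assumed fixed)
          output_per_dev = 4     # apps one developer ships per year without AI
          ai_multiplier = 2      # hypothetical productivity gain from AI tooling

          devs_before = demand / output_per_dev                   # 25 developers
          devs_after = demand / (output_per_dev * ai_multiplier)  # 12.5 developers

          print(f"headcount needed: {devs_before:.0f} -> {devs_after:.1f}")

        If demand for apps grows as they become cheaper to build, the conclusion weakens; the reasoning only pins down headcount at the current production level.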

      • chii 6 hours ago

        > AI models will never obtain the ability to completely replace “low value work”

        If this were truly the case, then companies that _didn't_ replace the "low value work" with AI and continued to use people would outperform and outcompete. My prediction is entirely predicated on the LLM's ability to do the replacement.

        A second alternative would be that the cost of the "sloppy" AI code is externalized, which is not ideal, but if past history has any bearing, externalization of costs is rampant in the corporate pursuit of profit.

  • 123yawaworht456 8 hours ago

    90% of real grunt work is "stitch an extra appendage onto this unholy abomination of God" or "*points at the screen* look at this shit, figure out why it is happening, and fix it". LLMs are more or less useless for either of those things.

    • zeta0134 6 hours ago

      An LLM can certainly try to consume an extremely poorly specified bug report, written half in English and half in the user's native language, then consume the entire codebase and guess what the devil they're actually referring to. My guess is that humans are better at this, and I mostly speak from experience on two separate support floors that tried to add AI to that flow. It fails, miserably.

      It's not really possible for an LLM to pick up on the hidden complexities of the app that real users and developers internalize through practice. Almost by definition, they're not documented! Users "just know" and thus there is no training data to ingest. I'd estimate though that the vast, vast majority of bugs I've been assigned originate from one of these vague reports by a client paying us enough to care.

    • danielbln 6 hours ago

      I disagree, agentic LLMs are incredibly useful for both.

    • csomar 6 hours ago

      LLMs are very good at fixing bugs. What they lack is broader context and tools to navigate the codebase/interface. That's why Claude Code was such a breakthrough despite using the very same models you use in the chat.

3D30497420 2 hours ago

> We are all, collectively, building a giant, shared brain. Every time you write a blog post, answer a question on a forum, or push a project to GitHub, you are contributing to this massive corpus of human knowledge.

I would be more excited about this concept if this shared brain wasn't owned by rich, powerful people who will most certainly deploy this power in ways that benefit themselves to the detriment of everyone else.

Fendy 3 hours ago

I think a lot of people are feeling this, not just engineers. Engineering already has a high entry bar, and now with AI moving so fast, it’s honestly overwhelming. Feels like there's no way to avoid it—we either embrace it, actively or passively, whether we like it or not.

Personally, I think this whole shift might actually be better for young people early in their careers. They can change direction more easily, and in a weird way, AI kind of puts everyone back at the starting line. Stuff that used to take years to master, you can now learn—or get help with—in minutes. I’ve had interns solve problems way faster and smarter than me just because they knew how to use AI tools better. That’s been a real wake-up call.

I’m doing my best to treat AI as a teammate. It really does help with productivity. But the world never stops changing, and that’s exhausting sometimes. I try to keep learning, keep practicing, and keep adapting. And yeah, if I ever lose my job because of AI... ok, fine, I’ll have to change and try getting another, maybe different, job. Easy to say, harder to do, but that mindset at least helps me not spiral.

  • skydhash 2 hours ago

    The true value of experience comes in two forms: knowing when not to do something, and knowing the shortest path to the needed result.

    More often than not, the result of juniors using LLMs is a Frankenstein ball of mud close to its implosion point. Individual features are part of a system and are judged by how they contribute to its goal, not by whether they are individually correct.

smallstepforman 8 hours ago

Google search is giving us a taste of AI-summarised results, and for simple things it's passable, but ask a serious question and you get good-looking garbage. Yes, I know it's early days, but looking at the current output quality, we have nothing to worry about. It will be used like calculators, to offload some menial repetitive tasks that can be automated, but the next gen of developers will still be tasked with solving complex problems.

  • 999900000999 7 hours ago

    Case in point.

    I purchased a small electronic device from Japan recently. The language can be changed to English, but it’s a bit of a process.

    Google’s AI just copied a Reddit comment that itself was probably AI generated. It made up instructions that are completely wrong.

    I had to find an actual human written web page.

    The problem is that with more and more AI slop, fewer humans will be motivated to write. AGI, at least the first generation, is going to be an extremely confident entity that refuses to be wrong.

    Eventually someone is going to lose a billion dollars trusting it, and it’ll set back AI by 20 years. The biggest issue with AI is it must be right.

    It’s impossible for anything to always be right since it’s impossible to know everything.

  • erentz 7 hours ago

    Google AI the other day told me that tinnitus is listed as a potential adverse reaction of Saphnelo.

    Only it damn well isn’t. Anywhere. Not even patient reports.

    The problem with AI is: if it's right 90% of the time, but I have to do all the work anyway to make sure this isn't one of the 10% of times it's extremely confidently wrong, what use is it to me?

    • cwalv 6 hours ago

      This problem has already gotten so much better. In my experience it's no longer 10% of the time (I'd estimate more like 1%). In the end, you still need to use judgement: maybe it doesn't matter if it's wrong, and maybe it really does. It could be citing papers, and even then you don't know if the results are reproducible.

  • simonw 8 hours ago

    Google's AI Overviews are the single worst AI-driven experience in widespread use today. It's a mistake to draw conclusions about how good AI has gotten based on that.

    Have you tried search in ChatGPT with o4-mini or o3?

    • cwalv 6 hours ago

      I don't use it that much, but I have noticed these AI Overviews still seem to hallucinate a lot compared to others. Meanwhile I hear that Gemini is catching up to or surpassing other models, so I wonder if I'm just unlucky (or just haven't used it enough to see how much better it is).

      • input_sh 5 hours ago

        One is their state-of-the-art model; the other is the best model they can run at the scale and speed people expect from a search engine.

  • RaftPeople 7 hours ago

    I had a couple of great examples of getting the exact opposite answer depending on how I worded my question; now I can't trust any of the answers.

    • cwalv 6 hours ago

      Maybe the answer to your question was subjective?

  • csomar 6 hours ago

    Google Search AI is the worst, and considering that AI is not a good alternative to search (the models are compressed data), I'm not sure why Google decided to use LLMs to answer questions.

  • readthenotes1 8 hours ago

    Try asking Perplexity for a real taste. It works far better than Google's, good enough to make searching fun again.

    Try coding with Claude instead of Gemini. Those who do tell me it's far ahead.

    Look at the recent US jobs reports: the drawdown was mostly in professional services. According to the chief economist of ADP, "Though layoffs continue to be rare, a hesitancy to hire and a reluctance to replace departing workers led to job losses last month."

    Of course, correlation is not causation, but every white-collar person I talk with is saying AI is making them far more productive. It's not a leap to figure out that management sees that as well.

GardenLetter27 27 minutes ago

I feel exactly the same way.

I just wish they were forced to publish open weights in return for using copyrighted materials in the "brain".

disambiguation 7 hours ago

> We are all, collectively, building a giant, shared brain.

"Shared" as in shareholder?

"Shared" as in piracy?

"Shared" as in monthly subscription?

"Shared" as in sharing the wealth when you lose your job to AI?

zkmon 5 hours ago

It's not just the grunt work going to AI. Actually, it's the opposite: the grunt work of dealing with mess is the only thing left for humans. Think of legacy systems, archaic processes, meaningless workflows, dealing with other teams and people, the politics of work, negotiations, team calls, the history of technical issues... AI is a new recruit with massive general abilities but no clue about the dingy layers of the corporate mess.

simonw 8 hours ago

Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.

  • chii 8 hours ago

    More like quitting blacksmithing due to the invention of CNC.

    • imtringued an hour ago

      Forging and subtractive manufacturing are different techniques.

    • globular-toast 5 hours ago

      CNC produces something no blacksmith could. The same cannot be said of LLMs.

jbs789 7 hours ago

As this started with career advice, two points: the world values certain things (usually things that make people’s lives easier, one version of which is building useful tools), and the individual has a set of interests and skills. Finding the intersection of the two should help guide you toward a career that the world values and that interests you (if that’s important to you).

I’m looking at this as the landscape of the tools changing, so personally, I just keep looking for ways to use those tools to solve problems and make people’s lives easier (by offering new products, etc.). It’s an enabler rather than a threat, once the perspective is broadened, I feel.

asimpletune 3 hours ago

Software engineers should unionize. We’re not real engineers until we have professional standards that are enforced (as well as liability for what we make). Virtually every other profession has some mandatory license or other mechanism to bring credibility and standards to their fields. AI just further emphasizes the need for such an institution.

octo888 5 hours ago

I wasted 2 days using Cursor with the 3.7 thinking models to implement a relatively straightforward task (partly malicious compliance with being highly encouraged to use the tools, and partly because a coworker insisted I use their overly complex mini framework instead of just plain code for this task).

It went round in circles doubting itself. When I challenged it, it would redo or undo too much of its work instead of focusing on what I was asking it about. It seemed desperate to please me, backing down whenever I challenged it.

I.e., depending on it turned me into a junior coder: overly complex code, jumping to code without enough thought, etc.

Yes, yes, I'm holding it wrong.

The code these tools create seems to produce a mess that is, in turn, also solved by AI. Huge sprawling messes. Funny, that. God help us if we have to clean up these messes after AI dies down.

lawgimenez 8 hours ago

I just recently inherited a vibe-coded iOS project (fully created with ChatGPT), and it's not even close to working. This is annoying.

  • grogenaut 7 hours ago

    I helped my brother a few times with iOS apps he had folks from Upwork build. They also didn't work beyond the initial version. They always wanted to rebuild the app from scratch for each new requirement.

    • lawgimenez 7 hours ago

      Everything's a mess: a thousand commits and PRs from ChatGPT. The code won't compile because Codex, it seems, doesn't understand static variables.

      And now this error: "The project is damaged and cannot be opened due to a parse error. Examine the project file for invalid edits or unresolved source control conflicts."

Animats 8 hours ago

It's hard to see manual coding of web sites remaining a thing for much longer.

  • sublinear 2 hours ago

    If you're talking about personal websites I think that ship sailed almost 20 years ago with the rise of social media.

    If you mean business websites, they are just about the most volatile code out there, with crazy amounts of work that never stops. It's still a form of publishing, after all. Every marketing decision has to filter through design agencies, legal and compliance, SEO, etc. before it gets handed off to web devs. Then the web dev has to push back when a ton of details are still wrong. Many decisions after testing are left unresolved by the time a page goes live, and those pages still need maintenance until they expire.

    Smaller businesses also have these problems with their websites, but with less complexity until they get more attention from the public.

  • globular-toast 5 hours ago

    Most people younger than 30 probably haven't "manually coded a website" anyway.

mirsadm 6 hours ago

Initially I felt anxiety about AI and its potential to destroy my career. However, now I am much more concerned about what will happen after 5 or 10 years of widespread AI slop, when humans lose the motivation to produce content and all we're left with is AI continually regenerating the same rubbish over and over again. I suspect there'll be a shortage of programmers in the future as people become hesitant to start a career in programming.

Apocryphon 7 hours ago

AI is a red herring. The current malaise in the industry is caused by interest rates. Certainly, AI has the potential to disrupt things further later down the line, but the present has already been shaken enough by the whiplash between pandemic binging and post-ZIRP purging.

  • ifwinterco an hour ago

    This is my read as well.

    Hard to say for sure, because ChatGPT came out at almost exactly the same time the post-COVID wheels were starting to fall off anyway, but I think it's fair to say that as of right now you can't really replace all (or even many) of your engineers with AI.

    What you definitely can do, though, is fire 20+% of your engineers and get the same amount done, simply because more is not necessarily better.