I am personally of the opinion that ML will end up being 'normal technology', albeit incredibly transformative.
I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'. Jobs that require accountability will center on directing, providing context to, and verifying the output of agents, much like how millions of workers today know basic computer skills and Microsoft Office.
In my opinion, how at-risk a job is in the LLM era comes down to:
1: How easy is it to construct RL loops to hillclimb on performance?
2: How easy is it to construct an LLM harness to perform the tasks?
3: How much of the job is a structured set of tasks vs. taking accountability? What's the consequence of a mistake? How much of it comes down to human relationships?
Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.
On Model Trainers -- I'm not so convinced that RLHF puts the professional experts out of work, for a few reasons. Firstly, nearly all human data companies produce data that is somewhat contrived, by definition of having people grade outputs on a contracting platform; plus there seems to be no limit on how much data we can harvest in the world. Secondly, as I mentioned before, the bottleneck is both accountability and the model's ability to find fresh context without error.
> I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'
I wanted to talk about this more but couldn't quite figure out how to phrase it, so I cut a fair bit: with "incanters" I'm trying to point at a sort of ... intuitive, more informal practitioner knowledge / metis, and contrast it with a more statistically rigorous approach in "statistical/process engineers". I expect a lot of people will fuse the two, but I'm trying to stake out some tentpoles here. Users integrate a continuum of approaches, including individual intuition, folklore, formal and informal texts, scientific papers, and rigorously designed harnesses & in-house experiments. Like farming--there's deep, intuitive knowledge of local climate and landraces, but also big industrial practice, and also research plots, and those different approaches inform (and override) each other in complex ways.
In some sense, technology is "not normal" regardless.
If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now.
In the early days, it was going to turn banks from billion dollar businesses to million dollar ones. Universities would be able to eliminate most of their admin. Accounting and finances would be trivialized. Etc.
Earlier tech revolutions were unpredictable too... but at least retrospectively they made sense.
It's not that clear what the core activities of our economy even are. It's clear at micro level, but as you zoom out it gets blurry.
Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.
Accountability is really a way to address liability. So long as people can sue and companies can pay out, or individuals can go to jail, there will always be a question of liability; and historically the courts have not looked kindly on those who throw their hands up in the air and say “I was just following orders from a human/entity”.
This is dependent on having a court system uncaptured by corruption. We're already seeing that large corporations in the "too big to fail" categories fall outside of government control. And in countries with bribing/lobbying legalized or ignored they have the funds to capture the courts.
A huge component of compulsory professional licensure (either by statute or de facto, as a result of adjacent statute like mandatory insurance and its requirements) is that if you follow the rules set by (some entity deputized by) the government, the government will in return never leave you holding the bag. The government gains partial control, and the people under its control get partial protection.
"oh I'm sorry your hospital burned down mr plaintiff but the electrician was following his professional rules so his liability is capped at <small number> you'll just have to eat this one"
I would wager that a solid half if not more of the economy exists under some sort of arrangement like that.
> Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.
Why can't LLMs and agents progress further and do this software engineering job better than an actual software engineer? I've never seen anyone give a satisfactory answer to this. Especially the part about making mistakes. A lot of the defense of LLM shortcomings (e.g., generating crappy code) comes down to "well, humans write bad code too." OK? Well, humans make mistakes too. Theoretically, an LLM software engineer will make far fewer than a human. So why should I prefer keeping you in the loop?
It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
We're always so enamored by new and exciting technology that we fail to realize the people in charge are more than happy to completely bury us with it.
> Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer?
Because a machine can never take accountability. If a software engineer has spent the entire year directing AI with prompts that created weaker systems, then that person is on the chopping block, not the AI. Compare that to another software engineer who directed prompts to expand the system and generate extra revenue streams.
> Because a machine can never take accountability.
A business leader can though.
> Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.
I think you're missing the point. Why can't an LLM advance sufficiently to be a REAL senior software engineer that a business person/product manager is prompting instead of YOU, a software engineer? Why are YOU specifically needed if an LLM can do a better job of it than you? I can't believe people are so naive as to not see what the endgame is: getting rid of those prima donna software engineers that the C-suite and managers have nothing but contempt for.
If a 'business leader' is prompting out software through their agents, ensuring it works, maintaining it, and taking accountability... they're also a software engineer
By this definition, pre-LLM "business leaders" circa 2008 with not even an understanding of Excel were already "software engineers" this whole time - just prompting out software through their meatspace agents, instead of their silicon ones.
Dismissal of arguments as "just semantics" is high school level argumentation.
clearly not the same when they were abstracted from the realities of building software and... directly taking accountability for it!
by semantics, i mean the pool of tasks, responsibilities, and outcomes a job comprises is shifting so fast that the borders between 'software engineer' and 'business person' are melding together. software engineers are business people in their own way
I don't understand why humans abstract a business leader away from the realities of building software, while LLMs do not.
If the rhetoric is to be believed, the set of responsibilities falling to the role of "software engineer" is shrinking to zero, and all engineers are being forcibly "promoted" to the managerial class of shepherding around agents.
i would say theres more nuance than that (disclaimer: dont have a crystal ball)
software engineers who are comfortable doing business work - managing, working with different stakeholders, having product and design taste, being sociable, driving business outcomes are going to be more desired than ever
likewise, business leads who can be technical, can decompose vague ideas into product, leverage code to prototype and work with the previous person will also be extremely high value.
i would be concerned if i was an engineer with no business acumen or a business lead with no technical acumen (not counting CEOs obviously, but then again the barrier to starting your own business as a SWE has never been lower)
It's funny, that's why COBOL was originally developed in 1960: so that business people could write software themselves without needing software engineers. And it sort of worked, to an extent. History repeats itself.
Between then and now, what ever happened to "no code development" or whatever they called it, where all of the world's APIs could be connected with lines in a diagram?
That's how things already work in every workplace where there's any real danger. The company construes its policies and paper trail in bad faith so that employees are always operating contrary to policy/training, and then when something happens, blame can be shifted onto them.
It's funny how we see some people who claim to have "taste" walking around in public wearing horrible Balenciaga shoes. Are they really just tasteless, or are they doing it ironically to troll the rest of us? I guess we'll never know. Maybe someday AI robots will achieve the same level.
Who is better positioned to pilot the LLM than a domain expert?
"Software engineer" as a job title has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder, for years prior to LLMs. People assuming the only, or even primary, function of the job is outputting code reveal a profound lack of understanding of the industry in my opinion. Beyond the first year or two it has been commonly accepted that the code is the easy part of the job.
> has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder
This is something I would have thought HN readers were pretty familiar with. LLMs can make my coding faster or more prolific, but with 30yoe I spend a fairly significant chunk of my work time doing anything but code.
I'm occasionally reminded that HN's commenting base is much larger than my niche in the industry (my background is VC-backed startups and large public tech companies). I had a similar reaction to people thinking Peter Bailis going from CTO at Workday to "member of technical staff" at Anthropic was him trading a leadership position for closing Jira tickets.
It's not about whether they make mistakes (they do! although the exact definition of a mistake is nuanced), but whether they can take accountability if the software fails and millions are lost or people die. A large part of the premium paid on software engineers is to take accountability for their work. If a "business person" directs their agent to build some software and takes accountability -- congrats! They are also now a software engineer :)
The lines between a software engineer / business person / product / design and everything else will blur, because AI increases the individual person's leverage. I posit that there will be more 'software engineers' in this new world, but also more product people, more business people, more companies in general.
> It's why I just can't understand the mindset of software engineers who are giddy about this brave new world. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
They’re stupid, or they’re already set up for success. The general idea seems to be that generalists are screwed and domain experts will be fine.
I think these arguments tend to reach impasse because one gravitates to one of two views:
1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.
2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.
Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer.
It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.
One explanation is that some think we might be getting to the limits of what an LLM can reasonably do. There's a lot of functions of any job that are not easily translated to an LLM and are much more about interacting with people or critical thinking in a way LLMs can't do. I'm not sure if that's everyone's rationale but that's my personal view of the situation. Like the jobs will change but we likely won't be losing them to AI outright.
I was at 2) until the end of last year, then LLM/agent/harnesses had a capability jump that didn't quite bring me to 1), but was a big enough jump in that direction that I don't see why I shouldn't believe we'll get there soonish.
So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.
I also think people tend to treat the "will LLMs replace <job>" question in too much of a binary manner. LLMs don't have to replace every last person that does a specific job to be wildly disruptive, if they replace 90% of the people that do a particular job by making the last 10% much more productive that's still a cataclysmic amount of job displacement in economic terms.
Even if they replace just 10-30% that's still a huge amount of displacement, for reference the unemployment rate during the Great Depression was 25%.
An enormous amount of domain expertise is not legible to LLMs. Their dependence on obtaining knowledge through someone else's writing is a real limitation. A lot of human domain expertise is not acquired that way.
They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
People need to be careful about buying into the shorthand lingo with LLMs. They do not learn like we do. At the lowest level, they predict which tokens follow a body of tokens. This lets them emulate knowledge in a very useful way. This is similar to a time series model of user activity: the time series model does not keep tabs on users to see when they are active, it has not read studies about user behavior, it just reflects a mathematical relationship between points of data.
For an LLM and this "vague" domain expertise, even if none of the LLM's training material includes certain nuggets of wisdom, if the material includes enough cases of problems and the solutions offered by domain experts, we should expect the model to find a decent relationship between them. That the LLM has never ingested an explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
The domain expertise I'm referring to isn't vague; it literally doesn't exist as training data. There are no cases of problems and solutions to study that are relevant to the state-of-the-art. In some cases this is by intent and design (e.g. trade secrets, national security, etc.), long before LLMs arrived on the scene.
We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to get to the frontier because the research that underpins much of the state-of-the-art hasn't existed as public literature for decades. If you want to learn it, you either have to know a domain expert willing to help you or reinvent it from first principles.
>They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
Mastery isn't necessary. Why are Waymos lacking drivers? Not because self-driving cars have mastered driving, but because self-driving works sufficiently well that the economics don't play out for the cab driver.
I think the reason AI isn't going to replace CEOs, or anyone in the C suite, is pretty obvious. They see themselves as the company. Everyone else is a resource. AI is here to replace resources, just like investing in a brand new lawn mower. For them, replacing an executive with AI is like saying you're going to marry a broom.
Most companies contain several layers worth of business context that the higher ups have no idea of, as well.
Everything from "unpaid bills are handled this way" through "the website has a certificate that needs yearly renewal" to "we need to report our earnings biweekly in Indonesia, and we need to retry the form several times before it works".
This is not fundamentally doable by LLMs, because the higher-ups wouldn't know what to ask for, and even if they did, it would not be feasible to keep everything in a single person's head, no matter how AI-assisted.
So that alone I think guarantees a good amount of unreplaced jobs.
Eh, this sounds like the people that have been replaced at a lot of companies already.
>"the website has a certificate that needs yearly renewal"
This is, for example, why modern certificate lifetimes are being dropped to very short durations: the push to automate everything.
>"unpaid bills are handled this way"
Any company that has things like this will start changing their process.
You have this idea (and maybe the AI company CEOs do too) that this will be a drop-in replacement, like swapping a container. No, instead these things will be part of a gradual process change, and the human parts will eventually disappear.
>So that alone I think guarantees a good amount of unreplaced jobs.
What percentage? That's what matters at the end of the day. Once unemployment reaches high rates across all age groups in a society that is used to low unemployment, things go bad. It can take longer to go bad where you have social safety nets and jobs programs, but in places like the US that's communism, so expect the haves to shoot at the have-nots.
That's true, too. I guess we will see if executive pay and credentials start going down. They could technically have AI make all the decisions while someone just plays the patsy.
> They see themselves as the company. Everyone else is a resource
Knowing nothing about how these things work, I wonder if the board will see it the same way? Even today I could see the following play out:
CEO says X. Board member puts a bunch of strategic info into ChatGPT on the spot which argues Y more convincingly than X.
In that moment, the CEO will find themselves arguing against a chatbot, which can gish gallop with plausible bs faster than you can say the word “transformative”.
Maybe they win the argument today, but eventually the CEO will be functionally replaced, and eventually actually replaced or watered down.
They're just a thin layer to be replaced last. They're just arrogant enough to think they're the company, but ultimately the endgame is -- all humans become economically insignificant compared to the automated economy.
Heh, how long before someone's agent starts looking for these three so it can run the business in the background and feed them all the reports they need to sign?
Loved that section about "meat shields". LLMs cannot be held accountable. Someone needs to be involved in decision making, with real stakes if those decisions are bad.
What I had considered is that in the case of self-driving cars, nobody is criminally accountable, even though the rest of us may be held criminally negligent should we make some horrific error. Philosophically, there is some reason that criminal acts require punishment beyond mere financial liability (e.g., prison time), and self-driving cars are exempted from this. Currently, self-driving cars are also exempt from the actual laws of the road, because the police are disempowered to enforce anything on a self-driving car.
It just makes logical sense really; the human using the tool is in the end responsible.
Whether the tool is too powerful or ethical to use is an orthogonal discussion, in my opinion. Taken to the extreme, nuclear weapons still need someone to fire or drop them. (We should still always have discussions on safety and ethics!)
why can't the name be 'scapegoat'? Since that's what they are - the "real" responsibility rests on the owners, and they happily shed it via limited-liability ownership of shares.
The problem with AI is that it isn't like any previous technology. There may be temporary jobs to fill in the gaps, but they won't be careers. The AI will do the process engineering and self-optimization. The prompt witchcraft is a good example because today it's totally unnecessary and doesn't actually increase performance, and they'll continue to make it easier to direct/steer the models.
We're literally trying to build an intelligence to replace us.
The human species. "We" doesn't include everyone and doesn't necessarily imply the process happens through collaboration and planning (conspiracy). The race to automation is happening as expected; outside any group control and bound by competition. Game theory suggests the end result is us being replaced, if we make it that far. "We" as a species are the ones making it happen.
I think that this is an interesting attempt at taxonomy, but it's a bit on the magical thinking end (and I say this as somebody that does a good amount of what's described as the incanter role). It's a combination of the author's previous witchy aesthetic (see his excellent "<x>ing the technical interview" series) and progressive labor politics (which are asymptotically doomed in the current automation push).
The biggest failure of imagination, I think, is the assumption we'd use humans for most (or *any*) of these jobs--for example, the work of the haruspex is better left to an LLM that can process the myriad of internal states (this is the mechanistic interpretability field).
Yes, I had the same impression. I'm sympathetic to the author's perspective but I can't muster even the minimal optimism they've shown here. The "process engineers" as described would themselves quickly be replaced by an automated system. The "statistical engineers", I think, would never be able to keep up with the rate of change of the AI models, which would likely have different statistical behavior and biases in each language/context/etc with each update, and so it's unlikely anyone would pay them to develop that required deep expertise in the first place. More likely, that work would be done at an AI foundation model company -- but it would be done just once, and then incorporated into the training process.
As an engineer, I've never been more excited about this job.
My implementation speed and bug fixing of my typed code used to be the bottleneck. Now I just think about an implementation and then it exists. As long as I thought about the structure/input/output/testability and logic flow correctly, and made sure I included all that information, it just works, nicely, with tests.
The Unix philosophy works well with LLMs too - you can have software that does one thing and only one thing well, that fits in their context, and that doesn't lead to haphazard behavior.
Now my day essentially revolves around delivering/improving on delivering concentrated engineering thinking, which in my opinion is the pure part about engineer profession itself. I like it quite a lot.
Though something I half-miss is using my own software as I build it to get a visceral feel for the abstractions so far. I've found that testability is a good enough proxy for "nice to use" since I think "nice to use" tends to mean that a subsystem is decoupled enough to cover unexpected usage patterns, and that's an incidental side-effect of testability.
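The testability-as-proxy idea above can be sketched minimally (hypothetical names, not from the thread): the same dependency injection that makes a function testable is what lets it cover usage patterns you didn't anticipate.

```python
from datetime import datetime
from typing import Callable

# Hard to test: reaches for the real clock internally, so its output
# changes every day and can't be pinned down in a test.
def stamp_hardcoded(msg: str) -> str:
    return f"[{datetime.now():%Y-%m-%d}] {msg}"

# Testable: the clock is injected. Tests (and unexpected callers,
# e.g. backfilling old logs with historical dates) supply their own
# time source -- the decoupling that aids testing also makes it
# "nice to use" in ways the author didn't plan for.
def stamp(msg: str, now: Callable[[], datetime] = datetime.now) -> str:
    return f"[{now():%Y-%m-%d}] {msg}"

fixed = lambda: datetime(2024, 1, 2)
print(stamp("deploy", now=fixed))  # "[2024-01-02] deploy"
```

The injected version defaults to the real clock, so ordinary callers pay no extra cost for the flexibility.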
One concern I have is that it's getting harder to demonstrate ability.
e.g. GitHub profiles were a good signal, though one that nobody cared about unless the hiring person was an engineer who could evaluate it. But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals, since they don't necessarily showcase your own communication skills anymore, nor how you think about problems.
Funny enough, I think GitHub and communication are still a huge part of what I see.
The GitHub code itself may be irrelevant, but is the product KISS/Unix? Or is it a demonstration of a complete lack of discipline about what "features" should be added? If you see something with multiple weakly related or completely irrelevant features strung together, that says something. Additionally, AI will often create spaghetti structures, and requires human shepherding to ensure the structure remains sound.
Same with communication. I have an AI smell; I know if something is AI slop. In my current job, docs sent with the expectation that others will read them are always prefaced with -- this section typed 100% by aperocky -- and I've dispensed with grammar and spelling checks for added authenticity. I'll then add -- the following section is AI generated -- to mark the end of my personal writing.
I think that is the way to go in the future. I pass intentional thinking into AI, not the other way around. There is knowledge flowing back for sure, but only humans possess intention, at least for now.
> But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore nor how you think about problems.
Yup. I've spotted former coworkers who I know for a fact can barely write in their native language, let alone in English, working for AWS and writing English-language technical blog posts in full AI-ese. Full of the usual "it's not X, it's Y", full of AI-slop. Most of the text is filler, with a few tidbits of real content here and there.
I don't know about before, but now blog posts have become more noise than signal.
The "dead Internet" theory has become more real. It's especially bad on LinkedIn. Everyone is now an "AI expert", posting generated slop and updating their profiles with AI enhanced head shots.
Agreed, but to be fair, LinkedIn was especially bad to begin with.
Even before AI-slop, LinkedIn posts were rightfully mocked. Self-congratulatory or self-pitying, full of empty platitudes and "lessons learned" and "journeys" (ended or started). There was never anything worth reading to begin with.
Now it's of course worse. I don't think I can stand reading about another self-appointed expert on LinkedIn writing about their completely unwarranted strategy and/or lessons and/or skepticism about AI.
> My implementation speed and bug fixing of my typed code used to be the bottleneck
I remember those days fondly and often wish I could return to them. These days it's not uncommon to go a couple days without writing a meaningful amount of code. The cost of becoming too senior I suppose.
Anecdotally I've been observing a significant uptick in the amount of code being produced by my peers who are in senior engineer, leadership and engineering management positions.
They can take their 20+ years of experience and use it to build working systems in the gaps between meetings now. Previously they would have to carve out at least half a day of uninterrupted time to get something meaningful done.
> build working systems in the gaps between meetings now
Agreed, I've actually done this. Sitting in a meeting where someone was asking about what tooling we could build, what it might be capable of, what their options were. So while we were chatting I was having Claude build a working demo.
In the end it still needs to be turned into an enterprise app with all the annoying accoutrements that go with that, but for demo work it was phenomenal.
Yes I'm much more productive than before, and I'm convinced we can't get rid of engineers altogether... But how long until my team of 5 gets replaced by a single engineer? Am I going to be the one to keep my job or one of the 4 to be let go?
If the team does the exact same thing, not very long.
The ability to know what to build and what not to build is going to be as important as knowing how to build it. I still think engineers have an edge here. All my childhood dreams of what I should be able to do or build are coming to a reality and the only thing that is blocking me is lack of time. I want to go faster still
When I was in automation a decade ago, they kept telling us never to tell people this was going to replace them. What you tell them is that it will allow their teams to finally focus on what really matters. Instead of working on all these repetitive tasks, now they can focus on the much larger issues. Everybody bought in; teams felt like the automation we were doing was really going to make their jobs easier.
It never did.
Managers realized they could trim their teams down after we were done, and did in fact lay off people by the hundreds. Doing the same work with fewer people was beneficial to them because now they got bigger bonuses and salary increases for adding to the company's bottom line. Many managers who did nothing more than lay off half their team were promoted faster up the ranks.
So yes, be scared, be VERY scared, and have a Plan B and a Plan C going forward. The people who created this have rose-colored glasses on about how it's going to revolutionize business. The actual business owners and CEOs just see another new way to reduce human capital in order to increase profits.
> As an engineer, I've never been more excited about this job.
How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
Everyone thinks it won't be them, it will be others that will be impacted. We all think what we do is somehow unique and cannot be automated away by AI, and that our jobs are safe for the time being.
> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
The exciting part of the job is, and always has been, listening to idle chitchat where you pick up on the subtle cues of where one is finding difficulty in their life and then solving those problems. I think AI could already largely handle that today just fine, except:
You have to convince people, especially non-technical ones, to have idle chitchat with machines instead of humans
-or-
Convince them to accept having a machine always listening in on their idle conversations with humans
Neither of those is all that palatable in the current social landscape. If anything, people seem to be growing more wary of letting technology into their thoughts. Maybe there is never a future where humans become accepting of machines always being there, trying to figure out what is wrong with them.
As someone in the 99th percentile of token usage, it's super clear to me where the agent will not be able to replace my judgement. Two areas:
1. If it exceeds the context, the agent does random stuff that often works against simplicity and coherent logical structure.
2. LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build.
As such, I'm the limit on the number of concurrent agents working for me, because there is still a limit to my output of engineering judgement. I do get better, both at generating and delivering this judgement. Beyond this limit, the output becomes garbage.
As of this year and date, the AI does not automate me in any way; I have something it just flat out doesn't have.
Playing devil's advocate here, I'm not antagonizing you but thinking out loud.
> If it exceeds the context, the agent does random stuff that often works against simplicity and coherent logical structure.
That's a current technical limitation. Are you so sure it won't be overcome in the near/mid future?
> LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build
But work is being done to even remove or automate this layer, right? It can be hyperbole (in fact, it is) but aren't Anthropic et al predicting this? Why wouldn't your boss, or your boss' boss, do this instead of you? If they lack the judgment currently, are you so sure they cannot gain it, if they don't have to waste time learning how to code? If not now, what about soon-ish?
> At this current year and date, AI does not automate me in any way
Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?
Well, if you do nothing you should definitely be worried, because not using LLMs is rapidly becoming untenable.
If you do a lot, you'll grow skeptical about some of the claims and hype, and develop a sense of where this is leading.
My position is that if someone uses LLMs a lot, they may be right or wrong about the future of LLMs. If they don't, then they definitely aren't right, or are only lucky.
My personal judgement is that both of these are hard caps until someone invents something that's not a transformer; starting from scratch, basically.
> because not using LLMs is rapidly becoming untenable
Completely agreed. This is not what I'm advocating for. And definitely, there's a lot of self-serving hype (and fearmongering can be another kind of hype) by AI companies. But some of it I think will be true, or enough companies will believe it to be true, which amounts to the same.
I'm just worried; I cannot help it. And I'm not saying "don't use AI", I'm pushing back on the feeling of reckless "excitement".
Does it seem to you like those issues will be solved soon? Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?
> Does it seem to you like those issues will be solved soon?
No.
But I was also very skeptical about AI being able to code semi-reliably during the early stages of GPT hype, and look where I'm now: most of the code I produce is written by an AI. So I was wrong before, which makes me doubt my own ability to predict the near future.
> Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?
My boss' boss would probably love to get rid of both me and my direct boss. And a whole class of problems would disappear, freeing up time for people higher up the chain to focus on this... either them or a tiny group of engineers, which leaves me out of a job either way. I've already seen people in small shops get fired because their immediate semi-technical boss can now do their job with AI (I can't go into details for privacy reasons. Also, it doesn't matter if the end result is flawed; it matters that it's "mission accomplished" and someone is out of a job).
> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
Yeah, no one ever thinks beyond "whoa, how cool, I cloned Slack in 15 minutes!"
Personally, the thing I find more depressing is turning a career that was primarily about solving interesting puzzles in elegant ways into managing a swarm of idiot savant chatbots with "OK, that looks good" or "no, do it better" commands.
The problem that I'm trying to solve with agents is similar here. For instance, my comment likely made zero impression on you, even though I'm against both of the things that you are also against here.
All plausible, but not very transformative. Like imagining that the new jobs enabled for the automobile include automobile maintenance, tire shops, and so on. Traveling nurses, motel operators, military tanks, doordash, suburban life, beer sales at NASCAR, those were all enabled by the car (and its larger sibling the truck).
Still missing are the jobs and industries enabled by "AI" that are not themselves "AI".
> I think a part of the reason is that these roles are not just about sending emails and looking at graphs, but also about dangling a warm body over the maws of the legal system and public opinion.
Spoilers for "How I Met Your Mother" ... but there's a character who has that kind of job, as a legal meat-shield. Now, ~10 years after airing, this funny clip feels like it would only need slight adjustments.
Geoblocking the UK satisfies any age-verification requirement; otherwise the site owner would have to check whether their content is considered adult in the UK and implement something.
> IMO a small blog website is not going to get pulled-up for this
Well, maybe not the typical engineering blog, but I think if you're a puritan, some posts/texts from Aphyr probably reach borderline "adult content", so I'm not that surprised Aphyr would rather play it safe and make a point at the same time.
It's probably a political point, but I think your comparison oversells how inconvenient it is for someone to geoblock one small country versus the headache if anything did happen. It's not much more effort than doing nothing, really?
And clearly users in the UK can find their own way to read it if they like, so the cost is also small there.
Considering that there are multiple "why is this blocked in the UK" comments on every single one of these posts, maybe the UK isn't such a small country. Geoblocking a decent chunk of your readership would be a pretty big inconvenience for a writer, I would imagine.
The culture section of this writeup links to explicitly adult/erotic content in the footnotes and discusses 'adult themes' directly. His caution seems reasonable.
Have you even read the shit politicians are either pulling or trying to pull these days? There is no amount of paranoia that is too much when talking about things like cross-national prosecution, laws regarding users not considered adults, and age verification.
You never know when one of your posts might gain serious traction. Not worth the risk. It's very easy to find examples of people making decisions thinking "I/we will never be big enough for that to be relevant", only to be haunted by that decision later. Classic example: partnership agreements/contracts between friends and family on small endeavors.
It's self-imposed, I think? curl connects to the same aphyr.com in both cases, but when connecting from the UK it receives a different response body. Probably sensible, I expect, if you just want things to work, legally speaking.
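For what it's worth, a self-imposed block like that is a tiny amount of server logic. Here's a minimal sketch of what it might look like (all names are hypothetical, the GeoIP lookup is stubbed out, and I have no idea what aphyr.com actually runs; a real server would use something like a MaxMind database or a CDN-provided country header):

```python
# Hypothetical sketch of country-based response switching.
# Everything here is illustrative; the real site's implementation is unknown.

BLOCKED_COUNTRIES = {"GB"}

BLOCK_NOTICE = "<p>This content is unavailable in your region.</p>"


def country_for_ip(ip: str) -> str:
    """Stub GeoIP lookup: pretend addresses starting with '81.' are UK."""
    return "GB" if ip.startswith("81.") else "US"


def render(ip: str, article_html: str) -> tuple[int, str]:
    """Return (status, body): the article, or a notice for blocked countries."""
    if country_for_ip(ip) in BLOCKED_COUNTRIES:
        # 451 "Unavailable For Legal Reasons" (RFC 7725) exists for this case,
        # though a site could just as well serve a 200 with a different body.
        return 451, BLOCK_NOTICE
    return 200, article_html
```

Which would match the curl behavior described above: same hostname, different response body depending on where the request appears to come from.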
Can people make a soft assumption that if somebody went through the trouble of digging up an archive link, then access to the article is limited in some way?
I don't understand the title. It doesn't seem exactly clickbait but also doesn't seem to be what the article is about?
Anyway: The new job types might seem overspecialized now but history shows us this is indeed what happens as new industries open up. I think these predictions look quite solid.
I think, long term, there will be only one job comprising all these aspects. The only absolutely-minimum-required skill will be critical thinking, possibly along with data / statistical literacy for grounding.
"Specialization is for insects." (Or for someone pushing the boundaries of human knowledge.)
Jobs that carry liability are the last ones to get automated, even if the person isn't doing anything all day long in terms of "work". Someone needs to go to jail when shit goes wrong, as is quite often mandated by law. "Oops, the computer did it" isn't a sufficient excuse in some situations; accountability is a thing. So while we mechanically could also replace CEOs with machines, and at times that would arguably be better than a few humans out there, some body that can be dragged to court is mandatory no matter how good the tech gets.
Humans will be held accountable, not machines, whatever the technology used. The jobs you suggest are based on the state of LLMs right now, and this could change rapidly, considering the pace of progress. These are just activities that are already done by people who work with these instruments, because they want to optimize and obtain the best/safest output from these machines.
> Humans will be held accountable, not machines, whatever the technology used
Isn't this addressed explicitly in TFA, in section "meat shields"?
As for the rest, if you predict even the jobs described in TFA will be obsoleted by future LLMs+tools, then the future is even more dire than predicted by Aphyr, right? Fewer jobs for humans to do.
This is part 9 of a 10-part series. The author has posted every chapter to Hacker News every day for the past 9 days. Every time four of the first five or so comments are:
Someone noting it is unavailable in the UK.
Someone posting an archive.is link.
Someone asking why the above posted an archive link to a static site.
An answer that it is because the content is otherwise unavailable in the UK.
Do we really need to see this every single time?
I realize I'm not adding to the real discussion now either, but Jesus Christ, this is irritating. Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why they're doing so as part of the submission?
>Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why they're doing so as part of the submission?
[Author blocks link to avoid being potentially in violation of the law]
You ask the author to willingly provide a link and again potentially be in violation of the law.
So you are thinking that the UK government is going to conduct an international criminal investigation against aphyr for posting an archive link in a Hacker News thread.
Does the UK government have the legal right to do an international criminal investigation against any website that is potentially violating their laws by having visitors from the UK accessing the site?
Answer yes or no, this is an easy binary question, and not one that requires any probabilistic thinking.
A company like Amazon doesn't treat its warehouse workers as human beings. Workers are seen as disposable: forced to piss in bottles, forced to work around the corpses of their collapsed coworkers, paid the absolute minimum possible, and replaced the second they don't operate like a perfect unfailing machine. You aren't viewed like a human, you are a tool. Cattle. A piece of meat they are forced to retain because a robot isn't quite capable of doing your task yet.
The article's use of "meat shields" isn't any different. Humans are going to be hired for the sole reason of taking accountability for actions dictated by AI. They are there only because the company can't put blame on a machine and will be sued to oblivion if there's nobody to blame at all. Your existence as a person is irrelevant, they are just interested in someone with a heartbeat they can blame when stuff inevitably goes wrong.
> someone with a heartbeat they can blame when stuff inevitably goes wrong.
If said person can be blamed (and take on the liability) but cannot stop the action, audit the action, take preventative measures (which cost money), etc., then they cannot take responsibility for real, and thus whether the blame falls on them on paper is irrelevant. If there's real punishment (like jail time) but no real power to enforce anything, then who would be stupid enough to take on this job? If there's no real punishment, then what does it matter that the blame on paper is there?
Someone who needs to feed their family, given that AI CEOs are predicting that their technology will either destroy the world, take everyone's jobs, or both.
In the article, Ctrl+F for "meat" returns 3 results, while "human" returns 8. Seems like "human" remains the dominant word of choice in this author's vernacular.
Edit: Further, the only times "meat" appears is in the phrase "meat shield", which is an analogy that is very apt relative to the crux of the article.
There has always been a subset of highly technical people in the software world who are anti-organic. They dislike the "meatspace" and humans and relate more with machines and software.
Yeah, that was what I was referring to, not the specific part of the article. I've seen it much more here recently. Kind of disgusting and sad, but on the other hand it's good if people show their real face that way.
I am personally of the opinion that ML will end up being 'normal technology', albeit incredibly transformative.
I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'. Jobs that encompass a role that requires accountability will be directing, providing context, and verifying the output of agents, almost like how millions of workers know basic computer skills and Microsoft Office.
In my opinion, how at-risk a job is in the LLM era comes down to:
1: How easy is it to construct RL loops to hillclimb on performance?
2: How easy is it to construct a LLM harness to perform the tasks?
3: How much of the job is a structured set of tasks vs. taking accountability? What's the consequence of a mistake? How much of it comes down to human relationships?
Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.
On Model Trainers -- I'm not so convinced that RLHF puts the professional experts out of work, for a few reasons. Firstly, nearly all human data companies produce data that is somewhat contrived, by definition of having people grade outputs on a contracting platform; plus there's seemingly no bound on how much data we can harvest in the world. Secondly, as I mentioned before, the bottleneck is both accountability and the ability for the model to find fresh context without error.
> I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'
I wanted to talk about this more but couldn't quite figure out how to phrase it, so I cut a fair bit: with "incanters" I'm trying to point at a sort of ... intuitive, more informal practitioner knowledge / metis, and contrast it with a more statistically rigorous approach in "statistical/process engineers". I expect a lot of people will fuse the two, but I'm trying to stake out some tentpoles here. Users integrate a continuum of approaches, including individual intuition, folklore, formal and informal texts, scientific papers, and rigorously designed harnesses & in-house experiments. Like farming--there's deep, intuitive knowledge of local climate and landraces, but also big industrial practice, and also research plots, and those different approaches inform (and override) each other in complex ways.
In some sense, technology is "not normal" regardless.
If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now.
In the early days, it was going to turn banks from billion dollar businesses to million dollar ones. Universities would be able to eliminate most of their admin. Accounting and finances would be trivialized. Etc.
Earlier tech revolutions were unpredictable too... but at least retrospectively they made sense.
It's not that clear what the core activities of our economy even are. It's clear at micro level, but as you zoom out it gets blurry.
Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.
Accountability is really a way to address liability. So long as people can sue and companies can pay out, or individuals can go to jail, there is always going to be a question of liability; and historically the courts have not looked kindly at those who throw their hands up in the air and say “I was just following orders from a human/entity”
> and historically the courts have not looked
This is dependent on having a court system uncaptured by corruption. We're already seeing that large corporations in the "too big to fail" categories fall outside of government control. And in countries with bribing/lobbying legalized or ignored they have the funds to capture the courts.
While this is true, this is somewhat mitigated by the fact that few sectors are truly monopolized and large corporations also sue each other.
Sort of like how one could be held liable for copyright infringement?
A huge component of compulsory professional licensure (either by statute, or de facto as a result of adjacent statute, like mandatory insurance and its requirements) is that if you follow the rules set by (some entity deputized by) government, the government will in return never leave you holding the bag. The government gains partial control, and the people under its control get partial protection.
"Oh, I'm sorry your hospital burned down, Mr. Plaintiff, but the electrician was following his professional rules, so his liability is capped at <small number>; you'll just have to eat this one."
I would wager that a solid half if not more of the economy exists under some sort of arrangement like that.
Right, but usually that also involves verifying that the electrician actually followed the professional rules, and if not, they have liability
So the court checks if they were "just following orders"?
Sounds to me like following orders is in fact this magical thing that causes courts to direct liability away from the defendant.
Throw on a wizard hat and robe at some of the voice only vibe coders you see on YouTube and it’s essentially “incanters”. Hilarious.
> Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.
Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer? I've never seen anyone give a satisfactory answer to this. Especially the part about making mistakes. A lot of the defense of LLM shortcomings (i.e., generating crappy code) comes down to "well humans write bad code too." OK? Well, humans make mistakes too. Theoretically, an LLM software engineer will make far fewer than a human. So why should I prefer keeping you in the loop?
It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
We're always so enamored by new and exciting technology that we fail to realize the people in charge are more than happy to completely bury us with it.
> Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer?
Because a machine can never take accountability. If a software engineer throughout the entire year has been directing AI with prompts that created weaker systems then that person is on the chopping block, not the AI. Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.
> Because a machine can never take accountability.
A business leader can though.
> Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.
I think you're missing the point. Why can't an LLM advance sufficiently to be a REAL senior software engineer that a business person/product manager is prompting instead of YOU, a software engineer? Why are YOU specifically needed if an LLM can do a better job of it than you? I can't believe people are so naive to not see what the endgame is: getting rid of those primadonna software engineers that the C-suite and managers have nothing but contempt for.
> A business leader can though.
If a 'business leader' is prompting out software through their agents, ensuring it works, maintaining it, and taking accountability... they're also a software engineer
These titles are mostly semantics
By this definition, pre-LLM "business leaders" circa 2008 with not even an understanding of Excel were already "software engineers" this whole time - just prompting out software through their meatspace agents, instead of their silicon ones.
Dismissal of arguments as "just semantics" is high school level argumentation.
Clearly not the same when they were abstracted from the realities of building software and... directly taking accountability for it!
By semantics, I mean that the definition and pool of tasks, responsibilities, and outcomes a job comprises is shifting so fast that the borders between 'software engineer' and 'business person' are melding together. Software engineers are business people in their own way.
I don't understand why humans abstract a business leader away from the realities of building software, while LLMs do not.
If the rhetoric is to be believed, the set of responsibilities falling to the role of "software engineer" is shrinking to zero, and all engineers are being forcibly "promoted" to the managerial class of shepherding around agents.
I would say there's more nuance than that (disclaimer: I don't have a crystal ball).
Software engineers who are comfortable doing business work (managing, working with different stakeholders, having product and design taste, being sociable, driving business outcomes) are going to be more desired than ever.
Likewise, business leads who can be technical, can decompose vague ideas into product, can leverage code to prototype, and can work with the previous person will also be extremely high value.
I would be concerned if I were an engineer with no business acumen, or a business lead with no technical acumen (not counting CEOs, obviously; but then again, the barrier to starting your own business as a SWE has never been lower).
It's funny, that's why COBOL was originally developed in 1960: so that business people could write software themselves without needing software engineers. And it sort of worked, to an extent. History repeats itself.
Between then and now, what ever happened to "no code development" or whatever they called it, where all of the world's APIs could be connected with lines in a diagram?
why would it be a manager? hire a cheap intern to be the scapegoat, if the job market is bad enough. no reason for liability to fall on the suits
That's how things already work in every workplace where there's any real danger. The company construes its policies and paper trail in bad faith so that employees are always operating contrary to policy/training, and then when something happens, blame can be shifted onto them.
You can say this about every single role.
Why can't VCs feed your pitch deck into an AI and get a business they own 100%?
If the only thing you're paying for is compute time...
Some people are claiming it's about taste. Why can't an AI learn taste?
It's funny how we see some people who claim to have "taste" walking around in public wearing horrible Balenciaga shoes. Are they really just tasteless, or are they doing it ironically to troll the rest of us? I guess we'll never know. Maybe someday AI robots will achieve the same level.
Who is better positioned to pilot the LLM than a domain expert?
"Software engineer" as a job title has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder, for years prior to LLMs. People assuming the only, or even primary, function of the job is outputting code reveal a profound lack of understanding of the industry in my opinion. Beyond the first year or two it has been commonly accepted that the code is the easy part of the job.
> has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder
This is something that I would have thought HN readers were pretty familiar with. LLMs can make my code work faster or more prolific, but with 30yoe I spend a fairly significant chunk of my work time doing anything but code.
I'm occasionally reminded that HN's commenting base is much larger than my niche in the industry (VC-backed startups + large public tech companies is my background). I had a similar reaction to people thinking Peter Bailis going from CTO at Workday to "member of technical staff" at Anthropic was him trading a leadership position for closing Jira tickets.
It's not about whether they make mistakes (they do! although the exact definition of a mistake is nuanced), but whether they can take accountability if the software fails and millions are lost or people die. A large part of the premium paid on software engineers is to take accountability for their work. If a "business person" directs their agent to build some software and takes accountability -- congrats! They are also now a software engineer :)
The lines between a software engineer / business person / product / design and everything else will blur, because AI increases the individual person's leverage. I posit that there will be more 'software engineers' in this new world, but also more product people, more business people, more companies in general.
> It's why I just can't understand the mindset of software engineers who are giddy about this brave new world. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
They’re stupid, or they’re already set up for success. The general idea seems to be: generalists are screwed, domain experts will be fine.
> domain experts will be fine
But I don't see how this holds up to even the slightest amount of scrutiny. We're literally training LLMs to BE domain experts.
I think these arguments tend to reach impasse because one gravitates to one of two views:
1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.
2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.
Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer.
It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.
One explanation is that some think we might be getting to the limits of what an LLM can reasonably do. There are a lot of functions of any job that don't easily translate to an LLM and are much more about interacting with people, or about critical thinking in a way LLMs can't do. I'm not sure if that's everyone's rationale, but that's my personal view of the situation. The jobs will change, but we likely won't be losing them to AI outright.
I was thinking today that I need to pivot to making and selling shovels, but then the other issue is: is anyone going to need shovels in the future?
I was at 2) until the end of last year; then LLMs/agents/harnesses had a capability jump that didn't quite bring me to 1), but was a big enough jump in that direction that I don't see why I shouldn't believe we get there soonish.
So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.
I also think people tend to treat the "will LLMs replace <job>" question in too much of a binary manner. LLMs don't have to replace every last person that does a specific job to be wildly disruptive, if they replace 90% of the people that do a particular job by making the last 10% much more productive that's still a cataclysmic amount of job displacement in economic terms.
Even if they replace just 10-30% that's still a huge amount of displacement, for reference the unemployment rate during the Great Depression was 25%.
An enormous amount of domain expertise is not legible to LLMs. Their dependence on obtaining knowledge through someone else's writing is a real limitation. A lot of human domain expertise is not acquired that way.
They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
People need to be careful about buying into the shorthand lingo with LLMs. They do not learn like we do. At the lowest level, they predict which tokens follow a body of tokens. This lets them emulate knowledge in a very useful way. This is similar to a time series model of user activity: the time series model does not keep tabs on users to see when they are active, it has not read studies about user behavior, it just reflects a mathematical relationship between points of data.
For an LLM and this "vague" domain expertise, even if none of the LLM's training material includes certain nuggets of wisdom, if the material includes enough cases of problems and the solutions offered by domain experts, we should expect the model to find a decent relationship between them. That the LLM has never ingested an explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
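The "statistical relationship between points of data" point can be made concrete with a deliberately crude toy (this is in no way how production LLMs work; they learn neural representations, not raw counts, but the framing is the same): a bigram model "emulates knowledge" purely by reflecting which tokens followed which in its training text.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predicts the next token purely from
# co-occurrence counts in the training text. No understanding involved,
# just a statistical relationship between adjacent tokens.

def train(tokens):
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, token):
    """Most frequent follower of `token` in the training data, or None."""
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

# Tiny illustrative corpus.
corpus = "the expert solved the problem and the expert wrote the fix".split()
model = train(corpus)
```

Here `predict(model, "the")` returns `"expert"` simply because "expert" followed "the" most often, with no documentation of any reasoning ever ingested, which is the commenter's point scaled down to its smallest possible form.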
The domain expertise I'm referring to isn't vague; it literally doesn't exist as training data. There are no cases of problems and solutions to study that are relevant to the state of the art. In some cases this is by intent and design (e.g. trade secrets, national security, etc.), long before LLMs arrived on the scene.
We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to get to the frontier because the research that underpins much of the state-of-the-art hasn't existed as public literature for decades. If you want to learn it, you either have to know a domain expert willing to help you or reinvent it from first principles.
>They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
Mastery isn't necessary. Why are Waymos lacking drivers? Not because self-driving cars have mastered driving, but because self-driving works sufficiently well that the economics don't play out for the cab driver.
I think the reason AI isn't going to replace CEOs, or anyone in the C suite, is pretty obvious. They see themselves as the company. Everyone else is a resource. AI is here to replace resources, just like investing in a brand new lawn mower. For them, replacing an executive with AI is like saying you're going to marry a broom.
Most companies contain several layers' worth of business context that the higher-ups have no idea of, as well.
Everything from "unpaid bills are handled this way" through "the website has a certificate that needs yearly renewal" to "we need to report our earnings biweekly in Indonesia, and we need to retry the form several times before it works".
This is not fundamentally doable by LLMs, because the higher-ups wouldn't know what to ask for, and if they did, it would not be feasible to keep everything in a single person's head, no matter how AI-assisted.
So that alone I think guarantees a good amount of unreplaced jobs.
Eh, this sounds like the people that have been replaced at a lot of companies already.
>"the website has a certificate that needs yearly renewal"
For example, this is why modern certificate lifetimes are being dropped to very short durations: the push to automate everything.
>"unpaid bills are handled this way"
Any company that has things like this will start changing their process.
You have this idea (and maybe the AI company CEOs do too) that this will be like a container drop-in. No, instead these things will be part of a gradual process change, and the human parts will eventually disappear.
>So that alone I think guarantees a good amount of unreplaced jobs.
What percentage? That's what matters at the end of the day. Once unemployment reaches high rates across all age groups in a society that is used to low unemployment, things go bad. It can take longer to go bad where you have social safety nets and jobs programs, but in places like the US that is communism, so expect the haves to shoot at the have-nots.
I think the more likely reason would be that legally someone needs to be in charge of the business.
That's true, too. I guess we will see if executive pay and credentials start going down. They could technically have AI make all the decisions while someone just plays the patsy.
> They see themselves as the company. Everyone else is a resource
Knowing nothing about how these things work, I wonder if the board will see it the same way? Even today I could see the following play out:
CEO says X. Board member puts a bunch of strategic info into ChatGPT on the spot which argues Y more convincingly than X.
In that moment, the CEO will find themselves arguing against a chatbot, which can gish gallop with plausible bs faster than you can say the word “transformative”.
Maybe they win the argument today, but eventually the CEO will be functionally replaced, and eventually actually replaced or watered down.
They're just a thin layer to be replaced last. They're just arrogant enough to think they're the company, but ultimately the endgame is -- all humans become economically insignificant compared to the automated economy.
https://www.theguardian.com/technology/2026/apr/13/meta-ai-m...
In the U.S., most or all states require all corporations to have a president, secretary, and treasurer.
Heh, how long before someone's agent starts looking for these three so it can run the business in the background and feed them all the reports they need to sign?
Loved that section about "meat shields". LLMs cannot be held accountable. Someone needs to be involved in decision making, with real stakes if those decisions are bad.
Data & Society put out a paper on this role back in 2019 but used the term "moral crumple zones" since they were focusing on how to assign blame in autonomous vehicle crashes: https://www.researchgate.net/publication/351054898_Moral_Cru...
"Meat shields" has a nice physicality to it, though
Thank you for this--I remember reading this paper when it came out, but forgot it by the time I wrote this section. Will add a citation.
Thank you, I had not found this one.
What I had considered is that in the case of self-driving cars, nobody is criminally accountable, even though the rest of us may be criminally negligent should we make some horrific error. Philosophically, there is some kind of reason that criminal acts require punishment beyond mere financial liability (e.g., prison time), and self-driving cars are exempted from this. Currently, self-driving cars are also exempt from the actual laws of the road, because the police are disempowered to enforce anything on the self-driving car.
It just makes logical sense really; the human using the tool is in the end responsible.
Whether the tool is too powerful or ethical to use is an orthogonal discussion, in my opinion. Taken to the extreme, nuclear weapons still need someone to fire or drop them. (We should still always have discussions on safety and ethics!)
the name is very sticky too. I can't imagine not calling people taking the blame meat shields now
why can't the name be 'scapegoat'? Since that's what they are - the "real" responsibility rests on the owners, and they happily shed it via limited-liability ownership of shares.
Reminds me of Succession when the next CEO of Waystar Royco was described as a “pain sponge.”
The problem with AI is that it isn't like any previous technology. There may be temporary jobs to fill in the gaps, but they won't be careers. The AI will do the process engineering and self-optimization. The prompt witchcraft is a good example: today it's totally unnecessary and doesn't actually increase performance, and they'll continue to make it easier to direct/steer the models.
We're literally trying to build an intelligence to replace us.
We?
The human species. "We" doesn't include everyone and doesn't necessarily imply the process happens through collaboration and planning (conspiracy). The race to automation is happening as expected; outside any group control and bound by competition. Game theory suggests the end result is us being replaced, if we make it that far. "We" as a species are the ones making it happen.
There is a very very small percentage of humanity that’s pushing this stuff on everyone else. In fact most are saying “please stop”.
“We” as a species certainly aren’t making it happen.
Good one. "We" are not Demon Sam Altman or that clown of Anthropic or Google or Microsoft
Don't forget you are also not Meta, xAI, Mistral, Alibaba, DeepSeek, Zhipu AI, Moonshot AI, ByteDance, Baidu, 01.AI, MiniMax, or Tencent.
I think that this is an interesting attempt at taxonomy, but it's a bit on the magical thinking end (and I say this as somebody that does a good amount of what's described as the incanter role). It's a combination of the author's previous witchy aesthetic (see his excellent "<x>ing the technical interview" series) and progressive labor politics (which are asymptotically doomed in the current automation push).
The biggest failure of imagination, I think, is the assumption we'd use humans for most (or *any*) of these jobs--for example, the work of the haruspex is better left to an LLM that can process the myriad of internal states (this is the mechanistic interpretability field).
Yes, I had the same impression. I'm sympathetic to the author's perspective but I can't muster even the minimal optimism they've shown here. The "process engineers" as described would themselves quickly be replaced by an automated system. The "statistical engineers", I think, would never be able to keep up with the rate of change of the AI models, which would likely have different statistical behavior and biases in each language/context/etc with each update, and so it's unlikely anyone would pay them to develop that required deep expertise in the first place. More likely, that work would be done at an AI foundation model company -- but it would be done just once, and then incorporated into the training process.
A magic 8-ball "can process the myriad of internal states" of any question you throw at it. But we don't use it even though it can give us answers.
And when the haruspex LLM fails, what do we turn to?
> and progressive labor politics (which are asymptotically doomed in the current automation push).
What do you mean exactly by this?
As an engineer, I'm never more excited about this job.
My implementation speed and bug-fixing of my typed code used to be the bottleneck. Now I just think about an implementation and it then exists. As long as I thought about the structure/input/output/testability and logic flow correctly and made sure I included all that information, it just works, nicely, with tests.
The Unix philosophy works well with LLMs too - you can have software that does one thing and only one thing well, fits in the context window, and does not lead to haphazard behavior.
Now my day essentially revolves around delivering (and improving on delivering) concentrated engineering thinking, which in my opinion is the pure part of the engineering profession itself. I like it quite a lot.
I mostly agree with you.
Though something I half-miss is using my own software as I build it to get a visceral feel for the abstractions so far. I've found that testability is a good enough proxy for "nice to use" since I think "nice to use" tends to mean that a subsystem is decoupled enough to cover unexpected usage patterns, and that's an incidental side-effect of testability.
One concern I have is that it's getting harder to demonstrate ability.
e.g. Github profiles were a good signal though one that nobody cared about unless the hiring person was an engineer who could evaluate it. But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore nor how you think about problems.
Funny enough, I think github and communication are still a huge part of what I see.
The GitHub code itself may be irrelevant, but is the product KISS/Unix-like? Or is it a demonstration of a complete lack of discipline about what "features" should be added? If you see something with multiple weakly related or completely irrelevant features strung together, that's saying something. Additionally, AI will often create spaghetti structures, and it requires human shepherding to ensure the structure remains sound.
Same with communication. I have AI smell; I know if something is AI slop. In my current job, docs I send with the expectation that others will read them are always prefaced with -- this section typed 100% by aperocky -- and I dispense with grammar and spelling checks for added authenticity. I'll then add -- following section is AI generated -- to mark the end of my personal writing.
I think that is the way to go in the future. I pass intentional thinking into the AI, not the other way around. There is knowledge flowing back, for sure, but only humans possess intention, at least for now.
Those things are all still signals. If taken from a snapshot of the Internet pre-AI.
People were still gaming GitHub profiles before AI, sometimes even just reuploading existing repos as their own.
> But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore nor how you think about problems.
Yup. I've spotted former coworkers who I know for a fact can barely write in their native language, let alone in English, working for AWS and writing English-language technical blog posts in full AI-ese. Full of the usual "it's not X, it's Y", full of AI-slop. Most of the text is filler, with a few tidbits of real content here and there.
I don't know about before, but now blog posts have become more noise than signal.
It's a strong signal in the negative direction, the best kind of signal really.
The "dead Internet" theory has become more real. It's especially bad on LinkedIn. Everyone is now an "AI expert", posting generated slop and updating their profiles with AI enhanced head shots.
> It's especially bad on LinkedIn
Agreed, but to be fair, LinkedIn was especially bad to begin with.
Even before AI-slop, LinkedIn posts were rightfully mocked. Self-congratulatory or self-pitying, full of empty platitudes and "lessons learned" and "journeys" (ended or started). There was never anything worth reading to begin with.
Now it's of course worse. I don't think I can stand reading about another self-appointed expert on LinkedIn writing about their completely unwarranted strategy and/or lessons and/or skepticism about AI.
I only go to LinkedIn for the daily puzzles!
> My implementation speed and bug-fixing of my typed code used to be the bottleneck
I remember those days fondly and often wish I could return to them. These days it's not uncommon to go a couple days without writing a meaningful amount of code. The cost of becoming too senior I suppose.
Anecdotally I've been observing a significant uptick in the amount of code being produced by my peers who are in senior engineer, leadership and engineering management positions.
They can take their 20+ years of experience and use it to build working systems in the gaps between meetings now. Previously they would have to carve out at least half a day of uninterrupted time to get something meaningful done.
> build working systems in the gaps between meetings now
Agreed, I've actually done this. Sitting in a meeting where someone was asking about what tooling we could build, what it might be capable of, what their options were. So while we were chatting I was having Claude build a working demo.
In the end it still needs to be turned into an enterprise app with all the annoying accoutrements that go with that, but for demo work it was phenomenal.
I'm excited and scared at the same time.
Yes I'm much more productive than before, and I'm convinced we can't get rid of engineers altogether... But how long until my team of 5 gets replaced by a single engineer? Am I going to be the one to keep my job or one of the 4 to be let go?
If the team does the exact same thing, not very long.
The ability to know what to build and what not to build is going to be as important as knowing how to build it. I still think engineers have an edge here. All my childhood dreams of what I should be able to do or build are becoming a reality, and the only thing blocking me is lack of time. I want to go faster still.
When I was in automation a decade ago, they kept telling us to never tell people this is going to replace them. What you tell them is that it will allow their teams to finally focus on what really matters. Instead of working on all these repetitive tasks, now they can focus on the much larger issues. Everybody bought in; teams felt like the automation we were doing was really going to make their jobs easier.
It never did.
Managers realized they could trim their teams down after we were done and did, in fact, lay off people by the hundreds. Doing the same work with fewer people was beneficial to them because now they got bigger bonuses and salary increases for adding to the company's bottom line. Many managers who did nothing more than lay off half their team were promoted faster up the ranks.
So yes, be scared, be VERY scared, and have a Plan B and a Plan C going forward. The people who created this have rose-colored glasses about how it's going to revolutionize business. The actual business owners and CEOs just see another new way to reduce human capital in order to increase profits.
> As an engineer, I'm never more excited about this job.
How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
Everyone thinks it won't be them, it will be others that will be impacted. We all think what we do is somehow unique and cannot be automated away by AI, and that our jobs are safe for the time being.
> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
The exciting part of the job is, and always has been, listening to idle chitchat where you pick up on the subtle cues of where one is finding difficulty in their life and then solving those problems. I think AI could already largely handle that today just fine, except:
You have to convince, especially non-technical, people to have idle chitchat with machines instead of humans
-or-
Convince them of and into having a machine always listening in to their idle conversations with humans
Neither of those is all that palatable in the current social landscape. If anything, people seem to be growing more wary of letting technology into their thoughts. Maybe there is never a future where humans become accepting of machines always being there trying to figure out what is wrong with them.
As someone in the 99th percentile of token usage, it's super clear to me where the agent will not be able to replace my judgement. Two areas:
1. If it exceeds the context, the agent does random stuff, often against simplicity and coherent logical structure.
2. LLMs have zero intention and rely on you to decide what to build and, more importantly, what not to build.
As such, I'm the limit on the number of concurrent agents working for me, because there is still a limit on my output of engineering judgement. I do get better, both at generating and delivering this judgement. Exceeding this limit, the output becomes garbage.
At this current year and date, the AI does not automate me in any way; I have something that it just flat out doesn't have.
Playing devil's advocate here, I'm not antagonizing you but thinking out loud.
> If it exceeds the context, the agent does random stuff, often against simplicity and coherent logical structure
That's a current technical limitation. Are you so sure it won't be overcome in the near/mid future?
> LLMs have zero intention and rely on you to decide what to build and, more importantly, what not to build
But work is being done to even remove or automate this layer, right? It can be hyperbole (in fact, it is) but aren't Anthropic et al predicting this? Why wouldn't your boss, or your boss' boss, do this instead of you? If they lack the judgment currently, are you so sure they cannot gain it, if they don't have to waste time learning how to code? If not now, what about soon-ish?
> At this current year and date, the AI does not automate me in any way
Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?
Well if you do nothing you should definitely be worried, because not using LLM is rapidly becoming untenable.
If you do a lot, you'll grow skeptical about some of the claims and hype, and have a sense of where this is leading to.
My position is that if someone uses LLMs a lot, they may be right or wrong about the future of LLMs. If they don't, then they definitely aren't right, or are only lucky.
My personal judgement is that both of these are hard caps until someone invents something that's not a transformer; starting from scratch, basically.
> because not using LLM is rapidly becoming untenable
Completely agreed. This is not what I'm advocating for. And definitely, there's a lot of self-serving hype (and fearmongering can be another kind of hype) by AI companies. But some of it I think will be true, or enough companies will believe it to be true, which amounts to the same.
I'm just worried, I cannot help it. And I'm not saying "don't use AI", I'm pushing back about the feeling of reckless "excitement".
Does it seem to you like those issues will be solved soon? Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?
> Does it seem to you like those issues will be solved soon?
No.
But I was also very skeptical about AI being able to code semi-reliably during the early stages of GPT hype, and look where I'm now: most of the code I produce is written by an AI. So I was wrong before, which makes me doubt my own ability to predict the near future.
> Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?
My boss' boss would probably love to get rid of both me and my direct boss. A whole class of problems would disappear, freeing up time for people higher up the chain to focus on this... either them or a tiny group of engineers, which leaves me out of a job either way. I've already seen people in small shops get fired because their immediate semi-technical boss can now do their job with AI (I can't go into details for privacy reasons; also, it doesn't matter if the end result is flawed, it matters that "mission accomplished" and someone is out of a job).
If they never learned to code, it wouldn't be very easy to build things, or to catch the BS that AI generates.
> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
Yeah, no one ever thinks beyond "whoa, how cool, I cloned Slack in 15 minutes!"
Personally, the thing I find more depressing is turning a career that was primarily about solving interesting puzzles in elegant ways into managing a swarm of idiot savant chatbots with "OK, that looks good" or "no, do it better" commands.
The problem I'm trying to solve with agents is similar here; for instance, my comment likely made zero impression on you because I'm against both of the things that you are also against here.
All plausible, but not very transformative. Like imagining that the new jobs enabled by the automobile include automobile maintenance, tire shops, and so on. Traveling nurses, motel operators, military tanks, DoorDash, suburban life, beer sales at NASCAR: those were all enabled by the car (and its larger sibling, the truck). Still missing are the jobs and industries enabled by "AI" that are not themselves "AI".
> I think a part of the reason is that these roles are not just about sending emails and looking at graphs, but also about dangling a warm body over the maws of the legal system and public opinion.
Spoilers for "How I Met Your Mother" ... but there's a character who has that kind of job, as a legal meat-shield. Now, ~10 years after airing, this funny clip feels like it would only need slight adjustments.
https://www.youtube.com/watch?v=8u62HptZ6TE
"Unavailable Due to the UK Online Safety Act" - without my VPN... do you know why?
Geoblocking the UK satisfies any age-verification requirement; otherwise the site owner would have to check whether their content is considered adult in the UK and implement something.
"otherwise the site owner would have to check if their content is considered adult in the UK and implement something"
IMO a small blog website is not going to get pulled-up for this - it's about the author making a point. They're entitled to do so of course.
> IMO a small blog website is not going to get pulled-up for this
Well, maybe not the typical engineering blog, but I think if you're a puritan, some posts/texts from Aphyr probably reach borderline "adult content", so I'm not that surprised Aphyr would rather play it safe and also make a point at the same time.
It's "playing it safe" in the same way that wearing full hockey gear to go to the store is "playing it safe".
He is either making a political point or excessively paranoid.
It's probably a political point, but I think your comparison oversells how inconvenient it is for someone to geoblock one small country, versus the headache if anything did happen. It's not much more effort than doing nothing, really.
And clearly users in the UK can find their own way to read it if they like, so the cost is also small there.
>geoblock one small country
Considering that there are multiple "why is this blocked in the UK" comments on every single one of these posts, maybe the UK isn't such a small country. Geoblocking a decent chunk of your readership would be a pretty big inconvenience for a writer, I would imagine.
The culture section of this writeup links to explicitly adult/erotic content in the footnotes and discusses 'adult themes' directly. His caution seems reasonable.
>or excessively paranoid.
Have you even read the shit politicians are either pulling or trying to pull these days? There is no amount of paranoia that is too much when talking about things like cross-national prosecution, laws regarding users not considered adults, and age verification.
You never know when one of your posts might gain serious traction. Not worth the risk. It's very easy to find examples of people making decisions thinking "I/we will never be big enough for that to be relevant", only to be haunted by that decision later. Classic example: partnership agreements/contracts between friends and family on small endeavors.
aphyr may have some NSFW photos on the site IIRC which may have got the domain swept up with the new UK laws.
It's self-imposed, I think? curl connects to the same aphyr.com in both cases, but when connecting from the UK it receives a different response body. Probably sensible I expect, if you just want things to work, legally speaking.
https://archive.is/OjGox
Why post an archive link for a static site with no ads or subscriptionware?
https://news.ycombinator.com/item?id=47779352
I see, thanks. Strange times for internet censorship in the west.
Agreed. If I remember correctly from the other sections of this post, I think it's a self-directed ban.
Can people make a soft assumption that if somebody went through the trouble of digging up an archive link, then access to the article is limited in some way?
Sure, but I would likewise appreciate if you made the assumption that my question wasn't snark and that I actually wanted to learn the answer.
With AI, you just have to choose between going slow vs making a huge blunder later at some point.
If you go fast, you are bound to come across AI bugs later. Then you ultimately slow down to fix them, which takes more time.
That black box will keep evolving. The AI interpreter will have to keep catching up with it.
I don't understand the title. It doesn't seem exactly clickbait but also doesn't seem to be what the article is about?
Anyway: The new job types might seem overspecialized now but history shows us this is indeed what happens as new industries open up. I think these predictions look quite solid.
I think, long term, there will be only one job comprising all these aspects. The only absolutely-minimum-required skill will be critical thinking, possibly along with data / statistical literacy for grounding.
"Specialization is for insects." (Or for someone pushing the boundaries of human knowledge.)
Jobs that have liability coverage are the last ones to get automated, even if the person isn't doing anything all day long in terms of "work". Someone needs to go to jail when things go wrong, as mandated by law quite often; "oops, the computer did it" isn't a sufficient excuse in some situations. Accountability is a thing. So while we could mechanically replace even CEOs with machines, and arguably at times the machines would be better than a few humans out there, a body that can be dragged to court is mandatory no matter how good the tech gets.
Humans will be held accountable, not machines, whatever technology is used. The jobs you suggest are based on the state of LLMs right now, and this could change rapidly, considering the pace of progress. These are just activities already done by people who work with these instruments, because they want to optimize and obtain the best/safest output from these machines.
> Humans will be held accountable, not machines, whatever is the technology used
Isn't this addressed explicitly in TFA, in section "meat shields"?
As for the rest, if you predict even the jobs described in TFA will be obsoleted by future LLMs+tools, then the future is even more dire than predicted by Aphyr, right? Fewer jobs for humans to do.
https://archive.is/OjGox
we are in the times of irrational exuberance - rationality will set in soon!
This is part 9 of a 10-part series. The author has posted every chapter to Hacker News every day for the past 9 days. Every time four of the first five or so comments are:
Someone noting it is unavailable in the UK.
Someone posting an archive.is link.
Someone asking why the above posted an archive link to a static site.
An answer that it is because the content is otherwise unavailable in the UK.
Do we really need to see this every single time?
I realize I'm not adding to the real discussion now either, but Jesus Christ, this is irritating. Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why they're doing so as part of the submission?
>Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why they're doing so as part of the submission?
[Author blocks link to avoid being potentially in violation of the law]
You ask the author to willingly provide a link, again potentially putting them in violation of the law.
Do you not see the irony in your question?
So you think the UK government is going to conduct an international criminal investigation against aphyr for posting an archive link in a Hacker News thread?
You're asking the wrong question.
Does the UK government have the legal right to conduct an international criminal investigation against any website that potentially violates its laws by having visitors from the UK access the site?
Answer yes or no, this is an easy binary question, and not one that requires any probabilistic thinking.
I wish we could flag some posts (like as "tangential") instead of this archaic upvote/downvote.
And obviously a way to filter in/out those flags.
https://xkcd.com/1053/
Relax, not everyone sees every article everyday
Is there a way back to calling human beings human beings and not "meat"? Or is the sociopathic Jeffrey Dahmer undertone now the new normal?
That is exactly why the term is being used.
A company like Amazon doesn't treat its warehouse workers as human beings. Workers are seen as disposable: forced to piss in bottles, forced to work around the corpses of their collapsed coworkers, paid the absolute minimum possible, and replaced the second they don't operate like a perfect unfailing machine. You aren't viewed like a human, you are a tool. Cattle. A piece of meat they are forced to retain because a robot isn't quite capable of doing your task yet.
The article's use of "meat shields" isn't any different. Humans are going to be hired for the sole reason of taking accountability for actions dictated by AI. They are there only because the company can't put blame on a machine and will be sued to oblivion if there's nobody to blame at all. Your existence as a person is irrelevant, they are just interested in someone with a heartbeat they can blame when stuff inevitably goes wrong.
That's hilarious given that this entire site here is using the term unironically to refer to people in general, good stuff
> someone with a heartbeat they can blame when stuff inevitably goes wrong.
If said person can be blamed (and take on the liability) but cannot stop the action, audit the action, or take preventative measures (which cost money), then they cannot take responsibility for real, and so whether the blame falls on them on paper is irrelevant. If there's real punishment (like jail time) but no real power to enforce anything, who would be stupid enough to take on this job? And if there's no real punishment, then what does it matter that the blame is on paper?
Someone who needs to feed their family, given that AI CEOs are predicting that their technology will either destroy the world, take everyone's jobs, or both.
In the article, Ctrl+F for "meat" returns 3 results, while "human" returns 8. Seems like "human" remains the dominant word of choice in this author's vernacular.
Edit: Further, the only times "meat" appears is in the phrase "meat shield", which is an analogy that is very apt relative to the crux of the article.
Edit 2: "People" appears 13 times!
humans are made of meat -> https://news.ycombinator.com/item?id=47688678
"meatshield" has the correct connotations for that sort of work.
What are you talking about, the only use of meat is in "Meat Shield", a phrase that's been around a long time now.
Sure. Would you like WWII, medieval-era Christianity, or Khanate Asia?
stone age gets a vote from me!
Meat puppeteering has nothing to do with Jeffrey; it's just the state of slowly getting pushed into doing 95% of dev work with agents.
There has always been a subset of highly technical people in the software world who are anti-organic. They dislike the "meatspace" and humans and relate more with machines and software.
Yeah, that's what I was referring to, not the specific part of the article. I've seen it much more here recently. Kind of disgusting and sad, but on the other hand, it's good if people show their real face that way.