Interesting to see an opinion piece on The Register pointing out 3 camps, none of which apply to most people I know who work in software development.
Mainly that AI is a useful tool. Sometimes it is magical, but it has limits and is often wrong. It can be a great illustrator of the sunk cost fallacy when working on very complex problems. But it's light years faster and more useful than googling for solutions when faced with a difficult debugging challenge. On net I would much prefer to have AI around than not.
I think it is a miss that software development is completely omitted in this article, especially for a tech or tech-adjacent publication that's been around forever.
Setting aside the fact that we're talking about "AI" for the moment... doesn't it seem unusual to speculate that despite recent progress, it's just going to be... flat from here out? (That's directed at El Reg, not you.)
Hard to take the rest of it seriously with them taking a position like that. I can't think of a time that's been true for any technology in my career, whether it was one I found useful or not.
NFTs were kind of a scam to sell you jpgs on the grounds that insider Bob had sold one to insider Harry for $20k and so you're getting a bargain buying one for $15k. It's a different thing really.
> doesn't it seem unusual to speculate that despite recent progress, it's just going to be.. flat from here out?
No? When new tech arrives there is always a bunch of low-hanging fruit around, so there is quick progress immediately afterwards, but then it flatlines relatively quickly and progress is as slow as usual again.
So it's a safe bet that progress will slow down to the usual level sooner or later, and that seems to be around now for text models. This flatlining happens faster the more you invest, since you exhaust the low-hanging fruit faster.
Why is it silly? Cars haven't fundamentally changed in the past 50 years. They have gotten a lot better, but not in a game-changing way; society still functions the same with cars as it did 50 years ago.
I see the same thing with text models: you can say they improve, but not in a game-changing way, and you have the same scenario as cars. It wouldn't be wrong for a person 50 years ago to say "cars are as good as they're ever going to get." In his lifetime he was right; nothing happened with cars that would force him to change his habits during his life.
But up to 50 years ago cars changed quite quickly, so you could say it was weird to claim cars wouldn't start flying or such within 50 years. But here we are; nothing dramatically changed.
Tesla self-driving/Waymo/Comma.ai isn't perfect, but they're good at what they do. That's a pretty dramatic change, in the last year or so. You get in the car, and then don't actually have to drive it, the car does it for you. Sure there are some corner cases that still haven't been solved, but most of the time, I get in the car and it just does its thing for me.
Something I've been doing a lot lately is investigating the people on HN that push various beliefs, and in this comment thread there's two voices pushing for how much AI is going to continue to grow, going forwards.
Who are these two voices? Well, we've got fragmede, who, looking through their HN profile, works at NVIDIA as a "senior AI infrastructure engineer", and we've got mh-, who, looking through their HN profile, works at Wunderkind, which is "pioneering a new category of AI agentic marketing".
So, the two people in here pushing messaging about how great and valuable AI is, and how it'll continue to get better, have their jobs/livelihood tied to AI and people continuing to pour money into AI.
It almost always turns out that way. The people pushing the loudest for some idea are almost universally tied to profiting by convincing people of that idea. Not that that means they're wrong, of course. Just providing context.
My comment wasn’t about AI infra, my job, or broad societal changes. I write code for a living and worry about losing my job to AI like any other developer. I was just describing my experience with self-driving cars doing their thing. The key is whether the argument holds up on its merits. Pointing to someone’s job is background context, not a substitute for engaging with what they actually said.
Yeah, it does seem like progress has plateaued considerably. The leaps from GPT-2 to 3, 3 to 4, and 4 to 5 shrink with each one, with 5 being particularly disappointing.
I, with no evidence, feel like GPT-5 was an efficiency release: save as much power/compute as possible while mitigating the quality loss, leaving only the top model (using similar compute to previous models) to show real improvement.
> On net I would much prefer to have ai around than not.
I agree sort of, but on the other hand we don't know their true cost, whether that's the out-of-pocket expenses, or the pollution and high electricity/water costs that will result.
There are lots of things people do that are much more polluting and less useful or efficient than AI, such as eating meat, driving cars, and traveling on airplanes, the latter two particularly in business contexts, especially when we have video call technology and remote work.
You forgot to include the information pollution cost. AI slop is becoming inescapable in Google search results. So this new tool is completely obliterating the usefulness of old tools by flooding them with VC-subsidized crap.
Many fast food places now have tablet based ordering, so turns out AI wasn’t needed for software to take away jobs.
That said, AI could do that job perfectly well. The reason we still have human to human interaction when you order is that it creates a more interesting environment for employees, who crave at least some kind of human contact. And customers will pay marginally more for the food if they get some human contact as well.
> Many fast food places now have tablet based ordering
As an experience though, they could hardly have implemented it in a worse way. A big part of the reason I don't go to McDonald's any more is that the experience of using the ordering kiosks is so awful compared to just telling a human what I want.
> A big part of the reason I don't go to McDonald's any more is that the experience of using the ordering kiosks is so awful compared to just telling a human what I want.
Huh, for me it's the polar opposite. Maybe it's because I mumble, am unclear, or don't speak the local language as well as the natives where I live, but I always preferred the kiosks. I seem to always get what I order, and it's a lot easier to customize things. Generally it just feels faster, which I guess is the most important thing about fast food: I want to be in and out of there as quick as possible, and the fewer humans I have to deal with, the better.
When I worked at McDonald's, a major thing I learned is that when 95% of people say we got their order wrong, the customer actually ordered it wrong without realizing it. A customer says "I want a cheeseburger, plain", and McDonald's puts that in as "Cheeseburger, no ketchup, mustard, onion or pickle". Customers will take plain to mean a dozen different things.
I also worked at McDonald's. Your example is interesting because this specific issue came up often. Different people interpreted it to mean different things. The cashiers who'd been around the longest had learned to clarify exactly what the customer wants.
Also, this is absolutely NOT a customer issue. It's a restaurant issue to clarify. Plain means different things at different restaurants, so the solution is to _always_ clarify exactly what the customer means.
Every fast food place I've been to, a "plain cheeseburger" includes cheese. However, at every high-end burger place I've been to, "plain" does NOT include cheese. So there is a somewhat standard meaning.
I have this conversation enough that I now call out "plain no cheese" and ensure "no cheese" is written on the ticket.
McDonald's main menu items aren't actually called "cheeseburgers"; they're called "burgers with cheese". To me, this reads that "cheese" is a "topping" on the burger.
Further, they _only_ showcase the "burger with cheese" variant in their combos and specials. This further drives home that you should think about cheese the same way as other toppings.
That's a strange source of confusion. A plain cheeseburger without cheese has a different name: a plain hamburger. I can't imagine saying "cheeseburger", no matter what qualifying words around it, and being surprised it included cheese.
I have a very funny childhood memory of being in a McDonalds with my Cub Scout chapter; one of the boys ordered a "cheeseburger with no cheese", which - of course - was delivered with cheese, and the boy's father escalated the 'situation' to management.
You're conflating two different things. A plain cheeseburger obviously will include cheese by definition. A "plain" at a "burger place" would mean a plain hamburger. Both are correct usages of the adjective "plain" because the nouns they're describing are inherently different.
> Customer says "I want a cheeseburger, plain", Mcdonalds puts that in as "Cheesebuger, no ketchup, mustard, onion or pickle". Customers will take plain to mean a dozen different things.
Isn't it up to the person who is receiving the order to ask clarifying questions then? Since they know it's potentially unclear/ambiguous, why not try to resolve the ambiguity before making the order?
99% of orders are not made incorrectly, and you're being told to go faster and keep times low. Is this the one order of the hour where the person will come back and complain that it's wrong? Is this person going to come back regardless and say it was wrong to get free food? It's unambiguous enough to not be worth the time.
Be a bit smooth about it: as you type in the order, verbally say back what you're entering in other words, then ask them to confirm. No lost time, potentially fewer people to deal with in the future, win-win.
It's been several lifetimes since I worked in a fast food place (not McDonald's), but at least then, this was how we were trained to do it. Reading the order back like that was required.
That'd just annoy the 99% of people that agree with McDonald's on what plain means. People at McDonald's are often in a hurry, and some would get really annoyed at stupid questions. Better to piss off the 1% than the 99%.
If you're clarifying at this level, there are likely many other questions that you'd ask.
Even better when the customer just plain orders the wrong thing. I have a vivid memory of my pregnant wife and me at breakfast one morning: she ordered apple juice. When the waitress brought apple juice she said she'd ordered orange, and when the waitress looked confused, I had to remind my wife she'd ordered apple. Total brain fart on her part (pregnancy brain seems to be a very real thing). The waitress didn't care, especially after we laughed it off, and brought orange. But I know I've undoubtedly done the same thing, so I'm patient with mixups.
I've customized our McD's orders for my entire life - they mess it up maybe 1/20 times, about the same ratio I'll have to park and wait. Otherwise it's always worked for me!
I almost always (except if there is a big queue) customize something, because then it's guaranteed made at the spot, instead of the heated old stuff. And since I started using the kiosks, the orders are always right, and fresh.
A custom order used to be a hack to be sure you got a freshly-made sandwich and not one that had been sitting in a warmer for 15 minutes, but they make everything to order now. And they still fuck them up, often.
Hard to know for a fact without knowing where I live, I'm guessing :) FWIW, it's not true at McDonalds in Spain, they definitely have popular stuff sitting behind the counter for longer than the items you customize here.
You can order a hamburger with "no salt" now via the app (and probably the kiosk?). But I'm not sure if there are other seasonings you're referring to, though.
> the experience of using the ordering kiosks is so awful compared to just telling a human what I want.
Those kiosks are horrific and greatly reduced the number of visits I made to McDonald's. The insane pricing since then further reduced those visits to zero.
That's really the worst part. The upsells are insulting and get in the way of ordering what I want. Classic enshittification, starting with something which people like and slowly deteriorating it in order to make an extra buck. I'm sure the MBAs love it though.
> in 2025 how is it so hard to make a user interface that doesn't lag like a bastard on every scroll/click/...
Because everything is done in fucking React or Node or Blazor or whatever the newest flavor of this wRiTe oNcE rUn aNyWheRe bollocks is, because it's always, always, always the exact same fucking thing: abstracting UI elements to fucking goddamn JavaScript and running it in a browser.
And heaven knows McDonald's can't possibly pay for proper software development, they only made like 14 billion last year. They're barely scraping by.
That's not the issue. All these technologies can be nice and snappy. The problem is that developers suck at making software nice and snappy.
I've taken over several React apps over the years, and one thing I always end up doing is removing a bunch of spinners, because you don't need a spinner when the page loads instantly, as it should. It's very common for pages to take 10-60+ seconds to load, and when I look into it, it's always obvious why they're so slow and easy to fix. The devs who made it just sucked.
> I always end up doing is remove a bunch of spinners because you don't need a spinner when the page loads instantly
I always have to remind people to add spinners: just because it loads instantly on your developer machine with a fiber connection (if not talking to a local container, even) doesn't mean it will in the real world. But spinners only show when actively fetching, so if it's fast they only show for a split second. It's the best of both worlds.
Yeah, I don't like the split second spinners. And I work on company internal apps so every user is generally on a good connection.
What I might do is just add a global spinner using tanstack query, what I don't like is having 50 different spinners for every little component. Makes the site feel janky and weird.
I just don't see the point unless it's loading for 5+ seconds. If it's faster than that then the user won't have time to wonder if it's stuck anyway. And I prefer to have one or very few requests, rather than 10+ different ones for a single page.
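The two positions above (always add spinners vs. avoid split-second flicker) are often reconciled with a delayed spinner: only show it once a request has been pending past a threshold, so fast responses never flash one. A minimal sketch of that rule; the function name and the 200 ms threshold are my own illustrative choices, not from any comment here:

```typescript
// Delayed-spinner rule: fast responses never flash a spinner,
// slow ones still get one. The threshold is an assumed value.
const SPINNER_DELAY_MS = 200;

function shouldShowSpinner(pendingForMs: number, stillFetching: boolean): boolean {
  // Show nothing if the request already finished, or if it hasn't
  // been pending long enough for the user to notice a wait.
  return stillFetching && pendingForMs >= SPINNER_DELAY_MS;
}
```

In a React app this predicate would typically be driven by a timer started when fetching begins, e.g. alongside something like TanStack Query's global fetching state.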
No, devs being bad is not a solvable problem. You just have to find devs who know what they're doing, or at least have a few who can teach the less talented ones and check their work.
There is no programming language that you can't write slow code in.
In my humble opinion, this kind of thing is the largest blind spot in the current tech economy.
Massive LLMs had a breakout moment with chat, and now everyone has invested HARD into that technology while in fact there is really no good reason to think that massive models (billions of parameters, requiring billions of dollars to train, and requiring power-gulping servers to run) are needed or even preferred for most AI tasks.
We had algorithmic automation for all kinds of things in the 80s, and it has been steadily improving for everything from chess engines to computer vision to content suggestion ever since. Photo touch-up runs on handheld devices and is nearly instantaneous. Self-checkout is ubiquitous. Digital CNC and 3D printing are no longer relegated to professionals, to the point that amateurs can buy off-the-shelf solutions and start creating products with a few mouse clicks.
Billions are being spent on shovels in the current gold rush, but are they really needed?
Expectations: AI will deliver food to you and your high-paid programmer colleagues.
Reality: you and your colleagues work at McDonald's because AI took your high-paid programming jobs.
As someone who worked in fast food in high school, I completely disagree with this.
Order taking via drive through can be surprisingly hard.
* Often lots of background noise
* Sometimes multiple people try to order (often with one of those being way away from the mic)
* People don't always know exactly what they want or what it's called. Sometimes things have a regional or local name that's not on the board. Right now, I order a "$5 meal deal" at McDonald's. This is often not listed on the board and it's not called a "$5 meal deal", but literally every cashier knows exactly what I'm talking about. I doubt AI would figure this out.
* People often have custom requests that don't follow the "official menu".
* The actual food ticketing system that gets sent to the grill has significant limitations in resolution. If you're doing anything other than a basic deletion, it's likely just coming through to the grill as "ask me".
* It's extremely hard to handle edge cases like make-up meals, incorrect orders, coupons, etc. These generally require human judgement and a bit of contextual understanding. Generally, these are things you only understand by actually looking at the real world. For example: there's an unaccounted-for burger now sitting at the end of the grill line; looks like someone grabbed the wrong food.
* Human cashiers are really good at hearing someone shout out something like "ice cream machine is down" or "hold on fries" or "we're out of chicken" or "no fire sauce" and understanding what that means in terms of orders. It's a pain to get an AI system to understand all of this nuance.
Yes, this is a surprisingly difficult job that has a lot of complications. In fact, most jobs have surprising complications to them, and that surprising difficulty is why there's skepticism about AI taking over other jobs.
Actually one of the fast food restaurants here takes automated spoken orders at the drive through. I've only used it once, but I was surprised that it worked flawlessly for me...
The thing about automated systems is that they typically cover the happy paths, and leave people who fall outside of those happy paths extremely frustrated.
Take automated phone menu systems, for example.
"If you are calling about X, press 1
If you are calling about Y, press 2
If you are calling about Z, press 3"
Customer presses 0 because they are calling about none of the above and want to talk to a human.
"I'm sorry. I don't recognize that menu option. To hear the options again, please press 9."
Oh just today, to give another example of how automation can seriously frustrate end users, I'm trying to get a Square POS account approved for my new business. Their automated verification system sent me a form requesting more information about my business because certain information "could not be verified." One of the questions on the form was asking me to explain a discrepancy between the legal business name I typed in when setting up the account and the business name as it appears on the articles of incorporation that I submitted. The discrepancy in question: white-space and capitalization. No human being would read the two strings as distinct or recognize any discrepancy. Only software does that.
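For what it's worth, the normalization such a verification system seems to be skipping is trivial. A sketch, purely illustrative of the commenter's point and not Square's actual code:

```typescript
// Canonicalize a business name before comparing: trim, collapse runs
// of whitespace, and lowercase. A comparison done this way would not
// flag whitespace/capitalization "discrepancies" as real ones.
function namesMatch(a: string, b: string): boolean {
  const canon = (s: string) => s.trim().replace(/\s+/g, " ").toLowerCase();
  return canon(a) === canon(b);
}
```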
As others have pointed out, LLMs were not involved in the project from the article. But this transition will happen quickly—fast food chains are ruthless about efficiency and ordering from a discrete set of available options actually is something that AIs can do really well.
I just did a captcha the other day that asked the user to select which items can fit inside the sample item (which was a handbag). You'd think that a multimodal deep learning model could figure out what objects fit inside other objects if it's going to cure cancer or whatever, but no, I'm assuming that it needs to be taught explicitly.
Probably what will happen is that you will need to use the app to "order" and then scan a generated code before accessing the drive-thru. No AI really needed; kiosks have been replacing these jobs since Covid (and probably earlier than that).
The analogy to the Dot Com bubble seems strong. Pretty much everything promised in the Dot Com bubble has come to pass. It just didn't come to pass on the time frame the stock valuations implied. And given that the time value of money is an exponential process and not just a linear one, missing by 10-15 years is missing by a lot.
I don't know that AI is going to miss in the same way by 15 years but I also don't see how it can possibly justify the promises implied by the current valuations and investments. A thing nobody wants to talk about is how quickly all the hardware that was so frantically purchased is depreciating, for instance. And even missing by 5 years with the exponential time value of money is still missing by a lot, although perhaps I won't italicize this particular "lot".
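To make the "exponential time value" point above concrete: at a discount rate r, value delivered t years late is only worth (1 + r)^-t of value delivered on time. A quick sketch; the 8% rate used in the comment below is an arbitrary assumption, not a claim about actual market rates:

```typescript
// Present value of $1 delivered `years` in the future at annual
// discount rate `r`. Illustrates why missing by a decade "is a lot".
function discountFactor(r: number, years: number): number {
  return 1 / Math.pow(1 + r, years);
}
```

At r = 0.08, discountFactor(0.08, 10) is roughly 0.46, so promises fulfilled a decade late were worth less than half what on-time delivery would have been worth.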
> It just didn't come to pass on the time frame the stock valuations implied.
There’s always “urgency” involved when pumping money around the system. And some win big with overselling this need to act fast. I don’t see how patience can be introduced.
Agreed on both fronts, but it bears repeating that both things can be true at the same time. More often than not, people think that when the AI bubble bursts, LLMs, diffusion models, etc. will just vanish, and they absolutely will not vanish. This tech is here to stay, for good.
When the dot com bubble burst, the internet did not go away. Just the over inflated companies did. The underlying tech was not to blame, but the mismanaged VC funded nonsense that goes on inside hype machines. We lathered and rinsed, now we're repeating.
The prices recovered. The housing and especially the rental market have been insane ever since. You can view it as a success for investors, but not for anyone else.
> people think when the AI bubble bursts, LLMs, diffusion models etc. will just vanish, and they will absolutely not vanish
I don't care if they vanish. I just want the hype to die. The last few months this site would more truthfully have been titled LLM News. I use LLMs but for the most part I find discussions about them boring.
Thanks for posting this and yes you’re right, it’s a more detailed and well written post, giving a better explanation of the AI situation.
We “old timers” love TheRegister but it has a certain style and way of writing, especially the original UK version (where BOFH gained fame).
The MIT report that found 90% of workers are regularly using LLMs at work, many of them even multiple times daily[1], somehow has become the spark of the recent explosion of "AI has failed" articles. Which is ironic, because it shows that the people we trust to write with integrity are either only reading the headlines or purposely misleading with their articles.
If that's the case, LLMs cannot replace these primary source summarizing clowns fast enough.
If everyone is competing to be THE notes app, and there is an expectation for every person in the entire world to use said Notes apps for nearly all tasks in the future? Yea, it absolutely could deserve a global investment of roughly just 3% U.S. GDP. Moreover, isn’t it up to the people investing to decide that, rather than people who used ChatGPT once and then complained on the internet?
No, it absolutely wouldn't justify that level of investment, and if that were happening we'd have Hacker News articles rightly calling out the notes app bubble.
No, because to replace a notes app you need pen and paper. But to replace LLM you need at least a human. There's literally nothing else. And humans don't come cheap.
The PDF is viewable here [0]; the link above doesn't take you there.
And from that PDF, I'm not seeing anything that is incongruent with what is stated in TFA:
From TFA:
> To be precise, the report states: "The GenAI Divide is starkest in deployment rates, only 5 percent of custom enterprise AI tools reach production." It's not that people aren't using AI tools. They are. There's a whole shadow world of people using AI at work. They're just not using them "for" serious work. Instead, outside of IT's purview, they use ChatGPT and the like "for simple work, 70 percent prefer AI for drafting emails, 65 percent for basic analysis. But for anything complex or long-term, humans dominate by 9-to-1 margins."
From PDF:
> Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.
Words can never be automated; they're the equivalent of vaporware without underlying neural syntax, unreproducible in binary.
AI is precisely the same thing: vaporware minus neural syntax.
Very dumbed-down take on the subject. What's "AI"? ChatGPT and Google AlphaFold are both AI.
I very much doubt speech and voice recognition and synthesis, as well as visual object recognition, are "as good as they will get" (however scary some of the practical applications might look). Ditto specialized neural networks like the aforementioned AlphaFold.
General-purpose chatbots trained on randomly selected data from stolen books and social media (and increasingly on its own slop)? Very likely.
Architectures allowing said chatbots to trigger actions online or (worse yet) IRL? Almost definitely.
I very much disagree. Terms and definitions matter, and in this case what you mean by "AI" changes the answer. Again, general-purpose LLMs might be a dead end. Specialized neural networks are not. One might argue that even specialized LLMs (i.e. those fine-tuned for code generation) have a ways to go, too.
I'm not sure about the target audience of The Register, but here on HN we should be more precise in our discussion.
I've used AI to get meaningful results (working code) out of it.
The service provider got little useful out of it because I'm fine using their free versions for it.
Also, AI is not as good as it's going to get. It's going to get much, much better than it is, but it's going to follow a mostly mundane trajectory getting there.
According to the AI hype merchants we should have seen titanic super-AGIs slug it out in the sky by the year 2023. That clearly did not happen.
What we will get is hype for current model+1 and disappointment when it's released for the coming decades.
Read on then — the author suggests a 4th camp that I believe you (and I as a matter of fact) can agree with.
"I propose a fourth: AI is now as good as it's going to get, and that's neither as good nor as bad as its fans and haters think…"
I suppose I disagree with "as good as it's going to get" … but for the time being (this decade?) this might be correct.
Have you not heard of the various AI winters? Or the Gartner hype cycle?
There was lots of progress in NFTs. Not much going on there these days.
Sure, slowing is natural for the reason you say. But the statement we're commenting on is:
> AI is now as good as it's going to get
And that's just silly, from my point of view.
Thanks. It's worthwhile context that I perhaps should have disclosed, but I don't think it affects my opinions in this thread.
My opinion was simply in reaction to an, IMO, nonsensical claim:
> AI is now as good as it's going to get
And it would have been the same no matter what* technology we're discussing.
* Ok, someone commented NFTs. But I never considered that a technology.
(Since it's in the thread now: my opinions are mine, not my employer's.)
and you didn't even mention the fact that they can now run on solar power instead of dinosaur fuel
quite a lot has changed with cars
doesn't really have anything to do with the future of AI, tho
Yeah, it does seem like progress has plateaued considerably. The leaps from GPT-2 to 3, 3 to 4, and 4 to 5 shrink with each one, with 5 being particularly disappointing.
I, with no evidence, feel like GPT-5 was an efficiency release: save as much power/compute as possible while mitigating the quality loss, leaving only the top model (using similar compute to previous models) to show real improvement.
[flagged]
> On net I would much prefer to have ai around than not.
I agree sort of, but on the other hand we don't know their true cost, whether that's the out-of-pocket expenses, or the pollution and high electricity/water costs that will result.
There are lots of things people do that are much more polluting and less useful or efficient than AI, such as eating meat, driving cars, traveling on airplanes, etc. The latter two especially in terms of business use, given that we have video-call technology and remote work.
You forget to include the information pollution cost. AI slop is becoming inescapable in Google search results. So this new tool is completely obliterating the usefulness of old tools by flooding them with VC-subsidized crap.
Of course we know, it’s not magic, but some people like to spread FUD to further their own agendas.
Big number scary
One of the lowest level jobs in the market is taking orders at a fast food drive thru.
If AI can't do this job, it probably can't do yours either.
https://www.bbc.com/news/articles/c722gne7qngo
Bottom line: AI has very poor grasp of reality --- because (surprise, surprise) it has zero real world experience.
>A trial of the system, which was developed by IBM and uses voice recognition software to process orders, was announced in 2019.
I don't think the problem was AI technology...
Many fast food places now have tablet-based ordering, so it turns out AI wasn't needed for software to take away jobs.
That said, AI could do that job perfectly well. The reason we still have human to human interaction when you order is that it creates a more interesting environment for employees, who crave at least some kind of human contact. And customers will pay marginally more for the food if they get some human contact as well.
> Many fast food places now have tablet based ordering
As an experience though, they could hardly have implemented it in a worse way. A big part of the reason I don't go to McDonald's any more is that the experience of using the ordering kiosks is so awful compared to just telling a human what I want.
> A big part of the reason I don't go to McDonald's any more is that the experience of using the ordering kiosks is so awful compared to just telling a human what I want.
Huh, for me it's the polar opposite. Maybe it's because I mumble, am unclear, or don't speak the local language as well as the natives where I live, but I always preferred the kiosks. I seem to always get what I order, and it's a lot easier to customize things. Generally it just feels faster, which I guess is the most important thing about fast food; I want to be in and out of there as quickly as possible, and the fewer humans I have to deal with, the better.
When I worked at McDonald's, a major thing I learned is that when 95% of people say their order was made wrong, the customer actually ordered it wrong without realizing it. Customer says "I want a cheeseburger, plain"; McDonald's puts that in as "Cheeseburger, no ketchup, mustard, onion or pickle". Customers will take plain to mean a dozen different things.
I also worked at McDonald's. Your example is interesting because this specific issue came up often. Different people interpreted it to mean different things. The cashiers who'd been around the longest had learned to clarify exactly what the customer wants.
Also, this is absolutely NOT a customer issue. It's a restaurant issue to clarify. Plain means different things at different restaurants, so the solution is to _always_ clarify exactly what the customer means.
Every fast food place I've been to, a "plain cheeseburger" includes cheese. However, at every high-end burger place I've been to, "plain" does NOT include cheese. So there is a somewhat standard meaning.
I have this conversation enough that I now call out "plain no cheese" and ensure "no cheese" is written on the ticket.
McDonald's main menu items aren't actually called "cheeseburgers"; they're called "burgers with cheese". To me, this reads that "cheese" is a "topping" on the burger.
Further, they _only_ showcase the "burger with cheese variant" in their combos and special. This further drives home that you should be thinking about cheese in the same way as toppings.
That's a strange source of confusion. A plain cheeseburger without cheese has a different name: a plain hamburger. I can't imagine saying "cheeseburger", no matter what qualifying words around it, and being surprised it included cheese.
I have a very funny childhood memory of being in a McDonalds with my Cub Scout chapter; one of the boys ordered a "cheeseburger with no cheese", which - of course - was delivered with cheese, and the boy's father escalated the 'situation' to management.
Even more confusing when McDonalds calls them "burgers with cheese". Cheese seems to be a uniquely distinct event.
Also "I'll take a number 3 meal plain" is void of an actual subject for the type of burger.
You're conflating two different things. A plain cheeseburger obviously will include cheese by definition. A "plain" at a "burger place" would mean a plain hamburger. Both are correct usages of the adjective "plain" because the nouns they're describing are inherently different.
The challenge is that there's an indeterminable point at which the "cheese" stops being an integral part of the burger.
For a "cheeseburger" cheese is obviously integral. For a Big Mac, it's less clear but a "plain" Big Mac usually includes cheese.
For a fancy place's "deluxe Wagyu beef burger" that has cheese/truffles/a bunch of other stuff, a "plain" version will likely not have cheese.
> Customer says "I want a cheeseburger, plain", Mcdonalds puts that in as "Cheesebuger, no ketchup, mustard, onion or pickle". Customers will take plain to mean a dozen different things.
Isn't it up to the person who is receiving the order to ask clarifying questions then? Since they know it's potentially unclear/ambiguous, why not try to resolve the ambiguity before making the order?
99% of orders are not made incorrectly, and you're being told to go faster and keep times low. Is this the one order of the hour where the person will come back and complain that it's wrong? Is this person going to come back regardless and say it was wrong, to get free food? It's unambiguous enough to not be worth the time.
Be a bit smooth about it: as you type in the order, verbally say out loud what you're entering, in other words, then ask them to confirm. No lost time, potentially fewer people to deal with in the future, win-win.
It's been several lifetimes since I worked in a fast food place (not McDonald's), but at least then, this was how we were trained to do it. Reading the order back like that was required.
That'd just annoy the 99% of people that agree with McDonald's on what plain means. People at McDonald's are often in a hurry, and some would get really annoyed at stupid questions. Better to piss off the 1% than the 99%.
If you're clarifying at this level, there are likely many other questions that you'd ask.
Even better when the customer just plain orders the wrong thing. I have a vivid memory of my pregnant wife and me at breakfast one morning; she ordered apple juice. When the waitress brought apple juice, my wife said she'd ordered orange, and when the waitress looked confused, I had to remind my wife she'd ordered apple. Total brain fart on her part (pregnancy brain seems to be a very real thing). The waitress didn't care, especially after we laughed it off, and brought orange. But I know I've undoubtedly done the same thing, so I'm patient with mixups.
Rule #1 at McDonald's: Never customize. If you don't want pickles, take them off yourself. Otherwise they'll just get it wrong.
Whenever I took my kids there I told them "if you don't want it the way they make it then don't order it."
I've customized our McD's orders for my entire life - they mess it up maybe 1/20 times, about the same ratio I'll have to park and wait. Otherwise it's always worked for me!
I almost always (except if there is a big queue) customize something, because then it's guaranteed to be made on the spot instead of being the heated old stuff. And since I started using the kiosks, the orders are always right, and fresh.
A custom order used to be a hack to be sure you got a freshly-made sandwich and not one that had been sitting in a warmer for 15 minutes, but they make everything to order now. And they still fuck them up, often.
> but they make everything to order now.
Hard to know for a fact without knowing where I live, I'm guessing :) FWIW, it's not true at McDonalds in Spain, they definitely have popular stuff sitting behind the counter for longer than the items you customize here.
And many people are now getting delivery which means another 15 minutes in the delivery vehicle picnic bag.
I doubt it. I’m a perfectly capable communicator and I also prefer them.
Removing the seasoning from a McDonald’s burger is only possible with human contact afaik. So no, not any customization is possible.
You can order a hamburger with "no salt" now via the app (and probably the kiosk?). But I'm not sure if there are other seasonings you're referring to, though.
> the experience of using the ordering kiosks is so awful compared to just telling a human what I want.
Those kiosks are horrific and greatly reduced the number of visits I made to McDonald's. The insane pricing since then further reduced those visits to zero.
It started out pretty okay but every time I order on one of those tablets they seem to have added an extra step of trying to sell me something more.
That's really the worst part. The upsells are insulting and get in the way of ordering what I want. Classic enshittification, starting with something which people like and slowly deteriorating it in order to make an extra buck. I'm sure the MBAs love it though.
“Do you want fries with that?” predates the MBA craze
True, but it's so much worse now. Several "special offers" you have to go through, with dark patterns to prevent skipping it.
they are quite impressively bad
in 2025 how is it so hard to make a user interface that doesn't lag like a bastard on every scroll/click/...
it's almost as bad as their terrible, terrible, terrible app
> in 2025 how is it so hard to make a user interface that doesn't lag like a bastard on every scroll/click/...
Because everything is done in fucking React or Node or Blazor or whatever the newest flavor of this wRiTe oNcE rUn aNyWheRe bollocks is, because it's always, always, always the exact same fucking thing: abstracting UI elements to fucking goddamn JavaScript and running it in a browser.
And heaven knows McDonald's can't possibly pay for proper software development, they only made like 14 billion last year. They're barely scraping by.
What technology would you use instead then?
That's not the issue. All these technologies can be nice and snappy. The problem is that developers suck at making software nice and snappy.
I've taken over several React apps over the years, and one thing I always end up doing is removing a bunch of spinners, because you don't need a spinner when the page loads instantly, as it should. It's very common for pages to take 10-60+ seconds to load, and when I look into it, it's always obvious why they're so slow and easy to fix. The devs who made it just sucked.
> I always end up doing is remove a bunch of spinners because you don't need a spinner when the page loads instantly
I always have to remind people to add spinners, just because it loads instantly on your developer machine with a fiber connection (if not talking to a local container even) doesn't mean it will in the real world. But spinners only show when actively fetching so if it's fast they only show for a split-second. It's the best of both worlds.
Yeah, I don't like the split second spinners. And I work on company internal apps so every user is generally on a good connection.
What I might do is just add a global spinner using tanstack query, what I don't like is having 50 different spinners for every little component. Makes the site feel janky and weird.
I just don't see the point unless it's loading for 5+ seconds. If it's faster than that then the user won't have time to wonder if it's stuck anyway. And I prefer to have one or very few requests, rather than 10+ different ones for a single page.
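One way to avoid both the split-second flash and the no-feedback-on-slow-connections problem is to delay the spinner's appearance and enforce a minimum visible duration once it does show. A minimal sketch of that timing logic in plain TypeScript (the 200 ms / 300 ms defaults are my own illustrative numbers, not from TanStack Query or any other library):

```typescript
// Sketch of "delayed spinner" timing: the spinner only appears if the
// request is still in flight after delayMs, and once shown it stays up
// at least minVisibleMs so it never flashes for a split second.
interface SpinnerWindow {
  shows: boolean;    // does the spinner appear at all?
  visibleMs: number; // how long it stays on screen once it does
}

function spinnerWindow(
  requestMs: number,    // how long the request actually took
  delayMs = 200,        // wait this long before showing anything
  minVisibleMs = 300    // once shown, keep it up at least this long
): SpinnerWindow {
  if (requestMs <= delayMs) {
    // Fast request: finished inside the delay window, nothing flashes.
    return { shows: false, visibleMs: 0 };
  }
  // Slow request: spinner appears at delayMs and stays until the
  // request finishes, padded out to the minimum visible duration.
  return { shows: true, visibleMs: Math.max(requestMs - delayMs, minVisibleMs) };
}

console.log(spinnerWindow(50));  // a 50 ms response never shows a spinner
console.log(spinnerWindow(800)); // an 800 ms one shows it for 600 ms
```

Fast pages stay spinner-free, slow ones still give feedback, and nothing flickers.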
> All these technologies can be nice and snappy
Maybe in theory, but in practice I see it very rarely. Maybe it starts out great and fast and then devolves into a shitfest.
Given how frequent this is, maybe it’s time to actually blame the technology itself if it makes it so easy to mess up?
No, devs being bad is not a solvable problem. You just have to find devs who know what they're doing, or at least have a few who can teach the less talented ones and check their work.
There is no programming language that you can't write slow code in.
They have to upload video of your face to use for sentiment analysis at every click. /s ?
Also the kiosks are frequently broken.
I actually prefer the kiosk. No ambiguity.
I have exactly opposite experience. If tablets are out of order for some reason, I just go somewhere else rather than attempting to talk to a human.
And the best way is just sit at the table, order with your phone and somebody brings the tray to you.
I would love to meet the idiot that believes this. I have some money to make $$$
In my humble opinion, this kind of thing is the largest blind spot in the current tech economy.
Massive LLMs had a breakout moment with chat, and now everyone has invested HARD into that technology while in fact there is really no good reason to think that massive models (billions of parameters, requiring billions of dollars to train, and requiring power-gulping servers to run) are needed or even preferred for most AI tasks.
We had algorithmic automation for all kinds of things in the 80s, and that has been steadily improving for everything from chess engines to computer vision to content suggestion ever since. Photo touch-up runs on handheld devices and is nearly instantaneous. Self-checkout is ubiquitous. Digital CNC and 3D printing are no longer relegated to professionals, to the point that amateurs can buy off-the-shelf solutions and start creating products with a few mouse clicks.
Billions are being spent on shovels in the current gold rush, but are they really needed?
There was this joke at some point in time:
Or, similarly:
> I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do laundry and dishes.
https://twitter.com/AuthorJMac/status/1773679197631701238?la...
As someone who worked in fast food in high school, I completely disagree with this.
Order taking via drive through can be surprisingly hard.
* Often lots of background noise
* Sometimes multiple people try to order (often with one of those being way away from the mic)
* People don't always know exactly what they want or what it's called. Sometimes things have a regional or local name that's not on the board. Right now, I order a "$5 meal deal" at McDonald's. This is often not listed on the board and it's not called a "$5 meal deal", but literally every cashier knows exactly what I'm talking about. I doubt AI would figure this out.
* People often have custom requests that don't follow the "official menu".
* The actual food ticketing system that gets sent to the grill has significant limitations in resolution. If you're doing anything other than a basic deletion, it's likely just coming through to the grill as "ask me".
* It's extremely hard to handle edge cases like makeup meals, incorrect orders, coupons, etc. These generally require human judgement and a bit of contextual understanding. Generally, these are things you only understand by actually looking at the real world. For example, is there an unaccounted burger now sitting at the end of the grill line - looks like someone grabbed the wrong food.
* Human cashiers are really good at hearing someone shout out something like "ice cream machine is down" or "hold on fries" or "we're out of chicken" or "no fire sauce" and understanding what that means in terms of orders. It's a pain to get an AI system to understand all of this nuance.
Yes, this is a surprisingly difficult job that has a lot of complications. In fact, most jobs have surprising complications to them, and that surprising difficulty is why there's skepticism about AI taking over other jobs.
Actually one of the fast food restaurants here takes automated spoken orders at the drive through. I've only used it once, but I was surprised that it worked flawlessly for me...
The thing about automated systems is that they typically cover the happy paths, and leave people who fall outside of those happy paths extremely frustrated.
Take automated phone menu systems, for example.
"If you are calling about X, press 1
If you are calling about Y, press 2
If you are calling about Z, press 3"
customers presses 0 because they are calling about none-of-the-above and wants to talk to a human
"I'm sorry. I don't recognize that menu option. To hear the options again, please press 9."
Oh just today, to give another example of how automation can seriously frustrate end users, I'm trying to get a Square POS account approved for my new business. Their automated verification system sent me a form requesting more information about my business because certain information "could not be verified." One of the questions on the form was asking me to explain a discrepancy between the legal business name I typed in when setting up the account and the business name as it appears on the articles of incorporation that I submitted. The discrepancy in question: white-space and capitalization. No human being would read the two strings as distinct or recognize any discrepancy. Only software does that.
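For what it's worth, the software-side fix is tiny: normalize both strings before comparing. A hedged sketch in TypeScript (these normalization rules are my own illustration, not anything Square documents):

```typescript
// Compare two business names the way a human reader would: trim,
// collapse internal whitespace, and ignore case before checking
// equality, so cosmetic differences don't register as a "discrepancy".
function sameBusinessName(a: string, b: string): boolean {
  const normalize = (s: string) =>
    s.trim().replace(/\s+/g, " ").toLowerCase();
  return normalize(a) === normalize(b);
}

// These differ only in capitalization and spacing, so they match:
console.log(sameBusinessName("ACME Widgets,  LLC", "acme widgets, llc")); // true
```

Any verification pipeline that flags a whitespace or capitalization mismatch as a discrepancy is skipping even this one-liner of normalization.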
As others have pointed out, LLMs were not involved in the project from the article. But this transition will happen quickly—fast food chains are ruthless about efficiency and ordering from a discrete set of available options actually is something that AIs can do really well.
I just did a captcha the other day that asked the user to select which items can fit inside the sample item (which was a handbag). You'd think that a multimodal deep learning model could figure out what objects fit inside other objects if it's going to cure cancer or whatever, but no I'm assuming that it needs to be taught explicitly.
This is a defense against AI, not a training step. Though a multimodal model should be able to pass it.
Prob what will happen is that you will need to use the app to "order" and then scan a generated code before accessing the drive thru. No AI really needed; kiosks have been replacing these jobs since Corona (and prob earlier than that).
Corona is a brand of beer.
These are actually *not* using llms! Good ol' typical decision tree.
The article itself said it was developed by IBM in 2019 and wasn’t using LLMs. That’s not exactly a great citation.
Android kiosk can do that job.
You are describing a vending machine
6 years old and not using LLMs, try again.
Large Language Models will change a significant portion of how companies and thus the economy works. This will be transformational.
AI Companies are still overhyped.
The analogy to the Dot Com bubble seems strong. Pretty much everything promised in the Dot Com bubble has come to pass. It just didn't come to pass on the time frame the stock valuations implied. And given that the time value of money is an exponential process and not just a linear one, missing by 10-15 years is missing by a lot.
I don't know that AI is going to miss in the same way by 15 years but I also don't see how it can possibly justify the promises implied by the current valuations and investments. A thing nobody wants to talk about is how quickly all the hardware that was so frantically purchased is depreciating, for instance. And even missing by 5 years with the exponential time value of money is still missing by a lot, although perhaps I won't italicize this particular "lot".
Sounds like another case of Amara's law that says:
> We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
Agreed on both fronts, but it bears repeating that both things can be true at the same time. More often than not, people think that when the AI bubble bursts, LLMs, diffusion models, etc. will just vanish, and they will absolutely not vanish. This tech is here to stay, for good.
When the dot com bubble burst, the internet did not go away. Just the over inflated companies did. The underlying tech was not to blame, but the mismanaged VC funded nonsense that goes on inside hype machines. We lathered and rinsed, now we're repeating.
"The internet" was invented in the 70ths.
[deleted extreme wrongthink]
What do you mean when you say housing hasn’t recovered from the 2008 bubble?
This isn’t true. Housing prices recovered by 2016
The prices recovered. The housing and especially the rental market have been insane ever since. You can view it as a success for investors, but not for anyone else.
More like 2021 if you adjust for inflation.
https://www.longtermtrends.net/home-price-vs-inflation/
> people think when the AI bubble bursts, LLMs, diffusion models etc. will just vanish, and they will absolutely not vanish
I don't care if they vanish. I just want the hype to die. The last few months this site would more truthfully have been titled LLM News. I use LLMs but for the most part I find discussions about them boring.
This post from earlier today was imo a better take https://news.ycombinator.com/item?id=45008209
Thanks for posting this and yes you’re right, it’s a more detailed and well written post, giving a better explanation of the AI situation. We “old timers” love TheRegister but it has a certain style and way of writing, especially the original UK version (where BOFH gained fame).
The MIT report that found 90% of workers are regularly using LLMs at work, many of them even multiple times daily [1], somehow has become the spark of the recent explosion of "AI has failed" articles. Which is ironic, because it shows that the people we trust to write with integrity are either only reading the headlines or purposely misleading with their articles.
If that's the case, LLMs cannot replace these primary source summarizing clowns fast enough.
[1] https://nanda.media.mit.edu/ai_report_2025.pdf
I use a notes app multiple times a day for work. Does this make notes apps worth $1T in global investments?
If everyone is competing to be THE notes app, and there is an expectation for every person in the entire world to use said Notes apps for nearly all tasks in the future? Yea, it absolutely could deserve a global investment of roughly just 3% U.S. GDP. Moreover, isn’t it up to the people investing to decide that, rather than people who used ChatGPT once and then complained on the internet?
No, it absolutely wouldn't justify that level of investment and if that was happening we'd have hacker news articles rightly calling the notes app bubble.
No, because to replace a notes app you need pen and paper. But to replace LLM you need at least a human. There's literally nothing else. And humans don't come cheap.
Most AI applications are just enhanced google (to make up for how trashed google got by LLMs!)
Still, you couldn't enhance google in this manner without hiring a person to do the googling for you
The PDF is viewable here [0]; the link above doesn't take you there.
And from that PDF, I'm not seeing anything that is incongruent with what is stated in TFA:
From TFA:
> To be precise, the report states: "The GenAI Divide is starkest in deployment rates, only 5 percent of custom enterprise AI tools reach production." It's not that people aren't using AI tools. They are. There's a whole shadow world of people using AI at work. They're just not using them "for" serious work. Instead, outside of IT's purview, they use ChatGPT and the like "for simple work, 70 percent prefer AI for drafting emails, 65 percent for basic analysis. But for anything complex or long-term, humans dominate by 9-to-1 margins."
From PDF:
> Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.
0: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
I'm more impressed that 10% can avoid it given how aggressively AI is being shoehorned into everything.
Executives are telling their employees to use AI or else..
It feels like tokens are the new pieces of flair.
> I propose a fourth: AI is now as good as it's going to get...
is just silly. It'll get way better. Not sure about the timing but it'll happen.
Words can never be automated; they're the equivalent of vaporware without underlying neural syntax, unreproducible in binary. AI is precisely the same thing: vaporware minus neural syntax.
That MIT report shall now be known as the Paper that Spawned a Thousand Churnalisms.
Very dumbed-down take on the subject. What's "AI"? ChatGPT and Google AlphaFold are both AI.
I very much doubt speech and voice recognition and synthesis, as well as visual object recognition, are "as good as they will get" (however scary some of the practical applications might look). Ditto specialized neural networks like the aforementioned AlphaFold.
General-purpose chatbots trained on randomly selected data from stolen books and social media (and increasingly on its own slop)? Very likely.
Architectures allowing said chatbots to trigger actions online or (worse yet) IRL? Almost definitely.
As you well know, even when you're pretending you don't, in this context when people say "AI" they mean LLMs like ChatGPT and Claude, not AlphaFold.
I very much disagree. Terms and definitions matter, and in this case what you mean by "AI" changes the answer. Again, general-purpose LLMs might be a dead end. Specialized neural networks are not. One might argue that even specialized LLMs (i.e., fine-tuned for code generation) have a ways to go, too.
I'm not sure about target audience of the TheRegister but here on HN we should be more precise in our discussion.
I've used AI to get meaningful results (working code) out of it.
The service provider got little useful out of it because I'm fine using their free versions for it.
Also, AI is not as good as it's going to get. It's going to get much, much better than it is, but it's going to follow a mostly mundane trajectory getting there.
According to the AI hype merchants, we should have seen titanic super-AGIs slug it out in the sky by the year 2023. That clearly did not happen.
What we will get is hype for current model+1 and disappointment when it's released for the coming decades.
[dupe] of every discussion from last week or so:
Say farewell to the AI bubble, and get ready for the crash
https://news.ycombinator.com/item?id=44964548
Tech, chip stock sell-off continues as AI bubble fears mount
https://news.ycombinator.com/item?id=44965187
Is the A.I. Sell-Off the Start of Something Bigger?
https://news.ycombinator.com/item?id=44963715
AI is predominantly replacing outsourced, offshore workers
https://news.ycombinator.com/item?id=44940944
95% of Companies See 'Zero Return' on $30B Generative AI Spend
https://news.ycombinator.com/item?id=44974104
[dead]