I work as a DevOps/SRE and have been doing it in FinTech (banks, hedge funds, startups) and Crypto (an L1 chain) for almost 20 years.
My thoughts on vibe coding vs production code:
- vibe coding can 100% get you to a PoC/MVP probably 10x faster than pre LLMs
- This is partly b/c it is good at things I'm not good at (e.g. front end design)
- But then I need to go in and double check performance, correctness, information flow, security etc
- The LLM makes this easier but the improvement drops to about 2-3x b/c there is a lot of back and forth + me reading the code to confirm etc (yes, another LLM could do some of this but then that needs to be set up correctly etc)
- The back and forth part can be faster if e.g. you have scripts/programs that deterministically check outputs
- Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)
So overall, this is why I think we're getting wildly different reports on how effective vibe coding is. If you've never built a data pipeline and a LLM can spin one up in a few minutes, you think it's magic. But if you've spent years debugging complicated trading or compliance data pipelines you realize that the LLM is saving you some time but not 10x time.
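On the deterministic-checks point above: the check can be as dumb as a golden-file diff that an agent (or you) can run in a loop for a pass/fail instead of eyeballing logs. A minimal Java sketch, with hypothetical file arguments:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal golden-file check: the pipeline's actual output is compared
// line-for-line against a known-good snapshot. Exit code 0/1 makes it
// trivially scriptable in an agent feedback loop.
public class GoldenCheck {
    public static boolean matches(List<String> actual, List<String> golden) {
        return actual.equals(golden);
    }

    public static void main(String[] args) throws Exception {
        // args[0] = actual output file, args[1] = golden snapshot (hypothetical paths)
        List<String> actual = Files.readAllLines(Path.of(args[0]));
        List<String> golden = Files.readAllLines(Path.of(args[1]));
        if (!matches(actual, golden)) {
            System.err.println("MISMATCH: output differs from golden file");
            System.exit(1);
        }
        System.out.println("OK");
    }
}
```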
I'm building a Java HFT engine and the amount of things AI gets wrong is eye opening. If I didn't benchmark everything I'd end up with a much less optimized solution.
Examples: AI really wants to use Project Panama (FFM) and while that can be significantly faster than traditional OO approaches it is almost never the best. And I'm not talking about using deprecated Unsafe calls; I'm talking about primitive arrays being better for Vector/SIMD operations on large sets of data, and NIO being better than FFM + mmap for file reading.
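To make the primitive-array point concrete, this is the loop shape HotSpot's auto-vectorizer (and the incubating Vector API) handles well: a flat primitive array, stride-1 access, no boxing, no virtual dispatch. Just an illustrative sketch, not a benchmark:

```java
import java.util.Arrays;

// Summing a primitive double[] streams through contiguous memory, so the
// JIT can emit SIMD instructions for the loop. Iterating a List<Double>
// instead forces pointer chasing and an unboxing on every element.
public class PrimitiveSum {
    public static double sum(double[] xs) {
        double acc = 0.0;
        for (int i = 0; i < xs.length; i++) {
            acc += xs[i];            // candidate for SIMD lanes
        }
        return acc;
    }

    public static void main(String[] args) {
        double[] xs = new double[1_000_000];
        Arrays.fill(xs, 1.0);
        System.out.println(sum(xs)); // prints 1000000.0
    }
}
```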
You can use AI to build something that is sometimes better than what someone without domain specific knowledge would develop but the gap between that and the industry expected solution is much more than 100 hours.
AI is extremely good at the things that it has many examples for. If what you are doing is novel then it is much less of a help, and it is far more likely to start hallucinating because 'I don't know' is not in the vocabulary of any AI.
I haven't had that at all, not even a single time. What I have had is endless round trips with me saying 'no, that can't work' and the bot then turning around and explaining to me why it is obvious that it can't work... that's quite annoying.
> Please carefully review (whatever it is) and list out the parts that have the most risk and uncertainty. Also, for each major claim or assumption can you list a few questions that come to mind? Rank those questions and ambiguities as: minor, moderate, or critical.
> Afterwards, review the (plan / design / document / implementation) again thoroughly under this new light and present your analysis as well as your confidence about each aspect.
There's a million variations on patterns like this. It can work surprisingly well.
You can also inject 1-2 key insights to guide the process. E.g. "I don't think X is completely correct because of A and B. We need to look into that and also see how it affects the rest of (whatever you are working on)."
Of course! I get pretty lazy so my follow-up is often something like:
"Ok let's look at these issues 1 at a time. Can you walk me through each one and help me think through how to address it"
And then it will usually give a few options for what to do for each one as well as a recommendation. The recommendation is often fairly decent, in which case I can just say "sounds good". Or maybe provide a small bit of color like: "sounds good but make sure to consider X".
Often we will have a side discussion about that particular issue until I'm satisfied. This happens more when I'm doing design / architectural / planning sessions with the AI. It can be as short or as long as it needs to be. And then we move on to the next one.
My main goal with these strategies is to help the AI get the relevant knowledge and expertise from my brain with as little effort as possible on my part. :D
A few other tactics:
- You can address multiple at once: "Items 3, 4, and 7 sound good, but let's work through the others together."
- Defer a discussion or issue until later: "Let's come back to item 2, or possibly save that for a later session".
- Save the review notes / analysis / design sketch to a markdown doc to use in a future session. Or just as a reference to remember why something was done a certain way when I'm coming back to it. Can be useful to give to the AI for future related work as well.
- Send the content to a sub-agent for a detailed review and then discuss with the main agent.
I would say that if the AI has to make decisions about picking between frameworks or constructs irrelevant to the domain at hand, it feels to me like you are not using the AI correctly.
I think the main issue is treating the LLM as an unrestrained black box; there's a reason nobody outside tech should trust LLMs so blindly.
The only way to make LLMs useful for now is to restrain their hallucinations as much as possible with evals, and those evals need to be very clear about what goals you're optimizing for.
See karpathy's work on the autoresearch agent and how it carries out experiments; it might be useful for what you're doing.
We were working on translations for Arabic and in the spec it said to use "Arabic numerals" for numbers. Our PM said that "according to ChatGPT that means we need to use Arabic script numbers, not Arabic numerals".
It took a lot of back-and-forths with her to convince her that the numbers she uses every day are "Arabic numerals". Even the author of the spec could barely convince her -- it took a meeting with the Arabic translators (several different ones) to finally do it. Think about that for a minute. People won't believe subject matter experts over an LLM.
In my experience, people outside of tech have nearly limitless faith in AI, to the point that when it clashes with traditional sources of truth, people start to question them rather than the LLM.
It would help if you briefly specified the AI you are using here. There are wildly different results between using, say, an 8B open-weights LLM and Claude Opus 4.6.
I've been using several. LM Studio and any of the open weight models that can fit my GPU's RAM (24GB) are not great in this area. The Claude models are slightly better but not worth the extra cost most of the time since I typically have to spend almost the same amount of time reworking and re-prompting, plus it's very easy to exhaust credits/tokens. I mostly bounce back and forth between the codex and Gemini models right now and this includes using pro models with high reasoning.
You can achieve optimized C/C++ speeds, you just can't program the same way you always have. Step 1, switch your data layout from Array of Structures to Structure of Arrays. Step 2, after initial startup switch to (near) zero object creation. It's a very different way to program Java.
You have to optimize your memory usage patterns to fit in CPU cache as much as possible, which is something typical Java developers don't consider. I have a background in assembly and C.
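A minimal sketch of the AoS-to-SoA move, with hypothetical names; the point is that scanning one field walks contiguous primitive memory instead of chasing an object pointer per element:

```java
// Array-of-Structures (what typical Java looks like): each Quote is its own
// heap object, so scanning one field hops around scattered memory.
//   class Quote { double bid; double ask; long ts; }
//   Quote[] quotes;

// Structure-of-Arrays: each field is one contiguous primitive array, so a
// scan over `bid` streams through cache lines with zero pointer chasing.
public class QuoteBook {
    public final double[] bid;
    public final double[] ask;
    public final long[]   ts;

    public QuoteBook(int capacity) {
        bid = new double[capacity];
        ask = new double[capacity];
        ts  = new long[capacity];
    }

    public double maxBid(int n) {
        double m = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < n; i++) m = Math.max(m, bid[i]);
        return m;
    }
}
```

Allocating the arrays once at startup also fits the "(near) zero object creation after warmup" rule from step 2.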
I'd say it's slightly harder since there is a little bit of abstraction, but most of the time the JIT will produce code as good as C compilers. It's also a niche that often considers any application running on a general purpose CPU to be slow. If you want industry leading speed you start building custom FPGAs.
Not necessarily. Java can be insanely performant, far more than I ever gave it credit for in the first decade of its existence. There has been a ton of optimization and you can now saturate your links even if you do fairly heavy processing. I'm still not a fan of the language but performance issues seem to be 'mostly solved'.
There are actually cases when Java (the HotSpot JVM) runs faster than the same logic written in C/C++ because the JVM is doing dynamic analysis and selective JIT compilation to machine code.
Java has significant overhead: most/every object is allocated on the heap, can be synchronized on, and carries extra memory and performance overhead so it can be GC controlled. It's very hard/not possible to tune this part away.
You program differently for this niche in any language. The hot path (number crunching) thread doesn't share objects with gateway (IO) threads. Passing data between them is off heap, you avoid object creation after warm up. There is no synchronization, even volatile is something you avoid.
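A toy sketch of that kind of handoff, assuming a single producer and a single consumer; names are illustrative, and a real engine would use release/acquire VarHandles or a library like Agrona or the Disruptor rather than this:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Single-producer/single-consumer handoff: prices live off-heap in a direct
// ByteBuffer, and only two monotonically increasing sequence counters cross
// between threads. No locks, and no per-message object allocation after
// startup (the demo poll() boxes its return value purely for simplicity).
public class SpscPriceRing {
    private static final int SLOTS = 1024;            // power of two
    private final ByteBuffer buf = ByteBuffer.allocateDirect(SLOTS * 8);
    private final AtomicLong head = new AtomicLong(); // advanced by producer only
    private final AtomicLong tail = new AtomicLong(); // advanced by consumer only

    public boolean offer(double price) {
        long h = head.get();
        if (h - tail.get() == SLOTS) return false;    // ring full
        buf.putDouble((int) (h & (SLOTS - 1)) * 8, price);
        head.lazySet(h + 1);                          // publish to consumer
        return true;
    }

    public Double poll() {
        long t = tail.get();
        if (t == head.get()) return null;             // ring empty
        double p = buf.getDouble((int) (t & (SLOTS - 1)) * 8);
        tail.lazySet(t + 1);                          // free the slot
        return p;
    }
}
```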
How exactly are you passing data? You can pass some primitives without allocating them on the heap. You can use a tiny subset of Java + the standard library to write high performance code, but why would you do this instead of using Rust or C++?
Depends. Many reasons, but one is that Java has a much richer set of 3rd party libraries to do things versus rolling your own. And often (not always) third party libraries that have been extensively optimized, real world proven, etc.
Then things like the jit, by default, doing run time profiling and adaptation.
I personally know of an HFT firm that used Java approximately a decade ago. My guess would be they're still using it today given Java performance has only improved since then.
Optimal in what sense? In the java shops I've worked at it's usually viewed as a pretty optimal situation to have everything in one language. This makes code reuse, packaging, deployment, etc much simpler.
In terms of speed, memory usage, runtime characteristics... sure there are better options. But if java is good enough, or can be made good enough by writing the code correctly, why add another toolchain?
I am curious about what causes some to choose Java for HFT. From what I remember the amount of virgin sacrifices and dances with the wolves one must do to approach native speed in this particular area is just way too much of development time overhead.
Probably the same thing that makes most developers choose a language for a project: it's the language they know best.
It wasn't a matter of choosing Java for HFT, it was a matter of selecting a project that was a good fit for Java and my personal knowledge. I was a Java instructor for Sun for over a decade, I authored a chunk of their Java curriculum. I wrote many of the concurrency questions in the certification exams. It's in my wheelhouse :)
My C and assembly is rusty at this point so I believe I can hit my performance goals with Java sooner than if I developed in more bare metal languages.
Software HFT? I see people call Python code HFT sometimes so I understand what you mean. It's more in-line with low latency trading than today's true HFT.
I don't work for a firm so don't get to play with FPGAs. I'm also not co-located in an exchange and using microwave towers for networking. I might never even have access to kernel networking bypass hardware (still hopeful about this one). Hardware optimization in my case will likely top out at CPU isolation for the hot path thread and a hosting provider in close proximity to the exchanges.
The real goal is a combination of eliminating as much slippage as possible, making some lower timeframe strategies possible and also having best-in-class back testing performance for parameter grid searching and strategy discovery. I expect to sit between industry leading firms and typical retail systematic traders.
The one person who understands HFT yeah. "True" HFT is FPGA now and also those trades are basically dead because nobody has such stupid order execution anymore, either via getting better themselves or by using former HFTs (Virtu) new order execution services.
So yeah there's really no HFT anymore, it's just order execution, and some algo trades want more or less latency which merits varying levels of technical squeezing latency out of systems.
Then you list all of the things you want it not to do and construct a prompt to audit the codebase for the presence of those things. LLMs are much better at reviewing code than writing it so getting what you want requires focusing more on feedback than creation instructions.
There’s a big gap between reality and the influencer posts about LLMs. I agree with you that LLMs do provide some significant acceleration, but the influencers have tried to exaggerate this into unbelievable numbers.
Even non-influencers are trying to exaggerate their LLM skills as a way to get hired or raise their status on LinkedIn. I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company). Many of these posts come from people who were all in on crypto companies a few years ago.
The world really is changing but there’s a wave of influencers and trend followers trying to stake out their claims as leaders on this new frontier. They should be ignored if you want any realistic information.
I also think these exaggerated posts are causing a lot of people to miss out on the real progress that is happening. They see these obviously false exaggerations and think the opposite must be true, that LLMs don’t provide any benefit at all. This is creating a counter-wave of LLM deniers who think it’s just a fad that will be going away shortly. They’re diminishing in numbers but every LLM thread on HN attracts a few people who want to believe it’s all just temporary and we’re going back to the old ways in a couple years.
> I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company).
This always seems to be the pattern. "I vibe coded my product and shipped it in 96 hours!" OK, what's the product? Why haven't I heard of it? Why can't it replace the current software I'm using? So, you're looking for work? Why is nobody buying it?
Where is the Quicken replacement that was vibecoded and shipping today? Where are the vibecoded AAA games that are going to kill Fortnite? Where is the vibecoded Photoshop alternative? Heck, where is the vibecoded replacement for exim3 that I can deploy on my self hosted E-mail server? Where are all of the actual shipping vibecoded products that millions of users are using?
I agree with your general point but ... "Where are the vibecoded AAA games". A game dev team is typically less than 15% programmers. Most of the team are artists, followed by game designers. Maybe someday those will be replaced too but at the moment, while you can get some interesting pictures from stable-diffusion techniques it's unlikely to make a cohesive game and even prompting to create all of it would still take many person years.
That said, I have had some good experiences getting a few features from zero to working via LLMs and it's helped me find lots of bugs far easier than my own looking.
I can imagine a vibe coded todo app. I can also kind of imagine a vibe coded GIMP/Photoshop, though it would still take several person years, prompting through each and every feature.
I note that games are mostly art assets and things like level design, and players are already happy to instantly consign such products to the slop bin.
The whole thing is "market for lemons": app stores filling with dozens of indistinguishable clones of each product category will simply scare users off all of them.
Do you really need Turbotax? Just feed it the tax code, your financial data, and the relevant forms and it should be good to go. Now we have freed up the labor of accountants so they can go be productive in another segment of society. /s
"I come from a state that raises corn and cotton and cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I am from Missouri. You have got to show me."
>Many of these posts come from people who were all in on crypto companies a few years ago.
This matches my observation. There seems to be a certain "type" of person like this. And it's not just people looking for work.
My guess is they either have super low critical thinking skills, a very cynical view of the world where lies and exaggeration are the only way to make it, or something more pathological (narcissism etc).
“Day 7" would be amazing - all that I see YouTube recommending is "I tried it for 24 hours"
I was listening to an "expert" on a podcast earlier today up until the point where the interviewer asked how long his amazing new vibe-coded tooling has been in production, and the self-proclaimed expert replied "actually we have an all-hands meeting later today so I can brief the team and we will then start using the output..."
The “store on the chain” thing turned out to be a fad in terms of technology, even though it made some people a lot of money (in the billions and more) via the crypto thing. That was less than 10 years ago, so many of us do remember the similarities between the discourse then and what’s happening now.
With all that said, today’s LLMs do seem to provide a little more value than the blockchain thing did; for example OCR/.pdf parsing is, I’d say, a solved problem right now thanks to LLMs, which is nice.
This is exactly my experience at Lovable. For some parts of the organization, LLMs are incredibly powerful and a productivity multiplier. For the team I am in, Infra, they are often a distraction and a negative multiplier.
I can't say how many times the LLM-proposed solution to a jittery behavior is adding retries. At this point we have to be even more careful with controlling the implementation of things in the hot path.
I have to say though, giving Amp/Claude Code the Grafana MCP + read-only kubectl has saved me days worth of debugging. So there's definitely trade-offs!
My colleague recently shipped a "bug fix" that addresses a race condition by adding a 200ms delay somewhere, almost completely coded by LLM. LLM even suggests that "if this is not good enough, increase it to 300ms".
That says something about how much some people care about this.
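For what it's worth, the principled fix for that class of bug is an explicit happens-before edge instead of a tuned delay. A minimal sketch (class and field names hypothetical):

```java
import java.util.concurrent.CountDownLatch;

// The sleep-based "fix" just bets that 200 ms is always enough. A latch
// makes the ordering explicit, so the race is gone rather than hidden.
public class StartupHandoff {
    private final CountDownLatch ready = new CountDownLatch(1);
    private volatile String config;              // hypothetical shared state

    public void initialize(String cfg) {         // called by the init thread
        config = cfg;
        ready.countDown();                       // publish: init is done
    }

    public String awaitConfig() throws InterruptedException {
        ready.await();                           // blocks until initialize() ran
        return config;                           // guaranteed visible here
    }
}
```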
I concur on the DevSecOps aspect for a more specific reason: if you're failing a pipeline because ThirdPartyTool69 doesn't like your code style or whatever, you can have the LLM fix it. Or get you to 100% test coverage etc. Or have it update your Cypress/Jest/SonarQube configs until the pipeline passes without losing brain cells doing it by hand. Or find you a set of dependency versions that passes.
How does that test suite get built and validated? A comprehensive and high quality test suite is usually much larger than the codebase it tests. For example, the sqlite test suite is 590x [1] the size of the library itself
You forgot hope-driven development and release processes and other optimism-based ("I'm sure it's fine") methods, or faith-based approaches to testing (ship and pray, ...). Customer-driven involuntary beta testing also comes to mind, and "let's see what happens" 0-day testing before deployment. We also do user-driven error discovery, frequently.
> - This is partly b/c it is good at things I'm not good at (e.g. front end design)
Everyone thinks LLMs are good at the things they are bad at. In many cases they are still just giving “plausible” code that you don’t have the experience to accurately judge.
I have a lot of frontend app dev experience. Even modern tools (Claude w/Opus 4.6 and a decent Claude.md) will slip in unmaintainable slop in frontend changes. I catch cases multiple times a day in code review.
Not contradicting your broader point. Indeed, I think if you’ve spent years working on any topic, you quickly realize Claude needs human guidance for production quality code in that domain.
Yes I’ve seen this at work where people are promoting the usage of LLMs for.. stuff other people do.
There’s also a big disconnect in terms of SDLC/workflow in some places.
If we take at face value that writing code is now 10x faster, what about the other parts of the SDLC? Is your testing/PR process ready for 10x the velocity or is it going to fall apart?
What % of your SDLC was actually writing code? If coding was 20% of the duration and is now 10x faster, the other 80% is unchanged: 0.8 + 0.2/10 = 0.82 of the original duration, so time to market is only ~18% faster.
What I do now is I make an MVP with the AI, get it working. And then tear it all down and start over again, but go a little slower. Maybe tear down again and then go even more slowly. Until I get to the point where I'm looking at everything the AI does and every line of code goes through me.
Absolutely. In my experience there are more “good coders” than people who are good at code review/PR/iterative feedback with another dev.
A lot of people are OCD pedants about stuff that could be solved with a linter (but can’t be bothered to implement one), or just “LGTM” everything. Neither provides value or feedback to help develop other devs.
More generally: LLM effectiveness is inversely proportional to domain specificity. They are very good at producing the average, but completely stumble at the tails. Highly particular brownfield optimization falls into the tails.
At this point, every programmer who claims that vibecoding doesn't make you at least 10 times more productive is simply lying or, worse, doesn't know how to vibe code.
- So, you want to tell me that you don't review the code you write? Or that others don't review it?
- You bring up ONE example with a bottleneck that has nothing to do with programming. Again, if you claim it doesn't make you 10x more productive, you don't know how to use AI, it is that simple.
- I spin up 10 agents; while 5 are working on apps, 5 do reviews and testing. I am at the end of that workflow and review the code WHILE the 10 agents keep working.
For me it is far more than 10x, but I'm being considerate to noobs by saying 10x instead of 20x or more.
Just goes to show that most programmers have no idea what most programmers are mostly programming. Great that it works for you, but don't assume that this applies to everyone else.
Everyone keeps saying 80/20 but that undersells what's going on. The last 20% isn't just hard. It's hard because of what happened during the first 80%.
When an agent takes a shortcut early on, the next step doesn't know it was a shortcut. It just builds on whatever it was handed. And then the step after that does the same thing. So by hour 80 you're sitting there trying to fix what looks like a UI bug and you realize the actual problem is three layers back. You're not doing the "hard 20%." You're paying interest on shortcuts you didn't even know were taken. (As I type this I'm having flashbacks to helping my kid build lego sets.)
The author figured this out by accident. He stopped prompting and opened Figma to design what he actually wanted. That's the move. He broke the chain before the next stage could build on it. The 100 hours is what it costs when you don't do that.
The 100 hours number feels about right for a solo project. What people underestimate is that the last 20% isn't just polish — it's the boring defensive stuff that makes an app not crash on someone else's phone.
I shipped a React Native app recently and probably 30% of the total dev time was wrapping every async call in try/catch with timeouts, handling permission denials gracefully, making sure corrupted AsyncStorage doesn't brick the app, and testing edge cases on old devices. None of that is the fun part. None of it shows up in a demo. But it's the difference between "works on my machine" and "works in production."
Vibecoding gets you to the demo. The gap is everything after that.
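That defensive wrapping isn't React Native specific. A hedged Java sketch of the same idea, with hypothetical names: every external call gets a timeout and a fallback instead of letting one slow dependency hang the app.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Runs a task with a hard deadline; any timeout, interruption, or task
// failure degrades to the caller-supplied fallback instead of crashing.
public class SafeCall {
    public static <T> T withTimeout(Callable<T> task, long millis, T fallback) {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            return ex.submit(task).get(millis, TimeUnit.MILLISECONDS);
        } catch (Exception e) {   // TimeoutException, ExecutionException, ...
            return fallback;
        } finally {
            ex.shutdownNow();     // cancel the straggler if it's still running
        }
    }
}
```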
> probably 30% of the total dev time was wrapping every async call in try/catch with timeouts, handling permission denials gracefully, making sure corrupted AsyncStorage doesn't brick the app
The gap is definitely real. But I think most of this thread is misdiagnosing why it exists. It's not that AI cannot produce production quality code, it's that the very mental model most people have of AI is leading them to use the wrong interaction model for closing that last 20% of complexity in production code bases.
The author accidentally proved it: the moment they stopped prompting and opened Figma to actually design what they wanted, Claude nailed the implementation. The bottleneck was NEVER the code generation, it was the thinking that had to happen BEFORE ever generating that code. It sounds like most of you offload the thinking to AFTER the complexity has arisen when the real pattern is frontloading the architectural thinking BEFORE a single line of code is generated.
Most of the 100-hour gap is architecture and design work that was always going to take time. AI is never going to eliminate that work if you want production grade software. But when harnessed correctly it can make you dramatically faster at the thinking itself, you just have to actually use it as a thinking partner and not just a code monkey.
I don't know how other people work, but writing the code for me has been essential in even understanding the problem space. The architecture and design work in a lot of cases is harder without going through that process.
I recently had to build a widget that lets the user pick from a list of canned reports and then preview them in an overlay before sending to the printer (or save to PDF). All I knew was that I wanted each individual report's logic and display to be in its own file, so if the system needed to grow to 100 reports, it wouldn't get any more complicated than with 6 reports.
The final solution ended up being something like:
1. Page includes new React report widget.
2. Widget imports generic overlay component and all canned reports, and lets user pick a report.
3. User picks report, widget sets that specific report component as a child of the overlay component, launches overlay.
4. Report component makes call to database with filters and business logic, passes generic set of inputs (report title, other specifics, report data) to a shared report display template.
My original plan was for the report display template to also be unique to each report file. But when the dust settled, they were so similar that it made sense to use a shared component. If a future report diverges significantly, we can just skip the shared component and create a one-off in the file.
I could have designed all this ahead of time, as I would need to do with an LLM. But it was 10x easier to just start coding it while keeping my ultimate scalability goals in mind.
That's a good point and honestly I occasionally do the same thing. Sometimes you have to build something wrong to understand what right looks like. I think the distinction is between exploratory prototyping (building to learn/think) and expecting the prototype to BE the product. The first is thinking, the second is where the 100-hour gap bites you in the ass.
This. It’s also much easier to tell someone what you don’t like if what you don’t like is right in front of you than to tell them what you want without a point of reference.
Additionally, the author seems to have built an app just for the sake of building an app / learning, not to solve any real, serious business problem. Another "big" claim about LLM capabilities based on a solo toy project.
YES YES YES!! I so wish that we could go back in time and never, ever have even suggested anything other than what you say here. AI doesn't do it for you. It does it with you.
You have to figure out what you want before the AI codes. The thinking BEFORE is the entire game.
Though I will also say that I use Claude for working out designs a lot. Literally hours sometimes with long periods of me thinking it through.
And I still get a ton more done and often use tech that I would never have approached before these glory days.
The hours of design thinking with Claude is exactly it. That's the part nobody talks about because it isn't 'sexy' and doesn't make for a good demo or tweet. But it's the secret sauce IMO.
The way I see it, the NFT part is actually just for convenience to distribute AI generated images.
It could have been a web app, but with NFTs and Farcaster miniapps, you market to people who are willing and able to spend using their wallet instead of asking “normies” for credit card information for a 2 dollar custom image (that you could also prompt out of a free Gemini session).
With Farcaster, you also already have the profile picture of the user, one less hurdle again.
I think there's simply a huge overlap between the Crypto Bros, the NFT Bros, and now the AI Bros. The same sorts of people are pumping each one. I knew a guy who was into LeadGen and Drop Shipping in the 2000s, then got into online poker, then of course, got into Crypto, then inevitably NFTs. I haven't kept up with him, but I'm almost 100% sure he's pumping some AI related scheme now. These guys get into this pipeline and at each stage they are convinced that they're going to get rich off it.
Crypto has very narrow usage unless you're a criminal or a bro, NFT has essentially 0 non-bro activity, surely AI attracts bros, but also some of the smartest people I've known have been working on it a long time to build truly useful things.
AI can be really attractive to bros but also be incredibly useful.
In other words, AI isn't a trend that's going to pass, it's permanently going to reshape the tech scene and economy in a way that cryptocoins and NFTs absolutely did not.
> AI isn't a trend that's going to pass, it's permanently going to reshape the tech scene and economy in a way that cryptocoins and NFTs absolutely did not.
This exact wording was used for crypto. "It isn't a trend that's going to pass" and "It's going to reshape everything." Why are we sure of it now for AI (and that we're going to be right), when they were also sure of it before for crypto (and they ended up wrong)?
The AI people have the exact same feelings of absolute certainty as the crypto people had.
I thought everyone realized by now that a digital image made available via block chain or any other mechanism, can be duplicated indefinitely. The only thing you get is a copyright on some generated image or set of bits. And what are the chances any random digital image is going to be appreciated as art? You can't hang it in a living room or sit it on a coffee table. It's beanie babies, but without even a hill of beans.
Are people just expecting there's going to be enough digital fools to make a market?
A movie can be duplicated indefinitely. There's no guarantee your song will be appreciated as art. I'm not sure why you say you can't print out an image and hang it in your living room; we do that all the time at home.
I've personally never dabbled in NFTs, but I don't think it's fair to ascribe the inherent conflict between information and scarcity uniquely to them.
The more I evaluate Claude Code, the more it feels like the world's most inconsistent golfer. It can get within a few paces of the hole in often a single strike, and then it'll spend hours, days, weeks trying to nail the putt.
There's some 80/20-ness to all programming, but with the current state-of-the-art coding models, the distribution is the most extreme it's ever been.
Related anecdote: My 12yo son didn't like the speed cubing online timer he was using because it kept crashing the browser and interrupted him with ads. Instead of googling a better alternative we sat down with claude code and put together the version of the website that behaved and looked exactly as he wanted. He got it working all by himself in under an hour with less than 10 prompts, I only helped a bit putting it online with github pages so he can use it from anywhere.
Turns out that knowing what a plain text file is will be the criterion that distinguishes users who are digitally free from those locked into proprietary platforms.
Many parents are extremely interested in quickly building digital tools for their kids (education and entertainment) that they know are free from advertising, social media integration, user monitoring etc.
That may be true. But you also have to give the average parent more credit by assuming they don't want tech companies spying on their children and forcing their toxic platforms on them.
There are well attended parent evenings in our school on that topic.
Thinking about it, we should turn these into vibe coding hackathons where we replace all the ad-ridden little games, learning tools, messengers we don't like with healthy alternatives.
Yes, because the current software paradigm (a shed/barn/warehouse full of tools to suit every possible user's every possible need) doesn't make sense when LLMs can turn plain English into a software tool in a matter of minutes.
>LLMs can turn plain English into a software tool in the matter of minutes.
Unless LLMs can read minds, no one will bother to write a specification, even in plain English, with the required level of detail. And that assumes the user even has the details in mind, which is also pretty improbable...
Yes. Claude added a suggested random scramble (if that's what you mean?), plus running averages of 5/12/100 and local storage of past times on the first iteration; my son told it to also add a button for +2s penalties and touch-screen support.
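For context, a speedcubing "average of 5" drops the fastest and slowest solve and averages the middle three, and a +2s penalty simply adds two seconds to the raw time. A minimal sketch of those two features (function names are mine, not from the son's app):

```javascript
// Speedcubing-style "average of 5": drop the best and worst solve,
// then take the mean of the remaining three times (in seconds).
function averageOf5(times) {
  if (times.length !== 5) throw new Error("need exactly 5 times");
  const sorted = [...times].sort((a, b) => a - b);
  const middle = sorted.slice(1, 4); // discard fastest and slowest
  return middle.reduce((sum, t) => sum + t, 0) / middle.length;
}

// A +2s penalty is applied to the raw time before averaging.
function withPenalty(time, plusTwo = false) {
  return plusTwo ? time + 2 : time;
}

console.log(averageOf5([10.2, 12.5, 11.0, 9.8, 14.1])); // mean of 10.2, 11.0, 12.5 ≈ 11.23
```

The same trimmed-mean logic extends to averages of 12 or 100 by discarding proportionally more outliers.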
I don't want that though. I want someone to spend much more time than I can afford thinking about and perfecting a product that I can pay for and not worry about.
And some people do; both things can be true. I'd rather make a tool just for me that breaks when I introduce a new requirement, and then I just add to it and keep going.
The statement wasn't: "no one ever vibe codes an alternative to product X"
It was: "With sufficiently advanced vibe coding the need for certain type of product just vanishes."
If a product has 100 thousand users and 1% of them vibe code an alternative for themselves, the product / business doesn't vanish. They still have 99 thousand users.
That was the rebuttal, even if not presented as persuasively and intelligently as I just did.
So no, it's not the case of "both things being true". It's a case of: he was wrong.
At some point there will be market consequences for that kind of behavior. So where market dynamics are not dominated by bullshit (politics, friendships forged on Little St James, state intervention, cartel behavior, etc.) if my company provides the same service as another, but I replaced all of the low quality software as a service products my competitor uses with low quality vibe coded products, my overhead cost will be lower and that will give me an advantage.
I built a Jira clone with attachments and all sorts of bells and whistles. Purrs like a kitten. SaaS is going extinct; at least the jobs that charged $1000 a day to write Jira plugins are.
Some minor UX-enhancement SaaS from the most recent VC-funded wave will go, sure. Maybe those who forgot how to invest in R&D and spent the last 20 years just fixing bugs. There's plenty of SaaS on the market that offers added value beyond the code: data brokers, domain experts, etc. Even if a homemade solution is sometimes possible, initial development cost is going to be just one of several important factors in choosing whether to build or to buy.
SaaS is not going extinct. This reminds me of the LinkedIn posts saying they cloned Slack in two hours, copying the UI, etc. Yeah, if you think Slack is just private chat rooms then you should use IRC for your company.
One of the most valuable things about Slack is the ecosystem: apps, API support, etc. If you need to receive notifications from external apps (like PagerDuty or Incident.io or something like that), good luck expecting them to have a setup for your own version of the app. Yeah, some of them provide webhooks (not all of them), but in the end you have to maintain that too...
Yes, it seems like it got to some tipping point around 2013 where so many product and management people were familiar with it, and from there it became this “industry standard” that management always wanted everyone to use.
Also though, I feel like being attached to Confluence helped it because there is a lot less competition in the world of documentation wikis than there is in task management.
Products where the only value was the code are definitely under pressure. But, how many products are really like that? I suggest everyone look up HALO that’s so popular in investing right now, and start looking at companies with the assumption that the value of the code is zero so what other value is there. There’s often a lot more there than people realize.
How many products are actually like that? If I could easily replace github, datadog/sentry/whatever, cloudflare, aws, tailscale that would be great. In my view building and owning is better than buying or renting. Especially when it comes to data--it would be much better for me to own my telemetry data for example than to ship it off to another company. But I don't think you (or anyone) will be vibecoding replacements for these services anytime soon. They solve big, hard, difficult problems.
Github is on the chopping block as a tool (it's sticky as a social network). The other stuff not so much.
The things that are going away are tools that provide convenience on top of a workflow that's commoditized. Anything where the commercial offering provides convenience rather than capabilities over the open source offerings is gonna get toasted.
Even at recent levels of uptime I think it would be very difficult to build a competing product that could function at the scale of even a small company (10 engineers). How would you implement Actions? Code review comments/history? Pull requests? Issues? Permalinks? All of these things have serious operational requirements. If you just want some place to store a git repository any filesystem you like will do it but when you start talking about replacing github that's a different story altogether and TBH I don't think building something that appears to function the same is even the hard part, it's the scaling challenges you run into very quickly.
The future is narrow, bespoke apps custom-tailored to exactly one single user's use case.
An example: if the user only ever works with .jpg files, then you don't need to support any of the dozens of other formats an image program would support.
I cannot stress enough how many software users out there are only using 1-10% of a program's capability, yet they have to pay for a team of devs who maintain 100% of it.
"The future" is fiction. It's a blank canvas where you can make a fingerpainting of any fantasy you like. Whenever people tell me about "the future" I know they're talking absolute rubbish. And I also like your fantasy! But it probably won't happen.
I call it "Psychics for Programmers." People will scoff at psychics and fortune telling and palm reading, but then the same people will listen to Elon or some founder or VC and be utterly convinced that that person is a visionary and can describe the future.
It's just reading the room. People hate having to use their computers through the lens of quasi-robot humans (saying that as one of those robots). They hate having to pay monthly just so dumb features and UI overhauls can be pushed on them.
They just want the software to do the few things they need it to do. AI labs are falling over themselves to remove the gatekeeping that stops regular people from using their computing devices the way they want to use them. And the progress there in the last few years is nothing short of absolutely astounding.
> the progress there in the last few years is nothing short of absolutely astounding
Yet, all the astounding progress notwithstanding, I don't have a suite of bespoke tools replacing the ones I depend on. I cannot say "hey claude, make me a suite of bespoke software infrastructure monitoring and operational tooling tailored to my specific needs" and expect anything more than a giant headache and wasted time. So maybe we just need to wait? Or maybe it's just not actually real. My view is unless you show me a working demo it's vaporware. Show me that the problem is solved, don't tell me that it might be solved later sometime.
And what exactly is preventing you from building bespoke software for "infrastructure monitoring and operational tooling tailored to your specific needs"?
I could certainly imagine building myself some sort of dashboard. It would seem like a prime use case.
You want to hear about a problem solved? Recently I extended a tool that snaps high resolution images to a Pixel art grid, adding a GUI. I added features to remove the background, to slice individual assets out of it automatically, and to tile them in 9-slice mode.
Could I have realistically implemented the same bespoke tool before AI? No.
> And what exactly is preventing you from building bespoke software for "infrastructure monitoring and operational tooling tailored to your specific needs"?
Let's say I emit roughly 1TB of telemetry data per day: logs, metrics, etc. That's roughly what you might expect from a medium-sized tech company or a specific department (say, security) at a large company. There is going to be a significant infrastructure investment to replicate Datadog's function in my organization, even if I only use a small subset of their product. It's not just "building a dashboard"; it's building all the infrastructure to collect, normalize, store, and retrieve the data to even be able to draw that dashboard.
The dashboard is the trivial part. The hard part is building, operating, and maintaining all the infrastructure. Claude doesn't do a very good job helping with this, and in some sense it actually hinders.
EDIT: I'm not saying you shouldn't take ownership of your telemetry data. I think that's a strategically (and potentially from a user's perspective) better end result. But it is a mistake to trivialize the effort of that undertaking. Claude is not going to vibeslop it for you.
This is a pipe dream and “sufficiently advanced” is doing a lot of heavy lifting. You really think people would rather spin up and debug their own self-made software rather than pay for something that has been tested, debugged, and proven by thousands of users? Why would anyone do that for anything more than a very simple script? It makes zero sense unless the LLM outputs literally perfect one-shot software reliably.
Perplexity just launched a tool that builds and hosts small bespoke tools.
I tried it and it works well. I could do the same thing on my Linux machine, but now even my 12-year-old can get Perplexity to build him a tool to compare RAM prices at different Chinese vendors.
Photoshop is a good example. Not that I agree with everything in the app, but just designing all the interactions properly in Photoshop would take hundreds of hours (not to mention testing and figuring out the edges). If your goal is a 1-to-1 clone, why not use Krita or Photoshop? With an LLM you'll get "mostly there" with many, many hours of work, and lots of sharp edges. If all you need is paint bucket, basic brush / pencil, and save/load, ok, maybe you can one-shot it in a few hours... or just use Paint / Aseprite...
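For a sense of scale: even the "simple" paint bucket hides a real algorithm. A minimal sketch of one (an iterative flood fill over a 2D grid of color values; the grid representation is my simplification, real image editors work on pixel buffers with tolerance thresholds):

```javascript
// Minimal paint-bucket tool: iterative flood fill. Starting from (row, col),
// repaint every 4-connected cell that shares the starting cell's color.
function floodFill(grid, row, col, newColor) {
  const target = grid[row][col];
  if (target === newColor) return grid; // nothing to do, avoids infinite loop
  const stack = [[row, col]];
  while (stack.length) {
    const [r, c] = stack.pop();
    if (r < 0 || r >= grid.length || c < 0 || c >= grid[0].length) continue;
    if (grid[r][c] !== target) continue; // wrong color: region boundary
    grid[r][c] = newColor;
    stack.push([r + 1, c], [r - 1, c], [r, c + 1], [r, c - 1]);
  }
  return grid;
}
```

Even this toy version has edge cases (same-color fill, bounds checks) that a one-shot generation can quietly get wrong, which is the "sharp edges" point above.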
When we start selling the software, and asking people to pay for/depend upon our product, the rules change -substantially.
Whenever we take a class or see a demo, they always use carefully curated examples to make whatever they are teaching seem absurdly simple. That's what you are seeing when folks demonstrate how "easy" some new tech is.
A couple of days ago, I visited a friend's office. He runs an Internet Tech company, that builds sites, does SEO, does hosting, provides miscellaneous tech services, etc.
He was going absolutely nuts with OpenClaw. He was demonstrating basically rewiring his entire company, with it. He was really excited.
On my way out, I quietly dropped by the desk of his #2; a competent, sober young lady that I respect a lot, and whispered "Make sure you back things up."
I'm having somewhat good experiences with AI but I think that's because I'm only half-adopting it: instead of the full agentic / Ralphing / the-AI-can-do-anything way, I still do work in very small increments and review each commit. I'm not as fast as others, but I can catch issues earlier. I also can see when code is becoming a mess and stop to fix things. I mean, I don't fix them manually, I point Claude at the messy code and ask it to refactor it appropriately, but I do keep an eye to make sure Claude doesn't stray off course.
Honestly, seeing all the dumb code that it produces, calling this thing "intelligent" is rather generous...
I’ve had a similar experience. I’ve been vibecoding a personal kanban app for myself. Claude practically one-shotted 90% of the core functionality (create boards, lanes, cards, etc.) in a single session. But after that I’ve now spent close to 30 hours planning and iterating on the remaining features and UI/UX tweaks to make the app actually work for me, and still, it doesn’t feel "ready" yet. That’s not to say it hasn’t sped up the process considerably; it would’ve taken me hours to achieve what Claude did in the first 10 minutes.
I've got a few projects I've generated, along with a wholly handwritten project started in Dec.
The difference I've noticed is that the act of actually typing out code made me backtrack a few times refining the possible solutions before even starting the integration tests, sometimes before even doing a compile.
When generating, the LLM never backtracked, even in the face of broken tests. It would proceed to continue band-aiding until everything passed. It would add special exceptions to general code instead of determining that the general rule should be refined or changed.
The reason that some devs are reporting 10x productivity is that a bunch of duct-taped, band-aided, instant-legacy code is acceptable to them. Others who don't see that level of productivity increase are spending time fixing the code into something they can read.
Not sure yet if accepting the spaghetti is the right course. If future LLMs can understand this spaghetti, then there's no point in good code. If we still need human coders, then the productivity increase is very small.
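The "special exceptions instead of refining the general rule" failure mode can be made concrete with a toy example (mine, not from the projects above): an English pluralizer whose tests fail on a few words.

```javascript
// The original general rule: naive English pluralization.
function pluralize(word) {
  return word + "s";
}

// What band-aiding looks like: one hard-coded exception per failing test,
// leaving the underlying rule untouched.
function pluralizeBandAided(word) {
  if (word === "box") return "boxes";
  if (word === "church") return "churches";
  if (word === "bus") return "buses";
  return word + "s";
}

// What refining the general rule looks like: the failing tests were all
// pointing at the same rule (sibilant endings take "-es").
function pluralizeRefined(word) {
  return /(s|x|z|ch|sh)$/.test(word) ? word + "es" : word + "s";
}

console.log(pluralizeBandAided("fox"));  // "foxs" -- the band-aid never generalized
console.log(pluralizeRefined("fox"));    // "foxes"
```

The band-aided version passes exactly the tests it was patched for and nothing more, which is why each new failing test produces another patch instead of a fix.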
I think there's a lot to pick apart here but I think the core premise is full of truth. This gap is real contrary to what you might see influencers saying and I think it comes from a lot of places but the biggest one is writing code is very different than architecting a product.
I've always said, the easiest part of building software is "making something work." The hardest part is building software that can sustain many iterations of development. This requires abstracting things out appropriately, which LLMs are only moderately decent at and most vibe coders are horrible at. Great software engineers can architect a system and then prompt an LLM to build out various components of the system and create a sustainable codebase. This takes time and attention in a world of vibe coders who are less and less inclined to give their vibe coded products the attention they deserve.
An advantage I have enjoyed is that I am insanely careful about my fundamental architecture and I have a project scaffold that works correctly.
It has examples of all the parts of a web app written, over many years, to be my own ideal structure. When the LLM era arrived, I added a ton of comments explaining what, why and how.
It turns out to serve as a sort of seed crystal for decent code. Though if I don't remind it to mimic that architecture, it sometimes doesn't, which is very weird.
Still, that's a tip I suggest: give it examples of good code, commented to explain why they're good.
My non-technical client vibe coded a SaaS prototype with lots of features, a much bigger product than OP's, and it sort of works. They spent about 200 hours on it. I wonder how much time it would have taken to clean it up and prove it was secure. I declined to work on it, as I wasn't sure whether that was even possible or whether it would be better to rewrite the entire thing from scratch with better prompts. Given the cost, and the fact that they had a product that sort of worked, I let them go find someone else to clean it up. My reasoning: if the client took 200 hours to develop this without ever stopping to check the code, it would take me 2-3x that to rewrite it with AI the right way, while the cleanup might be so painful that a rewrite from scratch would be better value for money.
I'd also say for a lot of applications -- most applications perhaps -- outside of "consumer" ones, the number of features is quite a bit more important than the shape of a button or the animations during a page transition.
Even pretty massive companies like Databricks don't think about those things and basically have a UI template library that they compose all their interfaces from. Nothing fancy. It's all about features, and LLMs create copious amounts of features.
The interesting part about vibe coding is the spectrum of experiences and attitudes. I have been playing with it for 2-3hrs a day for the last 4 months now. None of my friends who are using it are using it in the same way. Some people vibe and then refactor, some spec-everything and micro-prompt the solutions. Nobody is feeling like this thing can go unsupervised.
And then there is one guy, a friend of mine, who is planning to release a "submit a bug report, we will fix it immediately" feature (collect an error report from a user, possibly interview them, assess whether it's a bug with a "product owner LLM", then autonomously fix it, and if it passes the tests, merge and push to prod, all under one hour). That's for a mid-cap company, for their client-facing product. F*** hell! I have a full bag of bug reports ready for when this hits prod :->
I started working on one of my apps around a year ago. There was no ai CLI back then. My first prototype was done in Gemini chat. It took a week copy and pasting text between windows. But I was obsessed.
The result worked but that's just a hacked together prototype. I showed it to a few people back then and they said I should turn it into a real app.
To turn it into a full multi-user scalable product... I'm still at it a year later. Turns out it's really hard!
I look at the comments about weekend apps, and I have some of those too, but creating a real, actually valuable, bug-free MVP takes work no matter what you do.
Sure, I can build apps way faster now. I spent months learning how to use AI. I did a refactor back in May that was a disaster. The models back then were markedly worse, and it rewrote my app, effectively destroying it. I sat at my desk for 12 hours a day for two weeks trying to unpick that mess.
Since December things have definitely gotten better. I can run an agent for up to 8 hours unattended, testing every little thing, and quite often it produces working code.
But there is still a long way to go to produce quality.
Most of the reason it's taking this long is that the agent can't solve the design and infra problems on its own. I end up going down one path, realising there is another way and backtracking. If I accepted everything the ai wanted, then finishing would be impossible.
> Late in the night most problems were fixed and I wrote a script that found everyone whose payment got stuck. I sent them money back (+ extra $1 as a ‘thank you for your patience’ note), and let them know via DMs.
(emphasis added)
Not sure if whether it was actually written by hand or by AI was glossed over, but as soon as giving away money was on the table, the author seems to have ditched AI.
> Now I'm pretty sure that people who say they "vibecoded an app in 30 minutes" are either building simple copies of existing projects, produce some buggy crap, or just farm engagement.
Some people seem to be better at it than others. I see a huge gulf in what people can do. Oddly, there is a correlation between being a good engineer pre-AI and being able to vibe code well.
But I see one odd thing. A subset of those who would have been considered good or even amazing pre-AI struggle. The best I can tell at this stage is that they lacked experience getting good results out of unskilled workers in the past and just relied on their own skills to carry the project.
AI coders can do some amazing things. But at this stage you have to be careful about how you guide them down a path, in the same way you did with junior engineers. I am not making a comparison to AI being junior; they can code far better than most senior engineers and have access to knowledge at lightning speed.
I’m sure someone else has probably coined the term before me (or it’s just me being dumb, often the case) but I’ve started calling this phase of SWE ‘Ricky Bobby Development’.
So many people are just shouting ‘I wanna go fast’ and completely forgetting the lessons learned over the past few decades. Something is going to crash and burn, eventually.
I say this as a daily LLM user, albeit a user with a very skeptical view of anything the LLM puts in front of me.
Author admittedly didn’t know how to scale his app for thousands or hundreds of thousands of users. He jokes about it working great on localhost or “my machine”.
Not knocking the premise of the post. It probably works well for one single user if it’s an iPhone or Android app. But his 100 power hours are probably just right for what he ended up launching as he iterated through the requirements and learned how to set this up through reinforced learning and user feedback.
Used Codex for the whole project. At first I used Claude as the architect of the backend, since that's where I usually work and have experience. The code runner and API endpoints were easy to create for the first prototype. But then it got to the UI, and here's where sh1t got real. The first UI was in React even though I had specifically told it to use Vue. The code editor and output window were a mess in terms of height, there was too much space between the editor and the output window, and no matter how much time I spent prompting and explaining, it just never got it right. I got tired and opened Figma, used it to refine the design to what I wanted, shared the code it generated to GitHub, cloned it locally, then told Codex to copy the design, and finally it got it right.
Then came the hosting, where I wanted the code-runner endpoint in a Docker container for security purposes, since someone could execute malicious code that took over the server if I hosted it without protection, and here it kept selecting out-of-date Docker images. I had to manually guide it again on what I needed. Finally deployed and got it working, complete with a domain name. Shared it with a few friends and they suggested some UI fixes, which took some time.
For the runner security hardening I used DeepSeek and Claude to generate a list of code I could run to expose potential issues, and despite Codex insisting all was fine, I was able to uncover a number of them. Then it got weird: Codex started arguing with me despite the issues being demonstrably present. So I compiled all the issues into one document and shared the Dockerfile, the Linux seccomp config file, and the issues document with Claude. It gave me a list of Dockerfile fixes for security hardening, which I shared back with Codex, and that's when it fixed them.
Currently most of the issues are resolved, but the whole process took me a whole week and I am still not done; I was working most evenings. So I agree that you cannot create a usable product used by lots of users in 30 minutes, not unless it's some static website. It's too much constant testing and iteration.
I have had things like your React-instead-of-Vue problem. I solved it by always having Claude write a full implementation spec/plan in markdown, which I give to a fresh-context Claude to implement. Typically, I leave comments and make it revise until I am happy.
What I really want to know is: as a software developer of 25+ years, when I use these AI tools, is it still called "vibecoding"? Or is "vibecoding" reserved for people with little or no software development background who are building apps? Genuine question.
Steve Yegge has been a dev for several decades with lead spots at Amazon and Google, has completely converted to using AI, wrote a book about using it effectively for large production-ready projects, and still calls it vibe coding.
I don't think I'll ever adopt this term; I'm not a fan of it at all. I find myself saying "I was working with AI" and just leave it at that. It is a collaboration after all.
I came across the following yesterday: "The Great Way is not difficult for those who have no preferences," a famous Zen teaching from the Hsin Hsin Ming by Sengstan
As we move from tailors to big box stores I think we have to get used to getting what we get, rather than feeling we can nitpick every single detail.
I'd also be more interested in how his 3rd, 4th or 5th vibe coded app goes.
The 80/20 rule doesn’t go away. I am an AI true believer and I appreciate how fast we can get from nothing to 80% but the last “20%” still takes 80%+ of the time.
I have not been coding for a few years now. I was wondering if vibe coding could unstick some of my ideas. Here is my question: can I use TDD to write tests to specify what I want, and then get the LLM to write code to pass those tests?
That's a great approach, though I'd also recommend setting up a strong basis for linting, type checking, compilation, etc depending on the language. An LLM given a full test suite and guard rails of basic code style rules will likely do a pretty good job.
I would find it a bit tricky to write a full test suite for a product without any code though. You'd need to understand the architecture a bit and likely end up assuming, or mocking, what helpers, classes, config, etc will be built.
You absolutely can. This is one of the recommended directions with agentic coding. But you can go further and ask the LLM to write the tests too, then review/approve them.
Yes, I mostly do spec-driven development. At the design stage, I always add tests. I repeat this pattern for any new feature or bug fix: get the agent to write a test (unit, integration, or Playwright-based), reproduce the issue, then implement the change and retest, using all the other tests too.
To expand on the "Yes": the AI tools work extremely well when they can test for success. Once you have the tests as you'd like them, you may want to tell the LLM not to modify the tests because you can run into situations where it'll "fix" the tests rather than fixing the code.
Yes. Depending on the tech stack, your experience might be better or worse.
HTML/CSS/React/Go worked great, but it struggled with Swift (which I had no experience in).
Can you expand on this? You definitely don't need 6 months for a note-taking app to be usable; it's more that you need to compete with the state of the art, right?
It depends entirely on what you want. You can literally code a JavaScript 1-liner that will make a <textarea> then put the content back in the URL and it will work serverless on pretty much any platform with a Web browser.
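That idea, sketched out (not quite a literal one-liner, and the helper names are mine): the note lives entirely in the URL fragment, so bookmarking or sharing the URL is the save mechanism, with no server involved.

```javascript
// Encode/decode a note to and from a URL fragment.
const toHash = (text) => "#" + encodeURIComponent(text);
const fromHash = (hash) => decodeURIComponent(hash.slice(1));

// Browser-only wiring: create the <textarea> and sync it with location.hash.
// (The guard lets the pure helpers above run outside a browser too.)
if (typeof document !== "undefined") {
  const ta = document.body.appendChild(document.createElement("textarea"));
  ta.value = fromHash(location.hash);               // load the note from the URL
  ta.addEventListener("input", () => {
    location.hash = toHash(ta.value);               // save the note into the URL
  });
}
```

URL length limits make this a toy, which is exactly the point of the comment: "note-taking app" can mean anything from this to a federated, scriptable product.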
You can also write a note-taking app that is federated yet private, that has its own scripting language, etc. I mean, you can yak-shave your way to writing your own OS, or even to designing your own CPU for it.
So... I'm not sure that metric, time, means much without proper context, including who does it. Doing that is quite different, regardless of the tooling used, depending on whether you are a professional developer, designer, fullstack dev, prototypist, PM, marketer, writer, etc.
Obsidian is super popular and is generally local first and device specific.
And even so, if you're starting a note-taking app, most of those problems, like file corruption and image support, are largely solved. There is also the benefit of being able to reference tons of open-source implementations.
I think one month gets you to a Notion-like app that is prod-ready, if you just need auth + markdown + images + standard text editing.
I don't disagree, but I found it ironic I built ZenPlan, my ideal hybrid task/notetaking app, in about 50 hours with Claude Code this month after being frustrated with notebook and task management sprawl in OneNote. www.getzenplan.com
Ah, note-taking as a hobby finally explains to me why these apps seem so popular. I don't think I have ever considered that I need one, and I'd have thought it's something that should have been fully solved multiple times over by now. But it really being a hobby does kind of make the point for me.
Whatever you prototype, the one who built it in 6 months will have economies of scale to make it cheaper than your DIY solution, and because they serve many customers and developed it for 6 months, their product will be 100x better than the one you DIY.
There is a very, very rare use case where DIY makes sense. In 99% of cases it's just a toy that feels nice because you kind of made it yourself, but if you factor in the time etc., it always costs 100x more than the $5/month you could usually pay.
I found that what's effective is to use multiple AI tools at once. I'm using Gemini's newest model (whose name I can't recall off the top of my head) and Claude's newest model. I use each for its purpose, with the RustRover IDE to speed things up. RustRover is particularly helpful because of how Rust is worked with: the constant cargo CLI commands and database interactions right in the IDE. I know VS Code has this to a certain extent, but IMO I prefer RustRover.
Using multiple models works because I know what each one is good at and how my own knowledge combines with their output. It makes my life way easier and drives frustration down, which matters when you need creativity at the forefront.
That being said, it definitely helps to know what you are doing: if not 100%, then at least 60% of the things you are asking the models to do for you. I have caught mistakes, and I know when a model might make a mistake, which I'm fine with. Sometimes I just want to see how something is done, like the structure for a certain function or crate, as I'm constantly reading the crates.io docs to learn what I'm doing.
There are plenty of ways to code and use code; whichever works for you is good, just improve on it and make it more effective. I have multiple screens on my computer; I don't like jumping back and forth opening tabs and browsers, so I have my setup arranged the way that works best for me. As for the AI models, they are not going to be that helpful if you don't understand why they're doing what they're doing in a particular function, crate (in the case of Rust), or library. I imagine a coder with years of experience, knowledge of multiple languages, and deep library knowledge could, using the same technique, technically replace a whole department by himself.
It seems like the entire "product" here is just a ChatGPT system prompt: "combine this image of a person with this image of a dinosaur".
The only thing he needed to code was an NFT wrapper, which presumably is just forking an existing NFT wholesale.
The interesting, user-facing part of the project isn't code at all! It's just an HTML front end on someone else's image generator and a "pay me" button.
Woodworking is an analogy that I like to use in deciding how to apply coding agents. The finished product needs to be built by me, but now I can make more, and more sophisticated, jigs with the coding agents, and that in turn lets me improve both quality and quantity.
It already starts with BS. Yes, there are apps you can build in 30 minutes, and they are great, not buggy or crap as he says. And there are apps that need an hour, or even weeks. It depends on what you want to build.
To start off by saying that every app built in 30 minutes is crap simply shows that he did not want to think about it, is ignorant, or simply wanted to push himself up by putting others down.
At this point, every programmer who claims that vibecoding doesn't make you at least 10 times more productive is simply lying or worst, doesn't know how to vibe code.
> With AI, it’s easier to get the first 90 percent out there. This means we can spend more time on the remaining 10 percent, which means more time for craftsmanship and figuring out how to make your users happy.
EXCEPT... you've just vibe coded the first 90 percent of the product, so completing the remaining 10 percent will take WAY longer than normal because the developers have to work with a spaghetti mess.
And right there this guy has shown exactly how little people who are not software developers with experience understand about building software.
I keep seeing things that were vibe coded and thinking, "That's really impressive for something that you only spent that much time on".
To have a polished software project, you must spend time somewhat menially iterating and refining (as each type of user).
To have a polished software project, you also need to have started with tests and test coverage from the beginning, for the UI too. Writing tests later is not as good.
I have taken a number of projects from a sloppy vibe-coded prototype to 100% test coverage. Modern coding LLM agents are good at writing just enough tests for 100% coverage.
But 100% test coverage doesn't mean that it's quality software, that it's fuzzed, or that it's formally verified.
Quality software requires extensive manual testing, iteration, and revision.
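A toy illustration (hypothetical code, not from the project in question) of why 100% line coverage is not the same as quality: the check in `main` exercises every line of `midpoint`, yet a fuzzer would immediately find inputs where the result is wrong.

```java
public class CoverageDemo {
    // Every line of midpoint is exercised by the check in main: 100% line coverage.
    static int midpoint(int lo, int hi) {
        return (lo + hi) / 2; // overflows when lo + hi exceeds Integer.MAX_VALUE
    }

    public static void main(String[] args) {
        // This "test" achieves 100% coverage and passes...
        if (midpoint(2, 6) != 4) throw new AssertionError("midpoint broken");
        // ...but a fuzzer would quickly find inputs where the result is wrong:
        System.out.println(midpoint(Integer.MAX_VALUE, Integer.MAX_VALUE)); // -1, not ~Integer.MAX_VALUE
        // The overflow-safe form is lo + (hi - lo) / 2.
    }
}
```

Coverage tells you the code ran, not that the answer was checked against anything meaningful.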
I haven't even reviewed this specific project; it's possible the author developed a quality (CLI?) UI without e2e tests in that much time.
Was the process for this more like "vibe coding" or "pair programming with an LLM"?
> "That's really impressive for something that you only spent that much time on"
Again, I haven't even read this particular project;
There's:
- Prompt insufficiency: was the specification used to prompt the model sufficient, relative to what is regarded as a complete software specification?
- Model and/or agent insufficiency
- Software development methodology and/or project management insufficiency
- QA insufficiency
- Peer review insufficiency
Is it already time to rewrite the product using the current project as a more sufficient specification?
But then how many hours of UI and business logic review would be necessary again?
A 40-hour work week comes to 2,080 hours per person per year.
The "10,000 hours to be really good at anything" number was the expert threshold used to categorize the test subjects who performed compassion meditation during neuroimaging studies. 10,000 hours to become an expert is about 5 years of full-time work.
But how many hours to have a good software product?
Usually I check for tests and test coverage first. You could have spent 1,000 hours on a software project and if it doesn't have automated tests, we can't evolve the software and be sure that we haven't caused regressions.
I can't say I'm impressed by this at all. 100+ hours to build a shitty NFT app that takes one picture and a predefined prompt, then mints you a dinosaur NFT. This is the kind of thing I would've seen college students with no experience and a few cans of Red Bull slam out over a weekend for a coding jam, with more quality and effort. Have our standards really gotten this low? I don't see any craftsmanship at play here.
Also the process sounds like a nightmare: "it broke and I asked 4 different LLMs to fix it; my `AGENTS.md` file contained hundreds of special cases; etc." I thought this article was intended to be a horror story, not an advertisement
> The "remaining 10 percent" is a difference between slop and something people enjoy.
I would say the remaining 10% is about how robust your solution is - anything associated with "vibe" feels inherently insecure. If you can objectively prove it is not, that's 10% of your time well spent.
Of course vibe coding is going to be a headache if you have very particular aesthetic constraints around both the code and UX, and you aren't capable of clearly and explicitly explaining those constraints (which is often hard to do for aesthetics).
There are some good points here to improve harnesses around development and deployment though, like a deployment agent should ask if there is an existing S3 bucket instead of assuming it has to set everything up. Deployment these days is unnecessarily complicated in general, IMO.
If you hear someone spouting off about how vibe coding allows for creation of killer apps in a fraction of the time/cost, just ask them if you can see what successful killer apps they’ve created with it. It’s always crickets at that point because it’s somewhere between wishful thinking and an outright lie.
I'm a 20-year veteran of application development consulting. Contributor level... not a talking head. I do more estimating than anyone you likely know. Consulting is cooked. I just AI-native built (not vibe coded...) an application with a buddy, another Principal-level engineer, and what would cost a client $500-750k and 8-12 weeks, we did for $200 and one sprint. It's a passion project, but a highly complex mapping and navigation app with host/client multi-user synced state. Cooked.
I realize this sounds one-sided. I've also founded companies and worked across the range from startup to FAANG. Everything has changed... for the better, if you ask me.
I mean the worst part about this is the author also vibe coded their security. It could have been much more catastrophic if they built a crypto wallet or trading system. But because it was NFTs I guess the max damage was limited.
I have to say it's a little sad that so many devs think of security and cryptography the same way as library frameworks: they see it as just some black-box API to use in their projects, rather than respecting that it's a fully developed, complex field that demands expertise to avoid mistakes.
I work as a DevOps/SRE and have been doing it FinTech (bank, hedge funds, startups) and Crypto (L1 chain) for almost 20 years.
My thoughts on vibe coding vs production code:
- vibe coding can 100% get you to a PoC/MVP probably 10x faster than pre LLMs
- This is partly b/c it is good at things I'm not good at (e.g. front end design)
- But then I need to go in and double check performance, correctness, information flow, security etc
- The LLM makes this easier but the improvement drops to about 2-3x b/c there is a lot of back and forth + me reading the code to confirm etc (yes, another LLM could do some of this but then that needs to get set up correctly etc)
- The back and forth part can be faster if e.g. you have scripts/programs that deterministically check outputs
- Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)
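The "deterministic checks" bullet can be as simple as a golden-file comparison the agent can run after every change and key off the exit code. A minimal sketch (the file names `golden.txt` and `out.txt` are hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class GoldenCheck {
    // Compares actual output against a checked-in golden file, line by line,
    // and reports every mismatch. The result is deterministic, so an agent
    // loop can rerun it after each change instead of eyeballing the output.
    static int compare(List<String> expected, List<String> actual) {
        int mismatches = 0;
        int n = Math.max(expected.size(), actual.size());
        for (int i = 0; i < n; i++) {
            String e = i < expected.size() ? expected.get(i) : "<missing>";
            String a = i < actual.size() ? actual.get(i) : "<missing>";
            if (!e.equals(a)) {
                System.out.printf("line %d: expected %s, got %s%n", i + 1, e, a);
                mismatches++;
            }
        }
        return mismatches;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical paths: golden.txt is committed, out.txt is the run's output.
        List<String> expected = Files.readAllLines(Path.of("golden.txt"));
        List<String> actual = Files.readAllLines(Path.of("out.txt"));
        System.exit(compare(expected, actual) == 0 ? 0 : 1);
    }
}
```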
So overall, this is why I think we're getting wildly different reports on how effective vibe coding is. If you've never built a data pipeline and a LLM can spin one up in a few minutes, you think it's magic. But if you've spent years debugging complicated trading or compliance data pipelines you realize that the LLM is saving you some time but not 10x time.
I'm building a Java HFT engine and the amount of things AI gets wrong is eye-opening. If I didn't benchmark everything I'd end up with a much less optimized solution.
Examples: AI really wants to use Project Panama (FFM), and while that can be significantly faster than traditional OO approaches, it is almost never the best. And I'm not talking about using deprecated Unsafe calls; I'm talking about primitive arrays being better for Vector/SIMD operations on large sets of data, and NIO being better than FFM + mmap for file reading.
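For flavor, the kind of primitive-array hot loop meant here (an illustrative stub, not code from the engine): HotSpot's C2 compiler typically auto-vectorizes this into SIMD instructions with no FFM involvement at all.

```java
public class DotProduct {
    // A plain primitive-array loop: contiguous double[] data, no objects,
    // no pointer chasing. HotSpot will usually auto-vectorize this into
    // AVX/NEON instructions on its own.
    static double dot(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0, 3.0};
        double[] b = {4.0, 5.0, 6.0};
        System.out.println(dot(a, b)); // 32.0
    }
}
```

Whether this beats an FFM-based version is exactly the kind of claim that needs a benchmark, which is the point of the comment above.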
You can use AI to build something that is sometimes better than what someone without domain specific knowledge would develop but the gap between that and the industry expected solution is much more than 100 hours.
AI is extremely good at the things that it has many examples for. If what you are doing is novel then it is much less of a help, and it is far more likely to start hallucinating because 'I don't know' is not in the vocabulary of any AI.
> because 'I don't know' is not in the vocabulary of any AI.
That is clearly false. I’m only familiar with Opus, but it quite regularly tells me that, and/or decides it needs to do research before answering.
If I instruct it to answer regardless, it generally turns out that it indeed didn’t know.
I haven't had that at all, not even a single time. What I have had is endless round trips with me saying 'no, that can't work' and the bot then turning around and explaining to me why it is obvious that it can't work... that's quite annoying.
Try something like:
> Please carefully review (whatever it is) and list out the parts that have the most risk and uncertainty. Also, for each major claim or assumption can you list a few questions that come to mind? Rank those questions and ambiguities as: minor, moderate, or critical.
> Afterwards, review the (plan / design / document / implementation) again thoroughly under this new light and present your analysis as well as your confidence about each aspect.
There's a million variations on patterns like this. It can work surprisingly well.
You can also inject 1-2 key insights to guide the process. E.g. "I don't think X is completely correct because of A and B. We need to look into that and also see how it affects the rest of (whatever you are working on)."
Ok! I will try that, thank you very much.
Of course! I get pretty lazy so my follow-up is usually something like:
"Ok let's look at these issues 1 at a time. Can you walk me through each one and help me think through how to address it"
And then it will usually give a few options for what to do for each one as well as a recommendation. The recommendation is often fairly decent, in which case I can just say "sounds good". Or maybe provide a small bit of color like: "sounds good but make sure to consider X".
Often we will have a side discussion about that particular issue until I'm satisfied. This happen more when I'm doing design / architectural / planning sessions with the AI. It can be as short or as long as it needs. And then we move on to the next one.
My main goal with these strategies is to help the AI get the relevant knowledge and expertise from my brain with as little effort as possible on my part. :D
A few other tactics:
- You can address multiple at once: "Item 3, 4, and 7 sound good, but lets work through the others together."
- Defer a discussion or issue until later: "Let's come back to item 2 or possibly save that for a later session".
- Save the review notes / analysis / design sketch to a markdown doc to use in a future session. Or just as a reference to remember why something was done a certain way when I'm coming back to it. Can be useful to give to the AI for future related work as well.
- Send the content to a sub-agent for a detailed review and then discuss with the main agent.
I would say that if AI has to make decisions about picking between framework or constructs irrelevant to the domain at hand, it feels to me like you are not using the AI correctly.
I think the main issue is treating the LLM as an unrestrained black box; there's a reason nobody outside tech trusts LLMs so blindly.
The only way to make LLMs useful for now is to restrain their hallucinations as much as possible with evals, and these evals need to be very clear about the goals you're optimizing for.
See Karpathy's work on the autoresearch agent and how it carries out experiments; it might be useful for what you're doing.
> there's a reason nobody outside tech trust so blindly on LLMs.
Man, I wish this was true. I know a bunch of non-tech people who just trust random shit that ChatGPT made up.
I had an architect tell me "ask chatgpt" when I asked her the difference between two industrial standard measures :)
We had politicians share LLM crap, researchers doing papers with hallucinated citations..
It's not just tech people.
We were working on translations for Arabic and in the spec it said to use "Arabic numerals" for numbers. Our PM said that "according to ChatGPT that means we need to use Arabic script numbers, not Arabic numerals".
It took a lot of back-and-forths with her to convince her that the numbers she uses every day are "Arabic numerals". Even the author of the spec could barely convince her -- it took a meeting with the Arabic translators (several different ones) to finally do it. Think about that for a minute. People won't believe subject matter experts over an LLM.
We're cooked.
The architect should have required Hindu numbers. Same result, but even more confusion.
Man this is maddening.
And the worst part is, these people don't even use the flagship thinking models, they use the default fast ones.
In my experience, people outside of tech have nearly limitless faith in AI, to the point that when it clashes with traditional sources of truth, people start to question them rather than the LLM.
> AI really wants to use Project Panama
It would help if you briefly specified the AI you are using here. There are wildly different results between using, say, an 8B open-weights LLM and Claude Opus 4.6.
I've been using several. LM Studio and any of the open-weight models that can fit in my GPU's RAM (24GB) are not great in this area. The Claude models are slightly better but not worth the extra cost most of the time, since I typically have to spend almost the same amount of time reworking and re-prompting, plus it's very easy to exhaust credits/tokens. I mostly bounce back and forth between the Codex and Gemini models right now, and this includes using pro models with high reasoning.
Wouldn't Java always lose in latency to similarly optimized native code in, say, C or C++?
You can achieve optimized C/C++ speeds, you just can't program the same way you always have. Step 1, switch your data layout from Array of Structures to Structure of Arrays. Step 2, after initial startup switch to (near) zero object creation. It's a very different way to program Java.
You have to optimize your memory usage patterns to fit in CPU cache as much as possible, which is something typical Java developers don't consider. I have a background in assembly and C.
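A sketch of the AoS-to-SoA switch described above (illustrative types, not from the engine; a real system would also preallocate and reuse everything after warm-up to hit the zero-allocation goal):

```java
public class SoaDemo {
    // Array of Structures: each Quote is a separate heap object, so scanning
    // prices chases pointers and drags timestamps into cache along the way.
    static final class Quote {
        double price;
        long timestampNanos;
    }

    // Structure of Arrays: parallel primitive arrays, allocated once.
    // A price scan becomes a contiguous, cache- and SIMD-friendly walk.
    static final class QuoteBook {
        final double[] price;
        final long[] timestampNanos;

        QuoteBook(int capacity) {
            price = new double[capacity];
            timestampNanos = new long[capacity];
        }

        double maxPrice(int count) {
            double max = Double.NEGATIVE_INFINITY;
            for (int i = 0; i < count; i++) {
                if (price[i] > max) max = price[i];
            }
            return max;
        }
    }

    public static void main(String[] args) {
        QuoteBook book = new QuoteBook(1024); // allocated once at startup
        book.price[0] = 101.5;
        book.price[1] = 99.25;
        book.price[2] = 103.0;
        System.out.println(book.maxPrice(3)); // 103.0
    }
}
```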
I'd say it's slightly harder since there is a little bit of abstraction, but most of the time the JIT will produce code as good as C compilers. It's also a niche that often considers any application running on a general-purpose CPU to be slow. If you want industry-leading speed you start building custom FPGAs.
Not necessarily. Java can be insanely performant, far more than I ever gave it credit for in the first decade of its existence. There has been a ton of optimization and you can now saturate your links even if you do fairly heavy processing. I'm still not a fan of the language but performance issues seem to be 'mostly solved'.
"Saturating your links" is rarely the goal in HFT.
You want low deterministic latency with sharp tails.
If all you care about is throughput then deep pipelines + lots of threads will get you there at the cost of latency.
There are actually cases when Java (the HotSpot JVM) runs faster than the same logic written in C/C++ because the JVM is doing dynamic analysis and selective JIT compilation to machine code.
As long as you tune the JVM right it can be faster. But it's a big "if" with the tuning, and you need to write performant code.
Java has significant overhead: most/every object is allocated on the heap, carries a monitor for synchronization, and pays extra memory and performance costs to be GC-controlled. It's very hard, if not impossible, to tune this part away.
You program differently for this niche in any language. The hot path (number crunching) thread doesn't share objects with gateway (IO) threads. Passing data between them is off heap, you avoid object creation after warm up. There is no synchronization, even volatile is something you avoid.
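A minimal sketch of that off-heap handoff (illustrative only; a real system would layer explicit memory ordering, e.g. VarHandle acquire/release, and a lock-free ring-buffer protocol on top of this):

```java
import java.nio.ByteBuffer;

public class OffHeapSlot {
    // One preallocated off-heap slot: an 8-byte price plus an 8-byte quantity.
    // Written by the IO thread, read by the hot-path thread. No objects are
    // created per message, so the GC never sees this traffic.
    private final ByteBuffer slot = ByteBuffer.allocateDirect(16);

    void write(double price, long qty) {
        slot.putDouble(0, price); // absolute puts: no shared position state
        slot.putLong(8, qty);
    }

    double price() { return slot.getDouble(0); }
    long qty()     { return slot.getLong(8); }

    public static void main(String[] args) {
        OffHeapSlot s = new OffHeapSlot();
        s.write(101.25, 500);
        System.out.println(s.price() + " x " + s.qty()); // 101.25 x 500
    }
}
```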
> Passing data between them is off heap
How exactly are you passing data? You can pass some primitives without allocating them on the heap. You can use some tiny subset of Java plus the standard library to write high-performance code, but why would you do this instead of using Rust or C++?
Depends. Many reasons, but one is that Java has a much richer set of 3rd party libraries to do things versus rolling your own. And often (not always) third party libraries that have been extensively optimized, real world proven, etc.
Then things like the jit, by default, doing run time profiling and adaptation.
Java has a huge ecosystem in enterprise dev, but it's very unlikely it has an ecosystem edge in high-performance/real-time compute.
I personally know of an HFT firm that used Java approximately a decade ago. My guess would be they're still using it today given Java performance has only improved since then.
That doesn't mean Java is an optimal or close-to-optimal choice. The amount of extra effort they put in to achieve their goals could be significant.
Optimal in what sense? In the java shops I've worked at it's usually viewed as a pretty optimal situation to have everything in one language. This makes code reuse, packaging, deployment, etc much simpler.
In terms of speed, memory usage, runtime characteristics... sure there are better options. But if java is good enough, or can be made good enough by writing the code correctly, why add another toolchain?
I am curious about what causes some to choose Java for HFT. From what I remember the amount of virgin sacrifices and dances with the wolves one must do to approach native speed in this particular area is just way too much of development time overhead.
Probably the same thing that makes most developers choose a language for a project: it's the language they know best.
It wasn't a matter of choosing Java for HFT; it was a matter of selecting a project that was a good fit for Java and my personal knowledge. I was a Java instructor for Sun for over a decade, and I authored a chunk of their Java curriculum. I wrote many of the concurrency questions in the certification exams. It's in my wheelhouse :)
My C and assembly is rusty at this point so I believe I can hit my performance goals with Java sooner than if I developed in more bare metal languages.
"HFT" means different things to different people.
I've worked at places where ~5us was considered the fast path and tails were acceptable.
In my current role it's less than a microsecond packet in, packet out (excluding time to cross the bus to the NIC).
But arguably it's not true HFT today unless you're using FPGA or ASIC somewhere in your stack.
Software HFT? I see people call Python code HFT sometimes so I understand what you mean. It's more in-line with low latency trading than today's true HFT.
I don't work for a firm so don't get to play with FPGAs. I'm also not co-located in an exchange and using microwave towers for networking. I might never even have access to kernel networking bypass hardware (still hopeful about this one). Hardware optimization in my case will likely top out at CPU isolation for the hot path thread and a hosting provider in close proximity to the exchanges.
The real goal is a combination of eliminating as much slippage as possible, making some lower-timeframe strategies possible, and having best-in-class backtesting performance for parameter grid searches and strategy discovery. I expect to sit between industry-leading firms and typical retail systematic traders.
The one person here who understands HFT, yeah. "True" HFT is FPGA now, and those trades are basically dead because nobody has such naive order execution anymore, either by getting better themselves or by using former HFTs' (Virtu) new order-execution services.
So yeah, there's really no HFT anymore; it's just order execution, and some algo trades want more or less latency, which merits varying levels of technical effort to squeeze latency out of systems.
Then you list all of the things you want it not to do and construct a prompt to audit the codebase for the presence of those things. LLMs are much better at reviewing code than writing it so getting what you want requires focusing more on feedback than creation instructions.
I've seen SQL injection and leaked API tokens to all visitors of a website :)
There’s a big gap between reality and the influencer posts about LLMs. I agree with you that LLMs do provide some significant acceleration, but the influencers have tried to exaggerate this into unbelievable numbers.
Even non-influencers are trying to exaggerate their LLM skills as a way to get hired or raise their status on LinkedIn. I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company). Many of these posts come from people who were all in on crypto companies a few years ago.
The world really is changing but there’s a wave of influencers and trend followers trying to stake out their claims as leaders on this new frontier. They should be ignored if you want any realistic information.
I also think these exaggerated posts are causing a lot of people to miss out on the real progress that is happening. They see these obviously false exaggerations and think the opposite must be true, that LLMs don’t provide any benefit at all. This is creating a counter-wave of LLM deniers who think it’s just a fad that will be going away shortly. They’re diminishing in numbers but every LLM thread on HN attracts a few people who want to believe it’s all just temporary and we’re going back to the old ways in a couple years.
> I rarely read the LinkedIn social feed but when I check mine it’s now filled with claims from people about going from idea to shipped product in N days (with a note at the bottom that they’re looking for a new job or available to consult with your company).
This always seems to be the pattern. "I vibe coded my product and shipped it in 96 hours!" OK, what's the product? Why haven't I heard of it? Why can't it replace the current software I'm using? So, you're looking for work? Why is nobody buying it?
Where is the Quicken replacement that was vibecoded and shipping today? Where are the vibecoded AAA games that are going to kill Fortnite? Where is the vibecoded Photoshop alternative? Heck, where is the vibecoded replacement for exim3 that I can deploy on my self hosted E-mail server? Where are all of the actual shipping vibecoded products that millions of users are using?
I agree with your general point but ... "Where are the vibecoded AAA games". A game dev team is typically less than 15% programmers. Most of the team are artists, followed by game designers. Maybe someday those will be replaced too but at the moment, while you can get some interesting pictures from stable-diffusion techniques it's unlikely to make a cohesive game and even prompting to create all of it would still take many person years.
That said, I have had some good experiences getting a few features from zero to working via LLMs and it's helped me find lots of bugs far easier than my own looking.
I can imagine a vibe coded todo app. I can also kind of imagine a vibe coded gIMP/Photoshop though it would still take several person years, prompting through each and every feature.
I found one example of this going very wrong on reddit the other day -
https://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr...
One redditor security reviews a vibe coded project
Wow, great example, and great example of what these fakers do when called out. Summary:
The maintainer, instead of listening to the security researcher and accepting feedback about his development process, instead:
1. Denied the problem
2. Censored discussion of the problem
3. Banned the people calling out the problem
...and then when the security issues were posted more publicly and got traction...
4. Made the subreddit private
5. Wiped and deleted his account
6. Wiped and deleted the GitHub repo
7. Took the project's web site off the web
Absolutely wild and unhinged behavior.
> Where are all of the actual shipping vibecoded products that millions of users are using?
Claude Code and OpenClaw - they are vibecoded. And I believe more coming.
Claude Code is not vibecoded, it is made using Claude Code but it is not vibecoded using Claude Code.
But it's like crypto then, good for buying other crypto, or illegal stuff.
Also people are using CC for the cheap access to the model, otherwise they'd be using opencode.
I regret only having one upvote for this.
I note that games are mostly art assets and things like level design, and players are already happy to instantly consign such products to the slop bin.
The whole thing is "market for lemons": app stores filling with dozens of indistinguishable clones of each product category will simply scare users off all of them.
Yeah, I really wonder if someone would trust to do their taxes in a vibe-coded version of Turbotax...
Do you really need Turbotax? Just feed it the tax code, your financial data, and the relevant forms and it should be good to go. Now we have freed up the labor of accountants so they can go be productive in another segment of society. /s
"I come from a state that raises corn and cotton and cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I am from Missouri. You have got to show me."
>Many of these posts come from people who were all in on crypto companies a few years ago.
This matches my observation exactly. There seems to be a certain "type" of people like this. And it's not just people looking for work.
My guess is either they have super low critical thinking, a very cynical view of the world where lies and exaggeration are the only way to make it, or something more pathological (narcissism etc).
Day 7 of using Claude Code here are my takes...
“Day 7" would be amazing - all that I see YouTube recommending is "I tried it for 24 hours"
I was listening to an "expert" on a podcast earlier today up until the point where the interviewer asked how long his amazing new vibe-coded tooling has been in production, and the self-proclaimed expert replied "actually we have an all-hands meeting later today so I can brief the team and we will then start using the output..."
The "store it on the chain" thing turned out to be a fad in terms of technology, even though it made a lot of money (in the billions and more) for some people via the crypto thing. That was less than 10 years ago, so many of us do remember the similarities between the discourse back then and what's happening now.
With all that said, today's LLMs do seem to provide a bit more value compared to the blockchain thing; for example, OCR/.pdf parsing is, I'd say, a solved thing right now thanks to LLMs, which is nice.
This is exactly my experience at Lovable. For some parts of the organization, LLMs are incredibly powerful and a productivity multiplier. For the team I am on, Infra, they are often a distraction and a negative multiplier.
I can't say how many times the LLM-proposed solution to a jittery behavior is adding retries. At this point we have to be even more careful with controlling the implementation of things in the hot path.
I have to say though, giving Amp/Claude Code the Grafana MCP + read-only kubectl has saved me days worth of debugging. So there's definitely trade-offs!
My colleague recently shipped a "bug fix" that addresses a race condition by adding a 200ms delay somewhere, almost completely coded by LLM. LLM even suggests that "if this is not good enough, increase it to 300ms".
That says something about how much some people care about this.
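For contrast, a minimal sketch of the sleep-based "fix" versus waiting on an explicit signal (`CountDownLatch` here stands in for whatever completion event the real code was racing against):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class RaceFix {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch ready = new CountDownLatch(1);
        StringBuilder result = new StringBuilder();

        Thread worker = new Thread(() -> {
            result.append("done"); // the work the main thread depends on
            ready.countDown();     // signal completion explicitly
        });
        worker.start();

        // The LLM's version: Thread.sleep(200) and hope the worker finished.
        // The actual fix: block until the event has happened, with a real timeout.
        if (!ready.await(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("worker never signalled");
        }
        System.out.println(result); // prints "done", deterministically
        worker.join();
    }
}
```

The sleep version passes on the developer's machine and flakes under load; the latch version is correct regardless of scheduling.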
Doubly so, because that's how most people have solved similar problems, which is exactly why the LLM suggests it.
I concur on the DevSecOps aspect for a more specific reason: if you're failing a pipeline because ThirdPartyTool69 doesn't like your code style or whatever, you can have the LLM fix it. Or get you to 100% test coverage, etc. Or have it update your Cypress/Jest/SonarQube configs until the pipeline passes, without losing brain cells doing it by hand. Or find you a set of dependency versions that passes.
The magic is testing. Having locally available testing and high throughput testing with high amount of test cases now unlocks more speed.
The test cases themselves become the focus - the LLM usually can't get them right.
How does that test suite get built and validated? A comprehensive and high quality test suite is usually much larger than the codebase it tests. For example, the sqlite test suite is 590x [1] the size of the library itself
1. https://sqlite.org/testing.html
sqlite is an extreme outlier not a typical example, with regard to test suite size and coverage.
> The magic is testing.
No it is not.
There is no amount of testing that can fix a flawed design.
The word "testing" is a very loaded term. Few non-professionals, or even professionals, fully understand what is meant by it.
Consider the following: Unit, Integration, System, UAT, Smoke, Sanity, Regression, API Testing, Performance, Load, Stress, Soak, Scalability, Reliability, Recovery, Volume Testing, White Box Testing, Mutation Testing, SAST, Code Coverage, Control Flow, Penetration Testing, Vulnerability Scanning, DAST, Compliance (GDPR/HIPAA), Usability, Accessibility (a11y), Localization (L10n), Internationalization (i18n), A/B Testing, Chaos Engineering, Fault Injection, Disaster Recovery, Negative Testing, Fuzzing, Monkey Testing, Ad-hoc, Guerilla Testing, Error Guessing, Snapshot Testing, Pixel-Perfect Testing, Compatibility Testing, Canary Testing, Installation Testing, Alpha/Beta Testing...
...and I'm certain I've missed dozens of other test approaches.
There is no science to testing, no provable best way, despite many people's vehement opinions
You forgot hope-driven development and release processes, other optimism-based ("I'm sure it's fine") methods, and faith-based approaches to testing (ship and pray, ...). Customer-driven involuntary beta testing also comes to mind, and "let's see what happens" day-zero testing before deployment. We also do user-driven error discovery, frequently.
> - This is partly b/c it is good at things I'm not good at (e.g. front end design)
Everyone thinks LLMs are good at the things they are bad at. In many cases they are still just giving “plausible” code that you don’t have the experience to accurately judge.
I have a lot of frontend app dev experience. Even modern tools (Claude w/Opus 4.6 and a decent Claude.md) will slip in unmaintainable slop in frontend changes. I catch cases multiple times a day in code review.
Not contradicting your broader point. Indeed, I think if you’ve spent years working on any topic, you quickly realize Claude needs human guidance for production quality code in that domain.
Yes I’ve seen this at work where people are promoting the usage of LLMs for.. stuff other people do.
There’s also a big disconnect in terms of SDLC/workflow in some places. If we take at face value that writing code is now 10x faster, what about the other parts of the SDLC? Is your testing/PR process ready for 10x the velocity or is it going to fall apart?
What % of your SDLC was actually writing code? Maybe time to market is now ~18% faster because coding was previously 20% of the duration.
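The arithmetic behind that number is just Amdahl's law applied to the SDLC; a quick sketch (the 20% and 10x figures are the comment's illustrative assumptions):

```java
public class SpeedupMath {
    // Amdahl-style: only the coding fraction of the SDLC gets the speedup.
    static double overallSpeedup(double codingFraction, double codingSpeedup) {
        double newTime = (1.0 - codingFraction) + codingFraction / codingSpeedup;
        return 1.0 / newTime;
    }

    public static void main(String[] args) {
        // Coding was 20% of the SDLC and is now 10x faster:
        // new total = 0.80 + 0.20/10 = 0.82, so duration drops ~18%.
        System.out.printf("%.3f%n", overallSpeedup(0.20, 10.0)); // ~1.220x
    }
}
```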
It’s the Gell-Mann amnesia effect applied to LLM instead of media
What I do now is I make an MVP with the AI, get it working. And then tear it all down and start over again, but go a little slower. Maybe tear down again and then go even more slowly. Until I get to the point where I'm looking at everything the AI does and every line of code goes through me.
>Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)
Absolutely. Tight feedback loops are essential to coding agents and you can’t run pipelines locally.
Also, now you're reading someone else's code and not everybody likes that. In fact, most self-proclaimed 10x coders I know hate it.
So instead of the 10x coder doing it, the 1x coder does it, but then that factor of 3x becomes 0.3x.
Absolutely. In my experience there are more “good coders” than people who are good at code review/PR/iterative feedback with another dev.
A lot of people are OCD pedants about stuff that can be solved with a linter (but can’t be bothered to implement one) or just “LGTM” everything. Neither provide value or feedback to help develop other devs.
Isn’t that the reason why people advocate for spec-driven development instead of vibe coding?
More generally: LLM effectiveness is inversely proportional to domain specificity. They are very good at producing the average, but completely stumble at the tails. Highly particular brownfield optimization falls into the tails.
At this point, every programmer who claims that vibecoding doesn't make you at least 10 times more productive is simply lying or, worse, doesn't know how to vibe code. - So, you want to tell me that you don't review the code you write? Or that others don't review it? - You bring up ONE example with a bottleneck that has nothing to do with programming. Again, if you claim it doesn't make you 10x more productive, you don't know how to use AI, it is that simple. - I spin up 10 agents; while 5 are working on apps, 5 do reviews and testing, and I am at the end of that workflow and review the code WHILE the 10 agents keep working.
For me it is far more than 10x, but I'm being considerate to the noobs by saying 10x instead of 20x or more.
Can you link to one launched product with users for us?
Just goes to show that most programmers have no idea what most programmers are mostly programming. Great that it works for you, but don't assume that this applies to everyone else.
I can't tell if this is real or a joke.
What exactly are you producing? LinkedIn posts?
Everyone keeps saying 80/20 but that undersells what's going on. The last 20% isn't just hard. It's hard because of what happened during the first 80%.
When an agent takes a shortcut early on, the next step doesn't know it was a shortcut. It just builds on whatever it was handed. And then the step after that does the same thing. So by hour 80 you're sitting there trying to fix what looks like a UI bug and you realize the actual problem is three layers back. You're not doing the "hard 20%." You're paying interest on shortcuts you didn't even know were taken. (As I type this I'm having flashbacks to helping my kid build lego sets.)
The author figured this out by accident. He stopped prompting and opened Figma to design what he actually wanted. That's the move. He broke the chain before the next stage could build on it. The 100 hours is what it costs when you don't do that.
The 100 hours number feels about right for a solo project. What people underestimate is that the last 20% isn't just polish — it's the boring defensive stuff that makes an app not crash on someone else's phone.
I shipped a React Native app recently and probably 30% of the total dev time was wrapping every async call in try/catch with timeouts, handling permission denials gracefully, making sure corrupted AsyncStorage doesn't brick the app, and testing edge cases on old devices. None of that is the fun part. None of it shows up in a demo. But it's the difference between "works on my machine" and "works in production."
Vibecoding gets you to the demo. The gap is everything after that.
> probably 30% of the total dev time was wrapping every async call in try/catch with timeouts, handling permission denials gracefully, making sure corrupted AsyncStorage doesn't brick the app
This is the exact kind of task that LLMs excel at
c'm'on, drop that
This comment is written by an LLM, right?
Edit: It's interesting how I am getting downvoted here when pangram confirms my suspicions that this is 100% AI generated.
The gap is definitely real. But I think most of this thread is misdiagnosing why it exists. It's not that AI cannot produce production quality code, it's that the very mental model most people have of AI is leading them to use the wrong interaction model for closing that last 20% of complexity in production code bases.
The author accidentally proved it: the moment they stopped prompting and opened Figma to actually design what they wanted, Claude nailed the implementation. The bottleneck was NEVER the code generation, it was the thinking that had to happen BEFORE ever generating that code. It sounds like most of you offload the thinking to AFTER the complexity has arisen when the real pattern is frontloading the architectural thinking BEFORE a single line of code is generated.
Most of the 100-hour gap is architecture and design work that was always going to take time. AI is never going to eliminate that work if you want production grade software. But when harnessed correctly it can make you dramatically faster at the thinking itself, you just have to actually use it as a thinking partner and not just a code monkey.
I don't know how other people work, but writing the code for me has been essential in even understanding the problem space. The architecture and design work in a lot of cases is harder without going through that process.
I recently had to build a widget that lets the user pick from a list of canned reports and then preview them in an overlay before sending to the printer (or save to PDF). All I knew was that I wanted each individual report's logic and display to be in its own file, so if the system needed to grow to 100 reports, it wouldn't get any more complicated than with 6 reports.
The final solution ended up being something like: 1. Page includes new React report widget. 2. Widget imports generic overlay component and all canned reports, and lets user pick a report. 3. User picks report, widget sets that specific report component as a child of the overlay component, launches overlay. 4. Report component makes call to database with filters and business logic, passes generic set of inputs (report title, other specifics, report data) to a shared report display template.
My original plan was for the report display template to also be unique to each report file. But when the dust settled, they were so similar that it made sense to use a shared component. If a future report diverges significantly, we can just skip the shared component and create a one-off in the file.
I could have designed all this ahead of time, as I would need to do with an LLM. But it was 10x easier to just start coding it while keeping my ultimate scalability goals in mind.
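The "one file per report, no extra complexity at 100 reports" goal described above is essentially a registry pattern; a stripped-down sketch of that shape (types and names are mine, the real widget is React):

```typescript
// Each report module exports one descriptor. The picker widget and the
// overlay only ever see this shape, so report #100 adds a file, not
// complexity to the shared code.
type ReportRow = Record<string, unknown>;

interface ReportDef {
  id: string;
  title: string;
  // Encapsulates the report's own database call and business logic.
  fetchData: (filters: Record<string, string>) => Promise<ReportRow[]>;
}

const registry = new Map<string, ReportDef>();

function registerReport(def: ReportDef): void {
  registry.set(def.id, def);
}

function listReports(): ReportDef[] {
  return [...registry.values()];
}
```

The widget lists `listReports()`, the overlay mounts the chosen report, and the shared display template consumes whatever `fetchData` returns.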
See "Programming as Theory Building": https://pages.cs.wisc.edu/~remzi/Naur.pdf
That's a good point and honestly I occasionally do the same thing. Sometimes you have to build something wrong to understand what right looks like. I think the distinction is between exploratory prototyping (building to learn/think) and expecting the prototype to BE the product. The first is thinking, the second is where the 100-hour gap bites you in the ass.
- version 1 -- we build what we think is needed
- version 2 -- we realise we're solving a completely different problem to what is needed
- version 3 -- we build what is actually needed
This. It’s also much easier to tell someone what you don’t like if what you don’t like is right in front of you than to tell them what you want without a point of reference.
Absolutely. You need to treat it like a real program from the very beginning.
This.
Additionally, the author seems to build an app just for the sake of building an app / learning, not to solve any real serious business problem. Another "big" claim on LLM capabilities based on a solo toy project.
Yeah, communicating what you want can be hard.
I'm building a simple single-line text editor and designing some frame options, which have start/end markers.
This was really hard to get the LLM to do right... until I just took pen and paper, drew what I wanted, took a photo, and gave it to the LLM.
YES YES YES!! I so wish that we could go back in time and never, ever have even suggested anything other than what you say here. AI doesn't do it for you. It does it with you.
You have to figure out what you want before the AI codes. The thinking BEFORE is the entire game.
Though I will also say that I use Claude for working out designs a lot. Literally hours sometimes with long periods of me thinking it through.
And I still get a ton more done and often use tech that I would never have approached before these glory days.
The hours of design thinking with Claude is exactly it. That's the part nobody talks about because it isn't 'sexy' and doesn't make for a good demo or tweet. But it's the secret sauce IMO.
They're... launching an NFT product in 2026...
I know it's not the point of this article, but really?
And the viewpoint is from the development of such "product" with "manufactured virality".
It's bunk.
Yep. As much as the rest of it resonated with LLM coding experiences I'm having, the NFT thing is unfortunate.
I'd pay a few bucks for some cool avatar or w/e this is. It seems like a good use of NFTs.
The way I see it, the NFT part is actually just for convenience to distribute AI generated images.
It could have been a web app, but with NFTs and Farcaster miniapps, you market to people who are willing and able to spend using their wallet instead of asking “normies” for credit card information for a 2 dollar custom image (that you could also prompt out of a free Gemini session).
With Farcaster, you also already have the profile picture of the user, one less hurdle again.
I think there's simply a huge overlap between the Crypto Bros, the NFT Bros, and now the AI Bros. The same sorts of people are pumping each one. I knew a guy who was into LeadGen and Drop Shipping in the 2000s, then got into online poker, then of course, got into Crypto, then inevitably NFTs. I haven't kept up with him, but I'm almost 100% sure he's pumping some AI related scheme now. These guys get into this pipeline and at each stage they are convinced that they're going to get rich off it.
Crypto has very narrow usage unless you're a criminal or a bro; NFTs have essentially zero non-bro activity. AI surely attracts bros too, but some of the smartest people I've known have been working on it a long time to build truly useful things.
AI can be really attractive to bros but also be incredibly useful.
In other words, AI isn't a trend that's going to pass, it's permanently going to reshape the tech scene and economy in a way that cryptocoins and NFTs absolutely did not.
> AI isn't a trend that's going to pass, it's permanently going to reshape the tech scene and economy in a way that cryptocoins and NFTs absolutely did not.
This exact wording was used for crypto. "It isn't a trend that's going to pass" and "It's going to reshape everything." Why are we sure of it now for AI (and that we're going to be right), when they were also sure of it before for crypto (and they ended up wrong)?
The AI people have the exact same feelings of absolute certainty as the crypto people had.
People's grandmothers know what AI is and many have used it, even outside the west.
Probably zero grandmothers outside the west, and very few grandmothers within the west, know what NFT even stands for.
I have friends (well, friends of friends) who still play the NFT lottery. People love gambling lol
I thought everyone realized by now that a digital image made available via block chain or any other mechanism, can be duplicated indefinitely. The only thing you get is a copyright on some generated image or set of bits. And what are the chances any random digital image is going to be appreciated as art? You can't hang it in a living room or sit it on a coffee table. It's beanie babies, but without even a hill of beans.
Are people just expecting there's going to be enough digital fools to make a market?
Isn't the same true of any intellectual property?
A movie can be duplicated indefinitely. There's no guarantee your song will be appreciated as art. I'm not sure why you say you can't print out an image and hang it in your living room; we do that all the time at home.
I've personally never dabbled in NFTs, but I don't think it's fair to ascribe the inherent conflict between information and scarcity uniquely to them.
You don't have to believe in it. You just have to believe someone else will believe in it and be willing to pay a higher price.
The more I evaluate Claude Code, the more it feels like the world's most inconsistent golfer. It can get within a few paces of the hole in often a single strike, and then it'll spend hours, days, weeks trying to nail the putt.
There's some 80/20-ness to all programming, but with the current state-of-the-art coding models, the distribution is the most extreme it's ever been.
With sufficiently advanced vibe coding, the need for certain types of product just vanishes.
I needed it, I quickly built it myself, for myself, and for myself only.
Related anecdote: My 12yo son didn't like the speed cubing online timer he was using because it kept crashing the browser and interrupted him with ads. Instead of googling a better alternative we sat down with claude code and put together the version of the website that behaved and looked exactly as he wanted. He got it working all by himself in under an hour with less than 10 prompts, I only helped a bit putting it online with github pages so he can use it from anywhere.
I don't think people are grasping yet that this is the future of software, if by no metric other than "most software used is created by the user".
The average user doesn't even know what a file is
Turns out that knowing what a plain text file is will be the criterion that distinguishes users who are digitally free from those locked into proprietary platforms.
Won't happen.
The average user just has no interest in building things.
Many parents are extremely interested in quickly building digital tools for their kids (education and entertainment) that they know are free from advertising, social media integration, user monitoring etc.
I'm saying this with all my love and respect: you are living in a very small bubble
That may be true. But you also have to give the average parent more credit by assuming they don't want tech companies spying on their children and forcing their toxic platforms on them.
There are well attended parent evenings in our school on that topic.
Thinking about it, we should turn these into vibe coding hackathons where we replace all the ad-ridden little games, learning tools, messengers we don't like with healthy alternatives.
Which is why they will use AI to do the building...
So... The future is like the past?
That would be good news, but I doubt most people will do things like that.
>most software used is created by the user
You really believe that?
Yes, because the current software paradigm (a shed/barn/warehouse full of tools to suit every possible user's every possible need) doesn't make sense when LLMs can turn plain English into a software tool in a matter of minutes.
>LLMs can turn plain English into a software tool in a matter of minutes.
Unless LLMs can read minds, no one will bother to specify, even in plain English, with the required level of detail. And that is assuming the user has the details in mind, which is also pretty improbable...
That wasn't being claimed, just proposed as the direction we're headed.
Another user had already written what I had in mind when I responded to your comment..
https://news.ycombinator.com/item?id=47387570
> I don't think people are grasping yet that this is the future of software
What about this is new?
Sitting down with a child to teach them the very basics of javascript in an hour? Trivial.
Needing Claude to do it is kind of embarrassing, if anything.
Out of curiosity, did you also implement scramble support? Or just the timing stuff?
Yes. Claude added a suggested random scramble (if that's what you mean?), plus running averages of 5/12/100 and local storage of past times in the first iteration. My son told it to also add a button for +2s penalties, and touch-screen support.
... So at no point in this did anyone even question why it should be a website?
Because now that website is fully cross-platform and sandboxed with no practical downside
"use it from anywhere" was important, and I don't think there's an easier way than a freely hosted static website.
I don't want that though. I want someone to spend much more time than I can afford thinking about and perfecting a product that I can pay for and not worry about.
The metaphor that’s popped into my head recently is baking bread.
You can learn to bake good bread. It’s not _that_ hard. And it’ll probably taste better than store bought bread.
But it almost certainly won't be cheaper. And it'll take a lot more time and effort.
Still, sometimes you might bake your own bread for kicks. But most of the time, you’ll just buy the bread someone else has already perfected.
Baking bread also takes hours of waiting.
I can have fresh bread anytime I want from a handful of nearby stores.
And some people do; both things can be true. I'd rather make a tool just for me that breaks when I introduce a new requirement, and then I just add to it and keep going.
The statement wasn't: "no one ever vibe codes an alternative to product X"
It was: "With sufficiently advanced vibe coding the need for certain type of product just vanishes."
If a product has 100 thousand users and 1% of them vibe code an alternative for themselves, the product / business doesn't vanish. They still have 99 thousand users.
That was the rebuttal, even if not presented as persuasively and intelligently as I just did.
So no, it's not the case of "both things being true". It's a case of: he was wrong.
At some point there will be market consequences for that kind of behavior. So where market dynamics are not dominated by bullshit (politics, friendships forged on Little St James, state intervention, cartel behavior, etc.) if my company provides the same service as another, but I replaced all of the low quality software as a service products my competitor uses with low quality vibe coded products, my overhead cost will be lower and that will give me an advantage.
If we could return to one-off payments without dark patterns I would agree. Hopefully at least the software that rely on grift will start to vanish.
I built a Jira clone with attachments and all sorts of bells and whistles. Purrs like a kitten. SaaS is going extinct. At least the jobs that charged $1000 a day to write Jira plugins are.
Some minor UX-enhancement SaaS from the most recent VC-funded wave will go extinct, perhaps. Maybe those who forgot how to invest in R&D and spent the last 20 years just fixing bugs. There's plenty of SaaS on the market that offers added value beyond the code: data brokers, domain experts, etc. Even if a homemade solution is sometimes possible, initial development cost is going to be just one of several important factors in choosing whether to build or to buy.
SaaS is not going extinct. This reminds me of the LinkedIn posts saying they cloned Slack in two hours, copying the UI, etc. Yeah, if you think Slack is private chat rooms then you should use IRC for your company.
One of the most valuable things about Slack is the ecosystem: apps, API support, etc. If you need to receive notifications from external apps (like PagerDuty or Incident.io or something like that), good luck expecting them to have a setup for your own version of the app. Yeah, some of them provide webhooks (not all of them), but in the end you have to maintain that too...
jira is a perfect example of an abysmal product that was marketed well.
Yes, it seems like it got to some tipping point around 2013 where so many product and management people were familiar with it, and from there it became this “industry standard” that management always wanted everyone to use.
Also though, I feel like being attached to Confluence helped it because there is a lot less competition in the world of documentation wikis than there is in task management.
Products where the only value was the code are definitely under pressure. But, how many products are really like that? I suggest everyone look up HALO that’s so popular in investing right now, and start looking at companies with the assumption that the value of the code is zero so what other value is there. There’s often a lot more there than people realize.
How many products are actually like that? If I could easily replace github, datadog/sentry/whatever, cloudflare, aws, tailscale that would be great. In my view building and owning is better than buying or renting. Especially when it comes to data--it would be much better for me to own my telemetry data for example than to ship it off to another company. But I don't think you (or anyone) will be vibecoding replacements for these services anytime soon. They solve big, hard, difficult problems.
Github is on the chopping block as a tool (it's sticky as a social network). The other stuff not so much.
The things that are going away are tools that provide convenience on top of a workflow that's commoditized. Anything where the commercial offering provides convenience rather than capabilities over the open source offerings is gonna get toasted.
Even at recent levels of uptime I think it would be very difficult to build a competing product that could function at the scale of even a small company (10 engineers). How would you implement Actions? Code review comments/history? Pull requests? Issues? Permalinks? All of these things have serious operational requirements. If you just want some place to store a git repository any filesystem you like will do it but when you start talking about replacing github that's a different story altogether and TBH I don't think building something that appears to function the same is even the hard part, it's the scaling challenges you run into very quickly.
The future is narrow bespoke apps custom-tailored for exactly that one single user's use case.
An example would be if the user only ever works with .jpg files, then you don't need to support any of the dozens of other formats an image program would support.
I cannot stress enough how many software users out there are only using 1-10% of a program's capability, yet they have to pay for a team of devs who maintain 100% of it.
"The future" is fiction. It's a blank canvas where you can make a fingerpainting of any fantasy you like. Whenever people tell me about "the future" I know they're talking absolute rubbish. And I also like your fantasy! But it probably won't happen.
I call it "Psychics for Programmers." People will scoff at psychics and fortune telling and palm reading, but then the same people will listen to Elon or some founder or VC and be utterly convinced that that person is a visionary and can describe the future.
It's just reading the room. People hate having to use their computers through the lens of quasi-robot humans (saying that as one of those robots). They hate having to pay monthly just so dumb features and UI overhauls can be pushed on them.
They just want the software to do the few things they need it to do. AI labs are falling over themselves to remove the gates keeping regular people from using their computing devices the way they want to. And the progress there in the last few years is nothing short of absolutely astounding.
> the progress there in the last few years is nothing short of absolutely astounding
Yet, all the astounding progress notwithstanding, I don't have a suite of bespoke tools replacing the ones I depend on. I cannot say "hey claude, make me a suite of bespoke software infrastructure monitoring and operational tooling tailored to my specific needs" and expect anything more than a giant headache and wasted time. So maybe we just need to wait? Or maybe it's just not actually real. My view is unless you show me a working demo it's vaporware. Show me that the problem is solved, don't tell me that it might be solved later sometime.
And what exactly is preventing you from building bespoke software for "infrastructure monitoring and operational tooling tailored to your specific needs"?
I could certainly imagine building myself some sort of dashboard. It would seem like a prime use case.
You want to hear about a problem solved? Recently I extended a tool that snaps high resolution images to a Pixel art grid, adding a GUI. I added features to remove the background, to slice individual assets out of it automatically, and to tile them in 9-slice mode.
Could I have realistically implemented the same bespoke tool before AI? No.
> And what exactly is preventing you from building bespoke software for "infrastructure monitoring and operational tooling tailored to your specific needs"?
Let's say I emit roughly 1TB of telemetry data per day--logs, metrics, etc. That's roughly what you might expect from a medium-sized tech company or a specific department (say, security) at a large company. There is going to be a significant infrastructure investment to replicate Datadog's function in my organization, even if I only use a small subset of their product. It's not just "building a dashboard"; it's building all the infrastructure to collect, normalize, store, and retrieve the data to even be able to draw that dashboard.
The dashboard is the trivial part. The hard part is building, operating, and maintaining all the infrastructure. Claude doesn't do a very good job helping with this, and in some sense it actually hinders.
EDIT: I'm not saying you shouldn't take ownership of your telemetry data. I think that's a strategically (and potentially from a user's perspective) better end result. But it is a mistake to trivialize the effort of that undertaking. Claude is not going to vibeslop it for you.
This is a pipe dream and “sufficiently advanced” is doing a lot of heavy lifting. You really think people would rather spin up and debug their own self-made software rather than pay for something that has been tested, debugged, and proven by thousands of users? Why would anyone do that for anything more than a very simple script? It makes zero sense unless the LLM outputs literally perfect one-shot software reliably.
Perplexity just launched a tool that builds and hosts small bespoke tools.
I tried it; it works well. I can do the same thing on my Linux machine, but now even my 12-year-old can get Perplexity to build him a tool to compare RAM prices at different Chinese vendors.
Yes, LLMs can be a better search tool.
It makes sense if you want bespoke software to do a specific job in a way best suited to your workflow.
Could you do the same in e.g. Photoshop? Maybe, but even then, you would need to learn how.
Photoshop is a good example -- not that I agree with everything in the app, but just designing all the interactions properly in Photoshop would take hundreds of hours (not to mention testing and figuring out the edges). If your goal is a 1-to-1 clone, why not use Krita or Photoshop? With an LLM you'll get "mostly there" after many, many hours of work, with lots of sharp edges. If all you need is a paint bucket, a basic brush/pencil, and save/load, ok, maybe you can one-shot it in a few hours... or just use Paint / Aseprite...
https://xkcd.com/1205/ (is it worth the time matrix)
LLM's change the calculus of the above chart dramatically.
"working" != "shipping."
When we start selling the software, and asking people to pay for/depend upon our product, the rules change, substantially.
Whenever we take a class or see a demo, they always use carefully curated examples to make whatever they are teaching seem absurdly simple. That's what you are seeing when folks demonstrate how "easy" some new tech is.
A couple of days ago, I visited a friend's office. He runs an Internet Tech company, that builds sites, does SEO, does hosting, provides miscellaneous tech services, etc.
He was going absolutely nuts with OpenClaw. He was demonstrating basically rewiring his entire company, with it. He was really excited.
On my way out, I quietly dropped by the desk of his #2; a competent, sober young lady that I respect a lot, and whispered "Make sure you back things up."
I'm having somewhat good experiences with AI but I think that's because I'm only half-adopting it: instead of the full agentic / Ralphing / the-AI-can-do-anything way, I still do work in very small increments and review each commit. I'm not as fast as others, but I can catch issues earlier. I also can see when code is becoming a mess and stop to fix things. I mean, I don't fix them manually, I point Claude at the messy code and ask it to refactor it appropriately, but I do keep an eye to make sure Claude doesn't stray off course.
Honestly, seeing all the dumb code that it produces, calling this thing "intelligent" is rather generous...
I would love it if someone explained what their ten agents Ralphing away were actually told to do.
I suppose if you are doing something that truly can be decided based on a test but, I just don't see it, at least for anything I do.
I think ralphing is for purely vibe coded stuff, where you're literally never looking at the code and only asking for changes to the final output.
If I'm reviewing all the code, so far I'm still the bottleneck even with a single agent and I don't see an easy way to change that.
I’ve had a similar experience. I’ve been vibecoding a personal kanban app for myself. Claude practically one-shotted 90% of the core functionality (create boards, lanes, cards, etc.) in a single session. But after that I’ve now spent close to 30 hours planning and iterating on the remaining features and UI/UX tweaks to make the app actually work for me, and still, it doesn’t feel "ready" yet. That’s not to say it hasn’t sped up the process considerably; it would’ve taken me hours to achieve what Claude did in the first 10 minutes.
I've got a few projects I've generated, along with a wholly handwritten project started in Dec.
The difference I've noticed is that the act of actually typing out code made me backtrack a few times refining the possible solutions before even starting the integration tests, sometimes before even doing a compile.
When generating, the LLM never backtracked, even in the face of broken tests. It would proceed to continue band-aiding until everything passed. It would add special exceptions to general code instead of determining that the general rule should be refined or changed.
The reason that some devs are reporting 10x productivity is because a bunch of duct-taped, band-aided, instant-legacy code is acceptable. Others who dont see that level of productivity increase are spending time fixing the code to be something they can read.
Not sure yet if accepting the spaghetti is the right course. If future LLMs can understand this spaghetti, then there's no point in good code. If we still need human coders, then the productivity increase is very small.
> It would add special exceptions to general code instead of determining that the general rule should be refined or changed.
That is pretty bad.
I think there's a lot to pick apart here but I think the core premise is full of truth. This gap is real contrary to what you might see influencers saying and I think it comes from a lot of places but the biggest one is writing code is very different than architecting a product.
I've always said, the easiest part of building software is "making something work." The hardest part is building software that can sustain many iterations of development. This requires abstracting things out appropriately, which LLMs are only moderately decent at and most vibe coders are horrible at. Great software engineers can architect a system and then prompt an LLM to build out various components of the system and create a sustainable codebase. This takes time and attention, in a world of vibe coders who are less and less inclined to give their vibe coded products the attention they deserve.
An advantage I have enjoyed is that I am insanely careful about my fundamental architecture and I have a project scaffold that works correctly.
It has examples of all the parts of a web app written, over many years, to be my own ideal structure. When the LLM era arrived, I added a ton of comments explaining what, why and how.
It turns out to serve as a sort of seed crystal for decent code. Though if I don't remind it to mimic that architecture, it sometimes doesn't, which is very weird.
Still, that's a tip I suggest: give it examples of good code, commented to explain why it's good.
My non-technical client totally vibe coded a SaaS prototype with lots of features, a way bigger product than OP's, and it sort of works. They spent about 200 hours on it. I wonder what it would have taken to clean it up and prove it is secure. I declined to work on it, as I wasn't sure whether that was even possible, or whether it would be better to rewrite the entire thing from scratch with better prompts. Given the cost, and the fact that they had a product that sort of worked, I wasn't sure, so I let them go find someone else to clean it up. My reasoning: if the client took 200 hours to develop this without ever stopping to check the code, it would take me 2-3x that to rewrite it with AI the right way, while the cleanup might be so painful that a rewrite from scratch would be better value for money.
I'd also say for a lot of applications -- most applications perhaps -- outside of "consumer" ones, the number of features is quite a bit more important than the shape of a button or the animations during a page transition.
Even pretty massive companies like Databricks don't think about those things; they basically have a UI template library that they compose all their interfaces from. Nothing fancy. It's all about features, and LLMs create copious amounts of features.
The interesting part about vibe coding is the spectrum of experiences and attitudes. I have been playing with it for 2-3hrs a day for the last 4 months now. None of my friends who are using it are using it in the same way. Some people vibe and then refactor, some spec-everything and micro-prompt the solutions. Nobody is feeling like this thing can go unsupervised.
And then there is one guy, a friend of mine, who is planning to release a "submit a bug report, we will fix it immediately" feature: collect an error report from a user, possibly interview them, assess whether it's a bug with a "product owner LLM", then fix it autonomously, and if it passes the tests, merge and push to prod, all in under one hour. That's for a mid cap company, for their client-facing product. F*** hell! I have a full bag of bug reports ready for when this hits prod :->
I started working on one of my apps around a year ago. There was no ai CLI back then. My first prototype was done in Gemini chat. It took a week copy and pasting text between windows. But I was obsessed.
The result worked but that's just a hacked together prototype. I showed it to a few people back then and they said I should turn it into a real app.
To turn it into a full multi-user scalable product... I'm still at it a year later. Turns out it's really hard!
I look at the comments about weekend apps. And I have some of those too, but to create a real actual valuable bug free MVP. It takes work no matter what you do.
Sure, I can build apps way faster now. I spent months learning how to use AI. I did a refactor back in May that was a disaster. The models back then were markedly worse and it rewrote my app, effectively destroying it. I sat at my desk for 12 hours a day for 2 weeks trying to unpick that mess.
Since December things have definitely gotten better. I can run an agent up to 8 hours unattended, testing every little thing and produce working code quite often.
But there is still a long way to go to produce quality.
Most of the reason it's taking this long is that the agent can't solve the design and infra problems on its own. I end up going down one path, realising there is another way and backtracking. If I accepted everything the ai wanted, then finishing would be impossible.
Back then, also around May, I had Claude 3.old destroy a working app. Those were sad old days.
Hasn't happened in a long time. Opus 4.6 is a miracle improvement.
> Late in the night most problems were fixed and I wrote a script that found everyone whose payment got stuck. I sent them money back (+ extra $1 as a ‘thank you for your patience’ note), and let them know via DMs.
(emphasis added)
Not sure whether it was actually written by hand or with AI (that part is glossed over), but as soon as giving away money was on the table, the author seems to have ditched AI.
> Now I'm pretty sure that people who say they "vibecoded an app in 30 minutes" are either building simple copies of existing projects, produce some buggy crap, or just farm engagement.
Some people seem to be better at it than others. I see a huge gulf in what people can do. Oddly, there is a correlation between being a good engineer pre-AI and being able to vibe code well.
But I see one odd thing. A subset of those people would consider good or even amazing pre-AI struggle. The best I can tell at this stage is that they never learned to get good results out of unskilled workers in the past and just relied on their own skills to carry the project.
AI coders can do some amazing things. But at this stage you have to be careful about how you guide them down a path, in the same way you did with junior engineers. I am not saying AI is junior; it can by far code better than most senior engineers, and it has access to knowledge at lightning speed.
If you ask for something complicated this headline is more than true. But why complicate things, keep it simple and keep it fast.
Also, this article uses 'pfp' like it's a word; I can't figure out what it means.
I'm able to vibe code simple apps in 30 minutes, polish it in four hours and now I've been enjoying it for 2 months.
I noticed this as well. I had to look it up. Apparently ‘pfp’ means ‘profile picture’.
Yeah I’ve always found that a cringe initialism given that it’s not Pro File Picture. I would just say avatar.
Apparently it means profile photo.
I’m sure someone else has probably coined the term before me (or it’s just me being dumb, often the case) but I’ve started calling this phase of SWE ‘Ricky Bobby Development’.
So many people are just shouting ‘I wanna go fast’ and completely forgetting the lessons learned over the past few decades. Something is going to crash and burn, eventually.
I say this as a daily LLM user, albeit a user with a very skeptical view of anything the LLM puts in front of me.
I love this!
Nobody is saying they're ready for production in 30 minutes, just that there is something real where an idea used to be.
Something much closer to production SDLC patterns than a Figma mockup.
This seems more like he is bad at describing what he wants and is prompting for “a UI” and then iterating “no, not like that” for 99 hours.
Author admittedly didn’t know how to scale his app for thousands or hundreds of thousands of users. He jokes about it working great on localhost or “my machine”.
Not knocking the premise of the post. It probably works well for one single user if it's an iPhone or Android app. But his 100 power hours are probably just right for what he ended up launching, as he iterated through the requirements and learned how to set this up through iteration and user feedback.
Yeah, but if you have to describe it in great detail in English, you're better off just writing it with autocomplete.
I find that vibe coding is useful when something can be built with few details and the model makes the right assumptions.
I have had the experience with creating https://swiftbook.dev/learn
Used Codex for the whole project. At first I used Claude for the architecture of the backend, since that's where I usually work and have experience. The code runner and API endpoints were easy to create for the first prototype. But then it got to the UI, and here's where sh1t got real. The first UI was in React even though I had specifically told it to use Vue. The code editor and output window were a mess in terms of height; there was too much space between the editor and the output window, and no matter how much time I spent prompting and explaining, it just never got it right. I got tired and opened Figma, used it to refine the design to what I wanted. Shared the code it generated to GitHub, cloned it locally, then told Codex to copy the design, and it finally got it right.
Then came the hosting. I wanted the code runner endpoint in a Docker container for security purposes, since someone could execute malicious code and take over the server if I hosted it without some protection, and here it kept selecting out-of-date Docker images. Had to manually guide it again on what I needed. Finally deployed and got it working, complete with a domain name. Shared it with a few friends and they suggested some UI fixes, which took some time.
For the runner security hardening I used Deepseek and Claude to generate a list of code I could run to surface potential issues, and despite Codex insisting all was fine, I was able to uncover a number of issues. Here is where it got weird: Codex started arguing with me despite being shown all the issues present. So I compiled all the issues into one document and shared the Dockerfile, the Linux seccomp config file, and the issues document with Claude. It gave me a list of fixes for the Dockerfile to help with security hardening, which I shared back with Codex, and that's when it fixed them.
Most of the issues are now resolved, but the whole process took me a whole week and I am still not done; I was working most evenings. So I agree that you cannot create a usable product used by lots of users in 30 minutes, not unless it's some static website. It's too much work of constant testing and iteration.
I have had things like your React-instead-of-Vue problem. I solved it by always having Claude write a full implementation spec/plan in markdown, which I give to a fresh-context Claude to implement. Typically I comment on the plan and make it revise until I am happy.
It has basically eliminated surprises like that.
You can say "shit" here if you like.
100 hours? Try 500 hours at least if you want a competitive product, unless you are a wizard at marketing and can out-market the 80/20 guys.
if something like a popup appears that i didn't ask the page for, i snap close the page and never look at it again
What I really want to know is... as a software developer for 25+ years, when I use these AI tools, is it still called "vibecoding"? Or is "vibecoding" reserved for people with no/little software development background who are building apps? Genuine question.
Steve Yegge has been a dev for several decades with lead spots at Amazon and Google, has completely converted to using AI, wrote a book about it using it effectively for large production-ready projects, and still calls it vibe coding.
I don't think I'll ever adopt this term, I'm not a fan of it at all. I find myself saying "I was working with AI" and just leave it at that. It is a collaboration afterall.
As a software developer over 30 years, AI is not a tool, it is not deterministic, it is an aide.
Don't have it do things for you. Have it do things with you.
I came across the following yesterday: "The Great Way is not difficult for those who have no preferences," a famous Zen teaching from the Hsin Hsin Ming by Sengstan
As we move from tailors to big box stores I think we have to get used to getting what we get, rather than feeling we can nitpick every single detail.
I'd also be more interested in how his 3rd, 4th or 5th vibe coded app goes.
The 80/20 rule doesn’t go away. I am an AI true believer and I appreciate how fast we can get from nothing to 80% but the last “20%” still takes 80%+ of the time.
The old rules still apply mainly.
Yes, so 80% of 100 hours is considerably less than 80% of 600 hours
In my experience, the last 20% tends to be the stuff that's less obvious, too, by its very nature.
The details and pitfalls that are unique to your specific scenario, that you only discover by running into them.
And yet this less obvious, more uncommon stuff is also what AI will be weakest at.
The bottleneck seems to have shifted.
Before LLMs the slow part was writing code. Now the slow part is validating whether the generated code is actually correct.
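One way to shrink that validation loop is a deterministic check script: golden input/output fixtures that either pass or fail with no judgment calls, run against whatever the model produced. A minimal sketch (the `computePayout` function and its fixture values are hypothetical stand-ins, not from the article):

```javascript
// Golden-output check: run the generated function against trusted
// fixtures recorded before the change. Any drift fails loudly.
function computePayout(amountCents, feeBps) {
  // Hypothetical LLM-generated code under review:
  // fee in basis points, rounded down.
  return amountCents - Math.floor((amountCents * feeBps) / 10000);
}

const golden = [
  { args: [10000, 25], expected: 9975 },
  { args: [9999, 25], expected: 9975 },
  { args: [1, 25], expected: 1 },
];

for (const c of golden) {
  const got = computePayout(...c.args);
  if (got !== c.expected) {
    throw new Error(`computePayout(${c.args}) returned ${got}, expected ${c.expected}`);
  }
}
console.log("all golden cases pass");
```

A script like this turns "read the diff and hope" into a one-command check the agent can be pointed at, which is where the back-and-forth gets cheaper.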
I have not been coding for a few years now. I was wondering if vibe coding could unstick some of my ideas. Here is my question, can I use TDD to write tests to specify what I want and then get the llm to write code to pass those tests?
That's a great approach, though I'd also recommend setting up a strong basis for linting, type checking, compilation, etc depending on the language. An LLM given a full test suite and guard rails of basic code style rules will likely do a pretty good job.
I would find it a bit tricky to write a full test suite for a product without any code though. You'd need to understand the architecture a bit and likely end up assuming, or mocking, what helpers, classes, config, etc will be built.
You absolutely can. This is one of the recommended directions with agentic coding. But you can go farther and ask the LLM to write the tests too, then review/approve them.
Yes, I mostly do spec-driven development, and at the design stage I always add in tests. I repeat this pattern for any new features or bug fixes: get the agent to write a test (unit, integration, or Playwright-based), reproduce the issue, then implement the change and retest, etc... and retest using all the other tests.
To expand on the "Yes": the AI tools work extremely well when they can test for success. Once you have the tests as you'd like them, you may want to tell the LLM not to modify the tests because you can run into situations where it'll "fix" the tests rather than fixing the code.
yes. depending on the techstack your experience might be better or worse. HTML/CSS/React/Go worked great, but it struggled with Swift (which I had no experience in).
Yes
>> people who say they "vibecoded an app in 30 minutes" are either building simple copies of existing projects,
those are not copies, they aren't even features. usually part of a tiny feature that barely works only in demo.
with all vibe coding in the world today you still need at least 6 months full time to build a nice note taking app.
If we are talking something more difficult - it will be years - or you will need a team and it will still take a long time.
Everything less will result in an unusable product that works only for demo and has 80% churn.
Can you expand on this? You definitely don't need 6 months for a note taking app to be usable; it's more that you need to compete with the state of the art, right?
I'd argue you need between 6 minutes and 6 years.
It depends entirely on what you want. You can literally code a JavaScript 1-liner that will make a <textarea> then put the content back in the URL and it will work serverless on pretty much any platform with a Web browser.
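For reference, the round trip behind that one-liner can be sketched like this (the helper names are mine; in a browser you would wire them to a <textarea>'s input event and location.hash):

```javascript
// The URL fragment is the storage: encode the note into the hash,
// decode it back out on load. No server involved.
function noteToUrl(baseUrl, text) {
  return baseUrl + "#" + encodeURIComponent(text);
}

function noteFromUrl(url) {
  const i = url.indexOf("#");
  return i === -1 ? "" : decodeURIComponent(url.slice(i + 1));
}

const url = noteToUrl("https://example.com/note.html", "buy milk & eggs");
console.log(noteFromUrl(url)); // -> buy milk & eggs
```

Bookmarking the resulting URL is the "save" feature, which is the whole app at the 6-minute end of the spectrum.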
You can also write a note taking app that is federated yet private, that has its own scripting language, etc. I mean, you can yak-shave your way to writing your own OS or even designing your own CPU for that matter.
So... I'm not sure that metric, time, means much without proper context, including who does it. It's quite different to do that, regardless of the tooling used, if you are a professional developer, designer, fullstack dev, prototypist, PM, marketer, writer, etc.
> Can you expand on this?
sure. does your note taking app support formatting? you don't need it today; you will need it at some point. images? same.
does it handle file corruption etc? no? then it's pretty much useless.
does it work across devices? in the modern world, again, it is pretty much useless without that
it works across devices? then it needs hosting. if it is hosted, it needs auth, it needs backups
you can go on forever.
the bar for a very minimal note taking app that you will actually use is very high; with other software it is even higher.
and this is not even state of the art, these are must-haves
Obsidian is super popular and is generally local first and device specific.
And even so, if you're starting a note taking app, most of those problems, like file corruption and image support, are largely solved. There is also the benefit of being able to reference tons of open source implementations.
I think one month to a Notion-like app that is prod-ready, if you just need auth + markdown + images + standard text editing.
What universe do you live in
>with all vibe coding in the world today you still need at least 6 months full time to build a nice note taking app.
Bad example, note apps loaded with features are anti-productive and are for people who treat note taking as a hobby itself.
You have Obsidian anyway if you want something open source to work with.
I don't disagree, but I found it ironic I built ZenPlan, my ideal hybrid task/notetaking app, in about 50 hours with Claude Code this month after being frustrated with notebook and task management sprawl in OneNote. www.getzenplan.com
Ah, note taking as a hobby finally explains to me why these apps seem so popular. I don't think I have ever considered that I need one, or that it's something that hasn't already been fully solved many times over by now. But it really being a hobby does kind of make the point for me.
obsidian isn't open source
You seem to be making the assumption that "app" means "sellable product", rather than "one off that works for me". It doesn't.
When everyone is able to make their own one off prototype in 30 minutes, no one will pay for the thing that took someone 6 months.
whatever you prototype, the one who built it in 6 months will have economies of scale to make it cheaper than your DIY solution, and because they serve many customers and developed it for 6 months, their product will be 100x better than the one you DIY
there are very, very rare use cases where DIY makes sense. in 99% of cases it's just a toy that feels nice because you kinda did it yourself. but if you factor in the time etc, it always costs 100x more than the $5/month you could usually just pay
Based on your earlier comment and your last paragraph, your impression of AI vibe coding ability is at least a year out of date.
In late 2024 it might have taken 6 months. Today, two weeks, maybe 3.
What I found to be effective is to use multiple AI tools at once. I'm using the newest Gemini model (I can't think of the name off the top of my head right now) and the newest Claude model. I have each for its own purpose, with the RustRover IDE to speed things up. RustRover is particularly helpful because of how Rust is worked with: the constant cargo CLI commands and database interactions right in the IDE. I know VS Code has this to a certain extent, but IMO I prefer RustRover. I use multiple models because I know what each one is good at and how my knowledge works with their output; it makes my life way easier and drives frustration down, which is needed when you need creativity at the forefront. That being said, it definitely helps to know what you are doing, if not 100% then at least 60% of the things you are asking the models to do for you. I have caught mistakes, and I know when a model might make a mistake, which I'm fine with. Sometimes I just want to see how something is done, like the structure of a certain function or crate, as I'm constantly reading the crates.io docs to learn what I'm doing.
There are plenty of ways to code and use code; whichever works for you is good, just improve on it and make it more effective. I have multiple screens on my computer; I don't like jumping back and forth opening tabs and browsers, so I have my setup arranged the best way that works for me. As for the AI models, they are not going to be that helpful if you don't understand why they're doing what they're doing in a particular function, crate (in the case of Rust), or library. I imagine the over-the-top coder with years of experience, multiple languages, and deep knowledge of libraries, using the same technique, could technically replace a whole department by himself.
It seems like the entire "product" here is just a ChatGPT system prompt: "combine this image of a person with this image of a dinosaur".
The only thing he needed to code was an NFT wrapper, which presumably is just forking an existing NFT wholesale.
The interesting, user-facing part of the project isn't code at all! It's just an HTML front end on someone else's image generator and a "pay me" button.
Very disappointing.
The speed of prototyping right now is wild.
The interesting shift seems to be that building the first version is no longer the bottleneck — distribution, UX polish and reliability are.
Look at the screenshots to understand what the author means by 'product'.
We don't need to shit on someone who shared their experiences and thoughts.
I agree with you point, but I do look sidelong at the number of points the post has. It is, at the very least, unexpected.
This would have been generic slop if it wasn't for AI.
Woodworking is an analogy that I like to use in deciding how to apply coding agents. The finished product needs to be built by me, but now I can make more, and more sophisticated, jigs with the coding agents, and that in turn lets me improve both quality and quantity.
It already starts with BS. Yes, there are apps you can build in 30 minutes, and they are great, not buggy or crap as he says. And there are apps that need an hour, or even weeks. It depends on what you want to build. To start off by saying that every app built in 30 minutes is crap simply shows that he did not want to think about it, is ignorant, or simply wanted to push himself up by putting others down. At this point, every programmer who claims that vibecoding doesn't make you at least 10 times more productive is simply lying or, worse, doesn't know how to vibe code.
this is why i use ai on just one file at a time, as an extension of my own programming. not as fast, but it keeps me in control
> With AI, it’s easier to get the first 90 percent out there. This means we can spend more time on the remaining 10 percent, which means more time for craftsmanship and figuring out how to make your users happy.
EXCEPT... you've just vibe coded the first 90 percent of the product, so completing the remaining 10 percent will take WAY longer than normal because the developers have to work with spaghetti mess.
And right there this guy has shown exactly how little people who are not software developers with experience understand about building software.
I keep seeing things that were vibe coded and thinking, "That's really impressive for something that you only spent that much time on".
To have a polished software project, you must spend time somewhat menially iterating and refining (as each type of user).
To have a polished software project, you need to have started with tests and test coverage from the start for the UI, too.
Writing tests later is not as good.
I have taken a number of projects from a sloppy vibe coded prototype to 100% test coverage. Modern coding llm agents are good at writing just enough tests for 100% coverage.
But 100% test coverage doesn't mean that it's quality software, that it's fuzzed, or that it's formally verified.
Quality software requires extensive manual testing, iteration, and revision.
I haven't even reviewed this specific project; it's possible the author developed a quality (CLI?) UI without e2e tests in that much time.
Was the process for this more like "vibe coding" or "pair programming with an LLM"?
> "That's really impressive for something that you only spent that much time on"
Again, I haven't even read this particular project;
There's:
Prompt insufficiency: Was the specification used to prompt the model sufficient, relative to what is regarded as a complete enough software specification?
Model and/or Agent insufficiency,
Software Development methods and/or Project Management insufficiency,
QA insufficiency,
Peer review sufficiency;
Is it already time to rewrite the product using the current project as a more sufficient specification?
But then how many hours of UI and business logic review would be necessary again?
Is 100 hours enough?
A 40-hour work week comes to 2,080 hours per person per year.
The "10,000 hours to be really good at anything" number was the expert threshold used to categorize test subjects in neuroimaging studies of compassion meditation. 10,000 hours to be an expert is about 5 years at full time.
But how many hours to have a good software product?
Usually I check for tests and test coverage first. You could have spent 1,000 hours on a software project and if it doesn't have automated tests, we can't evolve the software and be sure that we haven't caused regressions.
I can't say I'm impressed by this at all. 100+ hours to build a shitty NFT app that takes one picture and a predefined prompt, then mints you a dinosaur NFT. This is the kind of thing I would've seen college students with no experience slam out over a weekend for a coding jam, on a few cans of Red Bull, with more quality and effort. Have our standards really gotten so low? I don't see any craftsmanship at play here.
Also the process sounds like a nightmare: "it broke and I asked 4 different LLMs to fix it; my `AGENTS.md` file contained hundreds of special cases; etc." I thought this article was intended to be a horror story, not an advertisement
If nothing else, at least the age of AI finally got devs to write good documentation!
> The "remaining 10 percent" is a difference between slop and something people enjoy.
I would say the remaining 10% is about how robust your solution is; anything associated with 'vibe' feels inherently unsecure. If you can objectively prove it is not, that's 10% of your time well spent.
> anything associated with 'vibe' feels inherently unsecure.
Only "feels"?
Instead of 10x devs you now have the super rare 100x devs. They are using AI how it should be used.
I can't take anyone seriously who says an AI edge will be a "superpower".
Which part of "commodity" is confusing???
Of course vibe coding is going to be a headache if you have very particular aesthetic constraints around both the code and UX, and you aren't capable of clearly and explicitly explaining those constraints (which is often hard to do for aesthetics).
There are some good points here to improve harnesses around development and deployment though, like a deployment agent should ask if there is an existing S3 bucket instead of assuming it has to set everything up. Deployment these days is unnecessarily complicated in general, IMO.
If you hear someone spouting off about how vibe coding allows for creation of killer apps in a fraction of the time/cost, just ask them if you can see what successful killer apps they’ve created with it. It’s always crickets at that point because it’s somewhere between wishful thinking and an outright lie.
Why did this crypto grifter AI app get traction on this site?
I'm a 20-year veteran of application development consulting. Contributor level... not a talking head. I do more estimating than anyone you likely know. Consulting is cooked. I just AI-native built (not vibe coding...) an application with a buddy, another Principal-level engineer, and what would cost a client $500-750k and 8-12 weeks, we did for $200 and one sprint. It's a passion project, but a highly complex mapping and navigation app with host/client multi-user synced state. Cooked.
>highly complex mapping
Curious. Can you elaborate on this a bit?
Do you have a race car or race team? Happy to onboard you, otherwise, not here.
No, but I think I got my answer.
I realize this sounds one-sided. I've also founded companies and worked across the range from startup to FAANG. Everything has changed... for the better, if you ask me.
You are invested in some kind of AI start up, right?
I mean the worst part about this is the author also vibe coded their security. It could have been much more catastrophic if they built a crypto wallet or trading system. But because it was NFTs I guess the max damage was limited.
I have to say it's a little sad that so many devs think of security and cryptography the same way they think of library frameworks: as just some black box API to use in their projects, rather than respecting that it's a fully developed, complex field that demands expertise to avoid mistakes.
Wow. First realistic post about coding assistants that I've read on HN, I think.
[Disclaimer: that I have read. Doesn't mean there weren't others.]
Too bad it's about NFTs but we can't have everything, can we?