Author here. I built this in a few hours after the Claude Code leak.
I've been working on my own coding agent setup for a while. I mostly use pi [0] because it's minimal and easy to extend. When the leak happened, I wanted to study how Anthropic structured things: the tool system, how the agent loop flows. A 500K-line codebase is a lot to navigate, so I mapped it visually to give myself a quick reference I could come back to while adapting ideas into my own harness and workflow.
I'm actively updating the site based on feedback from this thread. If anything looks off, or you find something I missed, lmk.
Of course I expect it is vibe coding. It would be insane to code anything by hand these days. But that doesn't mean there is no creative input by the author here.
Depends on what you’re building and whether it’s recreational or not. Complex architecture vs a ui analysis tool, for example. For a ui analysis tool, the only reason you code by hand is for the joy of coding by hand. Even though you can drive a car or fly in a plane there are times to walk or ride a bike still.
I genuinely believe this. Even if you're inventing a new algorithm it is better to describe the algorithm in English and have AI do the implementation.
One reason, beside basic altruism, is so you can put the projects on your resume. This is especially helpful if the project does very well or gets lots of stars.
This is nice, I really like the style/tone/cadence.
The only suggestion/nit I have is that you could add some kind of asterisk or hover helper to the part where you talk about 'Anthropic's message format', as it did make me want to come here and point out how it's ackchually OpenAI's format and is very common.
Only because I figure if this was my first time learning about all this stuff I think I'd appreciate a deep dive into the format or the v1 api as one of the optional next steps.
I’m using pi and cc locally in a docker container connected to a local llama.cpp so the whole agentic loop is 100% offline.
I had used pi and cc to analyze the unpacked cc to compare their design, architecture and implementation.
I guess your site was also coded with pi and it is very impressive. Wonderful if you can do a visualization for pi vs cc as well. My local models might not be powerful enough.
A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare. Right now, they're great for prompting simple sites/platforms but they break at large enterprise repos.
If you don't have a rigid, external state machine governing the workflow, you have to brute-force reliability. That codebase bloat is likely 90% defensive programming: frustration regexes, context sanitizers, tool-retry loops, and state rollbacks just to stop the agent from drifting or silently breaking things.
The visual map is great, but from an architectural perspective, we're still herding cats with massive code volume instead of actually governing the agents at the system level.
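To make the "defensive programming" point concrete, here is what one of those guardrails typically looks like: a generic tool-retry wrapper with exponential backoff. This is a sketch of the pattern, not code from the leak; the function name and parameters are illustrative.

```typescript
// Hypothetical tool-retry wrapper: re-run a flaky tool call a few times
// with exponential backoff before giving up. Generic sketch only.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err; // remember the failure for the final rethrow
      // back off: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr; // exhausted the attempt budget
}
```

Multiply a dozen wrappers like this across every tool, plus context sanitizers and rollback paths, and the line count climbs fast.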
I find it really strange that there is so much negative commentary on the _code_, but so little commentary on the core architecture.
My takeaway from looking at the tool list is that they got the fundamental architecture right - try to create a very simple and general set of tools on the client-side (e.g. read file, output rich text, etc) so that the server can innovate rapidly without revving the client (and also so that if, say, the source code leaks, none of the secret sauce does).
Overall, when I see this I think they are focused on the right issues, and I think their tool list looks pretty simple/elegant/general. I picture the server team constantly thinking - we have these client-side tools/APIs, how can we use them optimally? How can we get more out of them. That is where the secret sauce lives.
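A minimal sketch of that thin-client idea: tools are just a name plus a generic run function, the client stays dumb, and all the interesting decisions about how to compose them live server-side. Every name here is illustrative, not taken from the leaked source.

```typescript
// Hypothetical thin-client tool registry: the client only knows how to
// dispatch named tools; the server decides which to call and when.
type ToolFn = (args: Record<string, string>) => Promise<string>;

const registry = new Map<string, ToolFn>();

function registerTool(name: string, fn: ToolFn): void {
  registry.set(name, fn);
}

async function dispatch(name: string, args: Record<string, string>): Promise<string> {
  const fn = registry.get(name);
  // Client stays dumb: unknown tools fail fast instead of guessing
  if (!fn) throw new Error(`unknown tool: ${name}`);
  return fn(args);
}
```

With an interface this small, the server can rev its prompting and orchestration strategy without ever shipping a new client.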
The tools were mostly already known, no? (I wish they had a "present" tool which allowed the model to copy-paste from files/context/etc., showing the user some content without forcing it through the model)
Yeah in fact one thing claude is freaking great at is decompilation.
If you can download it client side you can likely place a copy in a folder and ask claude
‘decompile the app in this folder to answer further questions on how it works. As an example first question, explain what happens when a user does X’.
I do this with obscure video games where I want a guide on how some mechanics work. E.g.
https://pastes.io/jagged-all-69136 as a result of a session.
It can ruin some games but despite the possibility of hallucinations i find it waaay more reliable than random internet answers.
Works for apps too. Obfuscation doesn’t seem to stop it.
Why are "tools" for local IO interesting and not just the only way to do it? I can't really imagine a server architecture that gets to read your local files and present them without a fat client of some kind.
What is the naive implementation you're comparing against? SSH access to the client machine?
It's early days and we don't fully understand LLM behavior to the extent that we can assume questions like this about agent design are resolved. For instance, is an agent smarter with Claude Code's tools or `exec_command` like Codex? And does that remain true for each subsequent model release?
It’s a distinction that IMHO likely doesn’t make much difference, at least for the mostly automated/non-interactive coding agent use case. What matters more is how well the post-training on synthetic harness traces works.
> but so little commentary on the core architecture.
The core architecture is not interesting? It's an LLM TUI; there's not much there to discuss architecturally. The code itself is the actual fascinating train wreck to look at.
It’s not surprising. There has been quite a bit of industrial research in how to manage mere apes to be deterministic with huge software control systems, and they are an unruly bunch I assure you.
It's hard to tell how much it says about difficulty of harnessing vs how much it says about difficulty of maintaining a clean and not bloated codebase when coding with AI.
Why not both? AI writes bloated spaghetti by default. The control plane needs to be human-written and rigid -> at least until the state machine is solid enough to dogfood itself. Then you can safely let the AI enhance the harness from within the sandbox.
We propped the entire economy up on it. Just look at the s&p top 10. Actually even top 50 holdings.
If it doesn't deliver on the promise we have bigger problems than "oh no the code is insecure". We went from "I think this will work" to "this has to work because if it doesn't we have one of those 'you owe the bank a billion dollars' situations"
As I’m reading this, I’m thinking about how, in 1980, it was imagined that everyone needed to learn how to program in BASIC or COBOL, and that the way computers would become ubiquitous would be that everybody would be writing programs for them. That turned out to be a quaint and optimistic idea.
It seems like the pitch today is that every company that has a software-like need will be able to use AI to manifest that software into existence, or more generally, to manifest some kind of custom solution into existence. I don’t buy it. Coding the software has never been the true bottleneck, anyone who’s done a hackathon project knows that part can be done quickly. It’s the specifying and the maintenance that is the hard part.
To me, the only way this will actually bear the fruit it’s promising is if they can deliver essentially AGI in a box. A company will pay to rent some units of compute that they can speak to like a person and describe the needs, and it will do anything - solve any problem - a remote worker could do. IF this is delivered, indeed it does invalidate virtually all business models overnight, as whoever hits AGI will price this rental X%[1] below what it would cost to hire humans for similar work, breaking capitalism entirely.
[1] X = 80% below on day 1 as they’ll be so flush with VC cash, and they’d plan to raise the price later. Of course, society will collapse before then because of said breaking of capitalism itself.
Kinda depends how much of it is vibe coded. It could easily be 5x larger than it needs to be just because the LLM felt like it if they've not been careful.
Claude folks proudly claim to have Claude effectively writing itself. The CEO claims it will read an issue and automatically write a fix, tests, commit and submit a PR for it.
Bingo. And them 'being careful' is exactly what bloats it to 500k lines. It's a ton of on-the-fly prompt engineering, context sanitizers, and probabilistic guardrails just to keep the vibes in check.
There seem to be multiple mechanisms compensating for imperfect, lossy memory. "Dreaming" is another band-aid on inability to reliably store memory without loss of precision. How lossy is this pruning process?
It's one thing to give Claude a narrow task with clear parameters, and another to watch errors or incorrect assumptions snowball as you have a more complex conversation or open-ended task.
The time is ripe for deterministic AI; incidentally, this was also released today: https://itsid.cloud/ - presumably will be useful for anyone who wants to quickly recreate an open source Python package or other copyrighted work to change its license.
Can you please explain the use here? I tried the demo, and cat, cp, echo, etc... seem to do the exact same thing without the cost.
Their demo even says:
`Paste any code or text below. Our model will produce an AI-generated, byte-for-byte identical output.`
Unless this is a parody site can you explain what I am missing here?
Token echoing isn't even to the lexeme/pattern level, and not even close to WSD, Ogden's Lemma, symbol-grounding etc...
The intentionally 'probably approximately correct' statistical learning model fundamentally limits reproducibility for PAC/statistical methods like transformers.
CFG inherent ambiguity == post correspondence problem == halting problem == open-domain frame problem == system identification problem == symbol-grounding problem == Entscheidungsproblem
The only way to get around that is to construct a grammar that isn't. It will never exist for CFGs, programs, types, etc... with arbitrary input.
I just don't see why placing a `14-billion parameter identity transformer` that just basically echos tokens is a step forward on what makes the problem hard.
Indeed. In some ways, this is just kind of an extrapolation of the overall trend toward extreme bloat that we’ve seen in the past 15 years, just accelerated because LLMs code a lot faster. I’m pretty accustomed to dealing with Web application code bases that are 6-10 years old, where the hacks have piled up on top of other hacks, piled on top of early, tough-to-reverse bad decisions and assumptions, and nobody has had time to go back and do major refactors. This just seems like more of the same, except now you can create a 10 year-old hack-filled code base in three hours.
Herding cats is treating the LLM's context window as your state machine. You're constantly prompt-engineering it to remember the rules, hoping it doesn't hallucinate or silently drop constraints over a long session.
System-level governance means the LLM is completely stripped of orchestration rights. It becomes a stateless, untrusted function. The state lives in a rigid, external database (like SQLite). The database dictates the workflow, hands the LLM a highly constrained task, and runs external validation on the output before the state is ever allowed to advance. The LLM cannot unilaterally decide a task is done.
I got so frustrated with the former while working on a complex project that I paused it to build a CLI to enforce the latter. Planning to drop a Show HN for it later today, actually.
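The "external state machine" pattern described above can be sketched like this. An in-memory Map stands in for the SQLite table, and `callLLM`/`validate` are hypothetical stand-ins; the point is only the shape: the orchestrator owns state, the LLM is a stateless function, and state advances only after external validation.

```typescript
// Sketch: governance lives outside the model. The LLM never touches the
// state table; the orchestrator validates output before advancing.
type TaskState = "pending" | "in_progress" | "done" | "failed";

class Orchestrator {
  // Stand-in for a rigid external store (e.g. a SQLite table)
  private state = new Map<string, TaskState>();

  constructor(
    private callLLM: (task: string) => Promise<string>,   // stateless, untrusted
    private validate: (task: string, output: string) => boolean, // external check
  ) {}

  async run(taskId: string, spec: string): Promise<TaskState> {
    this.state.set(taskId, "in_progress");
    const output = await this.callLLM(spec);
    // The LLM cannot unilaterally declare success; validation gates the transition
    const next: TaskState = this.validate(taskId, output) ? "done" : "failed";
    this.state.set(taskId, next);
    return next;
  }

  status(taskId: string): TaskState | undefined {
    return this.state.get(taskId);
  }
}
```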
> The database dictates the workflow, hands the LLM a highly constrained task, and runs external validation on the output before the state is ever allowed to advance.
This sounds like where lat.md[0] is headed. Only thing is it doesn't do task constraint. Generally I find the path these tools are taking interesting.
Just posted it here: https://news.ycombinator.com/item?id=47601608
Thank you so much for the coffee offer, that genuinely made my day! I don't have a sponsor link set up. Honestly, the best support is just hearing if this actually helps you ship your personal project faster without losing your mind to prompt engineering. I really hope it gives you your sanity back. Let me know how it goes!
Looks like it was downvoted to hell and marked as dead super fast. I leave the flag for "dead" on in my HN settings (leaves it super desaturated) and this seems unusual
> A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare.
Considering what the entire system ends up being capable of, 500k lines is about 0.001% of what I would have expected something like that to require 10 years ago.
You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
> You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
You really need to compare it to the model weights though. That’s the “code”.
... what are you even talking about? "The system that literally writes code" has a few hundred trillion parameters. How is this smaller than LibreOffice?
>A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare. Right now, they're great for prompting simple sites/platforms but they break at large enterprise repos.
Is that the case? I'm pretty sure Claude Code is one of the most massively successful pieces of software made in the last decade. I don't know how that proves your point. Will this codebase become unmanageable eventually? Maybe, but literally every agent harness out there is just copying their lead at this point.
Claude code is a massively successful generator, I use it all the time, but it's not a governance layer.
The fact that the industry is copying a 500k-line harness is the problem. We're automating security vulnerabilities at scale because people are trying to put the guardrails inside the probabilistic code instead of strictly above it.
Standardizing on half a million lines of defensive spaghetti is a huge liability.
>Standardizing on half a million lines of defensive spaghetti is a huge liability.
Again, maybe it will be. Or maybe the way we make software and what is considered good practice will completely change with this new technology. I'm betting on the latter at this point.
I know it seems counter-intuitive but are there any agent harnesses that aren't written with AI? All these half-a-million-LoC codebases seem insane to me when I run my business on a full-stack web application that's like 50k lines of code, and my MVP was like 10k. These are just TUIs that call a model endpoint with some shell-out commands. These things have only been around for a time measured in months; half a million LoC is crazy to me.
Who cares about LoC? It's a metric that hasn't mattered since we measured productivity in it in the 1980s. For all we know they made these design choices so they could more easily reuse the code in other codebases. Ideally you'd build the library to do that at the same time, but this is start up time constraints to repay loans and shit.
The bloat is mostly error handling for a fundamentally unpredictable system. A simple agent loop is 200 lines. But then you need graceful recovery when the model hallucinates a tool call, context window management, permission boundaries so it doesn't rm -rf your repo, state persistence across crashes, rate limiting, and cost tracking. Each one is simple individually. Together they're 500k lines.
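A bare-bones version of that "200-line agent loop" (before any of the defensive layers get added) might look like this. The types, step cap, and callback signatures are illustrative, not from any real harness.

```typescript
// Minimal agent loop sketch: call the model, execute any tool calls it
// requests, feed results back, stop when it answers without tools.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { text: string; toolCalls: ToolCall[] };

async function agentLoop(
  prompt: string,
  callModel: (history: string[]) => Promise<ModelTurn>,
  runTool: (call: ToolCall) => Promise<string>,
): Promise<string> {
  const history: string[] = [prompt];
  for (let step = 0; step < 50; step++) { // hard step cap: one tiny guardrail
    const turn = await callModel(history);
    if (turn.toolCalls.length === 0) return turn.text; // model is done
    for (const call of turn.toolCalls) {
      try {
        history.push(await runTool(call)); // feed tool output back as context
      } catch (err) {
        history.push(`tool ${call.name} failed: ${err}`); // crude recovery path
      }
    }
  }
  throw new Error("step budget exhausted");
}
```

Everything the parent comment lists — permission boundaries, context trimming, persistence, rate limits — wraps around each of these few lines, which is how a loop like this balloons.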
> Most people's mental model of Claude Code is that "it's just a TUI" but it should really be closer to "a small game engine".
>
> For each frame our pipeline constructs a scene graph with React then
> -> layouts elements
> -> rasterizes them to a 2d screen
> -> diffs that against the previous screen
> -> finally uses the diff to generate ANSI sequences to draw
>
> We have a ~16ms frame budget so we have roughly ~5ms to go from the React scene graph to ANSI written.
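The diff step in that quoted pipeline can be sketched roughly as follows, using standard ANSI escape codes; the line-buffer representation is a big simplification of what a real renderer does.

```typescript
// Sketch of a frame diff: compare the new screen buffer to the previous
// one line by line and emit ANSI only for rows that changed.
function diffFrames(prev: string[], next: string[]): string {
  let out = "";
  for (let row = 0; row < next.length; row++) {
    if (prev[row] !== next[row]) {
      // CSI row;1H moves the cursor (1-indexed row, column 1);
      // CSI 2K erases the line before redrawing it.
      out += `\x1b[${row + 1};1H\x1b[2K${next[row]}`;
    }
  }
  return out;
}
```

The real renderer also has to handle styling, wrapping, scrollback, and partial-cell updates inside its ~5ms budget, which is where the "small game engine" comparison comes from.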
> These are just TUIs that call a model endpoint with some shell-out commands.
Claude Code CLI is actually horrible: it's a full headless browser rendering that's then converted in real-time to text to show in the terminal. And that fact leaks to the user: when the model outputs ASCII, the converter will happily convert it to Unicode (no later than yesterday there was a TFA complaining about Unicode characters breaking Unix pipes / parsers expecting ASCII commands).
It's ultra annoying during debugging sessions (that is not when in a full agentic loop where it YOLOs a solution): you can't easily cut/paste from the CLI because the output you get is not what the model did output.
Mega, mega, mega annoying.
What should be something simple becomes a rube-goldberg machinery that, of course, fucks up something fundamental: converting the model's characters to something else is just pathetically bad.
Anyone from Anthropic reading? Get your shit together: if you keep this "headless browser rendering converted to text", at least do not fucking modify the characters.
For the animations specifically, it's using Motion (fka Framer Motion) Javascript library. If you describe some animations from the site to an LLM and ask it to use Framer motion, you get very similar results. The creator likely just prompted for a while until they were happy with the outcome.
Yup, strange to see people still don't understand LLMs massively speed up coding greenfield pet projects. Anytime you see a new web app it's better to assume AI use rather than not anymore.
I'm not familiar enough with this animation library to answer that. Someone could be very used to this type of website and just copy paste things they've done before.
Isn't it a simple REPL with some tools and integrations, written in a very high level language? How the hell is it so big? Is it because it's vibecoded and LLMs strive for bloat, or is it meaningful complexity?
> Claude Code's 500k LOC doesn't seem out of the ordinary.
Aren't all the other products also vibe-coded? "All vibe-coded products look like this" doesn't really seem to answer the question "Why is it so damn large?"
It's a repl, that calls out to a blackbox/endpoint for data, and does basic parsing and matching of state with specific actions.
I feel the bulk of those lines should be actions that are performed. Either this is correct or this is not:
1. If the bulk of those lines implement specific and simple actions, why is it so large compared to other software that implements single actions (coreutils, etc)
2. If the actions constitute only a small part of the codebase, wtf is the rest of it doing?
>> I feel the bulk of those lines should be actions that are performed. Either this is correct or this is not:
> You're complaining about vibe coding while also complaining about how you "feel" about the code. Do you see the irony in that?
Where did I complain about how I feel about the actual code? I have feelings, negative ones, about the size of the code given the simple functionality it has, but I have no feelings on the code because I did not look at the code.
Bad by whose definition? They work really well in my experience. They aren't perfect but the amount of hand holding has gone down dramatically and you can fix any glaring problems with a code review at the end. I work on a multimillion line code base which does not use any popular frameworks and it does a great job. I may be benefiting from the fact that the codebase is open source and all models have obviously been trained on it.
Most of their issues have been solved a long time ago, with 1000x less code. It is depressing at this point. I really had no clue IT was in the shitters this much. I knew it was theatrical but I had no idea that it was by this much.
All these AI tools teams have most valid excuse "We are just a bunch of people who only know Javascript/typescript/NodeJS. Please bear with us while we resolve 10,000 open issues."
I haven't seen the scrolling glitch in months, where previously it was happening multiple times a day. Also haven't seen anyone complain about it in quite some time. Pretty sure they have resolved that.
yeah it's honestly full of vibe fixes to vibe hacks with no overarching design. some great little empirical observations though! i think the only clever bit relative to my own designs is just tracking time since last cache hit to check TTL. idk why i hadn't thought of that, but makes perfect sense
I don't know if you're mindlessly repeating the HN trope that JS/typescript/Electron is bad and that all bloat can easily be prevented, but if you're truly interested in answers to your questions: RTFA.
Other notable agents' LOC: Codex (Rust) ~519K, Gemini (TS) ~445K, OpenCode (TS) ~254K, Pi (TS) ~113K LOC. Pi's modular structure makes it simple to see where most of the code is: core, unified API, coding agent CLI, and TUI have ~3K, ~35K, ~60K, and ~15K LOC respectively. Interestingly, the just-uploaded claw-code Rust version is currently at only 28K.
edit: Claude is actually (TS) 395K. So Gemini is more bloated. Codex is arguable since it's written in a lower-level language.
Just check the leaked code yourself. Two biggest areas seem to be the `utils` module, which is a kitchen sink that covers a lot of functionality from sandboxing, git support, sessions, etc, and `components` module, which contains the react ui. You could certainly build a cli agent with much smaller codebase, with leaner ui code without react, but probably not with this truckload of functionality.
They are doing some strange "reinvent the wheel" stuff.
For example, I found an implementation of a PRNG, mulberry32 [1], in one of the files. That's pretty strange considering TS and Javascript have decent PRNGs built into the language and this thing is being used as literally just a shuffle.
mulberry32 is one of the smallest seedable prngs. Math.random() is not seedable.
If you search mulberry32 in the code, you'll see they use it for a deterministic random. They use your user ID to always pick the same random buddy. Just like you might use someone's user ID to always generate the same random avatar.
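For reference, mulberry32 really is tiny. Below is a TypeScript version of the well-known public-domain algorithm (not copied from the leak), plus a guess at the usage pattern the parent describes — `pickBuddy` and the string hash are purely illustrative:

```typescript
// mulberry32: a small seedable 32-bit PRNG (public-domain algorithm).
// Same seed -> same sequence, which Math.random() cannot provide.
function mulberry32(seed: number): () => number {
  let t = seed >>> 0;
  return () => {
    t = (t + 0x6d2b79f5) >>> 0;
    let x = t;
    x = Math.imul(x ^ (x >>> 15), x | 1);
    x ^= x + Math.imul(x ^ (x >>> 7), x | 61);
    return ((x ^ (x >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Hypothetical usage: hash a user ID into a seed so the same user always
// gets the same "random" pick (like stable avatar generation).
function pickBuddy(userId: string, buddies: string[]): string {
  let seed = 0;
  for (const ch of userId) seed = (seed * 31 + ch.charCodeAt(0)) >>> 0;
  const rand = mulberry32(seed);
  return buddies[Math.floor(rand() * buddies.length)];
}
```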
So that's 10 lines of code accounted for. Any other examples?
Well, at least that confirms they weren't lying when they said all recent updates to claude code were made by claude. You certainly wouldn't do this stuff if you were writing the code yourself.
Software doesn’t end at the 20k loc proof of concept though.
What every developer learns during their “psh i could build that” weekendware attempt is that there is infinite polish to be had, and that their 20k loc PoC was <1% of the work.
That said, doesn't TFA show you what they use their loc for?
I guess because you see 3D stuff in a 3D game instead of text, people assume that it must be the most complex thing in software? Or because you solve hard math problems in 3D, those functions are gonna be the most loc?
It's a completely different domain, e.g. very different integration surface area and abstractions.
Claude Code's source is dumped online so there's probably a more concrete analysis to be had than "that sounds like too many loc".
It is a different domain but that wasn’t your argument. Your argument was that someone was comparing it to a POC when in fact they were comparing to a finished product.
Also a AAA game (with the engine) with physics, networking, and rendering code is up there in terms of the most complex pieces of software.
They just claimed that you can build a 3D game in 500k loc, thus Claude Code shouldn't use so many loc. They/you didn't make the argument for that.
For example, without looking at the code, the superstition also works in the opposite direction: Claude Code is an interface to using AI to do any computer task while a 3D game just lets you shoot some bad guys, so surely the 3D game must be done in fewer loc. That's equally unsatisfying.
You'd have to be more concrete than "sounds like a lot".
> Claude Code is an interface to using AI to do any computer task
Claude Code is quite literally a wrapper around a few APIs. At one point it needed 68GB of RAM to run and requires 11ms to "lay a scene graph" to display a few hundred characters on screen. All links here: https://news.ycombinator.com/item?id=47598488
> while a 3D game just lets you shoot some bad guys, so surely the 3D game must be done in fewer loc.
Yes, because they've vibed it into phenomenally unnecessary complexity. The mistake you continually make in this thread is to look at complexity and see something that is de facto praiseworthy and impressive. It is not.
I could run a text adventure with a Z-machine emulator on a 6502-based machine with 48k of RAM; with Ozmoo you can play games like Tristam Island on a Commodore 64, or an Apple II for you US commenters. I repeat: the game is being emulated on a simple computer with barely more processing power than a current keyboard controller.
As for the Z-machine interpreter (V3 games at least, enough for the mentioned example), even a Game Boy used to play Pokemon Red/Blue -and Gold/Silver/Crystal, with just slightly better specs than the OG GB- can run Tristam Island with keypad-based input, picking selected words from the text or entering them letter by letter as when you name a character in an RPG. A damn Game Boy, a pocket console from 1989.
Not directly running a game, again. Emulating a simple text computer -the virtual machine- to play it. No slowdowns, no nothing, and you can save the game (the interpreter status) to a battery-backed cartridge, such as the Everdrive. Everything under... 128k.
Claude Code and the rest of 'examples' it's what happens when trade programmers call themselves 'engineers' without even a CS degree.
Take the loadInitialMessage function: It's encumbered with real world incremental requirements. You can see exactly the bolted-on conditionals where they added features like --teleport, --fork-session, etc.
The runHeadlessStreaming function is a more extreme version of that where a bunch of incremental, lateral subsystems are wired together, not an example of superfluous loc.
The file is more than 5000 lines of code. The main function is 3000. Code comments make reference to (and depend on guarantees in connection with) the specific behavior of code in other files. Do I need to explain why that's bad?
By real-world polish, I don't mean refining the code quality but rather everything that exists in the delta between proof of concept vs real world solution with actual users.
You don't have to explain why there might be better ways to write some code because the claim is about lines of code. It could be the case that perfectly organizing and abstracting the code would result in even more loc.
Comments like these remind me of the football spectators that shout "Even I could have scored that one" when they see a failed attempt.
Sure. You could have. But you're not the one playing football in the Champions League.
There were many roads that could have gotten you to the Champions League. But now you're in no position to judge the people who got there in the end and how they did it.
I don't think this is warranted given that the comment you're criticising is simply expressing an opinion explicitly solicited by the comment it's responding to.
It’s more like “Player A is better than Player B” coming from a professional player in a smaller league who is certainly qualified to have that opinion.
Yes, exactly. I like this analogy. I am surprised the level of pearl clutching in these discussions on Hacker News. Everybody wants to be an attention sharecropper, lol.
> Sure. You could have. But you're not the one playing football in the Champions League.
The only reason people are using Claude Code is because it's the only way to use their (heavily subsidized) subscription plans. People who are okay with using and paying for their APIs often opt for other, better tools.
Also, analogies don't work. As we know for a fact that Claude Code is a bloated mess that these "champions league-level engineers" can't fix. They literally talk about it themselves: https://news.ycombinator.com/item?id=47598488 (they had to bring in actual Champions League engineers from bun to fix some of their mess).
"Even I would have scored that goal"
==
"I would never ever have created a bloated mess like Anthropic"
You just repeat the same statement.
That bloated mess is what got them to the Champions League. They did what was necessary to get them here. And they succeeded so far.
But hey, according to some it can be replicated in 50k lines of wrapper code around a terminal command, so for Anthropic it's just one afternoon of vibe coding to get rid of this mess. So what's the problem? /s
Honest question: Why does it matter? They got the product shipped and got millions of paying customers and totally revolutionized their business and our industry.
Engineers using LOC as a measure of quality is the inverse of managers using LOC as a measure of productivity.
More code means more entropy, more room for bugs, harder to find issues, more time to fix, more attack surface, more memory used, more duplication, more inconsistencies... I bet you at some point we'll get someone reporting how AI performance deteriorates as the code base grows, and some blog post about how their team improved the success of their AI by trimming the code base down to less than 100k LOC or something like that.
The principles of good software don't suddenly vanish just because now it's a machine writing the code instead of a human, they still have to deal with the issues humans have for more than half a century. The history of programming is new developers coming up with a new paradigm, then rediscovering all the issues that the previous generation had figured out before them.
The history of programming is also each generation writing far less performant code than the one before it. The history of programming is each generation bemoaning the abstractions, waste and lack of performance of the code of the next generation.
It turns out that there is a tradeoff in code between velocity and quality that smart businesses consider relative to hardware cost/quality. The businesses that are outcompeting others are rarely those who have the highest quality code, but rather those that are shipping quickly at a quality level that is satisfactory for current hardware.
> far less performant code than the one before it.
That worked because of rapid advancements in CPU performance. We’ve left that era.
It’s about more than performance. Code is and always has been a liability. Even with agents, you start seeing massive slowdowns with code base size.
It’s why I can nearly one shot a simple game for my kid in 20 minutes with Claude, but using it at work on our massive legacy codebase is only marginally faster than doing it by hand.
You asked why the size of the code matters, I gave you the answer. If you want to ramble about the non technical aspects of software development talk to someone else, I'm not interested.
I asked a rhetorical question to get the reader to think about a topic. I was not looking for a rote recitation of a well-known textbook answer. Maybe you should not be on the comment section of an engineering website if you find discussion so offensive.
The reason it’s not useful as a measure of productivity is because it’s measure of complexity (not directly, but it’s correlated). But it tells you nothing about whether that complexity was necessary for the functionality it provides.
But given that we know the functionality of Claude Code, we can guess how much complexity should be required. We could also be wrong.
>Why does it matter?
If there’s massively more code than there needs to be that does matter to the end user because it’s harder to maintain and has more surface area for bugs and security problems. Even with agents.
Among the hundreds of thousands of lines of code that Anthropic produced was one that leaked the source code. It is likely a config file, not part of the Claude Code software itself, but it's still something to track.
The more lines of code you have the more likely there is for one of them to be wrong and go unnoticed. It results in bugs, vulnerabilities,... and leaks.
It will be exactly that. But that is a 'them' problem. I can look at it and go 'that looks like a bad idea', but they are the ones who have to live with it.
At some point someone will probably take their LLM code, point it back at the LLM, say 'hey, let's refactor this so it uses less code and is easier to read but does the same thing', and let it churn.
One project I worked on, I saw one engineer delete 20k lines of code in a day. He replaced it with a few lines of stored procedure. Those 20k lines of code were in production for years. No one wanted to touch them, but they were a crucial part of the way the thing worked. It just takes someone going 'hey, this isn't right' and sitting down to fix it.
When a TUI requires 68 GB of RAM to run, or when they spend a week not being able to find a bug that causes multiple people to immediately run out of tokens, it's not a "them" problem.
Well, I assume this is all just generated with Claude Code, right? Whether there is much back and forth with the LLM is a valid question and nothing wrong with generating websites (I do it too for some side projects). Claude loves generating websites with a particular style of serif font. We also saw this with https://tboteproject.com/timeline/ and I've just generally seen it from various designs that coworkers have spit out over months using Claude defaults.
I guess I just find it weird because all the signals are messed up so whenever I see these sorts of layouts, I feel like I'm looking at the average where I don't think "gorgeous and interesting" at all. Instead, I'm forced to think "I should be skeptical of this based on the presentation because it presents as high quality but this may be hiding someone who is not actually aware of what they're presenting in any depth" as the author may have just shoved in a prompt and let it spin.
There's actually a similarly designed website (font weights, font styles etc) here in New Zealand (https://nzoilwatch.com/) where at a glance, it might seem like some overloaded professional-backed thing but instead it's just some guy who may or may not know anything about oil at all, yet people are linking it around the place like some sort of authoritative resource.
I would have way less of an issue if people just put their names by things and disclosed their LLM usage (which again, is fine) rather than giving the potentially false impression to unequipped people that the information presented is actually as accurate and trustworthy as the polish would suggest.
We do need "hard effortful careful work" to keep planes flying, electrical grids running and medical devices safe. It's very relevant but very undervalued by our current economy.
That was the leaked code, and now it's just some random dude's harness, btw. He swapped it out. Did a sloppy find-and-replace for "claude" and made it claw.
I was talking to one of the people who works at a big agentic coding tool company. If I recall correctly, he was talking about how they use the tool to build the tool. I was complaining that all of the websites/frontends I make look pretty weak, and I'm amazed they get much slicker-looking UIs with the same tool. He showed me that one way they do it is by having an extensive UI library of components/graphics/whatever, and also mentioned that the folks building their UIs know how to prompt/use the tool because it's backed by years of UI development knowledge & superior resources. I realized I didn't have any of that, and it actually made me feel better.
Last week, as I was struggling to go from vague prompt to an OMG-it's-so-nice-looking web app, I remembered that example above and decided to create my own component library, which I did in a couple of days: https://www.substrateui.dev/. I was actually super happy that I was able to accomplish that, and then I realized I wanted to better understand the content that I had vibe coded into existence. So now I'm recreating that design system step by step w/ Claude Code, filling in gaps in my knowledge & learning a bit about colors, typography, CSS, blah blah blah. It's actually a lot of fun because I'm able to explore all of the concepts and learn enough to build a front end that doesn't suck & is good enough for my use case, without getting stuck for days trying to center a stupid div by hand or playing whack-a-mole-fix-something-and-break-something-else when trying to clean up AI slop.
I was referencing https://www.neobrutalism.dev/ and https://www.retroui.dev/ and slopped my way through it. A lot of it was just asking Claude Code "is this a proper design system?", then I kept doing that until it didn't have anything useful to add. Now I'm using that as the template for understanding such things in more detail.
I'm criticizing the readability of that first "Agent Loop" section.
It's basically a slideshow which advances and presents several content areas which are intended to be read, all while advancing and resizing themselves.
Pausing and clicking through manually stepwise is also pretty obnoxious.
Would much rather just see the content all laid out at once
The people who don’t know how to use an LLM to make them more productive, or are scared it’s going to take their job, are louder than the people who are making good use of them to become more productive.
That just seems to be human nature unfortunately - the complainers are always louder.
As someone currently "making good use of" generative AI while simultaneously being painfully aware of its shortcomings, I think the overall discourse is a bit more nuanced. Bucketing folks into simple "for" and "against" GenAI camps does nothing to cover the vast spectrum in between, making your take ultimately built on a false dichotomy. Further implying those camps fall along the lines of those "in the know" of AI vs those "in denial/scared of" it is patronizing at best, and I've grown tired of this oversimplification parroted out every time the topic of LLM systems comes up.
Those within well informed, technical circles will fall somewhere in between the for/against labels, myself included.
The GenAI hype cycle is finally starting to collapse as the general population starts to realize that these systems aren't the panacea for "everything" after all. They provide enormous utility in some domains like coding, but even then there are massive tradeoffs, footguns and the usual horse blinder ills that come with every hype cycle. I just hope we stop having to "learn the hard way" with respect to undisciplined use of current-gen LLM systems writ large, and cooler heads prevail sooner rather than later.
What? We must have different internets, I agree in general, but the "AI is the second coming" crowd is louder than standing next to a jet on takeoff. I'm in the "AI is making me more productive but a worse developer" crowd, don't know what I count as.
You got shuttled into one bubble and the previous commenter into another advertising / news bubble. It's incredible how different the media experience is for people in different media bubbles.
That's a bit dishonest, the consensus on HN seems to be that LLMs are very good at oneshotting small projects from scratch. Especially when using super mainstream technologies like html and tailwind, as does the discussed website. And especially when it's a one time operation and the project will never need to be maintained, like the discussed website.
I mean, tools change, but I'd be happy to hear if any tool can create that by just saying "create Claude Code Unpack with nice graphics" or some other single prompt. It likely was an iterative process, and it would be lovely if more people started sharing that, because the process itself is also very interesting.
I've created a Chinese-character learning website, and it took me typing 1/3 of LoTR to get there[1]. I would have typed like 1% of that writing the code directly. It is a different process, but it still needs some direction.
I think it is accurate. Where are the autonomous AIs who beat the creator to the punch? When we write "Hello, World!" in C and compile it with `gcc`, do we give credit to every contributor to GNU? AI is a tool that, thus far, only humans are capable of using with unique inspiration. Will this change in the future? Certainly. But is it the case now? I think my questions imply some reasonable objections.
I guess they really do eat their own dogfood and vibe code their way through it without care for technical debt? In a way, it’s a good challenge, but it’s fairly painful to watch the current state of the project (which is about a year old now, so it should be in prime shape).
> is about a year old now, so it should be in prime shape
A 1yo project may be in good shape if written by just one dev, maybe a few. But if you have many devs, I can guarantee it will be messy and buggy. If anything, at 1yo it is probably still full of bugs because not enough time has elapsed for people to run into them.
It's only 510k LoC; at ~100 lines of code a day, this code base would take 23 engineers a year to write. That's with 220 working days, somewhere civilized.
And I'm sure we all know that when working on a greenfield project you can produce a lot more LoC per day than maintaining a legacy one.
Given that vibe code is significantly more verbose, you're probably talking about ~15 engineers worth of code?
I know that's all silly numbers, but this is just attempting to give people some context here, this isn't a massive code base. I've not read a lot of it, so maybe it's better than the verbose code I see Claude put out sometimes.
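For the skeptical, the back-of-the-envelope math checks out (the inputs are the comment's own assumptions, not measured data):

```python
# Sanity-check the engineer-years estimate above.
# Assumptions from the comment: ~100 LoC/day, 220 working days/year.
loc = 510_000
loc_per_day = 100
working_days_per_year = 220

engineer_years = loc / (loc_per_day * working_days_per_year)
print(round(engineer_years, 1))  # prints 23.2
```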
> It's only 510k LoC, at ~100 lines of code a day for a year, this code base would take 23 engineers a year to write.
Correction: a code base of 500kLoC would take 23 engineers a year to write. There is no indication that the functionality needed in a TUI app that does what this app does needs 500kLoC.
The previous poster was making out that after a year the code base would be a mess if humans had written it.
This is a two-pizza team sized project, so it's not a project that the code quality would inevitably spiral out of control due to communication problems.
A single senior architect COULD have kept the code quality under control.
Put yourself in their shoes; either the quality of Claude's coding continues to improve or else their business is probably doomed if it stagnates, so for them it makes sense to punt technical debt to the future when more capable versions of their models will be able to better fix it.
This is why I personally don't take technical debt arguments about how LLM maintained code bases deteriorate with size/age seriously; it presumes that at some point I'll give up with the LLM and be left with a mess to clean up by hand, but that's not going to happen, future maintenance is to be left to LLMs and if that isn't possible for some reason then the project is as good as dead anyway. When you start a project with a LLM the plan should be to see it through with LLMs, planning to have unaided humans take over maintenance at some point is a mistake.
Doesn't this contradict the popular wisdom that "what's good for a human engineer is good for an LLM"? e.g. documentation, separation of concerns, organized files, DRY.
I find LLMs very useful and capable, but in my experience they definitely perform worse when things are unorganized. Maintenance isn't just aesthetics, it's a direct input to correctness.
Maybe a little. I don't hold fast to that popular wisdom, e.g. I think comments are not always a net positive for LLMs. With respect to technical debt, how much debt is too much debt before it gums up the works and arrests forward progress on the software? It probably depends on the individual programmer. LLMs do seem to have a higher tolerance for technical debt than myself personally at least.
I am more worried that we are moving toward creating black boxes and this might turn software "development" into a field as confused as philosophy and dialectics.
Which makes for an interesting thought/discussion: code is written to be read by humans first and executed by computers second. What would code look like if it was written to be read by LLMs? The way they work now (or, how they're trained) is on human language and code, but there might be a style that's better for LLMs, by whatever metric of "better" you may use.
Just a thought experiment, I very much doubt I'm the first one to think of it. It's probably in the same line of "why doesn't an LLM just write assembly directly"
LLMs read and write human-code because humans have been reading and writing human-code. The sample size of assembly problems is, in my estimate, too small for LLMs to efficiently read and write it for common use cases.
I liken it to the problem of applying machine learning to hard video games (e.g. Starcraft). When trained to mimic human strategies, it can be extremely effective, but machine learning will not discover broadly effective strategies on a reasonable timescale.
If you convert "human strategies" to "human theory, programming languages, and design patterns", perhaps the point will be clear.
But: could the ouroboric cycle of LLM use decay the common strategies and design patterns we use into inexplicable blobs of assembly? Can LLMs improve at programming if humans do not advance the theory or invent new languages, patterns, etc?
But StarCraft training was not through mimicking human strategies - it was pure RL with a reward function shaped around winning, which allowed non-human and eventually superhuman strategies to emerge (such as worker oversaturation).
The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if departure from human coding structure is unexpected, as that would require development of a new coding language).
> It's probably in the same line of "why doesn't an LLM just write assembly directly"
My suspicion is that the "language" part of LLMs means they tend to prefer languages which are closer to human languages than assembly and benefit from much of the same abstractions and tooling (hence the recent acquisition of bun and astral).
The problem with that is that assembly isn't portable, and x86 isn't as dominant as it once was, so then you've got arm and x86(_64). But you could target the LLVM machine if you wanted.
Yes but my point was that they seem to explicitly not care about code quality and/or the insane amount of bloat, and seem to just want the LLM to be able to deal with it.
I've heard somewhere that they have roughly 100% code churn every few months, so yes, they unfortunately don't care about code quality. It's a shame, because it's still the best coding agent, in my experience.
> they unfortunately don't care about code quality.
> It's a shame, because it's still the best coding agent, in my experience.
If it is the best, and if it delivers the value users are asking for, then why would they have an incentive to make further $$$ investments in "higher" quality, if the difference wouldn't deliver substantial value or would hurt the ROI?
On many projects, I found this "higher quality" not only failed to deliver more substantial value, but actually hurt the project's ability to deliver the value that matters.
Maybe we are, after all, entering an era of SWE where all this bike-shedding is gone, and the only engineers who will survive are the ones capable of delivering actual value (IME very few per project).
Is this why they ran into a bug with people hitting usage limits even on very short sessions and had to cease all communications for over a day after a week of gaslighting users because they couldn't find the root cause in the "quality doesn't matter" code base?
Or is that why they had to buy Bun, with actual engineers to work on Claude Code, to reduce memory peaks from 68 GB (yes, 68 gigabytes) to a "measly" 1.7? Because code quality doesn't matter?
Or that a year later they still cannot figure out how to render anything in the terminal without flickering?
The only reason people use Claude Code is because it's the only way to use Anthropic's heavily subsidized subscription. You get banned if you use it through other, better, tools.
"Windows is the world's most popular desktop consumer OS. Microsoft are doing everything right, and should never ever change. Who are we to criticise them"
Yes, but as I said, it’s in a way the ultimate form of dogfooding: ideally they’ll be able to get the LLM smart enough to keep the codebase working well long-term.
Now whether that’s actually possible is a second topic.
Just finished looking at Ink here... the frontend world has no shame. Love the gloating about 40x less RAM, as if that amount of memory for a text REPL even approaches defensible. "CC built CC" is not the flex people seem to suggest it is.
Frontend losers not realizing the turds they are releasing. An LLM client fits under netcat+echo+awk+jq, runnable on a 486 if there's no SSL/TLS in the way; a Pentium II could drive fast TLS connections like nothing, in under 32MB of RAM with NetBSD for a simple terminal install, maybe with X and a simple WM with RXVT if you care.
Appreciate the effort, but this is very basic and nothing you need the source code to understand. I was expecting a deep dive into the specific decisions they made, not how a loop of tool calls works.
I found it a useful overview. My primary question about the client source was - is there any secret sauce in it? Based on this site, the answer is no, the client is quite simple/dumb, and all the secret sauce resides on the server/in the model.
I particularly valued the tool list. People in these comments are complaining about how bad the code is, but I found the client-side tools that the model actually uses to be pretty clean/general.
My takeaway was more that at a very basic level they know what they are doing - keep the client general, so that you can innovate on the server side without revving the client as much.
Pardon me, but I think it's rather obvious that it worked this way?
The real value of Anthropic is in the models that they spent hundreds of millions training. Anyone can build a frontend that does a loop, using the model to call tools and accomplish a task. People do it every day.
Sure, they've worked hard to perfect this particular frontend. But it's not like any of this is revolutionary.
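A minimal sketch of the kind of loop the comment describes, with a stand-in `call_model` function and a hypothetical tool table (nothing here is Claude Code's actual code, just the general pattern):

```python
# Sketch of a bare agent loop: send messages to the model, execute any tool
# call it requests, feed the result back, repeat until it answers in prose.
# `call_model` is a stand-in for a real chat-completions API call; `tools`
# maps tool names to local Python callables.
def run_agent(call_model, tools, user_prompt, max_turns=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        reply = call_model(messages)        # model may answer or request a tool
        messages.append(reply)
        if "tool_call" not in reply:
            return reply["content"]         # plain answer: we're done
        name, args = reply["tool_call"]
        result = tools[name](**args)        # run the requested tool locally
        messages.append({"role": "tool", "name": name, "content": str(result)})
    return "max turns exceeded"
```

Everything else (permissions, retries, context management) is layered on top of a loop shaped roughly like this.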
There's this weird thing about AI generated content where it has the perfect presentation but conveys very little.
For example, the whole animation on this website: what does it say beyond that you make a request to the backend and get a response that may have some tool call?
We've moved from "move fast and break things" to "hallucinate fast and patch later." It's the inevitable side effect of using AI to curate AI-written codebases.
That's fair. The site isn't meant to be a deep technical dive; it's more of a visual, high-level guide to what I curated while exploring the codebase with AI assistance. A 500k LoC codebase is just too much to sift through in a short amount of time.
I agree with you and I'm generally an AI "defender" when people superficially dismiss AI capabilities, but this is a more subtle point.
If you prompt with little raw material and little actual specification of what you want to see in the end, eg you just say make a detailed breakdown dashboard-like site that analyzes this codebase, the result will have this uncanny character.
I'd describe it as a kind of "fanfic", it (and now I'm not just talking about this website but my overall impression related to this phenomenon) reminds me a bit like how when I was 15 or so, I had an idea about how the world works then things turned out to be less flashy, less movie-like, less clear-cut, less-impressive-to-a-teenage-boy than I had thought.
If you know the concept of "stupid man's idea of a smart man", I'd say AI made stuff (with little iteration) gives this outward appearance of a smart man from the Reddit-midwit-cinematic-universe. It's like how guns in movies sound more like guns than real guns. It's hyperreality.
Again this is less about the capabilities of AI and it's more connected to the people-pleasing nature of it. It's like you prompt it for some epic dinner and it heaps you up some hmmm epic bacon with bacon yeah (referring to the hivemind-meme). Or BigMac on the poster vs the tray, and the poster one is a model made with different components that are more photogenic. It's a simulacrum.
It looks more like your naive currently imagined thing about what you think you need vs what you'd actually need. It's like prompting your ideal girlfriend into AI avatar existence. I'm sure she will fit your ideal thought and imagination much better but your actual life would need the actual thing.
This relates to the Persona thing that Anthropic has been exploring: each prompt guides the model toward adopting a certain archetypal fictional character as its persona, and there are certain attraction basins that get reinforced with post-training. And in the computer world, simulated action can easily be turned into real action with harnesses and tools, so I'm not saying that it doesn't accomplish the task. But it seems that there are sloppier personas, and that experts can more easily avoid summoning them by giving context that reflects mundane reality, in a way a novice (or an expert who gives little context) cannot. Otherwise the AI persona will be summoned from the Reddit midwit movie.
I'm not fully clear about all this, but I think we have a lot to figure out around how to use and judge the output of AI in a productive workflow. I don't think it will go away ever, but will need some trimming at the edges for sure.
Thanks to Claude Code, we got such a beautifully polished and dazzling website that gives a complete introduction to itself the very moment the leak happened :)
Kairos and auto-dream are more interesting than anything in the agent loop section. Memory consolidation between sessions is the actual unsolved problem. The rest is just plumbing tbh
I think it's good that it's out there, and I wonder why Anthropic have been keeping it closed source; clearly they can't possibly think that the CC source code is a competitive advantage...?
Agents in general are easy to make, and especially trivial to make for yourself, and the result will be much better than what any of the big providers can make for you.
`pi` with whatever commands/extensions you want to make for yourself is better than CC if you really don't want to go through the trouble of making your own thing.
If you think this is not a competitive advantage, then you're missing the point. LLMs aren't so good that they can work through bad abstractions, and pretty much everyone has bad abstractions. CC is what invented some of the best abstractions (not the first). I think they're the first ones who nailed subagents well. There's a lot to learn from them, and while I'm learning a lot from their source code, my heart bleeds that this happened to them.
Sincerely, someone running a team building similar things for analytics.
All of the above at exactly the token cost that it requires for you.
Anything general is always going to be worse for specific use cases, and agents from these big providers are very general. They'll spend tons of tokens doing things that you might not need, including spend extra tokens on supporting MCP, etc., when you might not even need that.
I feel the same way. Given it's AI-written, looking at the code isn't even worth it to me. I would rather read a blog post about how they develop it day to day.
On the one hand, I don't understand why it needs to be half a million lines. On the other, code is becoming machine-shaped, so the maintenance burden of titanic amounts of code and state is actually shrinking.
I like the Claude desktop interface. The color scheme, presentation, fonts, etc. Is there a CSS I can find for the desktop version - I assume it's using some kind of web rendering engine and CSS is part of it.
I don't know why people obsess and spend so much time on this codebase. It isn't (and never was) alien technology. It's just mediocre TypeScript generated by an LLM.
I doubt there is anything special about the transformer code the frontier labs use. The only thing proprietary in it are probably the infrastructure-specific optimizations for very large scale distributed training and some GPU kernel tricks. The real moat is the training data, especially the RLHF/finetuning data and verifiable reward environments, and the GPU clusters of course.
The open source models are quite close, and they'd probably be just as good with the equivalent amount of compute/data the frontier labs have access to.
However, I assume that usage data could be increasingly valuable as well. That will likely help the big commercial cloud models to maintain a head start for general use.
/stickers:
Displays earned achievement stickers for milestones like first commit, 100 tool calls, or marathon sessions. Stickers are stored in the user profile and rendered as ASCII art in the terminal.
That is not what it does at all - it takes you to a stickermule website.
What is the motivation for someone to put out junk like this?
The animated explanation at the top is also way too fast at 1x, almost impossible to follow; that immediately hinted at the author not fully reading/experiencing the result before publishing this.
A year ago I wouldn't have guessed a TUI could be a competitive advantage. But "harness engineering" became a thing, and it turns out the agent wrapper — tool orchestration, context management, permission flows — is where real product value lives. Not as much as the models themselves, but more than most people expected. This leak is a painful reminder of that.
Really nice visualisation of this; it makes understanding the flow at a high level pretty clear. The tool system and command catalog, particularly the gated ones, are super interesting too.
I mean, I get it: vibe-coded software deserves vibe-coded coverage. But I would at least appreciate it if the main part of it, the animation, went at a speed that at least makes it possible to follow along and didn't glitch out with elements randomly disappearing in Firefox...
It's on the front page because it looks really cool. You can complain about it being vibe coded, but it still looks good. If you ask Claude to allow the user to slow down the animation, it can do that quite easily, that's just not a problem caused by vibe coding. And I'm on FF and didn't notice anything glitching out.
Many people seem to believe that Claude Code has some sort of secret sauce in the agent itself, for some reason.
I have no idea why because in my experience Claude Code and the same models inside of Cursor behave almost identically. I think all the secret sauce is in the RLHF.
519K lines of code for something that is using the baseline *nix tools for pretty much everything important, how do they even manage to bloat it this much? I mean I know how technically, but it's still depressing.
Can't they ask CC to make it good, instead of asking it to make it bigger?
Thanks, I'll use this for teaching next week (on what not to do). BashTool.ts :D But, in general, I guess it just shows yet again that the emperor has no clothes.
What exactly is shitty here? A program I use for hours every day to do the job previously done by N human beings, without many bugs, seems to have code that's messy but still clearly works.
Author here. I built this in a few hours after the Claude Code leak.
I've been working on my own coding agent setup for a while. I mostly use pi [0] because it's minimal and easy to extend. When the leak happened, I wanted to study how Anthropic structured things: the tool system, how the agent loop flows. A 500K line codebase is a lot to navigate, so I mapped it visually to give myself a quick reference I could come back to while adapting ideas into my own harness and workflow.
I'm actively updating the site based on feedback from this thread. If anything looks off, or you find something I missed, lmk.
[0] https://pi.dev/
How about releasing your own source code? It is a beautiful site, love the UX as well as functionality.
It screams vibe coding. This is the anthropic look. Just ask Claude and give it a screenshot.
Vibe coding is also why this was released hours after leak instead of days/weeks.
Of course I expect it is vibe coding. It would be insane to code anything by hand these days. But that doesn't mean there is no creative input by the author here.
>> It would be insane to code anything by hand these days.
I strongly disagree, but it made me chuckle a bit, thinking about labeling software as "handmade" or marketing software house as "artisanal".
Depends on what you’re building and whether it’s recreational or not. Complex architecture vs a ui analysis tool, for example. For a ui analysis tool, the only reason you code by hand is for the joy of coding by hand. Even though you can drive a car or fly in a plane there are times to walk or ride a bike still.
Depending on your standards and what company is making it you could even have “cruelty free.”
You're well on the path to AI-fueled psychosis if you genuinely believe this.
I genuinely believe this. Even if you're inventing a new algorithm it is better to describe the algorithm in English and have AI do the implementation.
Must everything be artisanal for some people? </s>
Guess what? People have ZERO reason to Open Source anything now.
One reason, beside basic altruism, is so you can put the projects on your resume. This is especially helpful if the project does very well or gets lots of stars.
That said, Jevons paradox will likely mean far more code is open sourced, simply due to how much code will get written in total.
Why would you think that?
I'm a committed open source dev and I've flipped my own switch from "default public" to "default private".
As a cynical modern eng: people look for landing-page skills.
This is nice, I really like the style/tone/cadence.
The only suggestion/nit I have is that you could add some kind of asterisk or hover helper to the part where you talk about 'Anthropic's message format', as it did make me want to come here and point out how it's ackchually OpenAI's format and is very common.
Only because I figure if this was my first time learning about all this stuff I think I'd appreciate a deep dive into the format or the v1 api as one of the optional next steps.
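For anyone new to this, the widely copied request shape looks roughly like the following (field names follow OpenAI's `/v1/chat/completions` request body; Anthropic's Messages API instead uses content blocks with `tool_use`/`tool_result`, so treat this as the common denominator, not Anthropic's exact wire format):

```python
# Rough shape of the OpenAI-style chat message format: an ordered list of
# role-tagged messages, with tool calls and tool results as messages too.
request = {
    "model": "some-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "List the files in the repo."},
        # The assistant can respond with a tool call instead of text:
        {"role": "assistant", "content": None,
         "tool_calls": [{"id": "call_1", "type": "function",
                         "function": {"name": "ls",
                                      "arguments": '{"path": "."}'}}]},
        # The client runs the tool and sends the result back as a message:
        {"role": "tool", "tool_call_id": "call_1",
         "content": "README.md\nsrc/"},
    ],
}
```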
I’m using pi and cc locally in a docker container connected to a local llama.cpp so the whole agentic loop is 100% offline.
I had used pi and cc to analyze the unpacked cc to compare their design, architecture and implementation.
I guess your site was also coded with pi, and it is very impressive. It would be wonderful if you could do a visualization for pi vs cc as well. My local models might not be powerful enough.
Thanks for the hard work!
What level of success are you getting with a 100% offline loop (and on what hardware, if you don't mind sharing)?
https://gistpreview.github.io/?30a4e491eb2df7523ecc120b86feb... (pi)
https://gist.github.com/ontouchstart/d7e3b7ec6e568164edfd482... (cc)
M5 (24G)
Thank you for this brother, well done
Are there any nice themes for pi?
Can you give me more info about your own agentic setup?
A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare. Right now, they're great for prompting simple sites/platforms but they break at large enterprise repos.
If you don't have a rigid, external state machine governing the workflow, you have to brute-force reliability. That codebase bloat is likely 90% defensive programming; frustration regexes, context sanitizers, tool-retry loops, and state rollbacks just to stop the agent from drifting or silently breaking things.
The visual map is great, but from an architectural perspective, we're still herding cats with massive code volume instead of actually governing the agents at the system level.
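The "defensive programming" described above tends to look like this in practice; a hypothetical sketch of a tool-retry loop with external validation (names are illustrative, not from the leaked code):

```typescript
// Hypothetical tool-retry loop: re-ask the model until its output parses
// as a valid tool call, rather than letting a malformed call through.
type ToolCall = { name: string; args: unknown };

function parseToolCall(raw: string): ToolCall | null {
  try {
    const obj = JSON.parse(raw);
    if (typeof obj?.name === "string") return obj as ToolCall;
  } catch {
    // malformed JSON from the model falls through to null
  }
  return null;
}

async function callWithRetries(
  askModel: () => Promise<string>,
  maxAttempts = 3,
): Promise<ToolCall> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const call = parseToolCall(await askModel());
    if (call !== null) return call; // validated; only now let it through
  }
  throw new Error("model never produced a valid tool call");
}
```

Multiply this pattern by every tool, every failure mode, and every context edge case, and the line count climbs fast.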
I find it really strange that there is so much negative commentary on the _code_, but so little commentary on the core architecture.
My takeaway from looking at the tool list is that they got the fundamental architecture right - try to create a very simple and general set of tools on the client-side (e.g. read file, output rich text, etc) so that the server can innovate rapidly without revving the client (and also so that if, say, the source code leaks, none of the secret sauce does).
Overall, when I see this I think they are focused on the right issues, and I think their tool list looks pretty simple/elegant/general. I picture the server team constantly thinking - we have these client-side tools/APIs, how can we use them optimally? How can we get more out of them. That is where the secret sauce lives.
The tools were mostly already known, no? (I wish they had a "present" tool which allowed the model to copy-paste from files/context/etc., showing the user some content without forcing it through the model)
Yeah in fact one thing claude is freaking great at is decompilation.
If you can download it client side you can likely place a copy in a folder and ask claude
‘decompile the app in this folder to answer further questions on how it works. As an example first question, explain what happens when a user does X’.
I do this with obscure video games where I want a guide on how some mechanics work. E.g. https://pastes.io/jagged-all-69136 as a result of a session.
It can ruin some games but despite the possibility of hallucinations i find it waaay more reliable than random internet answers.
Works for apps too. Obfuscation doesn’t seem to stop it.
Why are "tools" for local IO interesting and not just the only way to do it? I can't really imagine a server architecture that gets to read your local files and present them without a fat client of some kind.
What is the naive implementation you're comparing against? Ssh access to the client machine?
It's early days and we don't fully understand LLM behavior to the extent that we can assume questions like this about agent design are resolved. For instance, is an agent smarter with Claude Code's tools or `exec_command` like Codex? And does that remain true for each subsequent model release?
It’s a distinction that IMHO likely doesn’t make much difference, at least for the mostly automated/non-interactive coding agent use case. What matters more is how well the post-training on synthetic harness traces works.
> but so little commentary on the core architecture.
The core architecture is not interesting? It's an LLM TUI; there's not much there to discuss architecturally. The code itself is the actual fascinating train wreck to look at.
It’s not surprising. There has been quite a bit of industrial research in how to manage mere apes to be deterministic with huge software control systems, and they are an unruly bunch I assure you.
Sunir! Hope you are doing well man, I got a good chuckle from this.
I am! I’ll reach out in another channel to connect.
You need state oriented programming to handle that. I know, because I made one. The keyword is „unpredictability”. Embrace nondeterminism.
It's hard to tell how much it says about difficulty of harnessing vs how much it says about difficulty of maintaining a clean and not bloated codebase when coding with AI.
Why not both? AI writes bloated spaghetti by default. The control plane needs to be human-written and rigid -> at least until the state machine is solid enough to dogfood itself. Then you can safely let the AI enhance the harness from within the sandbox.
Were human organizations (not individuals) any good at the latter anyway?
We propped the entire economy up on it. Just look at the s&p top 10. Actually even top 50 holdings.
If it doesn't deliver on the promise we have bigger problems than "oh no the code is insecure". We went from "I think this will work" to "this has to work because if it doesn't we have one of those 'you owe the bank a billion dollars' situations"
It's weird to look at the world like this. If they deliver doesn't that invalidate thousands of other business plans? What about paying for that?
If they fail, doesn't software and the giant companies that make it go back to owning the world?
“if they deliver”
As I’m reading this, I’m thinking about how, in 1980, it was imagined that everyone needed to learn how to program in BASIC or COBOL, and that the way computers would become ubiquitous was that everybody would be writing programs for them. That turned out to be a quaint and optimistic idea.
It seems like the pitch today is that every company that has a software-like need will be able to use AI to manifest that software into existence, or more generally, to manifest some kind of custom solution into existence. I don’t buy it. Coding the software has never been the true bottleneck, anyone who’s done a hackathon project knows that part can be done quickly. It’s the specifying and the maintenance that is the hard part.
To me, the only way this will actually bear the fruit it’s promising is if they can deliver essentially AGI in a box. A company will pay to rent some units of compute that they can speak to like a person and describe the needs, and it will do anything - solve any problem - a remote worker could do. IF this is delivered, indeed it does invalidate virtually all business models overnight, as whoever hits AGI will price this rental X%[1] below what it would cost to hire humans for similar work, breaking capitalism entirely.
[1] X = 80% below on day 1 as they’ll be so flush with VC cash, and they’d plan to raise the price later. Of course, society will collapse before then because of said breaking of capitalism itself.
Kinda depends how much of it is vibe coded. It could easily be 5x larger than it needs to be just because the LLM felt like it if they've not been careful.
Claude folks proudly claim to have Claude effectively writing itself. The CEO claims it will read an issue and automatically write a fix, tests, commit and submit a PR for it.
Bingo. And them 'being careful' is exactly what bloats it to 500k lines. It's a ton of on-the-fly prompt engineering, context sanitizers, and probabilistic guardrails just to keep the vibes in check.
There seem to be multiple mechanisms compensating for imperfect, lossy memory. "Dreaming" is another band-aid on inability to reliably store memory without loss of precision. How lossy is this pruning process?
It's one thing to give Claude a narrow task with clear parameters, and another to watch errors or incorrect assumptions snowball as you have a more complex conversation or open-ended task.
The time is ripe for deterministic AI; incidentally, this was also released today: https://itsid.cloud/ - presumably will be useful for anyone who wants to quickly recreate an open source Python package or other copyrighted work to change its license.
April Fools' joke. Check the career page.
Can you please explain the use here? I tried the demo, and cat, cp, echo, etc... seem to do the exact same thing without the cost.
Their demo even says:
Unless this is a parody site, can you explain what I am missing here? Token echoing isn't even at the lexeme/pattern level, and not even close to WSD, Ogden's Lemma, symbol grounding, etc...
The intentionally "probably approximately correct" statistical learning framework fundamentally limits reproducibility for PAC/statistical methods like transformers.
CFG inherent ambiguity == post correspondence problem == halting == open-domain frame problem == system identification problem == symbol-grounding problem == Entscheidungsproblem
The only way to get around that is to construct a grammar that isn't. It will never exist for CFGs, programs, types, etc... with arbitrary input.
I just don't see why placing a `14-billion parameter identity transformer` that basically just echoes tokens is a step forward on what makes the problem hard.
Please help me understand.
It's satire - just see the About page.
I don’t understand what this is, is it satire? What is it supposed to be doing or solving?
Take a look at the demo or about page ;)
edit: or click 'Start Pro Trial'
> Right now, they're great for prompting simple sites/platforms but they break at large enterprise repos
Can you expand on this?
My experience is they require excessive steering but do not “break”
I think the "breakage" is in terms of conciseness and compactness, not outright brokenness.
Like that drunk uncle that takes half an hour and 20 000 words to tell you a 500 word story.
Indeed. In some ways, this is just kind of an extrapolation of the overall trend toward extreme bloat that we’ve seen in the past 15 years, just accelerated because LLMs code a lot faster. I’m pretty accustomed to dealing with Web application code bases that are 6-10 years old, where the hacks have piled up on top of other hacks, piled on top of early, tough-to-reverse bad decisions and assumptions, and nobody has had time to go back and do major refactors. This just seems like more of the same, except now you can create a 10 year-old hack-filled code base in three hours.
> they break at large enterprise repos.
I don't know where you get this. you should ask folks at Meta. They are probably the biggest and happiest users of CC
You mean the company where engineers ask chat bots to write chess games in their spare time in order to hit their AI usage requirements? That Meta?
idk why you bring this up. this is irrelevant to whether CC actually works at big corps
I missed that, source?
What do you mean by "actually governing the agents at the system level", and how is it different from "herding cats"?
Herding cats is treating the LLM's context window as your state machine. You're constantly prompt-engineering it to remember the rules, hoping it doesn't hallucinate or silently drop constraints over a long session.
System-level governance means the LLM is completely stripped of orchestration rights. It becomes a stateless, untrusted function. The state lives in a rigid, external database (like SQLite). The database dictates the workflow, hands the LLM a highly constrained task, and runs external validation on the output before the state is ever allowed to advance. The LLM cannot unilaterally decide a task is done.
I got so frustrated with the former while working on a complex project that I paused it to build a CLI to enforce the latter. Planning to drop a Show HN for it later today, actually.
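The pattern described above, sketched in miniature (an in-memory object stands in for the SQLite row, and every name here is hypothetical):

```typescript
// Hypothetical sketch of system-level governance: state lives outside
// the LLM, and only an external validator can advance it. The LLM is a
// stateless, untrusted function that cannot declare its own task done.
type TaskState = "pending" | "in_progress" | "done";

interface Task {
  id: number;
  state: TaskState;
  output?: string;
}

function runTask(
  task: Task,
  llm: (prompt: string) => string,        // stateless, untrusted function
  validate: (output: string) => boolean,  // external check, not the LLM's opinion
): Task {
  const output = llm(`Complete task ${task.id}`);
  // The harness, not the model, decides whether the state advances.
  return validate(output)
    ? { ...task, state: "done", output }
    : { ...task, state: "pending" };
}
```

In a real version the `Task` row would live in SQLite and `validate` would run tests, linters, or schema checks before the transition commits.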
> The database dictates the workflow, hands the LLM a highly constrained task, and runs external validation on the output before the state is ever allowed to advance.
This sounds like where lat.md[0] is headed. Only thing is it doesn't do task constraint. Generally I find the path these tools are taking interesting.
[0] https://github.com/1st1/lat.md
I started that very personal project on Monday; waiting with bated breath. Make sure to add a buy-me-a-coffee link.
Just posted it here: https://news.ycombinator.com/item?id=47601608 Thank you so much for the coffee offer, that genuinely made my day! I don't have a sponsor link set up. Honestly, the best support is just hearing if this actually helps you ship your personal project faster without losing your mind to prompt engineering. I really hope it gives you your sanity back. Let me know how it goes!
Some of your comments have already been marked as "dead", oddly enough, when they just seemed like normal comments explaining your rationale.
edit: Also seems like people's replies are getting downvoted to hell, getting marked as dead, and disappearing. Someone must not like your idea :-)
Sounds good, I'll keep an eye out.
Just dropped the Show HN here: https://news.ycombinator.com/item?id=47601608. Would love to hear your thoughts on the architecture!
There's nothing at that link. Not even a title.
Looks like it was downvoted to hell and marked as dead super fast. I leave the flag for "dead" on in my HN settings (leaves it super desaturated) and this seems unusual
I think these folks are attempting to build systems with IAM, entity states, business rules: all built over two foundational DSLs - https://typmo.com
So this is more like an art than science - and Claude Code happens to be the best at this messy art (imo).
Thousands of developers are using Claude Code successfully (I think?).
So what specifically is the gripe? If it works, it works right?
brute-forcing pattern-matching at scale. These are brittle systems with enormous duct-taping to hold everything together. workarounds on workarounds.
> A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare.
Considering what the entire system ends up being capable of, 500k lines is about 0.001% of what I would have expected something like that to require 10 years ago.
You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
It boggles the mind, really.
Oh, you should have a look at Pi then.
https://github.com/badlogic/pi-mono/tree/main/packages/codin...
> You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
You really need to compare it to the model weights though. That’s the “code”.
>You really need to compare it to the model weights though
Then you'd need to compare the education of any developer in relation to how many LOC their IDE is. That's the "code".
So yea, the analogy doesn't make a whole lot of sense.
It even wrote an entire browser!
By "just" wrapping a browser engine.
... what are you even talking about? "The system that literally writes code" has a few hundreds of trillions of parameters. How is this smaller than LibreOffice?
I know xkcd 1053, but come on.
>A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare. Right now, they're great for prompting simple sites/platforms but they break at large enterprise repos.
Is that the case? I'm pretty sure Claude Code is one of the most massively successful pieces of software made in the last decade. I don't know how that proves your point. Will this codebase become unmanageable eventually? Maybe, but literally every agent harness out there is just copying their lead at this point.
Claude code is a massively successful generator, I use it all the time, but it's not a governance layer.
The fact that the industry is copying a 500k-line harness is the problem. We're automating security vulnerabilities at scale because people are trying to put the guardrails inside the probabilistic code instead of strictly above it.
Standardizing on half a million lines of defensive spaghetti is a huge liability.
>Standardizing on half a million lines of defensive spaghetti is a huge liability.
Again, maybe it will be. Or maybe the way we make software and what is considered good practice will completely change with this new technology. I'm betting on the latter at this point.
I know it seems counter-intuitive, but are there any agent harnesses that aren't written with AI? All these half-a-million-LoC codebases seem insane to me when I run my business on a full-stack web application that's like 50k lines of code, and my MVP was like 10k. These are just TUIs that call a model endpoint with some shell-out commands. These things have only been around for a time measured in months; half a million LoC is crazy to me.
Who cares about LoC? It's a metric that hasn't mattered since we measured productivity in it in the 1980s. For all we know they made these design choices so they could more easily reuse the code in other codebases. Ideally you'd build the library to do that at the same time, but this is startup time constraints to repay loans and shit.
Bugs and vulnerabilities are roughly linear to lines of code in a project.
"Who cares how much concrete we used in this bridge?"
That would be a sensible comparison if concrete was free
Since when are tokens free?
Check out pi coding agent, https://pi.dev
https://github.com/badlogic/pi-mono/blob/main/AGENTS.md
The bloat is mostly error handling for a fundamentally unpredictable system. A simple agent loop is 200 lines. But then you need graceful recovery when the model hallucinates a tool call, context window management, permission boundaries so it doesn't rm -rf your repo, state persistence across crashes, rate limiting, and cost tracking. Each one is simple individually. Together they're 500k lines.
> just TUIs
For starters, CC's TUI is React-based.
Somebody somewhere is bragging to someone about using React to render a grid of ASCII characters.
https://x.com/trq212/status/2014051501786931427
" Most people's mental model of Claude Code is that "it's just a TUI" but it should really be closer to "a small game engine".
For each frame our pipeline constructs a scene graph with React then -> layouts elements -> rasterizes them to a 2d screen -> diffs that against the previous screen -> finally uses the diff to generate ANSI sequences to draw
We have a ~16ms frame budget so we have roughly ~5ms to go from the React scene graph to ANSI written. "
Do they reconstruct the scene graph for each frame?! Maybe I'm overinterpreting the phrasing. Someone take a peek at the source?
You can argue that any UI is like a game engine in that sense. Some make sensible choices and don't need to pretend they have to render at 60fps.
Opencode actually has a pretty solid codebase quality-wise. I have done brief pokes and it's been largely fine.
> These are just TUIs that call a model endpoint with some shell-out commands.
Claude Code CLI is actually horrible: it's a full headless browser rendering that's then converted in real time to text to show in the terminal. And that fact leaks to the user: when the model outputs ASCII, the converter will happily convert it to Unicode (just yesterday there was a TFA complaining about Unicode characters breaking Unix pipes / parsers expecting ASCII).
It's ultra annoying during debugging sessions (that is not when in a full agentic loop where it YOLOs a solution): you can't easily cut/paste from the CLI because the output you get is not what the model did output.
Mega, mega, mega annoying.
What should be something simple becomes a rube-goldberg machinery that, of course, fucks up something fundamental: converting the model's characters to something else is just pathetically bad.
Anyone from Anthropic reading? Get your shit together: if you keep this "headless browser rendering converted to text", at least do not fucking modify the characters.
No it is not. Ink does not use a browser.
If it was 2020, it would be hard to imagine that after some hours/days you'd get a visual representation of the leak with such detailed stats lol
I don't have a lot of experience with them but I would have thought static analysis tools circa 2020 would have managed it just fine.
How was this generated? I'm quite sure "with AI/Claude Code", but what are the actual steps?
For the animations specifically, it's using Motion (fka Framer Motion) Javascript library. If you describe some animations from the site to an LLM and ask it to use Framer motion, you get very similar results. The creator likely just prompted for a while until they were happy with the outcome.
Is there a reason to think it was done by an LLM?
AI-generated UIs, at least ones aimed at an engineering audience, have a very distinct appearance. They seem to always have the following attributes:
- Dark mode design with lots of colors
- Buttons that have vibrant, bright borders and duller backgrounds
- Excessive (IMO) usage of monospace fonts for stylistic reasons
None of this proves that it's AI (the other comments have covered that) but in my experience it's always correct.
It states "curation assisted by AI" at the bottom.
That doesn't mean the code in question was written by AI.
The author posted in this thread. It's AI.
You’re missing the point. It’s not a witch hunt, but rather a discussion on whether things are too quickly attributed to be AI generated.
The biggest reason would be: do you know a single developer who could have produced this in a couple of hours?
Yup, strange to see people still don’t understand LLMs massively speed up coding greenfield pet projects. Anytime you see a new web app, it’s better to assume AI use than not anymore.
I'm not familiar enough with this animation library to answer that. Someone could be very used to this type of website and just copy paste things they've done before.
What does Occam's razor say?
> 500k lines of code
Isn't it a simple REPL with some tools and integrations, written in a very high level language? How the hell is it so big? Is it because it's vibecoded and LLMs strive for bloat, or is it meaningful complexity?
I just checked competitors' codebases:
- Opencode (anomalyco/opencode) is about 670k LOC
- Codex (openai/codex) is about 720k LOC
- Gemini (google-gemini/gemini-cli) is about 570k LOC
Claude Code's 500k LOC doesn't seem out of the ordinary.
> Claude Code's 500k LOC doesn't seem out of the ordinary.
Aren't all the other products also vibe-coded? "All vibe-coded products look like this" doesn't really seem to answer the question "Why is it so damn large?"
It's a repl, that calls out to a blackbox/endpoint for data, and does basic parsing and matching of state with specific actions.
I feel the bulk of those lines should be actions that are performed. Either this is correct or this is not:
1. If the bulk of those lines implement specific and simple actions, why is it so large compared to other software that implements single actions (coreutils, etc)
2. If the actions constitute only a small part of the codebase, wtf is the rest of it doing?
You're complaining about vibe coding while also complaining about how you "feel" about the code. Do you see the irony in that?
>> I feel the bulk of those lines should be actions that are performed. Either this is correct or this is not:
> You're complaining about vibe coding while also complaining about how you "feel" about the code. Do you see the irony in that?
Where did I complain about how I feel about the actual code? I have feelings, negative ones, about the size of the code given the simple functionality it has, but I have no feelings on the code because I did not look at the code.
Are you ESL by any chance? You’re missing the forest for the trees.
All of them are really, REALLY bad.
Bad by whose definition? They work really well in my experience. They aren't perfect but the amount of hand holding has gone down dramatically and you can fix any glaring problems with a code review at the end. I work on a multimillion line code base which does not use any popular frameworks and it does a great job. I may be benefiting from the fact that the codebase is open source and all models have obviously been trained on it.
It takes 10 seconds for Gemini CLI to load. 10 seconds to show an input field. This is for a CLI program.
For comparison, it takes me less time to load Chrome and go to gemini.google.com.
> They work really well in my experience.
Yeahhh strong disagree there, I find Codex and CC to be buggy as hell. Desktop CC is very bad and web version is nigh unusable.
At least Gemini and Claude constantly break down with scrolling in various Linux terminals, something which was solved by countless TUIs decades ago.
I think a lot of the people prasing Claude & co are on Macs.
Most of their issues have been solved a long time ago, with 1000x less code. It is depressing at this point. I really had no clue IT was in the shitters this much. I knew it was theatrical but I had no idea that it was by this much.
All these AI tools teams have most valid excuse "We are just a bunch of people who only know Javascript/typescript/NodeJS. Please bear with us while we resolve 10,000 open issues."
I haven't seen the scrolling glitch in months, where previously it was happening multiple times a day. Also haven't seen anyone complain about it in quite some time. Pretty sure they have resolved that.
They have not! If I am scrolled up while more output is produced, the scrollback jumps to the top pretty consistently.
I'll try again but lately I've been using strictly the VS Code terminal. Gnome Terminal and Termux in Ubuntu 24.04 were unusable even with 1000 hacks.
I'm on a mac! And I still find bugs on a regular basis...
There’s probably a subconscious incentive to make a tool that’s “complex” because the underlying LLM also is complex.
yeah it's honestly full of vibe fixes to vibe hacks with no overarching design. Some great little empirical observations though! I think the only clever bit relative to my own designs is just tracking time since the last cache hit to check TTL. idk why I hadn't thought of that, but it makes perfect sense.
I don't know if you're mindlessly repeating the HN trope that JS/typescript/Electron is bad and that all bloat can easily prevented, but if you're truly interested in answers to your questions: RTFA.
How many LoC should it be, for that kind of program?
Other notable agents' LOC: Codex (Rust) ~519K, Gemini (TS) ~445K, OpenCode (TS) ~254K, Pi (TS) ~113K LOC. Pi's modular structure makes it simple to see where most of code is. Respectively core, unified API, coding agent CLI, TUI have ~3K, ~35K, ~60K, ~15K LOC. Interestingly, the just uploaded claw-code's Rust version is currently at only 28K.
edit: Claude is actually (TS) 395K. So Gemini is more bloat. Codex is arguable since is written in lower-level language.
Well FFmpeg is roughly 1500k, but it's C+Asm and it's dozens of codecs and pretty complex features. SBCL is around 500k I guess.
I'm not saying that this is necessarily too much, I'm genuinely asking if this is a bloat or if it's justified.
It's a TUI API wrapper with a few commands bolted on.
I doubt it needs to be more than 20-50kloc.
You can create a full 3D game with a custom 3D engine in 500k lines. What the hell is Claude Code doing?
Just check the leaked code yourself. Two biggest areas seem to be the `utils` module, which is a kitchen sink that covers a lot of functionality from sandboxing, git support, sessions, etc, and `components` module, which contains the react ui. You could certainly build a cli agent with much smaller codebase, with leaner ui code without react, but probably not with this truckload of functionality.
They are doing some strange "reinvent the wheel" stuff.
For example, I found an implementation of a PRNG, mulberry32 [1], in one of the files. That's pretty strange considering TS and Javascript have decent PRNGs built into the language and this thing is being used as literally just a shuffle.
[1] https://github.com/AprilNEA/claude-code-source/blob/main/src...
mulberry32 is one of the smallest seedable prngs. Math.random() is not seedable.
If you search mulberry32 in the code, you'll see they use it for a deterministic random. They use your user ID to always pick the same random buddy. Just like you might use someone's user ID to always generate the same random avatar.
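For reference, mulberry32 really is tiny; this is the canonical public-domain version, plus a hypothetical buddy-picker in the spirit of the use described above:

```typescript
// mulberry32: a tiny seedable PRNG. The whole point is the seedability
// that Math.random() lacks: the same seed always yields the same sequence.
function mulberry32(seed: number): () => number {
  let a = seed | 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

// Hypothetical use: seed with a user ID so every run picks the same buddy.
function pickBuddy(userId: number, buddies: string[]): string {
  const rng = mulberry32(userId);
  return buddies[Math.floor(rng() * buddies.length)];
}
```

Same idea as deriving a stable avatar from a user ID: determinism without storing a choice anywhere.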
So that's 10 lines of code accounted for. Any other examples?
Well, at least that confirms they weren't lying when they said all recent updates to Claude Code were made by Claude. You certainly wouldn't do this stuff if you were writing the code yourself.
Software doesn’t end at the 20k loc proof of concept though.
What every developer learns during their “psh i could build that” weekendware attempt is that there is infinite polish to be had, and that their 20k loc PoC was <1% of the work.
That said, doesn't TFA show you what they use their loc for?
I think that’s why the author was comparing to to a finished 3D game.
I guess because you see 3D stuff in a 3D game instead of text, people assume that it must be the most complex thing in software? Or because you solve hard math problems in 3D, those functions are gonna be the most loc?
It's a completely different domain, e.g. very different integration surface area and abstractions.
Claude Code's source is dumped online so there's probably a more concrete analysis to be had than "that sounds like too many loc".
It is a different domain but that wasn’t your argument. Your argument was that someone was comparing it to a POC when in fact they were comparing to a finished product.
Also a AAA game (with the engine) with physics, networking, and rendering code is up there in terms of the most complex pieces of software.
They just claimed that you can build a 3D game in 500k loc, thus Claude Code shouldn't use so many loc. They/you didn't render the argument for that.
For example, without looking at the code, the superstition also works in the opposite direction: Claude Code is an interface to using AI to do any computer task while a 3D game just lets you shoot some bad guys, so surely the 3D game must be done in fewer loc. That's equally unsatisfying.
You'd have to be more concrete than "sounds like a lot".
> Claude Code is an interface to using AI to do any computer task
Shouldn't interfaces be smaller than the implementation?
No. We aren't talking about .h vs .c files nor PL interfaces.
A GUI/client can be arbitrarily more or less complex than the things it's GUI'ing.
> Claude Code is an interface to using AI to do any computer task
Claude Code is quite literally a wrapper around a few APIs. At one point it needed 68GB of RAM to run, and it requires 11ms to "lay out a scene graph" to display a few hundred characters on screen. All links here: https://news.ycombinator.com/item?id=47598488
> while a 3D game just lets you shoot some bad guys, so surely the 3D game must be done in fewer loc.
Yes, most games should be done in fewer loc
Your claim was that they could implement the same app in 50k lines of code.
A cursory glance at the codebase shows that it's not just a wrapper around a few APIs.
Yes, because they've vibed it into phenomenally unnecessary complexity. The mistake you continually make in this thread is to look at complexity and see something that is de facto praiseworthy and impressive. It is not.
I could run a text adventure with a Zmachine emulator under a 6502 based machine and 48k of RAM, with Ozmoo you can play games like Tristam Island. On a Commodore 64, or an Apple II for you US commenters. I repeat the game it's being emulated in a simple computer with barely more processing power than a current keyboard controller.
As the ZMachine interpreter (V3 games at least, enough for the mentioned example), even a Game Boy used to play Pokemon Red/Blue -and Crystal/Sylver/Blue, just slightly better specs than the OG GB- can run Tristam Island with keypad based input picking both selected words from the text or letter by letter as when you name a character in an RPG. A damn Game Boy, a pocket console from 1989. Not straightly running a game, again. Emulating a simple text computer -the virtual machine- to play it. No slowdowns, no-nothing, and you can save the game (the interpreter status) in a battery backed cartridge, such as the Everdrive. Everything under... 128k.
Claude Code and the rest of these 'examples' are what happens when trade programmers call themselves 'engineers' without even a CS degree.
Check out `print.ts` to see how "more LOC" doesn't mean "more polished"
Okay, I'm looking at it. Now what?
This file is exactly what I'm talking about.
Take the loadInitialMessage function: It's encumbered with real world incremental requirements. You can see exactly the bolted-on conditionals where they added features like --teleport, --fork-session, etc.
The runHeadlessStreaming function is a more extreme version of that where a bunch of incremental, lateral subsystems are wired together, not an example of superfluous loc.
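To illustrate the pattern being described, here's a hypothetical sketch of what flag accretion in an entry-point function looks like. This is not Claude Code's actual code; only the flag names (`--teleport`, `--fork-session`) come from the comment above, and everything else is invented for illustration:

```typescript
// Hypothetical sketch of how an entry-point function accumulates conditionals
// as CLI flags are bolted on over time. Each feature request appends a branch
// to the same function rather than introducing a new abstraction.
interface CliFlags {
  teleport?: string;      // hypothetical: session handed off from another machine
  forkSession?: boolean;  // hypothetical: branch off an existing session
  resume?: string;        // hypothetical: resume a saved session by id
}

function loadInitialMessage(
  flags: CliFlags,
  savedSessions: Map<string, string>,
): string {
  // v1: the happy path was just this line...
  let message = "New session started.";

  // ...then each release added its own special case.
  if (flags.resume !== undefined) {
    message = savedSessions.get(flags.resume) ?? "Session not found, starting fresh.";
  }
  if (flags.forkSession && flags.resume !== undefined) {
    message = `[forked] ${message}`;
  }
  if (flags.teleport !== undefined) {
    message = `[teleported from ${flags.teleport}] ${message}`;
  }
  return message;
}
```

Whether this shape is "real-world polish" or superfluous complexity is exactly the disagreement in this thread; the conditionals each trace back to a real requirement, but they also interact in ways that get harder to reason about as more are added.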
The file is more than 5000 lines of code. The main function is 3000. Code comments make reference to (and depend on guarantees in connection with) the specific behavior of code in other files. Do I need to explain why that's bad?
By real-world polish, I don't mean refining the code quality but rather everything that exists in the delta between proof of concept vs real world solution with actual users.
You don't have to explain why there might be better ways to write some code because the claim is about lines of code. It could be the case that perfectly organizing and abstracting the code would result in even more loc.
Comments like these remind me of the football spectators that shout "Even I could have scored that one" when they see a failed attempt.
Sure. You could have. But you're not the one playing football in the Champions League.
There were many roads that could have gotten you to the Champions League. But now you're in no position to judge the people who got there in the end and how they did it.
Or you can, but whatever.
I don't think this is warranted given that the comment you're criticising is simply expressing an opinion explicitly solicited by the comment it's responding to.
It’s more like “Player A is better than Player B” coming from a professional player in a smaller league who is certainly qualified to have that opinion.
Yes, exactly. I like this analogy. I am surprised at the level of pearl clutching in these discussions on Hacker News. Everybody wants to be an attention sharecropper, lol.
> Sure. You could have. But you're not the one playing football in the Champions League.
The only reason people are using Claude Code is that it's the only way to use their (heavily subsidized) subscription plans. People who are okay with using and paying for the APIs often opt for other, better tools.
Also, the analogy doesn't work, as we know for a fact that Claude Code is a bloated mess that these "champions league-level engineers" can't fix. They literally talk about it themselves: https://news.ycombinator.com/item?id=47598488 (they had to bring in actual Champions League engineers from bun to fix some of their mess).
"Even I would have scored that goal" == "I would never ever have created a bloated mess like Anthropic"
You just repeat the same statement.
That bloated mess is what got them to the Champions League. They did what was necessary to get them here. And they succeeded so far.
But hey, according to some it can be replicated in 50k lines of wrapper code around a terminal command, so for Anthropic it's just one afternoon of vibe coding to get rid of this mess. So what's the problem? /s
> Even I would have scored that goal" == "I would never ever have created a bloated mess like Anthropic"
Since you keep putting words in my mouth that I never said, and keep being deliberately obtuse, this particular branch is over.
Go enjoy Win11 written by same level of champions or something.
Adieu.
Honest question: Why does it matter? They got the product shipped and got millions of paying customers and totally revolutionized their business and our industry.
Engineers using LOC as a measure of quality is the inverse of managers using LOC as a measure of productivity.
More code means more entropy, more room for bugs, harder to find issues, more time to fix, more attack surface, more memory used, more duplication, more inconsistencies... I bet you at some point we'll get someone reporting how AI performance deteriorates as the code base grows, and some blog post about how their team improved the success of their AI by trimming the code base down to less than 100k LOC or something like that.
The principles of good software don't suddenly vanish just because now it's a machine writing the code instead of a human, they still have to deal with the issues humans have for more than half a century. The history of programming is new developers coming up with a new paradigm, then rediscovering all the issues that the previous generation had figured out before them.
The history of programming is also each generation writing far less performant code than the one before it. The history of programming is each generation bemoaning the abstractions, waste and lack of performance of the code of the next generation.
It turns out that there is a tradeoff in code between velocity and quality that smart businesses consider relative to hardware cost/quality. The businesses that are outcompeting others are rarely those who have the highest quality code, but rather those that are shipping quickly at a quality level that is satisfactory for current hardware.
> far less performant code than the one before it.
That worked because of rapid advancements in CPU performance. We’ve left that era.
It’s about more than performance. Code is and always has been a liability. Even with agents, you start seeing massive slowdowns with code base size.
It’s why I can nearly one shot a simple game for my kid in 20 minutes with Claude, but using it at work on our massive legacy codebase is only marginally faster than doing it by hand.
You asked why the size of the code matters, I gave you the answer. If you want to ramble about the non technical aspects of software development talk to someone else, I'm not interested.
I asked a rhetorical question to get the reader to think about a topic. I was not looking for a rote recitation of a well-known textbook answer. Maybe you should not be on the comment section of an engineering website if you find discussion so offensive.
It doesn't. LoC is only meaningful when you use it to belittle others' code.
hehe, belittle (to make smaller)
The reason it's not useful as a measure of productivity is that it's a measure of complexity (not directly, but it's correlated). But it tells you nothing about whether that complexity was necessary for the functionality it provides.
But given that we know the functionality of Claude Code, we can guess how much complexity should be required. We could also be wrong.
>Why does it matter?
If there’s massively more code than there needs to be that does matter to the end user because it’s harder to maintain and has more surface area for bugs and security problems. Even with agents.
Among the hundreds of thousands of lines of code that Anthropic produced was one that leaked the source code. It is likely to be a config file, not part of the Claude Code software itself, but it is still something to track.
The more lines of code you have, the more likely it is for one of them to be wrong and go unnoticed. It results in bugs, vulnerabilities... and leaks.
More bugs. More costly maintenance.
Exactly. Imagine if Claude Code was a PHP script. Some folks would lose their damn minds
> Honest question: Why does it matter?
Because it's unmaintainable slop that they themselves don't know how to fix when something happens? https://news.ycombinator.com/item?id=47598488
It will be exactly that. But that is a 'them' problem. I can look at it a go 'that looks like a bad idea' but they are the ones who have to live with it.
At some point someone will probably take their LLM code, point the LLM back at it, and say 'hey, let's refactor this so it uses less code and is easier to read but does the same thing', and let it churn.
One project I worked on, I saw one engineer delete 20k lines of code one day. He replaced it with a few lines of stored procedure. That 20k lines of code was in production for years. No one wanted to do anything with it, but it was a crucial part of the way the thing worked. It just takes someone going 'hey, this isn't right' and sitting down to fix it.
> But that is a 'them' problem.
When a TUI requires 68 GB of RAM to run, or when they spend a week not being able to find a bug that causes multiple people to immediately run out of tokens, it's not a "them" problem.
Even today, I'm still astounded that there are people capable of building a gorgeous and interesting site like this in less than 2 days...
Well, I assume this is all just generated with Claude Code, right? Whether there is much back and forth with the LLM is a valid question and nothing wrong with generating websites (I do it too for some side projects). Claude loves generating websites with a particular style of serif font. We also saw this with https://tboteproject.com/timeline/ and I've just generally seen it from various designs that coworkers have spit out over months using Claude defaults.
I guess I just find it weird because all the signals are messed up so whenever I see these sorts of layouts, I feel like I'm looking at the average where I don't think "gorgeous and interesting" at all. Instead, I'm forced to think "I should be skeptical of this based on the presentation because it presents as high quality but this may be hiding someone who is not actually aware of what they're presenting in any depth" as the author may have just shoved in a prompt and let it spin.
There's actually a similarly designed website (font weights, font styles etc) here in New Zealand (https://nzoilwatch.com/) where at a glance, it might seem like some overloaded professional-backed thing but instead it's just some guy who may or may not know anything about oil at all, yet people are linking it around the place like some sort of authoritative resource.
I would have way less of an issue if people just put their names by things and disclosed their LLM usage (which again, is fine) rather than giving the potentially false impression to unequipped people that the information presented is actually as accurate and trustworthy as the polish would suggest.
I really wish I had that clout-chasing gene - it doesn't even occur to me until I see someone else do it.
I'm serious. The hype chasing clearly matters.
Things like this: https://github.com/instructkr/claw-code I mean, ok, serious people put in years of effort to get 100 of those stars ...
it's continually wild how extremely irrelevant hard effortful careful work is.
I think that's the game. Get up, look at the headlines, figure out how you can exploit them with vibe coding, do some hyphy project and repeat.
Maybe some lobster themed bullshit between openclaw and the claudecode leak.
I'm not being a cynic here, I'm just telling you what I'm going to do tomorrow.
We do need "hard effortful careful work" to keep planes flying, electrical grids running and medical devices safe. It's very relevant but very undervalued by our current economy.
That was the leaked code, and now it's just some random dude's harness, btw. He swapped it out. Did a sloppy find-and-replace for "claude" and made it claw.
It's sloppy work
Does not matter. Sloppiness is unimportant
here's my attempt: https://github.com/kristopolous/Claudette
My shit's always too complicated. let's see
This website has "Curation assisted by AI." at the bottom.
Personally, I don't think I will be putting any such disclaimers or disclosures on my work, unless I deem it relevant to the functionality.
Claude itself can generate this in minutes if you know how to ask.
I was talking to one of the people who works at a big agentic coding tool company. If I recall correctly, he was talking about how they use the tool to build the tool. I was complaining that all of the websites/frontends I make look pretty weak, and I'm amazed they get much slicker-looking UIs with the same tool. He showed me that one way they do it is by having an extensive UI library of components/graphics/whatever, and also mentioned that the folks building their UIs know how to prompt/use the tool because it's backed by years of UI development knowledge & superior resources. I realized I didn't have any of that, and it actually made me feel better.
Last week, as I was struggling to go from a vague prompt to an OMG-it's-so-nice-looking web app, I remembered that example above and decided to create my own component library, which I did in a couple days: https://www.substrateui.dev/. I was actually super happy that I was able to accomplish that, and then I realized I wanted to better understand the content that I had vibe coded into existence. So now I'm recreating that design system step by step w/ Claude Code, filling in gaps in my knowledge & learning a bit about colors, typography, CSS, blah blah blah. It's actually a lot of fun because I'm able to explore all of the concepts and learn enough to build a front end that doesn't suck & is good enough for my use case, without getting stuck for days trying to center a stupid div by hand or playing whack-a-mole-fix-something-and-break-something-else when trying to clean up AI slop.
that's really awesome. how did you go about building the component library?
I was referencing https://www.neobrutalism.dev/ and https://www.retroui.dev/ and slopped my way through it. A lot of it was just asking Claude Code "is this a proper design system?", then I kept doing that until it didn't have anything useful to add. Now I'm using that as the template for understanding such things in more detail.
Is this gorgeous?
Content resizing, needing to juggle a speed knob to read, and the overall presentation makes it feel like Edward Tufte flavored nightmare fuel.
It is pretty good, shows numbers clearly on desktop and phone. Not sure what the criticism even means.
I'm criticizing the readability of that first "Agent Loop" section.
It's basically a slideshow which advances and presents several content areas which are intended to be read, all while advancing and resizing themselves.
Pausing and clicking through manually stepwise is also pretty obnoxious.
Would much rather just see the content all laid out at once
But somehow, according to HN, LLMs make you less productive, not more :)
The people who don’t know how to use an LLM to make them more productive, or are scared it’s going to take their job, are louder than the people who are making good use of them to make them more productive.
That just seems to be human nature unfortunately - the complainers are always louder.
As someone currently "making good use of" generative AI while simultaneously being painfully aware of its shortcomings, I think the overall discourse is a bit more nuanced. Bucketing folks into simple "for" and "against" GenAI camps does nothing to cover the vast spectrum in between, making your take ultimately built on a false dichotomy. Further implying those camps fall along the lines of those "in the know" of AI vs "those in denial/scared of it" is patronizing at best, and I've grown tired of this oversimplification parroted out every time the topic of LLM systems comes up.
Those within well informed, technical circles will fall somewhere in between the for/against labels, myself included.
The GenAI hype cycle is finally starting to collapse as the general population starts to realize that these systems aren't the panacea for "everything" after all. They provide enormous utility in some domains like coding, but even then there are massive tradeoffs, footguns and the usual horse blinder ills that come with every hype cycle. I just hope we stop having to "learn the hard way" with respect to undisciplined use of current-gen LLM systems writ large, and cooler heads prevail sooner rather than later.
What? We must have different internets, I agree in general, but the "AI is the second coming" crowd is louder than standing next to a jet on takeoff. I'm in the "AI is making me more productive but a worse developer" crowd, don't know what I count as.
You got shuttled into one bubble and the previous commenter into another advertising / news bubble. It's incredible how different the media experience is for people in different media bubbles.
That's a bit dishonest, the consensus on HN seems to be that LLMs are very good at oneshotting small projects from scratch. Especially when using super mainstream technologies like html and tailwind, as does the discussed website. And especially when it's a one time operation and the project will never need to be maintained, like the discussed website.
More like 2 hours.
I mean, tools change, but I'd be happy to hear if any tool can create that by just saying "create Claude Code Unpack with nice graphics", or some other single prompt. It likely was an iterative process, and it would be lovely if more people started sharing that, because the process itself is also very interesting.
I've created a Chinese character learning website, and it took me typing 1/3 of LoTR to get there[1]. I would have typed like 1% of that writing the code directly. It is a different process, but it still needs some direction.
1. https://hanzirama.com/making-of
I think it is accurate. Where are the autonomous AIs who beat the creator to the punch? When we write "Hello, World!" in C and compile it with `gcc`, do we give credit to every contributor to GNU? AI is a tool that, thus far, only humans are capable of using with unique inspiration. Will this change in the future? Certainly. But is it the case now? I think my questions imply some reasonable objections.
“Che cos’è il genio? È fantasia, intuizione, colpo d’occhio e velocità di esecuzione” (“What is genius? It is imagination, intuition, a sharp eye, and speed of execution”)
Here's a codeberg repo with the leaked source: https://codeberg.org/wklm/claude-code
Dang! Glad to see others doing this. I totally made this site yesterday like 11 hours ago :/ but did not get the traction.
I love your implementation.
Here was my first stab:
https://news.ycombinator.com/item?id=47595140
https://brandonrc.github.io/journey-through-claude-code/
That's where we are now, lots of us doing similar. Don't get hung up on who gets noticed, that "value" is almost zero now.
I guess they really do eat their own dogfood and vibe code their way through it without care for technical debt? In a way, it’s a good challenge, but it’s fairly painful to watch the current state of the project (which is about a year old now, so it should be in prime shape).
> is about a year old now, so it should be in prime shape
A 1yo project may be in good shape if written by just one dev, maybe a few. But if you have many devs, I can guarantee it will be messy and buggy. If anything, at 1yo it is probably still full of bugs because not enough time has elapsed for people to run into them.
It's only 510k LoC; at ~100 lines of code a day, this code base would take 23 engineers a year to write. That's assuming 220 working days a year, somewhere civilized.
And I'm sure we all know that when working on a greenfield project you can produce a lot more LoC per day than maintaining a legacy one.
Given that vibe code is significantly more verbose, you're probably talking about ~15 engineers worth of code?
I know that's all silly numbers, but this is just attempting to give people some context here, this isn't a massive code base. I've not read a lot of it, so maybe it's better than the verbose code I see Claude put out sometimes.
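The back-of-envelope estimate above can be written out explicitly, using the stated assumptions of ~100 LoC per engineer per day and 220 working days per year:

```typescript
// Engineer-years needed to hand-write a codebase, given an assumed
// per-engineer output rate. The defaults match the comment's assumptions.
function engineerYears(
  totalLoc: number,
  locPerDay = 100,
  daysPerYear = 220,
): number {
  // One engineer-year of output, then divide the total by it.
  return totalLoc / (locPerDay * daysPerYear);
}

// 510,000 / (100 * 220) ≈ 23.2 engineer-years
```

As the comment notes, these are silly numbers: the rate assumption dominates the result, and halving it (or assuming more verbose vibe-coded output) scales the estimate proportionally.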
> It's only 510k LoC, at ~100 lines of code a day for a year, this code base would take 23 engineers a year to write.
Correction: a code base of 500kLoC would take 23 engineers a year to write. There is no indication that the functionality needed in a TUI app that does what this app does needs 500kLoC.
When you say it’s not a massive codebase, I’m curious, what are you comparing it to?
The previous poster was making out that in a year the code base would be a mess if people had done it.
This is a two-pizza team sized project, so it's not a project that the code quality would inevitably spiral out of control due to communication problems.
A single senior architect COULD have kept the code quality under control.
Splunk.
Put yourself in their shoes; either the quality of Claude's coding continues to improve or else their business is probably doomed if it stagnates, so for them it makes sense to punt technical debt to the future when more capable versions of their models will be able to better fix it.
This is why I personally don't take technical debt arguments about how LLM maintained code bases deteriorate with size/age seriously; it presumes that at some point I'll give up with the LLM and be left with a mess to clean up by hand, but that's not going to happen, future maintenance is to be left to LLMs and if that isn't possible for some reason then the project is as good as dead anyway. When you start a project with a LLM the plan should be to see it through with LLMs, planning to have unaided humans take over maintenance at some point is a mistake.
Doesn't this contradict the popular wisdom that "what's good for a human engineer is good for an LLM"? e.g. documentation, separation of concerns, organized files, DRY.
I find LLMs very useful and capable, but in my experience they definitely perform worse when things are unorganized. Maintenance isn't just aesthetics, it's a direct input to correctness.
Maybe a little. I don't hold fast to that popular wisdom, e.g. I think comments are not always a net positive for LLMs. With respect to technical debt, how much debt is too much debt before it gums up the works and arrests forward progress on the software? It probably depends on the individual programmer. LLMs do seem to have a higher tolerance for technical debt than myself personally at least.
Good points, I've also found that comments are really hit or miss. Especially because the agents tend not to update them (sounds familiar!).
I am more worried that we are moving toward creating black boxes and this might turn software "development" into a field as confused as philosophy and dialectics.
Boris Cherny, the creator of Claude Code said he uses CC to build CC.
Which makes for an interesting thought / discussion; code is written to be read by humans first, executed by computers second. What would code look like if it was written to be read by LLMs? The way they work now (or, how they're trained) is on human language and code, but there might be a style that's better for LLMs. Whatever metric of "better" you may use.
Just a thought experiment, I very much doubt I'm the first one to think of it. It's probably in the same line of "why doesn't an LLM just write assembly directly"
LLMs read and write human-code because humans have been reading and writing human-code. The sample size of assembly problems is, in my estimate, too small for LLMs to efficiently read and write it for common use cases.
I liken it to the problem of applying machine learning to hard video games (e.g. Starcraft). When trained to mimic human strategies, it can be extremely effective, but machine learning will not discover broadly effective strategies on a reasonable timescale.
If you convert "human strategies" to "human theory, programming languages, and design patterns", perhaps the point will be clear.
But: could the ouroboric cycle of LLM use decay the common strategies and design patterns we use into inexplicable blobs of assembly? Can LLMs improve at programming if humans do not advance the theory or invent new languages, patterns, etc?
But StarCraft training was not done by mimicking human strategies; it was pure RL with a reward function shaped around winning, which allows non-human and eventually super-human strategies (such as worker oversaturation) to emerge.
The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if departure from human coding structure is unexpected, as that would require development of a new coding language).
> It's probably in the same line of "why doesn't an LLM just write assembly directly"
My suspicion is that the "language" part of LLMs means they tend to prefer languages which are closer to human languages than assembly and benefit from much of the same abstractions and tooling (hence the recent acquisition of bun and astral).
The problem with that is that assembly isn't portable, and x86 isn't as dominant as it once was, so then you've got arm and x86(_64). But you could target the LLVM machine if you wanted.
Yes but my point was that they seem to explicitly not care about code quality and/or the insane amount of bloat, and seem to just want the LLM to be able to deal with it.
I've heard somewhere that they have roughly 100% code churn every few months, so yes, they unfortunately don't care about code quality. It's a shame, because it's still the best coding agent, in my experience.
> they unfortunately don't care about code quality.
> It's a shame, because it's still the best coding agent, in my experience.
If it is the best, and if it delivers the value users are asking for, then why would they have an incentive to make further $$$ investments to make it of a "higher" quality if the value this difference could make is not substantial or hurts the ROI?
On many projects I found that this "higher quality" not only failed to deliver more substantial value, but actually hurt the project's ability to deliver the value that matters.
Maybe we are, after all, entering an era of SWE where all this bike-shedding is gone, and the only engineers able to survive in it will be the ones capable of delivering the actual value (IME very few per project).
Is this why they ran into a bug with people hitting usage limits even on very short sessions and had to cease all communications for over a day after a week of gaslighting users because they couldn't find the root cause in the "quality doesn't matter" code base?
Or is that why they had to buy bun, with actual engineers, to work on Claude Code to reduce memory peaks from 68 GB (yes, 68 gigabytes) to a "measly" 1.7? Because code quality doesn't matter?
Or that a year later they still cannot figure out how to render anything in the terminal without flickering?
The only reason people use Claude Code is because it's the only way to use Anthropic's heavily subsidized subscription. You get banned if you use it through other, better, tools.
Sure, now the only thing remaining is you convincing Anthropic that they're doing wrong. Or alternatively you change your perspective.
"Windows is the world's most popular desktop consumer OS. Microsoft are doing everything right, and should never ever change. Who are we to criticise them"
Meanwhile I apparently need to change my perspective about this: https://news.ycombinator.com/item?id=47598488
Yes, but as I said, it’s in a way the ultimate form of dogfooding: ideally they’ll be able to get the LLM smart enough to keep the codebase working well long-term.
Now whether that’s actually possible is a second topic.
They explicitly boast about using claude code to write code: https://x.com/bcherny/status/2007179836704600237
That's how you get "oh this TUI API wrapper needs 68GB of RAM" https://x.com/jarredsumner/status/2026497606575398987 or "we need 16ms to lay out a few hundred characters on screen that's why it's a small game engine": https://x.com/trq212/status/2014051501786931427
Just finished looking at Ink here.. frontend world has no shame. Love the gloating about 40x less RAM as if that amount of memory for a text REPL even approaches defensible. "CC built CC" is not the flex people seem to suggest it is.
Indeed a sad state of affairs.
Frontend losers not realizing the turds they are releasing. An LLM client fits in netcat+echo+awk+jq, runnable on a 486 if there's no SSL/TLS in the way; a Pentium II could drive fast TLS connections like nothing, in under 32MB of RAM, with NetBSD on a simple terminal install, maybe with X and a simple WM with RXVT if you care.
Any loser is a "full stack software engineer" nowadays thanks to claude.
Feel free to add this to Awesome Claude code. https://github.com/rosaboyle/awesome-cc-oss
Appreciate the effort, but this is very basic and nothing you need the source code to understand. I was expecting a deep dive into the specific decisions they made, not how a loop of tool calls works.
I found it a useful overview. My primary question about the client source was - is there any secret sauce in it? Based on this site, the answer is no, the client is quite simple/dumb, and all the secret sauce resides on the server/in the model.
I particularly valued the tool list. People in these comments are complaining about how bad the code is, but I found the client-side tools that the model actually uses to be pretty clean/general.
My takeaway was more that at a very basic level they know what they are doing - keep the client general, so that you can innovate on the server side without revving the client as much.
Pardon me, but I think it's rather obvious that it worked this way?
The real value of Anthropic is in the models that they spent hundreds of millions training. Anyone can build a frontend that does a loop, using the model to call tools and accomplish a task. People do it every day.
Sure, they've worked hard to perfect this particular frontend. But it's not like any of this is revolutionary.
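For the curious, "a frontend that does a loop" really can be sketched in a few dozen lines. Everything here (the message shapes, the tool names, the stubbed model function) is illustrative, not Anthropic's actual protocol:

```typescript
// Minimal agent loop: send history to the model; if it requests a tool,
// run the tool, append the result, and repeat; otherwise return its answer.
type Message = { role: "user" | "assistant" | "tool"; content: string };
type ModelReply = { toolCall?: { name: string; input: string }; text?: string };

// Hypothetical client-side tool registry.
const tools: Record<string, (input: string) => string> = {
  read_file: (path) => `contents of ${path}`, // stub standing in for real I/O
};

function agentLoop(
  model: (history: Message[]) => ModelReply, // stand-in for a real API call
  userPrompt: string,
  maxSteps = 10,
): string {
  const history: Message[] = [{ role: "user", content: userPrompt }];
  for (let step = 0; step < maxSteps; step++) {
    const reply = model(history);
    if (reply.toolCall) {
      // Execute the requested tool and feed the result back into the history.
      const result = tools[reply.toolCall.name](reply.toolCall.input);
      history.push({ role: "assistant", content: `tool: ${reply.toolCall.name}` });
      history.push({ role: "tool", content: result });
    } else {
      return reply.text ?? ""; // no tool call means the model is done
    }
  }
  return "max steps reached";
}
```

The hard parts live outside this sketch: permissions, streaming, context management, and above all the model itself, which is the point being made about where the secret sauce resides.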
Is this what perfection looks like?
Okay those "hidden features" are amazing, especially the cross-session referencing. I hope we can look forward to that in the future
Also I definitely want a Claude Code spirit animal
It's live! If you're on the latest cc you can use /buddy now.
It's a ridiculous folly. I've already lost a well-constructed question because I accidentally tabbed into my pointless 'buddy'.
(Yes, I know I can turn it off. I have.)
I find Claude Code features fall into 2 categories, "hmmmm that could be actually useful" vs "there is more kool aid where that came from"
Ok! First prompt, obviously:
“Complete thyself.”
And I want an octopus. Who orchestrates octopuses.
I need the dragon pet... someone add it to open code / pi, please!
> also related: https://www.ccleaks.com
This deployment is temporarily paused
It's available on the internet archive
https://web.archive.org/web/20260331105051/https://www.cclea...
BTW, that's why you should use your own infrastructure and not depend on Vercel
Same!
This tool is useful if you want to see all the internal commands claude agents are making in real-time:
https://github.com/simple10/agents-observe
There's this weird thing about AI generated content where it has the perfect presentation but conveys very little.
For example, the whole animation on this website: what does it say beyond that you make a request to the backend and get a response that may contain a tool call?
Also it's just randomly incorrect in places. For instance, it lists "fox" as one of the "Buddy" species, but that's not in the code.
The classification is pretty weird sometimes, too. For example the `/exit` slash command is filed under advanced and experimental commands...
That's been corrected, I did another fact checking pass!
Another? Why weren't all the facts checked on the first pass?
We've moved from "move fast and break things" to "hallucinate fast and patch later." It's the inevitable side effect of using AI to curate AI-written codebases.
When you're picking the most likely tokens, you get the least surprising tokens: the ones with the least entropy and the least information per token.
That's fair. The site isn't meant to be a deep technical dive; it's more of a visual high-level guide to what I've curated while exploring the codebase with AI assistance. A 500k LoC codebase is just too much to sift through in a short amount of time.
Really weird, but then it's so easy to spot AI text by this pattern.
I agree with you and I'm generally an AI "defender" when people superficially dismiss AI capabilities, but this is a more subtle point.
If you prompt with little raw material and little actual specification of what you want to see in the end, eg you just say make a detailed breakdown dashboard-like site that analyzes this codebase, the result will have this uncanny character.
I'd describe it as a kind of "fanfic". It (and now I'm not just talking about this website but my overall impression of this phenomenon) reminds me a bit of how, when I was 15 or so, I had an idea about how the world works, then things turned out to be less flashy, less movie-like, less clear-cut, less impressive-to-a-teenage-boy than I had thought.
If you know the concept of "stupid man's idea of a smart man", I'd say AI made stuff (with little iteration) gives this outward appearance of a smart man from the Reddit-midwit-cinematic-universe. It's like how guns in movies sound more like guns than real guns. It's hyperreality.
Again this is less about the capabilities of AI and it's more connected to the people-pleasing nature of it. It's like you prompt it for some epic dinner and it heaps you up some hmmm epic bacon with bacon yeah (referring to the hivemind-meme). Or BigMac on the poster vs the tray, and the poster one is a model made with different components that are more photogenic. It's a simulacrum.
It looks more like your naive currently imagined thing about what you think you need vs what you'd actually need. It's like prompting your ideal girlfriend into AI avatar existence. I'm sure she will fit your ideal thought and imagination much better but your actual life would need the actual thing.
This relates to the Persona thing that Anthropic has been exploring: each prompt guides the model towards adopting a certain archetypal fictional character as its persona, and there are certain attraction basins that get reinforced with post-training. And in the computer world, simulated action can easily be turned into real action with harnesses and tools, so I'm not saying it doesn't accomplish the task. But there do seem to be sloppier personas, and experts can more easily avoid summoning them by giving the model context that reflects mundane reality, while a novice (or an expert who gives little context) cannot. Otherwise the AI persona will be summoned from the Reddit midwit movie.
I'm not fully clear about all this, but I think we have a lot to figure out around how to use and judge the output of AI in a productive workflow. I don't think it will go away ever, but will need some trimming at the edges for sure.
If you need flashy motion graphics to explain 'returns data from API,' you probably can't justify the pixel budget or the user's time.
Thanks to Claude Code, we got such a beautifully polished and dazzling website that gives a complete introduction to itself the very moment the leak happened :)
Kairos and auto-dream are more interesting than anything in the agent loop section. Memory consolidation between sessions is the actual unsolved problem. The rest is just plumbing tbh
Projects like Beads help with memory consolidation by making it somewhat moot, since it stays "offline" and can be recollected at any moment.
I built a site that lets you explore and browse all the Claude Code prompts in a structured way:
https://ccprompts.info
Now we just let the AI "move fast and break things".
ccleaks.com seems to be "temporarily paused" from Vercel.
Here is another one that goes in depth as well: www.markdown.engineering for anyone going deep on learning.
Is it just me, or is the Claude Code application not that fascinating?
I use it all day and love it, don't get me wrong. But it's a terminal-based app that talks to an LLM and calls local functions. Ooookay…
I think it's good that it's out there, and I wonder why Anthropic have been keeping it closed source; clearly they can't possibly think that the CC source code is a competitive advantage...?
Agents in general are easy to make, and trivial to make for yourself especially, and the result will be much better than what any of the big providers can make for you.
`pi` with whatever commands/extensions you want to make for yourself is better than CC if you really don't want to go through the trouble of making your own thing.
If you think this is not a competitive advantage then you're missing the point. LLMs aren't so good that they work through bad abstractions, and pretty much everyone has bad abstractions. CC is what invented some of the best abstractions (not the first). I think they're the first ones who nailed subagents well. There's a lot to learn from them, and while I'm learning a lot from their source code, my heart bleeds that this happened to them.
Sincerely, someone running a team building similar things for analytics.
Why do you think agents you make yourself will be better for you? Integration with tooling that you prefer? Your local dev setup built in?
Curious, as I haven't gotten around to writing my own agent yet.
All of the above at exactly the token cost that it requires for you.
Anything general is always going to be worse for specific use cases, and agents from these big providers are very general. They'll spend tons of tokens doing things that you might not need, including spend extra tokens on supporting MCP, etc., when you might not even need that.
I feel the same way. Given it's AI-written, looking at the code isn't even worth it to me. I would rather read a blog post about how they develop it day to day.
That’s what every agent does. They are fundamentally simple.
But you can do a lot of interesting things on top of this. I highly recommend writing an agent and hooking it up to a local model.
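For anyone curious what "fundamentally simple" means here, the loop can be sketched in a toy form. `call_model` below is a stub standing in for any chat-completion endpoint (e.g. a local llama.cpp server); the message shapes and tool names are illustrative assumptions, not Claude Code's actual protocol:

```python
import subprocess

def run_tool(name: str, arg: str) -> str:
    # Only one tool in this toy: run a shell command and return its stdout.
    if name == "bash":
        return subprocess.run(arg, shell=True, capture_output=True, text=True).stdout
    return f"unknown tool: {name}"

def call_model(messages):
    # Stub: a real implementation would POST `messages` to an LLM API
    # and parse the reply. Here we script one tool call, then a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "bash", "arg": "echo hello"}
    return {"answer": "the command printed: " + messages[-1]["content"].strip()}

def agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = call_model(messages)
        if "answer" in reply:            # model produced a final answer: done
            return reply["answer"]
        result = run_tool(reply["tool"], reply["arg"])  # execute the tool call
        messages.append({"role": "tool", "content": result})

print(agent("run echo"))  # the command printed: hello
```

Everything beyond this (permissions, context compaction, subagents) is layered on top of that same while-loop.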
Clever architecture often can still beat clever programming.
On the one hand, I don't understand why it needs to be half a million lines. On the other, code is becoming machine-shaped, so the maintenance burden of titanic amounts of code and state is actually shrinking.
Please share the prompt/skills used to build it
I like the Claude desktop interface. The color scheme, presentation, fonts, etc. Is there a CSS I can find for the desktop version - I assume it's using some kind of web rendering engine and CSS is part of it.
I'm just admiring the visualizations this guy built in under a day. Wondering how he did it so fast
Don't do the "noise" thing this web page does. It hurts my eyes so bad. Why would you purposefully make your page look like a low-quality JPG?
Nice job - I'm a fan. Makes it easy to get the big picture so I know where to dive in.
FYI - This pops at my work as a sec threat via Cisco Umbrella. :D
I don't know why people obsess and spend so much time on this codebase. It isn't (and never was) alien technology. It's just mediocre TypeScript generated by an LLM.
This is funny because this looks exactly like my vibe coded Portfolio - colors and all.
would be nice if the transformers code for one of these frontier LLM models got leaked, HN will have a field day with a reveal like that
I doubt there is anything special about the transformer code the frontier labs use. The only thing proprietary in it are probably the infrastructure-specific optimizations for very large scale distributed training and some GPU kernel tricks. The real moat is the training data, especially the RLHF/finetuning data and verifiable reward environments, and the GPU clusters of course.
The open source models are quite close, and they'd probably be just as good with the equivalent amount of compute/data the frontier labs have access to.
That's what I'm thinking as well.
However, I assume that usage data could be increasingly valuable as well. That will likely help the big commercial cloud models to maintain a head start for general use.
I'm developing an agent focused on A2A, support for small models, and privacy (https://swival.dev).
I looked at the leaked code expecting some "secret sauce", but honestly didn't find anything interesting.
I don't get the hype around Claude Code. There's nothing new or unique. The real strength is the models.
Has anyone tried USER_TYPE "ant"? I might be crazy, but I have not hit my limit yet after 3 questions.
You guys all get that it's an April Fools' joke, right?
Btw, the 500K is just the source - it does not include tests. I would imagine there are at least 2-4x tests.
This is AI slop.
First command I looked at:
That is not what it does at all - it takes you to a stickermule website. What is the motivation for someone to put out junk like this?
> What is the motivation for someone to put out junk like this?
Getting something with a link to their GitHub onto the frontpage of HN. Because form matters much more in this world than substance.
Clout and reaching the top of HN apparently.
The animated explanation at the top is also way too fast at 1x, almost impossible to follow; that immediately hinted at the author not fully reading/experiencing the result before publishing this.
Why is it that some people feel entitled to take this kind of tone as soon as AI is used?
It's inappropriate to label a free side project 'junk' or 'slop' even if it contains major errors.
Particularly when there's a disclaimer about possible inaccuracies on the page.
People don't like having their time wasted
> Why is it that some people feel entitled to take this kind of tone as soon as AI is used?
BECAUSE ITS WRONG! THE DATA IS WRONG!
A year ago I wouldn't have guessed a TUI could be a competitive advantage. But "harness engineering" became a thing, and it turns out the agent wrapper — tool orchestration, context management, permission flows — is where real product value lives. Not as much as the models themselves, but more than most people expected. This leak is a painful reminder of that.
No point in reading this, they are continuing to lobotomize it daily...
Really nice visualisation of this; it makes understanding the flow at a high level pretty clear. Also the tool system and command catalog, particularly the gated ones, are super interesting.
Looks like ccleaks is down, eek - not long before ccunpacked meets the same fate.
Nice presentation. The reality is there is nothing really special about the claude code harness?
Nice site. I might suggest moving SendMessage to the Hidden Features, as they don't appear to have implemented ReadMessage or ListMessages tools.
I mean, I get it: vibe-coded software deserves vibe-coded coverage. But I would at least appreciate it if the main part of it, the animation, went at a speed that at least makes it possible to follow along and didn't glitch out with elements randomly disappearing in Firefox...
How is this on the front page?
It's on the front page because it looks really cool. You can complain about it being vibe coded, but it still looks good. If you ask Claude to allow the user to slow down the animation, it can do that quite easily, that's just not a problem caused by vibe coding. And I'm on FF and didn't notice anything glitching out.
I hope /Buddy is ported across to OpenCode.
However, excellent development practices involve modularizing code based on functional domains or responsibilities.
The utils directory should only contain truly generic, business-agnostic utilities (such as date retrieval, simple string manipulation, etc.).
We can see that the vibe-coded output is not what a professional engineer would write. This may come down to how the engineers used the vibe-coding tools.
That's the hallmark of "vibe coding": optimizing for immediate output while treating the utils folder as a generic junk drawer.
Another "hallmark" that happens to describe pretty much every codebase people wrote even before LLMs were a thing.
Sadly, the AIs have been trained on human-developed repos.
So it does use ripgrep and not unix grep. [0] I knew it from some other commenters here on HN, but it's nice to see it in the source as well.
0 - https://github.com/zackautocracy/claude-code/blob/main/src/u...
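The usual reason to prefer ripgrep is speed plus sane defaults (recursion, .gitignore awareness), with plain grep as a fallback on machines that lack it. A sketch of that preference logic in Python; the function names are mine, not from the leaked source:

```python
import shutil
import subprocess

def search_tool() -> str:
    """Prefer ripgrep when it's on PATH; otherwise fall back to POSIX grep."""
    return "rg" if shutil.which("rg") else "grep"

def search(pattern: str, path: str = ".") -> str:
    tool = search_tool()
    # rg recurses by default; grep needs -r. -n adds line numbers in both.
    cmd = [tool, "-n", pattern, path] if tool == "rg" else [tool, "-rn", pattern, path]
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```

Bundling its own rg binary presumably lets CC avoid depending on whatever the host happens to have installed.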
I just stumbled on a fascinating replacement candidate while clicking around on embed models on hugging face: https://github.com/lightonai/next-plaid/tree/main/colgrep
it looks really interesting.
Same guide for opencode would be nice too
Ah, good well-architected code, finally... With most of the code in utils/other :D
I prefer this mapping from Nikita @ CosmoGraph: https://run.cosmograph.app/public/dfb673fc-bdb9-4713-a6d6-20...
This loads some bars and then just breaks for me :(
why do people care so much? it's just an agentic loop
Many people seem to believe Claude Code has some sort of secret sauce in the agent itself for some reason.
I have no idea why because in my experience Claude Code and the same models inside of Cursor behave almost identically. I think all the secret sauce is in the RLHF.
No mention of undercover mode?
519K lines of code for something that is using the baseline *nix tools for pretty much everything important, how do they even manage to bloat it this much? I mean I know how technically, but it's still depressing. Can't they ask CC to make it good, instead of asking it to make it bigger?
Is that safe to use?
It's an April Fools' joke. This has really gone wide.
I expect dozens more "research articles" that

- find nothing
- still manage to fill entire pages
- somehow have a similar structure
- are boring as fuck

At least this one is 3/4; the previous one had BINGO.
Nice example: to find all TODOs, spin up the AI machine.
I do Shift+Ctrl+F.
How the hell is it 500k lines?
It is vibe coded.
it's just bunch of useless junk
I think this is unethical, and "everyone else is also doing it" is not a valid excuse.
Ccleaks is down?
cool Archaeologization Collection Output
Source leak or free code review? I can say that there is no bad publicity.
Anthropic is now more open than OpenAI itself lmao!
I got a goose
War flashbacks to genshin
What is so fascinating about Claude Code? We have Codex, which is open source already. Is there something special to learn from Claude Code?
this claude code leak is such a fuck up...
The fact that now every agent designer knows what was already built is a huge shot of steroids to their codebase!
Holy fukking hell thats some bad code. Full blown AI slop.
Thanks, I'll use this for teaching next week (on what not to do). BashTool.ts :D But, in general, I guess it just shows yet again that the emperor has no clothes.
Are you not feeling the vibes?
In all seriousness. I think you‘re supposed to run these in some kind of sandbox.
> it just shows yet again that the emperor has no clothes
Which emperor, specifically?
Hey, nice job! Next time tell Claude to add some explosions, car crashes and stuntmen to the design! Who cares about content anyway ... https://speculumx.at/blogpost/getting-sick-of-ai-slop
Enshittification galore
What exactly is shitty here? A program I use for hours every day to do the job previously done by N human beings, without many bugs, happens to have code that's seemingly messy but still clearly works.
Maybe if it was working the way it was 2 months ago, life would be good?