This format, or similar formats, seem to be the standard now, I was just reading the "Lessons from Building Manus"[1] post and they discuss the Hermes Format[2] which seems similar in terms of being pseudo-xml.
My initial thought was how hacky the whole thing feels, but then the fact that it works and gives rise to complex behaviour (like coercing specific tool selection in the Manus post) is quite simple and elegant.
Also as an aside, it is good that it appears that each standard tag is a single token in the OpenAI repo.
[1] https://manus.im/blog/Context-Engineering-for-AI-Agents-Less... [2] https://github.com/NousResearch/Hermes-Function-Calling
Prediction: GPT-5 will use a consortium of models for parallel reasoning, possibly including their OSS versions, each using different 'channels' from the harmony spec.
I have a branch of llm-consortium where I was noodling with giving each member model a role. The only problem is that it's expensive to evaluate these ideas, so I put it on hold. But maybe now that OSS models are cheap I can try it on those.
Computer science's favorite move: we've reached the limits of a scaling law meant to benefit single-threaded processes, so let's go parallel...
we've been scaling in one direction for 2 years now...
What are your thoughts on other models like Qwen using something like this?
Pardon me, but do you think this method is superior to mixture of experts? What are your thoughts?
I tested a consortium of Qwens on the brainfuck test and it solved it, while the single models failed.
MoEs are a single model: an 'expert' is a subset of the network's weights that a router picks per token, which is what makes them run faster. A consortium is a type of parallel reasoning that uses multiple instances of the same or different models to generate responses in parallel and then selects the best one.
All models have a jagged frontier with weird skill gaps. A consortium can bridge those gaps and increase performance on the frontier.
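For anyone curious, the idea is simple to sketch in Python. The call_model helper below is hypothetical, standing in for whatever LLM client you use, and the judge prompt is just one way to pick a winner:

    # Minimal consortium sketch: fan one prompt out to several member
    # models, then ask a judge model to pick the strongest answer.
    def call_model(model: str, prompt: str) -> str:
        raise NotImplementedError("wire up your LLM client here")

    def consortium(prompt: str, members: list[str], judge: str) -> str:
        # Generate candidate answers (conceptually in parallel).
        candidates = [call_model(m, prompt) for m in members]
        numbered = "\n\n".join(
            f"Answer {i + 1}:\n{c}" for i, c in enumerate(candidates)
        )
        verdict = call_model(
            judge,
            f"Question:\n{prompt}\n\n{numbered}\n\n"
            "Reply with only the number of the best answer.",
        )
        # Fall back to the first answer if the verdict doesn't parse.
        digits = "".join(ch for ch in verdict if ch.isdigit())
        idx = int(digits) - 1 if digits.isdigit() else 0
        return candidates[idx] if 0 <= idx < len(candidates) else candidates[0]

The jagged-frontier point is why this can help: different members fail on different inputs, and the judge only has to recognize a good answer, which is usually easier than producing one.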
Has anyone compared a consortium of leading-edge 3B-20B models to the most powerful models?
I'd love to see how they performed.
Do you have a favourite benchmark? I may just have the budget for testing some 3B models.
This is what Grok 4 Heavy does with apparent success.
They may have been inspired by it. It was shared by karpathy... https://x.com/karpathy/status/1870692546969735361
I wish someone would extract the Grok Heavy prompts to confirm, but I guess those jailbreakers don't have the $200 sub.
Yesterday I gave a presentation on the role of harmony in AI — as a matter of philosophical interest. I’d previously written a large literature review on the concept of harmony (here: https://www.sciencedirect.com/science/article/pii/S240587262...). If you are curious about the slides, here: Bit.ly/ozora2025
I assume they are using the concept of harmony to refer to the consistent response format? Or does the name signal something about their intentions for the open-weights release?
> The format enables the model to output to multiple different channels for chain of thought, and tool calling preambles along with regular responses
That's pretty cool and seems like a logical next step to structure AI outputs. We started out with a stream of plaintext. In the future perhaps we'll have complex typed output.
Humans also emit many channels of information simultaneously. Our speech, tone of voice, body language, our appearance - it all affects how our information is received by others.
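For a concrete picture, a rendered harmony conversation looks roughly like this (the channel names analysis/commentary/final and the special tokens are from the harmony docs; the content here is invented):

    <|start|>user<|message|>What is 2 + 2?<|end|>
    <|start|>assistant<|channel|>analysis<|message|>Simple arithmetic; no tools needed.<|end|>
    <|start|>assistant<|channel|>final<|message|>2 + 2 = 4.<|return|>

Per the quoted description above, the analysis channel carries chain of thought, commentary carries tool-calling preambles, and final is the regular response the user actually sees.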
Links seem to be working now:
- https://openai.com/index/introducing-gpt-oss/
- https://cdn.openai.com/pdf/419b6906-9da6-406c-a19d-1bb078ac7...
Same here - all those links are either broken or asking for auth. Classic case of announcing something before the infrastructure is ready.
This kind of coordination failure is surprisingly common with AI releases lately. Remember when everyone was trying to access GPT-4 on launch day? Or when Anthropic's Claude had those random outages during their big announcements?
Makes you wonder if they're rushing to counter Google's Genie 3 news and got caught with their pants down during the GitHub outage. The timing seems too coincidental.
At least when it does go live, having truly open weights models will be huge for the community. Just wish they'd test their deployment pipeline before hitting 'publish' on the blog post.
gpt-oss models are reportedly being hosted on Hugging Face.
https://www.bleepingcomputer.com/news/artificial-intelligenc...
(as of 3 days ago)
pelican when
What's pelican?
@simonw asks every new foundation model to generate an SVG of a pelican riding a bicycle as part of his review posts
The foundation model companies should just learn that case and call it a day.
Yes, they should definitely Goodhart the Pelican Test so we can... just have to invent a new test?
Yes but then you can use the pelican test in all your marketing where you say that this is the <apple slide deck voice> most capable model. ever. And then ignore the new test except as a footnote in some long dry boring evaluation.
He spotted a pelican in a presentation the other week, so they're on to him and he's on to them.
Benchmark-driven development, like Dieselgate in automotive.
I hope this ends in well-poisoning, where all data about pelicans is associated with bicycles in some way, so that you can't get any model to give you correct information about pelicans or bicycles - but you can still get a pelican riding a bicycle.
wen pelican.... WEN BICYCLE
I wonder how much performance is left on the table due to it not being zero-copy.
The page links to: https://gpt-oss.com/ and https://openai.com/open-models
... but these links aren't active yet. I presume they will be imminently, and I guess that means that OpenAI are releasing an open weights GPT model today?
what's this for?
Basically, LLMs are trained with a specific conversation format, and if your input does not follow that format, the LLM will perform poorly. We usually don't have to worry about this because their API automatically puts our input into the proper format, but I guess now that they open sourced a model, they are also releasing the corresponding format.
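For example, with Hugging Face transformers the tokenizer's chat template applies the format for you. A sketch (the model id openai/gpt-oss-20b is assumed from the release notes):

    # Render a conversation into the exact string format the model was
    # trained on, rather than hand-writing the special tokens yourself.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does the harmony format look like?"},
    ]
    # tokenize=False returns the raw formatted string so you can inspect it.
    prompt = tok.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)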
read the README
It's weird to me that OpenAI would release a local model that you can't plug directly into their ChatGPT client... kind of defeats the purpose.
It also creates a walled garden on purpose.
None of their links work?
- https://gpt-oss.com/ Auth required?
- https://openai.com/open-models/ seems empty?
- https://cookbook.openai.com/topic/gpt-oss 404
- https://openai.com/index/gpt-oss-model-card/ empty page?
Am I holding the internet wrong?
I think they're currently doing the release. I am guessing those will all be online soon.
The new transformers release describes the model: https://github.com/huggingface/transformers/releases/tag/v4....
> GPT OSS is a hugely anticipated open-weights release by OpenAI, designed for powerful reasoning, agentic tasks, and versatile developer use cases. It comprises two models: a big one with 117B parameters (gpt-oss-120b), and a smaller one with 21B parameters (gpt-oss-20b). Both are mixture-of-experts (MoEs) and use a 4-bit quantization scheme (MXFP4), enabling fast inference (thanks to fewer active parameters, see details below) while keeping resource usage low. The large model fits on a single H100 GPU, while the small one runs within 16GB of memory and is perfect for consumer hardware and on-device applications.
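If those release notes are accurate, running the small model should be the usual transformers flow. An untested sketch (Hub id openai/gpt-oss-20b assumed):

    # High-level pipeline: handles the chat template and generation loop.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="openai/gpt-oss-20b",
        torch_dtype="auto",  # keep the shipped dtypes where possible
        device_map="auto",   # spread across available GPUs/CPU
    )
    out = pipe(
        [{"role": "user", "content": "Hello! Who are you?"}],
        max_new_tokens=64,
    )
    print(out[0]["generated_text"][-1]["content"])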
Presumably they use GitHub and their release process is delayed by the current GitHub outage.
Apparently the issue was resolved, but there's no indication there was an outage in the last 24 hours when looking at status... https://www.githubstatus.com/
Not a fan of this style of communication.
The status page now reflects an issue; at the time of writing, it had been resolved for almost an hour with no indication that anything had happened.
IDK I think this is on purpose: https://nitter.net/sama/status/1952759361417466016#m
EDIT: never mind, I spoke too soon! I guess this was referring to GPT-5 later this week. https://openai.com/open-models/ is live.
Cosmically bad timing.
> Am I holding the internet wrong?
Likely, considering every single one opens right up for me.
Also, https://cookbook.openai.com/articles/openai-harmony is referenced 3 times in the README, but it's a 404.
The link does work now, for what it's worth.
https://ollama.com/library/gpt-oss
Does seem like we're gonna get open weights models today tho
I'm guessing someone published the github repo too early.
GitHub is having an outage.
OpenAI might have tried coordinating the press release of their open model to counter Google Genie 3 news but got stuck in the middle of the outage.
GitHub got hugged to death by OpenAI :)
Every OpenAI announcement has threads of people complaining that the links don't work yet as if you can trivially deploy 10 different interconnected websites completely instantly.
> Am I holding the internet wrong?
The GitHub outage is delaying their release.