I glanced over this and was excited, thinking this could be a Linux ISO with no browser, similar to Tiny Core, but I found out through the comments that its focus on LLMs etc. is very vague and weird.
I just feel like it's seriously not getting the idea. I want to dissect this post's tenets from the perspective of a Linux user of just a few years.
My first impression was positive, then negative, and now it's genuinely mixed.
I feel like this intends to become so hardware focused that I am not even sure what they mean by this. From my limited knowledge, Linux does a lot of work simply to boot into a predictable environment on nearly every computing device, to the point that there are now things like Nix that can rebuild your system deterministically.
I still think there is a point in making something completely new instead of yet another Unix, but my hopes aren't very high, sorry. You would have to convince me why the world would be better off with this instead of Linux, and their notes on why not Linux leave me with mixed thoughts.
> Linux is a monolithic kernel. All drivers, filesystems, and core services share the same privilege space. A single bug, eg. a bad pointer dereference in a GPU driver can corrupt kernel memory and take down the entire system.
Can't drivers be loaded at runtime, and aren't there ways to keep a failing driver from taking down the entire system? I think this is just how a monolithic kernel works, no?
I read more discussion of the monolithic kernel vs. microkernel question on the Tanenbaum–Torvalds debate wiki [1], and here is something I think is apt:
> Torvalds, aware of GNU's efforts to create a kernel, stated "If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't."
Someone else on the Usenet group also called GNU Hurd vaporware, and I think there is some truth to that: the GNU Hurd team had been working on Hurd far longer than Linus had been working on Linux (an excerpt from the same wiki page).
Another line I want to share from the wiki: different design goals get you different designs.
I was going to criticize the Radiant computer, but hey, it's open source; nobody's stopping you from working on it. And since that line was said in defense of Linux back then, it can defend this project just as well.
But at the same time, my concern with this or any such project is it becoming vaporware. Linux is way too big and actually good enough for most users. I don't think the world can have a perfect OS, but it can have a good enough one, and for the end user Linux is exactly that. It's open source and genuinely good at what it does; there is no denying it. You could live your whole life using Linux, IMO. It's beautiful.
I used to defend NetBSD, or hate systemd, and so on, but the truth is that nobody is forcing you to use systemd or NetBSD; you could well run a server without them. Still, mass adoption convinces me that at the sysadmin level, Linux, maybe even Debian or systemd specifically, has real advantages.
I think Linux is really, really good, just the best IMO, though I will still try out FreeBSD, OpenBSD, etc. I genuinely love it. It's honestly wild, even a fever dream, that something like Linux actually exists. It's brilliant, and the ecosystem around it is chef's kiss.
One can try, and these are your developer hours, but I don't want to see this turn into vaporware, so I will just ask: how do you prevent this project from becoming vaporware? I am sure this isn't the first time someone has proposed the ideal system, and it won't be the last.
Edit: Sorry this got long. I got a little carried away reading the wiki article; it's so good.
> Computing machines are instruments of creativity, companions in learning, and partners in thought. They should amplify human intention.
An admirable goal. However, putting that next to a bunch of AI slop artwork and this statement...
> One of our goals is to explore how an A.I.-native computer system can enhance the creative process, all while keeping data private.
...is comically out of touch.
The intersection between "I want simple and understandable computing systems" and "I want AI" is basically zero. (Yes, I'm sure some of you exist, my point is that you're combining a slim segment of users who want this approach to tech with another slim segment of users who want AI.)
> RadiantOS treats your computer as an extension of your mind. It’s designed to capture your knowledge, habits, and workflows at the system layer. Data is interlinked like a personal wiki, not scattered across folders.
This sounded really interesting... till I read this:
> It’s an AI-native operating system. Artificial neural networks are built in and run locally. The OS understands what applications can do, what they expose, and how they fit together. It can integrate features automatically, without extra code. AI is used to extend your ability, help you understand the system and be your creative aid.
(From https://radiant.computer/system/os/)
That's... kind of a weird thing to have? Other than that, it actually looks nice.
I actually don't mind it, necessarily. I wonder if the medium-to-far future of software is a ground-level AI OS that spins up special-purpose applications on the fly, in real time.
What clashes for me is that I don't see how that has anything to do with the mission statement about getting away from social media and legacy hardware support. In fact it seems kind of diametrically opposite, suggesting intentionally hand-crafted, opinionated architecture and software principles. Nothing about the statement would have led me to believe that AI is the culmination of the idea.
And again, the statement itself I am fine with! In fact I am against the culture of reflex backlash to vision statements and new ventures. But I did not take the upshot of this particular statement to be that AI was the culmination of the vision.
Same. I was super excited until I saw the AI stuff you pointed out. I'll have to read more about that. I like the idea of a new OS that isn't just a Linux clone, with an old-school networking stack, one that takes computing in a different direction. I don't have much need for the AI stuff outside of some occasional LLM use. I'd like to hear more from the authors on this.
I also understand that the old BBS way of communicating isn't perfect, but looking into web browsers seems to just be straight up insanity. Surely we can come up with something different now that takes the lessons learned over the past few decades combined with more modern hardware. I don't pretend to know what that would look like, but the idea of being able to fully understand the overall software stack (at least conceptually) is pretty tempting.
> Radiance compiler targeting RISC-V ISA. Involves writing an R' compiler in C and then porting it to R'.
R is a language for statistics and data analysis; I can't understand why they would choose it for low-level systems programming given modern alternatives like Go or Rust. Maybe it has to do with the AI integration.
It seems interesting enough to follow, but I'm uncertain about its actual direction.
I think R’ is completely separate from R-the-stats-language and more like a cut down version of their Radiance language. Pretty common way to bootstrap a self-hosted runtime.
Yes, R' is "R prime", unrelated to the statistics language. Honestly didn't think about it that much.
Most of the text on the site seems LLM written as well. Given that the scope of the project involves making their own programming language, OS, and computing hardware, and that they don't seem to have made much tangible progress towards these goals, I don't understand why they decided to spend time making a fancy project site before they have anything to show. It makes me doubt that this will end up going anywhere.
They've written an R' compiler in C, and ported its lexer and parser to be self-hosted, with source code for those included in blog posts.
I'm not a fan of all the LLM and image generator usage either, though.
>Most of the text on the site seems LLM written as well.
I was thinking the same thing. Out of curiosity I pasted it into one of those detection sites and it said 0% AI written, but the tone of vague transcendence certainly raised my eyebrow.
People had similar fears about OLE in Windows 95.
That’s kind of where my mind went too. They’re pitching this functionality for use by AI, but if it’s actually something like OLE or the Smalltalk browser or something like that where you can programmatically enumerate APIs, this has a lot of potential for non-AI use cases too that I generally find lacking in conventional platforms.
There are lots of systems that have tried to do something like the first quote. They're usually referred to as "semantic OSes", since the OS itself manages the capturing of semantic links.
I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context. If the entire system is designed from the ground up for AI and the model runs locally, perhaps many of the current issues will be diminished.
> I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context.
I do. "AI" is not trustworthy enough to be anything but "clumsily bolted on without proper context."
Why isn't AI just another application that can be run on the device? Surely we expose the necessary interfaces through the OS and the application goes from there?
I think it's fine if all the 'ai' is local.
I haven't read all of the documentation around this project, but I hope it's in the same vein as the Canon Cat and the Apple IIGS (and other early computer systems with quick and easy access to some kind of programmable environment). (As an aside, I think Apple tried to keep this going with AppleScript and Automator but didn't quite pull it off.)
I think there is a weird trick though. General purpose computers are great, they can do anything, and many people bog down their systems with everything as a result. I feel like past projects like Microsoft Bob and the Canon Cat were also in this general thought pattern: strip it back, give people the tools that they need and very little else.
I try to follow that pattern on my work MacBook too. I only install what I need to do my job. Anything I want to try out gets a deadline for removal. I keep my /Applications folder very light, and I police my Homebrew installs with similar zeal.
Sounds like it's vibe-coding your entire software stack (data, apps, OS) in real time.
I'm interested in the idea of a clean slate hardware/software system. I think being constrained to support existing hardware or software reduces opportunities for innovation on the other.
I don't see that in this project. This isn't defined by a clean slate; it is defined by the properties it does not want to have.
Off the top of my head I can think of a bunch of hardware architectures that would require all-new software. There would be amazing opportunities for discovery writing software for these things. The core principles of the software for such a machine could be based upon a solid philosophical consideration of what a computer should be. Not just "One that doesn't have social media" but what are truly the needs of the user. This is not a simple problem. If it should facilitate but also protect, when should it say no?
If software can run other software, should there be an independent notion of how that software should be facilitated?
What should happen when the user directs two pieces of software to perform contradictory things? What gets facilitated, what gets disallowed.
I'd love to see some truly radical designs. Perhaps a model where processing and memory are one: a very simple core per 1 KB of SRAM per 64 KB of DRAM per megabyte of flash, and machines with 2^n cores where each core has a direct data channel to every core whose n-bit core ID differs in one bit (plus one for all bits different).
An n=32 system would have four billion cores, 4 terabytes of RAM, and nearly enough persistent storage, but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.
You could probably start with a much lower n, then consider how to write software for it that meets the criteria of how it should behave.
Different, clean slate, not easy.
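A quick back-of-the-envelope sketch of those figures, in Python, taking the per-core budgets and the hypercube-plus-complement wiring above at face value:

    # Back-of-the-envelope totals for the proposed 2^n-core machine,
    # assuming 1 KiB SRAM, 64 KiB DRAM, and 1 MiB flash per core, plus a
    # hypercube topology with one extra link to each core's bitwise complement.
    def machine_stats(n: int):
        cores = 2 ** n
        sram_tib = cores * 1024 / 1024**4        # total SRAM in TiB
        dram_tib = cores * 64 * 1024 / 1024**4   # total DRAM in TiB
        flash_pib = cores * 1024**2 / 1024**5    # total flash in PiB
        # Hypercube diameter is n; the complement link roughly halves the
        # worst-case route. Intermediaries are one fewer than hops.
        max_hops = (n + 1) // 2
        return cores, sram_tib, dram_tib, flash_pib, max_hops

    print(machine_stats(32))  # (4294967296, 4.0, 256.0, 4.0, 16) -> 15 intermediaries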
Clean slate designs with arbitrarily radical designs are easy when you don’t have to actually build them.
There are reasons that current architectures are mostly similar to each other, having evolved over decades of learning and research.
> Perhaps a model where processing and memory are one: a very simple core per 1 KB of SRAM per 64 KB of DRAM per megabyte of flash,
To serve what goal? Such a design certainly wouldn’t be useful for general purpose computing and it wouldn’t even serve current GPU workloads well.
Any architecture that requires extreme overhauls of how software is designed and can only benefit unique workloads is destined to fail. See Itanium for a much milder example that still couldn’t work.
> machines with 2^n cores where each core has a direct data channel to every core whose n-bit core ID differs in one bit (plus one for all bits different).
Software isn’t the only place where big-O scaling is relevant.
Fully connected graph topologies are great on paper, but the number of connections scales quadratically. For a 64-core fully connected CPU topology you would need 2,016 separate data buses.
Those data buses take up valuable space. Worse, the majority of them are going to be idle most of the time. It’s extremely wasteful. The die area would be better used for anything else.
> An n=32 system would have four billion cores
A four billion core system would be the poster child for Amdahl’s law and a great example of how not to scale compute.
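A quick worked example of why: Amdahl's law caps speedup at 1 / ((1 - p) + p/N), so the serial fraction dominates long before the core count gets anywhere near four billion. A sketch:

    # Amdahl's law: even a 99.9%-parallel program tops out near 1000x,
    # no matter how many of the four billion cores you throw at it.
    def speedup(p: float, n: int) -> float:
        return 1 / ((1 - p) + p / n)

    print(round(speedup(0.999, 4_000_000_000)))  # ~1000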
Let’s not be so critical of companies trying to make practical designs.
> Software isn’t the only place where big-O scaling is relevant.
> Fully connected graph topologies are great on paper, but the number of connections scales quadratically. For a 64-core fully connected CPU topology you would need 2,016 separate data buses.
Nitpick: I don't think the comment you're replying to is proposing a fully-connected graph. It's proposing a hypercube topology, in which the number of connections per CPU scales logarithmically. (And with each node also connected to its diagonal opposite, but that doesn't significantly change the scaling.)
If my math is right, a 64-core system with this topology would have only 224 connections.
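For the record, both counts are easy to check directly (a small sketch, counting each undirected link once and including the complement links described above):

    from math import comb

    cores, n = 64, 6  # 64 cores, 6-dimensional hypercube

    fully_connected = comb(cores, 2)          # every pair wired directly: 2016
    hypercube = cores * n // 2                # n links per core, each shared: 192
    with_complement = hypercube + cores // 2  # plus one link to the bitwise opposite: 224

    print(fully_connected, with_complement)   # 2016 vs 224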
Perhaps not a true counterpoint, but there are systems like the GA144, an array of 144 Forth processors.
I think you're missing the point, and I don't think OP is "being critical of companies making practical designs."
Also, I think OP was imagining some kind of tree-based topology, not a fully connected graph, since he said:
> ...but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.
Are you aware of anyone who has used that system outside of a hobbyist buying the dev board? I looked into it and the ideas were cool, but no clue how to actually do anything with it.
Thanks for these thoughts -- I agree in principle, but we have to juggle a couple things here: while Radiant is in some ways an experiment, it isn't a research project. There are enough "obvious" things we can do better this time around, given everything we've learned as an industry, that I wouldn't want to leapfrog over this next milestone in personal computer evolution and end up building something a little too unfamiliar to be useful.
Very interesting and ambitious project and nice design. I hope the author will be able to comment here.
I'm interested to hear about the plans or capabilities in R' or Radiance for things like concurrent programming, asynchronous/scheduling, futures, and invisible or implied networking.
AI is here and will be a big part of future personal computing. I wonder what type of open source accelerator for neural networks is available as a starting point. Or if such a thing exists.
One of the opportunities for AI is in compression codecs that could provide for very low latency low bandwidth standards for communication and media browsing.
For users, the expectation will shortly be that you can talk to your computer verbally or send it natural language requests to accomplish tasks. It is very interesting to think how this could be integrated into the OS, for example as a metadata or interface standard. Something like a very lightweight version of MCP, or just a convention for an SDK filename (since software is distributed as source), could allow agents to use any installed software by default. Built-in embeddings or a vector index could also be very useful, for example to filter relevant SDKs.
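To make the filename-convention idea concrete, here is a purely hypothetical sketch; the manifest name, location, and fields are all invented for illustration:

    # Hypothetical convention: every installed package ships an "sdk.toml"
    # (invented name) describing the operations it exposes to agents.
    # Python 3.11+ for tomllib.
    import tomllib
    from pathlib import Path

    def discover_sdks(root: Path = Path("/pkgs")):
        """Yield (package name, parsed manifest) for every package with an SDK."""
        for manifest in root.glob("*/sdk.toml"):
            with manifest.open("rb") as f:
                yield manifest.parent.name, tomllib.load(f)

    def find_capability(verb: str):
        """Packages advertising an operation whose name mentions `verb`."""
        for pkg, sdk in discover_sdks():
            if any(verb in op.get("name", "") for op in sdk.get("operations", [])):
                yield pkg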
If content-centric data is an assumption, and so is AI, maybe we can ditch Google and ChatGPT and create a distributed hash embedding table or something for finding and querying content.
It's really fun to dream about idealized or future computers. Congratulations for getting so far into the details of a real system.
One of my more fantasy style ideas for a desktop uses a curved continuous touch screen. The keyboard/touchpad area is a pair of ergonomic concave curves that meet in the middle and level out to horizontal workspaces on the sides. The surface has a SOTA haptic feedback mechanism.
Like clockwork every year or so someone emerges and says "I'm going to fix computing" and then it never happens. We're as mired in the status quo in computing as we are in politics, and I don't see any way out of it, really.
Also the website is very low contrast (brighten up that background gray a bit!)
I have been having a lot of fun with the PicoCalc. It's not targeted at end users, but it's fun for developers who want a taste of building things from first principles. More than anything, it can live independently from your other devices.
I keep seeing the videos pop up. It does look really cool. I see it has a basic interpreter, so I guess kinda like a C64?
I thought they were talking about redesigning hardware from the ground up. There will always be history and baggage if you are working with the same computer instruction sets. From the very beginning at the level of assembly, there is history and baggage. This is not ambitious enough.
The landing page reads like it was written with an LLM.
Somehow this makes me immediately not care about the project; I expect it to be incomplete vibe-coded filler somehow.
Odd what a strong reaction it provokes already. Like: if the author couldn't be bothered to write this, why waste time reading it? Not sure I support that, but that's the feeling.
I am very concerned about the long term effects of people developing the habit of mistrusting things just because they’re written in coherent English and longer than a tweet. (Which seems to be the criterion for “sounds like an LLM wrote it”.)
Haha. This is so true. I'm a bit long-winded myself and once got accused of being AI on here. I just don't communicate like Gen Alpha. I read their site and nothing jumped out as AI although it's possible they used it to streamline what they initially wrote.
Wait until the bot herders realize you can create engagement by having a bot complain about texts being LLM-like.
It seems to be popular here because of the ideas it proposes.
The thing that always worries me about these clean-slate designs is the fear that they'll ignore accessibility for disabled people, e.g. blind people, and then either the system will remain inaccessible, or accessibility will have to be retrofitted later.
It's funny you mention that because the first thing I thought when viewing this page was "is this a loading state? why is everything grey?".
Ahem. It's _radiant_ grey.
I thought "is there one of those popups covering things and greying out the page until you close it?"
I'm actually OK with that, if it truly serves the purpose of what a computer should be.
I think those principles would embody the notion that the same thing cannot serve all people equally. Simultaneously, for people to interact, interoperability is required. For example, I don't think everyone should use the same word processor. It is likely that blind people would be served best by a word processor designed by blind people. Interoperable systems would aim to neither penalise nor favour users for using a different program for the same task.
I also think for the purpose of piloting a new system I don't mind people chasing whatever aspect of that mission most inspires them. Anything aspiring to be a universal paradigm needs to account for accessibility to have legitimacy in being "for everyone" but that doesn't necessarily have to be the scope when you're starting.
I'd like to think that prioritizing early phase momentum of computing projects leads to more flowers blooming, and ultimately more accessibility-enabled projects in the long run.
Yeah, this is concerning. Although, if the system is architected well, accessibility features ought to be something that can be added as an extension.
What is a screen reader but something that can read the screen? It needs metadata from the GUI, which ought to be available if the system is correctly architected. It needs navigation order, which ought to be something that can be added later with a separate metadata channel (since navigation order should be completely decoupled from the implementation of the GUI).
The other topic of accessibility a la Steve Yegge: the entire system should be approachable to non-experts. That's already in their mission statement.
I think that the systems of the past have trained us to expect a lack of dynamism and configurability. There is some value in supporting existing screen readers, like Orca, since power users have scripts and whatnot. But my take is that if you provide a good mechanism that supports the primitive functionality, plus generalized extensibility, then new and better systems can emerge organically. I don't use accessibility software, but I can't imagine it's perfect. It's probably ripe for its own reformation as well.
> What is a screen reader but something that can read the screen?
Good screen readers track GUI state which makes it hard to tack on accessibility after the fact. They depend on the identity of the elements on the screen so they can detect relevant changes.
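A tiny sketch of what that identity dependence looks like; the types and API here are invented purely for illustration:

    # Why stable element identity matters: a reader keyed on node IDs can
    # announce only what changed, instead of re-reading the whole screen.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Node:
        node_id: int   # stable across frames -- the crucial part
        role: str      # "button", "textbox", ...
        label: str

    def announce_changes(prev: dict[int, Node], curr: dict[int, Node]):
        for node_id, node in curr.items():
            old = prev.get(node_id)
            if old is None:
                yield f"new {node.role}: {node.label}"
            elif old.label != node.label:
                yield f"{node.role} changed: {node.label}"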
I love these guys for trying to do this. I just hope they’ve already made their money and can afford to continue doing this.
It’s every engineer’s dream - to reinvent the entire stack, and fix society while they’re at it (a world without social media, sign me up!).
Love the retro future vibes, complete with Robert Tinney-like artwork! (He did the famous Byte Magazine covers in the late 70s and early 80s).
https://tinney.net/article-this-1981-computer-magazine-cover...
This looks like an advertisement for a new season of Severance or something.
The image on this page is wild: https://radiant.computer/principles/
Of course, I am intrigued by open architecture. Will they be able to solve graphics card issues, though?
You won't be bringing your own graphics card to RadiantOS. According to one of the pages, they want to design their own hardware and the graphics will be provided by a memory-mapped FPGA.
If your question is about the general intricacies in graphics that usually have bugs, then I'd say they have a much better chance at solving those issues than other projects that try to support 3rd party graphics hardware.
That image is giving me some Evangelion vibes: https://wiki.evageeks.org/Ramiel
I am fascinated by the art, but it seems bizarrely overdefined relative to the software vision laid out in the text. That is, the amount of richly imagined imagery dramatically outpaces the coherence of the vision in every other respect.
And as with the text, the art feels AI generated. I actually think it's quite beautiful for what it is, but it reminds me of the "dark fantasy" AI-generated art on TikTok.
I have nothing against an aesthetic vision being the kernel of inspiration for a computing paradigm (I actually think the concept-art process is a fantastic way to ignite your visionary mojo, and I'm flashing back to amazing Soviet computing design art).
But I worry about the capacity and expertise to follow through, given the vagueness of the text and the strongly-suggestive-of-AI text and art, which may reflect limited capacity and effort even in making the website, let alone in building out any technology.
my outie enjoys trying experimental operating systems
Am I hallucinating, or is that black diamond in the sky a little out of proportion?
It's an AI generated image
I'm having a hard time following the through line on these first principles. Likely it's just a "me" problem because I have status quo system designs set in my head, but here are some ideas that seem conflicting to me:
> Hardware and software must be designed as one
Here, they describe an issue with computers: their layers of abstraction, which hide complexity. But...
> Computers should feel like magic
I'm not sure how the authors think "magic" happens, but it's not through simplicity. Early computers were quite simple, but I can guarantee most modern users would not think they were magical to use. Of course, this also conflicts with the idea that...
> Systems must be tractable
Why would a user need to know how every aspect of a computer works if they're "magic" and "just work"?
Anyway, I'm really trying not to be cynical here. This just feels like a list written by someone who doesn't really understand how computers or software came to work the way they do.
Yeah I felt the contradictions here too. Doesn’t the feeling of “magic” directly proceed from abstraction and non-tractability (or at least, as you say, not needing to understand every part of the system)?
>Doesn’t the feeling of “magic” directly proceed from abstraction and non-tractability
Yes, but I think it can also come from a kind of liminal impression of an internal logic.
Would you mind elaborating?
> RadiantOS is a single address space operating system.
But why? We use virtual address spaces for a reason: they give each process its own isolated view of memory, so one program's bad pointer can't trash another's.
The whole thing feels like it was generated by an LLM. Some interesting-sounding titbits here and there, no specifics ever, weird trance images.
I don't understand why they're particular about writing their own esoteric language. If they want people to buy and engage with it, software has to be the gateway, and that's easier to write in a language people know.
It's prob a balance. Sure, C is king....but if you are starting from scratch...do you REALLY need it or could you design something even better? Maybe, maybe not.
I've programmed for a long time, but always struggled with Assembly and C, so take my views with a grain of salt.
I don't think C is king anymore. They could use Rust with no_std, or Zig, or C++. Anything (low-level enough) is better than an entirely new language.
I missed this earlier: "Radiance features a modern syntax and design inspired by Rust, Swift and Zig."
The more I look at it and think about it, the more the whole thing, language and images together, feels like collective concept art. Which, if that's the case, is fine for what it is. But then I think it's at least slightly disrespectful to readers to be coy about how real any of this is.
I like this type of stuff, but it cannot go anywhere. A real clean-slate system, free from all the crap we have piled onto the things we use every day, must be something simple, which cannot be interesting to the masses, and it must be understandable and programmable by one person if things go bad. The only way I can see that happening is by creating something underpowered just to have fun, and we already have that: actual (old and new) hardware, emulators, and virtual CPUs. As soon as it gets any volume or viability, it will be taken over by commercial entities that will eat it, and the only way to prevent that is to make it obsolete to begin with.
The AI art makes it look like vapor.
So does the AI text.
They want to implement custom hardware with support for audio, video, everything, a completely new language, a ground-up OS, and also include AI.
Sounds easy enough.
>"It's a tool for personal computing where every application and every surface, exists as code you can read, edit, and extend. It's a system you can truly own"
This sounds a lot like a Smalltalk running as the OS until they started talking about implementing a systems language.
The most important question I have for any project like this is: who is making it? And this website does not answer this question.
Alexis Sellier is the author of all of the posts under /log
I read "clean slate architecture" and "no baggage" and thought someone was designing a non-von Neumann architecture machine with a novel clockless asynchronous CPU, but nope, it's a custom OS running on RISC-V.
I thought that too. It would have been such an interesting thing, I would have (modestly) contributed to their Kickstarter even if it didn't produce a commercial product in the end.
The exokernel makes this a nonstarter if you ever want to run untrusted code, as it invites hardware takeovers, compromised peripherals/TPMs/drives, etc., especially when it claims to be AI-first.
Out of curiosity: there's a focus on local LLMs, then talk of no GPU, only an FPGA. Those feel at odds. But maybe I'm out of the loop on how far local LLMs on custom hardware have come?
They're still at the compiler stage. LLM features and hardware seem far enough away that it's reasonable to wait to evaluate if that combination is actually practical.
Love ambitious projects like this!
I wonder why the Unix standard doesn't start dropping old syscalls and standards? Does it have to be strictly backwards compatible?
Even if the standard dropped them, Linux would likely retain them.
https://linuxreviews.org/WE_DO_NOT_BREAK_USERSPACE
Honestly, this seems rambling and unfocused. It's like a grab-bag of recent-ish buzzwords.
The task that has been set is gigantic. Despite that, they've decided to make it even harder by designing a new programming language on top of it (this seems to be all the work done to date).
The hardware challenge alone is quite difficult. I don't know why that isn't the focus at this stage. It is as if the author is suggesting that only the software is a problem, when some of the biggest issues are actually closed hardware. Sure, Linux is not ideal, but it's hardly relevant in comparison.
I think this project suffers from doing too much abstract thinking without studying existing concrete realities.
I would suggest tackling one small piece of the problem in the hardware space, or building directly on some of the work others have done.
I don't disagree with the thesis of the project, but I think it's a MUCH bigger project than the author suggests and would/will require a concentrated effort from many groups of people working on many sub-projects.
In the OSdev world that's called an "Alta Lang" problem.
https://wiki.osdev.org/Alta_Lang
It's a miracle that the internet and computers work with each other as well as they do.
Look, if someone hasn't done it already, I see absolutely no reason not to build a Lua-based IPFS process, port it absolutely everywhere, and use it to host its own operating system.
Why does it always need to be so difficult? We already have the tools. Our methods, constantly changing and translblahbicatin' unto the falnords, snk snk... this kind of contrafabulation needs to cease.
Just sayin'.
IPFS+Lua. It's all we really need.
Yes yes, new languages are best languages, no no, we don't need it to be amazing, just great.
It'll be great.
All indications point to this GitHub user [1], Alexis Sellier [2], as the engineer behind this. Good luck with such an ambitious goal. I'd love to see it.
[1]: https://github.com/cloudhead [2]: https://cloudhead.io/
If it doesn't have a browser, how will you visit radiant.computer on your Radiant Computer?
You wouldn't, I don't think (assuming this thing ever gets off the ground, a huge assumption), but is that really a problem? I think the web page is more to make normal-ish people aware that this hypothetical ecosystem would be out there. From within that ecosystem they could have a different page.
Coincidence or borrows from Asimov?
The Prime Radiant featured in Foundation.
Was hoping this was an evolution of the Daylight Computer.
Why does the website look like my monitor is dying? Black on dark grey, seriously?
Indeed. Not very.. radiant.
> No social networking
What an airball. Social networks are the single most valuable aspect of computers and the internet. It is a dark world where we all just talk to LLMs instead of each other.
Nice idea, all the best!
So what does its UI look like?
Based on its /log page, it doesn't look like it has one yet. They're just now implementing the implementation language, R'.
> They're just now implementing the implementation language, R'.
They haven't done their due diligence: there's already a well-known language named R: https://www.r-project.org/. The prime isn't sufficient disambiguation.
I assume they know but don't care. Either way, that is a bad choice. I think "Rad" would be a good name, but maybe they already are using that for something else.
Edit: where did you see it's called "R"? It looks like they call the system language "Radiance": https://radiant.computer/system/radiance/
https://radiant.computer/system/radiance/prime/
Ah, so not quite "R", but "R'" (R Prime).
I assumed R and R' are prototypical bootstrapping variants of what will be the full-fledged Radiant language, but that wasn't explicitly written anywhere.
well-known "language" (air quotes)
They called their language "R"??? Robert Gentleman will throw a hissy fit.
love it!
I wish I could work there!
I am wondering what Linux distro/ISO live-boots into a GUI desktop environment but without a web browser. (I only know of Tiny Core Linux, but that is too barebones; I wanted to build my own ISO on top of it, but installing packages was hard when I tried following its remastering guide.)
I even tried searching DistroWatch with the negate option in their search, but it seemed to be broken.
I needed this once to build my own "studyOS", and in the process I went down a deep rabbit hole about hobbyist Linux distros and their importance.
I then settled on MX Linux, for the reason below.
People recommend Cubic etc., but personally I recommend MX Linux. Its default snapshot feature was exactly what I was looking for, and it just worked.
I glanced over this and was excited, thinking: oh great, this could be a Linux ISO with no browser, similar to Tiny Core. But I found out through the comments that its focus on LLMs etc. is very vague and weird for what I am reading.
I just feel like it's seriously missing the idea. I want to dissect this project's tenets from the perspective of a Linux user of just a few years.
My first impression was positive, then negative, and now it's mixed, really.
I feel like this intends to become so hardware-focused that I am not even sure what they mean by this. From my limited knowledge, Linux does a lot of work simply to boot into a predictable environment on every device, to the point that there are now things like Nix that can build your system deterministically.
I still think there is a point in making something completely new instead of yet another Unix, but my hopes aren't very high, sorry. You would have to convince me why the world would be better off with this instead of Linux, and their notes on "why not Linux" still leave me with mixed thoughts.
> Linux is a monolithic kernel. All drivers, filesystems, and core services share the same privilege space. A single bug, eg. a bad pointer dereference in a GPU driver can corrupt kernel memory and take down the entire system.
Can't drivers be loaded at runtime? And there are ways to keep a driver failure from taking down the entire system, imo. I think this is just how a monolithic kernel works, no?
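For what it's worth, "loaded at runtime" isn't the same as "isolated". Here's a minimal sketch of a standard Linux loadable module (illustrative only, not RadiantOS or any real driver): once insmod'd, it runs with full kernel privilege in the kernel's own address space, so a bad pointer dereference in it can still panic the machine, exactly as the quote claims.

    /* demo.c: a do-nothing loadable kernel module. Although loaded at
       runtime, it shares the kernel's address space and privilege
       level, so a bug here can corrupt kernel memory like any
       built-in driver. */
    #include <linux/module.h>
    #include <linux/init.h>

    static int __init demo_init(void)
    {
        pr_info("demo: running in kernel space\n");
        return 0;
    }

    static void __exit demo_exit(void)
    {
        pr_info("demo: unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");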
I read more of the monolithic-kernel vs. microkernel discussion on the Tanenbaum–Torvalds debate wiki page [1], and here is something I think is apt:
> Torvalds, aware of GNU's efforts to create a kernel, stated "If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't."
Someone else on the Usenet group also called GNU Hurd vaporware, and I think there is some truth to that; the GNU team had been working on Hurd far longer than Linus had been working on Linux (paraphrasing the same wiki page).
Another line I want to share from the wiki: "Different design goals get you different designs."
I was going to criticize the Radiant Computer, but hey, it's open source; nobody's stopping you from working on it. And that line was said in defense of Linux back then, so it can surely defend this too.
But at the same time, my concern with this, or any such project, is it becoming vaporware. Linux is way too big and actually good enough for most users. I don't think the world can have a perfect OS; it can have a good-enough one, though, and for the end user Linux is exactly that. It's open source and genuinely good at what it does, and there's no denying it. You could live your whole life using Linux, imo. It's beautiful.
I used to defend NetBSD, hate systemd, and so on, but the truth of the matter is that nobody's forcing you to use systemd or NetBSD; you could damn well run a server without either. Still, I've found that mass adoption has convinced me that at the sysadmin level, Linux, maybe even Debian or systemd in general, has its gains.
I think Linux is really, really good; it's just the best, imo, but I will still try out FreeBSD, OpenBSD, etc. I genuinely love it so much. It's honestly wild, even a fever dream, that something like Linux actually exists. It's so brilliant, and the ecosystem around it is just chef's kiss.
One can try, and these are your developer hours, but I just don't want to see things turn into vaporware, so I will just ask: how do you prevent this project from becoming vaporware? I'm sure this isn't the first time someone has proposed the ideal system, and it won't be the last.
Edit: sorry this got long. I got a little carried away reading the wiki article; it's so good.
[1]: https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...
> Computing machines are instruments of creativity, companions in learning, and partners in thought. They should amplify human intention.
An admirable goal. However, putting that next to a bunch of AI-slop artwork and this statement...
> One of our goals is to explore how an A.I.-native computer system can enhance the creative process, all while keeping data private.
...is comically out of touch.
The intersection between "I want simple and understandable computing systems" and "I want AI" is basically zero. (Yes, I'm sure some of you exist; my point is that you're combining a slim segment of users who want this approach to tech with another slim segment of users who want AI.)