This video https://youtu.be/TBrPyy48vFI?t=1277 is a few years old, but it covers how the GRiSP platform combines Erlang and the RTEMS real-time OS [1] to overcome the Erlang VM's soft real-time limitations and achieve hard real-time event handling.
[1] https://www.rtems.org/
What are the soft real-time limitations of Erlang?
It's all relative. Hard real-time vs. soft real-time is not clearly delineated, because for anything in the real world there is always a probability distribution for every deadline.
Our observation is that Erlang's "soft real-time" already gets much harder once Linux stays out of the way. We have a master's thesis worth of research on having multiple sets of schedulers in one Erlang VM that run at different hard real-time priorities, plus research on how a network of message-passing and garbage-collecting Erlang processes can do Earliest Deadline First scheduling.
However, that stayed prototypical, because we found that the (relative) real-time behaviour of vanilla Erlang on RTEMS was good enough for all the practical customer problems we solved.
For very high-performance hard real-time we drop down to C, and we are currently working on a little language, programmable from the Erlang level, that avoids that.
Erlang's BEAM, assuming no NIF chicanery, uses reduction counting to eventually yield back to the scheduler and make sure other Erlang processes get execution time. This gives you a kind of "will eventually happen" property. It can't guarantee meeting a deadline, just that all things will be serviced at some point.
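A throwaway sketch of that property (module and names are mine, not from the comment): the busy loop never calls receive or sleeps, yet the ticker process still runs, because the busy process exhausts its reduction budget and gets preempted.

```erlang
%% Minimal hypothetical sketch: reduction-based preemption means a CPU-bound
%% Erlang process cannot starve its neighbours, but nothing here guarantees
%% *when* the ticker runs, only that it eventually does.
-module(reductions_demo).
-export([run/0]).

run() ->
    spawn(fun() -> busy_loop(200000000) end),  % CPU-bound, no explicit yields
    spawn(fun() -> tick(5) end),               % still gets serviced
    ok.

busy_loop(0) -> done;
busy_loop(N) -> busy_loop(N - 1).

tick(0) -> ok;
tick(N) ->
    {reductions, R} = process_info(self(), reductions),
    io:format("tick ~p (reductions used so far: ~p)~n", [N, R]),
    timer:sleep(100),
    tick(N - 1).
```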
Right, GRiSP has support for creating RTOS tasks in C, IIRC.
Within BEAM itself there's no priority mechanism; however, on an RPi3 or BeagleBone you could get about a 200 µs average response time to GPIO on Linux, even under moderate load. The jitter was pretty low too, around 10-20 µs on average, but the 99.9% tail latencies could get up to hundreds of milliseconds.
That's fine for many use cases. Still, I now prefer programming ESP32s with Nim for anything real-time. Imperative programming just makes handling arrays easier. I just wish FreeRTOS tasks had error handling akin to OTP supervisors.
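For contrast, the OTP supervision wished for here is only a few lines on the BEAM side. A minimal sketch (the gpio_reader worker module is hypothetical):

```erlang
%% Sketch of a one_for_one supervisor that restarts its worker when it
%% crashes; gpio_reader is a made-up worker module with a start_link/0.
-module(gpio_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    SupFlags = #{strategy => one_for_one,
                 intensity => 5,        % allow at most 5 restarts...
                 period => 10},         % ...within any 10-second window
    ChildSpec = #{id => gpio_reader,
                  start => {gpio_reader, start_link, []},
                  restart => permanent, % always restart on crash
                  shutdown => 1000,
                  type => worker},
    {ok, {SupFlags, [ChildSpec]}}.
```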
Now Beam/Elixir would be amazing for something like HomeAssistant or large networked control systems.
Erlang does have a mechanism to modify process priority, with process_flag/2,3.
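For reference, a shell-sized sketch of that flag and the equivalent spawn_opt/2 option (the process bodies are just placeholders):

```erlang
%% process_flag(priority, Level) with Level :: low | normal | high | max;
%% max is reserved for system-critical work and generally discouraged.
HighPrio = spawn(fun() ->
               process_flag(priority, high),  % preempts 'normal' processes
               receive stop -> ok end         % placeholder body
           end),
LowPrio = spawn_opt(fun() -> receive stop -> ok end end,
                    [{priority, low}]).       % set priority at spawn time
```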
As of OTP 28 there's also priority messaging that a process can opt in to. Not really related, but it's new and interesting to note.
> As of OTP 28 there's also priority messaging that a process can opt in to.
That's a very important feature. Without priority messaging you can't nicely recover from queues that start backing up.
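A toy illustration of the problem (module and message names are made up): once the mailbox is flooded, a control message can only be handled after everything already queued ahead of it, which is exactly what an opt-in priority message avoids.

```erlang
%% Hypothetical sketch: 'drain' arrives behind 100000 work messages and is
%% only seen once the backlog has been chewed through.
-module(backlog_demo).
-export([run/0]).

run() ->
    Pid = spawn(fun worker/0),
    [Pid ! {work, N} || N <- lists:seq(1, 100000)],
    Pid ! drain,
    ok.

worker() ->
    receive
        {work, _} -> worker();
        drain     -> io:format("drain handled only after the whole backlog~n")
    end.
```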
Just a reminder that, commonly, "real-time" on stuff like VxWorks isn't hard real-time either. You test a bunch of scenarios, put in some CPU execution headroom you are comfortable with, and call it a day. With enough headroom and some more (or less, if you have money and time) hand-waving, you can more or less guarantee that deadlines will be kept.
Quick question: why go the `rtems` route? Would 'isolcpus' not work in this case?
--
thanks !
With Linux we can only run on larger embedded CPUs that support virtual memory well enough. With RTEMS we can go towards much smaller platforms.
Addendum: we have Buildroot- and Yocto-based platforms too. It's not clear on the website right now, but we actually have three platforms:
* GRiSP Metal - aka just GRiSP (Erlang/Elixir + RTEMS)
* GRiSP Alloy - Buildroot-based Linux; starts the Erlang/Elixir runtime as process 1, similar to Nerves but more language-agnostic (Nerves is Elixir only), and we support RT Linux and running multiple Erlang runtimes at different priorities
* GRiSP Forge - similar to Alloy but Yocto-based.
The idea is that, from the high-level-language point of view, they are more or less interchangeable.
I am not associated with the project, so I cannot answer that.
I met Peer at Lambda Days in 2023 briefly. We didn't chat for super long (about 5-10 minutes), but (and I do genuinely mean this as a compliment), he was one of the most enthusiastic geeks I've ever met. He seemed so genuinely passionate about Erlang and GRiSP and technology in general, it was outright delightful to talk to him.
I love people who can stay excited and optimistic about stuff; it's so easy to be cynical, it's refreshing to meet someone who hasn't had the life sucked out of them.
I need to pick up a GRiSP Nano one of these days. I have the GRiSP 1 and even managed to get Lisp Flavoured Erlang working on there [1], but I haven't played with it much since then. I should fix that.
[1] https://medium.com/@tombert/working-with-lisp-flavoured-erla...
> I love people who can stay excited and optimistic about stuff; it's so easy to be cynical, it's refreshing to meet someone who hasn't had the life sucked out of them.
How true that is
I suppose the small DRAM footprint is required to meet extremely low power requirements. How low power is it? The CPU has an 18.6 µA/MHz run mode at 3.3 V [1], so roughly 61 µW per MHz! I wanted to know more about the power-harvesting applications, though.
[1] https://www.st.com/resource/en/datasheet/stm32u5f7vj.pdf
The development was funded by a research project where the coordinator was a manufacturer of thermal energy harvesting devices. That's why we focused mostly on power.
That requires certain choices for CPU and RAM.
We also have a lot of energy-management hardware on the board: all PMODs can be completely switched off, there is separate wakeup logic that triggers when the capacitor charged by the harvester reaches a certain level, and more.
The challenge with using Erlang for systems like that is that it has a boot phase it needs to get through before we can manage energy from the Erlang level. So we either need enough charge to get through the whole boot, or we need to manage the boot process and do it in chunks, sleeping in between.
That's what we want to find out with this hardware, but first we needed to squeeze in the Erlang VM, RTEMS, the TCP stack and all of the Erlang objects needed to be useful (the first goal was reaching the shell). That's where we are right now.
Is 16 MB "only" for Erlang? I thought it started out as something made for embedded hardware decades ago? Wikipedia says 1986.
Makes me curious at what pace, and why, the size has grown from 1986 to 2025, and how long ago the line was crossed where 16 MB started to count as a small runtime.
Datasets have grown.
This is incredible. Kudos on getting it done, and done so quickly!
For what use?
It's an evaluation board for a technology stack. That's its "use". We wanted to explore the design space towards smaller/lower power/cheaper and find out where we can still squeeze in a fully functional Erlang VM. We are working on distributed computing that benefits when we can run the same BEAM files (Erlang VM object files) everywhere, from IoT to edge to cloud.
I love the idea of Elixir/Erlang in robotics/embedded environments, and from my research I can confidently say very few use it there. As far as I can tell, when people do use it, it's primarily for industrial equipment, and the main selling point is LiveView/Scenic for displaying information about the equipment.
I want to believe we'd someday see Erlang/Elixir all over the place, especially in flight software, due to their lightweight processes and high fault tolerance, but I don't see it happening any time soon, and I certainly don't see people hiring for it. I think it could solve some legitimate industry issues, but it's too big of a change for too subtle a benefit.
I've not seen an explicit use case in the article, but a lot of progress/products/tech derives from "small leaps" like this one ;P
Sweet!