These designs fascinate people who haven't designed antennas. I don't doubt that throwing enough computational power at optimizing antennas will produce antennas optimized for something at the expense of something else, but if you're a casual, what you should notice is that these papers never mention the "something elses". You can get a paper out of just about any antenna design, btw. There's also a type of ham that will tune up a bedframe or whatever. So just getting something to radiate should not be confused with advancing the state of the art.
These antennas found their way into the utterly savage "pathological antennas" chapter of Hansen and Collin's _Small Antenna Handbook_. See "random segment antennas". Hansen and Collin is the book to have on your shelf if you're doing any small antennas commercially and that chapter is the chapter to go to when you're asked "why don't you just".
This comment really sums it up well. Literally everything with antenna design is a trade-off. You can design an antenna to radiate very well at a given wavelength. The better it is at doing this, the worse it tends to be at every other wavelength. You can make an antenna that radiates to some degree across a wide array of wavelengths, but it's not actually going to work very well across any of them.
Same thing with radiation patterns. You can make a directional antenna that has a huge amount of gain in one direction. The trade-off is that it's deaf and dumb in every other direction. (See a Yagi-Uda design, for instance.)
Physics is immutable, and when it comes to antenna design there really is no such thing as a free lunch. Other than coming up with some wacky shapes, I don't really think AI is going to be able to create any kind of "magic" antenna that's somehow a perfect isotropic radiator with a low SWR across some huge range of wavelengths.
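To put a number on the "low SWR" part of that, here's a small sketch (not from the thread; the impedance values are illustrative) of how VSWR falls out of the reflection coefficient at the feed point, which is why an antenna matched at one wavelength goes bad off-resonance:

```python
# Illustrative sketch: VSWR from the complex load impedance seen at the
# feed point, versus a 50-ohm line. Impedance values are made-up examples.

def vswr(z_load: complex, z0: float = 50.0) -> float:
    """Voltage standing-wave ratio for a load impedance on a z0 feed line."""
    gamma = abs((z_load - z0) / (z_load + z0))  # reflection coefficient |Γ|
    return (1 + gamma) / (1 - gamma)

# A resonant dipole (~73 ohms) on 50-ohm coax is a decent match;
# the same antenna off its design frequency presents a large reactance
# and the match collapses.
print(round(vswr(73 + 0j), 2))    # → 1.46
print(round(vswr(20 - 150j), 2))  # well over 10 — badly mismatched
```

The reactive term in the second case is the whole story: the wire hasn't changed, only the wavelength it's being asked to serve.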
"Do not confuse inexperience with creativity"...
I just read it in the book section referenced in the parent comment. It popped the imaginary bubble my mind lives in, a bit. I want to reflect on it more.
Somehow, in the midst of all these LLMs and diffusion models, the only thing that seems to catch attention is creativity. I hadn't thought about experience.
As somebody who almost fried his computer during an antenna design course trying to optimize a dipole array with a (not optimized) genetic algorithm, I really like this content.
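A toy version of that kind of run, for anyone curious what the loop looks like. The fitness function here is a made-up stand-in (a real run would call an EM solver for each candidate), and the element lengths and target values are invented for illustration:

```python
# Toy genetic algorithm in the spirit of the comment above: evolve the
# element lengths of a small dipole array toward a (hypothetical) target.
import random

TARGET = [0.25, 0.20, 0.25]  # invented target lengths, in wavelengths

def fitness(genome):
    # Stand-in for an EM simulation: negative squared error, higher is better.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.1):
    # Each gene has a 30% chance of a small Gaussian nudge.
    return [g + random.gauss(0, sigma) if random.random() < 0.3 else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.uniform(0.1, 0.5) for _ in range(3)] for _ in range(30)]
init = max(pop, key=fitness)  # remember the best random starting genome

for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # truncation selection; elitism keeps the best alive
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
```

Because the parents are carried over each generation, the best fitness can only improve over the run; the expensive part in practice is that every `fitness` call is a full field simulation, which is presumably how the computer nearly got fried.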
Do people not go on Wikipedia nowadays? This is literally on the frontpage of the wiki for this stuff: https://en.wikipedia.org/wiki/Genetic_algorithm
A good rule of thumb: never mock someone’s enthusiasm or excitement about learning something, even if it’s old news to you. Let people enjoy discovering things.
I'm rediscovering it. I remember reading about this in some flashy, superlative Popular Science article, from the early/mid 2000s. So I was quite excited to click on the link and see that shape again.
But also, something something lucky ten thousand.
https://xkcd.com/1053/
A long time dream of rocket scientists is single-stage-to-orbit. Ideally you'd have a vehicle that takes off and lands like a conventional jet plane at a regular airport. I've always thought that perhaps AI and evolutionary algorithms might be able to navigate a way through the various tradeoffs and design constraints that have stopped us so far.
As an avid observer of rocket design, I suppose that hasn't happened because SSTO may not have any good solutions. I further suppose that the design parameters are so constrained there is very little opportunity for a generative or evolutionary, or any other AI-driven design approach, to do more than optimize some components.
As a rocket scientist, I assure you it's been tried.
Very cool. Evolutionary Algorithms have kinda been out of the mainstream for a long time. They are good when you can do a lot of "black-box" function evaluations but kinda suck when your computational budget is limited. I wonder if coupling them with ML techniques could bring them back.
> I wonder if coupling them with ML techniques could bring them back.
EAs are effectively ML techniques. It's all a game of search.
The biggest problem I have seen with these algorithms is that they are designed with no regard for the underlying hardware they will inevitably run on. Koza et al. were effectively playing around in abstraction Narnia when you consider how impractical their designs were (and are) to execute on hardware.
An L1-resident hill climber running on a single Zen4+ thread would absolutely smoke every single technique from the 90s combined, simply because it can explore so much more of the search space per unit time. A small tweak to this actually shows up on human timescales and so you can make meaningful iterations. Being made to wait days/weeks each time you want to see how your idea plays out will quickly curtail the space of ideas.
> A small tweak to this actually shows up on human timescales and so you can make meaningful iterations.
Please could you explain what you meant by this part? I'm trying and failing to understand it.
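Not the original commenter, but the argument can be sketched concretely: a hill climber whose entire state is a small vector fits in L1 cache, so millions of evaluate-perturb-accept iterations finish in seconds, and the effect of any tweak to the algorithm is visible immediately. A minimal sketch, with an invented toy objective standing in for a real fitness function:

```python
# Minimal cache-friendly hill climber: the whole state is one small list,
# and the inner loop does nothing but perturb, evaluate, accept/revert.
import random

def objective(x):
    # Toy objective (invented for illustration): optimum at the zero vector.
    return -sum(v * v for v in x)

def hill_climb(dim=8, steps=100_000, step_size=0.05, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in range(dim)]
    best = objective(x)
    for _ in range(steps):
        i = rng.randrange(dim)                       # perturb one coordinate
        old = x[i]
        x[i] += rng.uniform(-step_size, step_size)
        cand = objective(x)
        if cand > best:
            best = cand                              # keep the improvement
        else:
            x[i] = old                               # revert
    return x, best

x, best = hill_climb()
```

The claim, as I read it, is about iteration speed on the *meta* level: when a run like this completes in under a second, you can change `step_size`, the acceptance rule, or the representation and see the consequences right away, instead of queueing a days-long batch job the way the 90s genetic-programming work had to.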
The main use of evolutionary algorithms in machine learning currently is architecture search for neural networks. There's also work on pipeline design, finding the right way to string things together.
Neural networks already take a long time to train so throwing out gradient descent entirely for tuning weights doesn't scale great.
Genetic programming can solve classic control problems with a few instructions when they can solve it, so that's cool.
Surely this has been redone in more modern times with better EM-field simulation to find an improved design, rather than by physical trial and error.