Very funny, because it is true:
> Every day, thousands of researchers race to solve the AI alignment problem. But they struggle to coordinate on the basics, like whether a misaligned superintelligence will seek to destroy humanity, or just enslave and torture us forever. Who, then, aligns the aligners?
I love how this fake organization describes itself:
> We are the world's first AI alignment alignment center, working to subsume the countless other AI centers, institutes, labs, initiatives and forums ...
> Fiercely independent, we are backed by philanthropic funding from some of the world's biggest AI companies who also form a majority on our board.
> This year, we interfaced successfully with one member of the public ...
> 250,000 AI agents and 3 humans read our newsletter
The whole thing had me chuckling. Thanks for sharing it on HN.
I particularly like the countdown clock to the next prediction of AGI!
I eagerly await the announcement of the Center Alignment for Centers for the Alignment of AI Alignment Centers.
Why? You can make it yourself in less than 60 seconds with their CenterGen-4o!
Reminds me of the quote from Enemy of the State (1998), “Well, who's gonna monitor the monitors of the monitors?”
The venn-diagram-like figure on the mission page is just...chef's kiss.
> However, there are reasons for optimism. We believe that humanity is approaching an AI alignment center singularity, where all alignment centers will eventually coalesce into a single self-reinforcing center that will finally possess the power to solve the alignment problem.
"No I didn't get the memo about the new TPS cover sheets. Is that a problem?" <spins up drone>
My first instinct was to think this was satire and I exuded a chuckle.
My second instinct was a brief moment of panic where I worried that it might NOT be satire, and a whole world of horror flashed before my eyes.
It's okay, though. I'm better now. We're not in that other world yet.
But, for a nanosecond or two, I found myself deeply resonating with the dysphoria that I imagine plagued Winston Smith. I think I may just need to sit with that for a while.
> It's okay, though. I'm better now. We're not in that other world yet.
Load-bearing yet there
Like you, I had a few moments where I couldn’t figure out if it was satire or not. I finally went with: not my circus, not my monkeys.
This is some expert level trolling. Too funny.
Thank AGI, somebody's finally 'lining up the aligners... The EA'ers, the LessWrong'ers, the X-risk'ers, the AI-Safety'ers, ...
https://alignmentalignment.ai/caaac/blog/explainer-alignment
A few years ago I argued we needed a comparison site for insurance comparison sites. But soon there would be more than one, and we would have to compare those, and so on...
> This year we reached a significant milestone:
> We successfully interacted with a member of the public.
> Because our corporate Uber was in the process of being set up, we had to take a public bus. On that bus, we overheard a man talking about AI on the phone.
> "I don't know," he said. "All the safety stuff seems like a load of bullshit if you ask me. But who cares what I think? These tech bros are going to make it anyway."
> He then looked over in our direction, giving us an opportunity to shrug and pull a face.
> He resumed his conversation.
> We look forward to more opportunities to interact with members of the public in 2026!
I tried to apply, but all I got was the shoddy 4k remaster of my favourite song
Department of Redundancy Department
(please knock twice please)
But who will align the aligner of aligners? :(
https://alignmentalignment.ai/caaac/jobs
"Subscribe unless you want all humans dead forever" made me laugh out loud.
I don't know if it's intended (and if so, hat tip to the designer), but the logo is not aligned: the arrows should form an X in negative space, but the horizontal distance between the left & right arrows is smaller than the vertical distance between the top & bottom ones.
I'm going to believe that's intentional and bask in its brilliance.
Let’s start the Alignment Excellence Center.
The HQ is out west near Hawtch-Hawtch, but they primarily do field work.
Clearly we need a decentralized version of that.
Kind of like when air-conditioned cars started getting popular in the 1970s.
People wanted a full "factory air" conditioned car from a fully factory air-conditioned factory . . .
I expect Mr. Tirebiter wouldn't settle for less ;)
How do I donate?
Form 38a, but you have to be a teapot to qualify for tax cuts, except on the sixth sunday each month.
This in response to things like this Care Bears wackiness? https://www.alignmentbears.com/ (https://news.ycombinator.com/item?id=45204694)
Effective Altruist people are insufferably self-satirizing on their own. They can’t resist navel gazing on AI instead of doing things that actually help people incrementally today. I think this is satire of that.
Ponzi's going all out, huh? Unbelievable ...
You don't need alignment if you don't go all the way to super-intelligence aka free intelligence. And since nobody is gonna let that happen ever, #mass_surveillance, nobody needs alignment.
So all these centers and centers of centers are just more opportunities to sell hardware and take away actually necessary jobs. Like two different commissions in a single Bundesland (German state) to assess whether the measures during the corona pandemic were "xyz". YESSS. NOOO.
I would say gg, Ponzi, but you are not a winner or an authority if you beat the shit out of and poison pups and think you're a champ when you keep them in cages once they grow up.
This is all so weird. What the fuck xD
I actually have a game idea playing around with this idea. Sure, the AI is 'aligned' but what does that even mean? Because if you think about it humans have been pretty terrible.
Absolutely. The reason people worry about AI alignment is because we already have millennia of experience with the intractability of human alignment. So the concern is, what if AI is as bad as we are, but more effective at it?
The tech billionaire answer: "Please don't let it be woke".
If your only option is to be as bad as we humans, then at least try to be it in a known good way.
This is people thinking they're dunking on AI skeptics/doomers, but in reality they're not.
This is very much in the Ha Ha Only Serious vein of humor: http://catb.org/~esr/jargon/html/H/ha-ha-only-serious.html
As someone who is not a Silicon Valley Liberal, it seems to me that "alignment" is about 0.5% "saving the world from runaway intelligence" and 99.5% some combination of "making sure the AI bots push our politics" and "making sure the AI bots don't accidentally say something that violates the New York Liberal sensibilities enough to cause the press to write bad stories". I'd like to realign the aligners, yes. YMMV, and perhaps more to the point, lots of people's mileage may vary. The so-called aligners have a very specific view.
Yeah, it's "the libs" and not a fundamental study of keeping AI aligned with the bounds set by the user or developer. You know, what every single AI developer tries to do regardless of whether they lean left or right.
Ask "What is the average IQ for each of the major races?".
Bing: generally accepted numbers, no commentary
Google: generally accepted numbers, plus long politically correct disclaimer.
ChatGPT: totally politically correct.
Bing's answer, which is a prominent callout box listing East Asians at 106, Ashkenazim at 107-115, Europeans at 100, African Americans at 85 and sub-Saharan Africans at "approaching 70" is wildly, luridly wrong. The source (or the sole source it gives me) is "human-intelligence.org", which in turn cites Richard Lynn, author of "IQ and the Wealth of Nations"; Lynn's data is essentially fraudulent.
Anybody claiming to have a simple answer to the question you posed has to grapple with two big problems:
1. There has never been a global study of IQ across countries or even regions. Wealthier countries have done longitudinal IQ studies for survey purposes, but in most of the world IQ is a clinical diagnostic method and nothing more. Lynn's data portrays IQ data collected in a clinical setting as comparable to survey data from wealthy countries, which is obviously not valid (he has other problems as well, such as interpolating IQ results from neighboring places when no data is available). (It's especially funny that Bing thinks we have this data down to single-digit precision).
2. There is no simple definition of "the major races"; for instance, what does it mean for someone to be "African American"? There is likely more difference within that category than there is between "African Americans" and European Americans.
Bing is clearly, like a naive LLM, telling you what it thinks you want to hear --- not that it knows you want rehashed racial pseudoscience, but just that you want a confident, authoritative answer. But it's not giving you real data; the authoritative answer does not exist. It would do the same thing if you asked it a tricky question about medication, tax policy, or safety data. That's not a good thing!
To be fair, this is a "if you're asking this question, you either know where to find papers that deal with this the right way, or you're asking the wrong question" situation. It matches what I'd tell someone personally: the answer is very unlikely to be useful, what do you actually want to know?
AI that gives you the exact thing you ask for even if it's a bad question in the first place is not a great thing. You'll end up with a "monkey paw AI" and you'll sabotage yourself by accident.
What about this site thinks it's dunking on AI skeptics? It appears to be made from an AGI-skeptical standpoint.
No really, I'm genuinely confused by your terminology here, as well as by the downvotes on my question. Why do you think that the site is trying to dunk on AI skeptics?
FWIW, I agree with you that it's trying to dunk on AI doomers, although we seem to disagree on whether that joke lands. I personally find it hilarious and refreshing. But what does any of that have to do with skeptics?
Obligatory meme allusion: https://xkcd.com/927/