You're better off not loading the question with something like "Do you simply consider it someone else's job to worry about risks like that?" Who would want to talk to you when it sounds like you're not asking but looking to berate?
I removed that sentence (from the end of my post). Thanks for the feedback. I'll try to calm myself down now.
Your question still implies a hysterical interpretation of a nonexistent feature set. I think you will struggle to foster a serious discussion without actually describing what you're worried about. "AI kills people" is not any more of a serious concern than household furniture becoming sentient and resolving to form an army that challenges humankind.
You have to describe what the actual threat is for us to treat it as a pressing issue. 99% of the time, these hypotheticals end with human error, not rogue AI.
Robotics progress is a lot slower than progress in disembodied AI, and disembodied AI trying to kill humanity is like naked John von Neumann trying to kill a tiger in an arena. IMO we need to figure out AI safety before physically embodied AI (smart robots) becomes routine, but to me safety in that context looks more like traditional safety-critical and security-critical software development.
I'm aware of the argument that smart enough AI can rapidly bootstrap itself to catastrophically affect the material world:
https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-trans...
"It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a file. Like, smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second."
As someone with a strong background in chemistry, I find this just makes me skeptical of Yudkowsky's groundedness as a prognosticator. Biological life is not compatible with the known synthesis conditions for diamond, and even a superintelligence may not find workarounds. I am even more skeptical that an AI could make such advances and turn them into working devices purely by pondering/simulation, i.e. without iterative laboratory experiments.
1. If AI is latently capable of killing people using just computing power, then it was going to happen regardless. If the AI requires assistance from human actors, then it's basically indistinguishable from human actors acting alone without AI. If you are a human who puts an AI in charge of a human life, you are liable for criminal negligence.
2. You cannot stop AI research over a bunch of unknowns. People will not be afraid of an immaterial threat that has no plausible way to harm them beyond generating text. Even if that text has access to the internet, the worst that could happen has probably already been explored by human actors. No AI was ever needed to perpetrate catastrophes like Stuxnet, the sarin gas attacks, or 9/11.
3. Some people (like myself) have been following this space since Google published BERT. In that time, I have watched LLMs go from "absolutely dogshit text generator" to "slightly less dogshit text generator". It sounds to me like you've drunk Sam Altman's Kool-Aid without realizing that Sam is bullshitting too.