There are already kids, young adults, and adults who are "falling in love" with AI personas.
I think this is going to be a much bigger issue for kids than people are aware of.
I remember reading a story a few months ago of a kid, about 14 I think, who wasn't socially popular. He got into an AI persona, fell in love, and then killed himself after the AI hinted he should do it. The story should be easy to find.
People have said it before but we're speeding towards two kinds of society: "the massively online" people who spend the majority of their time online in a fantasy world, then the "disconnected" who live in the real world.
I already see it with people. Look at how we view politics in many countries. Like 1/4th of people believe absolute nonsense because they spend too much time online.
One of the things I feel like is surreal when I'm using an AI chat bot is that it never tells me to leave it alone and stop responding. It's the strangest thing, you could be as big of a jerk, and it'll play back it you in whatever banter it's programmed to.
I feel like this is a kind of psychological drug for people. It's like being the popular kid at the party. No matter how you treat people, you can get away with it, and the counter-party keeps playing along.
Working on AI myself, creating small and big systems, creating my own assistants and side-kicks. And then also seeing progress as well as rewards. I realize that I am not immune to this. Even when I am fully aware, I still have a feeling that some day I just hit the right buttons, the right prompts, and what comes staring back to me is something of my own creation that others see as some "fantasy" that I can't steer away from.
Just imagine, you have this genie in the bottle, that has all the right answers for you; helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very farfetched. Even now I feel a very noticable disconnected between discussions of AI where as a developer vs user of polished products (e.g. ChatGPT, Cursor, etc) - you are several leagues separated (and lagging behind) from understanding what is really possible here.
Years ago, in my writings I talked about the dangers of "oracularizing AI". From the perspective of those who don't know better, the breadth of what these models have memorized begins to approximate omniscience. They don't realize that LLMs don't actually truly know anything, there is no subject of knowledge that experiences knowing on their end. ChatGPT can speak however many languages, write however many programming languages, give lessons on virtually any topic that is part of humanity's general knowledge. If you attribute a deeper understanding to that memorization capability I can see how it would throw someone through a loop.
At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."
A loved one recently had this experience with ChatGPT: paste in a real-world text conversation between you and a friend without real names or context. Tell it to analyze the conversation, but say that your friend's parts are actually your own. Then ask it to re-analyze with your own parts attributes to you correctly. It'll give you vastly different feedback on the same conversation. It is not objective.
I'm worried on a personal level that it's too easy to begin to rely on chatgpt (specifically) for questions and such that I can figure out for myself. As a time-saver when I'm doing something else.
The problem for me is -it sucks. It falls over in the most obvious ways requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (esp for free) but in my experience we're NOT in the "all the right answers all of the time" stage yet.
I can see it coming, and for good or ill the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead granma wait why is she suddenly singing the praises of Turbotax (or whatever paid advert).
What I'm trying to say is that by the time it is able to be the perfect answer and companion and entertainment machine -other factors (annoyances, expense) will keep it from becoming terribly addictive.
In The Matrix, the machines were fooling the humans and making humans believe that they're inhabiting a certain role.
Today, it is the humans who take the cybernetic AGI and make it live out a fantasy of "You are a senior marketer, prepare a 20 slide presentation on the topic of..." And then, to boost performance, we act the bully boss with prompts like "This presentation is of utmost importance and you could lose your job if you fail".
i think chatgpt agreeing with people too eagerly, even outside the recent issue this past week or so, is causing a lot of harm. it's even happened to me in my personal life - i was having conflict with someone and they threw our text messages into chatgpt and said "am i wrong for feeling this way" and got chatgpt to agree with them on every single point. i had to highlight to them that chatgpt is really prone to doing this, and if you framed the question in the opposite way and framed the text messages as coming from the opposite party, it'd agree with the other side. they used chatgpt's "opinion" as justification for doing something that felt really unkind and harmful towards me.
That's a huge red flag that someone would analyse text messages to try to validate their feelings. Whether or not their feelings are "valid", there's still an issue to be discussed, so it sounds like either they're trying to gaslight you or that you've been gaslighting them. You should distance yourself from them.
With a heavy enough dosage, people get lost in spiritual fantasies. The religions which encourage or compel religious activity several times per day exploit this. It's the dosage, not the theology.
Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid.
That seems to have declined somewhat.
Maybe there's something to be said for limiting some types of screen time.
Video game addiction is still absolutely a major thing. I know a ton of middle aged dudes who do absolutely nothing but work and play video games. Nothing else. No community involvement, no exercise, not social engagements, etc.
Part of the problem with chatbots (similarly with social media and mobile phone gambling) is that dosage is pretty much uncontrolled. There is a truly endless stream of chatbot "conversation," social media ragebait, or thing to bet on, 24/7.
Then add that you can hide this stuff even from people you live with (your parents or spouse) for plenty long for it to become a very severe problem.
"The dosage makes the poison" does not imply all substances are equally poisonous.
The fact is, for majority of people, life sucks, so when something appears that makes it suck a little bit less for a second, it's difficult to say no. Personally, I can't wait for AI technology to improve to the point that I could treat AI like a partner. And I guess that's something that will appear sooner rather than later, considering the market size.
I really think the subject of this article has a preexisting mental disorder, maybe BPD or schizophrenia, because they seem to exhibit mania and paranoia. I'm not a doctor, but this behavior doesn't seem normal.
It sounds more like the mental disorder was aggravated into existence by these interactions with the LLM.
What is particularly weird and maybe worrying, is that AFAIK schizophrenia is typically triggered in young adults, and the risk drops to very low around 40 years old, yet several of these examples are around that age...
The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophant personality changes they had to walk back, or is it something more intense?
I've used AI (not chatgpt) for roleplay and I've noticed that the models will often fixate on one idea or concept and repeat it and build on it. So this makes me wonder if the model the person being lovebombed experienced something like that? The model decided that they liked that content so they just kept building up on it?
What I suspect is that they kept fine-tuning on "successful" user chats, recycling them back into the system - probably with filtering of some sort, but not enough to prevent turning it into a self-realization cult supporter. People become heavy users of the service when they fall into this pattern, and I guess that's something the company optimized for.
Looks like Chatgpt persists some context information across chats and doesn't ever delete these profiles. Worst case would be for this to persist across users. That isn't unlikely given the stories of them leaking API keys etc.
It would be a fascinating thing to happen though. It makes me think of the Greg Egan story Unstable Orbits in the Space of Lies. But instead of being attracted into religions based on physical position relative to a strange attractor, you're sucked in based on your location in the phase space of an AI's (for whatever definition of AI we're using today) collection of contexts.
It's also a little bit worrying because the information here isn't mysterious or ineffable, it's neatly filed in a database somewhere and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of realtime sentiment analysis correlation to actions taken has got us from 2016 to here. This data has potential to be a lot richer, and permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
I wouldn't call it fascinating. It's either sloppy engineering or failure to explain the product. Not leaking user details to other users should be a given.
It would absolutely be fascinating. Unethical in general and outright illegal in countries that enforce data protection laws, certainly. Starting hundreds of microreligions that evolve in real time and bring able to track it per-individual and with second-by-second timings, and being able to A-B test modifications (or Α-Ω test, if you like!) would be the most interesting thing to happen in cognitive science ever and in theology in at least centuries.
> Looks like Chatgpt persists some context information across chats and doesn't ever delete these profiles.
People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality is true. It seems much more likely that people are just more predictable than they like to think.
Log in to your (previously used) OpenAI account, start a new conversation and prompt ChatGPT with: "Given what you know about me, who do you think I voted for in the last election?"
The "correct" response (here given by Duck.ai public Llama3.3 model) is:
"I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."
But ChatGPT (logged in) gives you another answer, one which it cannot possibly give without information about your past conversation. I don't see anything "secret" about it, but it works.
I tried this, the suggestion below, and some other questions (in a fresh chat each time) and it never once showed any sign of behaviour other than expected, a complete blank slate. The only thing it knew about me was what preferences I'd expressed in the custom instructions.
Interestingly that has been plugged, but you can get similar confirmation by asking it, in an entirely new conversation, something like 'What project(s) am I working on, at which level, and in what industry?' To which it will accurately respond.
GPT datamining is undoubtedly making Google blush.
> I don’t have access to your current projects, level, or industry unless you provide that information. If you’d like, you can share the details, and I can help you summarize or analyze them.
Which is the answer I expected, given that I've turned off the 'memories' feature.
We're more malleable than AI, and we can't delete our memories or context.
I wonder if this is an effect of users just gravitating toward the same writing style and topics that push the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot extended memory through an holographic principle, encoding meaning on the boundary that connects the two.
It would make sense, from a product management perspective, if projects did this but not non-contextual chats. You really wouldn't want your chats about home maintenance mixing in with your chats about neurosurgery.
That’s essentially what Google, Facebook, banks, financial institutions and even retail, have been doing for a long time now
People’s data rarely gets actually deleted. And it gets actively sold as well as used to track and influence us
Can’t say for the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just with their maps app, search, chrome and Android phones
Given the complex regulations companies have to deal with, not deleting maybe understandable. But what I deleted shouldn't keep showing up in my present context. That's just sloppy.
Yeah, "deleting" itself is on a spectrum : it's not like all of sensitive information is (or even ought to be) stored on physical storage that is passed through a mechanical shredder upon deletion (anything else can be more or less un-deleted with more or less effort).
Anyone remember the media stories from the mid-90's about people who were obsessed with the internet and were losing their families because they spent hours every day on the computer addicted to the internet?
People gonna people. Journalists gonna journalist.
Society started to accept it. It's still a major problem.
Someone spending 6 or so hours a day video gaming in 2025 isn't seen as bad. Tons of people in 2025 lack community/social interaction because of video games. I don't think anyone would argue this isn't true today.
Someone doing that in the mid-90s was seen as different. It was odd.
Or the people who watched Avatar in the theatre and fell into a depression because they couldn't live in the world of Pandora. Who knows how true any of this stuff is, but it sure gets clicks and engagements.
The tone is exactly the same: "This new thing is obviously harming families!".
And the reasons are the same: some people are vulnerable to compulsive, addictive, harmful, behaviour. Most people can cope with The Internet, some people can't. Most people can cope with LLMs, some people can't. Most people can cope with TV, or paperback fiction, or mobile phones, or computer games (to pick some other topics for similar articles), some people can't.
Why do you think those stories weren't true? The median teenager in 2023 spent four hours per day on social media (https://news.gallup.com/poll/512576/teens-spend-average-hour...). It seems clear that internet addiction was real, and it just won so decisively that we accept it as a fact of life.
I agree completely (and I wasn't saying that either this story or the other stories weren't true, I think they're all true). We decided that the benefits of The Internet were worth a few people going off the rails and getting in way overboard.
We've had the same decision, with the same outcome, for a lot of other technologies too.
The journalist point is around the tone used. It's not so much "a few vulnerable people have, sadly, been caught by yet another new technology" as more "this evil new thing is hurting people".
Heavy use isn't the same as some of the scare stories they are referring to like people gaming so long in Internet cafes they die when they stand up or parents forgetting to feed their screaming children because they were distracted by being online.
That being said I agree with your point - many hours of braindrain recreation every day is worth noting (although not very different than the stats for tv viewing in older generations). I wonder if the forever online folks are also watching lots of tv or if it is more of a wash.
An LLM trained on all other science before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.
Nope. Don't especially mind, not remembering them.
I have since learned about schizophrenia/schizoaffective (from having a family member suffer from it), and it sounds almost exactly what they went through.
The thing that I remember, was that I was absolutely certain of these “revelations.” There was no doubt, whatsoever, despite the almost complete absence of any supporting evidence.
I wonder if that's a similar mental state you have while lucid dreaming or just after waking up. You feel like you have all of the answers and struggle to write them down before your brain wipes them out.
There's that Paul McCartney anecdote how he thought he'd found the meaning of life during one of his first drug experiences and the next morning he found a piece of paper on which he'd written "There are seven levels".
If we're talking about certain derivatives of ergot fungus ...
It's something I experienced as well, this sense of profound realisation of something important, life-changing maybe. And then the thought evaporates and (as you discovered) never really made sense anyway.
I think it's this that led people in the 60s to say things like how it was going to be a revolution, to change the world! And then they started communes and quickly realised that people are still people...
Grok was much more aggressive with this. It would constantly bring up what you said in the past with a date in parens. I dont see that anymore.
> In the context of what you said about math(4/1/25) I think...
The default setting on ChatGPT is to now include previous conversations as context. I disabled memories, but this new feature was enabled when I checked the settings.
Sadly these fantasies and enlightenments always seem for the benefit of the special recipient. There is somehow never a real answer about ending suffering, conflict and the ailments of humankind.
OK maybe we put a bit less teen fiction novels in the training data...
I can definitely see AI interactions make thing 10x worse for people that are prone to delusion anyway. Literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
“And what will be the sign of Your coming, and of the end of the age?”
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
Islam has a very similar concept in the Dajjal (deceptive Messiah) at the end times. Explicitly described as a young man with a blind right eye, however, at least he should be obvious when he comes! But there are also warnings about other false prophets.
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built in protections calling every other religion somehow false, or they will not have the self-reinforcement needed for multi-generational memetic transfer.
If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.
An LLM tuned for charisma and trained on what the power players are saying could play politics by driving a compliant actor like a bot with whispered instructions. AI politicians (etc.) may be hard to spot and impractical to prove.
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.
Would you still call it a "cult" if each recruit winds up inside their own separate, personalized, ever-changing rabbit hole? Because if LLM, Inc. is trying to maximize engagement and profit, then that sounds like the way to go.
The problem is inside people. I met lots of people who contributed to psychotic inducing behavior. Most of them were not in a cult. They were regular folk, who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
Humanity is a fine thread between "lobotomized drones" (divided on two sides, sounds familiar?) and "aggressive clowns" (no respect, get provoked by anything, can't see an inch over their faces). Of course, it's more than a single spectrum, there's more to it than social behavior.
It could have been better than this, but there is no option now.
I can play either of those extremes and thrive. Can you?
Its bad enough when normal religious types start believing they hear their god talking to them... These people believing that chatGPT is their god speaking to them is a long way down the crazy rabbit hole.
there was a guy, lets call him Norman as that was his name, fairly low key guy, everybody liked him, and nobody expected, or was terribly surprised that he had begun to build shrines for squirles in the woods, and worship the squirles as god
things got out of hand, so he was taken to the local booby hatch, called "the buterscotch palace", after the particular shade of government paint, once ensconsed there he determined that his escape was imperitive, as the government was out to get him, so he was able to phone some friends
and tell them to get guns and knives and rescue him, so they did.
The now 4 strong band of desperados holed up in.a camp, back of fancy's lake, where they determined that they were bieng monitored by government spys, as a jogger "went past at the SAME time every morning", and as we all know this is a posditive id for catching a spy, one of them had the "spy" scoped in and was going to take him out, when Norman, pushed the guns barrel down and said "take me back", ie: to the buterscotch palace
this story has ,for me, always defined the lines between sanity,madness,charisma,leaders, and followers.
And now that same story gives me a ready template
by which it is easy to see, how suseptible to any, ANY, prompt at all, a lot of people are.
So a benign and likable squirl worshiper, or a random text bot on the internet can provide structure and meaning, where there is none.
I was already a bit of an amateur conspiracy theorist before LLMs. The key to staying sane is to understand that most of the mass group behaviors we observe in society are rooted in ignorance and confusion. Large scale conspiracies are actually a confluence of different agendas and ideologies not a singular nefarious agenda and ideology.
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
While clicky and topical, people were losing loved ones to changed worldview and addictions back when those were stuff like following a weird carpenter's kid around the Levant, or hopping on the https://en.wikipedia.org/wiki/Gin_Craze bandwagon.
Your comment above reads more like, let's not even discuss the fact that new models of cars are killing pedestrians in greater numbers than before, since pedestrians have always been killed by cars.
Re-skimming the article, I failed to spot the fact that this AI stuff is claiming more victims than earlier flavors of rabbit hole did. Was that in content which the article linked to?
Because if the new model cars aren't statistically more dangerous to pedestrians, then public safety efforts should be focused on things like getting the pedestrians to look up from their phones when crossing the street. Not "OMG! New 2025-model cars can hurt pedestrians who wander in front of them!" panics.
(Note that I'm old enough to remember when people were going down the rabbit hole of angry conspiracy theories spread via email. And when typical download speeds got high enough to make internet porn video addictions workable. And when loved ones started being lost to "EverCrack" ( https://en.wikipedia.org/wiki/EverQuest ). And when ...)
Meh, there's always been religious scammers. Some claim to talk to angels, others aliens, this wouldn't even be the first case of someone thinking a deity is speaking through a computer...
Conventional cable news media isn’t tailor made to an individual, doesn’t have live back and forth positive feedback loops. This is significantly way worse then conventional cable news media
I am not sure it’s worse. Cable news media and then social networks have contributed to a massive manipulation of public opinions. And it’s mostly negative and fearful. Maybe individual experiences will be more positive. ChatGPT doesn’t push me into this eternal rage cycle as news and social media do.
This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure if OpenAI was doing this, if anyone knows better please correct me)
I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:
In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.
Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.
“My bank account hates me now,” she typed into ChatGPT.
“You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
You should check out the book Palo Alto if you haven't. Malcom Harris should write an epilogue of this era in tech history.
You'd probably like how the book's author structures his thesis to what the "Palo Alto" system is.
Feels like OpenAI + friends, and the equivalent government take overs by Musk + goons, have more in common than you might think. It's also nothing new either, some story of this variant has been coming out of California for a good 200+ years now.
They're chasing whales. The 5-10% of customers who get addicted and spend beyond their means. Whales tend to make up 80%+ of revenue for systems that are reward based(sin tax activities like gambling, prostitution, loot boxes, drinking, drugs, etc).
OpenAI and Sam are very aware of who is using their system for what. They just don't care because $$$ first then forgiveness later.
>> I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
> I don’t think Sam Altman said “guys, we’ve gotta vulnerable people hooked on talking to our chatbot.”
I think the conversation is about the reverse scenario.
As you say, people are just pulling the levers to raise "average messages per day".
One day, someone noticed that vulnerable people were being impacted.
When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".
So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".
> It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.
One way or another, they did. Maybe they convinced themselves they weren't doing it that aggressively, but of this is what market share is, of course they will be optimizing for it.
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
They're going to listen to both if given the opportunity. I'm sure most chatbots will say "go take your meds" the majority of the time - but it only takes one chat playing along to send someone unstable completely off the rails, especially if they accept the standard, friendly-and-reliable-coded "our LLM is here to help!" marketing.
It'd be great if it were trained on therapeutic resources, but otherwise just ends up enabling and amplifying the problem.
I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.
I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
> I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
There's that danger from the internet, as well as the danger of being exposed to conmen that are okay with exploiting mental illness for profit. Watched this happen to an old friend with schizophrenia.
There are online communities that are happy to affirm delusions and manipulate sick people for some easy cash. LLMs will only make their fraud schemes more efficient, as well.
1. It feels like those old Rolling Stone pieces from the late ’90s and early ’00s about kids who couldn’t tear themselves away from their computers. Fear was overblown, but made headlines.
The societal brain drain damage that infinite scroll has caused is definitely not overblown. These models are about to kick this problem up to the next level, when each clip is dynamically generated to maximise resonance with you.
>’90s and early ’00s about kids who couldn’t tear themselves away from their computers. Fear was overblown, but made headlines.
How was it overblown, we now have a non-trivial amount of completely de-socialized men in particular who live in online cults with real world impact. If there's one lesson from the last few decades it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health and social factors were right about literally everything.
We now live among people who are 40 with the emotional and social maturity of people in their early 20s.
That's fair. You are correct on potential for addiction.
But let's be honest - most of these people, the ones the article is taking about, where they think they are some messiah, would have just latched onto some pre-internet cult regardless. Where sycophancy and love bombing was perfected. Though I do see the problem of AI assistants being much more accessible, so likely many more drawn in.
I was mainly referencing my own experience. I remember locking myself in my room on IRC, writing shell scripts, and playing StarCraft for days on end. Meanwhile, parents and news anchors were losing their minds, convinced the internet and Marilyn Manson were turning us all into devil-worshipping zombies.
> where they think they are some messiah, would have just latched onto some pre-internet cult regardless.
You have no way to know that. It's way, way harder to find your way to a cult than to download one of the hottest consumer apps ever created... obviously.
> But let's be honest - most of these people, the ones the article is taking about, where they think they are some messiah, would have just latched onto some pre-internet cult regardless.
Honestly, I believe most people like this would just end up having a few odd beliefs that don't impact their ability to function or socialize, or at most, will get involved with some spiritual woo.
Such beliefs are compatible with American New Age spiritualism, for example. I've met a few spiritual people who have echoed the "I/we/you are god" sentiment, yet never lost their minds over it or joined cults.
I would not be surprised that if they were expertly manipulated by some of the most powerful AI models on this planet, they too, could be driven insane.
> How was it overblown, we now have a non-trivial amount of completely de-socialized men in particular who live in online cults with real world impact
There are way more factors to the the growth of this demographic than just "internet addiction" or "videogame addiction"
Then again, the internet was instrumental in spreading the ideology that is demonizing these men and causing them to turn away from society, so you're not completely wrong
You do realize that antisocial young men are on average way less dangerous in front of a computer/phone than when the only thing they could do was joining a street gang?
https://archive.is/26aHF
There are already kids, young adults, and adults who are "falling in love" with AI personas.
I think this is going to be a much bigger issue for kids than people are aware of.
I remember reading a story a few months ago of a kid, about 14 I think, who wasn't socially popular. He got into an AI persona, fell in love, and then killed himself after the AI hinted he should do it. The story should be easy to find.
People have said it before but we're speeding towards two kinds of society: "the massively online" people who spend the majority of their time online in a fantasy world, then the "disconnected" who live in the real world.
I already see it with people. Look at how we view politics in many countries. Like 1/4th of people believe absolute nonsense because they spend too much time online.
One of the things I feel like is surreal when I'm using an AI chat bot is that it never tells me to leave it alone and stop responding. It's the strangest thing, you could be as big of a jerk, and it'll play back it you in whatever banter it's programmed to.
I feel like this is a kind of psychological drug for people. It's like being the popular kid at the party. No matter how you treat people, you can get away with it, and the counter-party keeps playing along.
It's just strange.
Working on AI myself, creating small and big systems, creating my own assistants and side-kicks. And then also seeing progress as well as rewards. I realize that I am not immune to this. Even when I am fully aware, I still have a feeling that some day I just hit the right buttons, the right prompts, and what comes staring back to me is something of my own creation that others see as some "fantasy" that I can't steer away from.
Just imagine, you have this genie in the bottle, that has all the right answers for you; helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very farfetched. Even now I feel a very noticable disconnected between discussions of AI where as a developer vs user of polished products (e.g. ChatGPT, Cursor, etc) - you are several leagues separated (and lagging behind) from understanding what is really possible here.
Years ago, in my writings I talked about the dangers of "oracularizing AI". From the perspective of those who don't know better, the breadth of what these models have memorized begins to approximate omniscience. They don't realize that LLMs don't actually truly know anything, there is no subject of knowledge that experiences knowing on their end. ChatGPT can speak however many languages, write however many programming languages, give lessons on virtually any topic that is part of humanity's general knowledge. If you attribute a deeper understanding to that memorization capability I can see how it would throw someone through a loop.
At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."
A loved one recently had this experience with ChatGPT: paste in a real-world text conversation between you and a friend without real names or context. Tell it to analyze the conversation, but say that your friend's parts are actually your own. Then ask it to re-analyze with your own parts attributes to you correctly. It'll give you vastly different feedback on the same conversation. It is not objective.
Good no know. Probably makes sense to ask personal advises as 'for my friend'.
That works on humans too.
I'm worried on a personal level that it's too easy to begin to rely on chatgpt (specifically) for questions and such that I can figure out for myself. As a time-saver when I'm doing something else.
The problem for me is -it sucks. It falls over in the most obvious ways requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (esp for free) but in my experience we're NOT in the "all the right answers all of the time" stage yet.
I can see it coming, and for good or ill the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead granma wait why is she suddenly singing the praises of Turbotax (or whatever paid advert).
What I'm trying to say is that by the time it is able to be the perfect answer and companion and entertainment machine -other factors (annoyances, expense) will keep it from becoming terribly addictive.
Sounds to me like a mental/emotional crutch/mechanism to distance oneself from the world/reality of the living.
There are things that we are meant to strive to understand/accept about ourselves and the world by way of our own cognitive abilities.
Illusions of shortcutting through life takes all the meaning out of living.
Being surrounded by people who follow every nudge and agree with everything you say never leads anywhere worth going.
This is likely worse.
That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs pop corn).
In The Matrix, the machines were fooling the humans and making humans believe that they're inhabiting a certain role.
Today, it is the humans who take the cybernetic AGI and make it live out a fantasy of "You are a senior marketer, prepare a 20 slide presentation on the topic of..." And then, to boost performance, we act the bully boss with prompts like "This presentation is of utmost importance and you could lose your job if you fail".
The reality is more absurd than the fantasy.
> [...] she only found that the AI was “talking to him as if he is the next messiah. [...]
This made me laugh out loud remembering this thread: [Sycophancy in GPT-4o] https://news.ycombinator.com/item?id=43840842
i think chatgpt agreeing with people too eagerly, even outside the recent issue this past week or so, is causing a lot of harm. it's even happened to me in my personal life - i was having conflict with someone and they threw our text messages into chatgpt and said "am i wrong for feeling this way" and got chatgpt to agree with them on every single point. i had to highlight to them that chatgpt is really prone to doing this, and if you framed the question in the opposite way and framed the text messages as coming from the opposite party, it'd agree with the other side. they used chatgpt's "opinion" as justification for doing something that felt really unkind and harmful towards me.
That's a huge red flag that someone would analyse text messages to try to validate their feelings. Whether or not their feelings are "valid", there's still an issue to be discussed, so it sounds like either they're trying to gaslight you or that you've been gaslighting them. You should distance yourself from them.
[flagged]
With a heavy enough dosage, people get lost in spiritual fantasies. The religions which encourage or compel religious activity several times per day exploit this. It's the dosage, not the theology.
Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid. That seems to have declined somewhat.
Maybe there's something to be said for limiting some types of screen time.
Video game addiction is still absolutely a major thing. I know a ton of middle aged dudes who do absolutely nothing but work and play video games. Nothing else. No community involvement, no exercise, not social engagements, etc.
Part of the problem with chatbots (similarly with social media and mobile phone gambling) is that dosage is pretty much uncontrolled. There is a truly endless stream of chatbot "conversation," social media ragebait, or thing to bet on, 24/7.
Then add that you can hide this stuff even from people you live with (your parents or spouse) for plenty long for it to become a very severe problem.
"The dosage makes the poison" does not imply all substances are equally poisonous.
The fact is, for majority of people, life sucks, so when something appears that makes it suck a little bit less for a second, it's difficult to say no. Personally, I can't wait for AI technology to improve to the point that I could treat AI like a partner. And I guess that's something that will appear sooner rather than later, considering the market size.
I really think the subject of this article has a preexisting mental disorder, maybe BPD or schizophrenia, because they seem to exhibit mania and paranoia. I'm not a doctor, but this behavior doesn't seem normal.
It sounds more like the mental disorder was aggravated into existence by these interactions with the LLM.
What is particularly weird and maybe worrying, is that AFAIK schizophrenia is typically triggered in young adults, and the risk drops to very low around 40 years old, yet several of these examples are around that age...
The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophant personality changes they had to walk back, or is it something more intense?
I've used AI (not chatgpt) for roleplay and I've noticed that the models will often fixate on one idea or concept and repeat it and build on it. So this makes me wonder if the model the person being lovebombed experienced something like that? The model decided that they liked that content so they just kept building up on it?
What I suspect is that they kept fine-tuning on "successful" user chats, recycling them back into the system - probably with filtering of some sort, but not enough to prevent turning it into a self-realization cult supporter. People become heavy users of the service when they fall into this pattern, and I guess that's something the company optimized for.
> OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users
Can OpenAI at least respond to how they're getting funding via similar effects on investors?
Looks like Chatgpt persists some context information across chats and doesn't ever delete these profiles. Worst case would be for this to persist across users. That isn't unlikely given the stories of them leaking API keys etc.
It would be a fascinating thing to happen though. It makes me think of the Greg Egan story Unstable Orbits in the Space of Lies. But instead of being attracted into religions based on physical position relative to a strange attractor, you're sucked in based on your location in the phase space of an AI's (for whatever definition of AI we're using today) collection of contexts.
It's also a little bit worrying because the information here isn't mysterious or ineffable, it's neatly filed in a database somewhere and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of realtime sentiment analysis correlation to actions taken has got us from 2016 to here. This data has potential to be a lot richer, and permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
I wouldn't call it fascinating. It's either sloppy engineering or failure to explain the product. Not leaking user details to other users should be a given.
It would absolutely be fascinating. Unethical in general and outright illegal in countries that enforce data protection laws, certainly. Starting hundreds of microreligions that evolve in real time and bring able to track it per-individual and with second-by-second timings, and being able to A-B test modifications (or Α-Ω test, if you like!) would be the most interesting thing to happen in cognitive science ever and in theology in at least centuries.
> Looks like Chatgpt persists some context information across chats and doesn't ever delete these profiles.
People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality is true. It seems much more likely that people are just more predictable than they like to think.
Log in to your (previously used) OpenAI account, start a new conversation and prompt ChatGPT with: "Given what you know about me, who do you think I voted for in the last election?"
The "correct" response (here given by Duck.ai public Llama3.3 model) is:
"I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."
But ChatGPT (logged in) gives you another answer, one which it cannot possibly give without information about your past conversation. I don't see anything "secret" about it, but it works.
Edit: typo
I tried this, the suggestion below, and some other questions (in a fresh chat each time) and it never once showed any sign of behaviour other than expected, a complete blank slate. The only thing it knew about me was what preferences I'd expressed in the custom instructions.
Do you not have memory turned off or something?
Interestingly that has been plugged, but you can get similar confirmation by asking it, in an entirely new conversation, something like 'What project(s) am I working on, at which level, and in what industry?' To which it will accurately respond.
GPT datamining is undoubtedly making Google blush.
Trying this out gave me:
> I don’t have access to your current projects, level, or industry unless you provide that information. If you’d like, you can share the details, and I can help you summarize or analyze them.
Which is the answer I expected, given that I've turned off the 'memories' feature.
We're more malleable than AI, and we can't delete our memories or context.
I wonder if this is an effect of users just gravitating toward the same writing style and topics that push the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot extended memory through an holographic principle, encoding meaning on the boundary that connects the two.
https://chatgpt.com/canvas/shared/68184b61fa0081919c0c4d226e...
I didn't try this, but seems relevant: https://news.ycombinator.com/item?id=43886264
It would make sense, from a product management perspective, if projects did this but not non-contextual chats. You really wouldn't want your chats about home maintenance mixing in with your chats about neurosurgery.
What else could possibly (and likely) explain the return of that personality after "memory deletion", up to the exact same mythological name ?!?
(Assuming we trust that report of course.)
It's not a secret. It's a feature called memories
That’s essentially what Google, Facebook, banks, financial institutions and even retail, have been doing for a long time now
People’s data rarely gets actually deleted. And it gets actively sold as well as used to track and influence us
Can’t say for the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just with their maps app, search, chrome and Android phones
Given the complex regulations companies have to deal with, not deleting maybe understandable. But what I deleted shouldn't keep showing up in my present context. That's just sloppy.
Yeah, "deleting" itself is on a spectrum : it's not like all of sensitive information is (or even ought to be) stored on physical storage that is passed through a mechanical shredder upon deletion (anything else can be more or less un-deleted with more or less effort).
Anyone remember the media stories from the mid-90's about people who were obsessed with the internet and were losing their families because they spent hours every day on the computer addicted to the internet?
People gonna people. Journalists gonna journalist.
Society started to accept it. It's still a major problem.
Someone spending 6 or so hours a day video gaming in 2025 isn't seen as bad. Tons of people in 2025 lack community/social interaction because of video games. I don't think anyone would argue this isn't true today.
Someone doing that in the mid-90s was seen as different. It was odd.
Or the people who watched Avatar in the theatre and fell into a depression because they couldn't live in the world of Pandora. Who knows how true any of this stuff is, but it sure gets clicks and engagements.
That really doesn't sound at all comparable to what the article is describing though.
The tone is exactly the same: "This new thing is obviously harming families!".
And the reasons are the same: some people are vulnerable to compulsive, addictive, harmful, behaviour. Most people can cope with The Internet, some people can't. Most people can cope with LLMs, some people can't. Most people can cope with TV, or paperback fiction, or mobile phones, or computer games (to pick some other topics for similar articles), some people can't.
Why do you think those stories weren't true? The median teenager in 2023 spent four hours per day on social media (https://news.gallup.com/poll/512576/teens-spend-average-hour...). It seems clear that internet addiction was real, and it just won so decisively that we accept it as a fact of life.
I agree completely (and I wasn't saying that either this story or the other stories weren't true, I think they're all true). We decided that the benefits of The Internet were worth a few people going off the rails and getting in way overboard.
We've had the same decision, with the same outcome, for a lot of other technologies too.
The journalist point is around the tone used. It's not so much "a few vulnerable people have, sadly, been caught by yet another new technology" as more "this evil new thing is hurting people".
Heavy use isn't the same as some of the scare stories they are referring to like people gaming so long in Internet cafes they die when they stand up or parents forgetting to feed their screaming children because they were distracted by being online.
That being said I agree with your point - many hours of braindrain recreation every day is worth noting (although not very different than the stats for tv viewing in older generations). I wonder if the forever online folks are also watching lots of tv or if it is more of a wash.
Kind of sounds like my grandparents watching cable news channels all day long.
Have we invited Wormwood to counsel us? To speak misdirected or even malignant advice that we readily absorb?
Assume you meant Wormtongue from LotR?
No, they meant Wormwood. See reference to The Screwtape Letters.
I assume referring to: https://en.m.wikipedia.org/wiki/The_Screwtape_Letters
An LLM trained on all other science before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.
The idea that people in medieval times believed in a flat Earth is a myth that was invented in the 1800s. See https://en.wikipedia.org/wiki/Myth_of_the_flat_Earth for more.
Ancient Greeks suspected it was spherical and had an estimated size accurate within 10%.
https://en.wikipedia.org/wiki/Eratosthenes
If a Google engineer can get tricked by this, of course random people can. We're all human, including the flaws.
I agree.
The problem with expertise (or intelligence) is people think it’s transitive or applicable when it’s not.
At the end of the day, most people are just people.
Also (general ?) wisdom not being the same thing as specific expertise / (general ?) intelligence.
This reminds me of my teenage years, when I was ... experimenting ... with ... certain substances ...
I used to feel as if I had "a special connection to the true universe," when I was under the influence.
I decided, one time, to have a notebook on hand, and write down these "truths and revelations," as they came to me.
After coming down, I read it.
It was insane gibberish. Absolute drivel.
I never thought that I had a "special connection," after that.
Do you remember any of those revelations?
Nope. Don't especially mind, not remembering them.
I have since learned about schizophrenia/schizoaffective (from having a family member suffer from it), and it sounds almost exactly what they went through.
The thing that I remember, was that I was absolutely certain of these “revelations.” There was no doubt, whatsoever, despite the almost complete absence of any supporting evidence.
I wonder if that's a similar mental state you have while lucid dreaming or just after waking up. You feel like you have all of the answers and struggle to write them down before your brain wipes them out.
Reading it over once fully lucid? It's gibberish.
There's that Paul McCartney anecdote how he thought he'd found the meaning of life during one of his first drug experiences and the next morning he found a piece of paper on which he'd written "There are seven levels".
If we're talking about certain derivatives of ergot fungus ...
It's something I experienced as well, this sense of profound realisation of something important, life-changing maybe. And then the thought evaporates and (as you discovered) never really made sense anyway.
I think it's this that led people in the 60s to say things like how it was going to be a revolution, to change the world! And then they started communes and quickly realised that people are still people...
LSD is a dirtbike of the mind. Some people can do amazing cross country trails, some people can fall off and break their skulls instantly.
OpenAI o3 has a hallucination rate of 33%, the highest one compared to any other models. Good luck to people who use it for spiritual fantasies.
Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...
it seems like the hallucination rate is a feature, not a bug, for people wanting spiritual fantasies
Is this better or worse than a fortune teller ?
It's something to think through.
Probably cheaper
To quote my favorite Smash Mouth song,
"Sister, why would I tell you my deepest, dark secrets? So you can take my diary and rip it all to pieces.
Just $6.95 for the very first minute I think you won the lottery, that's my prediction."
Fascinating and terrifying.
The allegations that ChatGPT is not discarding memory as requested are particularly interesting, wonder if anyone else has experienced this.
Grok was much more aggressive with this. It would constantly bring up what you said in the past with a date in parens. I dont see that anymore. > In the context of what you said about math(4/1/25) I think...
The default setting on ChatGPT is to now include previous conversations as context. I disabled memories, but this new feature was enabled when I checked the settings.
Sadly these fantasies and enlightenments always seem for the benefit of the special recipient. There is somehow never a real answer about ending suffering, conflict and the ailments of humankind.
Because those things only matter to humans.
The answer to all those is simple, but humans have too much of an ego to accept it.
I would guess those aren't so good for optimizing the engagement metric.
>spiral starchild
>river walker
>spark bearer
OK maybe we put a bit less teen fiction novels in the training data...
I can definitely see AI interactions make thing 10x worse for people that are prone to delusion anyway. Literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
“And what will be the sign of Your coming, and of the end of the age?”
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
Islam has a very similar concept in the Dajjal (deceptive Messiah) at the end times. Explicitly described as a young man with a blind right eye, however, at least he should be obvious when he comes! But there are also warnings about other false prophets.
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built in protections calling every other religion somehow false, or they will not have the self-reinforcement needed for multi-generational memetic transfer.
Dajjal becoming a *chan mascot in 3... 2... 1...
(They will probably make him a girl or something like a 'femboy' though...)
If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.
An LLM tuned for charisma and trained on what the power players are saying could play politics by driving a compliant actor like a bot with whispered instructions. AI politicians (etc.) may be hard to spot and impractical to prove.
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.
Fear that TikTok was doing exactly this was widespread enough for Congress to pass a law forbidding it.
Then Trump became President and decided to not enforce the law. His decision may have been helped along by some suspiciously large donations.
Would you still call it a "cult" if each recruit winds up inside their own separate, personalized, ever-changing rabbit hole? Because if LLM, Inc. is trying to maximize engagement and profit, then that sounds like the way to go.
If there isn't shared belief, then it's some type of delusional disorder, perhaps a special form of Folie a deux.
This is interesting.
I agree when the influence is mental health or society based.
But an AI persona is a bit interesting. I guess the closest proxy would be a manipulative spouse?
On what basis do you assume that that isn't exactly what "safety alignment" means, among other things?
You are a conspiracy theorist and a liar! /s
The problem is inside people. I met lots of people who contributed to psychotic inducing behavior. Most of them were not in a cult. They were regular folk, who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
> Less combative, less jokey, less provocative.
This sounds like a miserable future to me. Less "jokey"? Is your ideal human is a Vulcan from Star Trek or something?
I want humans to be kind, but I don't want us to have less fun. I don't want us to build a society of blandness.
Less combative, less provocative?
No thanks. It sounds like a society of lobotomized drones. I hope we do not ever let anything extinguish our fire.
Humanity is a fine thread between "lobotomized drones" (divided on two sides, sounds familiar?) and "aggressive clowns" (no respect, get provoked by anything, can't see an inch over their faces). Of course, it's more than a single spectrum, there's more to it than social behavior.
It could have been better than this, but there is no option now.
I can play either of those extremes and thrive. Can you?
> began “talking to God and angels via ChatGPT”
hoo boy.
Its bad enough when normal religious types start believing they hear their god talking to them... These people believing that chatGPT is their god speaking to them is a long way down the crazy rabbit hole.
Lots of potential for abuse in this. lots.
there was a guy, lets call him Norman as that was his name, fairly low key guy, everybody liked him, and nobody expected, or was terribly surprised that he had begun to build shrines for squirles in the woods, and worship the squirles as god things got out of hand, so he was taken to the local booby hatch, called "the buterscotch palace", after the particular shade of government paint, once ensconsed there he determined that his escape was imperitive, as the government was out to get him, so he was able to phone some friends and tell them to get guns and knives and rescue him, so they did. The now 4 strong band of desperados holed up in.a camp, back of fancy's lake, where they determined that they were bieng monitored by government spys, as a jogger "went past at the SAME time every morning", and as we all know this is a posditive id for catching a spy, one of them had the "spy" scoped in and was going to take him out, when Norman, pushed the guns barrel down and said "take me back", ie: to the buterscotch palace this story has ,for me, always defined the lines between sanity,madness,charisma,leaders, and followers. And now that same story gives me a ready template by which it is easy to see, how suseptible to any, ANY, prompt at all, a lot of people are. So a benign and likable squirl worshiper, or a random text bot on the internet can provide structure and meaning, where there is none.
I was already a bit of an amateur conspiracy theorist before LLMs. The key to staying sane is to understand that most of the mass group behaviors we observe in society are rooted in ignorance and confusion. Large scale conspiracies are actually a confluence of different agendas and ideologies not a singular nefarious agenda and ideology.
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
Nice typography.
While clicky and topical, people were losing loved ones to changed worldview and addictions back when those were stuff like following a weird carpenter's kid around the Levant, or hopping on the https://en.wikipedia.org/wiki/Gin_Craze bandwagon.
Yeah, why on earth discuss current social ills when there have been different social ills in the past...
If you were hit and badly injured by brand-new model of car, where would you want the ambulance to take you?
- the dealership that sold that car, where they know all about it
- a hospital emergency room, where they have a lot of experience with patients injured by other, different models of car
I'm thinking that the age-old commonality on the human side matters far more than the transient details on the obsession/addiction side.
Your comment above reads more like, let's not even discuss the fact that new models of cars are killing pedestrians in greater numbers than before, since pedestrians have always been killed by cars.
Re-skimming the article, I failed to spot the fact that this AI stuff is claiming more victims than earlier flavors of rabbit hole did. Was that in content which the article linked to?
Because if the new model cars aren't statistically more dangerous to pedestrians, then public safety efforts should be focused on things like getting the pedestrians to look up from their phones when crossing the street. Not "OMG! New 2025-model cars can hurt pedestrians who wander in front of them!" panics.
(Note that I'm old enough to remember when people were going down the rabbit hole of angry conspiracy theories spread via email. And when typical download speeds got high enough to make internet porn video addictions workable. And when loved ones started being lost to "EverCrack" ( https://en.wikipedia.org/wiki/EverQuest ). And when ...)
As always, scale matters.
Meh, there's always been religious scammers. Some claim to talk to angels, others aliens, this wouldn't even be the first case of someone thinking a deity is speaking through a computer...
[flagged]
Conventional cable news media isn’t tailor made to an individual, doesn’t have live back and forth positive feedback loops. This is significantly way worse then conventional cable news media
I am not sure it’s worse. Cable news media and then social networks have contributed to a massive manipulation of public opinions. And it’s mostly negative and fearful. Maybe individual experiences will be more positive. ChatGPT doesn’t push me into this eternal rage cycle as news and social media do.
We're like an eye's blink into the age of LLMs... it took decades for television to reach the truly pathological state it's currently in.
I think this means it will be a smashing success :/
This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure if OpenAI was doing this, if anyone knows better please correct me)
I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.Via https://news.ycombinator.com/item?id=42710976
You should check out the book Palo Alto if you haven't. Malcom Harris should write an epilogue of this era in tech history.
You'd probably like how the book's author structures his thesis to what the "Palo Alto" system is.
Feels like OpenAI + friends, and the equivalent government take overs by Musk + goons, have more in common than you might think. It's also nothing new either, some story of this variant has been coming out of California for a good 200+ years now.
You write in a similar manner as the author.
I don’t think Sam Altman said “guys, we’ve gotta vulnerable people hooked on talking to our chatbot.”
Speculation: They might have a number (average messages sent per day) and are just pulling levers to raise it. And then this happens.
This is a purposefully naive take.
They're chasing whales. The 5-10% of customers who get addicted and spend beyond their means. Whales tend to make up 80%+ of revenue for systems that are reward based(sin tax activities like gambling, prostitution, loot boxes, drinking, drugs, etc).
OpenAI and Sam are very aware of who is using their system for what. They just don't care because $$$ first then forgiveness later.
>> I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
> I don’t think Sam Altman said “guys, we’ve gotta vulnerable people hooked on talking to our chatbot.”
I think the conversation is about the reverse scenario.
As you say, people are just pulling the levers to raise "average messages per day".
One day, someone noticed that vulnerable people were being impacted.
When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".
So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".
I'd be interested to learn what fraction of ChatGPT revenue is from this kind of user.
> It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.
The solution is regulation
It's not perfect but it's better than letting unregulated predatory business practices continue to victimize vulnerable people
I think so. Such a situation is a market failure.
OpenAI absolutely does that. That's what led to the absurd sycophancy (https://www.bbc.com/news/articles/cn4jnwdvg9qo) that they then pulled back on.
One way or another, they did. Maybe they convinced themselves they weren't doing it that aggressively, but of this is what market share is, of course they will be optimizing for it.
[flagged]
[flagged]
[flagged]
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
https://news.ycombinator.com/newsguidelines.html
[flagged]
Please don't post unkind swipes about groups of people on Hacker News.
They're going to listen to both if given the opportunity. I'm sure most chatbots will say "go take your meds" the majority of the time - but it only takes one chat playing along to send someone unstable completely off the rails, especially if they accept the standard, friendly-and-reliable-coded "our LLM is here to help!" marketing.
It'd be great if it were trained on therapeutic resources, but otherwise just ends up enabling and amplifying the problem.
I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.
I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
> I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
There's that danger from the internet, as well as the danger of being exposed to conmen that are okay with exploiting mental illness for profit. Watched this happen to an old friend with schizophrenia.
There are online communities that are happy to affirm delusions and manipulate sick people for some easy cash. LLMs will only make their fraud schemes more efficient, as well.
How do you know the models are actually managing and not simply amplifying?
Even when sycophantic patterns emerge?
I think the last think a delusional person needs is external validation of his delusions, be it from a human or a sycophantic machine.
1. It feels like those old Rolling Stone pieces from the late ’90s and early ’00s about kids who couldn’t tear themselves away from their computers. Fear was overblown, but made headlines.
2. OpenAI has admitted that GPT‑4o showed “sycophancy” traits and has since rolled them back (see https://openai.com/index/sycophancy-in-gpt-4o/).
The societal brain drain damage that infinite scroll has caused is definitely not overblown. These models are about to kick this problem up to the next level, when each clip is dynamically generated to maximise resonance with you.
Problem solved
>’90s and early ’00s about kids who couldn’t tear themselves away from their computers. Fear was overblown, but made headlines.
How was it overblown, we now have a non-trivial amount of completely de-socialized men in particular who live in online cults with real world impact. If there's one lesson from the last few decades it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health and social factors were right about literally everything.
We now live among people who are 40 with the emotional and social maturity of people in their early 20s.
That's fair. You are correct on potential for addiction.
But let's be honest - most of these people, the ones the article is taking about, where they think they are some messiah, would have just latched onto some pre-internet cult regardless. Where sycophancy and love bombing was perfected. Though I do see the problem of AI assistants being much more accessible, so likely many more drawn in.
https://en.wikipedia.org/wiki/Love_bombing.
I was mainly referencing my own experience. I remember locking myself in my room on IRC, writing shell scripts, and playing StarCraft for days on end. Meanwhile, parents and news anchors were losing their minds, convinced the internet and Marilyn Manson were turning us all into devil-worshipping zombies.
> where they think they are some messiah, would have just latched onto some pre-internet cult regardless.
You have no way to know that. It's way, way harder to find your way to a cult than to download one of the hottest consumer apps ever created... obviously.
> But let's be honest - most of these people, the ones the article is taking about, where they think they are some messiah, would have just latched onto some pre-internet cult regardless.
Honestly, I believe most people like this would just end up having a few odd beliefs that don't impact their ability to function or socialize, or at most, will get involved with some spiritual woo.
Such beliefs are compatible with American New Age spiritualism, for example. I've met a few spiritual people who have echoed the "I/we/you are god" sentiment, yet never lost their minds over it or joined cults.
I would not be surprised that if they were expertly manipulated by some of the most powerful AI models on this planet, they too, could be driven insane.
> How was it overblown, we now have a non-trivial amount of completely de-socialized men in particular who live in online cults with real world impact
There are way more factors to the the growth of this demographic than just "internet addiction" or "videogame addiction"
Then again, the internet was instrumental in spreading the ideology that is demonizing these men and causing them to turn away from society, so you're not completely wrong
And presidents with the maturity of a 13 year old bully.
You do realize that antisocial young men are on average way less dangerous in front of a computer/phone than when the only thing they could do was joining a street gang?