> Distressingly, the people most passionate about AI often express a not-so-subtle disdain for humanity.
I have noticed something similar: those who are ultra-passionate about AI are often Extremely Online, and it seems like their values tilt too far away from humanity for my taste. The use of AI is treated almost as an end in and of itself, which perpetuates a maximalist AI vision. This is also probably why they give off this weird vibe of having their personality outsourced.
Regardless of whether this is true, it is still nothing more than an ad hominem.
I would argue that the most passionate AI optimism and pessimism stems from a conviction that it is an inevitable next step in evolution. Given the associated potency, it is hard to not take an extreme position with regard to it.
The positions in between seem to be of the form "everything will stay largely the same, but with a bit more automation", which seems naive rather than level-headed, imho.
AI is the product of the most advanced scientific minds stripped of ethics. What else can they create?
I can't read the tone of this post, but "AI", as it has stood as a marketing term for the last ~45 years, has little to do with rigor. These are workers producing profit, not scientists. Ethics has nothing to do with it. Scientists deal with empiricism, not sales.
> But there is a feedback loop: If you change the incentive structures, people’s behaviors will certainly change, but subsequently so, too, will those incentive structures.
This is a good point, and somewhat subtle too. Something that worries me is the acceleration of the feedback loop. The Internet, social media, smartphones, and now generative AI are all things that changed how information is generated, consumed and distributed, and changing that affects the incentive structures and behaviors of the people interacting with that information.
But information is spread increasingly faster, in higher amounts and with higher noise, and so the incentives landscape keeps shifting continuously to keep up, without giving people time to adapt and develop immunity against the viral/parasitic memes that each landscape births.
And so the (meta)game keeps changing under our feet, increasingly accelerating towards chaos or, more worryingly, meta-stable ideologies that can survive the continuous bombardment of an adversarial memetic environment. I say worryingly, because most of those ideologies have to be, by definition, totalizing and highly hostile to anything outside of them.
So yeah, interesting times.
The problem is quite the opposite: a large part of the incentive structure is effectively static. Our biological makeup hardly changes, so we're still drawn to all kinds of primitive things. Without strong cultural overrides we are sitting ducks, ready to be exploited by click and engagement bait.
With an analogy: Connecting an average human to social media is like connecting a Windows 95 machine to the internet.
This post sounds too much like an SCP for my comfort. Just administer the amnestics now.
Tech companies, at least those who weren't founded on AI, have a significant number of people internally who have exactly the opinions in this blog post.
Outside of the tech ecosystem, most people I encounter either don't care about AI, or are vaguely positive (and using it to write emails/etc). There are exceptions, writers and artists for example, but they're in the minority. To be clear, I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I realise it may not seem like it, but most big tech companies are not designing for high-earning Valley software engineers, because they are not a big market. They're designing for the world, and the world hasn't made its mind up about AI yet.
Counter-point: Dark patterns are also pervasive in more niche applications, including B2Bs targeting tech startups. Some of it is culturally endemic.
Obsession with one-size-fits-all, metrics-driven development, and UX exclusively aiming for the lowest common denominator are also part of this problematic incentive structure you allude to.
> I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I don't think the "we" here was intended to include the general population.
Could the current use of "AI" also be considered a dark pattern?
In the sense that dark patterns get you to use or pay for a product/service that you might not want to, because you're too confused or frustrated, or the cost/time tradeoff of figuring out how to stop using/paying for it isn't worth it.
In terms of "AI" in products/services, this would be the way that using such an assistant atrophies your skills and knowledge, so that you become dependent on the product/service.
> What I found most interesting is that the multiple choice options included a lot of “I found the Terminator movies scary”, “I read too much Ray Kurzweil”, and/or “I am or was a SF Bay Area rationalist” undertones, but actual ethical objections were strangely absent.
After reading Sarah Wynn-Williams' book and seeing the current state of democracy in the USA and some European countries (apart from the fact that democracies are too slow to regulate anyway), I see little hope for the future.
> I see little hope for the future.
Try reading some Sarah Kendzior [0] if you want to lose whatever shreds you had left (or if you'd like to base any droplets of hope on a more accurate world-view).
Wynn-Williams, and the author of this post, both severely underestimate how dark the tech-bros' vision for the future actually is and how far along they are. Envision Snow Crash, but without the humor.
0 - https://sarahkendzior.substack.com/p/ten-articles-explaining...
> underestimate how dark the tech-bros' vision
When I see the news that the "German spy agency labels AfD as 'confirmed rightwing extremist' force", which will now lead to the removal of AfD members and sympathizers from the civil service, I at least have a little hope for Germany, and for Europe. A little hope that the end of the capitalist era will not end in fascism, and that another way is now open for a discussion involving not only elites (aka tech-bros and old white males).
> I do not actually believe “The Singularity” is a realistic threat due to every system that exhibits exponential growth encountering carrying capacities, which converts it into an S-curve.
Depending on the parameters of the curve, an S-curve may be effectively the same as an exponential curve. For instance, if the IQ of AIs reaches a plateau of 500 rather than exponentially increasing to infinity, we may not be around to see the plateau.
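To make the parent's point concrete, here is a minimal sketch (assuming a logistic S-curve with carrying capacity K, growth rate r, and midpoint t0; all constants are illustrative, not from the post). Early on, the S-curve is numerically indistinguishable from a pure exponential; the plateau only shows up near the midpoint:

```python
import math

# Logistic S-curve: f(t) = K / (1 + exp(-r * (t - t0)))
# Pure exponential through the same starting value: g(t) = f(0) * exp(r * t)
K, r, t0 = 500.0, 0.5, 40.0  # illustrative carrying capacity, rate, midpoint

def logistic(t):
    return K / (1.0 + math.exp(-r * (t - t0)))

g0 = logistic(0)  # shared starting value
for t in (0, 10, 20, 30, 40, 50):
    print(f"t={t:2d}  s_curve={logistic(t):12.4e}  exponential={g0 * math.exp(r * t):12.4e}")

# The two columns track each other closely until t nears the midpoint t0;
# only then does the S-curve bend toward its plateau at K = 500. If "we may
# not be around" before t0, the difference between the curves never matters.
```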
I don't see why an LLM (not) maxing out on IQ would be an existential concern.
> For instance, if the IQ of AIs reaches a plateau of 500 rather than exponentially increasing to infinity, we may not be around to see the plateau.
If the premise for this is, "because we might not survive each other," rather than the AI being specifically an extinction event for humanity, then I think we agree.
It's not even clear "superintelligence" is a meaningful concept. It could be that the broad conflicts in our society are largely intractable and arbitrary, in which case "superintelligence" can do little but complain about the contradictions it's passed (something I suspect many on this forum can understand).
Perhaps the calculator is as close as we'll ever get to "superintelligence".
I'm very much into AI, in terms of judicious, sprinkled usage across disciplines. But I'm a computer guy: I produce and support a lot of code, art, audio, video, social, and multimedia content, and I've incorporated bits and pieces of AI here and there to achieve a vision or get work done (LLMs, diffusion, etc). My main issue with it, like anything technical, is the mainstream interpretation/instinct by the average bozo that the machine will do my work for me, and nothing could be further from the truth.
There's a lot of anti-AI sentiment from the mainstream, but I'm noticing the pro-AI mainstream sentiment comes from people who are either technically-minded grifters looking to deploy automated solutions to snake $$$ from people's pockets, or lazy / disengaged worker drones who just want the computer to do their middling work for them. And it will, up to a point, where your "work" plateaus into a mess of predictable, non-novel banality, lol, unless you invest the time to master the tool (which, for what it's worth, isn't like introducing the toothed saw; it's more like a Dremel or a Sawzall, which have specific purposes but which casual users won't ever master).
Buying a digital ELPH in 2001 didn't make you a photographer unless you were a photographer with an open mind. Squarespace doesn't make you a web designer unless you've studied the system and understand the tradeoffs. AWS doesn't solve infrastructure unless you learn how to architect a solution that works for your use-case. AI, by the same property, is shit until you research how it works, experiment with solutions, and find novel workflows to get something out the other end that's new, fresh and exciting.
Companies just rolling "AI" into their products aren't gonna win over customers and users unless they use the tool to deliver something of exceedingly-needed value. If it's a short-term grift or "hail mary", good luck! You'll need it!
This is a great way to put it.
Some more annoying personas in the AI space:
- AI CEOs lying to investors and claiming their AIs will one day be impossibly smart.
- CEOs of companies that consume AI products pushing it onto their employees as a quick fix, thinking they will get magic productivity gains and be able to cut staff if they just force employees to use it (I have even seen some real examples of companies adding AI usage to performance reviews)
- Companies claiming to be AI-first without launching any significant AI-powered product
But I do think refraining from joining the knee-jerk AI-haters is a very measured way to read the situation. That sentiment is basically just a counter-culture reaction against AI, especially since it seems to most negatively impact creatives such as illustrators.
I think that some professionals who refrain from leveraging it on ethical grounds will legitimately fall behind their competition in the labor market.
> (I have even seen some real examples of companies adding AI usage to performance reviews)
Sad to say, I can vouch for this.
That has got to be one of the most easily gamed metrics there has ever been.
"Make sure you use this website that costs the company money frequently"
I wonder how it will play out when the costs of using an AI service are no longer subsidised by venture capital? (For example Uber is just as expensive as normal taxis now.)
> That sentiment is basically just a counter-culture reaction against AI, especially since it seems to most negatively impact creatives such as illustrators.
Worth considering: the blog in question that hosts this article is a furry blog. The furry community is largely creatives.
And I totally understand why counter cultural types have latched on to disliking AI especially since some of its biggest proponents are incredibly corporate and, well, lame as fuck.
I also think that it's a technology that didn't develop with the counterculture chops of many earlier technology innovations.
E.g., we could think about something like crypto that had an uphill battle against the establishment and was created with some level of ideological independence.
There are even some more corporate disruptions that plain and simple had better marketing behind them, like how Airbnb and Uber had widely disliked incumbents to “beat” in the market. Early Uber or Airbnb users were basically “beating the system.” At least, that’s how a lot of people perceived them, even if that didn’t turn out to be the reality.
In contrast, AI has felt much more like a corporate circlejerk among the wealthiest super-billionaires. There hasn’t even been the slightest facade of genuine do-goodery in this technology. Some wildly well-funded companies led by sociopathic robot-human CEOs made a plagiarism machine that my boss now insists I use for all my work.
I think that usually the people in the middle of the two extremes have the right thought process going on. It’s clear to me that AI is a great tool that isn’t going away, but perhaps its most passionate champions and detractors both need to settle down.
This is an incredibly level-headed and good take.
He is misplacing his concerns. He and the ones like him are the ones who fill in the "we" there.
Oh, in the US, for US citizens, intelligence agencies can't do dragnet surveillance. But it seems to be perfectly OK for them to do it to the rest of the world.
It enables people to make non-consensual pornography. It may have democratized the realistic-video part, but the fan-fiction part was already available. Is the problem the democratization, or the ability of whoever has enough budget or a sponsored agenda to do it? At some point you have to cut somewhere and define where the realistic line is.
Same goes for misinformation: is what's wrong the democratization, and not that people with enough resources could already do it?
About displacing industries, it depends, but some of those industries were already a big abuse of people. Some will adapt. Some will become obsolete, as happened with the industries they replaced in their own turn.
AI is a tool. And like any tool, it empowers the people using it, for good and bad. It is the people you keep giving power to who are the main ones misusing it. Those are the elephants in the room that you refuse to see.
>Oh, in the US, for US citizens, intelligence agencies can't do dragnet surveillance.
They can, and do? What else would you call https://www.theguardian.com/world/2013/jun/06/nsa-phone-reco...
That's just one example, Snowden published tons of this stuff.
Conveniently the rest of the sentence from the article was "of encrypted messages".
The encrypted part is kind of important. I blog about the topic a lot!
> AI is a tool. And like any tool, it empowers the people using it, for good and bad. It is the people you keep giving power to who are the main ones misusing it. Those are the elephants in the room that you refuse to see.
Try reading the post again? It's in there.
Well, since you claim the article does check all the boxes, there's no point reading it.
More impotent rage internet spam, zero direct call to action politically. Just circling existential dread in different words, I’ll bet.
The social gossip changed and no one knows which way is up despite the sky being right there still?
I've tolerated AI autocomplete in VSCode but I am a bee's dick away from turning it off, because it so often generates a huge chunk of code that is ALMOST correct, and determining where it is wrong is as much a chore as writing it would have been. But, it's like I've got a junior-coder sidekick who doesn't take any feedback. Not great.
Ahead of the inevitable Luddite comments. Here’s your daily reminder that the Luddites were not just technophobes, but were in fact artisans who were concerned about the leverage technology was providing capital to suppress worker rights while eroding the quality of the products. This tension should resonate with us.
https://www.history.com/articles/industrial-revolution-luddi...
The AI conversation tends to split folks along similar “passionate engineer craftsman” vs. “temporarily embarrassed billionaire” lines.
> why we dislike AI
For some completely unspecified group of “we”. At least the post itself says “why I personally dislike AI”.
I actually think that many of the concerns the author doesn't have are the more concerning ones, and that some of the concerns they do have are likely not much of a big deal.
I find the privacy concern particularly overstated. It is identical to concerns that existed before AI ever entered the fray: any time you send data to a system that someone else controls, you run those exact same risks. I also think there's an overstated fear that an app focused on private data (something similar to Signal) would just add some kind of AI functionality one day out of the blue and suddenly ship your data off to a hive mind.
Any app that is willing to cross that line already has done so (e.g., Facebook).
It also seems to be technologically simple to perform a lot of AI tasks without compromising privacy. E.g., chips with local-first AI computational ability are reaching consumer-level devices. Even the much-maligned Windows Recall feature specifically emphasizes how it never sends information to Microsoft servers nor processes data in the cloud.
The risk isn't that Signal itself will add AI features; it's more that it will be built into the OS that's running Signal (Apple Intelligence, Windows Recall, etc.). These systems watch everything you do on-device by default and learn a ton from your E2EE messages, regardless of the intentions of the Signal devs.
I would say that in practice that doesn’t seem to be how some of these examples are being architected.
Apple Intelligence as an example only seems to be reading information from Apple’s own default apps as of today, and their developer documentation suggests that capabilities require developers to implement features via their APIs.
It isn’t really a correct read of the situation to say that Apple Intelligence is “watching everything you do” like a service that is just watching your screen output at all times.
Even the service that does do that exact sort of thing (Windows Recall) has an extensive set of controls, enabled by default, for filtering out specific apps, private browsing mode, and other sensitive information: https://support.microsoft.com/en-us/windows/privacy-and-cont...
So I think the reality is that a lot of the big players making this technology recognize the privacy and security concerns and are designing their AI applications to address those concerns.
I personally feel like AI products are frequently launching with more transparency about data usage than a lot of Web 2.0 era applications like Facebook.
On privacy, the shift in incentives has changed the game. "More is better" is the new mantra, and the perceived value of gathering and labeling arbitrary organic data has gone up significantly. This offsets or outright obliterates the liability aspect of data-hoarding. This will have privacy implications for individuals referenced in some of those datasets.
My only concern about AI is that it will get stuck again for another few decades and I'll become elderly before I see what's next.
20 years ago my dream was to see a nimble robot running up a mountain path live. I hope this event is not another 20 years away. The future comes so horribly slowly.
Yes, there was something truly special in the movie "I Am Mother", when that heavy-ass robot was thumping down the hallway. Make sure to watch the Boston Dynamics outtakes where they break an ankle, to alleviate some existential tension afterward.
I love the movie "I Am Mother". Terribly chilling depiction of "the end justifies the means", "might makes right" and possibly "the road to hell is paved with good intentions" from the POV of the subdued party (which is humanity in this case). Also it's great because it makes you constantly re-evaluate the characters' intentions and the truthfulness of what they are saying.
Here's the thing… as long as we live in a capitalist society, money will be put before people, because the very essence of capitalism is to turn a higher profit, and anything that impedes that is evolved away. So the question to ask is whether we will remain a capitalist society. If you believe the answer is yes, the only thing to do is adapt, whether you like it or not. Resisting the change will only put you behind the curve. I'm not proclaiming a stance on AI here, but I think it's prudent to be a realist.
We're in the massively parallel "Motivated Reasoning Era".
My translator friends who have lost their jobs agree.
Not everyone enjoyed the transition from being creators to becoming assistant slop verifiers as much as you did.
> I’m concerned about the kind of antisocial behaviors that AI will enable.
> Coordinated inauthentic behavior
> Misinformation
> Nonconsensual pornography
> Displacing entire industries without a viable replacement for their income
The first three of these existed and occurred before the arrival of AI. Perhaps AI makes doing them easier. If there are no laws governing the first three post-AI, do we need laws governing them? If so, what do those look like?
As for "displacing entire industries without a viable replacement for their income" - yea, as a civilization we need to retrain and reeducate those whose livelihoods are displaced by automation. This too has been true forever...
Artists being replaced by idiot machines that can do approximately 15% as good of a job but almost for free is very sad and bad.
Yes, but a machine gun is faster than a musket. These uses of AI are force multipliers for diarrhea levels of bullshit.
I think there are two takes:
- investors should know that consumers will eventually find their products distasteful for the lackluster quality
- users can pay a little more for products that never have used, and never will use, AI.
It's more so that... the displacement is very... on the nose this time. But... Jevons paradox notwithstanding, I think when you replace human calculators with computers, what you end up with is wanting to crunch /even more numbers/. It never slows down. The labor only cheapens...
Disabling AI at this point is like using your app in offline mode. Not a lot of people want to build for that.
> If you do not understand people, you will fail to understand the harms that AI will unleash on the world.
Case in point: the official White House social media account regularly posts low-effort AI meme propaganda.
The White House very much understands the harm they are unleashing. That's the point.
Yes, obviously the White House understands. I am saying that AI companies failed to predict the political consequences (or, perhaps more likely, chose to look the other way).