You know that teammate who makes more work for everyone else on the team? The one who does what they're asked, but in the most buggy and incomprehensible way, and when you finally get them moved to another team you realize how much time you spent corralling them and fixing their subtle bugs, and now that they're gone work doesn't seem like such a chore.
That's AI.
Just as with a poorly managed team, you need to learn how to manage AI to get value from it. All ambiguous processes are like this.
In my case, the value of LLMs for writing is consolidation. Use them to make outlines, not prose. For example, I record voice memos while driving or jogging and turn them into documents that can serve as the basis for all sorts of things. At the end of the day it saves me a lot of time and arguably makes me more effective.
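Roughly, that memo-to-outline step can be scripted in a few lines. A sketch only, assuming the OpenAI Python SDK with an API key in the environment; the model names and file name are placeholders, not a claim about the right ones to use:

    # Sketch: transcribe a voice memo, then consolidate it into an outline.
    # Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set;
    # "whisper-1", "gpt-4o-mini", and the file name are placeholder choices.
    from openai import OpenAI

    client = OpenAI()

    def memo_to_outline(audio_path: str) -> str:
        # Transcribe the recording.
        with open(audio_path, "rb") as audio_file:
            transcript = client.audio.transcriptions.create(
                model="whisper-1",
                file=audio_file,
            )

        # Consolidate the rambling transcript into an outline, not finished prose.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Turn the transcript into a concise outline of the "
                            "speaker's points. Do not write polished prose."},
                {"role": "user", "content": transcript.text},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(memo_to_outline("jog-memo.m4a"))

The outline is the deliverable; the writing stays mine.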
AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.
We need to update Hanlon's Razor: Never attribute to AI that which is adequately explained by incompetence.
> You know that teammate
now imagine he can be scaled indefinitely
you thought software was bad today?
imagine Microsoft Teams in 5 years time
I'm extremely wary of AI myself, especially for creative tasks like writing or making images, etc., but this feels a little over the top. If you let it run wild then yes the result is disaster, but for well defined jobs with a small perimeter AI can save a lot of time.
You're not wrong, but I'd argue that too many people approach Gen AI as a replacement instead of a tool, and therein lies the root of the problem.
When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or suggest ways to troubleshoot a problem I am having. I don't always follow its advice, either; that depends on how much of the reply I understand. Sometimes it outputs something that makes sense at my current skill level; sometimes it proposes things I know nothing about, in which case I ask it to break them down further so I can search the Internet for more information and see if I can learn something, which pushes the limits of my skill level.
It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.
When I talk to other people, they accuse me of having the AI do all the work for me, because that's how they approach their own use of it. They want the AI to produce the whole project, as opposed to using it as a second brain to offload some mental chunking. That's where Gen AI fails and the user ends up spending all their time correcting convoluted mistakes caused by confabulation, unless they're making a simple monolithic program or script, and even then there are often hiccups.
Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.
Smart people are reading comments like this and going "I am glad I am in the same market as people making such comments" :)
You can think that... and you will eventually be left behind. AI is not going anywhere and can be used as a performance booster. Eventually, it will be a requirement for most tech-based jobs.
You sound bitter. Did you try using more AI for the bug fixing? It gets better and better.
My writing style is pretty labor intensive [0]. I go through a lot of drafts and read things out loud to make sure they work well etc. And I tend to have a high standard for making sure I source things.
I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.
I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.
The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.
I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.
[0] Aside from internet comments of course, which are mostly stream of consciousness.
Michelangelo worked alone on the David for more than two years:
https://en.wikipedia.org/wiki/David_(Michelangelo)#Process
Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Even research assistance was something many authors simply could not afford.
Good point! Thanks.
I like the perspective of "choices" during creation. An essential principle of real art is that it is the result of thousands or millions of deliberate choices. That is what we admire in art. If you use mostly a machine (or any other means that decides instead of you and for you) for creation, you as a creator simply make fewer choices.
In this case, you delegate many of your experienced/crazy/hard decisions to the model (which is built on such decisions already made by other artists, but combines them in a random way). It is like decompressing a JPEG – some things are simply hallucinated by the machine.
From the perspective of pure human creativity, the result is thin and diluted, even if it seems deliberate. In my opinion, art lovers will seek out the dense art made by humans, perhaps even asking for some kind of "proof" of a human-driven process. What do you think?
At its most basic level I just like throwing things I’ve written at ChatGPT and telling it to rewrite it in “x” voice or tone, maybe condense it or expand on some element, and I just pick whatever word comes to mind for the style. Half the time I don’t really use what it spits out. I am a much stronger editor than I am a writer, so when I see things written a different way it really helps me break through writer’s block or just the inertia of moving forward on something. I just treat it like a mediocre sounding board and frankly it’s been great for that.
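For what it's worth, that "rewrite it in x voice" habit is trivial to script so you can compare a few tones side by side. A sketch, assuming the OpenAI Python SDK; the model name and the tone list are arbitrary placeholders:

    # Sketch of the "mediocre sounding board" pattern: the same draft rewritten
    # in a few tones, purely to see the text a different way before editing by hand.
    # Assumes the OpenAI Python SDK; model name and tones are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def rewrite_in_tone(draft: str, tone: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Rewrite the user's text in a {tone} tone. "
                            "Keep the meaning; change only the style."},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    draft = open("draft.txt").read()
    for tone in ("plain and direct", "formal", "playful"):
        print(f"--- {tone} ---")
        print(rewrite_in_tone(draft, tone))

Most of the output gets discarded; the point is seeing the draft written in a way I wouldn't write it.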
When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha
I avoided cell phones too when they first came out. I didn't want the distraction or "digital leash". Now it's a stable fixture in my life. Some technology is simply transformational, and it's just a matter of time until almost everyone comes to accept it at some level. Time will tell whether AI breaks through the hype curve, but my gut feeling is it will within 5 years.
My phone is a fixture in my life, but I actually spend a lot of effort trying to rid myself of it. The thing for me, currently on the receiving end, is that I no longer read anything (apart from books) as if it has any semblance of authenticity. My immediate assumption is that a large chunk of it, or sometimes the entire piece, has been written or substantially altered by AI. Seeing this spread into the publishing and writing domain is simply depressing.
I avoided web3/crypto/bitcoin altogether when they came out. I'm happy I did and I don't see myself diving into this world anytime soon. I've also never used VR/AR, never owned a headset, never even tried one. Again, I don't see this changing any time soon.
Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.
Smartphones became a fixture because they were a key enabler for dozens of other things: fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah, people won't have a choice.
I expect this will be around the time that websites are no longer a thing and companies pump information directly into AI agents, which are then positioned as the only mechanism for receiving certain information.
As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says, "Hey, did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30-minute commute and pick up your girlfriend! Want me to send her a notification that you want to do this?"
People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.
MCP will probably kill the web as we know it.
True, but for me this is also true: smartphones are a stable fixture in my life, and by now I try to get rid of them as much as possible.
What AI currently lacks is mainly context. A well-trained, experienced human knows their reader very well and knows what they don't need to write; for what they do write, they know the tone they need to hit. I expect that in the future this will turn around: the author will write the facts and framework with the help of AI, and your AI will extract and condense it for your consumption. Your AI knows everything about you. It knows everything you have ever consumed. It knows how you think and what it needs to tell you, in which tone, to give you the best experience. You will be informed better than ever before. The future of AI will be bright!
Analogies are not arguments.
I'm with you. I think you did a good job of summarizing all the places where LLMs are super practical/useful, but I agree that for prose (as someone who considers themselves a proficient writer), it just never seems to contribute anything useful. For those who are not proficient writers, I'm sure it can be helpful, but it certainly doesn't contribute any new ideas if you're not providing them.
I am not a writer. My oldest son, 16, started writing short stories. He did not use AI for any of the words on the page. I did, however, recommend that he feed his stories to an LLM and ask for feedback on things that are confusing or unclear, or holes in the plot.
Not to take any wording it gives, but to read what it says, decide whether those points are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.
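The kind of prompt I mean is critique-only: point at problems, never rewrite. A sketch, assuming the OpenAI Python SDK, with the model name as a placeholder:

    # Sketch: ask for critique only (confusing passages, unclear points, plot
    # holes), never rewritten text. Assumes the OpenAI Python SDK; the model
    # name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    CRITIQUE_PROMPT = (
        "You are giving feedback on a short story. List, as numbered points, "
        "anything that is confusing, unclear, or a plot hole. Quote the "
        "relevant passage for each point. Do NOT rewrite or rephrase the story."
    )

    def story_feedback(story_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": CRITIQUE_PROMPT},
                {"role": "user", "content": story_text},
            ],
        )
        return response.choices[0].message.content

He still decides which points are true and writes every edit himself.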
For things like coding, LLMs are useful, and DEVONThink's recent AI integrations allow me to use local models as something like an encyclopedia or thesaurus to summarize unfamiliar blocks of text. At best I use it like scratch paper.
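That local-model pattern is easy to reproduce outside DEVONThink too. The following is a generic sketch, not DEVONThink's actual integration; it assumes a locally running Ollama server with some model already pulled, and "llama3" is a placeholder:

    # Sketch of the local-model "encyclopedia/thesaurus" use: summarize an
    # unfamiliar block of text without sending it to a cloud service.
    # Assumes an Ollama server on localhost:11434 with a model already pulled;
    # "llama3" is a placeholder, and this is not DEVONThink's own integration.
    import json
    import urllib.request

    def summarize_locally(text: str, model: str = "llama3") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": "Summarize the following text in three sentences:\n\n" + text,
            "stream": False,
        }).encode("utf-8")
        request = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["response"]

Scratch paper, nothing more.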
I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.
I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.
Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.
You are right. It has plateaued and even degraded in some ways. Or have we just become more sensitive to its bullshitting?
As a professional writer, the author of this post is likely a better writer than 99.99% of the population. A quick skim of his blog suggests that he's comfortably more intelligent than 99% of people. I think it's totally unsurprising that he isn't fully satisfied with the output of LLMs; what is remarkable is that someone in that position still finds plenty of reasons to use them.
Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce".
I'm worried that an increasing number of people are relying on LLMs for things as fundamental to daily life as expressing themselves verbally or critical thinking.
Perhaps LLMs can move someone's results from the 25th percentile to the 50th for a single task. (Although there's probably a much more nuanced discussion to be had about that: people with poor writing skills can still have unique, valuable, and interesting perspectives that get destroyed in the median-ization of current LLM output.) But after a couple years of using LLMs regularly, I fear that whatever actual talent they have will atrophy below their starting point.
The author is a great guy and indeed quite smart and meticulous in areas he cares about deeply. He is a published author with a reasonably popular book, considering the market size: https://www.melvil.cz/kniha-jak-sbalit-zenu-20/. He has also edited probably more books than he would care to admit. It's not surprising he is able to write a good article.
However, good writing is a skill you can develop with enough practice. Read a lot, write a lot of garbage, consult more experienced writers, and eventually you will write readable articles. Do 10-100x more of that and you will be pretty great. The rest is skill and experience in many fields other than writing, which will inform how to write even better. Some of it is intelligence, luck, great mentors, and perhaps something we call talent. As with most things, you can get far just by working diligently, a lot.
> "Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce""
That does, to my mind, explain all the vengeful "haw haw, you're all going to get left behind" comments from some LLM proponents. They actually do benefit from LLMs, unlike the top end of the scale, who are overrepresented on HN; and without realizing what that implies, they think they can overtake the top end of the scale by using them. Well, we'll see.
Idk, LLM writing style somehow almost always ends up sounding like an insufferable smartass Redditor spiel. Maybe it's only appealing to the respective audience.
I am a book publisher & I love technology. It can empower people. I have been using LLM chatbots since they became widely available. I regularly test machine translation at our publishing house in collaboration with our translators. I have just completed two courses in artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental models (for predicting bestsellers :). I consider machine learning to be a remarkable invention and catalyst for progress. Despite all this, I have my doubts.
I know a publisher who translates books (English to Korean). He works alone these days. Using GPT, he can produce a decent-quality first draft within a day or two. His later steps are also vastly accelerated because GPT reliably catches typos and grammar errors. It doesn't take more than a month to translate and print a book from scratch. Marvelous.
But I still don't like that the same model struggles w/ my projects...
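A first-draft workflow like the one described above might look roughly like the sketch below. This assumes the OpenAI Python SDK; the model name, the chapter-by-chapter chunking, and the language pair are placeholder choices, not the publisher's actual setup:

    # Sketch: produce a rough first-draft translation chapter by chapter.
    # Assumes the OpenAI Python SDK; model name, chunking, and language pair
    # are placeholders, not the actual workflow described above.
    from openai import OpenAI

    client = OpenAI()

    def translate_chunk(text: str, source: str = "English", target: str = "Korean") -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Translate the {source} text into natural {target}. "
                            "This is a first draft for a human translator to revise."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    def draft_translation(chapters: list[str]) -> list[str]:
        # A human revision pass over every chapter afterwards is still assumed.
        return [translate_chunk(chapter) for chapter in chapters]

The speedup is in the draft; the human pass afterwards is what makes it publishable.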
AI is useful in closed-loop applications; often it can even do a decent job of closing the loop itself. But you need to understand that it is a fundamentally extractive, not creative, process. The body of human cultural knowledge is the underlying resource, and AI is the drill with which we pull out the parts we want.
Coding, robotics, navigation of constrained data spaces such as translation, tagging, indexing, logging, parsing, data transformations… those are all strong target candidates for transformer architecture automation.
Creative thought is not.
Same. I might use them for some things here and there, but not for writing. When I'm writing blog posts, people are coming to my articles to read what I've written, not what some glorified markov chain spits out.
There have been quite a few skeptical blog posts about LLMs recently. Some say they won't use them for coding, others for getting creative ideas, and others won't use them for editing and publishing. However, the silent issue all these posts have in common is that resistance is futile.
To be fair, I also don't like using Copilot when working on code. In many cases it turns into a weird experience when the agent generates the next line(s) and I basically become a discriminator judging if the thing really understands my problem and solution. To be honest, it's boring even if eventually it might make me turn in code faster.
With that said, I cannot ignore that LLMs are happening, and this is the future. The models keep improving but more importantly, the ecosystem keeps improving with things like MCP and better defined context for LLM tools.
We might be looking at a somewhat grim prospect. But like it or not, this is the future. Adapt and survive.
I understand. The question is what "survive" means for each person.
For me, survival means:
- continuing to do my best at the language level, even if more people gradually become satisfied with less
- believing that education, critical thinking, and evidence-based principles are at the core of human progress, and that one day they will make a comeback
- being OK with a smaller income rather than exchanging it for creating bullshit
Adaptation, for me, means:
- generally: staying open-minded
- understanding and somehow accepting that the prospect is a bit grim, without falling into extreme doom thinking
- exploring new ways to augment human-oriented creativity (with or without these tools)
What do you think?
Pretty similar view to the one others have expressed, in the vein of "LLMs can be good, just not at my [area of expertise]".
I'm pretty sure they were generally (if not completely) correct when they said that.
Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential payoff over their remaining careers, even when taking the new tech into account.
What about grammar and spelling corrections?
Not the author, but another author here and...
Well, it has a problem with my use of the Oxford comma, for one. Because a huge amount of the corpus is American English, and mine ain't. So it fails on grammar repeatedly.
And if you introduce any words of your own, it will sometimes randomly correct them to something else, and randomly correct other words to the made-up ones. And it can't always tell when it has made such a change. Sometimes it does this even if you're just mixing existing languages like French and English. So you can make it useless for spellcheck simply by touching more than one language.
I do keep trying, despite the fact my stuff has been stolen and is in the training data, because of all the proselytising, but right now... No AI is useful for my writing. Not even just for grammar and spelling.
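None of this fixes the underlying model behaviour, but the two failure modes above (dialect drift and "correcting" invented words) can at least be constrained in the prompt and caught afterwards. A sketch, assuming the OpenAI Python SDK; the model name, the dialect instruction, and the example word list are placeholders:

    # Sketch: a grammar/spelling pass that pins the dialect, protects coined
    # words, and verifies afterwards that the protected words survived intact.
    # Assumes the OpenAI Python SDK; model name and examples are placeholders.
    from openai import OpenAI

    client = OpenAI()

    PROTECTED_WORDS = ["thaumoscope", "Veyrish"]  # invented terms to leave untouched

    def proofread(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Fix spelling and grammar only. Use British English "
                            "conventions and keep the serial (Oxford) comma. "
                            "Never alter these invented words: "
                            + ", ".join(PROTECTED_WORDS)},
                {"role": "user", "content": text},
            ],
        )
        corrected = response.choices[0].message.content
        # Cheap programmatic check: every protected word must survive verbatim.
        for word in PROTECTED_WORDS:
            if word in text and word not in corrected:
                raise ValueError(f"Model altered protected word: {word}")
        return corrected

Whether the result is worth keeping is another matter; the check at least tells you when it has quietly mangled your vocabulary.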
Agree 100%.
AI is a tool like any other: it can be used well or poorly. It's important to know its limits, and, like any tool, it must be studied to be used properly.
> in a programming environment, you can immediately verify the answer by evaluating the code (at least for code snippets).
Well, it's a trap. You see a snippet is right, you accept it. Next time you do it faster, and faster. And then you get one that seems right but it's not. If you're lucky, it will cause an error.
What's interesting about thinking of code as art is that there is rarely a variety of equally optimal ways to implement a piece of logic. So if you decide on the implementation and have an LLM code it, you likely won't need to make major changes, given the right guidelines (I just mean something like a single script, for the sake of comparison).
Writing is entirely different, and for some reason generic writing, even when polished (the ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And the end users of a product care little or not at all about AI code.
I think there are a lot of good reasons to be cognitively lazy. Now might not be the time to learn how something works.