I’m unimpressed with this take. AI will replace those who fail to adapt their skills to the new work of orchestrating the systems that tie AI solutions together. Someone still has to instruct the AI, and I’ve yet to see enough evidence to convince me this tech can adequately replace my abilities… but it will replace my homies who stayed complacent and thought they could coast on archaic knowledge from 15 years ago, back when we first started.
Not to mention the question of whether these setups are financially feasible for the consumer of the AI and for the AI provider. It doesn't make much business sense for a multi-billion-dollar operation to fully automate its data engineering if that now depends on OpenAI, Meta, or whoever; it's a huge operational liability. If the models go down you lose your data engineering function entirely, and then you have no one to turn to, because the only one who understands the code is the AI you don't even own.
I know companies have accepted operational liabilities like this with cloud storage and compute, but it's not the same thing, because this one can't be mitigated. You can keep a local, if shorter-retention, backup of your data, but you can't keep backup engineers.
If you're in Google Cloud, the risk of using Gemini seems not much different than the risk of using their cloud storage, imo. You also get redundancy if you have OpenAI, Llama, and Claude as drop-in replacements.
The only thing you need to maintain is context.
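To make the drop-in idea concrete, here's a rough sketch of what that redundancy could look like. The ask_* functions are hypothetical stand-ins, not real SDK calls; the point is just that the context you maintain stays provider-agnostic while the model behind it can be swapped.

```python
# Hypothetical sketch: keep the prompt/context provider-agnostic and fall back
# across vendors when one is down. The ask_* functions are stand-ins, not real
# SDK calls.
from typing import Callable

def ask_gemini(prompt: str) -> str:
    raise RuntimeError("Gemini unavailable")   # imagine a real API call here

def ask_openai(prompt: str) -> str:
    return f"[openai] answer to: {prompt}"     # imagine a real API call here

PROVIDERS: list[Callable[[str], str]] = [ask_gemini, ask_openai]

def ask_with_fallback(prompt: str) -> str:
    """Try each provider in order; the shared context is the only thing you keep."""
    last_err = None
    for provider in PROVIDERS:
        try:
            return provider(prompt)
        except Exception as err:
            last_err = err   # provider down or over quota: try the next one
    raise RuntimeError("all providers failed") from last_err

print(ask_with_fallback("Summarize last night's pipeline failures"))
```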
So, someone still has to orchestrate the AI, right? But that doesn't negate that a large majority of people will be replaced. Of course, there will always be one or two who won't. And what about in 15 years? The direction we are heading in seems rather inevitable unless AI is stopped.
Market displacement is nothing new. It happened in the 2000s (dotcom bubble), happened in the 2010s (cloud infra), happened again around 2020 (service workers being funneled into tech), and it's happening again now (AI replacing those who fail to adapt to the market changes in tech).
The one thing that has kept me viable as an employee over my 15 years in tech is that I literally don't want to do the same thing I did yesterday 1,000 times. I want to do it as few times as possible before I automate the problem away, so I can move on to something new. There will always be something new. There will always be someone with a dream and no skills for me to step in and help out.
I fail to see the problem.
> I want to do it as few times as possible before I automate the problem away, so I can move on to something new. There will always be something new. There will always be someone with a dream and no skills for me to step in and help out.
> I fail to see the problem.
You fail to see the problem for YOU. Others may not have as flexible a job, but of course you were only thinking of yourself.
I’m not special and anyone can do what I’m doing. Civilization has been advancing technology since people stood upright. To stand still and not expect change is just ignorance. I can’t fix flawed people, I can only march forward.
I don't think it's the right path, and I think marching forward with innovation is destructive. People who can't adapt to AI aren't flawed, just as people who can't do math aren't flawed even though I can. The true ignorance is thinking that what you are doing does any good in the world.
I think a lot of what you’re pointing this thread toward boils down to philosophical beliefs. Objectively, throughout history there have been people resistant to technological advancement, and those people have more often than not ended up on the losing side of history.
I’ll throw out something I believe that we might agree on, though. I don’t think the Colossus data center Musk set up in Tennessee is good for anyone. Those generators he’s been running are abhorrent, and the guy needs a realignment of his neurons through some percussive maintenance, but alas, that’s probably illegal because he’s too much of a chump to accept a boxing match.
Why is someone needed to instruct the AI, or orchestrate anything? Isn't that a role that will inevitably be fulfilled by AI as well, one perhaps more focused on this sort of higher-level consideration, without a context polluted by low-level technical detail (i.e., exactly what we expect from tech-lead or management roles today)?
Nah, anyone who thinks they aren't going to need a compute expert to deal with their tech stack is huffing copium.
What did you expect? It is just another Medium post. Badum tss. Rage bait.
> If AI can:
> Understand business requirements from documentation
Wait, how are business requirements getting documented?
These LLMs are really good at digging up internal docs if you give them access to your knowledge sources with tooling to search and reason in a loop before responding.
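Roughly, the loop looks like this. Note that call_llm and search_docs below are hypothetical stand-ins for whatever model API and internal doc search you actually have, so treat it as a sketch rather than a working agent:

```python
# Sketch of a search-and-reason loop. call_llm and search_docs are hypothetical
# stand-ins for your model API and internal doc search.
def call_llm(prompt: str) -> str:
    return "ANSWER: (stub) see the runbook"      # placeholder for a real model call

def search_docs(query: str) -> str:
    return f"(stub) top results for '{query}'"   # placeholder for a real search call

def answer_with_docs(question: str, max_steps: int = 5) -> str:
    notes = ""
    for _ in range(max_steps):
        reply = call_llm(
            f"Question: {question}\nNotes so far:\n{notes}\n"
            "Reply with either 'SEARCH: <query>' or 'ANSWER: <answer>'."
        )
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        query = reply.removeprefix("SEARCH:").strip()
        notes += f"\n[{query}] -> {search_docs(query)}"  # feed results back into the next turn
    return "No confident answer within the step budget."

print(answer_with_docs("How do we rotate the warehouse credentials?"))
```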
> These LLMs are really good at digging up internal docs if you give them access to your knowledge sources with tooling to search and reason in a loop before responding.
Are those internal documents in the room with us right now?
No but seriously, most of the software out there is legacy code (don't quote me on that, though). IME, legacy code is very poorly documented, if documented at all. Sure, you could let the LLM extract semantics from the code alone, but with old code, arcane hacks and such, LLM interpretation can only take you so far. And even then, semantics doesn't always translate directly into business logic.
> Are those internal documents in the room with us right now?
I have no clue what you're on about here.
If you have a legacy knowledge base, like maybe MediaWiki for corp knowledge, what you do is maintain a vector database that gets updated whenever it sees changes. Using embeddings enables lookup by semantic similarity.
In a control loop with well-maintained vector embeddings, these LLMs are absolutely better than a human at finding, citing, and summarizing the information the user needs.
Tools like Glean already exist for this, if you doubt it.
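If it helps, here's a minimal sketch of the embedding-lookup part, assuming the sentence-transformers library. It's illustrative only, not a description of how Glean or any specific product works, and the change detection / re-embedding on wiki edits is left out:

```python
# Minimal sketch of the wiki -> vector index -> semantic lookup idea.
# Assumes the sentence-transformers package is installed; re-embedding pages
# when they change is omitted for brevity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Pretend these are wiki pages; in practice you'd embed every page and
# refresh entries whenever a page is edited.
pages = {
    "onboarding": "How to get VPN access and set up your laptop.",
    "warehouse":  "Nightly ETL loads land in the analytics warehouse at 02:00 UTC.",
    "oncall":     "Escalation policy: page the data platform team after 30 minutes.",
}
titles = list(pages)
embeddings = model.encode([pages[t] for t in titles], normalize_embeddings=True)

def lookup(query: str, k: int = 2) -> list[str]:
    """Return the k page titles most semantically similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity, since vectors are normalized
    return [titles[i] for i in np.argsort(scores)[::-1][:k]]

print(lookup("when does the ETL job run?"))
```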
By asking questions and analyzing the business's requirements at the top level. That will help the AI design the documentation.
Yes, AI will replace Us.