I study AI and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing around ideas for strategies. They aren’t detail-oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer: I give it small, atomized tasks, and I give its output a once-over with an eye to the details of the implementation. It’s nice to get the boilerplate out of the way quickly.
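To make “small, atomized tasks” concrete, here’s the sort of thing I mean (a minimal Python sketch; the script and its flags are invented for illustration). It’s pure boilerplate: trivial to specify, trivial to review, and exactly what an LLM drafts well.

```python
import argparse


def parse_args() -> argparse.Namespace:
    # Classic CLI boilerplate: the spec fits in one sentence,
    # and a reviewer can check the whole thing at a glance.
    parser = argparse.ArgumentParser(description="Convert a CSV file to JSON.")
    parser.add_argument("input", help="path to the input CSV file")
    parser.add_argument("-o", "--output", default="out.json",
                        help="path for the output JSON file")
    parser.add_argument("--delimiter", default=",",
                        help="field delimiter used in the CSV")
    return parser.parse_args()
```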
Don’t get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think they are one of the most important AI breakthroughs of the past five to ten years. But the AI hype train is misusing them, misunderstanding their capabilities and limitations, and casting its own wishes and desires onto a pile of linear algebra. Too often a tool (one of many) is conflated with the one and only solution, a silver bullet, and it’s not.
This leads to my biggest fear for the AI field of Computer Science: reality won’t live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML techniques will be halted, and real academic research will dry up.
My fear for the software industry is that we’ll end up replacing junior devs with AI assistance, and then in a decade or two, we’ll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.
That’s happening right now. I have a few friends who are looking for entry-level jobs and they find none.
It really sucks.
That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we’ll earn a lot more in a few years.
You’re not wrong, and I feel like it was a developing problem even before AI: everybody wanted someone with experience, even if the technology was brand new.
That said, even if you and I will be fine, it’s still bad for the industry. And even if we weren’t the ones pulling up the ladder behind us, I’d still like to find a way to start throwing ropes back down for the newbies…
They wanted someone with experience who could hit the ground running, but they didn’t want to pay for it, either with cash or with time.
You can only pick two.
You’re not wrong, and I feel like it was a developing problem even before AI: everybody wanted someone with experience, even if the technology was brand new.
True. It was a long-standing problem that entry-level jobs were mostly found in dodgy startups.
Tbh, I think the biggest issue right now isn’t even AI, but the economy. In the 2010s we had pretty much zero interest rates while the economy was pretty decent, at least for IT. The 2008 financial crisis hardly mattered for IT, and COVID was a massive boost for IT. There was nothing else to really spend money on.
IT always has more projects than manpower, so with enough money to spend, they just hired everyone.
But the sanctions against Russia in response to their invasion of Ukraine really hit the economy, and rising interest rates to combat inflation meant that suddenly nobody wanted to invest anymore.
With no investments, startups dried up and large corporations also wanted to downsize. It’s no coincidence that return-to-office mandates only started after the invasion and not in the two years prior, when lockdowns had already been lifted. Work from home worked totally fine for two years after the COVID lockdowns, and companies even praised how well it worked.
Same with AI. While it can improve productivity in some edge cases, I think it’s mostly a scapegoat to make mass firings sound like a great thing to investors.
That said, even if you and I will be fine, it’s still bad for the industry. And even if we weren’t the ones pulling up the ladder behind us, I’d still like to find a way to start throwing ropes back down for the newbies…
You are totally right about that, and any chance I get I will continue to push for hiring juniors.
But I am also over corporate tears. For decades they have been crying about a lack of skilled workers in IT and pushing for more and more people to join, so that they can suppress wages, and as soon as the economy turns bad, they instantly U-turn and dump employees.
If corporations want to be short-sighted and make people suffer for it, they won’t get compassion from me when it fails.
Edit: Remember, we are not the ones pulling the ladder up.
Was it really Russia’s invasion, or just because the interest rates went up to prevent too much inflation after the COVID stimulus packages? Hard to imagine Russia had that much demand for software compared to the rest of the world.
Did you not read what I wrote?
Inflation went up due to the knock-on effects of the sanctions. Specifically, prices for oil and gas skyrocketed.
And since everything runs on oil and gas, all prices skyrocketed.
COVID stimulus packages had nothing to do with that, especially in 2023, 2024, and 2025, when there were no stimulus packages, yet inflation was much higher than at any time during COVID.
Surely it is not too much to ask that people remember what year stuff happened in, especially if we are talking about things that happened just 2 years ago.
I would say that “replacing with AI assistance” is probably not what is actually happening. It’s economic factors reducing hiring. This isn’t the first time it has happened and it won’t be the last. The AI boosters are just claiming responsibility for marketing purposes.
It may also be self-fulfilling. Our new CEO said all upcoming projects must save 15% using AI, and while we’re still hiring, it’s only in India.
So six months from now we will have status reports talking about how we saved 15% in every project.
100% agreed. It should not be used as a replacement but rather as an augmentation to get the real benefits.
Couldn’t have said it better myself. The amount of pure hatred for AI that’s already spreading is pretty unnerving when we consider future and continued research. Rather than direct the anger at the companies misusing and/or irresponsibly hyping the tech, people direct it at the tech itself. And the C-suites will of course never accept the blame for their poor judgment, so they, too, will blame the tech.
Ultimately, I think there are still lots of folks with money who understand the reality and hope to continue investing in further research. I just hope that workers across the spectrum use this as a wake-up call to advocate for protections. If we have another leap like this in another 10 years, then lots of jobs really will be in trouble without proper social safety nets in place.
People specifically hate having tools they find more frustrating than useful shoved down their throats, having the internet filled with generative AI slop, and melting glaciers in the context of climate change.
This is all specifically directed at LLMs in their current state and will have absolutely zero effect on any research funding. Additionally, OpenAI etc. would be losing less money if they weren’t selling (at a massive loss) the hot garbage they’re selling now and instead focused on research.
As far as worker protections go, what we need actually has nothing to do with AI in the first place and everything to do with workers and society at large being entitled to the benefits of increased productivity, benefits that have been vacuumed up by greedy capitalists for decades.
They can be helpful when using a new library or development environment that you are not familiar with. I’ve noticed a tendency to make up functions that arguably should exist but often don’t.
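A made-up but representative Python example of what that looks like: str.reverse() feels like it ought to exist, so generated code happily calls it, even though Python strings have no such method.

```python
s = "hello"

# What generated code sometimes does: call a method that "should" exist.
# s.reverse()  # AttributeError: 'str' object has no attribute 'reverse'

# What actually works: strings are immutable, so reverse via slicing.
reversed_s = s[::-1]
assert reversed_s == "olleh"
```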
Excellent take. I agree with everything. If I give Claude a function signature, types, and a description of what it has to do, 90% of the time it will get it right. The other 10% of the time it will need some edits or efficiency improvements, but it still saves a lot of time. Small, well-scoped tasks with the correct context are the right way to use these tools.
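For instance, a spec like the one below (a hypothetical example; the function and its contract are made up) is usually enough for the model to produce a correct body.

```python
def chunk(items: list[int], size: int) -> list[list[int]]:
    """Split items into consecutive chunks of at most `size` elements.

    The last chunk may be shorter. Raises ValueError if size < 1.
    """
    # Given just the signature and docstring above, the body below is
    # the kind of thing the model typically fills in.
    if size < 1:
        raise ValueError("size must be at least 1")
    return [items[i:i + size] for i in range(0, len(items), size)]


assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
```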
They aren’t detail-oriented enough to write full applications or complicated scripts.
I’m not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; very rarely did I have to step in and do things myself.
In general, I like to think of an LLM as a junior developer to my senior developer: I give it small, atomized tasks, and I give its output a once-over with an eye to the details of the implementation. It’s nice to get the boilerplate out of the way quickly.
Yep, I agree with that.
There are definitely people misusing AI, and there is definitely lots of AI slop out there, which is annoying as hell, but these tools can also be pretty capable at certain things, even more so than one might think at first.
Greenfielding webapps is the easiest, most basic kind of project around. That’s something you task a junior with and expect them to do it with no errors. And after that you instantly drop support, because webapps are shovelware.
So you’re saying there’s no such thing as complex webapps and that there’s no such thing as senior web developers, and webapps can basically be made by a monkey because they are all so simple and there’s never any competent developers that work on them and there’s no use for them at all?
Where do you think we are?
None that you can make with ChatGPT in an afternoon, no.
Who says I made my webapp with ChatGPT in an afternoon?
I built it iteratively using ChatGPT, much like any other application. I started with the scaffolding and then slowly added more and more features over time, just like I would have done had I not used any AI at all.
As everybody knows, Rome wasn’t built in a day.