
“AI Software Engineer” — Isn’t It Still a Bit Too Early for That?

by Max Kudosh, April 1st, 2025

Too Long; Didn't Read

LLMs are great at coding but far from replacing software engineers. Coding is just one part of software engineering, which also requires creativity, system design, context awareness, and deep domain knowledge — much of which isn’t publicly available or documented. Since LLMs rely on large, high-quality datasets and lack true comprehension or feedback loops, their ability to fully engineer software is fundamentally limited. Until AI can truly think and understand, software engineers aren’t going anywhere — though learning to use AI effectively will be crucial.



Sounds like too much to me – am I wrong?


Can LLMs replace software engineers? I started thinking about this again after a conversation with a former colleague. He wanted to use an AI agent to build an app that collects daily updates from various company services (Jira, BitBucket repos, Google Docs, Gmail), summarizes them, and presents the result to the user.


That colleague has a strong background in both software engineering and management. But needless to say, the attempt failed miserably. The agent did generate a lot of code across many files, sure. But neither the code nor the ongoing interaction with the agent led to a functioning app. And I’m not even talking about code quality, suitability, or maintainability. It simply didn’t work — key parts (like integrations with external APIs) were hallucinated.


That got me thinking. At the time, I had taken a two-month break from my career for personal reasons, and I hadn’t been keeping up with industry updates. I asked myself: Did I miss a major leap that made these kinds of expectations for AI more realistic?



So the question is: can LLMs replace software engineers? And regardless of the answer — why or why not? There are two major parts to this question: LLMs and software engineering. Let’s look deeper into them both.


What is an LLM?

Large Language Models (LLMs) are programs trained to generate statistically likely sequences of tokens. They do this by identifying patterns in massive datasets.
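To make that concrete, here’s a toy sketch in Python (my own illustration, nothing like a real transformer): the “model” is just a hard-coded table of next-token probabilities, and generation is simply sampling a statistically likely continuation from it.

```python
import random

# Toy stand-in for an LLM: a table of next-token probabilities.
# A real model learns these from patterns in massive datasets;
# here they are hard-coded purely for illustration.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "compiled": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def sample_next(context, probs):
    """Pick the next token according to its estimated probability."""
    candidates = probs[context]
    tokens = list(candidates)
    weights = list(candidates.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next(("the", "cat"), next_token_probs))  # usually "sat", sometimes "ran"
```

The point isn’t the mechanics; it’s that the whole game is statistical continuation of text, which is exactly why the quality and coverage of the training data matter so much.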


LLMs don’t comprehend the data, don’t think, and don’t learn the way humans do. And as coders, LLMs lack a real feedback loop: they can’t run, debug, and iteratively fix the code they write.
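For contrast, here’s a rough Python sketch of the write–run–fix cycle an engineer relies on (the `generate_patch` callable is a hypothetical placeholder, not a real API). The step where failures feed back into the next attempt is exactly what plain text generation doesn’t give you.

```python
import subprocess

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and report whether it passed."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def feedback_loop(generate_patch, max_iterations: int = 5) -> bool:
    """The write -> run -> read failures -> fix cycle.

    `generate_patch` is a hypothetical callable (a human, or some wrapper
    around an LLM) that edits the code based on the latest test output.
    """
    for _ in range(max_iterations):
        passed, output = run_tests()
        if passed:
            return True            # working software, not just plausible-looking text
        generate_patch(output)     # feed the failure back into the next attempt
    return False
```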


One conclusion: LLMs need a very high-quality dataset to be competent in a particular domain. Keep that in mind.


The best pic I managed to squeeze out of ChatGPT

Software engineering vs coding

Now let’s talk about software engineering (SE). It’s crucial to distinguish it from coding or programming. Programming is about writing code to get a computer to do something. Software engineering is a broader discipline that includes programming as just one of many activities.



Engineering is about building reliable systems composed of many parts and connecting them with other systems. Engineers have done this for centuries across different domains; software just happens to be one of them. I’d even say some SE principles mirror those in other engineering fields — failover, for example.
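To make the failover point concrete, here’s a small sketch in Python; the two fetch callables are made up for illustration and stand in for redundant instances of the same dependency.

```python
import logging

logger = logging.getLogger(__name__)

def fetch_with_failover(fetch_primary, fetch_secondary):
    """Try the primary dependency first; fall back to the secondary if it fails.

    `fetch_primary` and `fetch_secondary` are hypothetical callables standing
    in for two redundant instances of the same external service.
    """
    try:
        return fetch_primary()
    except Exception as exc:
        logger.warning("Primary failed (%s), failing over to secondary", exc)
        return fetch_secondary()

# Illustrative usage:
# report = fetch_with_failover(lambda: primary_api.get_report(),
#                              lambda: replica_api.get_report())
```

The underlying idea, a redundant path that takes over when the main one fails, predates software and appears across other engineering domains.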

Do LLMs code well?

Yes, LLMs are great at coding. That’s because there’s a wealth of structured programming problems and solutions on the internet — the kind of material LLMs were trained on.
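As an illustration, the kind of problem LLMs shine at looks like this: a self-contained function with a crisp spec and thousands of public solutions and discussions to learn from. (This particular snippet is mine, written for illustration.)

```python
def merge_intervals(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge overlapping intervals: a classic, well-specified coding exercise."""
    merged: list[tuple[int, int]] = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous interval, so extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_intervals([(1, 3), (2, 6), (8, 10)]))  # [(1, 6), (8, 10)]
```

There’s no ambiguity about the requirements here, no hidden business context, and a huge amount of training data covering it. That’s the sweet spot.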


But we’ve already established that software engineering is not just coding. So when someone like Andrej Karpathy says that LLMs help a lot with his programming tasks, he’s not talking about conventional software engineering. As an AI/ML researcher, he mostly deals with algorithms, scripts, and complex mathematical computations.

Are LLMs good software engineers?

So are LLMs capable of doing software engineering work? Let’s think about it.


When someone asks a software engineer whether their job is creative, they often say it is. That’s because there’s no single blueprint you can reuse for every project.


In that regard, SE craft involves:

  1. Knowing the building blocks: languages, libraries, frameworks, design patterns at different scales, databases, cloud platforms, message queues, and countless other services and products, you name it;
  2. Creatively applying those building blocks to meet specific business needs.


Interestingly, both of these are dynamic: they change over time and from one context to another.


All the building blocks constantly evolve: new ones are invented, some get heavily reworked (look at how .NET or JavaScript has changed over the last decade), and some get deprecated. This evolution is described publicly via all kinds of documentation, Stack Overflow threads, blogs, GitHub repos, etc. But that representation is often fragmented, outdated, or missing.


On top of that, how we apply all these tools varies dramatically — across companies, teams, and projects, and over time. That’s because each business is unique, with its own history and its own dynamic (!) needs. And a huge part of the knowledge about how to apply the blocks isn’t public, and it never will be. It’s locked inside organizations and people’s heads.


This makes me think the truly essential knowledge that makes a good software engineer simply isn’t publicly available — and never will be.


And remember, LLMs need large, high-quality public datasets to learn from.


I just don’t think such datasets exist for software engineering. And I can’t see how they ever could. So I conclude that LLMs won’t be nearly as successful at software engineering as they are at coding.

Conclusions

  1. There isn’t enough reliable, public knowledge for LLMs to become good software engineers, let alone replace them.
  2. As long as LLMs can’t comprehend, think, or learn like humans — and as long as the data gap exists — software engineers are safe.
  3. If one day AIs can truly comprehend and think, we’ll have far bigger changes to worry about. At that point, most jobs would be at stake, not just software engineering 😀

Extra thoughts

Some important topics I didn’t cover but that are worth mentioning:


  • Bias: The more a concept appears in a dataset, the more likely an LLM is to output it. The tech industry is already prone to hype and bias, which could further degrade the usefulness of LLM-generated solutions.
  • Emergent capabilities: Some researchers think LLMs might develop new ‘emergent’ abilities as compute scales and training methods evolve (like with reinforcement learning). Maybe. But LLMs are black boxes. We don’t really understand how they work at scale. If they do evolve into something smarter, the implications will be global — not limited to software engineers.
  • Tools, not replacements: LLMs won’t replace engineers. But engineers will need to learn to use them effectively — and that’s a separate discussion.
  • More machine learning: I believe lots of medium-sized and larger companies will get more and more involved in their own experiments with machine learning, neural networks, and LLMs, building their own tools or fine-tuning existing ones. So machine learning will become more and more prominent in the industry.


Thanks for your attention. I’ll be glad for any feedback.