Six things about how AI is changing software engineering

Some Sunday morning thoughts on where the software engineering industry stands now that Generative AI is everywhere. I have many more thoughts on this topic, but here are a few observations about where we are right now, in November 2025.

  • AI is enabling engineers to work above their skill level, but this is a double-edged sword. On one hand, engineers using AI get stuck less often and, with the right mindset, can actually accelerate their learning. On the other, where imposter syndrome is present, it lets engineers appear competent while masking gaps in essential skills. Instead of asking a colleague for help, they can push plausible-looking generated code into a codebase with no understanding of what it does. You might see this play out when an engineer gets surprisingly defensive during routine code review. It's our job as fellow engineers and leaders to respond with empathy and help them turn it into a learning opportunity.
  • Getting to a correct solution quickly is a good short-term win for the business and the scrum team, but in my experience, learning and wisdom come from the 50 wrong doors you knock on on the way to the right answer. You learn not only how to solve the problem, but all the ways it can't be solved, and how adjacent problems could be solved. That loss might be partly offset by much deeper learning in a single problem space, in place of the shallower but broader learning that comes from exploring all the incorrect paths.
  • Engineering tasks that used to take weeks or months, like retrofitting internationalisation into a late-stage application, are now a prompt and a bit of agent-babysitting away (there's a small sketch of the mechanics after this list). This is a very promising win from my perspective. It's low-complexity but tedious work, and well suited to entry-level agentic tasks.
  • With AI tools like v0, Lovable, Google AI Studio, and even Figma's built-in AI features, the days of an engineer being handed a pretty Figma design with broken UX should be behind us. We should be able to test the UX in a prototype environment with a few prompts, in the time it takes to make a coffee, ensuring the design doesn't just look good but actually works for the user.
  • Agentic tools like Claude Code are incredibly powerful in the hands of someone who knows what "good" looks like. You can craft a team of agents that handle technical design, implementation, code review, QA, and even release notes autonomously, then run them in parallel on different worktrees or branches. That's a lot of code being generated, and it takes a while to develop the skillset to stay across it all, ensure quality, and step in to fix or redirect as needed. Before GenAI, the path to developing that sixth sense for what "good" software looks like, typically reached at a senior engineering level, was fairly linear. The jury is still out on how that path changes in a GenAI world. I think junior engineers will be more AI-native than those who came before, so prompting and context engineering will come naturally to them, but the skills to validate the generated code will still need to be acquired.
  • The playing field is very uneven at the minute. Those who can afford the time to develop the skills to work with these tools are pulling ahead of those who can't. Engineers who are more senior and know what good software looks like can capitalise on these tools, whereas juniors coming out of university might feel like the ladder is being pulled up in front of them. Companies that can afford the latest models at high usage levels have an advantage over those that can't. This will probably even out over time as inference costs fall, models get smarter, and the industry adapts, providing pathways for engineers to progress while being AI-enabled, not AI-dependent.
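
To make the internationalisation point concrete, here's a minimal sketch of the kind of mechanical transformation an agent repeats across hundreds of files. The `t()` helper and `messages` dictionary are hypothetical stand-ins for whatever i18n library a project actually uses (react-i18next, FormatJS, and so on), not any specific tool's API.

```typescript
// Hypothetical i18n setup: per-locale dictionaries keyed by message id.
type Messages = Record<string, string>;

const messages: Record<string, Messages> = {
  en: { "checkout.title": "Your basket", "checkout.cta": "Pay now" },
  fr: { "checkout.title": "Votre panier", "checkout.cta": "Payer maintenant" },
};

// Look up a key for a locale, falling back to the key itself
// so untranslated strings degrade gracefully.
function t(locale: string, key: string): string {
  return messages[locale]?.[key] ?? key;
}

// Before: a hardcoded string buried in the UI layer.
//   const title = "Your basket";
// After: the agent replaces each literal with a keyed lookup and
// registers the string in every locale dictionary.
const title = t("fr", "checkout.title"); // "Votre panier"
```

None of the individual steps is hard; it's the sheer number of strings, files, and locales that made this a multi-week job, which is exactly why it suits an agent.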

There's a lot of conjecture about the future of software engineering (I'll add some more) and plenty of noise from AI grifters and "experts". But amongst all that, I believe there's real utility in these tools for increasing our productivity, quality, and even fun. I'm not convinced software is an entirely automatable domain, and I don't really want it to be.

AI is changing how we learn, how we build, and the very structure of our teams and career paths. As we go through this transition, tools will come and go, but people will remain (hopefully). So, as leaders, the real question is: how do we help navigate our people through this transition and into what might just be the next phase of software engineering?
