Engineering has changed forever - evolve or die.

It's life, Jim, but not as we know it.
I'll start with an admission.
I was entirely dismissive of AI-assisted development at the start of 2025. I was skeptical in Aug 2025, and even in Nov 2025 I thought it had some use cases but wasn't able to "do what I do". Then things changed, very quickly.
Turning point and "AI Slop"
The turning point, IMHO, was early 2026 when Opus 4.6 and GPT 5.4 landed - software development became a solved problem.
Well, maybe a mostly solved problem because simply bashing away with Claude, GPT, or Gemini with no concern about traditional engineering is going to produce nothing more than unmaintainable "AI slop". I think you still need to be a solid engineer in order to steer the models correctly (which brews an interesting paradox in 10-20 years, but that's for another post).
From mid 2025 to Feb 2026, the models became genuinely useful and able to work on large, long running and multi stage tasks with a degree of quality that far surpassed anything I'd seen, and been dismissive of, before. This evolution is going to continue at a staggering pace. The models today are the worst they'll ever be - they'll be better again in 3 months, again in 6, and each evolution isn't just a linear progression, they're step-changes.
Leaving "AI Slop" aside for now, proper engineering isn't (currently) going anywhere, but it is changing dramatically. The job is completely different today as we head toward the 2nd half of 2026, and it's not just evolution, it's revolution and we all need to keep up.
Power looms
This revolution is analogous to other large scale changes throughout history - the printing press or the power loom.
Be under no illusion, we're in the middle of a new industrial revolution, and just like the weavers who watched the first power looms roll into textile mills in the early 1800s, many skilled craftsmen are about to discover that the nature of their craft has fundamentally changed. The work doesn't disappear, but the people who refuse to work with the machines do.
For start-ups - failing to embrace this way of working, and failing to hire competent engineers who can work AI-first, means being out-paced, disrupted, and overtaken by smaller, faster-moving start-ups before you even have a chance to establish product-market fit. The opportunity to rapidly launch, test, and iterate on compelling new products and services makes this a time in history unlike any that came before.
For individual engineers - failing to keep pace means, I'm afraid, being outpaced by your peers, left behind, and ultimately struggling to find work. An engineer still needs to be an engineer, but now they're team leads - running a team of very enthusiastic AI agents that deliver at pace. That dramatically changes the focus of the job - elevating your day-to-day work from line-by-line coding to systems thinking, design, code review, testing, steering and team orchestration.
Your engineering revolution
Welcome to your new world. You're now a product designer, architect, engineering lead/manager and QA all in one.
Being an engineer now is less about writing the code and is more about spending time thinking, designing, reviewing and orchestrating multiple AI agents at the same time. You're now a manager of a team - consisting of multiple AI agents following your instructions.
Let's consider the typical engineering work of a staff/principal dev today compared to 12 months ago.
12 months ago, we'd be given a validated functional idea or requirement and would spend let's say a week iterating on a technical solution design for it. The technical architecture, the software architecture, acceptance criteria and a delivery plan to achieve an MVP release, plus plans and considerations for further iteration after establishing fit.
Then, we'd take that and lead a team of maybe 2-3 developers implementing the feature. As seniors, staff or principals, we might be leading dev on 2-3 features at a time, context switching as little as possible between them to maintain a flow state. We'd interact daily with the product owner and the QA team, iterating and pivoting as needed to make sure that what we delivered was what was actually needed.
After maybe 2-3 weeks of the team working on the feature, we'd consider it done (or done to one of the delivery plan stages), formally QA it, ship it, and move on.
The point here is that the bulk of the work was the hands-on code writing.
Today, I still love to code, but I hand-write very little of it directly now - I have the agents do that for me. I spend a lot of time thinking deeply about the feature, documenting how I want it implemented, and a lot of time reviewing the code that's been created, plus some time correcting that code or realigning the agent toward a better or more efficient approach. But as far as writing code line by line goes, I've gone from 100% to maybe 5-10%.
It's still my code, I still own it, I still take responsibility for it, I've still reviewed it and understand it intimately, but the agents have done the work, much like how a team would, only faster.
A typical day starts with a specification document: a relatively detailed markdown document that will be given to an agent to implement. I spend a good amount of time thinking through how I would implement the feature - what its purpose is, what stack I would use, what database, queues, patterns and practices, how it should operate, what the UX should be like, what decomposition into components is needed, what the code structure will be, and how we'll deal with security, authentication, authorisation etc. I'll describe any specific implementation details I want followed in the code in this markdown specification (which later becomes a useful documentation artefact).
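To make that concrete, here's a rough sketch of what such a spec's skeleton might look like. The feature, headings and details below are entirely hypothetical - it's the shape that matters, not the template:

```markdown
# Feature: Invite team members by email   <!-- hypothetical example feature -->

## Purpose
Allow workspace owners to invite colleagues via a time-limited signup link.

## Stack & architecture
- API: new /invites routes on the existing TypeScript service
- Persistence: an invites table in Postgres; emails sent via the existing queue

## Behaviour & UX
- Owner enters one or more emails; duplicates and existing members are rejected
- Invite links expire after 7 days

## Security & auth
- Only users with the "owner" role may create invites
- Tokens are single-use, random, and stored hashed

## Implementation notes
- Follow the existing repository pattern in the data access layer
- No new dependencies without flagging them first

## Acceptance criteria
- [ ] Happy path covered by integration tests
- [ ] Expired or used tokens return an error, not a silent success
```

The acceptance criteria section doubles as the checklist the agent (and later, you) can verify the work against.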
I then feed that into both Claude and Codex to review my spec and ask for clarifications, we go through some design iteration to make the spec as clear as can be and turn this into a step by step implementation plan.
This is then given to an agent again to implement and it starts work writing code.
Whilst it's doing that, I'll switch over to where I've got the same thing going for a different project. I'll check in on how the AI team is doing on that one, review the code it's produced, give it any further prompts to steer it in a particular direction, and send it on its merry way.
That continues until we have some potentially shippable code (which is usually in hours not days!). At that point, I run that code through another AI instance specifically geared up to do a security, architecture and code review - any feedback it generates is again reviewed by me and then passed back through this agentic loop.
Once that's complete, I'll do a more thorough review of the delivered code and the tests. I'll fire up the code and run it locally to see for myself how well it works and meets the objectives (though the AI will also have written various levels of tests alongside its work) until I'm happy with it, at which point we start a PR to get it merged and shipped.
When you consider I'm potentially working the above process across 3 projects or features in parallel, alone, just using AI agents, the amount of things I'm able to build in a short space of time is staggering by comparison to the old way.
Results-wise, I'd of course like to claim my hand-written code would be superior, but with the right guidance to the models and a careful eye in review, we get there anyway - at 5-10x the speed.
But it's not perfect!
The most crucial point from that is keeping on top of the outputs from the agents. They aren't perfect - they sometimes get stuck, make naive decisions, and take shortcuts that create maintainability nightmares.
Without guidance I've seen it generate entire front-end features in a single tsx file, 2000+ LOC long. I've seen it create render "helper" functions that should have been components, duplicate common and reusable code in 15 different places, and over-engineer solutions with complex patterns when simplicity was better suited to the requirement. And don't get me started on how often it invents requirements of its own accord - something I explicitly guard against in CLAUDE.md/AGENTS.md these days.
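Guard rules like that are just plain prose in the instructions file the agent reads at startup. A hedged sketch of what mine roughly resemble - the exact wording and thresholds are yours to tune, not a prescribed format:

```markdown
<!-- Excerpt from a hypothetical CLAUDE.md / AGENTS.md -->
## Ground rules
- Implement ONLY what the spec asks for. If something seems missing or
  ambiguous, stop and ask - never invent requirements, endpoints or options.
- Prefer small, focused components and files; flag anything growing past
  ~300 lines rather than continuing.
- Reuse existing helpers and components before writing new ones; no
  copy-paste duplication of shared logic.
- No new dependencies without asking first.
```

Rules like these don't eliminate the failure modes, but they noticeably reduce how often you have to catch them in review.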
I've lost count of the times AI implemented a change to solve a bug, but did it in a rather ridiculous way. Perhaps it added 50 lines of unnecessary code, another level of indirection, a long and duplicated inline function in the markup of an event handler, or other similar nonsense - and after pointing out the mistake, the answer is usually "You're right! That would be much better!", fixed 5 seconds later. But the reason that gets picked up and fixed there and then is because I know what I'm doing - I've been writing software for 30 years.
Even when starting a new project, you need an architecture in your head, a choice of stack to use, the mental models of which key architectural patterns will best suit what you want to implement. The AI can help here, sure, but it helps if you're an experienced engineer already and can guide the models on how to start projects.
I recall recently Opus 4.6 getting itself in a big old loop, going around and around in circles and burning through tokens, all because I wanted to use something it didn't already have enough knowledge about. The tech was TanStack Start, and whilst the model had some knowledge of it, that knowledge was out of date.
When Claude kicked off the project and nothing worked, it tried to load documentation from the website, but it couldn't read it (I suspect the site was rendered dynamically, and of course Claude was effectively just doing a curl).
After multiple attempts and going around in circles for about 20 minutes, I stopped it, asked it what it needed, pasted the content it needed from the site into the session and it was up and running 30 seconds later.
I've also had one instance of Claude (seems I'm picking on Claude here!) deciding that the quickest and most efficient way to solve a problem was to drop and recreate my database - which I had full of test data! Those kinds of mistakes are thankfully rare, but it taught me a lesson in when NOT to use --dangerously-skip-permissions | --yolo!
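A less blunt instrument than skipping permissions wholesale, if you're using Claude Code, is its permissions config, which lets you deny specific tool invocations outright. A minimal sketch, assuming Claude Code's settings.json format - the deny patterns below are illustrative examples, not a recommended set:

```json
{
  "permissions": {
    "deny": [
      "Bash(dropdb:*)",
      "Bash(psql:*)",
      "Bash(rm -rf:*)"
    ]
  }
}
```

With rules like these in place, destructive commands are blocked even in sessions where you've loosened everything else.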
The point is, it needs to be steered. Even with the problems above, plus many more, it's still faster than hand cranking every line of code manually. The pace at which we can deliver quality software has dramatically changed.
The "Vibe-Coding" Myth
I'd be remiss if I didn't mention this - this is absolutely not what I'd consider "vibe-coding". On the whole, vibe-coding as most people understand it is, honestly, bullshit.
That's where you don't really know much about programming and engineering, just fire up a prompt and give the AI minimal instructions to go build something then keep tweaking with more and more prompts until it looks about right, push it to Vercel and call it done. We're not there yet and that tends to end up with an unholy mess of code that's inefficient, insecure and completely unmaintainable.
No, this isn't that - this is intentionally driven, spec based, AI augmented development where the person running the agents works hand in hand with them as team members and knows what they're doing. They intervene and steer them whenever they inevitably go a little off course, just like a team lead would with their intermediate and junior dev team.
Don't get me wrong - "vibe coding" does have its place, but it's limited to proof of concepts, market tests and other experiments that are thrown away and then built properly.
At the end of the day, you're STILL an engineer, the code being produced is STILL your responsibility, and you should STILL own it fully.
Does code quality even matter now?
Which brings me to a common assumption, and what I think is often a mistake. The assumption goes: with AI, I'm not maintaining the code, and if AI is going to fix any bugs it finds and implement new features, why should I care about the quality of the code? If it works, ship it, right?
It comes back to the above - the mistakes it can make, the inefficiencies it can introduce and the unmaintainable spaghetti code it creates when left to its own devices without any steering. I couldn't, currently, put my name to such slop.
The ownership of the output is STILL a key distinction for me. I own it, it's my name on the commit, and that reason alone is enough to say yes, I do care about the code quality.
Add to that the fact that I'm not ready to fully let go of the code. I can imagine a scenario where AI generates everything, we don't care about the code quality, and then a particularly gnarly bug appears that AI can't seem to solve. It's now on me to dig in and fix it. I can't do that, or I'm going to have a hard time doing it, if the code is full of duplication, 2000+ line source files and is largely unreadable - what I'd consider code rot and technical debt in the traditional sense.
AI is as dumb as it'll ever be.
But, as I've said a few times now, AI is currently the most incapable it's ever going to be, and it's already pretty darn good!
We see step changes in capabilities as each frontier model is released, so yes, one day, I can see a scenario where our role changes further to the point we don't even read the code, we simply don't need to. Instead we focus on high level architecture, the design and ideation of product features and testing to ensure the feature delivery is top notch.
Can us meat sacks keep up? Cognitive overload.
Over the last 6 months of working more and more in this way, I noticed something interesting - it can be mentally exhausting.
I find myself in deep thinking mode all the time, especially if I've got multiple projects or features on the go with multiple agents.
It used to be that writing the code itself was a bit of a mental break from deep thinking. Going from 100% grey matter power during ideation and design to 25-50% whilst writing the code line by line was a bit of a rest for the faculties.
Now, I'm in deep think getting an agent working on the next problem, then immediately switching to review, in detail, the output from another agent and get it started on the next task, by which time I switch back to the first to review its output because it's finished. That's running at 100% all the time, no respite.
Perhaps I need to learn to go make coffee and touch grass in between sessions a bit more, but I have found working at AI pace to be more mentally taxing.
The enjoyment factor - engineering isn't just code
I've spoken to people who've asked me whether I still enjoy being an engineer. AI has taken away maybe 90% of the code writing so how can I possibly still enjoy it?
And I think that's a misconception. Writing code itself is maybe 20% of the job at best. The rest is about system design, critical thinking, problem solving, engineering without over-engineering, understanding the product, its purpose, its fit, the usability, the UX, deciding how to test and verify, working out what telemetry we want to produce to measure success, and of course the meat skills - meeting and talking with customers and stakeholders to understand the needs and desires that will solve their problems.
It's not just writing code.
I'd say I'm enjoying my work even more today than I did 5 years ago - I'm still scratching the exact same itch - solving complex problems, I'm still doing all the above, I'm still steering my team, but now they're agents as well as humans - I'm still doing the exact same job, I'm just doing it faster and doing less of the clickety-clack typing myself.
Final words
We're not just on the cusp of our own industrial revolution, we're slap bang in the middle of it. The looms are rolling out everywhere as we speak and us weavers need to adapt to work in this new world or risk obsolescence.
It's not a change to be feared, in my opinion, but rather an exciting time to be involved in the change - to continue enjoying engineering, but with fabulous new tools that let us realise solutions to problems much faster than ever before.
It's a time where risks are lower than ever to be able to take a gamble on a new idea - to get some MVP out to market, validate its suitability, iterate to market fit, and launch, or at least, fail fast without setting large wads of cash on fire.