Software engineering has changed forever. We're far from reaching AI-assisted programming's final form, but it's already clear that the majority of code produced will no longer involve a human typing it out. For some people that's an uncontroversial and obvious statement. Others will dismiss it as overhyping AI's utility. Irrespective of where you fall on the pro-AI vs anti-AI spectrum, the change can be deeply unsettling when, until now, your job has been to write the code that AI is now generating.
I remember a similar feeling of discomfort early in my career, as I transitioned from being the sole programmer on projects to growing and leading a team. I'd see work that wasn't objectively wrong or bad; it just wasn't exactly how I'd have written it. My urge was to nitpick, or to make unnecessary changes myself, but I realised I had to let go. If I wanted every line of code to be exactly how I would write it, I'd have to write them all myself, which, of course, does not scale.
That's how I now feel about AI. Models have become capable enough that you don't have to micromanage every line of code. It might not be the code you would have written, but in many cases it will be perfectly good code. I could tweak and rewrite it to be in my "voice", but if I do that, the productivity boost will always be limited. Or, much as I would with an engineering team, I could focus on the things that actually matter: the overall architecture, security, performance and scalability.
In the same way that delegating work to other engineers doesn't mean giving up all say in how something is implemented, leveraging AI does not mean relinquishing all control to an LLM. Not all AI-assisted programming is vibe coding, and utilising AI does not require you to lower the quality bar.
The job of a software engineer has never really been about writing code. It's about interpreting vague problems to know what to build. It's about making pragmatic trade-offs. It's about selecting the right architecture. It's about anticipating how things will break in the real world. None of that changes with AI. If you see your identity as writing code, AI feels like a threat; if your identity is solving problems, AI is a multiplier.