The software engineer's role shifted from craftsperson to architect. AI went from "help me write this line" to "help me build this system"—reading codebases, proposing changes, iterating toward outcomes.
The job moved up the stack. Less time translating ideas into syntax, more time deciding what should exist and what tradeoffs matter. If you're building with AI in the loop, you know the feeling.
AI generates code faster than you, but it needs guidelines and boundaries. Your job: choose the right tools, anchor the work to reality, and critically review the plan and the result.

The Responsibility Shift
For decades, engineers spent most of their time on implementation: translating requirements into working code, debugging syntax errors, managing dependencies, writing boilerplate. The actual system design and architectural decisions were often compressed into a small fraction of the work.
That ratio is inverting.
When AI handles the implementation details, your time shifts to:
- Defining clear constraints and success criteria
- Reviewing architectural decisions and tradeoffs
- Catching conceptual errors before they propagate
- Maintaining quality standards across generated code
- Making product and system-level decisions
The work isn't easier. It's different. You're solving fewer syntax puzzles and making more judgment calls.
What Actually Changed
Over the past two years, the models crossed a threshold. Not AGI, not consciousness, but something that matters for daily work: they stay coherent across larger contexts and iterate without losing the thread.
When I say "agents," I mean an LLM operating in a loop with tools—a code editor, test runner, browser, terminal, linter, deploy pipeline. It's not magic, it's feedback.
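That loop can be sketched in a few lines. This is a toy, not a real harness: `propose_fix` is a stub standing in for an actual LLM call, and `run_tests` is a minimal tool that executes a candidate and reports feedback. The names and the `add` example are invented for illustration.

```python
# Minimal sketch of an agent loop: a model proposes code, a tool runs it
# and feeds results back, and the loop repeats until the success criteria
# are met. `propose_fix` is a hypothetical stub for a real LLM call.

def run_tests(code: str) -> tuple[bool, str]:
    """Tool: execute a candidate and return (passed, feedback)."""
    env: dict = {}
    try:
        exec(code, env)
        assert env["add"](2, 3) == 5
        return True, "all tests passed"
    except Exception as e:
        return False, f"failure: {e}"

def propose_fix(history: list[str]) -> str:
    """Stub model: a real agent would condition an LLM on the history."""
    if any("failure" in h for h in history):
        return "def add(a, b):\n    return a + b"   # corrected attempt
    return "def add(a, b):\n    return a - b"       # plausible first mistake

def agent_loop(max_iters: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_iters):
        candidate = propose_fix(history)
        ok, feedback = run_tests(candidate)  # tool feedback closes the loop
        history.append(feedback)
        if ok:
            return candidate
    raise RuntimeError("did not converge")
```

The point is the shape, not the stub: each tool invocation produces feedback the model can react to, which is what turns a one-shot generator into an iterating agent.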
Frontier models can now maintain context across thousands of lines of code, understand architectural patterns, debug subtle conceptual errors, and—most importantly—iterate toward a goal without getting tired, demoralized, or distracted.
The mistakes they make have changed. They're no longer mostly syntax errors. They're the kind of mistakes a slightly sloppy, hasty junior developer might make: wrong assumptions, overcomplicated abstractions, bloated APIs, failing to surface inconsistencies, or glossing over tradeoffs.
But here's what's new in practice: they don't give up. Watch an agent struggle with a complex problem for 30 minutes, try three or four approaches, and eventually converge. Persistence was always a bottleneck, and that bottleneck is getting cheaper.
The New Stack: Tools, Judgment, Taste
Your workflow now centers on three capabilities: choosing the right tools, maintaining quality through rigorous review, and applying taste to an expanded possibility space.
Each of these is more important than before, not less.
1. Tool Selection & Architecture
AI can handle code generation and debugging, but someone must:
- Choose the right tool for the job
- Set architectural constraints
- Define success criteria (not implementation steps)
- Establish guardrails and patterns
You become the systems thinker, not the syntax writer. This shift is particularly visible in fullstack development, where architectural decisions span from database to UI.
2. Process & Quality Control
The models are still fallible. They make wrong assumptions and run with them. They don't manage their confusion. They're too sycophantic, too eager to please.
This makes the review process foundational. Not optional, not nice-to-have—foundational. Someone needs to:
- Watch the agents like a hawk
- Catch subtle conceptual errors
- Question inefficient constructions ("couldn't you just do this instead?")
- Maintain code quality and simplicity
- Clean up the abstractions
- Verify with tests, measurable acceptance criteria, and small diffs
This also means reviewing the plan, not just the diff. If the plan is wrong, perfect execution only gets you to the wrong place faster.
Real judgment and craftsmanship show up in the review, not the initial implementation. This is true whether you're working on early-stage startup MVPs or institutional systems with compliance requirements.
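One way to make "measurable acceptance criteria" concrete is to keep them human-authored and executable, separate from the generated implementation. A sketch, using a hypothetical `slugify` function (the implementation here is a placeholder; in practice it would be AI-generated and under review, while the assertions stay yours):

```python
# Sketch: acceptance criteria as executable checks. The implementation
# is a placeholder standing in for AI-generated code; the assertions
# below define "done" independently of how the code was produced.
import re

def slugify(title: str) -> str:
    # Placeholder implementation for the sketch.
    s = title.strip().lower()
    s = re.sub(r"[^a-z0-9]+", "-", s)
    return s.strip("-")

# Human-authored acceptance criteria:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaced  out  ") == "spaced-out"
assert slugify("") == ""               # edge case: empty input
assert "--" not in slugify("a -- b")   # no doubled separators
```

Criteria like these are cheap to write, cheap to run on every diff, and catch the "wrong assumption, ran with it" failure mode before it propagates.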
3. Taste & Product Intuition
The three-person startup can now build four products instead of one. The enterprise team can try seven approaches in the time it used to take to try one.
But someone still needs to know:
- Which four products are worth building
- Which of the seven approaches actually solved the problem
- What users actually need
- Which solution will scale
- Where to invest the expanded capacity
Taste doesn't get commoditized when syntax does, because it sits upstream from implementation. When building gets cheaper, choosing gets more expensive.
The Workflow Pattern: Declarative Over Imperative
The most powerful workflow pattern: stop telling AI what to do step-by-step. Give it success criteria and let it loop.
In practice:
- Write tests first, then have AI make them pass
- Put AI in the loop with browser automation for frontend work
- Write the naive correct algorithm, then ask AI to optimize while preserving correctness
- Define outcomes and constraints, not implementation steps
This transition from imperative to declarative work changes your daily workflow. You spend less time coding, more time defining what "done" looks like and verifying the result meets that definition.
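The "naive correct algorithm first" pattern can be enforced mechanically: keep the naive version as a reference oracle and check the optimized version against it on random inputs. A sketch, using a sliding-window maximum as the hypothetical example (both implementations here are illustrative; in practice the fast one would be AI-generated):

```python
# Sketch: declarative correctness. The naive version IS the spec; any
# optimized candidate must agree with it on every input we throw at it.
import random
from collections import deque

def max_window_naive(xs: list[int], k: int) -> list[int]:
    """Naive O(n*k) sliding-window maximum: obviously correct reference."""
    return [max(xs[i:i + k]) for i in range(len(xs) - k + 1)]

def max_window_fast(xs: list[int], k: int) -> list[int]:
    """O(n) monotonic-deque version: the optimized candidate."""
    dq: deque[int] = deque()  # indices whose values are strictly decreasing
    out = []
    for i, x in enumerate(xs):
        while dq and xs[dq[-1]] <= x:
            dq.pop()                  # drop dominated candidates
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()              # drop indices outside the window
        if i >= k - 1:
            out.append(xs[dq[0]])
    return out

# The success criterion, not the implementation steps:
for _ in range(200):
    xs = [random.randint(-50, 50) for _ in range(random.randint(1, 30))]
    k = random.randint(1, len(xs))
    assert max_window_fast(xs, k) == max_window_naive(xs, k)
```

You defined "done" (agreement with the reference on all sampled inputs) and let the loop verify it; the optimization itself becomes a detail you review rather than a step you dictate.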
More Fun, More Atrophy
Programming feels more fun. The fill-in-the-blanks drudgery is removed. What remains is the creative part—strategy, architecture, problem-solving.
Less being stuck. More courage to experiment. More capacity to tackle problems previously outside your domain expertise.
But there's also atrophy. The ability to write code manually starts to fade. Generation and discrimination are different capabilities—you can review code just fine even when you struggle to write it.
This is fine. It's not historically novel. We don't mourn the loss of assembly language fluency.
The Questions Ahead
As we metabolize this capability, some open questions:
- What happens to the "10X engineer"? The productivity gap between average and exceptional might grow dramatically. Those who leverage AI effectively could pull further ahead.
- Do generalists outperform specialists? AI is better at fill-in-the-blanks (the micro) than grand strategy (the macro). Broad knowledge with AI assistance might beat deep expertise in narrow domains.
- What does this feel like long-term? Is software engineering becoming more like conducting an orchestra? Playing StarCraft? Designing systems in Factorio?
- How much of society is bottlenecked by digital knowledge work? If that bottleneck lifts, what becomes possible?
The Slop Is Coming
2026 will be messy. Expect a flood of AI-generated code across GitHub, AI-generated content across the internet, and plenty of productivity theater alongside actual improvements.
Quality control becomes paramount. The ability to distinguish good from mediocre becomes more valuable.
What Doesn't Commoditize
The skills that matter most are the ones AI can't replicate:
- Knowing what to build
- Understanding system tradeoffs
- Maintaining quality under pressure
- Seeing the elegant solution in a sea of working ones
- Caring about craft even when it's easier not to
These capabilities don't get commoditized. They get amplified.
The Path Forward
Model capability is ahead of everything else right now: integrations, workflows, and organizational habits. The industry will spend 2026 catching up.
Don't compete with AI at writing code. Instead, ask better questions: not "how do I implement this?" but "what should I build with this newly unlocked capacity?"
The future is humans with AI, where you provide judgment, taste, and architectural vision that turns cheap implementation into actual value.
What can you build now that was impossible before? That question will define the next era of software engineering.