In the 2000s, the hottest move in software was offshoring. You'd ship your requirements to a development shop in India, Vietnam, or Bangladesh, pay a fraction of Western developer rates, and wait. The cost savings were real; every spreadsheet said so. The failure modes were also real; every CTO said so.
Even assuming that the teams working on your code were smart, motivated, and hardworking, the distance, the communication overhead, the time zone mismatch, and the misaligned incentives created a brutal set of constraints. If you wanted to get good results from offshoring, you needed to be able to clearly specify what you wanted and be good at validating that you got what you expected.
You couldn't just say "I need a login system." You had to write detailed specs, break work into reviewable chunks, define acceptance criteria, and actually read the code that came back. Not rubber-stamp it. Read it, make sure that it passed muster and could be accepted internally, because the delta between "looks right" and "is right" could cost you six months of production incidents.
Sound familiar? Today, instead of shipping my requirements to a dev shop overseas, I'm shipping them to a GPU somewhere. I get something back. It looks like code. It might be code. It might be a very convincing facsimile of code that will quietly fail in production under load. I genuinely don't know until I sit down and read it carefully.
The same discipline that separated successful offshore engagements from expensive disasters applies here as well:
- Specification quality determines output quality. Vague prompts return vague code. The ability to articulate exactly what you want — at the right level of abstraction — is now a core engineering skill.
- Validation is non-negotiable. "It passed the vibe check" is not a code review. The reviewer needs to understand what the code is doing and why, not just that it compiles and the tests are green. (A sketch of what this can look like follows this list.)
- Iterative delivery beats big-bang delivery. Nobody who survived offshoring tried to outsource an entire product in one shot. You stage it. You review at each stage. You course-correct before mistakes compound.
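To make the validation point concrete, here is a minimal sketch of acceptance criteria written as executable tests rather than prose. Everything in it is hypothetical: the `login` function and its rules stand in for whatever you are actually specifying. The point is that each criterion is checkable, not vibe-checkable.

```typescript
// Acceptance criteria as executable checks, using Node's built-in test runner.
// Hypothetical example: login() stands in for the thing you specified.
import { test } from "node:test";
import { strict as assert } from "node:assert";

type Session = { userId: string };

// A stand-in user store so the sketch is self-contained.
const users = new Map([["ada@example.com", "correct-horse-battery"]]);

function login(email: string, password: string): Session | null {
  const stored = users.get(email);
  return stored !== undefined && stored === password ? { userId: email } : null;
}

// Each bullet point of the spec becomes a test the reviewer can actually run.
test("a known user with the right password gets a session", () => {
  assert.ok(login("ada@example.com", "correct-horse-battery"));
});

test("a wrong password is rejected", () => {
  assert.equal(login("ada@example.com", "hunter2"), null);
});

test("an unknown user is rejected the same way as a wrong password", () => {
  assert.equal(login("nobody@example.com", "anything"), null);
});
```

The specifics don't matter. What matters is that "done" is defined before the code comes back, and that checking it doesn't depend on trusting whoever, or whatever, wrote it.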
The Bottleneck Has Moved
Here's what I think is the deeper shift: for most of software history, the bottleneck was writing the code. That took time and required expensive humans. So the industry optimized heavily around it: better editors, better frameworks, and better abstractions, all in service of making the act of writing code faster and less error-prone.
That bottleneck is collapsing. What once took six months might take six hours. When the cost of implementation approaches zero, the bottleneck moves upstream: to design, specification, and verification. The expensive parts are now:
- Understanding the problem clearly enough to describe it precisely.
- Decomposing it into well-scoped, independently verifiable pieces.
- Reviewing what comes back and actually understanding it.
These are skills we largely deprioritized during the era when coding itself was the hard part. They're about to become the most valuable things a technical person can do.
A lot of that work used to get done “along the way”: you explored the problem and gained depth of understanding as you wrote the code. Now that exploration doesn’t happen by default, but you still need to do the work explicitly.
A note about the importance of proper architecture
There is this idea that the path to building big systems with AI is to spin up a swarm of specialized agents (a frontend agent, a backend agent, a database administrator agent, etc.) and somehow orchestrate them into a coherent product.
I find this baffling, because we already have a well-established protocol for coordinating the work of specialized, partially independent contributors on a complex system. It's called software design.
Module boundaries. Interface contracts. Separation of concerns. Dependency management. SOLID principles and more. These patterns exist precisely because complex systems built by multiple contributors without clear interfaces turn into unmaintainable messes. This is true whether those contributors are humans, offshore teams, or language models.
The instinct to throw orchestration complexity at a coordination problem is exactly backwards. The answer isn't a smarter message bus between your agents. The answer is better system design that minimizes how much the pieces need to talk to each other in the first place.
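As a minimal sketch of what that looks like in practice (all names here are hypothetical), consider a single interface contract shared by two contributors. Whoever, or whatever, implements each side can work independently, because the boundary itself is the coordination mechanism:

```typescript
// A hypothetical module boundary: the contract is the coordination protocol.
interface Order {
  id: string;
  items: { sku: string; quantity: number }[];
  status: "open" | "paid" | "shipped";
}

interface OrderRepository {
  getOrder(id: string): Promise<Order | null>;
  saveOrder(order: Order): Promise<void>;
}

// One contributor (human, offshore team, or model) works against the
// interface, never against a concrete implementation.
async function renderOrderStatus(repo: OrderRepository, id: string): Promise<string> {
  const order = await repo.getOrder(id);
  return order ? `Order ${order.id}: ${order.status}` : "Order not found";
}

// Another contributor supplies an implementation, swappable without touching
// the caller: in-memory here, a real database in production.
class InMemoryOrderRepository implements OrderRepository {
  private orders = new Map<string, Order>();
  async getOrder(id: string): Promise<Order | null> {
    return this.orders.get(id) ?? null;
  }
  async saveOrder(order: Order): Promise<void> {
    this.orders.set(order.id, order);
  }
}
```

Neither side needs a message bus, an orchestrator, or a transcript of the other's reasoning. They need the contract, and a way to verify they honored it.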
We have literally decades of experience in how to build large software systems (and thousands of years of experience in how to handle large projects in general). There isn’t anything inherently new here to deal with.
The developers who will thrive in this environment aren't necessarily the ones who write the most elegant code. They're the ones who can hold a complex system design in their head and communicate it clearly, break the work into well-specified, verifiable increments, and actually read the code that comes back and hold it to a real standard of quality.
These are, in large part, the same skills that made the best engineering leads effective during the offshoring era. The context has changed completely. The discipline hasn't.
The GPU is the new Bangalore. Time to dust off the playbook.
