If fewer humans are writing code line by line, we should at least stop pretending languages shaped by decades of human compromise are the obvious foundation for the next phase of software.

Python did not win by accident. Neither did JavaScript. People did not stick with them because the industry rolled dice and got lucky. They won because they made a huge amount of useful work possible despite all kinds of historical baggage, rough edges, and tooling pain. Richard Gabriel’s “Worse is Better” remains one of the clearest explanations of why messier, more pragmatic systems often beat cleaner designs in the real world, and that logic applies here too. Python in particular is a good example. For years, setting it up cleanly was often harder than writing the code itself. People persevered anyway because the language was productive enough, flexible enough, and broadly useful enough to be worth the trouble.

But that still does not answer the more interesting question.

Are the properties that made languages successful in the human-written era the same properties we should want in a world where models generate more of the code and humans increasingly review, constrain, steer, and verify it?

I do not think so.

A lot of our mainstream languages are, bluntly, clusterfucks of historical design decisions, compatibility baggage, inconsistent semantics, and missing guarantees. Humans learned to work around that with conventions, tooling, frameworks, code review, tests, and institutional memory. That is ugly but manageable when code production is bottlenecked by human effort.

It looks different when code production stops being the bottleneck.

Once models can generate code faster than teams can reason about it, the constraint moves. The hard part is no longer expression. It is trust. Can this output be checked, constrained, transformed safely, reasoned about, and verified without requiring heroic amounts of human review every time the model decides to be clever? In a different era, Fred Brooks made a related point in “No Silver Bullet”: making software construction faster does not remove the essential complexity of software itself. That feels even more relevant when generation gets cheap.

That is the part of the conversation I think people still underweight.

A lot of the current excitement still assumes the basic substrate stays the same: same languages, same runtime assumptions, same loose relationship between intent and implementation, same pile of tooling added afterward to paper over design problems. The model just types faster. But if the volume of generated code keeps rising, I do not think “just add more tooling” is a serious long-term answer.

This is also why I do not buy the lazy line that English is the new programming language. Dijkstra made a version of this argument decades ago in “On the foolishness of ‘natural language programming’,” and the basic objection still holds.

English is fine for rough intent. It is fine for exploration. It is fine for asking a model to sketch, compare, or draft. It is a much weaker medium for producing deterministic, constrained, verifiable software without repeated interpretation loss, back-and-forth, and hidden ambiguity. That does not mean prompts do not matter. It means English alone is a poor control surface for systems where correctness actually matters.
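To make the ambiguity point concrete, here is a small, hypothetical illustration (the user list and both readings are invented for this sketch): one perfectly ordinary English instruction, “sort the users by name,” admits at least two implementations that each look correct in isolation but disagree on the output.

```python
# One English instruction, two defensible programs.
users = ["alice", "Bob", "carol"]

# Reading 1: plain lexicographic sort. In Python this compares code
# points, so uppercase letters sort before lowercase ones.
reading_1 = sorted(users)                 # ['Bob', 'alice', 'carol']

# Reading 2: case-insensitive sort, which is what many humans mean.
reading_2 = sorted(users, key=str.lower)  # ['alice', 'Bob', 'carol']

# Same sentence, different behavior. Nothing in the English
# instruction tells a code generator which one was intended.
assert reading_1 != reading_2
```

Neither reading is wrong; the sentence simply underdetermines the program. A precise interface (a type, a contract, a spec) is what forces that choice to be made explicitly instead of silently.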

So the interesting question to me is not whether we get a shiny new programming language and everyone suddenly rewrites the world. Ecosystem gravity is real. Training data matters. Existing stacks matter. But popularity in the human-coded era is not the same thing as fitness for the next one.

The more important shift may be that we start wanting a stronger substrate between intent and execution: something more analyzable, more constrained, more explicit, and better suited to verification than the languages we inherited from a very different mode of software production.

Maybe that looks like a new language. Maybe it looks like a tighter intermediate representation, a typed specification layer, or systems that make the path from intent to executable code far more explicit than it is today.
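As a rough sketch of what a typed specification layer could mean in practice (the `Spec` class, `sort_spec`, and `buggy_sort` below are all invented for illustration, not an existing tool): the human writes a machine-checkable contract, and any generated implementation is validated against it before it is trusted.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Spec:
    """A contract for a function from a list of ints to a list of ints."""
    precondition: Callable[[list[int]], bool]
    postcondition: Callable[[list[int], list[int]], bool]

    def check(self, impl: Callable[[list[int]], list[int]],
              cases: list[list[int]]) -> bool:
        # Reject the implementation if any valid input violates the contract.
        for xs in cases:
            if not self.precondition(xs):
                continue
            if not self.postcondition(xs, impl(xs)):
                return False
        return True

# Spec for sorting: the output must equal the sorted input,
# which also forces duplicates to be preserved.
sort_spec = Spec(
    precondition=lambda xs: True,
    postcondition=lambda xs, ys: ys == sorted(xs),
)

# A "generated" implementation that looks plausible but drops duplicates.
def buggy_sort(xs: list[int]) -> list[int]:
    return sorted(set(xs))

cases = [[3, 1, 2], [5, 5, 1], []]
assert sort_spec.check(sorted, cases) is True       # correct impl passes
assert sort_spec.check(buggy_sort, cases) is False  # contract catches the bug
```

This is only example-based checking, the weakest point on the spectrum; the same shape extends to property-based testing, refinement types, or full verification. The point is that the contract, not a prose description, is what stands between intent and executable code.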

Whatever shape it takes, I suspect the next real bottleneck is not generating more code. It is building software stacks that can survive an AI code avalanche without collapsing under ambiguity, review burden, and correctness debt.