
A new fatalism is growing in the software development industry. With the arrival of powerful AI agents capable of writing code, a seemingly ironclad logic has emerged:
- AI writes more and more code.
- Models write better code the larger their training dataset for that language is.
- Developers naturally choose popular languages (Python, JavaScript) so AI agents can help them more effectively.
- This, in turn, further increases the training dataset for these languages, creating a feedback loop.
This is the classic "Matthew Effect" - the rich get richer, and the poor get poorer.
The conclusion drawn from this is frightening: the competition among programming languages is over. AI has fossilized the current leaders. There is no longer any point in creating new languages, as they will never catch up to the giants in terms of training data volume.
This sounds logical. But this logic overlooks the real driving forces of the industry and how evolution actually happens. AI is not an innovation killer, but simply a new, extremely powerful environmental factor.
To understand the future, one must look beyond just AI. The real picture is a tug-of-war between three great forces.
The Force of Inertia
The first myth to dispel is that AI is the key barrier for new languages. Competition among languages robust enough for enterprise workloads did, in fact, all but end, but it ended long before assistant agents arrived.
The real wall is inertia. It consists of two unbreakable components:
Economics: These are the astronomical switching costs. Billions, if not trillions, of lines of code written in Java, C#, C++, or Python sit on enterprise servers. Migrating a single legacy system can cost anywhere from several million to hundreds of millions of dollars. For example, the British bank TSB's 2018 migration cost roughly £330 million and caused massive outages. A 2025 study by CloudBees shows that 57% of IT leaders spent more than $1 million on platform migrations in the last year, with an average project budget overrun of 18%. The average business loss per platform migration project is $315,000 due to delays, employee burnout, and security issues. Knowing this, no top manager in their right mind would approve rewriting a working system in a trendy new language, even if it is 10 times better.
Ecosystem: This is everything that surrounds the language. Mature frameworks, libraries, polished tools, documentation, the availability of trained teams, and a huge community that has already found answers to all questions.
This force is the true master of the industry.
The Force of AI
AI is not a new wall, but an amplifier of the existing one. It is important to understand: economic inertia dominated the industry long before AI assistants appeared. The transition from COBOL to Java took decades not because of a lack of AI assistants, but because of trillions of lines of legacy code and huge migration costs. AI did not create this problem - it merely quantitatively amplified it, accelerating the gap between leaders and newcomers. This is an acceleration of existing dynamics, not a qualitatively new phenomenon.
How significant is the "Matthew Effect" in reality? Multilingual code-generation benchmarks show a real gap: in one study, the best model achieved 79.81% accuracy on Python, but only 24.31% on Erlang and 20.82% on Racket. This creates a measurable advantage for developers using popular languages. But it also has a deeper, critically important consequence for the corporate segment.
The main weakness of AI assistants is context. They are great at writing isolated functions but stall when faced with complex, specific business logic or the tangled architecture of a 20-year-old legacy system.
And what do companies do? They start training internal AI models on their own giant codebases.
Imagine: an AI trained on all your bank's projects from the last 15 years. It knows all your internal APIs, patterns, and business constraints. Such an assistant makes supporting old systems in Java or C# even more efficient and cheaper, further deepening the attachment to the language and reducing the motivation to migrate. Moreover, if clear requirements for a new system are not formulated for the assistant, it will, like many of us, most likely choose the language it knows best.
The Force of Innovation
Does all this mean the era of competition is over? Definitely not. It means it has transformed.
The need to solve new problems has not gone anywhere. Innovation simply can no longer storm the wall of inertia head-on. You cannot create a "Java killer" from scratch today with a new ecosystem, new runtime, and new libraries.
Therefore, innovators are left to use guerrilla survival strategies.
The "Symbiote" Strategy
The most successful new languages do not compete with platforms but integrate into them, creating a mutually beneficial symbiosis. They come to an existing platform (JVM, .NET, JavaScript) and say: "We will use all your inertia (libraries, runtime, community), but we offer better syntax." The platform gets modern language features, and the new language gets a ready-made ecosystem.
Kotlin won not as a new language, but as a better Java. It runs on the JVM and is fully compatible with Java code. TypeScript won not as a JavaScript killer, but as its safe descendant. It compiles to JavaScript and works in the same Node.js and browser ecosystem.
However, symbiosis with an ecosystem does not guarantee success. Scala, despite JVM compatibility, is stagnating: according to Stack Overflow Developer Surveys, its usage dropped from 3.2% in 2019 to 2.6% in 2025. Ceylon, Red Hat's JVM language, was abandoned entirely in 2020. Nim, which compiles to C/C++ to reuse their ecosystems, remains niche with adoption below 0.5%. Symbiosis opens doors, but it does not guarantee long-term success; additional competitive advantages are needed.
The "Specialist" Strategy
This strategy works when a new language offers not mere convenience, but an order-of-magnitude advantage in solving one narrow but critically important problem that the "old guard" solves poorly.
Rust is the perfect example. Its memory safety guarantees are not a nice-to-have feature, but a fundamental solution to a problem that has cost the industry billions. For systems programming, drivers, and high-load services, this advantage outweighs any inconvenience for AI generation.
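The guarantee is easy to see in a few lines. This is a minimal sketch in standard Rust (no external crates, not taken from any real codebase): ownership rules turn a use-after-move, a bug class that in C or C++ surfaces only at runtime, into a compile-time error.

```rust
// Borrowing (`&[i32]`) lets a function read data without taking ownership,
// so the caller can keep using it afterward.
fn checksum(data: &[i32]) -> i32 {
    data.iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];

    let total = checksum(&data); // shared borrow: `data` is still valid here
    assert_eq!(total, 6);

    let moved = data; // ownership moves to `moved`; `data` is now invalid
    // println!("{:?}", data); // uncommenting this is a compile error (E0382:
    //                         // borrow of moved value), not a runtime crash
    assert_eq!(moved.len(), 3);
}
```

The point is that the dangerous line is rejected before the program ever runs; no training data, tooling, or code review is needed to catch it.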
Rust succeeded by solving a critical problem, but many specialist languages fail. Pony promised to solve concurrency problems through a unique type system but never gained a critical mass of users. Zig positions itself as a simpler alternative to C with better error handling but remains in Rust's shadow. This shows that even solving an important problem requires the right market timing and sufficient ecosystem support.
AI Developers vs. The "Matthew Effect"
Finally, the creators of AI models themselves are not interested in creating a prohibitive barrier. They understand the risks of homogenization and do not want their models to be good only at Python.
To smooth out the "Matthew Effect," they use a whole arsenal of compensating approaches:
Transfer Learning: Modern language models use intermediate code representations that are similar across languages. Knowledge of programming semantics gained from training on Java is transferred to similar C# or even Swift. However, transfer efficiency decreases as differences increase: transferring knowledge from Python to Rust works worse than from Java to C# due to fundamentally different memory management models. Research shows that transfer learning allows achieving approximately 70-85% of the performance of a language with a large data corpus, which reduces but does not eliminate the gap.
Fine-tuning on Small Corpora: A model can be fine-tuned on a high-quality, if small, body of code (for example, all Rust code from GitHub) so that it learns the language's specifics. Fine-tuning on specialized datasets genuinely helps, but it has limits: a model fine-tuned on all public Rust code (~10M lines) still trails a model trained from the start on Python (~1B lines). Curation compensates for quantity: hand-selected, high-quality Rust code gives better results than random Python code from GitHub, but it demands significant resources for data preparation.
Cross-Language AI Translators: AI-driven translation of existing code into other languages has become a practical tool for innovators: write code in one language, then translate it to another. Paradoxically, this lowers the barrier for "small" languages, making them viable. Automatic translation sounds like a panacea, but in practice it works best between syntactically close languages (Java and C#, JavaScript and TypeScript). Between conceptually different languages (Python and Rust, JavaScript and Haskell), accuracy drops to 60-70%, requiring significant manual rework. Even so, imperfect translation still lowers the entry threshold for new languages.
Beyond the Corporate Mainstream
Besides corporate development, there are other segments with different dynamics.
Domain-Specific Languages (DSLs): SQL, R, and MATLAB continue to dominate their niches regardless of AI assistant trends because they are chosen by non-programmer specialists.
Academic Languages: Coq, Agda, and Idris are used for formal verification and type theory research. Here, AI support is secondary - mathematical rigor and expressiveness are more important.
Embedded Systems Languages: Specialized languages for microcontrollers (Ada for aviation, MISRA C for automotive) are chosen on safety and certification criteria, where AI assistants play a minimal role. In these contexts, the "Matthew Effect" is weakened, and innovation follows a different logic.
Evolution Has Become Smarter
So, did AI kill innovation in programming languages? Definitely not.
The frightening myth was born because we confused cause and effect.
- The main barrier for new languages is not AI, but economic inertia (switching costs and network effects).
- AI is merely an amplifier of this inertia, making the "old guard" even more convenient to support.
- Innovation has not died. It has changed tactics: instead of head-on assaults and arguments over subjective preferences, it uses "Symbiote" (Kotlin, TypeScript) and "Specialist" (Rust) strategies.
Evolution has not stopped. It has simply moved from a platform war to a more cunning, niche struggle and the improvement of existing ecosystems. And this is, perhaps, much more interesting than a simple change of the leader.