

State of AI 2026

2026-02-11

There's been a lot of hype around AI recently. Is it overhyped? Is it underhyped? I think the answer is both. AI is genuinely transformative, but it's transformative in different ways than most people expect. The path to the singularity is real, but there are fundamental limitations that the hype tends to gloss over.

Singularity

AGI, or artificial general intelligence, is AI that can do anything a human can do. ASI, or artificial superintelligence, is AI that surpasses human intelligence. The singularity is when ASI can self-improve without humans in the loop, leading to a rapid and unpredictable acceleration of intelligence.

We're not at AGI yet, let alone ASI or the singularity. But I think we're on the path. I'm one of many people who have been predicting the singularity for a long time, and the year I've always heard is 2040. People were already saying 2040 back in 2010. We're now roughly halfway from 2010 to 2040, and it still feels like a reasonable prediction.

Elon Musk says we're already in the singularity. Maybe that's just semantics. AI is improving AI, but humans are still in the loop. To me, the singularity means humans aren't needed anymore, and we're not there yet.

So how close are we really? To understand that, it helps to look at how we got here.

AI History

AI didn't start with ChatGPT. Speech to text has been around for a while. Amazon Alexa launched in 2014 and was super cool at the time, but it was narrow. It could respond to specific commands but couldn't hold a real conversation.

GitHub Copilot launched in June 2021 and was the first time AI felt like a real productivity tool. ChatGPT followed in November 2022 and brought AI to the mainstream. In 2025 AI coding agents took things further, acting autonomously to write code, run commands, and make changes across entire codebases.

The pace is clearly accelerating. But even with this acceleration, the gap between "impressive tool" and "replaces humans" is still large. A big part of the reason is what makes AI fundamentally different from previous leaps in programming.

Programming Language Abstractions

People used to write assembly code. Then came C, which compiles down to the assembly of whichever system you're targeting. Then came higher-level languages like Java and PHP. Most programmers today don't know C and never deal with memory management. Each layer of abstraction made programming more accessible and more productive.

But all of these abstractions share something important: they have a deterministic compiler. The same input always produces the same output. You can reason about what the code will do before you run it.

AI is fundamentally different. It's non-deterministic, meaning the same prompt can produce different outputs. You can't fully predict what it will do. This is what makes it powerful, but it's also what makes it unreliable in ways that traditional programming never was. A compiler doesn't hallucinate. An LLM does.
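To make that concrete, here's a toy sketch. Neither function is a real compiler or a real model API; they're stand-ins I made up. A deterministic function always maps the same input to the same output, while sampling from a probability distribution, which is roughly what an LLM does at non-zero temperature, does not.

```python
import random

def compile_add(a, b):
    # Deterministic "compiler" stand-in: same input, same output, every time.
    return a + b

def llm_next_word(prompt):
    # Toy stand-in for an LLM at non-zero temperature: sample the next word
    # from a probability distribution, so repeated calls can disagree.
    candidates = ["powerful", "unreliable", "overhyped"]
    weights = [0.5, 0.3, 0.2]
    return random.choices(candidates, weights=weights)[0]

print(compile_add(2, 3), compile_add(2, 3))            # always: 5 5
print(llm_next_word("AI is"), llm_next_word("AI is"))  # may differ run to run
```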

Limitations

LLMs are great at predicting the next word, and you can do lots of cool things with that ability. But predicting the next word doesn't encompass everything humans can do.
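For a sense of what "predicting the next word" means mechanically, here's a toy autoregressive loop. The lookup table is a made-up stand-in; a real LLM replaces it with a neural network that scores every possible next token given all the text so far, not just the last word.

```python
# Hypothetical toy model: maps the last word to a guess for the next one.
next_word_table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def generate(prompt, steps=5):
    # Autoregressive generation: predict one word, append it, repeat.
    words = prompt.split()
    for _ in range(steps):
        words.append(next_word_table.get(words[-1], "the"))
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat"
```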

The physical world is one example. Boston Dynamics, Tesla self-driving, and the Optimus robot are all tackling physical world AI and making real progress, but it's still early.

There are also deeper issues with LLMs themselves. They have no world model; they only predict text, and a cat understands the physical world better than any LLM does. They're prone to hallucinations and lack general understanding. And they require enormous training datasets, while a human can see something once and learn from it.

As Yann LeCun has argued, AGI requires something beyond LLMs, a deeper understanding of the physical world that text prediction alone can't provide. I think he's probably right. LLMs are an incredible tool, but they may not be the path to true AGI on their own.

Software Companies

Stock prices for traditional software companies have been tanking recently. The sentiment is that AI makes writing code so much easier that these software giants are going to get upended and displaced. Are product managers going to vibe code the next Microsoft Excel killer?

AI coding tools do lower the barrier to entry. People without much programming experience can produce simple projects, and new developers can ramp up faster than ever before. But I don't think these software companies are going to suffer as much as the market is signaling. If anything, software companies may be the biggest beneficiaries of AI. They already have the engineers, the infrastructure, and the customers. AI lets them build faster and do more with the teams they have.

It's like a power drill vs a screwdriver. You can work much faster, but you still need to know where to put the screws. Before calculators, accountants had to do arithmetic by hand. A calculator lets anyone do arithmetic, but calculators didn't replace everyone who works with numbers. You still have to know which numbers to calculate and why. With AI anyone can write code, but serious projects still require thorough understanding and careful planning from people who know how everything fits together. At the end of the day we're still going to need software engineers. Their role will just look different.

Conclusion

So is AI overhyped or underhyped? The capabilities we're seeing today are genuinely impressive and will continue to transform how we work. In that sense, it might even be underhyped. But the leap from "powerful tool" to "artificial general intelligence" is bigger than most people assume. The sell-off in software stocks is an overreaction for the same reason. AI is a better tool, not a replacement for the people using it. The singularity is coming, but probably not as fast as the hype suggests. 2040 still feels about right.

