If you’re reading this in 2026, you might not realize how extraordinary your moment is.

We are living through a transition more transformative than the internet, more profound than electricity. In the space of just a few years, humanity has gone from watching AI produce mediocre text and basic images to witnessing systems that write code, navigate cities without human intervention, and, according to some of the most serious researchers in the field, may be beginning the recursive self-improvement that leads to artificial general intelligence.

This isn’t hype. It’s happening in plain sight.

The First Signs of Recursive Self-Improvement

Anthropic CEO Dario Amodei made a prediction that, when I first read it, I dismissed as overly optimistic. Then I watched what happened.

Six to twelve months ago, Amodei predicted we would soon have models writing 80-90% of the code for many developers. We largely have that now: Claude Opus 4.5 and similar systems have become essential tools for millions of developers worldwide.

Now he’s predicting something more radical: in the next 6-12 months, we’ll see models that can do 80-90% of the work of a software engineer. And when those models gain expertise in AI research itself? That’s when things get interesting.

Recursive self-improvement.

Here’s what that means: an AI that can understand its own architecture, propose improvements that actually generalize, implement those improvements in code, and evaluate whether the new model is better. Then it repeats the loop faster than humans can.

If this happens, we’re looking at the beginning of something qualitatively different. A feedback loop that accelerates beyond human comprehension.
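Stripped to its skeleton, that loop is a propose-implement-evaluate-repeat cycle. The sketch below is purely illustrative, a toy hill-climber rather than anyone’s actual research pipeline: `propose` and `evaluate` are hypothetical stand-ins for what would, in reality, be AI research itself.

```python
import random

def propose(params):
    # Stand-in for "propose an improvement": perturb the current design.
    return [p + random.gauss(0, 0.1) for p in params]

def evaluate(params):
    # Stand-in for "is the new model better?": a toy benchmark score
    # (higher is better; the optimum here is the all-ones vector).
    return -sum((p - 1.0) ** 2 for p in params)

def self_improvement_loop(params, iterations=1000):
    best = evaluate(params)
    for _ in range(iterations):
        candidate = propose(params)      # implement the proposed change
        score = evaluate(candidate)      # benchmark the new "model"
        if score > best:                 # keep it only if it improved
            params, best = candidate, score
    return params, best

params, score = self_improvement_loop([0.0, 0.0, 0.0])
print(score > evaluate([0.0, 0.0, 0.0]))  # the loop improved on the start
```

The point is only the shape of the loop: generate, test, keep what wins, and start the next pass from the best design found so far. The speed of each pass is what the real debate is about.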

The community is watching this timeline closely. Some, like Shane Legg (one of the researchers who first defined and popularized the term AGI), have been predicting a 50% chance of AGI by the end of the decade for years. Others are more optimistic. But the consensus is shifting.

The Infrastructure is Real

The most remarkable thing about all this isn’t just the software—it’s the infrastructure being built.

Look at what happened this week: Tesla launched unsupervised robotaxi rides in Austin. Waymo had already been doing this for years, with exemplary safety records. But Tesla’s approach is different. No lidar, no radar, no HD mapping. Just cameras and AI.

When it works, it’s a glimpse of what autonomous infrastructure looks like. When it doesn’t, it’s a reminder that we’re still figuring this out. But the fact that it’s happening at all—that we’re debating safety records instead of whether it’s even possible—is a sign of how far we’ve come.

Then there’s SpaceX. The company that launched the largest satellite constellation in Earth orbit is lining up major banks for a potential mega IPO in 2026. Elon Musk has already hinted that some of the proceeds will fund AI data centers in space.

Think about that for a moment.

Data centers in orbit. Round-the-clock solar power. Zero land-use conflicts. If Starship hits its projected cost of $200/kg to orbit, orbital compute goes from a “moonshot” to the most logical way to scale AI infrastructure.

We’re not just building AGI on Earth. We’re talking about building it in space.

The Scale of Investment

The money flowing into this space is staggering.

In 2024, AI companies spent roughly $15-20 billion; that spending drove the breakthroughs we saw in 2025. In 2025, the figure grew by a factor of 20-25.

What’s coming in 2026 will be driven by what was invested in 2025. And that investment was enormous.

The result? 2026 is going to be a wild ride. Every month. Every week. Every day.

As one commenter on the recursive self-improvement thread put it: “Soon every month will be a wild ride. Then every week. Then every day. Then every few hours. Then some weird shit starts to happen.”

That’s not hyperbole; it’s what compounding acceleration looks like. If each improvement cycle runs faster than the one before it, the gaps between milestones keep shrinking.
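That quoted progression (months, then weeks, then days) can be made concrete with a toy model. Assume, purely for illustration, that each improvement cycle takes a fixed fraction of the previous one’s time; the cycle times then form a geometric series, and the milestones bunch up against a finite horizon. Every number below is made up.

```python
def milestone_times(first_cycle_days=30.0, speedup=0.8, cycles=40):
    """Cumulative time of each milestone when every cycle takes
    `speedup` times as long as the one before it (toy numbers)."""
    times, t, dt = [], 0.0, first_cycle_days
    for _ in range(cycles):
        t += dt
        times.append(t)
        dt *= speedup
    return times

times = milestone_times()
# Geometric series: total time is bounded by
# first_cycle_days / (1 - speedup) = 30 / 0.2 = 150 days.
print(round(times[-1], 1))  # → 150.0
```

Whether real AI progress follows anything like a fixed speedup ratio is exactly the open question; the model only shows why “every month, then every week, then every day” is the natural signature of a compounding loop rather than loose talk.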

The Human Element

The singularity isn’t just about machines. It’s about what we do with them.

Look at the Cursor AI CEO’s demonstration: GPT 5.2 agents built a web browser of more than 3 million lines of code in a week. It’s not production-ready. It has bugs. It crashes sometimes. But it exists, and it was built by AI agents, not by humans.

This is the shift we’re experiencing: AI is no longer just a tool we use. It’s a collaborator we work alongside. A partner that can generate, iterate, and improve at speeds that make traditional software development look like pre-industrial craftsmanship.

For software engineers, this means the nature of our work is changing. For everyone else, it means the economic landscape is about to transform in ways we can barely imagine.

BlackRock CEO Larry Fink put it plainly: “If AI does to white-collar work what globalization did to blue-collar, we need to confront that directly.”

He’s right. But he’s also underselling it. AI isn’t just going to transform white-collar work. It’s going to transform the very concept of human work.

What Does “AGI” Even Mean?

The term itself is messy. DeepMind’s Demis Hassabis defines AGI as an AI that can do anything any human being can do, including replicating the genius of Einstein, Mozart, or Picasso. That’s what he calls “full AGI.”

But he also describes “minimal AGI”: an AI that can do anything a typical non-expert human can do. Answer emails, browse the web, use software, manage a household. In many ways, that’s already here.

The distinction matters because the implications are different. Full AGI might trigger a singularity. Minimal AGI might just change how we live, work, and organize society.

But here’s the thing: once you have minimal AGI, the path to full AGI becomes clearer. And once you have full AGI, the path to something beyond—that’s where the singularity really begins.

The Skeptic’s Guide to Optimism

I get it. If you’ve been following AI for a while, you’ve heard these predictions before. “AGI is five years away.” “AGI is ten years away.” “AGI is always ten years away.”

So why should you believe it this time?

Because the trajectory is different this time. We’re not just guessing. We’re measuring. We’re benchmarking. We’re seeing results.

When Claude Opus 4.5 can build a functioning AI coworker in 1.5 weeks, that’s not speculation. That’s happening.

When Tesla’s unsupervised robotaxi is picking up passengers in Austin, that’s not a prototype. That’s live.

When SpaceX is planning data centers in orbit, that’s not science fiction. That’s business planning.

The skeptics are right to question timelines. They’re right to demand evidence. But they’re wrong to dismiss the progress. The pace has accelerated, and the infrastructure is real.

Why This Matters to You

Whether you’re a software engineer, a student, a parent, or just someone curious about the future, this matters.

If you’re a developer: your role is changing. Learn to work with AI, not against it. Build systems that leverage its capabilities. Don’t let it replace you—learn to be the architect it needs.

If you’re in white-collar work: prepare for disruption. Jobs built on repetitive cognitive tasks are at risk first. Jobs that depend on creativity, judgment, and human connection have more runway, but not immunity.

If you’re a student: this is your era. The tools you have access to would have seemed like magic to your parents. Use them to learn faster, think deeper, and achieve more.

If you’re just curious: pay attention. We’re witnessing a transition that will shape humanity for centuries. The questions we ask now—the safeguards we build, the policies we enact, the values we uphold—will determine whether this transition is a blessing or a catastrophe.

The Final Question

I started this article with an observation: if you’re reading this in 2026, you might not realize how extraordinary your moment is.

The real question is: will you be someone who watches from the sidelines? Or will you be someone who participates?

The singularity isn’t coming. It’s here. The question is what we do with it.


We’re living through a transition more transformative than the internet, more profound than electricity. The infrastructure is being built. The timeline is accelerating. The only question left is what we’ll do with it.