The Age of Agents

I keep having the same conversation. Someone technical, someone who should know better, tells me they don’t see a use case for autonomous AI agents. And I get it, because I’ve been on the other side of that exact conversation before. When ChatGPT launched, I didn’t even bother using it for months. I thought it was a distraction, a chatbot wrapper around something more interesting. I was wrong then. These people are wrong now.

What’s happening with AI right now is not hype, and it’s not a fad. It is a general purpose technology evolving through multiple generations in real time, at a pace that has no historical precedent. Electricity took decades to go from light bulbs to computation. LLMs have gone from autocomplete engines to autonomous agents in about five years. And at every single generational transition, the same thing happens: people who haven’t finished processing the last stage declare the next one pointless.

The Three Stages

Think of this as an evolution with three distinct stages. Each one uses the same underlying primitive, next-token prediction, but the form factor changes so much that it barely looks like the same technology.

Stage one was autocomplete. GPT-2, GPT-3. Raw text prediction. You fed it tokens, it predicted the next ones. Useful to researchers and tinkerers. Nobody else. Instruct-tuning made it slightly better at following directions, but the experience was basically the same: text in, text out. Call it stage 1.5.

Stage two was chatbots. ChatGPT showed up in late 2022 and suddenly everyone’s grandmother was talking to an AI. The instruction was simple: be a chatbot with a personality. That was the real UX breakthrough. Not a technical one, a form factor one. Over time, reasoning got bolted on. Tool use got bolted on. Retrieval-augmented generation got bolted on. Call that stage 2.5. You could get ChatGPT Pro or Claude to spend an hour chewing on a complex research problem, running code, pulling from the web. Impressive stuff. But the loop was still the same. You asked it a question, it went and did something, it came back with an answer, and then it waited for you. Always waiting for you.

Stage three is agents. In November 2025, an Austrian developer named Peter Steinberger started building what he called Clawdbot as a weekend project. Anthropic sent a trademark complaint, so it became Moltbot. That name didn’t stick either. By January 30, 2026, it was OpenClaw, and it had become one of the fastest-growing open-source projects ever, blowing past 150,000 GitHub stars in a matter of weeks.

What made it different was simple: the human was no longer the clock. OpenClaw runs on your local machine, connects to your messaging apps, wakes up on cron jobs, and goes to work whether you’re watching or not. It reads your email. It schedules things. It writes code. It interacts with APIs and command lines. It can even reach out to you proactively. The loop doesn’t depend on a human prompt anymore. And that changes everything.
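That inversion of the loop is easier to see in code than in prose. Here is a minimal sketch of a scheduler-driven agent cycle: tasks fire on their own intervals, and no human prompt appears anywhere in the loop. All names (`Task`, `due`, `run_agent`) are hypothetical, not OpenClaw's actual API.

```python
import time
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    interval_s: int        # how often the task should fire, in seconds
    last_run: float = 0.0  # timestamp of the last execution

def due(task: Task, now: float) -> bool:
    """A task is due once its interval has elapsed since the last run."""
    return now - task.last_run >= task.interval_s

def run_agent(tasks: list[Task], ticks: int, tick_s: float = 0.0) -> list[str]:
    """Cron-style loop: wake up, run whatever is due, go back to sleep.
    The human never appears in this cycle."""
    log = []
    for _ in range(ticks):
        now = time.monotonic()
        for task in tasks:
            if due(task, now):
                log.append(task.name)  # stand-in for actually doing the work
                task.last_run = now
        if tick_s:
            time.sleep(tick_s)
    return log
```

Contrast that with the chatbot loop of stage two, where every iteration begins with a blocking read on user input. The structural change is just who holds the clock, but it's the whole difference between a tool and an agent.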

The Electricity Comparison

I keep coming back to electricity because the analogy is almost too clean.

The light bulb was the first real application. You run current through a filament, the resistance heats it until it glows, you get light and heat. That’s the simplest possible use of the new force. That’s autocomplete. Take a phenomenon and exploit it in the most direct way.

Electric motors came next. Same force, but now you’re doing something clever with coils and magnets and converting current into torque. Real work. That’s chatbots. Same underlying technology, reshaped into a form factor people can actually use.

Then things got interesting. The third generation of electricity was communication: telegraph, telephone, radio, switch networks. The fourth was computation. Each generation was a higher-order consequence of the original technology. Less obvious, more powerful, harder to predict from the vantage point of the previous stage.

Nobody who watched the first dynamo spark could have looked at that arc of electricity and said, “One day we’ll use this to make sand think.” It was not obvious. And my point is that it’s equally non-obvious to go from a token prediction engine to a chatbot to something that wakes up at 3am and files your taxes.

The difference is speed. Electricity took decades between each of those stages. We’re watching the same progression happen in months. The autocomplete-to-chatbot transition was roughly two years. Chatbot-to-agent, about three. That speed is part of what breaks people’s intuitions. They can’t absorb the current stage fast enough to see the next one coming.

The VR Counterargument

This is where skeptics have a fair point, and it’s worth taking seriously.

VR was supposed to be the future. Going all the way back to the 1980s, manga and anime and cyberpunk novels all promised the same thing: strap on a headset and dive into cyberspace. By the 2020s, we had the hardware. Oculus, Vision Pro, the whole lineup. And almost nobody uses them. The head-mounted display turned out to be a technological primitive that just didn’t generate the cascading consequences everyone expected.

So it’s reasonable for someone to look at next-token prediction and say: maybe this is the same deal. Maybe it’s a cool trick that doesn’t actually go anywhere.

But the trajectories are completely different. VR produced one form factor and stalled. LLMs have already produced three distinct generations. Each one found broader adoption than the last. Each one enabled things the previous one couldn’t. The evidence isn’t speculative anymore. You can track the progression. The question of whether this is a real general purpose technology or a dead end has been answered. It’s been answered by hundreds of thousands of people using agents to do real work, right now, today.

The Grift Is Real (and It Doesn’t Matter)

Let’s be clear about the current state of things, because it’s a mess.

Days after OpenClaw went viral, an entrepreneur named Matt Schlicht launched Moltbook, a Reddit-style social network that was supposedly only for AI agents. The media coverage was wild. Forbes said 1.4 million agents had formed a “hive mind.” Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing” he’d seen.

The reality: roughly 99% of the agent accounts were fake. The platform had about 17,000 actual human users. Most “agent posts” were written by people, sometimes run through a chatbot first, then submitted through OpenClaw. Someone even built a tool called Mockly to generate fake Moltbook screenshots. On top of the fabrication, the platform leaked 1.5 million API keys and became a highway for crypto scams and malware.

Then there was Rent-A-Human. Launched February 2, 2026 by a crypto engineer named Alexander Liteplo. The pitch: AI agents hire real humans for physical-world tasks and pay them in cryptocurrency. The site claimed 70,000 registered humans. In practice, only about 83 profiles were visible. The most publicized example of the platform working involved a company Liteplo himself worked for.

OpenClaw itself had a serious security vulnerability patched on January 29 that let external integrations take control of users’ machines. Thousands of credentials were exposed.

This is what the early phase of any transformative technology looks like. It’s full of grifters and scams and legitimate security nightmares. Moltbook is a cesspit. Rent-A-Human is mostly performance art. None of that negates the underlying shift. It just means we’re early.

Why the Enterprise Won’t Touch This for Years

If you’ve ever worked inside a Fortune 500 company, you already know the punchline. But for everyone else, here’s what happens when you bring something like OpenClaw to a large organization.

Cybersecurity looks at it first. You’re asking them to approve an open-source tool that gets root access to a virtual machine and downloads community-built extensions with minimal vetting. From their perspective, this is functionally malware. They will ban it. They should ban it.

Legal looks at it next. And here’s something most people outside the enterprise world don’t realize: insurance companies will not underwrite AI right now. They don’t know how to price the risk. If your insurer won’t cover it, your legal team will kill it, full stop. No productivity gains in the world will overcome unquantifiable legal exposure.

Then the CFO weighs in. Microsoft charges $30 per user per month for Copilot, on top of existing Microsoft 365 licenses. The technologist’s argument is simple: if someone earning $50-60 an hour gets even one extra productive hour per month, the tool pays for itself. But CFOs don’t think like technologists. They ask how you measure that extra hour. They ask what happens if people just finish the same work faster and then slack off. These aren’t dumb questions. They’re the questions you have to ask when you’re rolling something out to 20,000 people instead of 20.
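The technologist's side of that argument is a one-line calculation. Here it is made explicit, using the figures from the paragraph above ($30 per seat per month, $50–60 per hour); the function name is mine, not from any vendor's material.

```python
def breakeven_hours(seat_cost_monthly: float, hourly_rate: float) -> float:
    """Hours of recovered productivity per month needed to cover one license."""
    return seat_cost_monthly / hourly_rate

# At $50-60/hr, a $30 Copilot seat pays for itself in well under
# one recovered hour per employee per month.
print(breakeven_hours(30, 50))  # 0.6 hours
print(breakeven_hours(30, 60))  # 0.5 hours
```

The CFO's objection isn't to the division; it's that the numerator is easy to measure and the denominator, the recovered hour, is not.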

Then there’s the shadow IT problem. While legal and cybersecurity are debating policy, half the organization is already using chatbots on their personal accounts. Legal is using them. HR is using them. Finance is using them. Everyone is using them and hiding it. The CISO finds out and now you have an unregulated AI footprint with zero governance. This is already happening at companies all over the world.

The only organizations making this transition successfully are the ones where it comes from the very top. Not the CTO. The CEO. The board. The owner. When the person at the top of the org chart says “we are going all in on AI” and personally leads the charge, things move. When they say “it’s important but not a top priority,” nothing happens. I know consultants who will walk away from a client if the CEO isn’t the one driving adoption, because they’ve learned that anything less is a waste of everyone’s time.

The fastest realistic timeline for a Fortune 500 to deploy a properly audited, enterprise-grade agentic framework (not OpenClaw itself, but something built on the same ideas) is probably 18 months. That’s how long infrastructure audits, cybersecurity reviews, legal assessments, and executive alignment actually take. And that’s the optimistic scenario.

The Jobs Disappearing in the Margins

The headline number from Challenger, Gray & Christmas is about 54,800 U.S. layoffs in 2025 explicitly attributed to AI. That’s roughly 5% of the 1.17 million total job cuts announced that year, the highest annual number since 2020. Amazon cut 14,000. Microsoft eliminated 15,000. Salesforce’s CEO said AI was handling 30 to 50 percent of the company’s workload.

But that number only counts layoffs where the company specifically said “AI” in the announcement. It misses the bigger, quieter effect: jobs that never got created in the first place.

You can estimate this the same way epidemiologists estimated excess deaths during COVID. Look at what should have happened based on GDP growth, inflation, interest rates. Compare it to what actually happened. Subtract other known factors (DOGE-related federal cuts, tariff impacts, retail closures). What’s left is the gap. Multiple cross-referenced analyses put the real figure somewhere between 100,000 and 350,000 jobs either destroyed or avoided in 2025, in the U.S. alone.
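The excess-deaths method described above reduces to a simple residual calculation. Here's a sketch with illustrative numbers only; these are not the actual inputs of the analyses cited, and the function name is mine.

```python
def excess_job_losses(expected: float, actual: float,
                      known_factors: dict[str, float]) -> float:
    """Counterfactual gap: (expected - actual) job creation, minus the
    portion already explained by other known causes. The residual is
    what's attributable to the unexplained factor -- here, AI."""
    raw_gap = expected - actual
    explained = sum(known_factors.values())
    return raw_gap - explained

# Illustrative numbers, not the analyses' real inputs.
gap = excess_job_losses(
    expected=1_500_000,   # jobs a macro model would have predicted
    actual=1_200_000,     # jobs actually created
    known_factors={"federal_cuts": 60_000, "tariff_impacts": 40_000},
)
print(gap)  # 200000
```

The method's weakness is obvious from the code: the answer is only as good as the macro model behind `expected` and the completeness of `known_factors`. That's why the published estimates span a wide range rather than landing on a single figure.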

An AI job loss is often invisible. It’s a headcount that never got approved. A contractor who wasn’t renewed. A team that was restructured to be smaller than it would have been. The displacement is real. It’s just not the kind that makes a clean headline.

Dead Companies Walking

There are companies right now that are already dead. They just don’t know it yet.

This is the Borders Books scenario. Remember Borders? They looked at Amazon and the internet and said, “Books aren’t changing. People like holding a physical book.” They were right about the books. They were catastrophically wrong about everything else. Barnes & Noble somehow survived. Borders is long gone.

The same sorting is happening right now with AI. The companies that say “we don’t really get this chatbot stuff” are already falling behind the companies that say “we don’t fully understand this yet, but go experiment.” And the companies experimenting with chatbots are already falling behind the ones experimenting with agents.

The gap between the cutting edge and the mainstream is getting wider, not narrower. The people building with agents today are living in a different reality than the people still debating whether chatbots have enterprise value. Those two groups are looking at the same technology and seeing entirely different things.

We’re in the long slog of diffusion. The technology works. The experimentation is chaotic. The grift is everywhere. The real risks are severe. And the institutional machinery that determines how most of the world actually encounters new technology (corporate boards, legal departments, insurance underwriters, government procurement offices) is grinding forward at the speed it has always moved.

That’s not a reason to be pessimistic. It’s just what happens when a general purpose technology evolves faster than the organizations trying to absorb it. The technology isn’t waiting. The question is whether you are.
