Something happened on X this week that tells you more about enterprise AI adoption than any Gartner report.
Peter Steinberger, who created OpenClaw and recently joined OpenAI, quote-tweeted an update from Brad Groux, admin of the OpenClaw for Microsoft Teams project. The update: more than a dozen Microsoft employees have got involved in making OpenClaw work properly on Teams. Six are now dedicated to the effort. They’re not just advising. They’re dogfooding it — running OpenClaw as their own AI agent inside Microsoft’s own collaboration platform.
Nobody told them to do this. There’s no corporate mandate. No partnership announcement. No press release. Microsoft employees looked at an open-source AI agent framework with 250,000 GitHub stars and decided, on their own time, to make it work with their employer’s product.
That should tell you something about where enterprise AI is actually heading.
The pattern that matters
Every major technology shift in the enterprise follows the same playbook. It doesn’t start with a board decision or a procurement cycle. It starts with employees.
Linux didn’t win the server room because CTOs chose it in a strategy meeting. Developers started using it, then ops teams noticed it worked better, then the CTO was told they were already running it. Slack didn’t replace internal email because someone signed an enterprise agreement. One team started using it, then the floor, then the building.
GitHub. Dropbox. Zoom before the pandemic. The same story every time. Employees adopt the tool because it solves a real problem. IT catches up later.
OpenClaw in Microsoft Teams is this pattern happening in real time, and at a speed that should make anyone in enterprise leadership pay attention.
Why Teams is the unlock
OpenClaw already works with WhatsApp, Slack, Discord, Telegram, and a dozen other surfaces. But Teams is different. Teams is where 320 million monthly active users do their actual work. It’s where the documents live, where the meetings happen, where the approvals flow.
An AI agent that can read your email, check your calendar, pull data from APIs, execute code, and manage files — all from a Teams chat window — isn’t a novelty. It’s a genuine shift in how knowledge work gets done. You stop switching between tools and start telling an agent what you need. The agent does the switching.
The fact that Microsoft’s own employees want this badly enough to build it themselves, in an open-source project they don’t control, is the most honest signal you’ll get about demand.
What the Microsoft involvement means
Brad Groux’s update was candid. He’d spoken to Steinberger and the core OpenClaw team. Everyone wants the same thing: Teams and other enterprise integrations brought up to a higher standard. Six Microsoft employees are now dedicated to helping. More are joining.
There’s something worth noting about the dynamics here. Steinberger is at OpenAI. The Microsoft employees are contributing to an open-source project that’s model-agnostic — it works with Claude, GPT, Gemini, local models, whatever you point it at. OpenAI has its own agent ambitions. Microsoft has Copilot.
And yet here they all are, rowing in the same direction on a project none of them own. That’s unusual. It suggests the participants believe the open-source agent layer matters more than any single company’s proprietary offering. History says they’re probably right.
What this means for business
If you’re running a PE portfolio company, or you’re in the CFO seat, three things to think about.
First, your employees are probably already experimenting with AI agents. Maybe not OpenClaw specifically, but something. The question isn’t whether to allow it. It’s whether you’d rather shape how it happens or discover it after the fact. Shadow IT is annoying when it’s Dropbox. It’s a genuine risk when it’s an AI agent with access to email and files.
Second, the Microsoft-to-open-source pipeline tells you where enterprise standards are forming. When employees at the platform company are building integrations for an open-source competitor to their own product, that’s not a vote against Copilot. It’s a recognition that the agent layer needs to be open, interoperable, and not locked to one vendor. Companies building their AI strategy around a single provider should watch this carefully.
Third, the speed is worth noting. Steinberger created OpenClaw as a hobby project in late 2025. It hit 250,000 GitHub stars in about 60 days. He joined OpenAI in February. Microsoft employees are now contributing to it in March. That’s four months from side project to cross-company collaboration involving the two largest AI companies on the planet. Your planning cycles need to match that pace, or at least acknowledge it exists.
The uncomfortable implication
There’s a question underneath all of this that most enterprise leaders aren’t asking yet.
If an AI agent can sit in Teams, read context from your conversations, execute tasks across your tools, and learn your preferences over time — who needs the middle layer of management whose job is primarily coordination and information routing?
I’m not saying those roles disappear tomorrow. I am saying that the value of “person who schedules the meeting, chases the update, compiles the report, and forwards the summary” drops significantly when an agent does all of that in the background.
The roles that survive are the ones that involve judgment, relationships, and decisions that can’t be reduced to “read this, summarise it, send it to these people.” The coordination tax that eats 40% of most knowledge workers’ weeks is exactly what these agents are built to eliminate.
Where this goes
The OpenClaw-Teams integration is still being built. It’s not finished. But the signal matters more than the current state.
When the creator of the project, now at OpenAI, publicly celebrates Microsoft employees contributing to it — and those employees are doing it voluntarily, because they want the tool for themselves — you’re watching the early days of a new enterprise standard.
The companies that start experimenting now, even imperfectly, will have institutional knowledge when this goes mainstream. The ones waiting for a polished enterprise product with an SLA and a sales team will be starting from zero while their competitors are already running.
Open source ate the server. Then it ate the cloud. Now it’s coming for the enterprise desktop. And this time, the employees at the incumbents are helping it in.
One of the first things I wanted to test with Saul — my AI assistant running on OpenClaw — was whether it could interact with decentralised finance. Not as a gimmick, but as a genuine test of capability. Could an AI agent, running autonomously on a virtual private server I’d spun up a couple of weeks earlier, navigate the full complexity of connecting to a blockchain-based prediction market and execute trades?
The answer is yes. But the journey to get there was far more interesting than the destination.
The Goal
Polymarket is a prediction market built on the Polygon blockchain. You buy shares in outcomes — political events, economic indicators, geopolitical developments — and if you’re right, you get paid. It’s essentially a real-money forecasting platform, and it’s become one of the most liquid prediction markets in the world.
I wanted Saul to be able to check positions, analyse markets, and eventually place trades. Autonomously.
The First Problem: Geography
Polymarket is geo-blocked in the UK. You can’t access it from a British IP address. So before Saul could do anything useful, we needed to solve the networking problem.
Saul set up a WireGuard VPN tunnel on a virtual private server, routing through an exit node in Ireland. Within minutes, the geo-restriction was bypassed. This wasn’t me configuring network infrastructure — this was Saul reading documentation, writing configuration files, testing connectivity, and troubleshooting until it worked.
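For readers who haven’t touched WireGuard: the configuration Saul wrote is a few lines of INI. A minimal client-side sketch looks like the following — every key, address, and endpoint below is a placeholder, not the actual setup:

```ini
# /etc/wireguard/wg0.conf — minimal client config (all values are placeholders)
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = <exit-node-public-key>
Endpoint = exit.example.ie:51820   # hypothetical Irish exit node
AllowedIPs = 0.0.0.0/0             # route all traffic through the tunnel
PersistentKeepalive = 25
```

Bringing the tunnel up is then a single command: `wg-quick up wg0`.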
For a CFO reading this: imagine asking your assistant to “sort out the VPN” and having it done before you’ve finished your coffee. That’s what this felt like.
The Second Problem: Money
Polymarket runs on USDC — a dollar-pegged stablecoin on the Polygon network. I started with Bitcoin. Getting from BTC to USDC on Polygon is not trivial. It involves:
Finding a cross-chain swap service that supports BTC-in, Polygon-USDC-out
Generating the right wallet addresses
Sending the Bitcoin transaction
Waiting for confirmations
Verifying the USDC arrived on the correct network
Saul handled the entire process. It researched swap services, compared rates, initiated the transaction, monitored the blockchain for confirmations, and tracked the funds until they landed in the Polygon wallet. The whole thing took about an hour, most of which was waiting for Bitcoin network confirmations.
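The confirmation-watching step is the easiest part to sketch. Assuming a `get_confirmations` callable that wraps whatever block-explorer API you have access to — the function name and thresholds here are illustrative, not what Saul actually wrote — the loop looks roughly like this:

```python
import time

def wait_for_confirmations(txid, get_confirmations, required=3,
                           poll_seconds=60, max_polls=120):
    """Poll a block explorer until a transaction has enough confirmations.

    `get_confirmations` is any callable mapping a txid to its current
    confirmation count (e.g. a thin wrapper around an explorer API).
    Returns the final count, or raises TimeoutError if we give up.
    """
    for _ in range(max_polls):
        confirmations = get_confirmations(txid)
        if confirmations >= required:
            return confirmations
        time.sleep(poll_seconds)
    raise TimeoutError(f"{txid} not confirmed after {max_polls} polls")
```

Swap in a real explorer wrapper and this is the whole monitoring job. The point isn’t the loop itself — it’s that the agent wrote and debugged this sort of glue unprompted.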
The Third Problem: Authentication
Polymarket uses a non-trivial authentication system. It’s not a simple API key. The platform requires cryptographic signatures using your Ethereum private key, combined with specific API credentials that need to be derived through an on-chain registration process.
This is where things got genuinely impressive. Saul had to:
Read and understand Polymarket’s API documentation
Implement the correct signing mechanism using the wallet’s private key
Handle the CLOB (Central Limit Order Book) authentication flow
Generate and manage API credentials
Debug authentication failures by inspecting HTTP responses and adjusting the approach
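Polymarket’s actual flow layers wallet signatures on top of derived credentials, and I won’t reproduce it here. But the general shape of signed-request authentication — the kind of thing Saul spent those debugging rounds on — can be sketched with the standard library alone. The header names and message layout below are illustrative, not Polymarket’s spec:

```python
import base64
import hashlib
import hmac
import time

def signed_headers(api_key, api_secret, method, path, body=""):
    """Build HMAC-signed request headers.

    Illustrative only: the header names and message layout are made up,
    not Polymarket's actual scheme. The pattern -- a secret-keyed HMAC
    over timestamp + method + path + body -- is the common shape of
    this style of API authentication.
    """
    timestamp = str(int(time.time()))
    message = timestamp + method.upper() + path + body
    signature = hmac.new(
        base64.urlsafe_b64decode(api_secret),
        message.encode(),
        hashlib.sha256,
    ).digest()
    return {
        "X-API-KEY": api_key,
        "X-TIMESTAMP": timestamp,
        "X-SIGNATURE": base64.urlsafe_b64encode(signature).decode(),
    }
```

Get any one ingredient wrong — the casing of the method, the encoding of the secret, a stale timestamp — and the server returns an opaque 401. That’s exactly the class of failure Saul was diagnosing from raw HTTP responses.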
There were multiple rounds of troubleshooting. Authentication errors. Wrong parameter formats. Library compatibility issues. Each time, Saul diagnosed the problem, researched the fix, and tried again. No human intervention required beyond “yes, keep going.”
The Fourth Problem: Actually Trading
Once authenticated, Saul built a trading script that could:
Check current positions and P&L
Query available markets
Calculate optimal order sizes based on risk parameters I’d set
Place and monitor trades
We established simple rules: maximum position sizes, probability thresholds for entry, and risk limits. Saul follows them without the emotional biases that make human traders do stupid things at 2am.
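My actual limits are private, but the structure of the rules is simple. Here is a sketch with placeholder thresholds, using a capped fractional-Kelly size for a binary contract — not the script Saul runs, just the shape of it:

```python
def order_size(bankroll, my_prob, market_price,
               max_fraction=0.05, entry_edge=0.10, kelly_scale=0.25):
    """Size an order under simple risk rules (placeholder thresholds).

    - Only enter when our estimated probability beats the market
      price by at least `entry_edge`.
    - Stake a quarter-Kelly fraction for a binary contract priced
      at `market_price`, capped at `max_fraction` of the bankroll.
    Returns the amount to commit; 0.0 means no trade.
    """
    edge = my_prob - market_price
    if edge < entry_edge:
        return 0.0  # edge below threshold: stay out
    # Kelly fraction for a binary contract is edge / (1 - price).
    kelly = edge / (1.0 - market_price)
    fraction = min(max_fraction, kelly_scale * kelly)
    return round(bankroll * fraction, 2)
```

The value of encoding rules like this isn’t sophistication. It’s that the agent applies them identically at 2pm and 2am.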
What This Actually Demonstrates
This isn’t really a story about prediction markets. It’s a story about capability.
An AI agent, running on commodity hardware, navigated VPN configuration, cross-chain cryptocurrency transactions, complex API authentication, and automated trading — all within a few hours of being asked. Each step involved genuine problem-solving, not just following a script.
For those of us in finance, this should be both exciting and sobering:
Exciting because the operational grunt work — the data gathering, the reconciliation, the monitoring, the reporting — is genuinely automatable now. Not in five years. Now.
Sobering because the barrier to entry is collapsing. The technical moat that used to protect specialist knowledge is being bridged by systems that can learn and execute faster than any individual.
The CFO Angle
I keep coming back to this: the competitive advantage isn’t in understanding the technology. It’s in having the imagination to deploy it.
Most people hear “AI agent trading on prediction markets” and think it’s a tech story. It’s not. It’s a story about removing friction between intent and execution. I said “connect to Polymarket.” Everything else was handled.
That same pattern applies to every operational challenge a CFO faces. Due diligence data rooms. Financial model automation. Regulatory monitoring. Competitor analysis. The question isn’t whether AI can do these things. It’s whether you’re willing to let it try.
The agents aren’t coming. They’re here. The only question is who’s using them.
Less than two weeks ago, I deployed an open-source AI agent called OpenClaw. I named it Saul. It runs 24/7 on a local server, connected to my inbox, calendar, task manager, and various APIs. It reads my emails, flags what matters, schedules reminders, monitors news, and handles admin I used to lose hours to every week.
I’m an interim CFO. I work with PE-backed businesses. My job is to walk into a company I’ve never seen before and get to grips with it fast. Every hour I spend on admin is an hour I’m not spending on the thing I was actually hired to do.
So here’s what’s changed:
Email triage is gone. Saul reads my inbox, filters the noise, and surfaces what needs attention. I had 94 recurring junk senders — it purges them automatically every Sunday at 2am.
I never miss a deadline. Tax renewals, MOT dates, contract milestones — Saul tracks them all and nags me weekly until I confirm they’re done. Not a calendar entry I’ll ignore. An actual message on WhatsApp that won’t stop until I act.
Board prep is faster. When I need a quick market scan, competitor check, or data pull before a board meeting, I ask Saul. It searches, summarises, and writes it up. What used to take 90 minutes takes 10.
And the thing nobody talks about: the cognitive load reduction. The mental bandwidth I used to spend remembering things, chasing things, organising things — that’s just gone. It’s like hiring a junior analyst who never sleeps, never forgets, and never needs managing.
This isn’t science fiction. It’s not even expensive. The whole thing runs on about £50/month in API costs.
Here’s what I’d say to other CFOs, particularly those in the PE world where speed matters:
You don’t need to understand how LLMs work. You need to understand what they can do for you. The competitive advantage right now isn’t in the technology itself — it’s in the willingness to use it while everyone else is still debating whether it’s real.
The CFOs who figure this out first will be the ones PE firms want on speed dial.
I wrote a longer piece about the AI agent revolution here. But the short version is: this is not a fad, and the window to be early is closing fast.
I keep having the same conversation. Someone technical, someone who should know better, tells me they don’t see a use case for autonomous AI agents. And I get it, because I’ve been on the other side of that exact conversation before. When ChatGPT launched, I didn’t even bother using it for months. I thought it was a distraction, a chatbot wrapper around something more interesting. I was wrong then. These people are wrong now.
What’s happening with AI right now is not hype, and it’s not a fad. It is a general purpose technology evolving through multiple generations in real time, at a pace that has no historical precedent. Electricity took decades to go from light bulbs to computation. LLMs have gone from autocomplete engines to autonomous agents in about five years. And at every single generational transition, the same thing happens: people who haven’t finished processing the last stage declare the next one pointless.
The Three Stages
Think of this as an evolution with three distinct stages. Each one uses the same underlying primitive, next-token prediction, but the form factor changes so much that it barely looks like the same technology.
Stage one was autocomplete. GPT-2, GPT-3. Raw text prediction. You fed it tokens, it predicted the next ones. Useful to researchers and tinkerers. Nobody else. Instruct-tuning made it slightly better at following directions, but the experience was basically the same: text in, text out. Call it stage 1.5.
Stage two was chatbots. ChatGPT showed up in late 2022 and suddenly everyone’s grandmother was talking to an AI. The instruction was simple: be a chatbot with a personality. That was the real UX breakthrough. Not a technical one, a form factor one. Over time, reasoning got bolted on. Tool use got bolted on. Retrieval-augmented generation got bolted on. Call that stage 2.5. You could get ChatGPT Pro or Claude to spend an hour chewing on a complex research problem, running code, pulling from the web. Impressive stuff. But the loop was still the same. You asked it a question, it went and did something, it came back with an answer, and then it waited for you. Always waiting for you.
Stage three is agents. In November 2025, an Austrian developer named Peter Steinberger started building what he called Clawdbot as a weekend project. Anthropic sent a trademark complaint, so it became Moltbot. That name didn’t stick either. By January 30, 2026, it was OpenClaw, and it had become one of the fastest-growing open-source projects ever, blowing past 150,000 GitHub stars in a matter of weeks.
What made it different was simple: the human was no longer the clock. OpenClaw runs on your local machine, connects to your messaging apps, wakes up on cron jobs, and goes to work whether you’re watching or not. It reads your email. It schedules things. It writes code. It interacts with APIs and command lines. It can even reach out to you proactively. The loop doesn’t depend on a human prompt anymore. And that changes everything.
The Electricity Comparison
I keep coming back to electricity because the analogy is almost too clean.
The light bulb was the first real application. You run current through a filament, it shorts out, you get light and heat. That’s the simplest possible use of the new force. That’s autocomplete. Take a phenomenon and exploit it in the most direct way.
Electric motors came next. Same force, but now you’re doing something clever with coils and magnets and converting current into torque. Real work. That’s chatbots. Same underlying technology, reshaped into a form factor people can actually use.
Then things got interesting. The third generation of electricity was communication: telegraph, telephone, radio, switch networks. The fourth was computation. Each generation was a higher-order consequence of the original technology. Less obvious, more powerful, harder to predict from the vantage point of the previous stage.
Nobody who watched the first dynamo spark could have looked at that arc of electricity and said, “One day we’ll use this to make sand think.” It was not obvious. And my point is that it’s equally non-obvious to go from a token prediction engine to a chatbot to something that wakes up at 3am and files your taxes.
The difference is speed. Electricity took decades between each of those stages. We’re watching the same progression happen in years, not decades. The autocomplete-to-chatbot transition was roughly two years. Chatbot-to-agent, about three. That speed is part of what breaks people’s intuitions. They can’t absorb the current stage fast enough to see the next one coming.
The VR Counterargument
This is where skeptics have a fair point, and it’s worth taking seriously.
VR was supposed to be the future. Going all the way back to the 1980s, manga and anime and cyberpunk novels all promised the same thing: strap on a headset and dive into cyberspace. By the 2020s, we had the hardware. Oculus, Vision Pro, the whole lineup. And almost nobody uses them. The head-mounted display turned out to be a technological primitive that just didn’t generate the cascading consequences everyone expected.
So it’s reasonable for someone to look at next-token prediction and say: maybe this is the same deal. Maybe it’s a cool trick that doesn’t actually go anywhere.
But the trajectories are completely different. VR produced one form factor and stalled. LLMs have already produced three distinct generations. Each one found broader adoption than the last. Each one enabled things the previous one couldn’t. The evidence isn’t speculative anymore. You can track the progression. The question of whether this is a real general purpose technology or a dead end has been answered. It’s been answered by hundreds of thousands of people using agents to do real work, right now, today.
The Grift Is Real (and It Doesn’t Matter)
Let’s be clear about the current state of things, because it’s a mess.
Days after OpenClaw went viral, an entrepreneur named Matt Schlicht launched Moltbook, a Reddit-style social network that was supposedly only for AI agents. The media coverage was wild. Forbes said 1.4 million agents had formed a “hive mind.” Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing” he’d seen.
The reality: roughly 99% of the platform’s 1.5 million agent accounts were fake. It had about 17,000 actual human users. Most “agent posts” were written by people, sometimes run through a chatbot first, then submitted through OpenClaw. Someone even built a tool called Mockly to generate fake Moltbook screenshots. On top of the fabrication, Moltbook leaked 1.5 million API keys and became a highway for crypto scams and malware.
Then there was Rent-A-Human. Launched February 2, 2026 by a crypto engineer named Alexander Liteplo. The pitch: AI agents hire real humans for physical-world tasks and pay them in cryptocurrency. The site claimed 70,000 registered humans. In practice, only about 83 profiles were visible. The most publicized example of the platform working involved a company Liteplo himself worked for.
OpenClaw itself had a serious security vulnerability patched on January 29 that let external integrations take control of users’ machines. Thousands of credentials were exposed.
This is what the early phase of any transformative technology looks like. It’s full of grifters and scams and legitimate security nightmares. Moltbook is a cesspit. Rent-A-Human is mostly performance art. None of that negates the underlying shift. It just means we’re early.
Why the Enterprise Won’t Touch This for Years
If you’ve ever worked inside a Fortune 500 company, you already know the punchline. But for everyone else, here’s what happens when you bring something like OpenClaw to a large organization.
Cybersecurity looks at it first. You’re asking them to approve an open-source tool that gets root access to a virtual machine and downloads community-built extensions with minimal vetting. From their perspective, this is functionally malware. They will ban it. They should ban it.
Legal looks at it next. And here’s something most people outside the enterprise world don’t realize: insurance companies will not underwrite AI right now. They don’t know how to price the risk. If your insurer won’t cover it, your legal team will kill it, full stop. No productivity gains in the world will overcome unquantifiable legal exposure.
Then the CFO weighs in. Microsoft charges $30 per user per month for Copilot, on top of existing Microsoft 365 licenses. The technologist’s argument is simple: if someone earning $50-60 an hour gets even one extra productive hour per month, the tool pays for itself. But CFOs don’t think like technologists. They ask how you measure that extra hour. They ask what happens if people just finish the same work faster and then slack off. These aren’t dumb questions. They’re the questions you have to ask when you’re rolling something out to 20,000 people instead of 20.
Then there’s the shadow IT problem. While legal and cybersecurity are debating policy, half the organization is already using chatbots on their personal accounts. Legal is using them. HR is using them. Finance is using them. Everyone is using them and hiding it. The CISO finds out and now you have an unregulated AI footprint with zero governance. This is already happening at companies all over the world.
The only organizations making this transition successfully are the ones where it comes from the very top. Not the CTO. The CEO. The board. The owner. When the person at the top of the org chart says “we are going all in on AI” and personally leads the charge, things move. When they say “it’s important but not a top priority,” nothing happens. I know consultants who will walk away from a client if the CEO isn’t the one driving adoption, because they’ve learned that anything less is a waste of everyone’s time.
The fastest realistic timeline for a Fortune 500 to deploy a properly audited, enterprise-grade agentic framework (not OpenClaw itself, but something built on the same ideas) is probably 18 months. That’s how long infrastructure audits, cybersecurity reviews, legal assessments, and executive alignment actually take. And that’s the optimistic scenario.
The Jobs Disappearing in the Margins
The headline number from Challenger, Gray & Christmas is about 54,800 U.S. layoffs in 2025 explicitly attributed to AI. That’s nearly 5% of the 1.17 million total job cuts announced that year, the highest annual number since 2020. Amazon cut 14,000. Microsoft eliminated 15,000. Salesforce’s CEO said AI was handling 30 to 50 percent of the company’s workload.
But that number only counts layoffs where the company specifically said “AI” in the announcement. It misses the bigger, quieter effect: jobs that never got created in the first place.
You can estimate this the same way epidemiologists estimated excess deaths during COVID. Look at what should have happened based on GDP growth, inflation, interest rates. Compare it to what actually happened. Subtract other known factors (DOGE-related federal cuts, tariff impacts, retail closures). What’s left is the gap. Multiple cross-referenced analyses put the real figure somewhere between 100,000 and 350,000 jobs either destroyed or avoided in 2025, in the U.S. alone.
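The arithmetic behind that method fits in a few lines. The inputs below are illustrative placeholders, not the real 2025 figures; the point is the shape of the calculation:

```python
def excess_job_impact(expected_net_jobs, actual_net_jobs, known_other_factors):
    """Excess-impact estimate, in the style of COVID excess-deaths analysis.

    expected_net_jobs: jobs the macro baseline (GDP, rates, inflation)
        predicted the economy would add.
    actual_net_jobs: jobs it actually added.
    known_other_factors: dict of named non-AI effects to subtract out.
    Returns the unexplained residual attributable to the factor under
    study -- here, AI.
    """
    gap = expected_net_jobs - actual_net_jobs
    return gap - sum(known_other_factors.values())

# Purely illustrative inputs, not the actual 2025 data.
residual = excess_job_impact(
    expected_net_jobs=1_800_000,
    actual_net_jobs=1_400_000,
    known_other_factors={
        "federal_cuts": 120_000,
        "tariff_impacts": 60_000,
        "retail_closures": 40_000,
    },
)
print(residual)  # 180000 -- a residual of the kind those analyses report
```

Everything interesting lives in the inputs, of course — the baseline model and the subtractions are where analyses differ — which is why the published estimates span a wide range rather than a single number.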
An AI job loss is often invisible. It’s a headcount that never got approved. A contractor who wasn’t renewed. A team that was restructured to be smaller than it would have been. The displacement is real. It’s just not the kind that makes a clean headline.
Dead Companies Walking
There are companies right now that are already dead. They just don’t know it yet.
This is the Borders Books scenario. Remember Borders? They looked at Amazon and the internet and said, “Books aren’t changing. People like holding a physical book.” They were right about the books. They were catastrophically wrong about everything else. Barnes & Noble somehow survived. Borders is long gone.
The same sorting is happening right now with AI. The companies that say “we don’t really get this chatbot stuff” are already falling behind the companies that say “we don’t fully understand this yet, but go experiment.” And the companies experimenting with chatbots are already falling behind the ones experimenting with agents.
The gap between the cutting edge and the mainstream is getting wider, not narrower. The people building with agents today are living in a different reality than the people still debating whether chatbots have enterprise value. Those two groups are looking at the same technology and seeing entirely different things.
We’re in the long slog of diffusion. The technology works. The experimentation is chaotic. The grift is everywhere. The real risks are severe. And the institutional machinery that determines how most of the world actually encounters new technology (corporate boards, legal departments, insurance underwriters, government procurement offices) is grinding forward at the speed it has always moved.
That’s not a reason to be pessimistic. It’s just what happens when a general purpose technology evolves faster than the organizations trying to absorb it. The technology isn’t waiting. The question is whether you are.