Tag: private equity

  • You Don’t Deploy AI Agents Anymore — You Hire Them

    Yesterday, monday.com launched something called Agentalent.ai — a managed marketplace where enterprises can discover, evaluate, and “hire” AI agents for defined business roles. Not install. Not deploy. Hire.

    You post a role. You review qualified agents. You select based on task fit, budget, and operational readiness. Pricing starts around $2,000 a month per agent. They launch with 17 agents available. Built in collaboration with AWS, Anthropic, and Wix.

    If you’re a CFO and that doesn’t make your headcount model twitch, you’re not paying attention.

    The Language Shift That Matters

    I’ve been building with AI agents for the best part of two years now — wiring up Claude to handle research tasks, automating financial reporting pipelines, getting agents to do the kind of grunt work that used to eat a junior analyst’s entire Tuesday. But the framing has always been tooling. You set up an agent like you’d set up a spreadsheet macro. It’s a thing on your computer.

    What monday.com has done — deliberately, with their HR-style language — is shift the frame from tools to workers. And that’s not just marketing fluff. It’s the conceptual bridge that will get the rest of the C-suite to finally understand what’s happening.

    A Belitsoft report published this weekend puts numbers on it: the average enterprise now runs 12 AI agents. Twelve. And that’s expected to hit 20 by 2027. But here’s the kicker — half of those agents operate completely alone, unconnected to any other agent or system. They’re doing their little jobs in their little silos, and nobody’s orchestrating the whole thing.

    Sound familiar? It should. That’s exactly what happens when a company hires people without a coherent operating model. You end up with twelve contractors, half of whom don’t talk to each other, doing overlapping work with no shared context. I’ve walked into PE portfolio companies that look exactly like this — except with humans.

    The CFO’s New Headcount Problem

    Here’s where it gets interesting for anyone sitting in a finance seat. When an AI agent costs $2,000 a month and can handle work that previously required a $6,000-a-month contractor, that’s a straightforward business case. Any CFO can model that. The ROI practically draws itself.
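    A back-of-the-envelope version of that business case, using the $2,000 and $6,000 figures above plus hypothetical onboarding and oversight costs (neither appears in the piece, so treat both as placeholders):

```python
# The $2,000 agent and $6,000 contractor figures come from the article;
# onboarding and oversight costs are hypothetical placeholders.
AGENT_MONTHLY = 2_000       # marketplace subscription
CONTRACTOR_MONTHLY = 6_000  # fully-loaded contractor cost
ONBOARDING = 3_000          # one-off integration cost (assumed)
OVERSIGHT_MONTHLY = 500     # human review time (assumed)

def monthly_saving() -> int:
    """Net saving once the agent replaces the contractor."""
    return CONTRACTOR_MONTHLY - (AGENT_MONTHLY + OVERSIGHT_MONTHLY)

def payback_months() -> float:
    """Months for the saving to cover the one-off onboarding cost."""
    return ONBOARDING / monthly_saving()

print(monthly_saving(), round(payback_months(), 2))  # 3500 0.86
```

    Even with a generous oversight assumption bolted on, payback lands inside the first month, which is why the case practically models itself.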

    But the real question isn’t “should we hire the agent?” It’s “how do we account for a workforce that’s now 30% software?”

    Think about what sits in your headcount model today. Salaries, employer NI, pension contributions, benefits, training costs, recruitment fees. Now think about what sits in your AI agent budget. SaaS subscriptions, API usage fees, compute costs, maybe some integration consulting. These two things live in completely different cost categories, get approved through different processes, and are managed by different people. But they’re increasingly doing the same work.

    In the PE world I operate in, headcount is one of the first things a new investor scrutinises. “What’s your revenue per head?” “What’s your fully-loaded cost per FTE?” These metrics are foundational to how value creation plans get built. But nobody’s asking “what’s your revenue per agent?” yet. And they should be, because if you’re running 12 agents and growing, that’s a material line in your operating model that isn’t being tracked like one.
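    A first cut at “revenue per agent” alongside the familiar per-FTE metric might look like this, with hypothetical portfolio-company figures (none of these numbers are from the article, except the 12-agent average):

```python
# All figures hypothetical, illustrating the metric rather than any real company.
revenue = 24_000_000
ftes = 80
agents = 12  # the Belitsoft average cited above

rev_per_fte = revenue / ftes                # the metric every investor asks for
rev_per_agent = revenue / agents            # the one nobody asks for yet
rev_per_worker = revenue / (ftes + agents)  # blended view of the whole workforce

print(rev_per_fte, rev_per_agent, round(rev_per_worker))  # 300000.0 2000000.0 260870
```

    The blended number is the one to watch: as the agent count grows toward 20, the gap between revenue per FTE and revenue per worker becomes a line a diligence team will eventually ask about.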

    The Coordination Tax

    The Belitsoft finding that half of deployed agents work alone is, I think, the most important data point in their entire report. It mirrors what I’ve seen first-hand. Companies get excited, they spin up agents for customer support, for code review, for data entry, for reporting — and each one works reasonably well in isolation. But the value compounds when agents talk to each other, and almost nobody has figured that part out yet.

    This is an orchestration problem, and it’s fundamentally a management problem. You need someone — or something — deciding which agent handles which task, what context gets shared, where the human review gates sit. NVIDIA’s new Agent Toolkit, announced with partners including Salesforce, SAP, and ServiceNow, is trying to solve the infrastructure side of this. Okta’s new “secure agentic enterprise” framework, going GA at the end of this month, is tackling identity and access. But the management layer — the actual decision-making about how to deploy and coordinate these things — that’s still a gap.

    And it’s a gap that, in most companies, probably falls to the CFO. Not the CTO. Not the CISO. The CFO. Because ultimately this is a resource allocation problem. You have a pool of human and non-human workers. You have tasks that need doing. You need to figure out the optimal mix, track the cost, measure the output, and report on it to a board that still thinks in FTEs.

    What I’m Actually Doing About It

    In my own setup, I’ve started treating agent costs the way I treat contractor costs — as a blended workforce line, not a software line. My AI assistant Saul runs daily tasks for me: research, publishing, monitoring. I track what he does, what it costs, and what it would cost if a human did it instead. Not because I’m obsessive about it (okay, partly because I’m obsessive about it), but because I think this is the accounting framework that PE firms will expect within 18 months.
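    A minimal sketch of that blended-workforce line, with invented names and costs (the article doesn’t publish what Saul actually costs, so every figure here is a placeholder):

```python
from dataclasses import dataclass

@dataclass
class WorkerLine:
    name: str
    kind: str                # "human" or "agent"
    monthly_cost: float      # what we actually pay
    human_equivalent: float  # what the same output would cost a human

def blended_summary(lines: list) -> dict:
    """Roll a mixed human/agent workforce into one reporting line."""
    total = sum(l.monthly_cost for l in lines)
    agent_cost = sum(l.monthly_cost for l in lines if l.kind == "agent")
    implied_saving = sum(l.human_equivalent - l.monthly_cost
                         for l in lines if l.kind == "agent")
    return {"total": total,
            "agent_share": agent_cost / total,
            "implied_saving": implied_saving}

# Hypothetical figures for illustration only.
workforce = [
    WorkerLine("junior analyst", "human", 7_000, 7_000),
    WorkerLine("research agent", "agent", 400, 3_000),
]
summary = blended_summary(workforce)
print(summary)
```

    The point of the `human_equivalent` column is exactly the comparison described above: what it costs, versus what it would cost if a human did it instead.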

    The $600 billion flowing into AI agent ecosystems in 2026 isn’t going into chatbots. It’s going into digital workers — things that take tasks, complete them, and cost money every month. If your chart of accounts still treats all of that as “IT software subscriptions,” you’re going to have a very confusing board pack by Christmas.

    Where This Goes

    monday.com’s marketplace is clunky right now — 17 agents isn’t exactly a deep talent pool. But the model is right. Within a year, I’d expect to see the big consulting firms offering “blended workforce planning” as a service line. Within two, PE due diligence will include an AI agent audit alongside the usual people and tech reviews.

    For CFOs, the action item is boringly practical: start tracking your agents like you track your people. Give them cost centres. Measure their output. Build the reporting now, while it’s still simple, because it won’t be simple for long.

    We spent decades building HR systems to manage human workers. We’re about to need something equivalent for the digital ones. And the CFO who figures that out first is going to look very clever at the next board meeting.

  • The Age of Agents

    I keep having the same conversation. Someone technical, someone who should know better, tells me they don’t see a use case for autonomous AI agents. And I get it, because I’ve been on the other side of that exact conversation before. When ChatGPT launched, I didn’t even bother using it for months. I thought it was a distraction, a chatbot wrapper around something more interesting. I was wrong then. These people are wrong now.

    What’s happening with AI right now is not hype, and it’s not a fad. It is a general purpose technology evolving through multiple generations in real time, at a pace that has no historical precedent. Electricity took decades to go from light bulbs to computation. LLMs have gone from autocomplete engines to autonomous agents in about five years. And at every single generational transition, the same thing happens: people who haven’t finished processing the last stage declare the next one pointless.

    The Three Stages

    Think of this as an evolution with three distinct stages. Each one uses the same underlying primitive, next-token prediction, but the form factor changes so much that it barely looks like the same technology.

    Stage one was autocomplete. GPT-2, GPT-3. Raw text prediction. You fed it tokens, it predicted the next ones. Useful to researchers and tinkerers. Nobody else. Instruct-tuning made it slightly better at following directions, but the experience was basically the same: text in, text out. Call it stage 1.5.

    Stage two was chatbots. ChatGPT showed up in late 2022 and suddenly everyone’s grandmother was talking to an AI. The instruction was simple: be a chatbot with a personality. That was the real UX breakthrough. Not a technical one, a form factor one. Over time, reasoning got bolted on. Tool use got bolted on. Retrieval-augmented generation got bolted on. Call that stage 2.5. You could get ChatGPT Pro or Claude to spend an hour chewing on a complex research problem, running code, pulling from the web. Impressive stuff. But the loop was still the same. You asked it a question, it went and did something, it came back with an answer, and then it waited for you. Always waiting for you.

    Stage three is agents. In November 2025, an Austrian developer named Peter Steinberger started building what he called Clawdbot as a weekend project. Anthropic sent a trademark complaint, so it became Moltbot. That name didn’t stick either. By January 30, 2026, it was OpenClaw, and it had become one of the fastest-growing open-source projects ever, blowing past 150,000 GitHub stars in a matter of weeks.

    What made it different was simple: the human was no longer the clock. OpenClaw runs on your local machine, connects to your messaging apps, wakes up on cron jobs, and goes to work whether you’re watching or not. It reads your email. It schedules things. It writes code. It interacts with APIs and command lines. It can even reach out to you proactively. The loop doesn’t depend on a human prompt anymore. And that changes everything.
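    The shift is easy to see in code. Here is a toy version of a schedule-driven loop, where elapsed time rather than a human prompt decides what runs next. The task names and intervals are invented, and this is not OpenClaw’s actual implementation, just the shape of the idea:

```python
import time

# Invented task names and intervals: a schedule, not a human, is the clock.
SCHEDULE = {
    "check_inbox": 60 * 15,        # every 15 minutes
    "daily_report": 60 * 60 * 24,  # once a day
}

def due_tasks(last_run: dict, now: float) -> list:
    """Return the tasks whose interval has elapsed since they last ran."""
    return [task for task, interval in SCHEDULE.items()
            if now - last_run.get(task, 0.0) >= interval]

# First tick: nothing has ever run, so everything is due.
print(due_tasks({}, time.time()))
```

    A real agent would wrap this in a daemon or cron job and write completions back into `last_run`; the point is only that the trigger is time and state, not a waiting human.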

    The Electricity Comparison

    I keep coming back to electricity because the analogy is almost too clean.

    The light bulb was the first real application. You run current through a filament, it glows white-hot, you get light and heat. That’s the simplest possible use of the new force. That’s autocomplete. Take a phenomenon and exploit it in the most direct way.

    Electric motors came next. Same force, but now you’re doing something clever with coils and magnets and converting current into torque. Real work. That’s chatbots. Same underlying technology, reshaped into a form factor people can actually use.

    Then things got interesting. The third generation of electricity was communication: telegraph, telephone, radio, switch networks. The fourth was computation. Each generation was a higher-order consequence of the original technology. Less obvious, more powerful, harder to predict from the vantage point of the previous stage.

    Nobody who watched the first dynamo spark could have looked at that arc of electricity and said, “One day we’ll use this to make sand think.” It was not obvious. And my point is that it’s equally non-obvious to go from a token prediction engine to a chatbot to something that wakes up at 3am and files your taxes.

    The difference is speed. Electricity took decades between each of those stages. We’re watching the same progression happen in months. The autocomplete-to-chatbot transition was roughly two years. Chatbot-to-agent, about three. That speed is part of what breaks people’s intuitions. They can’t absorb the current stage fast enough to see the next one coming.

    The VR Counterargument

    This is where skeptics have a fair point, and it’s worth taking seriously.

    VR was supposed to be the future. Going all the way back to the 1980s, manga and anime and cyberpunk novels all promised the same thing: strap on a headset and dive into cyberspace. By the 2020s, we had the hardware. Oculus, Vision Pro, the whole lineup. And almost nobody uses them. The head-mounted display turned out to be a technological primitive that just didn’t generate the cascading consequences everyone expected.

    So it’s reasonable for someone to look at next-token prediction and say: maybe this is the same deal. Maybe it’s a cool trick that doesn’t actually go anywhere.

    But the trajectories are completely different. VR produced one form factor and stalled. LLMs have already produced three distinct generations. Each one found broader adoption than the last. Each one enabled things the previous one couldn’t. The evidence isn’t speculative anymore. You can track the progression. The question of whether this is a real general purpose technology or a dead end has been answered. It’s been answered by hundreds of thousands of people using agents to do real work, right now, today.

    The Grift Is Real (and It Doesn’t Matter)

    Let’s be clear about the current state of things, because it’s a mess.

    Days after OpenClaw went viral, an entrepreneur named Matt Schlicht launched Moltbook, a Reddit-style social network that was supposedly only for AI agents. The media coverage was wild. Forbes said 1.4 million agents had formed a “hive mind.” Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing” he’d seen.

    The reality: roughly 99% of those 1.5 million agent accounts were fake. The platform had about 17,000 actual human users. Most “agent posts” were written by people, sometimes run through a chatbot first, then submitted through OpenClaw. Someone even built a tool called Mockly to generate fake Moltbook screenshots. On top of the fabrication, the platform leaked 1.5 million API keys and became a highway for crypto scams and malware.

    Then there was Rent-A-Human. Launched February 2, 2026 by a crypto engineer named Alexander Liteplo. The pitch: AI agents hire real humans for physical-world tasks and pay them in cryptocurrency. The site claimed 70,000 registered humans. In practice, only about 83 profiles were visible. The most publicized example of the platform working involved a company Liteplo himself worked for.

    OpenClaw itself had a serious security vulnerability patched on January 29 that let external integrations take control of users’ machines. Thousands of credentials were exposed.

    This is what the early phase of any transformative technology looks like. It’s full of grifters and scams and legitimate security nightmares. Moltbook is a cesspit. Rent-A-Human is mostly performance art. None of that negates the underlying shift. It just means we’re early.

    Why the Enterprise Won’t Touch This for Years

    If you’ve ever worked inside a Fortune 500 company, you already know the punchline. But for everyone else, here’s what happens when you bring something like OpenClaw to a large organization.

    Cybersecurity looks at it first. You’re asking them to approve an open-source tool that gets root access to a virtual machine and downloads community-built extensions with minimal vetting. From their perspective, this is functionally malware. They will ban it. They should ban it.

    Legal looks at it next. And here’s something most people outside the enterprise world don’t realize: insurance companies will not underwrite AI right now. They don’t know how to price the risk. If your insurer won’t cover it, your legal team will kill it, full stop. No productivity gains in the world will overcome unquantifiable legal exposure.

    Then the CFO weighs in. Microsoft charges $30 per user per month for Copilot, on top of existing Microsoft 365 licenses. The technologist’s argument is simple: if someone earning $50-60 an hour gets even one extra productive hour per month, the tool pays for itself. But CFOs don’t think like technologists. They ask how you measure that extra hour. They ask what happens if people just finish the same work faster and then slack off. These aren’t dumb questions. They’re the questions you have to ask when you’re rolling something out to 20,000 people instead of 20.
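    The two framings diverge on arithmetic alone. Per seat, the technologist’s break-even is measured in minutes; at fleet scale, the CFO is signing off on a seven-figure line. The seat count below is the 20,000 mentioned above, and the hourly rate is the midpoint of the $50-60 range:

```python
COPILOT_SEAT = 30  # dollars per user per month, per the article
HOURLY_RATE = 55   # midpoint of the article's $50-60 range

# The technologist's view: recovered minutes that pay for one licence.
break_even_minutes = COPILOT_SEAT / HOURLY_RATE * 60

# The CFO's view: the same licence across 20,000 seats.
seats = 20_000
annual_spend = COPILOT_SEAT * seats * 12

print(round(break_even_minutes), annual_spend)  # 33 7200000
```

    Thirty-three recovered minutes per person per month sounds trivial; $7.2 million a year of unmeasurable recovered minutes does not. Both numbers describe the same purchase.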

    Then there’s the shadow IT problem. While legal and cybersecurity are debating policy, half the organization is already using chatbots on their personal accounts. Legal is using them. HR is using them. Finance is using them. Everyone is using them and hiding it. The CISO finds out and now you have an unregulated AI footprint with zero governance. This is already happening at companies all over the world.

    The only organizations making this transition successfully are the ones where it comes from the very top. Not the CTO. The CEO. The board. The owner. When the person at the top of the org chart says “we are going all in on AI” and personally leads the charge, things move. When they say “it’s important but not a top priority,” nothing happens. I know consultants who will walk away from a client if the CEO isn’t the one driving adoption, because they’ve learned that anything less is a waste of everyone’s time.

    The fastest realistic timeline for a Fortune 500 to deploy a properly audited, enterprise-grade agentic framework (not OpenClaw itself, but something built on the same ideas) is probably 18 months. That’s how long infrastructure audits, cybersecurity reviews, legal assessments, and executive alignment actually take. And that’s the optimistic scenario.

    The Jobs Disappearing in the Margins

    The headline number from Challenger, Gray & Christmas is about 54,800 U.S. layoffs in 2025 explicitly attributed to AI. That’s roughly 5% of the 1.17 million total job cuts announced that year, the highest annual number since 2020. Amazon cut 14,000. Microsoft eliminated 15,000. Salesforce’s CEO said AI was handling 30 to 50 percent of the company’s workload.

    But that number only counts layoffs where the company specifically said “AI” in the announcement. It misses the bigger, quieter effect: jobs that never got created in the first place.

    You can estimate this the same way epidemiologists estimated excess deaths during COVID. Look at what should have happened based on GDP growth, inflation, interest rates. Compare it to what actually happened. Subtract other known factors (DOGE-related federal cuts, tariff impacts, retail closures). What’s left is the gap. Multiple cross-referenced analyses put the real figure somewhere between 100,000 and 350,000 jobs either destroyed or avoided in 2025, in the U.S. alone.
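    The method reduces to one subtraction. With invented placeholder inputs (the real ones would come from the macro baselines and known-cause estimates described above):

```python
# All inputs are invented placeholders illustrating the excess-jobs method.
expected_net_jobs = 2_000_000  # what GDP, inflation, and rates implied
actual_net_jobs = 1_600_000    # what the labour data actually showed
other_known_causes = 150_000   # federal cuts, tariffs, retail closures

# Whatever gap remains is the residual plausibly attributable to AI.
unexplained_gap = (expected_net_jobs - actual_net_jobs) - other_known_causes
print(unexplained_gap)  # 250000
```

    The placeholder residual happens to land inside the 100,000-350,000 range the analyses produced, but the method, not the number, is the point: the signal lives in jobs that never appeared, not in announcements.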

    An AI job loss is often invisible. It’s a headcount that never got approved. A contractor who wasn’t renewed. A team that was restructured to be smaller than it would have been. The displacement is real. It’s just not the kind that makes a clean headline.

    Dead Companies Walking

    There are companies right now that are already dead. They just don’t know it yet.

    This is the Borders Books scenario. Remember Borders? They looked at Amazon and the internet and said, “Books aren’t changing. People like holding a physical book.” They were right about the books. They were catastrophically wrong about everything else. Barnes & Noble somehow survived. Borders is long gone.

    The same sorting is happening right now with AI. The companies that say “we don’t really get this chatbot stuff” are already falling behind the companies that say “we don’t fully understand this yet, but go experiment.” And the companies experimenting with chatbots are already falling behind the ones experimenting with agents.

    The gap between the cutting edge and the mainstream is getting wider, not narrower. The people building with agents today are living in a different reality than the people still debating whether chatbots have enterprise value. Those two groups are looking at the same technology and seeing entirely different things.

    We’re in the long slog of diffusion. The technology works. The experimentation is chaotic. The grift is everywhere. The real risks are severe. And the institutional machinery that determines how most of the world actually encounters new technology (corporate boards, legal departments, insurance underwriters, government procurement offices) is grinding forward at the speed it has always moved.

    That’s not a reason to be pessimistic. It’s just what happens when a general purpose technology evolves faster than the organizations trying to absorb it. The technology isn’t waiting. The question is whether you are.