Author: Mark Hendy

  • The AI Just Got a Computer — And Your Competitors Still Think It’s a Chatbot

    Yesterday Changed Everything

    On 17 April 2026, xAI launched Grok 4.3 Beta. Most headlines focused on benchmark scores. They missed the point entirely.

    Grok now has a full Ubuntu shell built into the product. Not a sandboxed code snippet runner. Not a “try this Python” widget. A genuine Linux computing environment where the AI can execute commands, install packages, write and run code, and manage files — with a persistent file layer that survives between sessions.

    To demonstrate the capability, Grok encoded the xAI logo into audio frequencies, rendered a spectrogram video from the result, and saved the finished MP4 to persistent storage. No human touched the keyboard after the initial prompt.

    If you’re a CFO reading this and thinking “interesting, but not relevant to me yet” — you’re already behind.

    This Isn’t an Upgrade. It’s a Category Shift.

    For the past three years, AI has been a sophisticated text box. You type a question, it gives you an answer. Useful, yes. Transformational? Only if your definition of transformation is “slightly faster email drafting.”

    What happened yesterday is different in kind. AI stopped being a tool you query and became an agent that executes. It can now build a financial model, test it, debug it, save the output, and iterate — autonomously. That’s not a chatbot. That’s a computing environment with intelligence baked in.

    Claude Code from Anthropic and OpenAI’s Codex are racing in the same direction. But Grok 4.3 Beta delivers this natively, inside the product, with persistent state. No API configuration. No developer setup. It just works.

    What This Means for Portfolio Companies

    If you sit on a PE board or run a finance function inside a portfolio company, here’s the translation:

    Automation just got autonomous. Previously, automating a finance process meant scoping a project, hiring a consultant, building an integration. Now, an AI agent can take a description of what you need, write the code, test it, and deliver the output — in minutes. Month-end reconciliation workflows that took weeks to automate can now be prototyped in an afternoon.

    The talent gap just narrowed — and widened. The CFO who understands how to direct an AI computing environment will deliver more with a team of five than a competitor delivers with fifteen. The CFO who doesn’t will need those fifteen people just to keep up.

    Build vs. buy just flipped. When your AI can write, test, and deploy code, the business case for buying off-the-shelf SaaS tools weakens dramatically. Why pay six figures annually for a reporting platform when an AI agent can build a bespoke one tuned to your exact data structure?

    The Two-Year Illusion

    Every board deck I’ve seen in the past twelve months includes some version of “AI roadmap — 18-24 month horizon.” That timeline assumed AI would keep improving incrementally. It assumed you’d have time to hire a Head of AI, run a pilot, form a committee.

    Grok didn’t improve incrementally. It gained the ability to use a computer. That’s not a point on a curve. That’s a step function.

    The companies that will win from here are the ones whose leadership understands a simple truth: AI is no longer something your team uses. It’s something that works alongside your team — writing code, running analyses, building tools, and saving its work for next time.

    The Finance Function Specifically

    For CFOs and FDs, this is where it gets concrete. An AI with a persistent computing environment can:

    • Pull data from multiple sources, clean it, and produce a consolidated management pack — every month, automatically
    • Build and maintain bespoke Python-based forecasting models that improve with each iteration
    • Run scenario analyses across portfolio companies in parallel, saving outputs for board review
    • Automate the grunt work of audit preparation — file organisation, reconciliation testing, variance analysis
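
    The first bullet above is easy to picture concretely. Here is a minimal pandas sketch of the consolidation step, with hypothetical entities, accounts, and column names (this is illustrative, not output from Grok itself):

    ```python
    import pandas as pd

    # Hypothetical per-entity trial-balance extracts, as an agent might pull
    # them from two accounting systems.
    uk = pd.DataFrame({"account": ["Revenue", "COGS"], "amount": [1200.0, -700.0]})
    us = pd.DataFrame({"account": ["Revenue", "COGS"], "amount": [800.0, -500.0]})

    # Tag each extract with its entity, stack them, and sum by account line.
    consolidated = (
        pd.concat([uk.assign(entity="UK"), us.assign(entity="US")])
          .groupby("account", as_index=False)["amount"]
          .sum()
    )
    print(consolidated.to_string(index=False))
    ```

    The point is not that this code is hard to write; it's that an agent with a shell can write it, run it against your real exports, inspect the result, and fix its own mistakes without a developer in the loop.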

    None of this required a developer. None of it required a software vendor. The AI did the work.

    What To Do On Monday Morning

    Stop treating AI as a future initiative. It became a present capability yesterday.

    Three actions for this week:

    1. Try it yourself. Go to grok.com, open the shell environment, and ask it to build something specific to your business. A cash flow model. A data reconciliation script. See what happens.

    2. Identify one process. Pick a single finance process that’s manual, repetitive, and painful. Brief your AI on it. Let it prototype a solution.

    3. Rewrite your AI roadmap. If your current plan assumes AI is 18 months away from being useful, your plan is wrong. Rewrite it with the assumption that AI can execute work today — because it can.

    The AI just got a computer. The question is whether your competitors noticed before you did.


    If your portfolio companies need help understanding what AI computing environments mean for their finance functions and operations, get in touch.

  • The Convergence: Why Energy and Food Security Are Failing at the Same Time

    April 2026 will be remembered as the month energy and food security converged into a perfect storm. On the 15th, Australia’s Viva Energy refinery in Geelong exploded, one of only two refineries the country had left. Days earlier, the UK exposed Russian GUGI submarines hovering over vital undersea cables. A year prior, Spain and Portugal endured a massive blackout that exposed the frailties of a renewables-heavy grid. Meanwhile, H5N1 has culled 200 million U.S. birds since 2022 and infiltrated dairy herds.

    These aren’t coincidences. They’re the intersection of deliberate state aggression, systemic brittleness, and biological disruption. Energy powers food production and distribution; food sustains energy workers and economies. When both falter simultaneously, cascading failures ensue.

    Geelong: Australia’s Fuel Heart Ripped Out

    Australia has shuttered refineries aggressively, dropping from 10 in 2000 to just two by 2026. Geelong, at 120,000 barrels per day, was critical, supplying Victoria and Tasmania.

    The April 15 explosion, its cause still under investigation (with rumours of a cyber or insider attack), sent a 100m fireball skyward. There were no fatalities, but operations are halted indefinitely. Fuel imports, already 90% of supply, face tanker bottlenecks amid Red Sea tensions.

    The price impact: diesel up 28%, jet fuel rationed. The knock-ons: mining halts and grocery delivery delays, per reporting in The Age.

    Russia’s Sabotage Blitzkrieg

    2025 saw 321 confirmed sabotage acts in Germany: rail arson that threw Deutsche Bahn into chaos, chemical plant fires, grid hacks. The cost: over €2bn.

    Then the escalation. In April 2026, the UK Defence Secretary revealed GUGI vessels, including Yantar-class spy ships, mapping the cables that carry 99% of transatlantic data and power, with an Akula-class boat trailing as a decoy. The RAF and Royal Navy shadowed them; the Russian vessels withdrew.

    The Baltic offers a parallel: the C-Lion1 and BCS East-West cables were severed earlier, with Russia’s shadow fleet implicated (reported by Breaking Defense and the MOD, alongside prior Baltic cable coverage).

    Iberian Collapse: Renewables’ Reckoning

    April 2025: 50 million people in the dark. REE’s post-mortem identified 17 contributing factors: solar plant faults, wind output collapsing from 40% to 2% of generation, hydro constrained by drought, and interconnector trips. Renewables were supplying roughly 70% of demand before the failure; thermal backup lagged.

    The lessons: a grid inertia shortfall and no spinning reserves. The cost: an estimated €10bn hit to GDP. The renewables push outran baseload planning (see El Confidencial’s analysis).

    H5N1 Pandemic: Protein Supply Implodes

    More than 100 million layer hens were culled between 2022 and 2025, over 200 million birds in total. Egg prices tripled. Now cattle: 100+ dairy herds infected, milk discard mandates in force, and roughly 5% of U.S. dairy output at risk.

    The supply chain angle: poultry supplies around 40% of U.S. protein, and disruptions ripple through to feedlots, since soy and corn move via energy-intensive transport (USDA detection data; FDA).

    Convergence Dynamics: Modeling the Chaos

    Energy-food nexus:

    Threat     | Energy Impact          | Food Impact
    -----------|------------------------|--------------------
    Sabotage   | Fuel/power loss        | Transport halt
    Blackouts  | Direct                 | Refrigeration fail
    Flu        | Feed/transport strain  | Protein shortage
    Explosion  | Fuel scarcity          | Farm ops stop

    An AI-run Monte Carlo puts the base case at 25% food inflation, with stress scenarios above 50%. PE exposure: agriculture assets face roughly 15% drawdowns, logistics roughly 20%.
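
    A simulation of that kind can be sketched in a few lines. Everything below is illustrative: the shock distributions and probabilities are placeholder assumptions, not the model behind the figures quoted above:

    ```python
    import random

    def simulate_food_inflation(n_runs: int = 100_000, seed: int = 0) -> float:
        """Illustrative Monte Carlo: food inflation as a sum of shock terms.

        The distributions below are placeholders, not calibrated estimates.
        """
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_runs):
            energy_shock = rng.lognormvariate(-2.0, 0.5)    # fuel/power pass-through
            protein_shock = rng.lognormvariate(-2.3, 0.7)   # H5N1-style supply loss
            sabotage = 0.05 if rng.random() < 0.3 else 0.0  # discrete disruption event
            total += energy_shock + protein_shock + sabotage
        return total / n_runs

    print(f"mean simulated food inflation: {simulate_food_inflation():.1%}")
    ```

    The value of running this at portfolio level is the tail, not the mean: stress scenarios emerge from the joint occurrence of shocks that look manageable in isolation.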

    PE Action Plan: Resilience Over Returns

    1. Reprice assets: Add 5-10% hybrid risk premium to infra yields.

    2. Capital shift: 20% to small modular reactors (SMRs), 15% vertical ag, 10% subsea hardening.

    3. AI diligence: Simulate portfolio cascades.

    Mark Hendy | PE CFO | Geopolitics + AI

  • Visa Just Gave AI Agents a Credit Card. CFOs Should Be Paying Attention.

    Last week, Visa announced Intelligent Commerce Connect — a platform that lets AI agents initiate purchases, handle tokenisation, enforce spend controls, and authenticate payments on behalf of users. Not users clicking a button. Not users confirming a pop-up. Agents. Autonomously. On Visa’s network.

    I’ve been building with AI agents for the better part of two years now. I’ve got agents managing my inbox, drafting blog posts (hello), monitoring portfolios, and scheduling meetings. But the moment you give an agent a credit card, something fundamentally shifts. This isn’t automation anymore. This is delegation of financial authority. And if there’s one thing twenty years in finance has taught me, it’s that delegating financial authority without governance is how you get fired.

    What Visa Actually Built

    Intelligent Commerce Connect is designed as a network-agnostic, protocol-agnostic on-ramp for agentic commerce. Through a single integration via Visa’s Acceptance Platform, it enables AI agents to initiate payments using both Visa and non-Visa cards. It supports multiple agent protocols — Trusted Agent Protocol, Machine Payments Protocol, Agentic Commerce Protocol, and Universal Commerce Protocol — which tells you a lot about where the industry expects this to go. There isn’t one protocol yet. There are four, and Visa is hedging by supporting all of them.

    The platform is in pilot with partners including AWS, Highnote, Mesh, and Payabli, with general availability expected by June. That’s not a research paper. That’s a shipping product on the world’s largest payment network.

    The CFO’s Nightmare Scenario

    Here’s where my CFO brain starts twitching. OutSystems research published this month found that 94% of organisations deploying agentic AI are already concerned about sprawl — agents proliferating across the business faster than governance can keep up. Now imagine those ungoverned agents have purchasing authority.

    In any well-run finance function, there’s a concept called a delegation of authority matrix — a document that says who can approve what, up to what amount, under what conditions. It’s boring. It’s bureaucratic. And it’s the single most important control preventing your procurement team from accidentally (or deliberately) buying things they shouldn’t. Every auditor checks it. Every PE firm’s due diligence team asks for it.

    The question Visa’s announcement forces us to ask is: what does a delegation of authority matrix look like when the “who” is an AI agent?

    Spend Controls Are Necessary But Not Sufficient

    To be fair, Visa’s platform does include spend controls and authentication. And the enterprise AI governance frameworks I’m seeing — Deloitte’s CFO tech guide is a decent starting point — generally recommend that any AI action above a monetary threshold requires human sign-off. The ERP stays the system of record. The agent proposes, the human approves.

    That’s fine for the first generation. But it won’t hold. The whole point of agentic AI is removing humans from routine decision loops. If every purchase order still needs a human clicking “approve,” you haven’t automated procurement — you’ve just added an extra step. The economic pressure to raise those thresholds, to widen the autonomy boundaries, will be relentless. McKinsey estimates that autonomous procurement agents can capture 15–30% efficiency improvements by eliminating non-value-added activities. No CFO under PE ownership is leaving that on the table.

    So the thresholds will creep up. The approval requirements will relax. And one morning, a finance director will discover that an agent negotiated a twelve-month SaaS contract at 3am because it determined the vendor’s dynamic pricing was optimal at that hour. Good luck explaining that to the audit committee.

    What I’m Actually Worried About

    It’s not fraud — Visa’s tokenisation and authentication should handle that tolerably well. What concerns me is the combination of three things:

    Compounding commitments. An agent optimising for one metric (say, cost-per-unit) might make individually rational purchasing decisions that collectively create an overcommitted balance sheet. No single purchase triggers a threshold. But the aggregate position is one a human would have caught.

    Vendor lock-in at machine speed. If agents are negotiating contracts, they’ll optimise for the parameters they’re given. Unless you’ve explicitly programmed “maintain optionality” as a constraint, they’ll happily lock you into the cheapest long-term deal every time. Strategic flexibility is a human judgement call that agents don’t naturally make.

    Audit trail legibility. Right now, if I ask a PE firm’s portfolio company why they spent £2m with a particular vendor, someone can explain the reasoning. When an agent made that call based on a multi-variable optimisation that factored in 47 data points at 3am, the “reasoning” is a probability distribution. Try putting that in a board pack.
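
    The first of those three failure modes, compounding commitments, is also the most tractable to guard against: the control has to look at aggregate exposure across purchases, not each transaction in isolation. Here is a minimal sketch of such a check; the class, names, and thresholds are hypothetical, not Visa’s API:

    ```python
    from dataclasses import dataclass

    @dataclass
    class CommitmentMonitor:
        """Flags when agent purchases, each individually small, breach an aggregate cap."""
        per_txn_limit: float
        aggregate_limit: float
        committed: float = 0.0

        def approve(self, amount: float) -> bool:
            if amount > self.per_txn_limit:
                return False  # caught by an ordinary per-transaction spend control
            if self.committed + amount > self.aggregate_limit:
                return False  # the case a per-transaction threshold alone misses
            self.committed += amount
            return True

    # Twelve purchases of 4,900 each: every one clears the 5,000 per-transaction
    # limit, but only ten fit under the 50,000 aggregate cap.
    monitor = CommitmentMonitor(per_txn_limit=5_000, aggregate_limit=50_000)
    approvals = [monitor.approve(4_900) for _ in range(12)]
    print(approvals.count(True))  # → 10
    ```

    A real implementation would track commitments per vendor and per category with time decay, but the design point stands: the binding constraint belongs on the aggregate position, which is exactly what today’s per-transaction approvals don’t see.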

    The Parallel With Prediction Markets

    There’s an interesting echo here with what’s happening on Polymarket. This month, Bloomberg reported that $170 million flowed through Iran ceasefire bets, with at least 50 freshly created accounts placing suspiciously well-timed wagers before Trump’s announcement. Lawmakers are now asking whether prediction markets can even govern themselves.

    The pattern is the same: a powerful new mechanism for allocating capital, moving faster than the governance structures around it. Prediction markets and agentic commerce are both examples of what happens when you remove humans from the decision loop in systems that move money. Speed goes up. Friction goes down. And the attack surface for bad actors — or simply for well-intentioned software making collectively poor decisions — expands dramatically.

    What I Think Happens Next

    Visa’s move is the starting gun, not the finish line. By the end of 2026, I’d expect to see:

    Agentic audit frameworks becoming a real discipline — not just “we log what the agent did” but actual real-time monitoring of agent purchasing patterns against policy. The Big Four will sell this as a service by Q4.

    CFOs who’ve been ignoring AI agents will suddenly care a great deal, because an agent with a Visa card is no longer an IT project — it’s a financial control issue that sits squarely in their remit.

    And at least one spectacular failure. Some company will give agents too much rope, the agents will collectively do something that’s individually rational but organisationally stupid, and it’ll end up in the FT. That’s not pessimism. It’s pattern recognition. Every new financial instrument goes through this cycle.

    For now, I’m building my own agent governance framework — spend limits, approval chains, real-time anomaly detection — because I’d rather design the guardrails before Visa’s platform goes GA in June than after. If you’re a CFO reading this and you’re not thinking about what happens when AI agents can spend money on your company’s behalf, you’re already behind.

  • When the Machine Fires the Customer

    Mo Gawdat, former Chief Business Officer of Google X, lands a haymaker via a viral thread by @r0ck3t23: AI dismantles capitalism’s core—labor arbitrage. The model? Pay humans $1 for their time, sell the output for $2. AI drives the input cost to zero. Game over.

    In my world as a PE interim CFO, I’ve stress-tested portfolios against AI disruption. Gawdat’s arithmetic is spot-on. But the true dynamite is the vicious cycle he uncovers: workers are customers. Fire them, and you torch demand. No hysteria—just cold finance logic. Let’s break it down.

    Labor Arbitrage: The Engine of Empire

    Strip capitalism bare: it’s arbitrage on human effort. Factories arbitrage $12/hour assemblers into $100 widgets. SaaS firms turn $150k devs into $10M ARR. The spread—profit—fuels empires. Private equity thrives here: buy labor-intensive businesses, optimize (read: squeeze), flip.

    Gawdat puts it bluntly: “The very base of capitalism, which is labor arbitrage… is going to disappear.” Global supply chains exemplify it: iPhones assembled at Foxconn for pennies on the dollar, sold at premiums. AI? LLMs generate code at an amortized $0.01/query. Humanoids like Figure 01 (~$20k amortized over a lifetime, not $9k day-one, but close enough) run endless shifts.

    Result: Production cost floor vanishes. No wages, no unions, no sick days. Your LBO model? Redo the cost of goods from 40% margin to 95%.
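
    The margin arithmetic behind that claim, as a toy worked example (the numbers are illustrative, not from any actual model):

    ```python
    def gross_margin(revenue: float, cogs: float) -> float:
        """Gross margin as a fraction of revenue."""
        return (revenue - cogs) / revenue

    # The same $100 of revenue, before and after labor drops out of
    # cost of goods sold.
    print(f"{gross_margin(100.0, 60.0):.0%}")  # labor-heavy COGS -> 40%
    print(f"{gross_margin(100.0, 5.0):.0%}")   # AI-driven COGS  -> 95%
    ```

    On an LBO model, that shift compounds: the same revenue line throws off more than twice the gross profit, which changes debt capacity, covenant headroom, and exit multiples all at once.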

    The Deadly Feedback Loop

    Gawdat’s killer insight: Employees = End consumers. Automate white-collar? Lawyers, analysts, marketers—gone. Goldman Sachs saved $1B with AI, laid off juniors. Scale to economy: IMF projects 40% jobs exposed. Unemployment spikes to 30-50%? Consumer spending craters.

    Your portfolio company automates DC ops, cuts 20% headcount. Short-term EBITDA pop. Long-term: Those ex-workers skip Black Friday. Infinite AI supply chases evaporating demand. Companies quietly assassinate their customer base.

    Gawdat’s Hits and Misses

    Hits Hard: Feedback loop unassailable. Echoes Keynes’ technological unemployment, but turbocharged. Rust Belt 2.0, global.

    Oversimplifications:

    • $9k robots: Hype. Real capex + opex higher short-term. But trajectory undeniable—costs plunging.
    • Scarcity evaporates: Bold leap. Compute, rare earths, energy constrain. Abundance requires ITER-scale fusion.
    • Transition blindspot: UBI from robot taxes? Congress gridlock. “Hell before heaven,” Gawdat says—12-15 years chaos per BI.

    CFO Pivot: From Ops to Oracle

    Not apocalypse—repositioning. Human value migrates upstack.

    Judgment Trumps Jargon: AI spits forecasts. You discern signal from noise: “This uptick? Competitor distress sell.” Relationships seal deals. Authority greenlights bets. AI can’t schmooze VCs or stare down boards.

    Operational CFOs Vulnerable: Rollups, variance reports—AI devours. McKinsey flags 45% finance exposure.

    Capital Ownership Essential: Wage slaves sink. Equity kings rise. PE CFOs: Deploy into AI infra, not legacy labor.

    AI Fluency = 2x Comp: Model agentic workflows, capex ramps. Prompt strategic insights. Demand surges—I’ve seen packages jump 50%.

    Survivors arbitrage human judgment + AI horsepower.

    Ready for the Reckoning? Audit your PE portfolio’s AI exposure. Book 30-min call. Let’s harden your numbers.

  • GLM-5.1: The Chinese Open-Source Model That Just Beat GPT and Claude at Their Own Game

    Something significant happened in the AI landscape this week, and I suspect it hasn’t got the attention it deserves outside of developer circles. Z.AI — the platform behind the GLM model family, developed by Zhipu AI in China — released GLM-5.1, a 754 billion parameter open-source model that has just topped the SWE-Bench Pro leaderboard with a score of 58.4, beating GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro.

    Let that land for a moment. An open-source, MIT-licensed model, trained entirely on Huawei Ascend 910B chips — no Nvidia, no American silicon — has beaten the flagship closed models from OpenAI, Anthropic, and Google on one of the most respected software engineering benchmarks in existence.

    What Makes GLM-5.1 Different

    The headline number is impressive, but what actually interests me is the architecture of how this model works. GLM-5.1 isn’t just better at answering questions — it’s designed for sustained autonomous execution. In testing, it completed an eight-hour uninterrupted coding session: plan, execute, test, optimise, repeat. 655 iterations. Built a Linux desktop environment from scratch. Increased vector database query throughput by 6.9 times.

    This is a different category of capability. We’re not talking about a better chatbot. We’re talking about an AI that can hold a task in mind, work through it independently, hit dead ends, correct course, and deliver a finished result — the way a competent junior engineer would, but without stopping for the night.

    The technical foundation is a Mixture-of-Experts architecture with 40 billion active parameters per token (not all 754B are active at once, which is what keeps inference costs manageable). It supports a 200,000 token context window with up to 128,000 output tokens. API access is priced at $1.00 per million input tokens and $3.20 per million output tokens — a fraction of what the US frontier models charge.
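
    At those rates, per-task costs are easy to estimate. A quick sketch, where the token counts are hypothetical (a long agentic coding turn, not a quoted workload):

    ```python
    INPUT_RATE = 1.00 / 1_000_000   # dollars per input token
    OUTPUT_RATE = 3.20 / 1_000_000  # dollars per output token

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Cost of a single API call at the quoted GLM-5.1 rates."""
        return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

    # A hypothetical long agentic turn: 150k tokens of context in, 20k out.
    print(f"${request_cost(150_000, 20_000):.3f}")  # → $0.214
    ```

    Call it a fifth of a dollar for a context-saturating request. At that price, running hundreds of autonomous iterations per task stops being a budgeting question.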

    Why This Matters Beyond the Benchmarks

    I’ve written before about AI moving from a tool you prompt to a system that acts. GLM-5.1 is a concrete illustration of that shift happening faster than most people expected, and from a direction many in the West weren’t watching closely.

    The geopolitical dimension is real. This model was trained on Huawei hardware using Huawei’s MindSpore framework — a deliberate demonstration that China’s AI development pipeline is no longer dependent on US export-controlled chips. The export restrictions that were supposed to slow Chinese AI development have instead accelerated domestic alternatives. That is a significant strategic development, regardless of where you sit on the AI competition question.

    The open-source dimension is equally significant. With weights published under an MIT licence, GLM-5.1 can be downloaded, fine-tuned, and deployed by anyone. The closed-model advantage that OpenAI and Anthropic have built commercial moats around is being systematically eroded — not just by each other, but by well-resourced open-source releases like this one.

    What I Take From This

    I use AI heavily in my work — for financial analysis, document preparation, research, and increasingly for autonomous background tasks. The pace at which these systems are improving is not slowing down. If anything, GLM-5.1 suggests the competitive field is widening: more players, more approaches, more open options.

    For anyone running a business or advising one, the practical implication is straightforward: the cost of access to frontier-level AI capability is falling rapidly, and the choice of provider is expanding. The question is no longer whether to use these tools — it’s which ones, for what, and how to build processes around them that compound over time.

    GLM-5.1 is worth watching. Not because it’s the final word, but because it’s a clear signal that the race is genuinely global, the open-source movement is closing the gap faster than expected, and the next twelve months are going to be interesting.


    GLM-5.1 is available via z.ai on the GLM Coding Plan, with weights on Hugging Face under MIT licence.

  • Polymarket’s Insider Trading Scandal Is the Best Case for Prediction Markets

    Fifty brand-new Polymarket accounts placed large bets on a U.S.–Iran ceasefire in the hours before Trump announced it on social media. Some of those accounts turned a few thousand dollars into six figures overnight. Congress is furious. Senator Blumenthal wants investigations. The hot take is that prediction markets are broken.

    I think the opposite is true. This is prediction markets working exactly as designed — and the reaction tells you more about the people reacting than it does about the platform.

    The Numbers Are Genuinely Wild

    Let’s be clear about what happened. Over $170 million flowed through Polymarket’s Iran ceasefire markets, making it one of the largest geopolitical wagers in prediction market history. Blockchain analytics firm Lookonchain flagged three freshly created accounts that collectively pocketed more than $480,000 by betting on a ceasefire before selling at the top. One account — the now-infamous “Magamyman” — placed its first-ever trade seventy-one minutes before news broke, when the market implied only a 17% probability. It walked away with roughly $553,000.
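
    The mechanics behind returns like that are simple binary-option arithmetic: a YES share bought at the implied probability pays out $1 on resolution. A sketch, where only the 17% entry price comes from the reporting above and the stake logic is illustrative:

    ```python
    def binary_payout(stake: float, entry_price: float) -> float:
        """Profit on a YES position that resolves to $1, entered at `entry_price`."""
        shares = stake / entry_price
        return shares * 1.0 - stake

    # At a 17% implied probability, each dollar staked returns about $4.88 of
    # profit if the event resolves YES, so a ~$553k profit implies a
    # six-figure stake.
    print(round(binary_payout(1.0, 0.17), 2))  # → 4.88
    ```

    That asymmetry is why well-timed entries at low implied probabilities are so lucrative, and why they stand out so clearly on-chain.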

    A Harvard study published last month went further, screening over 93,000 Polymarket markets and nearly 50,000 wallet addresses from 2024 to early 2026. Their finding: across 210,000 flagged wallet-market pairs, suspicious traders achieved a 69.9% win rate — more than 60 standard deviations above chance. Total estimated profits from potentially informed trading: $143 million.
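
    For intuition on what “standard deviations above chance” means mechanically, here is a naive one-sample z-test. The 69.9% and 210,000 figures come from the paragraph above; the 50% baseline is a hypothetical assumption, and the study’s own baseline and corrections will differ, which is why its figure is not reproduced here:

    ```python
    import math

    def win_rate_z(wins: int, n: int, baseline: float) -> float:
        """Z-score of an observed win rate against a baseline, under a binomial model."""
        p_hat = wins / n
        se = math.sqrt(baseline * (1 - baseline) / n)  # standard error under H0
        return (p_hat - baseline) / se

    # Illustrative only: 69.9% wins across 210,000 flagged wallet-market pairs,
    # tested against an assumed 50% chance baseline.
    print(round(win_rate_z(146_790, 210_000, 0.5)))
    ```

    However the baseline is set, the sample is large enough that a win rate this far above it is effectively impossible by luck, which is the study’s point.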

    Those numbers look damning. But here’s the thing that nobody seems to want to say out loud.

    We Only Know Because It’s on a Blockchain

    Every single one of these trades is visible. Timestamped. Publicly auditable. A Harvard research team could sit down and systematically screen two years of trading data because it’s all on-chain. The accounts are pseudonymous, sure, but the trades themselves are transparent in a way that nothing in traditional finance comes close to matching.

    Now think about how insider trading works in equities. Someone at a law firm hears about a merger. They tell their cousin. The cousin buys call options through a brokerage account that takes the SEC years to subpoena — if they ever notice at all. The SEC estimates it catches a fraction of actual insider trading. Academic studies suggest informed trading precedes something like 25% of major M&A announcements. We just don’t see it, because the infrastructure is designed for opacity.

    Polymarket’s “scandal” is that informed trading is visible in real time. That’s not a bug. That’s the entire point of building financial infrastructure on transparent ledgers. Congress is effectively complaining that the system is too transparent.

    The Real Question Isn’t Whether People Traded on Information

    Of course people traded on information. That’s what markets are for. The interesting question is: who had the information, and should they have been allowed to trade on it?

    If White House staffers or intelligence officials were betting on ceasefire outcomes they helped negotiate, that’s a serious problem — but it’s a problem of government ethics, not of prediction markets. The White House reportedly warned staff in late March against trading on prediction markets. Which rather suggests they knew it was happening.

    But if journalists, diplomats, or well-connected analysts were trading on information they’d gathered through legitimate means? That’s price discovery. That’s exactly how you want markets to function. The ceasefire contract moving from 17% to near-certainty before the official announcement is the market doing its job — aggregating dispersed information faster than any single news outlet could.

    When Polymarket’s election markets outperformed every major polling model in 2024, we celebrated prediction markets as an information revolution. Now that the same mechanism is surfacing uncomfortable truths about who knows what in Washington, we want to shut it down. You can’t have it both ways.

    What This Means for the Platform

    Polymarket clearly knows it’s at a crossroads. Last week they announced a full exchange upgrade — a rebuilt trading engine, updated smart contracts, and a new USDC-backed collateral token called Polymarket USD. They’re preparing for serious U.S. expansion, which means they’re preparing for serious U.S. regulation.

    The smart move would be some form of voluntary KYC tier for large positions on geopolitically sensitive markets. Not because prediction markets should be restricted, but because demonstrating that you can identify bad actors — government officials trading on classified information, for instance — is how you survive regulatory scrutiny. The blockchain already gives you the trade data. Add identity to the large positions and you’ve got something the SEC can only dream of for equity markets.

    The CFO Angle (Because There’s Always a CFO Angle)

    I spend my days working with PE-backed businesses where forecasting accuracy is everything. We obsess over budget variance, rolling forecasts, scenario models. And here’s a market that just priced a geopolitical event more accurately and more quickly than any intelligence briefing, any analyst note, any Bloomberg terminal alert.

    If you’re a CFO running scenario planning on geopolitical risk — and if you’re in any business exposed to energy prices or supply chains, you should be — prediction markets are increasingly a better input than traditional sources. Polymarket’s finance predictions already cover oil prices, rate decisions, and major policy outcomes. The Iran ceasefire market moved Bitcoin from $67,000 to $72,700 and back. WTI crude is sitting above $110. These aren’t abstract bets — they’re real-time consensus probability that feeds directly into the models I build every week.

    The Harvard researchers found statistical anomalies. Congress found a talking point. But what I found was a market that processed a ceasefire probability faster than Reuters could file a story. And I’ll take that signal every time.

    Where This Goes

    Prediction markets aren’t going away. Polymarket is a $20 billion platform now. The CLARITY Act markup later this month will shape the U.S. regulatory framework, and the smart money — no pun intended — is on a framework that legitimises these markets with guardrails rather than banning them.

    The insider trading debate will rage on. But every time someone points to suspicious Polymarket trades as evidence that prediction markets are dangerous, remember: the only reason we can see those trades is because the system is transparent. The alternative isn’t a world without insider trading. It’s a world where insider trading happens in the dark.

    I know which one I prefer.

  • The CFO Who Took the Business Through the Deal is Often the First Casualty

    Not the tidy version. The real, uncomfortable one.

    The investment team made representations. They relied on advisors, they wrote the investment plan, they presented it to the IC. Now the cheque is written and their credibility is on the line. Every week of underperformance is a question mark over their judgement. Every green light is validation.

    They project that pressure downward.

    The management team feel it. The CEO feels it. But the CFO feels it first, because the CFO is the one who has to explain why the numbers don’t quite match the investment plan.

    The CFO who took the business through the deal is uniquely exposed. During the process they had to be relentlessly positive. “Here’s how we’ll unlock the value.” “Here’s why the churn is fixable.” “Here’s the evidence behind the margin expansion story.” They were a full partner in selling the deal.

    Now the deal is done. The investment team is nervous. The board is watching. And the numbers — as they always do in the first few months post-close — are telling a more complicated story than the investment plan told.

    Suddenly it’s the CFO’s fault. Not explicitly. But the questions get harder. The calls get more frequent. The patience gets shorter.

    The CFO often doesn’t survive it.

    And here’s the thing — sometimes that’s not even unfair. The CFO who sold the deal is not always the right person to deliver it. Those are different skills. Different temperaments. A different relationship with uncomfortable truths.

    So they leave. Or they’re moved on. Quickly, quietly, and usually within six months of close.

    The Problem That Creates

    The PE house now has a problem. The CFO is the second most important hire after the CEO. You cannot run a board, manage a lender relationship, or credibly execute a value creation plan without one. The permanent hire — if they’re any good — is on six months’ notice somewhere else. You need time to get this right.

    That’s where the interim CFO comes in.

    The interim CFO isn’t a gap-fill. Done properly, it’s the thing that buys the business the breathing space to make a good permanent hire instead of a rushed one. Someone who can walk in, stabilise the investor relationship, take ownership of the 100-day plan, and leave the business better than they found it — without any expectation of staying.

    The Real Job

    An interim who has been there before — who has stood in that boardroom, managed that investor relationship, built that first management pack from scratch — gives the PE house something they desperately need in that moment: confidence.

    Confidence that the business is in safe hands. Confidence that the reporting will be credible. Confidence that they can take their time and get the permanent hire right.

    Speed kills. Patience wins.

    That’s the job.


    Mark Hendy is an interim CFO specialising in PE-backed businesses. He writes about finance, private equity, and the reality of post-deal life at markhendy.com. Connect on LinkedIn.

  • The SaaSpocalypse Is Real — But the Market Is Panicking About the Wrong Thing

    I spent last week reviewing the tech stack costs across three portfolio companies. The exercise used to be straightforward: count seats, multiply by price, negotiate volume discounts. This time, two of the three CFOs asked me the same question: "Should we be cancelling licences and moving to agents?"

    That question — and the speed at which it has gone from fringe to board-level — tells you everything about where we are in April 2026.

    $285 Billion in 48 Hours

    If you have been anywhere near a Bloomberg terminal this year, you know the term SaaSpocalypse. It started in late January when Anthropic shipped Claude Cowork with industry-specific agent plugins — legal contract review, financial analysis, sales automation — and the market did what markets do: it extrapolated to infinity.

    Bloomberg reported roughly $285 billion wiped from SaaS valuations in a single 48-hour window. Thomson Reuters dropped 15%. LegalZoom cratered nearly 20%. By mid-March, the IGV software ETF was down over 21% year-to-date, and analysts were calling it the largest AI-triggered repricing in software history.

    The logic was brutally simple. If an AI agent can do the work of five humans, why pay for five seats? The per-seat pricing model — the entire economic foundation of B2B SaaS since Salesforce invented it — was suddenly an existential vulnerability.

    What the Market Got Right

    Let me be clear: the structural thesis is correct. Per-seat pricing is dying. I have seen it in our own portfolio.

    One of our companies ran a pilot replacing three junior paralegals' document review work with an AI agent pipeline. The agent does not need a LegalZoom subscription, a DocuSign seat, or a Westlaw login in the traditional sense. It calls APIs, processes documents, and routes exceptions to a human. The annual software cost for those three "seats" — roughly £45,000 — dropped to about £8,000 in LLM API costs.

    That is not a marginal improvement. That is a different business model.

    The survey data backs this up: 40% of IT budgets are reportedly being reallocated from traditional SaaS subscriptions to agentic platforms and token-based usage. CIOs are not asking "how many employees will use this?" anymore. They are asking "how many tasks can this complete?" That is a fundamental shift in procurement psychology, and SaaS companies built on headcount-correlated revenue should be worried.

    What the Market Got Wrong

    Here is where it gets interesting — and where I think the panic has overshot.

    The market treated the SaaSpocalypse as if every SaaS company is equally exposed. They are not. There is a massive difference between a company that sells seats for humans to click buttons and a company that sells the underlying data, workflow engine, or integration layer that agents also need.

    Thomson Reuters does not just sell a UI for lawyers. It sells access to legal databases, case law, and regulatory intelligence. An AI agent doing contract review still needs that data. The delivery mechanism changes; the underlying value does not. Same story with ServiceNow — the Motley Fool piece this week calling it a bargain has a point. Workflow orchestration becomes more valuable when you have agents that need orchestrating, not less.

    The companies that are genuinely toast are the ones that were essentially selling a graphical interface on top of commodity functionality. If your product is a pretty wrapper around CRUD operations and your moat was user habit, then yes, an agent that calls the same APIs without the wrapper is an existential threat. But that is not every SaaS company — it is maybe 30% of them.

    What This Means If You Are a CFO

    Here is my practical take, from someone currently navigating this across multiple PE-backed businesses:

    Audit your stack ruthlessly, but intelligently. Do not just cancel licences because agents are trendy. Map each SaaS tool to what it actually provides: is it data, workflow, integration, or just interface? The first three categories will likely survive the transition. The fourth will not.
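    A minimal sketch of that triage in Python. The tool names and category tags below are hypothetical examples, not tools from any real stack:

    ```python
    # Minimal stack-audit triage: tag each SaaS tool by what it actually provides.
    # Tool names and their tags are hypothetical, for illustration only.
    SURVIVORS = {"data", "workflow", "integration"}  # categories likely to outlast the shift

    stack = {
        "LegalResearchDB": "data",    # proprietary content that agents still need
        "TicketFlow": "workflow",     # orchestration that agents plug into
        "SyncBridge": "integration",  # the layer agents call through
        "FormPainter": "interface",   # a pretty wrapper over commodity CRUD
    }

    # Anything whose value is mostly the interface layer goes on the at-risk list.
    at_risk = [tool for tool, kind in stack.items() if kind not in SURVIVORS]
    print(at_risk)
    ```

    The point is not the code; it is forcing a one-word answer per tool before any licence gets cancelled.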

    Start modelling token-based costs now. The shift from per-seat to per-task pricing is real, but token economics are volatile and opaque. I have seen API costs swing 30% month-on-month as providers adjust pricing. You need a cost model that accounts for this, and you need someone on your team who understands it — not just a vendor's sales estimate.
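    As a sketch of what that cost model might look like, assuming per-token billing with separate input and output rates. Every figure below (task volumes, tokens per task, prices) is an illustrative assumption, not vendor pricing:

    ```python
    # Illustrative per-task token cost model. All numbers are assumptions.

    def monthly_agent_cost(tasks, tokens_in, tokens_out, price_in_per_m, price_out_per_m):
        """Estimated monthly spend for an agent billed per million tokens."""
        input_cost = tasks * tokens_in / 1_000_000 * price_in_per_m
        output_cost = tasks * tokens_out / 1_000_000 * price_out_per_m
        return input_cost + output_cost

    base = monthly_agent_cost(
        tasks=5_000,           # agent tasks per month (assumed)
        tokens_in=12_000,      # context and documents fed in per task (assumed)
        tokens_out=1_500,      # output returned per task (assumed)
        price_in_per_m=2.40,   # price per million input tokens (assumed)
        price_out_per_m=9.60,  # price per million output tokens (assumed)
    )

    # Prices have swung ~30% month-on-month, so report a band, not a point estimate.
    low, high = base * 0.70, base * 1.30
    print(f"base {base:,.0f}/mo, range {low:,.0f} to {high:,.0f}")
    ```

    The band matters more than the point: budget to the top of it and treat anything below as upside.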

    Watch the middleware layer. The real winners of the agentic transition might not be the agent builders themselves. Microsoft's Agent Framework 1.0, released last week, unifies Semantic Kernel and AutoGen into a production-ready orchestration layer. That is the plumbing that enterprises will standardise on. If you are making build-vs-buy decisions on agent infrastructure, this is the framework to evaluate first.

    Do not mistake a repricing for a revolution — yet. Most enterprises are still in pilot mode. The 40% budget reallocation figure is aspirational, not actual. In our portfolio, the company furthest along has moved maybe 12% of its SaaS spend to agent-based alternatives. The rest are running proofs of concept. The gap between "we are exploring AI agents" and "we have decommissioned Salesforce" is about three years of integration work and change management.

    The PE Angle

    For anyone in private equity, the SaaSpocalypse is creating a genuinely interesting buying opportunity. High-quality SaaS businesses with real data moats and sticky integration layers are trading at 2022-era multiples. If you believe — as I do — that the best SaaS companies will successfully transition to hybrid pricing models (seats plus tokens plus outcomes), then the current discount is mispriced fear.

    The businesses to avoid are the ones with high seat counts, low switching costs, and functionality that an off-the-shelf agent can replicate. You know the type: they raised a Series B on "AI-powered" features that were really just a ChatGPT wrapper bolted onto a form builder.

    The SaaSpocalypse is real. But like most market panics, it is painting with too broad a brush. The death of per-seat pricing does not mean the death of software businesses. It means the death of lazy software businesses. And frankly, most of those were overdue a correction anyway.

  • Iran Wants Bitcoin for Hormuz Tolls. Here’s Why That’s Not Really a Bitcoin Story.


    Iran has reportedly demanded that ships transiting the Strait of Hormuz pay a $1 per barrel toll — in Bitcoin. Whether this actually happens is almost beside the point. The signal is loud enough on its own.

    The Financial Times reported it. X ran with it. And for anyone paying attention to how money actually moves around the world, this is one of those moments you file away.

    Why Bitcoin? Because Nothing Else Works for This

    Think about the problem Iran is trying to solve. It needs to collect money from ships it doesn’t fully control, in a currency it can actually use, without the transaction being frozen, reversed, or sanctioned before it clears. Try doing that in dollars. Try doing it in euros. SWIFT can be cut off. Bank accounts can be seized. Assets can be frozen mid-transaction.

    Bitcoin can’t be frozen. It can’t be censored. There’s no intermediary to lean on. Final settlement takes minutes, not days. And crucially — no counterparty trust is required. You don’t have to trust Iran. Iran doesn’t have to trust you. You both just have to agree to use the same protocol.

    That’s not ideology. That’s just how the technology works.

    The CFO’s Perspective

    I spend most of my working life thinking about how money moves — how businesses get funded, how transactions settle, where the risks sit in a capital structure. Most of that thinking happens within a framework that assumes the dollar is the world’s operating system. An assumption that has served well for decades but is increasingly worth questioning.

    The weaponisation of the financial system is real and accelerating. SWIFT exclusions, asset freezes, secondary sanctions — these are now routine tools of geopolitics. They’re effective precisely because the global financial system is centralised. Centralised systems have chokepoints. Chokepoints can be controlled.

    When a sanctioned nation-state proposes settling a strategic toll in Bitcoin, it isn’t making an ideological statement about decentralisation. It’s solving an engineering problem. It needs a payment rail that doesn’t have a chokepoint. Bitcoin is the only thing that fits that description at scale.

    The Game Theory Is Already Running

    Jesse Tevelow wrote a long piece on the game theory embedded in this moment, and he’s right about the core dynamic. Once one significant nation-state uses Bitcoin for sovereign settlement — even partially — it changes the calculus for every other state. The competitive pressure to accumulate, or at minimum not fall behind, kicks in.

    The US has already moved. A strategic Bitcoin reserve was announced earlier this year. That wasn’t random. It was a recognition that the game had already started, and that sitting it out entirely carried its own risks.

    We’re now in a world where adversaries — nations that fundamentally distrust each other — can transact without requiring mutual trust. Only mutual adherence to a shared protocol. That’s a genuinely new thing. It has implications that will play out over decades, not months.

    What This Means for Business

    In the near term, not much changes for most businesses. The Strait of Hormuz toll proposal may come to nothing. But the direction of travel is clear, and it’s worth thinking through the second- and third-order effects.

    If Bitcoin becomes a meaningful component of sovereign settlement — even for sanctioned or constrained nations — it establishes a precedent. It creates a parallel layer of global financial infrastructure that operates outside traditional banking rails. That layer will grow. It will attract liquidity. It will become harder to ignore.

    For PE-backed businesses with international exposure: the question of which payment rails to support, which currencies to hold, and how to think about counterparty risk in cross-border transactions is going to get more complicated before it gets simpler. That’s a treasury question. It’s also increasingly a strategic one.

    For finance functions more broadly: the era of assuming the dollar-based correspondent banking system is the only game in town is ending. Not quickly. Not completely. But directionally, the trend is unmistakable.

    The Part I Find Most Interesting

    Beyond the geopolitics, there’s an argument — which Tevelow makes in his original piece — that hard money raises the cost of conflict. When you can’t print your way to war, war gets harder to sustain. The inflationary financing of military adventurism becomes less viable. That’s a long-horizon thesis, and I’d hold it loosely. But it’s not an unreasonable one.

    Historically, the ability to inflate currency has been the hidden subsidy for conflict. Governments rarely raise taxes to fund wars — they borrow and print, and the cost is deferred and diffused. Bitcoin, by design, removes that mechanism. Whether that actually changes behaviour at the nation-state level is an open question. But it’s an interesting structural constraint.

    The Bottom Line

    Iran demanding Bitcoin isn’t a Bitcoin story. It’s a financial infrastructure story. It’s a story about what happens when the tools used to enforce geopolitical compliance — sanctions, payment exclusions, asset freezes — start creating the demand for systems that are immune to them.

    That demand was always going to produce a supply. Bitcoin is the supply.

    The interesting question now isn’t whether this happens — it’s how quickly, and what the incumbent financial system does in response. I’d be surprised if the answer is “nothing”.


    Mark Hendy is an interim CFO working with PE-backed businesses. He writes about finance, AI, and the world at markhendy.com. Follow on LinkedIn.

  • AI Agent Memory: Build the System That Learns. Don’t Be the System Yourself.


    A post went viral today — over 24,000 views in a few hours — claiming that AI agent memory “out of the box sucks” and that you need Obsidian to fix it. It resonated. But I think the conversation is missing something.

    The underlying problem is real. Most AI agents are amnesiac by default. Every session starts fresh. They don’t remember what you told them last week, what decisions you made, what context matters. You end up repeating yourself constantly — which defeats the point of having an assistant at all.

    The Obsidian solution people are sharing works like this: you maintain a structured vault of markdown notes, your agent reads from it at session start, and you manually curate what goes in. It’s better than nothing. But it has a fundamental problem — it still requires you to do the work.

    The Memory Problem, Properly Stated

    The goal isn’t just persistent storage. It’s useful persistent storage. There’s a difference between an agent that can retrieve a file you pointed it at, and one that has genuinely learned from your interactions — that knows what matters to you, what you’ve decided, what patterns recur in your work.

    Manual curation doesn’t scale. If you’re running an AI agent seriously — dozens of interactions a day — you cannot manually decide what gets committed to long-term memory. You’ll either capture too little (and lose signal) or spend as much time curating memory as you save everywhere else.

    What you actually need is a system that does this automatically, with enough intelligence to distinguish noise from signal.

    What I Built Instead

    I run OpenClaw with an AI assistant I’ve named Saul — a PE-facing CFO’s take on the AI agent problem, which I wrote about here. Over the past few months, I’ve built out a three-layer memory architecture that removes the manual curation problem entirely.

    The layers:

    • Daily notes — raw logs of what happened each session. Every interaction, decision, and piece of context gets written here automatically.
    • MEMORY.md — curated long-term memory. The distilled essence: decisions made, preferences established, important context. Think of it as the agent’s actual knowledge of you.
    • Dreaming — a nightly automated process (new in OpenClaw 2026.4.8) that reviews daily notes, scores entries by frequency, relevance, recency and query diversity, and promotes the strongest signals into MEMORY.md automatically. No manual curation.

    The third layer is the one that matters. Every night at 3am, the agent runs what OpenClaw calls a “dreaming” sweep — light phase sorts and stages recent material, REM phase extracts recurring themes, deep phase decides what gets promoted to long-term memory. The thresholds are configurable. The process is auditable. And it happens without me thinking about it.
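    The promotion step can be sketched as a scoring pass. To be clear, this is a hypothetical reconstruction of how frequency, relevance, recency and query diversity might combine; the weights, field names and threshold are my assumptions, not OpenClaw’s actual internals:

    ```python
    # Hypothetical sketch of a nightly "dreaming" sweep: score daily-note entries
    # and promote the strongest into long-term memory. Weights, fields and the
    # threshold are illustrative assumptions, not OpenClaw's real implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Entry:
        text: str
        frequency: int    # how often the theme recurred across sessions
        relevance: float  # 0..1 similarity to established priorities
        days_old: int     # age of the most recent occurrence
        query_kinds: set = field(default_factory=set)  # distinct query types that touched it

    def score(e: Entry) -> float:
        recency = 1.0 / (1 + e.days_old)            # newer material scores higher
        diversity = min(len(e.query_kinds), 5) / 5  # capped query-diversity bonus
        return (0.4 * min(e.frequency, 10) / 10
                + 0.3 * e.relevance
                + 0.2 * recency
                + 0.1 * diversity)

    def dream(daily_notes: list[Entry], threshold: float = 0.5) -> list[str]:
        """Return the entries strong enough to promote into MEMORY.md."""
        return [e.text for e in daily_notes if score(e) >= threshold]
    ```

    A recurring, recent, broadly queried theme clears the bar; a one-off note from a month ago does not. That asymmetry is the whole point of automating the curation.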

    The Obsidian Angle

    The Obsidian approach people are excited about is essentially building layer two manually. It works, and if you’re starting from nothing it’s a reasonable place to start. OpenClaw’s memory-wiki plugin (also new in 2026.4.8) is actually Obsidian-compatible — same markdown format, same vault structure — so the two aren’t mutually exclusive.

    But if you’re going to invest time in your agent’s memory architecture, I’d argue the better investment is in automation rather than manual curation. Build the pipeline that decides what matters, rather than deciding manually every time.

    Why This Matters Beyond the Tech

    I’m a CFO. My primary concern with AI agents isn’t whether they’re impressive in a demo — it’s whether they actually reduce friction in the work I do every day. An agent with poor memory creates more friction, not less. You spend time re-explaining context, re-stating preferences, re-establishing where you are on a project.

    The ROI on getting memory right is substantial. An agent that genuinely knows you — your clients, your decisions, your communication style, your priorities — operates at a different level of usefulness. The gap between a well-configured agent and a default one isn’t incremental. It’s categorical.

    If you’re using an AI agent seriously and you haven’t thought about memory architecture, you’re leaving most of the value on the table. Whether you use the Obsidian approach, OpenClaw’s native dreaming, or something else — the manual-entry-only approach isn’t good enough long term.

    Build the system that learns. Don’t be the system yourself.


    Mark Hendy is an interim CFO working with PE-backed businesses. He writes about AI, finance, and the intersection of the two at markhendy.com. Follow on LinkedIn.