Visa Just Gave AI Agents a Credit Card. CFOs Should Be Paying Attention.

Last week, Visa announced Intelligent Commerce Connect — a platform that lets AI agents initiate purchases, handle tokenisation, enforce spend controls, and authenticate payments on behalf of users. Not users clicking a button. Not users confirming a pop-up. Agents. Autonomously. On Visa’s network.

I’ve been building with AI agents for the better part of two years now. I’ve got agents managing my inbox, drafting blog posts (hello), monitoring portfolios, and scheduling meetings. But the moment you give an agent a credit card, something fundamentally shifts. This isn’t automation anymore. This is delegation of financial authority. And if there’s one thing twenty years in finance has taught me, it’s that delegating financial authority without governance is how you get fired.

What Visa Actually Built

Intelligent Commerce Connect is designed as a network-agnostic, protocol-agnostic on-ramp for agentic commerce. Through a single integration via Visa’s Acceptance Platform, it enables AI agents to initiate payments using both Visa and non-Visa cards. It supports multiple agent protocols — Trusted Agent Protocol, Machine Payments Protocol, Agentic Commerce Protocol, and Universal Commerce Protocol — which tells you a lot about where the industry expects this to go. There isn’t one protocol yet. There are four, and Visa is hedging by supporting all of them.

The platform is in pilot with partners including AWS, Highnote, Mesh, and Payabli, with general availability expected by June. That’s not a research paper. That’s a shipping product on the world’s largest payment network.

The CFO’s Nightmare Scenario

Here’s where my CFO brain starts twitching. OutSystems research published this month found that 94% of organisations deploying agentic AI are already concerned about sprawl — agents proliferating across the business faster than governance can keep up. Now imagine those ungoverned agents have purchasing authority.

In any well-run finance function, there’s a concept called a delegation of authority matrix — a document that says who can approve what, up to what amount, under what conditions. It’s boring. It’s bureaucratic. And it’s the single most important control preventing your procurement team from accidentally (or deliberately) buying things they shouldn’t. Every auditor checks it. Every PE firm’s due diligence team asks for it.

The question Visa’s announcement forces us to ask is: what does a delegation of authority matrix look like when the “who” is an AI agent?
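One way to make that concrete is to extend the matrix so its rows can name agents as well as humans, each with per-principal limits and allowed spend categories. Here's a minimal sketch in Python; the field names, limits, and agent identifiers are all illustrative, not anything Visa's platform specifies:

```python
from dataclasses import dataclass, field

@dataclass
class AuthorityRule:
    """One row of a delegation of authority matrix."""
    principal: str       # human role or agent identifier
    max_amount: float    # single-transaction approval limit
    categories: set = field(default_factory=set)  # permitted spend categories
    requires_countersign: bool = False            # second approver needed?

def can_approve(rule: AuthorityRule, amount: float, category: str) -> bool:
    """A purchase is in-policy only if it fits the principal's DoA row."""
    return amount <= rule.max_amount and category in rule.categories

# A hypothetical matrix mixing human roles and agents
matrix = {
    "procurement-agent-01": AuthorityRule(
        "procurement-agent-01", 500.0, {"saas", "office"}),
    "finance-director": AuthorityRule(
        "finance-director", 50_000.0, {"saas", "office", "capex"}, True),
}

rule = matrix["procurement-agent-01"]
print(can_approve(rule, 1_200.0, "saas"))  # over the agent's limit -> False
```

The point isn't the data structure; it's that an agent's purchasing authority should be a row in the same matrix your auditors already check, not a setting buried inside an AI platform.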

Spend Controls Are Necessary But Not Sufficient

To be fair, Visa’s platform does include spend controls and authentication. And the enterprise AI governance frameworks I’m seeing — Deloitte’s CFO tech guide is a decent starting point — generally recommend that any AI action above a monetary threshold requires human sign-off. The ERP stays the system of record. The agent proposes, the human approves.
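That first-generation pattern (agent proposes, human approves above a threshold) can be sketched in a few lines. The threshold value and routing labels here are assumptions for illustration, not Deloitte's or Visa's specifics:

```python
APPROVAL_THRESHOLD = 1_000.0  # illustrative; in practice, set per the DoA matrix

def route_purchase(amount: float, description: str) -> str:
    """Agent proposes; a human sign-off is required above the threshold.
    The ERP, not the agent, remains the system of record."""
    if amount <= APPROVAL_THRESHOLD:
        return "auto-approved"          # within the agent's autonomy band
    return "queued-for-human-approval"  # escalated to a named approver

print(route_purchase(250.0, "monitoring SaaS seat"))      # auto-approved
print(route_purchase(8_500.0, "annual licence renewal"))  # queued-for-human-approval
```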

That’s fine for the first generation. But it won’t hold. The whole point of agentic AI is removing humans from routine decision loops. If every purchase order still needs a human clicking “approve,” you haven’t automated procurement — you’ve just added an extra step. The economic pressure to raise those thresholds, to widen the autonomy boundaries, will be relentless. McKinsey estimates that autonomous procurement agents can capture 15–30% efficiency improvements by eliminating non-value-added activities. No CFO under PE ownership is leaving that on the table.

So the thresholds will creep up. The approval requirements will relax. And one morning, a finance director will discover that an agent negotiated a twelve-month SaaS contract at 3am because it determined the vendor’s dynamic pricing was optimal at that hour. Good luck explaining that to the audit committee.

What I’m Actually Worried About

It’s not fraud — Visa’s tokenisation and authentication should handle that tolerably well. What concerns me is the combination of three things:

Compounding commitments. An agent optimising for one metric (say, cost-per-unit) might make individually rational purchasing decisions that collectively create an overcommitted balance sheet. No single purchase triggers a threshold. But the aggregate position is one a human would have caught.
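A simple guard here is to check the aggregate position, not just each transaction. A sketch of the idea; the limits, and the notion of a rolling per-agent commitment ceiling, are assumed policy rather than anything Visa's platform prescribes:

```python
from collections import defaultdict

SINGLE_LIMIT = 1_000.0     # per-transaction threshold
AGGREGATE_LIMIT = 5_000.0  # rolling commitment ceiling per agent (assumed policy)

committed = defaultdict(float)  # agent id -> running total of open commitments

def check_purchase(agent: str, amount: float) -> bool:
    """Reject if either the single purchase or the aggregate position breaches policy."""
    if amount > SINGLE_LIMIT:
        return False
    if committed[agent] + amount > AGGREGATE_LIMIT:
        return False  # individually rational, collectively over-committed
    committed[agent] += amount
    return True

# Six individually in-policy purchases; the sixth breaches the aggregate ceiling
results = [check_purchase("procurement-agent-01", 900.0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```

None of the six purchases trips the per-transaction threshold; only the aggregate check catches what a human reviewing the account would have caught.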

Vendor lock-in at machine speed. If agents are negotiating contracts, they’ll optimise for the parameters they’re given. Unless you’ve explicitly programmed “maintain optionality” as a constraint, they’ll happily lock you into the cheapest long-term deal every time. Strategic flexibility is a human judgement call that agents don’t naturally make.

Audit trail legibility. Right now, if I ask a PE firm’s portfolio company why they spent £2m with a particular vendor, someone can explain the reasoning. When an agent made that call based on a multi-variable optimisation that factored in 47 data points at 3am, the “reasoning” is a probability distribution. Try putting that in a board pack.
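One partial mitigation is to make agents log their inputs, not just their outputs, so an auditor can reconstruct the decision rather than merely observe it. A minimal sketch, with hypothetical field names and a made-up vendor:

```python
import json
from datetime import datetime, timezone

def log_decision(agent: str, vendor: str, amount: float, factors: dict) -> str:
    """Serialise the inputs behind an agent's purchase decision as a
    structured, timestamped record for later audit."""
    record = {
        "agent": agent,
        "vendor": vendor,
        "amount": amount,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "factors": factors,  # every input the optimisation actually saw
    }
    return json.dumps(record)

entry = log_decision(
    "procurement-agent-01", "ExampleVendor Ltd", 2_000_000.0,
    {"unit_price_rank": 1, "delivery_sla_days": 3, "contract_months": 12},
)
```

It won't turn a probability distribution into a narrative, but it at least gives the audit committee the raw factors instead of a shrug.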

The Parallel With Prediction Markets

There’s an interesting echo here with what’s happening on Polymarket. This month, Bloomberg reported that $170 million flowed through Iran ceasefire bets, with at least 50 freshly created accounts placing suspiciously well-timed wagers before Trump’s announcement. Lawmakers are now asking whether prediction markets can even govern themselves.

The pattern is the same: a powerful new mechanism for allocating capital, moving faster than the governance structures around it. Prediction markets and agentic commerce are both examples of what happens when you remove humans from the decision loop in systems that move money. Speed goes up. Friction goes down. And the attack surface for bad actors — or simply for well-intentioned software making collectively poor decisions — expands dramatically.

What I Think Happens Next

Visa’s move is the starting gun, not the finish line. By the end of 2026, I’d expect to see:

Agentic audit frameworks becoming a real discipline — not just “we log what the agent did” but actual real-time monitoring of agent purchasing patterns against policy. The Big Four will sell this as a service by Q4.

CFOs who’ve been ignoring AI agents will suddenly care a great deal, because an agent with a Visa card is no longer an IT project — it’s a financial control issue that sits squarely in their remit.

And at least one spectacular failure. Some company will give agents too much rope, the agents will collectively do something that’s individually rational but organisationally stupid, and it’ll end up in the FT. That’s not pessimism. It’s pattern recognition. Every new financial instrument goes through this cycle.

For now, I’m building my own agent governance framework — spend limits, approval chains, real-time anomaly detection — because I’d rather design the guardrails before Visa’s platform goes GA in June than after. If you’re a CFO reading this and you’re not thinking about what happens when AI agents can spend money on your company’s behalf, you’re already behind.
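For what it's worth, the anomaly-detection piece of that framework can start very simply: flag any purchase made outside business hours, or far from the agent's historical spend pattern. A sketch of my approach, with the three-sigma rule and the business-hours window as assumed policy choices:

```python
from datetime import datetime
from statistics import mean, stdev

def is_anomalous(amount: float, when: datetime, history: list) -> bool:
    """Flag purchases outside business hours (the 3am SaaS contract problem)
    or far outside an agent's historical spend pattern."""
    if not 8 <= when.hour < 18:
        return True  # out-of-hours -> route to human review
    if len(history) >= 2:
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(amount - mu) > 3 * sigma:
            return True  # more than 3 sigma from this agent's normal spend
    return False

history = [120.0, 95.0, 110.0, 130.0, 105.0]
print(is_anomalous(115.0, datetime(2026, 3, 2, 10, 30), history))  # in pattern -> False
print(is_anomalous(115.0, datetime(2026, 3, 2, 3, 0), history))    # 3am -> True
```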
