This Week in AI — 15-21 March 2026

Nvidia wants you to have an “OpenClaw strategy.” Trump wants states to stop regulating AI. And Anthropic’s own experiment in using Claude to fix Claude shows exactly why we still need humans in the loop.

1. Nvidia Declares Every Company Needs an “OpenClaw Strategy”

At Nvidia’s GTC conference this week, CEO Jensen Huang delivered a 2.5-hour keynote projecting $1 trillion in AI chip sales through 2027. But buried in the product announcements was a strategic directive: every company needs an “OpenClaw strategy.”

What happened: Nvidia positioned AI agent infrastructure — the ability for AI systems to take autonomous actions across tools and platforms — as foundational to the next wave of enterprise AI. The company announced partnerships across autonomous vehicles, robotics, and even Disney theme parks.

Mark’s take: This isn’t about OpenClaw specifically; it’s Nvidia signalling that stateless chatbots are dead. If you’re building AI into your business and haven’t thought about persistence, tool access, and orchestration, you’re already behind. The race is shifting from “who has the best model” to “who can actually deploy agents that do things.” And Nvidia just bet a trillion dollars on that thesis.

Source: TechCrunch Equity

2. WordPress.com Goes All-In on AI Agents

WordPress.com announced it will now let AI agents draft, edit, publish, and manage entire websites via natural language commands. With WordPress powering 43% of all websites, this could reshape how the web gets built.

What happened: Using Model Context Protocol (MCP), customers can now connect AI clients like Claude or ChatGPT to their WordPress sites. AI agents can create posts, fix SEO metadata, manage comments, restructure categories — basically everything short of choosing the domain name. All changes require user approval, and AI-written posts default to draft status.
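The approval workflow is the load-bearing detail here. A minimal sketch of the pattern in plain Python — hypothetical names, not WordPress.com’s actual API — might look like this: the agent can create content, but nothing goes live until a separate human step flips the status.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    body: str
    status: str = "draft"  # AI-written posts default to draft status

@dataclass
class Site:
    posts: list = field(default_factory=list)

    def agent_draft(self, title: str, body: str) -> Post:
        """An AI agent (e.g. connected over MCP) may create content, but only as a draft."""
        post = Post(title, body)
        self.posts.append(post)
        return post

    def approve_and_publish(self, post: Post) -> None:
        """Only an explicit human approval step can publish."""
        post.status = "published"

site = Site()
post = site.agent_draft("Hello, agents", "Drafted by an AI client.")
print(post.status)   # -> draft
site.approve_and_publish(post)
print(post.status)   # -> published
```

The design choice worth copying: the agent-facing method physically cannot set `status="published"`, so the human gate is enforced by the API surface, not by policy.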

Mark’s take: This is both exciting and terrifying. It massively lowers the barrier to launching and maintaining websites — great for small businesses, solopreneurs, and anyone without a dev team. But it also risks flooding the web with machine-generated content that looks professional but lacks genuine insight. The saving grace? Approval workflows. If WordPress enforces them properly, humans stay in the loop. If they don’t, we’re about to see what an AI-written web actually looks like.

Source: TechCrunch

3. Trump’s AI Framework: Federal Power Grab Dressed as Innovation

The Trump administration unveiled a legislative framework for AI regulation that preempts state laws, shifts child safety responsibility to parents, and offers AI companies broad liability shields.

What happened: The framework proposes a “minimally burdensome national standard” that blocks states from regulating AI development, citing national security and interstate commerce. It emphasizes parental controls over platform accountability, uses vague language around copyright (“fair use” for training data), and focuses on preventing government censorship rather than platform moderation.

Mark’s take: This is accelerationist policy written by venture capitalists. States like New York and California were moving faster on AI safety (RAISE Act, SB-53) precisely because federal regulators were asleep at the wheel. Now the White House wants to centralise power in Washington while gutting enforcement. The child safety piece is especially cynical — putting the burden on parents while giving platforms a pass. If you’re an AI company, this is Christmas. If you’re everyone else, prepare for the Jevons Paradox: easier AI means more AI, which means more complexity, more risks, and more breakage.

Source: TechCrunch

4. Anthropic vs Pentagon: The First Amendment Fight That Could Define AI

Anthropic filed court declarations pushing back on the Pentagon’s claim that the company poses an “unacceptable risk to national security.” The filings reveal that the DOD told Anthropic the two sides were “nearly aligned” one day after designating it a supply-chain risk.

What happened: Anthropic’s Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy submitted sworn statements disputing the government’s technical claims. They argue the Pentagon never raised its core objections during negotiations, that Anthropic has no “kill switch” for deployed models, and that the designation was retaliation for the company’s refusal to allow mass surveillance or autonomous lethal weapons.

Mark’s take: This is the AI industry’s defining legal battle. If the government can label a company a national security threat for refusing military use cases, every AI firm will face a choice: comply or get frozen out of federal contracts. Anthropic is betting on the First Amendment — that its AI safety principles are protected speech. The timeline Heck laid out is damning: Pentagon says “we’re close,” finalizes the risk designation anyway, then publicly says negotiations are dead. That’s not national security; that’s leverage. Watch this case closely. The precedent will shape every AI-defense relationship for the next decade.

Source: TechCrunch

5. Anthropic Uses Claude to Fix Claude — And Learns Why AI Can’t Replace SREs

At QCon London, Anthropic’s Alex Palcuie revealed his team uses Claude for incident response. The results? AI is brilliant at observation but catastrophically bad at distinguishing correlation from causation.

What happened: Palcuie showed how Claude reads logs at “the speed of I/O,” caught a fraud ring during a New Year’s Eve outage, and writes SQL queries in seconds. But it also repeatedly misdiagnosed a cache failure as a capacity problem, delivered “80% convincing” postmortems with wrong root causes, and lacks the “scar tissue” of experienced site reliability engineers.
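The correlation trap Palcuie describes is easy to reproduce. A toy sketch with invented numbers (not Anthropic’s data): request volume and error rate climb together, so a metrics-only diagnosis blames capacity, while the actual trigger is a quietly collapsing cache hit rate pushing load onto the origin.

```python
# Minute-by-minute metrics during a hypothetical incident (invented data).
requests   = [100, 120, 150, 400, 420, 430]          # traffic climbs
cache_hits = [0.95, 0.94, 0.93, 0.20, 0.18, 0.15]    # cache quietly collapses
errors     = [1, 1, 2, 40, 45, 50]                   # errors explode

# The correlation-only diagnosis: "requests went up, then errors happened".
naive_cause = "capacity" if requests[-1] > 2 * requests[0] else "unknown"

# The causal story: error spikes line up exactly with cache-miss minutes.
miss_driven = all((e > 10) == (h < 0.5) for e, h in zip(errors, cache_hits))

print(naive_cause)   # -> capacity  (convincing, and wrong)
print(miss_driven)   # -> True      (errors track cache misses)
```

Both signals are “80% convincing” on their own; only the engineer who knows to pull the cache metric gets the root cause right.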

Mark’s take: This is the honesty the AI industry needs more of. Claude is phenomenal at the grunt work — parsing logs, spotting patterns, writing queries. But it fundamentally doesn’t understand why systems fail. It sees “requests went up, then errors happened” and concludes causation. A human SRE with battle scars knows that’s almost never the full story. Palcuie’s warning about skill atrophy is spot-on: if we let AI handle the easy stuff, will the next generation of engineers have the instincts to solve the hard stuff? The Jevons Paradox applies here too — better tools mean more complexity, which means weirder failures, which means humans still matter.

Source: The Register

6. UK Backs Down on AI Copyright Grab After Creative Revolt

The UK government abandoned plans to let AI companies scrape copyrighted material by default after Paul McCartney, Elton John, Coldplay, and other artists pushed back.

What happened: Science minister Liz Kendall said “we have listened” and confirmed the government “no longer has a preferred option.” Instead of an opt-out copyright exception for AI training, the UK will pursue market-led licensing and monitor litigation. A pilot platform called Creative Content Exchange launches this summer to test commercial licensing models.

Mark’s take: This is what happens when governments actually consult the people whose livelihoods are on the line. The original proposal was Silicon Valley wishful thinking: let AI companies hoover up everything, make creators opt out, call it innovation. Artists called the bluff. Now the UK is betting on licensing markets instead of regulatory carve-outs. Whether that works depends on enforcement — can individual creators actually negotiate with billion-dollar AI labs? The pilot will tell us. But at least the government blinked before handing over the keys.

Source: The Register

Looking Ahead

This week crystallised three tensions that will define AI’s next phase: centralisation vs state experimentation (Trump framework), principle vs government leverage (Anthropic v Pentagon), and automation vs human judgment (Claude SRE story). The through-line? AI is getting more powerful, but the hard problems — fairness, accountability, root cause analysis — still need humans.

If you’re building with AI, ask yourself: do you have an agent strategy, or are you still treating LLMs like glorified autocomplete? The companies betting on the latter are about to get left behind.

Follow along at markhendy.com for weekly AI analysis, CFO insights, and contrarian takes on where this is all heading.
