AI Week in Review: Wall Street Bets on Claude, a Secret Model Breaks Everything, and Meta Scans Your Bones

It’s been a week that felt less like incremental progress and more like watching the tectonic plates shift under your feet. From Wall Street joint ventures to AI models that break software faster than humans can patch it, to social media giants scanning your bones to guess your age — the pace of change isn’t slowing. Here’s what mattered.

Anthropic Goes Wall Street: The $1.5bn Enterprise Play

The most significant structural move of the week: Anthropic announced a $1.5 billion joint venture with Blackstone, Hellman & Friedman, and Goldman Sachs to create an enterprise AI services firm built around Claude. Blackstone and Hellman & Friedman contribute roughly $300m each, Goldman puts in $150m, and there is additional backing from Apollo, General Atlantic, GIC, and Sequoia.

The pitch is blunt: most companies want AI but can’t hire the people to implement it properly. The new firm embeds Anthropic engineers directly inside client organisations — healthcare, manufacturing, financial services, real estate — and does the heavy lifting. It’s AI-as-a-managed-service, with a built-in distribution network of hundreds of portfolio companies across the investor base.

This isn’t just a commercial deal. It’s Anthropic buying legitimacy at scale. Having Goldman on the cap table means access to the kind of institutional relationships that take decades to build organically. The PE ecosystem gets a preferred route into frontier AI. Everyone wins — except, perhaps, the consulting firms that thought they’d corner this market themselves.

Anthropic’s Secret Weapon Found Thousands of Zero-Days. Then They Locked It Away.

While the enterprise venture grabbed headlines, the more quietly alarming story was Claude Mythos Preview — an unreleased Anthropic model that, during controlled testing, uncovered thousands of zero-day vulnerabilities across every major operating system and web browser. We’re talking about a 27-year-old bug in OpenBSD. A 17-year-old remote code execution flaw in FreeBSD. Flaws that have been sitting in production systems for decades, invisible to human auditors.

Anthropic won’t release Mythos publicly. Instead, they launched Project Glasswing — giving controlled access to AWS, Apple, Microsoft, Google, CrowdStrike, and Palo Alto Networks so defenders can patch before adversaries catch up. Dario Amodei has framed this as a 6–12 month window before hostile actors develop comparable capability.

Sit with that for a moment. An AI that can scan your entire codebase and identify critical vulnerabilities faster than any human team. It exists. It’s not theoretical. And the clock is ticking. Meanwhile, The Guardian notes that similar capabilities may already be accessible in public models. The era of “security through obscurity” is over — it just doesn’t know it yet.
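Mythos itself is locked away, but the shape of the task — a machine reading source code and flagging dangerous constructs — is easy to illustrate at a toy level. The sketch below is deliberately trivial pattern matching; a frontier model reasons about data flow and semantics, not regexes, and every name here is illustrative:

```python
import re

# Trivially simplified: flag classically unsafe C library calls by pattern.
# This is what linters did in the 1990s; it only gestures at what
# model-driven auditing does with actual semantic reasoning.
UNSAFE_CALLS = {
    "strcpy":  "no bounds check on destination buffer",
    "gets":    "unbounded read into buffer (removed in C11)",
    "sprintf": "no output length limit",
}

def scan(source: str) -> list[tuple[int, str, str]]:
    """Return (line number, call name, reason) for each suspicious call."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call, why in UNSAFE_CALLS.items():
            if re.search(rf"\b{call}\s*\(", line):
                findings.append((lineno, call, why))
    return findings

snippet = """
char buf[16];
gets(buf);
strcpy(buf, user_input);
"""
for lineno, call, why in scan(snippet):
    print(f"line {lineno}: {call} -- {why}")
```

The gap between this and Mythos is the gap between spell-check and a literary critic — but the workflow (scan, locate, explain) is the same shape, which is why defenders with controlled access can triage thousands of findings at all.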

The Free AI Model Was Always Going to Run Ads

OpenAI officially launched a self-serve advertising platform for ChatGPT this week. The Ads Manager is in beta, accepting CPC bids, offering conversion tracking, and — after removing the previous $50,000 minimum spend — opening the doors to SMBs and startups. Agency partners include Dentsu, Omnicom, Publicis, and WPP. OpenAI is reportedly targeting $2.5 billion in ad revenue this year and $100 billion by 2030.

There’s nothing surprising here — this was always the trajectory. You can’t build a product used by hundreds of millions of people and sustain it on subscription revenue alone. The more interesting question is what it does to the user experience. ChatGPT’s value proposition is that it helps you think. Ads introduce an incentive misalignment: the platform now has a reason to serve you answers that favour paying advertisers. OpenAI says conversations remain private and advertisers get aggregated data only. We’ll see how long that holds as the revenue pressure grows.

OpenAI Updates: GPT-5.5 Instant + Three New Voice Models

On May 5th, OpenAI rolled out GPT-5.5 Instant as the new default model for all ChatGPT users. The headline claim: 52.5% reduction in hallucinated claims on high-stakes prompts versus its predecessor. Better image analysis, stronger STEM reasoning, smarter web search integration.

Two days later, three new Realtime API audio models dropped: GPT-Realtime-2 (GPT-5-class reasoning in voice, handles interruptions naturally), GPT-Realtime-Translate (live translation across 70+ input languages into 13 output languages), and GPT-Realtime-Whisper (streaming speech-to-text for low-latency transcription). These are developer-facing, but they signal where the consumer product is heading: voice-first, real-time, multilingual. The text box is becoming a legacy interface.
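OpenAI hasn’t documented how GPT-Realtime-2 handles interruptions, but the core mechanic any realtime voice stack needs is barge-in: detect that the user has started talking while the assistant is speaking, cancel playback, and yield the floor. A minimal state-machine sketch of that idea (all class and method names here are hypothetical, not part of any OpenAI API):

```python
from enum import Enum, auto

class State(Enum):
    LISTENING = auto()
    SPEAKING = auto()

class VoiceSession:
    """Toy barge-in handler: if the user starts talking while the
    assistant is speaking, cut playback and return to listening."""

    def __init__(self):
        self.state = State.LISTENING
        self.events = []  # audit trail of what happened

    def assistant_starts_reply(self):
        self.state = State.SPEAKING
        self.events.append("assistant_speaking")

    def user_audio(self, is_speech: bool):
        if is_speech and self.state is State.SPEAKING:
            # Barge-in: stop TTS playback mid-utterance and yield the floor.
            self.events.append("playback_cancelled")
            self.state = State.LISTENING
        elif is_speech:
            self.events.append("user_speaking")

session = VoiceSession()
session.assistant_starts_reply()
session.user_audio(is_speech=True)   # user interrupts mid-sentence
print(session.state, session.events)
```

The hard part in production is everything this toy omits — voice activity detection over noisy audio, truncating the model’s own transcript at the cut point — which is exactly what “handles interruptions naturally” is claiming to have solved.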

Meta Is Scanning Your Skeleton to Guess Your Age

Here’s the one that should concern everyone paying attention to where this is heading. Meta has deployed AI systems on Instagram and Facebook that analyse photos and videos for height and bone structure to estimate a user’s age range. The stated purpose is child protection — identifying under-13 accounts that lied during sign-up. Meta insists it’s not facial recognition, and that no individual is identified, only demographic characteristics inferred from images.

Let’s be clear about what’s actually happening here. Meta is scanning biometric characteristics — physical attributes of your body — across every image you post, without explicit consent, to build inferences about you. The “it’s not facial recognition” framing is technically accurate and completely misleading. You don’t need to identify someone’s face to extract sensitive personal data from their body.

Child safety is a legitimate concern. But “protecting children” has become the universal justification for mass biometric surveillance. Once the infrastructure exists to scan bone structure at scale, the question isn’t whether it will be used for other purposes — it’s when, and for what. The answer to child safety online is age verification at the platform level with privacy-preserving cryptographic proofs, not AI that scans every image you’ve ever posted looking for physical clues about your body. Meta has chosen the surveillance path because it doubles as a data enrichment exercise. Don’t mistake compliance for innovation.
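The alternative sketched above — privacy-preserving age verification — is simple at its core: a trusted issuer (a bank or government ID service, say) signs a minimal claim like “over 13,” and the platform verifies the signature without ever seeing a birthdate, let alone an image. A minimal sketch, with HMAC standing in for a real asymmetric signature scheme purely to keep the example self-contained (all names and keys here are illustrative):

```python
import hmac, hashlib, json

# Stand-in for the issuer's signing key. A real deployment would use an
# asymmetric scheme so the platform holds only a public key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_token(user_id: str, over_13: bool) -> dict:
    """Trusted issuer signs only the boolean claim -- no birthdate,
    no biometrics, nothing else leaves the issuer."""
    claim = json.dumps({"sub": user_id, "over_13": over_13}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_check(token: dict) -> bool:
    """Platform verifies the signature and learns exactly one bit."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return json.loads(token["claim"])["over_13"]

token = issue_token("user-42", over_13=True)
print(platform_check(token))  # True: age confirmed, birthdate never shared
```

The point of the design is the information boundary: the platform gets one verifiable bit, and no scan of anyone’s body is ever involved. Full zero-knowledge schemes go further still, hiding even which issuer vouched for you.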

Big Tech Hands Washington the Keys

Google, Microsoft, and xAI agreed this week to give the US government early access to their frontier AI models before public release. The evaluations will be conducted by the Commerce Department’s Center for AI Standards and Innovation (CAISI), focused on cybersecurity, biosecurity, and chemical weapons risk assessment. This extends arrangements that OpenAI and Anthropic have had in place since 2024.

The framing is collaborative: industry and government working together to assess risk before deployment. The reality is more complex. Governments don’t just evaluate — they influence. Pre-deployment access means pre-deployment pressure. Any model that fails a government “evaluation” faces regulatory consequences, creating a quiet veto power over what capabilities reach the public. That’s a significant structural shift, and it’s happening with almost no public debate. The Trump administration has signalled interest in making this mandatory. When governments get to decide which AI capabilities are safe to release, the definition of “safe” will inevitably drift toward “politically acceptable.”

Anthropic’s Valuation Math Is Getting Ambitious

Separate from the Wall Street joint venture, reports emerged this week that Anthropic is approaching $45 billion in annualised revenue and targeting a $900 billion valuation in its next funding round (roughly 20x that revenue figure), potentially eclipsing OpenAI. For context, the company was valued at $380 billion after its $30 billion Series G in February. The growth trajectory, if real, is extraordinary. The question is whether enterprise AI services revenue is durable or whether it’s being front-loaded by companies experimenting rather than embedding. The joint venture with Blackstone is partly an answer to that question: lock in enterprise clients with managed service contracts and make the revenue sticky.

Zuckerberg Clones Himself for His Employees

And finally — the story that is equal parts fascinating and unsettling. Meta is building a photorealistic 3D AI avatar of Mark Zuckerberg to interact with employees. The digital twin will mimic his voice, tone, mannerisms, strategic thinking, and decision-making style, allowing any of Meta’s 79,000 employees to essentially “meet with the boss” at scale. Zuckerberg is reportedly personally involved in training and testing it.

File this under: things that seemed like science fiction eighteen months ago. A CEO creating a simulacrum of himself to manage employee communications is either visionary efficiency or something from a Black Mirror episode, depending on your disposition. The practical question is authenticity — if employees know they’re talking to an AI trained on Zuckerberg’s patterns, do they trust the outputs? And what happens when the avatar gives advice that the real Zuckerberg would never have given? The HR implications alone are genuinely novel territory.

The Pattern This Week

Strip back the individual stories and the theme is consistent: AI is becoming infrastructure. Not a tool you pick up and put down — infrastructure that runs underneath everything, monitoring it, optimising it, and making decisions about it. The Anthropic/Wall Street venture is infrastructure for enterprise deployment. Mythos is infrastructure for software security. ChatGPT ads are infrastructure for commercial discovery. Meta’s age detection is infrastructure for population monitoring, dressed in child-safety clothing.

Infrastructure is hard to dismantle once it’s in place. The decisions being made this week about governance, privacy, and commercial incentives will define the conditions we operate in for the next decade. Pay attention to who is making those decisions — and who isn’t in the room.
