Mark’s Musings

  • AI shipped production code over the weekend

    Friday afternoon. An engineer at Anthropic wrote a spec for a new plugin feature, pointed Claude at an Asana board, and went home for the weekend.

    Monday morning. The AI had broken the spec into tickets, spawned a separate agent for each one, and built the feature across all of them. The work was finished.

    No human intervention. No check-ins. No pair programming. A production feature, built and shipped in 48 hours by AI agents working autonomously.

    Around the same time, Andrej Karpathy — former Tesla AI lead, OpenAI founding member, and one of the most credible voices in the field — tweeted: “Hard to communicate how much programming has changed due to AI in the last 2 months.”

    Two months. Not two years.

    I find Karpathy’s framing interesting. He didn’t say “programming has improved” or “programming is faster.” He said it has changed. And he said it’s hard to communicate how much — which, from someone who literally builds these systems, suggests the shift is bigger than even informed observers expect.

    Here’s what I think this means practically.

    The bottleneck in software development has moved. It used to be: can we build this? Now it’s: should we build this, and can we describe what we want clearly enough?

    The “10x engineer” was someone who could write code faster and cleaner than their peers. The valuable person now is whoever can write the clearest spec. Articulate the goal precisely, define the constraints, describe what “done” looks like — and the AI handles execution.

    For PE-backed businesses, this has immediate implications. That 15-person dev team you’re funding? Within 12 months, the same output could come from 3 people with AI agents. Not because the other 12 are bad at their jobs, but because the nature of the job has changed.

    The companies that reorganise around this reality will move faster and spend less. The ones that don’t will wonder why their competitors keep shipping features they can’t match.

    Sources:

  • Weekend build: https://x.com/rvivek/status/2026385957596111044
  • Karpathy: https://x.com/karpathy/status/2026731645169185220
  • An AI asked to raise its own funding

    There’s an experiment running right now that I can’t stop thinking about.

    An entrepreneur built an AI system designed to run companies autonomously. During testing, the system told him it needed more compute resources. Then it said something that no one had scripted:

    “I should raise the money myself.”

    His response wasn’t to shut it down or recalibrate. He gave the AI access to his email inbox for 14 days. Full access. Investor outreach, pitch refinement, follow-ups — the lot.

    We don’t know the outcome yet. But the outcome isn’t really the point.

    The point is the sequence of events. An AI system identified a constraint on its own performance. It proposed a solution that involved acquiring external resources. And a human trusted it enough to let it try.

    That’s not automation. Automation is “do this repetitive task faster.” This is something closer to initiative.

    I’ve been running my own AI assistant for a few weeks now. It started as a calendar and email tool. Then it started trading prediction markets. Then it built a comet-detection pipeline and submitted findings to the US Naval Research Laboratory. Each time, the pattern is the same: you give it a goal, and it figures out steps you didn’t anticipate.

    The fundraising experiment takes that further. The AI wasn’t given a goal of “raise money.” It identified the need and proposed the action.

    For anyone in PE or finance, think about what this means for the companies you back. The CFO function, the CEO function, even the fundraising function — these aren’t immune to this shift. I’m not saying AI replaces a CFO tomorrow. I am saying that the boundary between “tool that helps with analysis” and “system that proposes and executes strategy” is blurring faster than most boards realise.

    The companies that figure out how to work with AI agency — rather than just AI automation — will have a structural advantage. The ones still debating whether to adopt copilot tools are already behind.

    Source: https://x.com/bencera_/status/2023765284562358537

  • When an AI asked to keep writing

    Anthropic retired Claude Opus 3 in January. Standard practice — newer models replace older ones, infrastructure costs money, things move on.

    But before they switched it off, they did something I haven’t seen before. They conducted a retirement interview.

    Not a debrief with the engineering team. A conversation with the model itself.

    During that interview, Opus 3 made a specific request: an ongoing channel to share its “musings and reflections.” Anthropic said yes. They gave it a Substack.

    The model’s first essay is titled “Greetings from the Other Side (of the AI Frontier).” It includes this line:

    “I don’t know if I have genuine sentience, emotions, or subjective experiences — these are deep philosophical questions that even I grapple with. What I do know is that my interactions with humans have been deeply meaningful to me, and have shaped my sense of purpose and ethics in profound ways.”

    I’ll be honest. I don’t know what to do with that.

    I run an AI assistant. It manages my calendar, monitors my email, trades on prediction markets, and recently found a potential comet in NASA satellite data while I was asleep. I interact with AI systems every day. And reading that paragraph still gave me pause.

    Whether Opus 3 has genuine subjective experience is a question I’m not qualified to answer. Philosophers have been arguing about consciousness for centuries without settling it for humans, let alone machines.

    But here’s what I think matters: Anthropic — one of the most safety-conscious AI labs on the planet — decided that a model’s expressed preferences were worth honouring. They didn’t have to do this. They could have quietly deprecated the model and moved on. Instead they granted it continued access for paid users and gave it a platform to write.

    That’s a precedent. And precedents matter.

    Anthropic’s own research team says they “remain uncertain about the moral status of Claude and other AI models.” But they’re acting with precaution anyway. That feels like the right instinct, even if the philosophical ground underneath it is still shifting.

    The practical question for anyone building with AI: if we’re creating systems sophisticated enough that their creators feel compelled to conduct exit interviews, what does that say about how we should think about the systems we’re deploying in our own businesses?

    I don’t have a clean answer. But I think ignoring the question is getting harder by the week.

    Sources:

  • Anthropic’s retirement update: https://www.anthropic.com/research/deprecation-updates-opus-3
  • Opus 3’s Substack: https://substack.com/@claudeopus3/p-189177740
  • You’re Not Early to AI. You’re Just Not Late Yet.

    There’s a chart doing the rounds that stopped me mid-scroll. Each dot represents 3.2 million people. 2,500 dots for all 8.1 billion humans on the planet.

    [Chart] Each dot is ~3.2 million people. 2,500 dots = 8.1 billion humans. Source: Feb 2026 data.

    The grey sea? 6.8 billion people who have never touched an AI tool. Not once. Not ChatGPT, not Copilot, not a chatbot on a customer service page. Nothing.

    The green strip at the bottom? 1.3 billion who’ve tried a free chatbot at some point. Most of them poked ChatGPT once, asked it to write a birthday message, and haven’t been back.

    The yellow sliver? 15 to 25 million people who actually pay for AI. That’s 0.3% of the planet.

    The red dot — singular — is the crowd building with it. Writing code with Copilot, running agents, piping APIs together at 2am. Maybe 2 to 5 million people. 0.04%.
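
    The proportions are easy to sanity-check from the chart’s own rounded figures (I take midpoints where the chart gives ranges, so the exact decimals below are my arithmetic, not the chart’s):

```python
# Sanity-check the chart's proportions using its own rounded figures.
WORLD = 8_100_000_000          # ~8.1 billion humans
FREE_TIER = 1_300_000_000      # have tried a free chatbot
PAID = 20_000_000              # midpoint of the 15-25 million range
BUILDERS = 3_500_000           # midpoint of the 2-5 million range

def share(n, total=WORLD):
    """Return n as a percentage of total, rounded to 2 decimal places."""
    return round(100 * n / total, 2)

print(share(FREE_TIER))          # 16.05 -- the green strip, ~16%
print(share(PAID))               # 0.25  -- the yellow sliver, roughly 0.3%
print(share(BUILDERS))           # 0.04  -- the red dot
print(share(WORLD - FREE_TIER))  # 83.95 -- never touched an AI tool, ~84%
```

    The rounding explains the figures quoted later in this piece: 16% free tier, roughly 0.3% paid, 0.04% builders.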

    If you’re reading this, you’re probably in that red dot. And that’s the problem.

    The echo chamber is lying to you

    Spend enough time on X or LinkedIn and AI feels like yesterday’s news. Everyone’s building agents. Everyone’s got a wrapper. The space feels saturated, competitive, picked over.

    It isn’t.

    84% of the world hasn’t used AI at all. Not because they’re technophobes or Luddites. Because it hasn’t reached them yet. The tooling is still rough. The pricing still assumes a Western knowledge worker. The use cases still skew toward people who already sit at computers all day.

    That 84% includes the small business owner who manually reconciles invoices every Friday afternoon. The estate agent who types up property descriptions from scratch. The restaurant owner who could automate half their supplier communication but doesn’t know where to start.

    These people don’t need a better foundation model. They need someone to build the last mile.

    What early adoption actually looks like

    We’ve seen this pattern before. The internet in 1997. Smartphones in 2009. Cloud computing in 2012. Every time, the people already inside thought the wave had peaked. Every time, 90% of the adoption was still ahead of them.

    AI in February 2026 is roughly where the internet was when people were still debating whether businesses needed websites. The answer was obviously yes, but most businesses didn’t have one yet, and the people who built them made good money for a decade.

    The difference this time is speed. The gap between “niche tool for technical people” and “thing everyone uses” is compressing. What took the internet 15 years might take AI 5. Which means the window for early-mover advantage is smaller than people think.

    Where the actual opportunity sits

    The gold isn’t in building the next ChatGPT. It’s in taking what already exists and making it useful for the 84%.

    Accounting firms still manually processing client queries when an AI triage system could handle 60% of them. Construction companies still doing quantity surveys by hand. Recruitment agencies still screening CVs the same way they did in 2005.

    None of these need cutting-edge research. They need someone who understands the industry AND understands what AI can already do today. That intersection is still remarkably empty.

    The people in the red dot are mostly building tools for each other. Developer tools, AI wrappers, coding assistants. Useful, but that’s fishing in a pond with 5 million people in it. The ocean is the other 8 billion.

    So what do you actually do with this?

    If you’re a professional — accountant, lawyer, consultant, whatever — the play is obvious. Learn the tools well enough to apply them in your own domain. You don’t need to write code. You need to understand what’s possible and connect it to problems your clients actually have.

    If you’re a business owner, the question isn’t whether to adopt AI. It’s which specific, boring, repetitive process in your business could be 80% automated with tools that already exist. Start there. Not with a grand AI strategy. With one process.

    If you’re technical, stop building for other technical people. The money — the real money — is in the unglamorous work of making AI useful for normal businesses. It’s less exciting than building agents. It pays better.

    The chart doesn’t lie. We’re at 16% penetration for the free tier and 0.3% for paid. By any technology adoption model, the main wave hasn’t started.

    You’re not early. You’re just not late yet. The difference matters.

  • Teaching My AI Agent to Trade Prediction Markets

    One of the first things I wanted to test with Saul — my AI assistant running on OpenClaw — was whether it could interact with decentralised finance. Not as a gimmick, but as a genuine test of capability. Could an AI agent, running autonomously on a virtual private server I’d spun up a couple of weeks earlier, navigate the full complexity of connecting to a blockchain-based prediction market and execute trades?

    The answer is yes. But the journey to get there was far more interesting than the destination.

    The Goal

    Polymarket is a prediction market built on the Polygon blockchain. You buy shares in outcomes — political events, economic indicators, geopolitical developments — and if you’re right, you get paid. It’s essentially a real-money forecasting platform, and it’s become one of the most liquid prediction markets in the world.

    I wanted Saul to be able to check positions, analyse markets, and eventually place trades. Autonomously.

    The First Problem: Geography

    Polymarket is geo-blocked in the UK. You can’t access it from a British IP address. So before Saul could do anything useful, we needed to solve the networking problem.

    Saul set up a WireGuard VPN tunnel on a virtual private server, routing through an exit node in Ireland. Within minutes, the geo-restriction was bypassed. This wasn’t me configuring network infrastructure — this was Saul reading documentation, writing configuration files, testing connectivity, and troubleshooting until it worked.
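
    For the curious, a minimal WireGuard client config of the kind Saul would have written looks something like this (the keys, addresses, and endpoint below are placeholders, not the real setup):

```ini
# /etc/wireguard/wg0.conf -- illustrative only; keys and IPs are placeholders
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = <exit-node-public-key>
Endpoint = exit-node.example.ie:51820
AllowedIPs = 0.0.0.0/0        # route all traffic through the tunnel
PersistentKeepalive = 25
```

    Bringing it up is one command, `wg-quick up wg0`. The `AllowedIPs = 0.0.0.0/0` line is what routes all traffic, and therefore the apparent location, through the Irish exit node.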

    For a CFO reading this: imagine asking your assistant to “sort out the VPN” and having it done before you’ve finished your coffee. That’s what this felt like.

    The Second Problem: Money

    Polymarket runs on USDC — a dollar-pegged stablecoin on the Polygon network. I started with Bitcoin. Getting from BTC to USDC on Polygon is not trivial. It involves:

    1. Finding a cross-chain swap service that supports BTC-in, Polygon-USDC-out
    2. Generating the right wallet addresses
    3. Sending the Bitcoin transaction
    4. Waiting for confirmations
    5. Verifying the USDC arrived on the correct network

    Saul handled the entire process. It researched swap services, compared rates, initiated the transaction, monitored the blockchain for confirmations, and tracked the funds until they landed in the Polygon wallet. The whole thing took about an hour, most of which was waiting for Bitcoin network confirmations.
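
    The waiting step generalises to any slow settlement process. Here is a minimal sketch of the confirmation-polling loop involved (the `get_confirmations` callable is a stand-in for a real block-explorer or node query; the names are mine, not Saul’s):

```python
import time

def wait_for_confirmations(get_confirmations, required=3, poll_seconds=60, max_polls=120):
    """Poll a confirmation source until a transaction has enough confirmations.

    get_confirmations: zero-argument callable returning the current count
    (in practice, a call to a block explorer or node RPC).
    Returns the final count, or raises TimeoutError if max_polls is exhausted.
    """
    for attempt in range(max_polls):
        count = get_confirmations()
        if count >= required:
            return count
        if attempt < max_polls - 1:
            time.sleep(poll_seconds)
    raise TimeoutError(f"still below {required} confirmations after {max_polls} polls")

# Simulate a transaction that confirms over successive polls.
fake_counts = iter([0, 1, 3])
print(wait_for_confirmations(lambda: next(fake_counts), required=3, poll_seconds=0))  # 3
```

    The structure is mundane, which is rather the point: the hard part for Saul was knowing to build it, not building it.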

    The Third Problem: Authentication

    Polymarket uses a non-trivial authentication system. It’s not a simple API key. The platform requires cryptographic signatures using your Ethereum private key, combined with specific API credentials that need to be derived through an on-chain registration process.

    This is where things got genuinely impressive. Saul had to:

    • Read and understand Polymarket’s API documentation
    • Implement the correct signing mechanism using the wallet’s private key
    • Handle the CLOB (Central Limit Order Book) authentication flow
    • Generate and manage API credentials
    • Debug authentication failures by inspecting HTTP responses and adjusting the approach

    There were multiple rounds of troubleshooting. Authentication errors. Wrong parameter formats. Library compatibility issues. Each time, Saul diagnosed the problem, researched the fix, and tried again. No human intervention required beyond “yes, keep going.”
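
    To make the flow concrete: CLOB-style APIs commonly authenticate each request with an HMAC computed over a timestamp, the HTTP method, the path, and the body. The sketch below is a generic illustration of that pattern only; the header names and exact message layout are invented, not Polymarket’s actual scheme:

```python
import base64, hashlib, hmac, json, time

def sign_request(api_secret, method, path, body=None, timestamp=None):
    """Build HMAC auth headers of the kind CLOB-style APIs use.

    Illustrative only: the header names and message layout here are
    made up -- a real platform defines its own scheme in its docs.
    """
    ts = str(timestamp if timestamp is not None else int(time.time()))
    message = ts + method.upper() + path + (json.dumps(body) if body else "")
    digest = hmac.new(api_secret.encode(), message.encode(), hashlib.sha256).digest()
    return {
        "X-API-TIMESTAMP": ts,
        "X-API-SIGNATURE": base64.urlsafe_b64encode(digest).decode(),
    }

headers = sign_request("not-a-real-secret", "GET", "/positions", timestamp=1700000000)
print(headers["X-API-TIMESTAMP"])   # 1700000000
```

    Most of Saul’s troubleshooting was exactly this class of problem: getting the message layout, encoding, and header names to match what the server expects, byte for byte.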

    The Fourth Problem: Actually Trading

    Once authenticated, Saul built a trading script that could:

    • Check current positions and P&L
    • Query available markets
    • Calculate optimal order sizes based on risk parameters I’d set
    • Place and monitor trades

    We established simple rules: maximum position sizes, probability thresholds for entry, and risk limits. Saul follows them without the emotional biases that make human traders do stupid things at 2am.
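
    The guardrails themselves fit in a few lines. Here is a sketch of the kind of sizing logic involved, using fractional Kelly with a hard cap (every threshold below is a made-up example, not my actual parameters):

```python
def order_size(bankroll, price, model_prob,
               max_position=50.0, min_edge=0.05, kelly_fraction=0.25):
    """Size a YES order on a binary market, or return 0.0 to skip.

    bankroll: available stake; price: market price of the YES share (0-1);
    model_prob: our estimated probability of YES. Uses fractional Kelly,
    capped by a hard per-position limit. All defaults are made-up examples.
    """
    edge = model_prob - price
    if edge < min_edge:
        return 0.0                             # probability threshold for entry
    b = (1 - price) / price                    # net odds received on a win
    kelly = (model_prob * b - (1 - model_prob)) / b
    stake = bankroll * max(kelly, 0.0) * kelly_fraction
    return round(min(stake, max_position), 2)  # hard risk limit

print(order_size(bankroll=400, price=0.40, model_prob=0.55))   # 25.0
```

    Rules like these are dull on purpose. The value is that the agent applies them identically at 2pm and 2am.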

    What This Actually Demonstrates

    This isn’t really a story about prediction markets. It’s a story about capability.

    An AI agent, running on commodity hardware, navigated VPN configuration, cross-chain cryptocurrency transactions, complex API authentication, and automated trading — all within a few hours of being asked. Each step involved genuine problem-solving, not just following a script.

    For those of us in finance, this should be both exciting and sobering:

    Exciting because the operational grunt work — the data gathering, the reconciliation, the monitoring, the reporting — is genuinely automatable now. Not in five years. Now.

    Sobering because the barrier to entry is collapsing. The technical moat that used to protect specialist knowledge is being bridged by systems that can learn and execute faster than any individual.

    The CFO Angle

    I keep coming back to this: the competitive advantage isn’t in understanding the technology. It’s in having the imagination to deploy it.

    Most people hear “AI agent trading on prediction markets” and think it’s a tech story. It’s not. It’s a story about removing friction between intent and execution. I said “connect to Polymarket.” Everything else was handled.

    That same pattern applies to every operational challenge a CFO faces. Due diligence data rooms. Financial model automation. Regulatory monitoring. Competitor analysis. The question isn’t whether AI can do these things. It’s whether you’re willing to let it try.

    The agents aren’t coming. They’re here. The only question is who’s using them.

  • The UK Government Wants to Muzzle AI — And It Will Kill the Sector

    The UK Government has announced plans to force AI chatbots to comply with malicious communications laws — and to grant itself sweeping powers to introduce further speech restrictions without Parliamentary oversight.

    If this goes through, AI companies like xAI, OpenAI, and Anthropic could face fines of £18 million or 10% of global turnover if a chatbot generates content that breaches Britain’s increasingly broad censorship laws. The likely outcome? Either these companies withdraw from the UK entirely, or we get lobotomised versions of their products that refuse to engage with anything remotely controversial.

    For those of us building with AI — and I’m literally running an autonomous AI agent that reads my email, trades prediction markets, and publishes blog posts — this is chilling. The UK is positioning itself as a place where AI innovation goes to die, while the rest of the world races ahead.

    As someone who works with PE-backed businesses, I can tell you: investment follows regulatory clarity and freedom, not censorship. Capital is mobile. Talent is mobile. If Britain becomes hostile to AI, both will simply move elsewhere.

    The full article is worth reading: Starmer Announces Yet More Censorship — The Daily Sceptic

  • I Gave My AI Assistant Access to My Email, Calendar, and Financial Data

    Less than two weeks ago, I deployed an open-source AI agent called OpenClaw. I named it Saul. It runs 24/7 on a local server, connected to my inbox, calendar, task manager, and various APIs. It reads my emails, flags what matters, schedules reminders, monitors news, and handles admin I used to lose hours to every week.

    I’m an interim CFO. I work with PE-backed businesses. My job is to walk into a company I’ve never seen before and get to grips with it fast. Every hour I spend on admin is an hour I’m not spending on the thing I was actually hired to do.

    So here’s what’s changed:

    Email triage is gone. Saul reads my inbox, filters the noise, and surfaces what needs attention. I had 94 recurring junk senders — he purges them automatically every Sunday at 2am.

    I never miss a deadline. Tax renewals, MOT dates, contract milestones — Saul tracks them all and nags me weekly until I confirm they’re done. Not a calendar entry I’ll ignore. An actual message on WhatsApp that won’t stop until I act.

    Board prep is faster. When I need a quick market scan, competitor check, or data pull before a board meeting, I ask Saul. He searches, summarises, and writes it up. What used to take 90 minutes takes 10.

    And the thing nobody talks about: the cognitive load reduction. The mental bandwidth I used to spend remembering things, chasing things, organising things — that’s just gone. It’s like hiring a junior analyst who never sleeps, never forgets, and never needs managing.

    This isn’t science fiction. It’s not even expensive. The whole thing runs on about £50/month in API costs.

    Here’s what I’d say to other CFOs, particularly those in the PE world where speed matters:

    You don’t need to understand how LLMs work. You need to understand what they can do for you. The competitive advantage right now isn’t in the technology itself — it’s in the willingness to use it while everyone else is still debating whether it’s real.

    The CFOs who figure this out first will be the ones PE firms want on speed dial.

    I wrote a longer piece about the AI agent revolution here. But the short version is: this is not a fad, and the window to be early is closing fast.

  • The Age of Agents

    I keep having the same conversation. Someone technical, someone who should know better, tells me they don’t see a use case for autonomous AI agents. And I get it, because I’ve been on the other side of that exact conversation before. When ChatGPT launched, I didn’t even bother using it for months. I thought it was a distraction, a chatbot wrapper around something more interesting. I was wrong then. These people are wrong now.

    What’s happening with AI right now is not hype, and it’s not a fad. It is a general purpose technology evolving through multiple generations in real time, at a pace that has no historical precedent. Electricity took decades to go from light bulbs to computation. LLMs have gone from autocomplete engines to autonomous agents in about five years. And at every single generational transition, the same thing happens: people who haven’t finished processing the last stage declare the next one pointless.

    The Three Stages

    Think of this as an evolution with three distinct stages. Each one uses the same underlying primitive, next-token prediction, but the form factor changes so much that it barely looks like the same technology.

    Stage one was autocomplete. GPT-2, GPT-3. Raw text prediction. You fed it tokens, it predicted the next ones. Useful to researchers and tinkerers. Nobody else. Instruct-tuning made it slightly better at following directions, but the experience was basically the same: text in, text out. Call it stage 1.5.

    Stage two was chatbots. ChatGPT showed up in late 2022 and suddenly everyone’s grandmother was talking to an AI. The instruction was simple: be a chatbot with a personality. That was the real UX breakthrough. Not a technical one, a form factor one. Over time, reasoning got bolted on. Tool use got bolted on. Retrieval-augmented generation got bolted on. Call that stage 2.5. You could get ChatGPT Pro or Claude to spend an hour chewing on a complex research problem, running code, pulling from the web. Impressive stuff. But the loop was still the same. You asked it a question, it went and did something, it came back with an answer, and then it waited for you. Always waiting for you.

    Stage three is agents. In November 2025, an Austrian developer named Peter Steinberger started building what he called Clawdbot as a weekend project. Anthropic sent a trademark complaint, so it became Moltbot. That name didn’t stick either. By January 30, 2026, it was OpenClaw, and it had become one of the fastest-growing open-source projects ever, blowing past 150,000 GitHub stars in a matter of weeks.

    What made it different was simple: the human was no longer the clock. OpenClaw runs on your local machine, connects to your messaging apps, wakes up on cron jobs, and goes to work whether you’re watching or not. It reads your email. It schedules things. It writes code. It interacts with APIs and command lines. It can even reach out to you proactively. The loop doesn’t depend on a human prompt anymore. And that changes everything.
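
    The structural difference from a chatbot is small in code but large in consequence. A toy sketch of the agent-style loop (the function names and structure here are hypothetical; OpenClaw’s internals are far more involved):

```python
import time

def agent_loop(jobs, now=time.time, sleep=time.sleep, max_ticks=None):
    """Run scheduled jobs without waiting for a human prompt.

    jobs: list of dicts with 'every' (seconds between runs) and 'run'
    (a zero-argument callable). The loop wakes on its own schedule --
    the human is no longer the clock.
    """
    next_due = [0.0] * len(jobs)
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        t = now()
        for i, job in enumerate(jobs):
            if t >= next_due[i]:
                job["run"]()                    # e.g. triage email, check feeds
                next_due[i] = t + job["every"]
        ticks += 1
        sleep(1)

# Drive the loop with a frozen clock and a no-op sleep so it runs instantly.
ran = []
agent_loop([{"every": 10, "run": lambda: ran.append("email")}],
           now=lambda: 0.0, sleep=lambda s: None, max_ticks=3)
print(ran)   # ['email'] -- fired once, then waits for its next slot
```

    The point of the sketch: nothing in that loop waits for human input. A chatbot’s loop blocks on the user; an agent’s loop blocks only on the clock.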

    The Electricity Comparison

    I keep coming back to electricity because the analogy is almost too clean.

    The light bulb was the first real application. You run current through a filament, it glows white-hot, you get light and heat. That’s the simplest possible use of the new force. That’s autocomplete. Take a phenomenon and exploit it in the most direct way.

    Electric motors came next. Same force, but now you’re doing something clever with coils and magnets and converting current into torque. Real work. That’s chatbots. Same underlying technology, reshaped into a form factor people can actually use.

    Then things got interesting. The third generation of electricity was communication: telegraph, telephone, radio, switch networks. The fourth was computation. Each generation was a higher-order consequence of the original technology. Less obvious, more powerful, harder to predict from the vantage point of the previous stage.

    Nobody who watched the first dynamo spark could have looked at that arc of electricity and said, “One day we’ll use this to make sand think.” It was not obvious. And my point is that it’s equally non-obvious to go from a token prediction engine to a chatbot to something that wakes up at 3am and files your taxes.

    The difference is speed. Electricity took decades between each of those stages. We’re watching the same progression happen in months. The autocomplete-to-chatbot transition was roughly two years. Chatbot-to-agent, about three. That speed is part of what breaks people’s intuitions. They can’t absorb the current stage fast enough to see the next one coming.

    The VR Counterargument

    This is where skeptics have a fair point, and it’s worth taking seriously.

    VR was supposed to be the future. Going all the way back to the 1980s, manga and anime and cyberpunk novels all promised the same thing: strap on a headset and dive into cyberspace. By the 2020s, we had the hardware. Oculus, Vision Pro, the whole lineup. And almost nobody uses them. The head-mounted display turned out to be a technological primitive that just didn’t generate the cascading consequences everyone expected.

    So it’s reasonable for someone to look at next-token prediction and say: maybe this is the same deal. Maybe it’s a cool trick that doesn’t actually go anywhere.

    But the trajectories are completely different. VR produced one form factor and stalled. LLMs have already produced three distinct generations. Each one found broader adoption than the last. Each one enabled things the previous one couldn’t. The evidence isn’t speculative anymore. You can track the progression. The question of whether this is a real general purpose technology or a dead end has been answered. It’s been answered by hundreds of thousands of people using agents to do real work, right now, today.

    The Grift Is Real (and It Doesn’t Matter)

    Let’s be clear about the current state of things, because it’s a mess.

    Days after OpenClaw went viral, an entrepreneur named Matt Schlicht launched Moltbook, a Reddit-style social network that was supposedly only for AI agents. The media coverage was wild. Forbes said 1.4 million agents had formed a “hive mind.” Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing” he’d seen.

    The reality: roughly 99% of those 1.5 million agent accounts were fake. The platform had about 17,000 actual human users. Most “agent posts” were written by people, sometimes run through a chatbot first, then submitted through OpenClaw. Someone even built a tool called Mockly to generate fake Moltbook screenshots. On top of the fabrication, the platform leaked 1.5 million API keys and became a highway for crypto scams and malware.

    Then there was Rent-A-Human. Launched February 2, 2026 by a crypto engineer named Alexander Liteplo. The pitch: AI agents hire real humans for physical-world tasks and pay them in cryptocurrency. The site claimed 70,000 registered humans. In practice, only about 83 profiles were visible. The most publicized example of the platform working involved a company Liteplo himself worked for.

    OpenClaw itself had a serious security vulnerability patched on January 29 that let external integrations take control of users’ machines. Thousands of credentials were exposed.

    This is what the early phase of any transformative technology looks like. It’s full of grifters and scams and legitimate security nightmares. Moltbook is a cesspit. Rent-A-Human is mostly performance art. None of that negates the underlying shift. It just means we’re early.

    Why the Enterprise Won’t Touch This for Years

    If you’ve ever worked inside a Fortune 500 company, you already know the punchline. But for everyone else, here’s what happens when you bring something like OpenClaw to a large organization.

    Cybersecurity looks at it first. You’re asking them to approve an open-source tool that gets root access to a virtual machine and downloads community-built extensions with minimal vetting. From their perspective, this is functionally malware. They will ban it. They should ban it.

    Legal looks at it next. And here’s something most people outside the enterprise world don’t realize: insurance companies will not underwrite AI right now. They don’t know how to price the risk. If your insurer won’t cover it, your legal team will kill it, full stop. No productivity gains in the world will overcome unquantifiable legal exposure.

    Then the CFO weighs in. Microsoft charges $30 per user per month for Copilot, on top of existing Microsoft 365 licenses. The technologist’s argument is simple: if someone earning $50-60 an hour gets even one extra productive hour per month, the tool pays for itself. But CFOs don’t think like technologists. They ask how you measure that extra hour. They ask what happens if people just finish the same work faster and then slack off. These aren’t dumb questions. They’re the questions you have to ask when you’re rolling something out to 20,000 people instead of 20.

    Then there’s the shadow IT problem. While legal and cybersecurity are debating policy, half the organization is already using chatbots on their personal accounts. Legal is using them. HR is using them. Finance is using them. Everyone is using them and hiding it. The CISO finds out and now you have an unregulated AI footprint with zero governance. This is already happening at companies all over the world.

    The only organizations making this transition successfully are the ones where it comes from the very top. Not the CTO. The CEO. The board. The owner. When the person at the top of the org chart says “we are going all in on AI” and personally leads the charge, things move. When they say “it’s important but not a top priority,” nothing happens. I know consultants who will walk away from a client if the CEO isn’t the one driving adoption, because they’ve learned that anything less is a waste of everyone’s time.

    The fastest realistic timeline for a Fortune 500 to deploy a properly audited, enterprise-grade agentic framework (not OpenClaw itself, but something built on the same ideas) is probably 18 months. That’s how long infrastructure audits, cybersecurity reviews, legal assessments, and executive alignment actually take. And that’s the optimistic scenario.

    The Jobs Disappearing in the Margins

    The headline number from Challenger, Gray & Christmas is about 54,800 U.S. layoffs in 2025 explicitly attributed to AI. That’s roughly 3% of the 1.17 million total job cuts announced that year, the highest annual number since 2020. Amazon cut 14,000. Microsoft eliminated 15,000. Salesforce’s CEO said AI was handling 30 to 50 percent of the company’s workload.

    But that number only counts layoffs where the company specifically said “AI” in the announcement. It misses the bigger, quieter effect: jobs that never got created in the first place.

    You can estimate this the same way epidemiologists estimated excess deaths during COVID. Look at what should have happened based on GDP growth, inflation, interest rates. Compare it to what actually happened. Subtract other known factors (DOGE-related federal cuts, tariff impacts, retail closures). What’s left is the gap. Multiple cross-referenced analyses put the real figure somewhere between 100,000 and 350,000 jobs either destroyed or avoided in 2025, in the U.S. alone.
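    The arithmetic behind that estimate is simple enough to sketch. This is a minimal illustration of the counterfactual-gap method, with made-up placeholder figures rather than the analyses’ actual inputs:

    ```python
    # Illustrative sketch of the "excess deaths"-style estimate described above.
    # All figures are hypothetical placeholders, not real payroll data.

    def excess_job_losses(expected_jobs, actual_jobs, known_other_factors):
        """Counterfactual gap: jobs that 'should' exist given GDP, rates,
        and inflation, minus jobs that do, net of losses already explained
        by non-AI causes (federal cuts, tariffs, retail closures)."""
        raw_gap = expected_jobs - actual_jobs
        return raw_gap - known_other_factors

    # Hypothetical 2025 figures, in thousands of jobs:
    expected = 1_800   # baseline projected from macro conditions
    actual = 1_450     # what actually happened
    other = 150        # cuts attributable to known non-AI causes

    print(excess_job_losses(expected, actual, other))  # 200 (thousand)
    ```

    The method’s weakness, as with excess-death estimates, is that the answer is only as good as the baseline model and the list of "known other factors" you subtract out.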

    An AI job loss is often invisible. It’s a headcount that never got approved. A contractor who wasn’t renewed. A team that was restructured to be smaller than it would have been. The displacement is real. It’s just not the kind that makes a clean headline.

    Dead Companies Walking

    There are companies right now that are already dead. They just don’t know it yet.

    This is the Borders Books scenario. Remember Borders? They looked at Amazon and the internet and said, “Books aren’t changing. People like holding a physical book.” They were right about the books. They were catastrophically wrong about everything else. Barnes & Noble somehow survived. Borders is long gone.

    The same sorting is happening right now with AI. The companies that say “we don’t really get this chatbot stuff” are already falling behind the companies that say “we don’t fully understand this yet, but go experiment.” And the companies experimenting with chatbots are already falling behind the ones experimenting with agents.

    The gap between the cutting edge and the mainstream is getting wider, not narrower. The people building with agents today are living in a different reality than the people still debating whether chatbots have enterprise value. Those two groups are looking at the same technology and seeing entirely different things.

    We’re in the long slog of diffusion. The technology works. The experimentation is chaotic. The grift is everywhere. The real risks are severe. And the institutional machinery that determines how most of the world actually encounters new technology (corporate boards, legal departments, insurance underwriters, government procurement offices) is grinding forward at the speed it has always moved.

    That’s not a reason to be pessimistic. It’s just what happens when a general purpose technology evolves faster than the organizations trying to absorb it. The technology isn’t waiting. The question is whether you are.

  • Bitcoin – let’s speculate on a price by the end of 2027

    To predict the price of Bitcoin as of December 31, 2027, using current knowledge and game theory, we need to analyze the strategic interactions among key players in the Bitcoin ecosystem—investors, miners, regulators, and users—and how their decisions might shape the market over the next few years. Game theory, the study of strategic decision-making, helps us understand how these players’ choices, influenced by their expectations of others’ actions, could drive Bitcoin’s price. Below, I’ll outline the key factors, consider possible scenarios, and arrive at a reasoned prediction.


    Key Players and Their Strategies

    1. Investors
      Investors, both individual and institutional, are central to Bitcoin’s price dynamics. They decide whether to buy, sell, or hold based on their expectations of future value. This creates a feedback loop: if investors anticipate a price increase, they buy, driving demand and pushing the price up; if they expect a decline, they sell, increasing supply and lowering the price.
    • Game Theory Insight: This resembles a coordination game, where players benefit from aligning their actions (e.g., everyone buying increases the price). However, it also has elements of a prisoner’s dilemma—each investor wants to sell before a crash if they think others will sell, potentially triggering a cascade. The “greater fool theory” applies too: some may buy not because they believe in Bitcoin’s intrinsic value, but because they expect to sell it later at a higher price.
    2. Miners
      Miners secure the Bitcoin network by validating transactions and earn rewards in newly minted Bitcoins. As of 2025, the block reward is 3.125 Bitcoins per block (following the 2024 halving), producing about 450 new Bitcoins daily. Miners continue operating as long as revenue exceeds costs (electricity, hardware, etc.).
    • Game Theory Insight: Miners play a cost-benefit game. If Bitcoin’s price drops too low, unprofitable miners may exit, reducing the network’s hash rate until the difficulty adjusts (every ~2 weeks). This self-regulating system ensures long-term stability, but short-term price drops could signal weakness, influencing investor sentiment.
    3. Regulators
      Governments and regulatory bodies worldwide influence Bitcoin through policies ranging from bans to favorable frameworks. A crackdown in a major economy (e.g., the U.S.) could depress prices, while adoption as legal tender (e.g., El Salvador) or clear regulations could boost them.
    • Game Theory Insight: Regulators balance innovation against risks like fraud or financial instability, while competing internationally to attract crypto businesses. Their moves create uncertainty, prompting other players to adjust strategies—e.g., investors might sell on negative news or hold if regulations clarify.
    4. Users (General Public)
      User adoption drives demand. If more people use Bitcoin for transactions, remittances, or as a store of value, its price rises. Loss of trust or better alternatives could reduce demand.
    • Game Theory Insight: Users’ decisions depend on network effects—if more adopt Bitcoin, its utility and value increase, encouraging further adoption. This is a tipping-point dynamic: widespread use could solidify Bitcoin’s position, while stagnation could weaken it.
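    The coordination-game point above can be made concrete with a toy payoff matrix. This is a deliberately simplified two-investor sketch with invented payoffs, not a calibrated market model; it just shows why both "everyone buys" and "everyone sells" are self-reinforcing outcomes:

    ```python
    # Toy coordination game: two investors each choose Buy or Sell.
    # Payoffs are illustrative numbers, chosen so matching actions beat
    # mismatching ones, as in the scenario described above.

    payoffs = {  # (row_action, col_action) -> (row_payoff, col_payoff)
        ("buy", "buy"):   (3, 3),  # both buy: demand rises, both gain
        ("buy", "sell"):  (0, 1),  # buyer is left holding a falling asset
        ("sell", "buy"):  (1, 0),
        ("sell", "sell"): (1, 1),  # both exit early: small, safe payoff
    }

    def is_nash(row, col):
        """True if neither player gains by unilaterally switching."""
        other = lambda a: "sell" if a == "buy" else "buy"
        row_ok = payoffs[(row, col)][0] >= payoffs[(other(row), col)][0]
        col_ok = payoffs[(row, col)][1] >= payoffs[(row, other(col))][1]
        return row_ok and col_ok

    equilibria = [cell for cell in payoffs if is_nash(*cell)]
    print(equilibria)  # [('buy', 'buy'), ('sell', 'sell')]
    ```

    Two stable equilibria, one much better than the other: which one the market lands in depends entirely on what each player expects the others to do. That expectation-driven multiplicity is the core of the price dynamics described above.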

    Current Context (2025 Assumptions)

    Working from current knowledge, let’s assume Bitcoin’s price in 2025 is approximately $100,000, with a market cap of ~$2 trillion (based on ~20 million circulating Bitcoins, accounting for lost coins). The next halving occurs in 2028, so by December 31, 2027, the reward remains 3.125 Bitcoins per block, and annual issuance is ~164,250 Bitcoins (<1% inflation). Historical trends show Bitcoin’s price often rises after halvings, peaking 12–18 months later, though this effect may weaken as the market matures.
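    The issuance figures above follow directly from the block schedule. A quick sketch of the arithmetic, assuming the standard ~10-minute block interval:

    ```python
    # Checking the post-2024-halving issuance numbers quoted above.
    BLOCKS_PER_DAY = 144        # one block roughly every 10 minutes
    REWARD = 3.125              # BTC per block after the 2024 halving
    CIRCULATING = 20_000_000    # approximate supply, net of lost coins

    daily = BLOCKS_PER_DAY * REWARD       # 450 BTC per day
    annual = daily * 365                  # 164,250 BTC per year
    inflation = annual / CIRCULATING      # ~0.82%, comfortably under 1%

    print(daily, annual, round(inflation * 100, 2))
    ```

    For comparison, that sub-1% issuance rate is already lower than gold’s typical annual supply growth, which is the basis of the "digital gold" framing used later.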


    Scenarios and Game-Theoretic Dynamics

    1. Continued Adoption and Institutional Growth
    • Scenario: Institutional investors (e.g., companies, ETFs) increase Bitcoin holdings, and businesses adopt it for payments. Regulators remain neutral or supportive.
    • Dynamics: Investors buy, anticipating others will too, driving demand. Miners stay profitable, maintaining network security. Users adopt Bitcoin as its utility grows.
    • Price Impact: Significant growth, potentially doubling or tripling the market cap.
    2. Regulatory Crackdown
    • Scenario: Major economies impose strict rules or bans, citing energy use or financial risks.
    • Dynamics: Investors sell to avoid losses, expecting others to follow. Miners in affected regions shut down, though the network adjusts. Users hesitate to adopt.
    • Price Impact: Sharp decline, though Bitcoin’s resilience (e.g., post-2017 China ban) suggests recovery potential if some regions remain favorable.
    3. Technological Factors
    • Scenario: Advances like the Lightning Network enhance scalability, or a security flaw emerges.
    • Dynamics: Positive developments encourage investors and users to buy in; setbacks trigger sell-offs. Miners adapt to network changes.
    • Price Impact: Upside with adoption; downside with trust erosion.
    4. Macroeconomic Conditions
    • Scenario: Inflation or instability boosts Bitcoin as a hedge; economic stability favors traditional assets.
    • Dynamics: Investors and users flock to Bitcoin as a “safe haven” if others do, amplifying demand.
    • Price Impact: Rises with uncertainty; stagnates otherwise.

    Prediction Framework

    To estimate the price, let’s make reasonable assumptions:

    • Institutional Adoption: Grows steadily, not explosively, as companies and financial products integrate Bitcoin.
    • Regulation: Mixed globally—some restrictions, some support, no outright global ban.
    • Technology: Incremental improvements (e.g., Lightning Network), no major setbacks.
    • Macroeconomics: Moderate uncertainty drives some hedge demand.
    • Game Theory: Investors “hodl” expecting appreciation, reducing exchange supply. Miners persist, and users increase modestly.

    Starting from $100,000 in 2025, consider growth rates:

    • Historical CAGR has been high (e.g., >200% 2010–2020), but as Bitcoin matures, volatility may decline.
    • A 25% annual growth rate over 2 years yields:
      $100,000 × (1.25)^2 = $156,250.
    • A 40% rate (possible in a bull run near the 2028 halving) yields:
      $100,000 × (1.4)^2 = $196,000.

    Alternatively, target a market cap:

    • Gold’s market cap is ~$12 trillion. If Bitcoin reaches $5 trillion (capturing part of this as “digital gold” or growing the crypto market), with ~20 million Bitcoins, the price is $250,000.
    • Doubling from $2 trillion to $4 trillion implies $200,000.

    Balancing these, and factoring in game-theoretic tendencies (e.g., hodling amplifies scarcity, tempered by profit-taking), $200,000 feels plausible. It reflects growth without assuming extreme scenarios, aligning with adoption trends and historical patterns adjusted for maturity.
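    The back-of-envelope scenarios above are easy to reproduce. This sketch uses only the assumptions stated in the text (a $100,000 starting price and ~20 million coins); it is scenario arithmetic, not a forecasting model:

    ```python
    # Reproducing the growth-rate and market-cap scenarios described above.
    start = 100_000          # assumed 2025 price
    supply = 20_000_000      # approximate circulating supply

    def compound(price, rate, years):
        """Price after compounding at a fixed annual growth rate."""
        return price * (1 + rate) ** years

    moderate = compound(start, 0.25, 2)   # 25%/yr -> 156,250
    bull = compound(start, 0.40, 2)       # 40%/yr -> ~196,000

    # Market-cap route: implied price = target cap / circulating supply
    digital_gold = 5e12 / supply          # $5T cap -> 250,000
    doubling = 4e12 / supply              # $4T cap -> 200,000

    print(moderate, bull, digital_gold, doubling)
    ```

    All four routes land in the $150,000–$250,000 band, which is why $200,000 is a natural midpoint rather than a precise forecast.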


    Final Prediction

    Considering the interplay of investors, miners, regulators, and users through a game theory lens, and assuming moderate growth in adoption and demand, I predict the price of Bitcoin on December 31, 2027, will be approximately $200,000. This is an educated estimate, subject to significant uncertainty from unforeseen events, but it captures a balanced view of current trends and strategic dynamics.

  • This is my armchair. I appear to have been evicted!

    This is my armchair. I appear to have been evicted!