MCP Isn't Dead. We're Just Early.

The narrative this week is that MCP servers were a mistake. Back in January, OpenClaw creator Peter Steinberger wrote “mcp were a mistake. bash is better.” This week, Eric Holmes wrote a pointed post arguing that CLIs outperform MCP for AI agents, and it’s been making the rounds. He’s not entirely wrong - but the post describes a protocol in its infancy and draws conclusions about its ceiling. That’s worth pushing back on.

Today’s MCP servers have real problems worth admitting

Let’s not be precious about it. Current MCP implementations are rough. I spent an hour last week tracking down why my Mac had slowed to a crawl, only to find 100+ zombie Node processes left behind by MCP servers that Claude Code had started and never cleaned up. The permission model is blunt - all-or-nothing access where you’d want scoped controls. Initialization fails silently. Re-auth prompts appear at random. If you’ve been running local MCP servers for any length of time, you’ve felt all of this.

Holmes is also right that CLIs have a genuine edge today. Composability through pipes, grep, jq - that’s not nothing. They’re debuggable, universal, and they don’t require a background process to be healthy before your AI can do anything. Every platform in the world already offers a CLI you can use. These are real advantages.

These are the problems of a protocol that most people have been using for a few months. “Flaky early implementations” and “fundamentally limited” are not the same diagnosis.

CLIs carry their own baggage that’s easy to forget

The case for CLIs assumes that what’s convenient for humans to debug is also what’s best for machines to consume. That’s not obvious.

CLI output is inconsistent, poorly documented, and changes between versions without warning. How far behind is the version of wrangler on your work laptop right now? I’m at least one major version behind on mine. When an AI agent misinterprets CLI output, you’re left reverse-engineering free-form text with no schema to validate against. When an MCP call fails, you have a typed contract to check.

Holmes raises composability as a CLI strength - and it is, when the person composing is a human who knows what they’re doing. But an AI agent chaining grep | jq | awk is navigating a surface area that’s wide, poorly specified, and full of edge cases that no one documented because they seemed obvious. Unix pipes are flexible precisely because they’re untyped - and that’s also why they fail silently.
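To make the silent-failure point concrete, here's a minimal sketch. The filename is hypothetical, and `cat` of a missing file stands in for any pipeline stage that fails mid-chain (a `jq` choking on malformed JSON, say):

```shell
# Without pipefail (the default), a pipeline's exit status is the status
# of its LAST command. The first stage here fails outright, but wc still
# exits 0, so the pipeline as a whole reports success.
if cat /nonexistent/output.json 2>/dev/null | wc -l; then
  status_default=0
else
  status_default=$?
fi

# With pipefail, the first stage's failure propagates to the pipeline,
# so a caller (human or agent) can actually notice something went wrong.
set -o pipefail
if cat /nonexistent/output.json 2>/dev/null | wc -l; then
  status_pipefail=0
else
  status_pipefail=$?
fi

echo "default=$status_default pipefail=$status_pipefail"
```

An agent that doesn't know to set `pipefail` - and most don't - will happily treat that empty output as a valid answer.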

There’s also an auth argument in the original post that doesn’t hold up. Holmes suggests CLIs already handle auth fine through things like AWS profiles and GitHub CLI. That’s true for a developer who set up their machine carefully. It doesn’t extend to multi-agent workflows, or to any context where you want delegation, scoped permissions, or token revocation.

The missing piece: MCP as infrastructure, not a local daemon

The version of MCP Holmes is critiquing is uvx mcp-server running on your laptop - a local process you have to install, manage, and troubleshoot. That’s a real description of the early-adopter experience. It’s not a description of where the protocol is going.

Streamable HTTP transport and OAuth 2.0 client credentials change the picture. Instead of installing and running a server locally, you point your client at a URL, complete an OAuth flow, and you’re connected. The MCP server becomes infrastructure - hosted, maintained, and updated by whoever runs the service. The end user never sees it. Linear and Granola are already shipping this way.
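What "point your client at a URL" means in practice: over Streamable HTTP, the client's first move is an HTTP POST of a JSON-RPC 2.0 `initialize` request. A sketch of that request body, per the MCP spec - the endpoint URL, client name, and pinned protocol revision are assumptions for illustration:

```python
import json

# Hypothetical hosted endpoint; in practice this is a vendor URL you hit
# with a bearer token from the OAuth client-credentials flow.
MCP_URL = "https://mcp.example.com/mcp"

def initialize_request(request_id: int = 1) -> dict:
    """Build the JSON-RPC 2.0 `initialize` request an MCP client sends
    first over Streamable HTTP (a plain HTTP POST to the endpoint)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # A published spec revision; pinned here as an assumption.
            "protocolVersion": "2025-06-18",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }

# The client POSTs this body (with an Authorization header) to MCP_URL --
# nothing to install locally, no background process to babysit.
body = json.dumps(initialize_request())
print(body[:60])
```

Compare that to the local-daemon setup: there is no `uvx`, no Node process, no zombie cleanup - just a request against someone else's infrastructure.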

This is what the web API model looked like once it outgrew SOAP. You didn’t install anything; you just called an endpoint. The “maintenance burden” argument against MCP largely evaporates when the server isn’t running on your machine.

Structured schemas do something CLIs fundamentally can’t

What makes MCP interesting long-term isn’t the transport layer - it’s the contract.

When a service exposes an MCP interface, the AI gets a machine-readable description of which actions are available and what they return. That means the agent can verify what it’s calling, reason about errors, and behave predictably. Compare that to a CLI where the output format is “whatever the maintainer felt like that day” and breaking changes arrive in patch releases without a changelog entry.
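Here's a sketch of what that contract buys you. The tool itself is hypothetical, but its shape - name, description, `inputSchema` as JSON Schema - is what MCP's `tools/list` actually returns; the validator is a deliberately tiny hand-rolled stand-in for a real JSON Schema library:

```python
# Hypothetical tool definition, shaped like a tools/list entry.
tool = {
    "name": "create_issue",
    "description": "Create an issue in the tracker",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

def check_args(schema: dict, args: dict) -> list[str]:
    """Minimal validator: flag missing required keys and bad enum values.
    (A real client would use a full JSON Schema implementation.)"""
    errors = [f"missing required field: {k}"
              for k in schema.get("required", []) if k not in args]
    for key, rules in schema.get("properties", {}).items():
        if key in args and "enum" in rules and args[key] not in rules["enum"]:
            errors.append(f"{key}: {args[key]!r} not in {rules['enum']}")
    return errors

# The agent can catch both mistakes BEFORE making the call.
print(check_args(tool["inputSchema"], {"priority": "urgent"}))
```

There is no equivalent pre-flight check for free-form CLI text - you find out the arguments were wrong by parsing an error message, if you're lucky.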

The analogy that keeps coming to mind is OpenAPI and REST. Before OpenAPI, you had a thousand idiosyncratic JSON APIs with inconsistent patterns and documentation that was always slightly out of date. OpenAPI didn’t replace REST - it gave REST a standard contract, which made it far more useful. MCP is trying to do that for agent-tool interaction. The spec is from November 2025. We’re at the “inconsistent early implementations” stage, not the “this is what it will always be” stage.

Streaming is the part of the spec that hasn’t gotten enough attention - real-time updates from server to client. Hardly any MCP servers implement it today, which is part of why agentic workflows feel so clunky. But the capability is in the spec, and when it gets built out properly, it unlocks a class of long-running agent tasks that CLIs can’t touch.
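Concretely, over Streamable HTTP a long-running tool call can come back as a `text/event-stream`, with progress notifications (`notifications/progress` in the spec) interleaved before the final result. The raw stream below is fabricated for illustration, but the message shapes follow the spec:

```python
import json

# Fabricated SSE stream: two progress notifications, then the final result.
raw_stream = (
    'data: {"jsonrpc":"2.0","method":"notifications/progress",'
    '"params":{"progressToken":"t1","progress":40,"total":100}}\n\n'
    'data: {"jsonrpc":"2.0","method":"notifications/progress",'
    '"params":{"progressToken":"t1","progress":100,"total":100}}\n\n'
    'data: {"jsonrpc":"2.0","id":1,'
    '"result":{"content":[{"type":"text","text":"done"}]}}\n\n'
)

def parse_sse(stream: str):
    """Yield the JSON payload of each SSE `data:` line."""
    for line in stream.splitlines():
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

events = list(parse_sse(raw_stream))
progress = [e for e in events if e.get("method") == "notifications/progress"]
result = [e for e in events if "result" in e]
print(len(progress), "progress updates before the final result")
```

A CLI gives you one exit code at the end; a stream lets the agent watch, report, and decide mid-task.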

This is like declaring REST dead in 1999

Holmes’ critique of MCP reminds me of critiques of early REST: too much overhead, inconsistent behavior, easier to just use what we have. Those critiques weren’t wrong about REST in 1999. They were wrong about REST as a category.

The early API era was messy - SOAP, custom XML interfaces, hand-rolled auth. Then OpenAPI came along and REST became infrastructure. We’re somewhere in that messy middle period with MCP right now. The question isn’t whether today’s tooling is rough (it is), but whether the underlying abstraction is worth standardizing around. The problem it’s trying to solve is real, and the CLI isn’t the answer to it.