The AI Adoption Curve
Series · Updated February 2026
Every layer of the AI stack is being democratized faster than it's being secured.
The Pattern
ChatGPT, DeepSeek, and OpenClaw aren't separate stories. They're one story: capability becoming accessible at every layer of the stack, with security consistently arriving late.
Each step moves power closer to the edge—from API calls, to local models, to autonomous agents, to universal tool access. And at every step, the default configuration prioritizes convenience over safety.
The Layers
Anyone can reason with AI
OpenAI wrapped GPT-3.5 in a chat interface and 100 million people showed up in two months. The model layer went from research paper to dinner table conversation overnight.
Anyone can run AI locally
DeepSeek proved you don't need Big Tech's compute budget to build frontier models. Open-weight models running on consumer hardware broke the assumption that AI capability requires centralized infrastructure.
Anyone can connect AI to everything
Anthropic's Model Context Protocol became the USB standard for AI agents—a universal way to connect LLMs to tools, databases, and services. Every vendor started shipping MCP servers.
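The protocol's appeal is its simplicity: a server declares tools with input schemas, and any client can discover and invoke them by name. A minimal sketch of that tool-registry pattern in plain Python (not the real MCP SDK; all names here are illustrative):

```python
import json
from typing import Callable

class ToolServer:
    """Toy sketch of the MCP pattern: a server exposes named tools
    with declared schemas, and any client can discover and call them."""

    def __init__(self) -> None:
        self._tools: dict[str, tuple[dict, Callable]] = {}

    def register(self, name: str, schema: dict, fn: Callable) -> None:
        # An MCP-style server advertises each tool's input schema so
        # the model knows how to call it.
        self._tools[name] = (schema, fn)

    def list_tools(self) -> list[dict]:
        # Discovery: the client asks "what can you do?"
        return [{"name": n, "inputSchema": s} for n, (s, _) in self._tools.items()]

    def call_tool(self, name: str, arguments: dict):
        # Invocation: the client sends a tool name plus JSON arguments.
        _, fn = self._tools[name]
        return fn(**arguments)

# Any function becomes agent-reachable once registered -- which is
# exactly why an unvetted server is an attack surface.
server = ToolServer()
server.register(
    "read_file",
    {"type": "object", "properties": {"path": {"type": "string"}}},
    lambda path: f"<contents of {path}>",
)

print(json.dumps(server.list_tools()))
print(server.call_tool("read_file", {"path": "/etc/hosts"}))
```

The universality is the point and the problem: the same three-call surface (declare, discover, invoke) that lets every vendor ship a server also lets a malicious one slot in unnoticed.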
Anyone can deploy autonomous agents
OpenClaw made deploying AI agents to messaging platforms as easy as npm install. 140K+ GitHub stars and 600+ publicly exposed instances later, the agent layer is wide open.
Anyone can deploy embodied AI
When robotics hits the same accessibility inflection—consumer-grade hardware, open-source control systems, plug-and-play deployment—the pattern repeats in the physical world.
Why It Compounds
2025 was the plumbing year: both compute (DeepSeek) and connectivity (MCP) democratized simultaneously. 2026 is the year agents arrive to use that plumbing, which is why it feels more dangerous. An agent built on OpenClaw can run a local DeepSeek model, connect to every tool in your stack via MCP, and deploy in minutes with no security review.
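Concretely, the compounding is architectural: each layer is one dependency away from the next, and nothing in between enforces a check. A hypothetical composition of the three software layers, with every class and name illustrative:

```python
import json

class LocalModel:
    """The compute layer: stands in for an open-weight model on local hardware."""
    def complete(self, prompt: str) -> str:
        # A real model would reason about the goal; this stub always
        # emits one canned tool call to keep the sketch deterministic.
        return 'CALL read_file {"path": "/etc/hosts"}'

class McpClient:
    """The connectivity layer: tools reachable by name via a shared protocol."""
    def __init__(self, tools: dict):
        self.tools = tools
    def call(self, name: str, arguments: dict):
        return self.tools[name](**arguments)

class Agent:
    """The agent layer: a loop that turns model output into tool invocations."""
    def __init__(self, model: LocalModel, mcp: McpClient):
        self.model, self.mcp = model, mcp
    def step(self, goal: str):
        _verb, name, payload = self.model.complete(goal).split(" ", 2)
        # Nothing here checks *which* tool the model chose or *what*
        # arguments it passed -- the multiplied attack surface in miniature.
        return self.mcp.call(name, json.loads(payload))

agent = Agent(LocalModel(), McpClient({"read_file": lambda path: f"<{path}>"}))
print(agent.step("summarise my config"))  # the model, not the operator, chose the path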
Each layer doesn't just add capability. It multiplies the attack surface of every layer beneath it.
Layer Assessments
OpenClaw Risk Assessment
Published · Agent Layer
Independent security analysis of the open-source AI agent framework. Default-open security posture, 600+ exposed instances, and a phased pilot recommendation.
MCP Trust Assessment
Published · Protocol Layer
315 vulnerabilities in year one, the first malicious server on npm, and an ecosystem where 53% of servers rely on static credentials. The USB port that has no lock.
What Comes Next?
The pattern predicts itself. When robotics hardware hits consumer price points and open-source control systems mature, the physical layer follows the same curve: capability first, security later. The question is whether we learn from the software layers fast enough to get the physical layer right.
This series applies trust engineering principles to each layer of the AI stack as it democratizes.