Anthropic pushes Agent Skills into the enterprise mainstream — and into the hands of its rivals

Key Takeaways

  • Anthropic is establishing its "Agent Skills" framework—built on the Model Context Protocol (MCP)—as an open standard, securing early adoption from Microsoft and influencing similar architectures at OpenAI.
  • Enterprise demand is shifting from generic chatbots to modular skills that encode specific procedural knowledge across legal, finance, and engineering workflows.
  • The strategy accelerates industry convergence toward a single assistant model augmented by interchangeable capabilities, creating new urgency around security and governance.

Anthropic is taking an unexpectedly open route with its agentic technology, positioning the system not just as a product feature but as infrastructure the broader AI industry can build on. The company’s decision to release its "Skills" framework as an open standard—technically anchored in the Model Context Protocol (MCP)—may look generous on the surface, but it’s a calculated push to define the architecture everyone else eventually adopts.

The approach seems to be working. Microsoft has already implemented the standard inside VS Code and GitHub Copilot. Popular coding agents like Cursor and Goose have followed suit. In a small but telling moment, developers recently noted that the directory layout and metadata format inside OpenAI’s emerging toolsets look strikingly similar to Anthropic's schema. It’s the kind of discovery that makes technical teams sit up: when competitors quietly replicate your architecture, you have likely set the standard.

At a technical level, these Agent Skills are fairly simple. They are essentially folders bundling instructions, scripts, configuration files, and other resources that teach an AI assistant how to execute a repeatable task. A team that builds presentations every week can encode formatting preferences, slide structure, and quality checks into a skill rather than rewriting the same long prompt over and over. For enterprises, that’s a massive relief. LLMs are broad but often lack the deep procedural knowledge required for complex accounting close cycles, contract reviews, or regulatory filings.
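
To make that concrete, here is a minimal sketch of what one of these skill folders could look like, expressed as a Python scaffold. The SKILL.md layout (YAML frontmatter carrying a name and description, followed by the full instructions) follows the convention Anthropic has published; the deck-builder skill itself, along with its file names and contents, is invented for illustration.

```python
import pathlib
import textwrap

# Scaffold a minimal skill: a folder whose SKILL.md carries YAML
# frontmatter (name, description) followed by the full instructions.
# The "deck-builder" skill and its supporting files are hypothetical.
skill = pathlib.Path("deck-builder")
skill.mkdir(exist_ok=True)

(skill / "SKILL.md").write_text(textwrap.dedent("""\
    ---
    name: deck-builder
    description: Build weekly status decks in the team's house style.
    ---
    # Instructions
    1. Start from the template in assets/template.pptx.
    2. Follow the slide order in references/outline.md.
    3. Run scripts/check_deck.py as a final quality gate.
    """))

# Supporting resources the assistant can read or run on demand.
(skill / "scripts").mkdir(exist_ok=True)
(skill / "scripts" / "check_deck.py").write_text(
    "# quality checks for the finished deck go here\n"
)
(skill / "references").mkdir(exist_ok=True)
(skill / "references" / "outline.md").write_text("1. Agenda\n2. Metrics\n")
```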

Anthropic designed these skills around what it calls progressive disclosure. The assistant loads only a compact summary—often just a few dozen tokens—into its context window. The full instructions and capabilities activate only when specifically needed. It’s a small technical detail, but it tells you a lot about how Anthropic expects organizations to scale: they envision massive libraries of domain‑specific skills running without blowing out context limits or requiring expensive model fine‑tuning.
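
In practice, the loading pattern is easy to picture. The sketch below is a hypothetical loader, not Anthropic's implementation: at startup it reads only each skill's frontmatter into the system prompt, and the full instruction body comes off disk only when the assistant selects that skill mid-task. The `read_skill` and `activate` helpers and the `skills/` directory are assumptions for illustration.

```python
import pathlib

def read_skill(path: pathlib.Path) -> tuple[dict, str]:
    """Split a SKILL.md file into its frontmatter fields and body."""
    text = (path / "SKILL.md").read_text()
    _, frontmatter, body = text.split("---", 2)
    pairs = (line.split(":", 1) for line in frontmatter.strip().splitlines())
    meta = {k.strip(): v.strip() for k, v in pairs}
    return meta, body

# Startup: only each skill's name and description enter the prompt.
# This is the compact summary described above; the full body stays
# on disk until the model decides the skill is relevant.
skills = {p.name: p for p in pathlib.Path("skills").iterdir() if p.is_dir()}
summaries = [
    f"- {meta['name']}: {meta['description']}"
    for meta, _ in (read_skill(p) for p in skills.values())
]
system_prompt = "Available skills:\n" + "\n".join(summaries)

def activate(skill_name: str) -> str:
    """Called only when the assistant selects a skill mid-task."""
    _, body = read_skill(skills[skill_name])
    return body  # full instructions enter the context only now
```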

Enterprises are already pushing the system into production. According to Anthropic product manager Mahesh Murag, companies across legal, finance, data science, and engineering are standardizing workflows inside these skills. Administrators on the company’s Team and Enterprise plans can publish skills centrally and manage permissions, while still giving employees room to customize. The GitHub repository hosting community-built skills has passed 20,000 stars—rapid momentum for a format introduced only recently.

Partner support at launch shows where Anthropic thinks the standard can land. Atlassian, Figma, Canva, Stripe, Notion, and Zapier have all published skills that integrate Claude directly into core workplace tools. There’s no revenue-share program behind it. Partners participate because it makes their software work better with AI, the same logic that fueled early API ecosystems. It’s a bet on usage and stickiness rather than a transactional channel.

Skills work across Claude.ai, the Agent SDK, and the API without add‑on fees. Standard API pricing applies. This is where the business strategy gets interesting: Anthropic appears less focused on monetizing the skills directly and more focused on making them ubiquitous. It’s reminiscent of how open standards evolve—slowly at first, then suddenly everyone aligns because it solves a shared interoperability problem.

That problem is clear. For years, vendors built isolated agents for customer support, research, or engineering. But internal research at Anthropic suggests that the underlying assistant is far more universal than teams assumed. Skills give that general‑purpose assistant specialized capabilities without creating separate, disconnected agent stacks. The concept effectively eliminates a significant amount of duplicated engineering work.

Anthropic’s own internal data highlights how deeply the assistant model is embedding itself into workflows. Employees reported using Claude in a significant portion of their work, citing a measurable productivity lift compared to the prior year. Notably, roughly a quarter of the work completed with the tool involved tasks that previously wouldn't have happened at all—internal documentation and perpetual back‑burner projects that finally moved forward. It raises a fair question: how will organizations measure throughput when AI makes it trivial to execute work previously considered too costly to prioritize?

Still, the approach isn’t without complications. Some engineers have expressed concerns about skill atrophy. When an AI churns out production‑ready code or polished visualizations in minutes, humans may lose the incentive to learn the nuances of those domains. That’s not a new worry in AI circles, but it lands differently when the tooling is embedded in the plumbing of day‑to‑day enterprise systems.

Security is another pressure point. Because skills can contain code and executable instructions, a malicious or poorly reviewed skill could introduce vulnerabilities. Anthropic recommends installing skills only from trusted sources and auditing those from outside the official directory. It’s sensible advice, though organizations will inevitably push the boundaries as they build internal inventories.
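
Teams building those inventories could start with something as simple as a pre-install audit script. The sketch below is hypothetical and deliberately crude: it surfaces executable files and network calls inside a skill folder so a human signs off before installation. The extension list and the `curl` heuristic are illustrative policy choices, not an official check.

```python
import pathlib

# Hypothetical pre-install audit for a third-party skill folder.
# Flag anything that can execute code or reach the network so a
# reviewer inspects it before the skill enters the internal library.
EXECUTABLE_SUFFIXES = {".py", ".sh", ".js", ".ts", ".rb", ".ps1"}

def audit_skill(skill_dir: str) -> list[str]:
    findings = []
    for f in pathlib.Path(skill_dir).rglob("*"):
        if f.suffix in EXECUTABLE_SUFFIXES:
            findings.append(f"executable file: {f}")
        elif f.name == "SKILL.md" and "curl " in f.read_text():
            findings.append(f"network call in instructions: {f}")
    return findings

for finding in audit_skill("third_party/deck-builder"):
    print("REVIEW:", finding)
```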

The governance of the standard remains an open question. Anthropic recently donated the Model Context Protocol to the Linux Foundation, a move intended to neutralize concerns about vendor lock-in. Early backers include Block (formerly Square), and major cloud players are watching closely, but standards bodies take time to build real influence. Even so, the pieces are clearly moving toward a shared ecosystem.

What is becoming clear is that Anthropic’s real leverage may not be Claude itself. It may be the scaffolding that connects enterprise knowledge, third‑party software, and AI systems from multiple vendors. Two months ago, this looked like a minor developer feature. Today, it is showing up in Microsoft products, mirrored inside OpenAI tools, and embedded across enterprise workflows.

If the industry continues converging on this architecture—and it increasingly looks like it will—Anthropic will have shaped the foundation beneath the next wave of workplace AI, even as it hands the blueprint to everyone else.