
We’re standing at a pivotal moment. The rise of AI agents and Large Language Models (LLMs) isn’t just a technological shift; it’s fundamentally reshaping how we think about APIs and their role in digital ecosystems. Through my work with independent software vendors (ISVs) at Amazon Web Services (AWS) and my involvement with the MACH Alliance, I’ve seen firsthand how APIs are becoming the absolute backbone of success in this emerging agentic world. My goal is to help you understand how APIs and AI agents will intersect to redefine commerce, enterprise workflows, and user experiences.
In this piece, I’ll share my perspectives on the trends shaping APIs in 2025 and beyond, discuss the challenges SaaS vendors face in adapting to AI agents, and offer practical strategies to make your APIs truly LLM-friendly. Whether you’re a developer, a product manager, or a technology strategist, I hope this deep dive offers valuable insights into preparing your API ecosystem for the agentic future.
The Agentic World: A New Paradigm for APIs
The period from 2023 to 2025 represents a crucial turning point. The emergence of AI agents – powerful software systems driven by LLMs that can autonomously plan and complete tasks for users – is completely changing the landscape. A crucial point I want to emphasize is this: while LLM capabilities will continue to improve and become commoditized, the real differentiator will be how these models integrate with external systems to deliver tangible, real-world value.
One of the most significant trends I’m seeing is the surge in agentic traffic to websites, especially in retail and commerce. A March study by Adobe showed agent-driven traffic increasing by over 1,200% year-over-year, and still growing at a rate of roughly 200% month-over-month. This means AI agents are increasingly becoming the intermediaries, shopping, booking, and interacting with brands on behalf of human users.
This shift presents both exciting opportunities and significant challenges:
- Opportunity: Brands can unlock entirely new channels of engagement by optimizing for agent interactions, creating seamless experiences for AI-driven shopping and services.
- Challenge: Agents simply don’t have inherent brand loyalty. Businesses must design their APIs and digital presences to allow agents to complete tasks as efficiently and effectively as possible.
I often draw an analogy to the early days of SEO. When web crawlers emerged, businesses had to fundamentally rethink how they presented content online. Similarly, we’re now entering a phase where “agent optimization” will become a critical component of your API and digital strategy.
The Complexity of Multi-Agent Collaboration
Another fascinating aspect I’ve observed is that the agentic world isn’t just about single, isolated agents. Often, it involves multiple agents collaborating to complete complex tasks. Imagine a user wanting to plan a party and bake a chocolate cake for ten people, with a $60 budget, needing all ingredients delivered by a specific date and time.
In scenarios like this, different agents might be responsible for sourcing ingredients, managing schedules, or handling delivery logistics. This orchestrated collaboration demands APIs that are not only robust but also composable and interoperable across various agent systems.
Understanding AI Agents: Anatomy and Functionality
To provide a clear understanding, let me define what an AI agent is in practical terms. At its core, an agent is a software system that leverages AI, especially LLMs, to plan and execute tasks autonomously. It typically possesses several key components:
- Memory: The ability to retain context over a conversation or task sequence.
- Brain: The LLM, which provides reasoning, planning, and natural language understanding.
- Tools: These are the APIs and external systems that the agent interacts with to perform actions.
- Knowledge Base / Registry: Structured information and metadata the agent can access.
It’s the interaction with these tools – essentially your APIs – that allows agents to extend their capabilities beyond mere language understanding into the real world. APIs serve as the crucial bridge connecting AI reasoning with actionable services.
I emphasize that agents typically operate by receiving a user prompt, using their LLM brain to plan and reason, and then orchestrating a series of interactions with external APIs to fulfill the task. This process is fundamental to how AI agents can deliver value in both personal and business contexts.
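The loop described above — prompt in, plan with the LLM brain, tool calls out, results back into memory — can be sketched roughly as follows. Everything here (`plan`, the tool registry, the party-planning tools) is an illustrative assumption, not a real framework; in particular, the planner is a stub where a real agent would consult its LLM:

```python
# Minimal sketch of an agent's plan-and-execute loop.
# The "brain" (LLM) is stubbed out; in practice it would be an LLM call.

def plan(prompt: str, tools: dict) -> list:
    """Stub planner: a real agent would ask the LLM to choose tools.
    Here we naively pick every tool whose name appears in the prompt."""
    return [name for name in tools if name in prompt]

def run_agent(prompt: str, tools: dict) -> dict:
    memory = {"prompt": prompt, "results": {}}   # short-term context
    for tool_name in plan(prompt, tools):        # planning/reasoning step
        result = tools[tool_name]()              # tool = external API call
        memory["results"][tool_name] = result    # retain context for next steps
    return memory

# Two illustrative "tools" standing in for real API calls.
tools = {
    "get_ingredients": lambda: ["flour", "cocoa", "eggs"],
    "check_budget": lambda: {"budget": 60, "currency": "USD"},
}

state = run_agent("get_ingredients and check_budget for the cake", tools)
```

The point of the sketch is the shape, not the stubs: the agent's value comes entirely from the quality and predictability of the tools — your APIs — that the loop invokes.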
The Need for Standardization: Enter the Model Context Protocol (MCP)
The rapid growth of AI agents has brought a critical challenge to light: integrating disparate APIs into agent workflows is incredibly complex and often inefficient. For a while, many developers built per-agent tool registries, wiring each tool into each agent individually, but this quickly became unwieldy as the number of one-to-one integrations grew.
This is where the Model Context Protocol (MCP) comes in. It’s an open protocol released by Anthropic, designed to standardize how AI agents connect to tools and data sources. MCP provides:
- A standardized way to connect AI agents to APIs, allowing tool providers or SaaS vendors to build an MCP server once and have it usable everywhere.
- Composability and ease of use, enabling agents to orchestrate multiple tools efficiently.
- Clear separation of concerns between tool developers and application developers, which is ideal for large enterprises.
I see this protocol as a game-changer for the AI agent ecosystem because it offers a common language and structure for tool invocation, making it significantly easier to scale and maintain integrations.
Building AI Agents with MCP
As I’ve explored, MCP servers can be built locally or remotely relative to the application host. Tool providers expose their capabilities through these MCP servers, which then act as intermediaries between the AI agent and the underlying APIs.
I’ve even shared a simple example of an MCP server that handles user queries, calls the appropriate tools via MCP, and fetches data from remote service APIs. This architecture effectively decouples the agent logic from the API specifics, leading to better scalability and maintainability.
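To make the shape of this concrete: MCP is built on JSON-RPC, and an MCP server’s core job is to answer tool-discovery and tool-invocation requests (the `tools/list` and `tools/call` methods). The sketch below hand-rolls that request handling with the standard library purely for illustration — a real server should use the official MCP SDKs, and the weather tool here is an invented example:

```python
import json

# Simplified, illustrative sketch of the request shapes an MCP server handles.
# Real servers should use the official MCP SDKs rather than hand-rolling this.

TOOLS = {
    "get_weather": {
        "description": "Return current weather for a city.",
        "handler": lambda args: {"city": args["city"], "temp_c": 21},
    },
}

def handle_request(raw: str) -> dict:
    req = json.loads(raw)
    if req["method"] == "tools/list":        # agent discovers available tools
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":      # agent invokes a specific tool
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

listing = handle_request(json.dumps(
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
call = handle_request(json.dumps(
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "get_weather", "arguments": {"city": "Berlin"}}}))
```

Notice the decoupling: the agent only ever speaks the protocol, while the server maps protocol requests onto whatever remote service APIs sit behind it.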
At AWS, Amazon Bedrock perfectly exemplifies this approach. It provides a serverless way to access LLM capabilities and build agents that interact with multiple frontier models and AI tasks, demonstrating how cloud services are evolving to support the agentic world.
Challenges in Building LLM-Friendly APIs
Despite the immense promise of MCP and AI agents, I’ve identified several persistent challenges in making APIs truly LLM-friendly:
Legacy API Complexity
Most APIs today aren’t greenfield projects; they’ve been developed over many years. SaaS companies often have vast collections of APIs designed for human developers, which means varying documentation quality and inconsistent naming conventions.
For instance, Twilio recently announced making all of its more than 1,400 API endpoints accessible through an MCP server. While this is a huge step forward, it also highlights the challenge for LLMs: given thousands of endpoints, an AI agent must intelligently select the right API call for a specific use case, which is non-trivial.
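A common mitigation for this selection problem is to pre-filter the catalog before the LLM ever sees it, so the model chooses among a handful of candidates instead of thousands. The keyword-overlap scorer below is a deliberately naive stand-in for that idea — production systems typically rank by embedding similarity — and the endpoint names are invented examples, not Twilio’s actual API:

```python
# Pre-filtering a large tool catalog so an LLM only has to choose among a
# shortlist. Keyword overlap is a naive stand-in for embedding similarity.

def score(query: str, description: str) -> int:
    """Count shared lowercase words between the query and a tool description."""
    return len(set(query.lower().split()) & set(description.lower().split()))

def shortlist(query: str, catalog: dict, top_n: int = 3) -> list:
    ranked = sorted(catalog, key=lambda name: score(query, catalog[name]),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical endpoint catalog (names and descriptions are invented).
catalog = {
    "messages.send": "Send an SMS message to a phone number",
    "calls.create": "Start an outbound voice call",
    "lookups.fetch": "Look up carrier details for a phone number",
}

best = shortlist("send an sms message", catalog, top_n=1)
```

With thousands of endpoints, this kind of retrieval step is what keeps tool selection tractable for the model — and it only works if every endpoint has a meaningful description to rank against.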
Documentation and Standardization Gaps
I’ve had direct experience working with Cloudinary—an API-first image and video platform—on exposing their APIs through an MCP server. Even for an API-first company, the effort was challenging due to inconsistent documentation, varied naming, and different ways APIs were structured over time.
This leads me to a critical insight: being API-first is necessary but not sufficient for agentic compatibility. Your APIs must also be uniformly documented and structured with the needs of LLMs specifically in mind.
API Design Not Tailored for LLM Usage
Traditionally, APIs haven’t been designed with the expectation that their responses might serve as context for subsequent LLM requests. This creates difficulties in effectively chaining API calls within agent workflows.
I compare this to the mobile transition era after the iPhone’s introduction, when APIs had to be rethought for mobile constraints like bandwidth and processing power. Similarly, the agentic world demands APIs that are “LLM ready” — optimized for token usage, error handling, and performance.
Token Limits and Throttling Issues
Through my experimentation, I’ve discovered that some API responses can be large enough to consume the entire token quota allowed for an LLM session, resulting in throttling or quota exhaustion. This is a very practical challenge when integrating APIs with LLMs, as the response size directly impacts token consumption, which is often limited.
Effective API design for agents must therefore consider response size and token efficiency to ensure smooth operation.
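One practical guard is to estimate a response’s token cost before handing it to the model and truncate when it would blow the budget. The 4-characters-per-token rule of thumb below is a rough approximation, not an exact count — real systems should use the target model’s actual tokenizer:

```python
# Rough guard against oversized API responses exhausting an LLM's context.
# The 4-chars-per-token heuristic is an approximation; use the model's real
# tokenizer in production.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_to_budget(payload: str, max_tokens: int) -> str:
    """Truncate a response so it fits a token budget, flagging the cut."""
    if estimate_tokens(payload) <= max_tokens:
        return payload
    clipped = payload[: max_tokens * 4]
    return clipped + "\n[truncated: response exceeded token budget]"

big_response = "x" * 10_000                  # ~2,500 estimated tokens
safe = fit_to_budget(big_response, max_tokens=500)
```

Better still is to avoid the problem at the source: offer pagination, field selection, or summary-shaped responses so agents can request only what the task needs.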
Complex Enterprise Workflows
In enterprise applications, workflows typically involve multiple chained API calls, such as authentication, token retrieval, and subsequent data fetches. These chains add significant complexity for LLMs that need to understand sequencing, error handling, and retry patterns.
Designing APIs with clear, predictable behaviors and comprehensive error responses is essential to enable agents to navigate these workflows successfully.
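The chain described above — authenticate, then fetch with the token, retrying transient failures — can be sketched as follows. Both services here are stubs I invented for illustration (the second fails once to simulate a transient timeout), and the backoff is elided:

```python
import time

# Sketch of a typical enterprise chain: authenticate, then fetch data with
# the token, retrying transient failures. Both services are stubs.

def get_token() -> str:
    return "token-abc"                       # stand-in for an auth endpoint

class FlakyService:
    """Stub API that fails once, then succeeds (a transient error)."""
    def __init__(self):
        self.calls = 0
    def fetch(self, token: str) -> dict:
        self.calls += 1
        if self.calls == 1:
            raise TimeoutError("transient upstream timeout")
        return {"orders": 3, "token_used": token}

def fetch_with_retry(api: FlakyService, token: str, attempts: int = 3) -> dict:
    for attempt in range(attempts):
        try:
            return api.fetch(token)
        except TimeoutError:
            if attempt == attempts - 1:
                raise                        # surface a clear error to the agent
            time.sleep(0)                    # real code: exponential backoff

api = FlakyService()
data = fetch_with_retry(api, get_token())
```

The lesson for API design is that every step in such a chain should fail loudly and distinguishably — an agent can only retry or re-authenticate correctly if the error tells it which step broke.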
Making APIs LLM-Friendly: My Best Practices and Recommendations
Given these challenges, what can API providers do to truly prepare for the agentic world? I’ve distilled several key recommendations from my work with various SaaS companies and industry experts:
Leverage OpenAPI and Smithy Specifications
OpenAPI (formerly Swagger) is the industry standard for API description, while Smithy is AWS’s open-source, protocol-agnostic interface definition language. Both provide structured ways to define APIs, enabling automated SDK generation, validation, and documentation.
Using these specifications as a foundation helps create APIs that are more understandable to both humans and machines, including LLMs.
Enhance API Documentation with LLM-Specific Metadata
To make APIs truly LLM-friendly, providers should enrich their documentation with:
- Clear descriptions: Provide meaningful names and descriptions for operations, structures, and fields.
- Human-readable comments: Add explanations that clarify the purpose and usage of APIs.
- Examples: Include structured examples of typical use cases and expected responses.
- Error handling patterns: Define how errors are communicated and how clients should respond, including retries and fallbacks.
- Authentication details: Clearly document authentication requirements and flows.
- Performance considerations: Note response sizes, throttling limits, and other constraints relevant to token usage.
- Custom traits: Use custom metadata to provide LLM-specific hints and context.
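Most of these enrichments can live directly inside an OpenAPI document. Below is one operation carrying them, expressed as a Python dict for brevity. Note that while OpenAPI permits arbitrary `x-` extension fields, the `x-llm-hint` name here is my own invented example — there is no standard LLM metadata field yet, and the operation itself is hypothetical:

```python
# An OpenAPI operation enriched with the metadata listed above, as a Python
# dict. "x-llm-hint" is an invented extension field; OpenAPI allows arbitrary
# "x-" extensions, but no standard LLM field exists yet.

operation = {
    "operationId": "listOrders",
    "summary": "List a customer's recent orders",            # clear description
    "description": "Returns at most 20 orders, newest first.",
    "x-llm-hint": "Prefer this over searchOrders for simple lookups.",
    "parameters": [{
        "name": "customerId", "in": "path", "required": True,
        "schema": {"type": "string"},
        "example": "cus_123",                                # concrete example
    }],
    "responses": {
        "200": {"description": "Order list (typically under 5 KB)."},   # size hint
        "429": {"description": "Throttled; retry after Retry-After."},  # error pattern
    },
}
```

Every field here answers a question an agent would otherwise have to guess at: what the operation does, when to prefer it, what a valid input looks like, and how to behave when throttled.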
Implement an API Readiness Gate
Just as the mobile revolution introduced “mobile-ready” API gates to ensure compatibility, I strongly suggest implementing an “LLM-ready” gate for APIs. This would involve automated checks and standards to verify that APIs meet criteria for agentic consumption before release.
This practice promotes continuous improvement and alignment with the agentic ecosystem’s needs.
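Such a gate can start as nothing more than a linter run in CI over each operation’s spec. The rules below are illustrative assumptions about what “LLM-ready” might check — missing descriptions, undocumented client errors, parameters without examples — not an established standard:

```python
# A minimal "LLM-ready" gate: automated checks run against each API operation
# before release. The specific rules are illustrative assumptions.

def llm_ready_issues(op: dict) -> list:
    issues = []
    if not op.get("description"):
        issues.append("missing description")
    if not any(code.startswith("4") for code in op.get("responses", {})):
        issues.append("no documented client-error responses")
    for p in op.get("parameters", []):
        if "example" not in p:
            issues.append(f"parameter '{p['name']}' lacks an example")
    return issues

good = {"description": "List orders", "responses": {"200": {}, "404": {}},
        "parameters": [{"name": "id", "example": "ord_1"}]}
bad = {"responses": {"200": {}}, "parameters": [{"name": "id"}]}
```

Wired into CI, a check like this turns "LLM-friendly" from an aspiration into a release criterion, the same way mobile-ready gates once did.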
Improve Data Hygiene and Governance
Drawing parallels with retail personalization, I stress that agent readiness depends heavily on strong data strategies. Good data hygiene, governance, and strategy underpin effective personalization and seamless agent interactions.
Similarly, APIs must be well-governed, consistent, and maintain high-quality metadata to enable agents to operate optimally.
Preparing for the Agentic Future
The agentic world is rapidly becoming a reality. AI agents are autonomously acting on behalf of users, interacting with services, shopping, and managing complex workflows. APIs are the critical enablers of this transformation, but they absolutely must evolve to meet the unique demands of LLM integration.
My insights aim to illuminate the path forward for API providers and SaaS vendors. By embracing standards like MCP, enhancing API documentation with LLM-friendly metadata, and adopting rigorous design and governance practices, organizations can truly position themselves to thrive in the agentic future.
Ultimately, success in this new world requires recognizing that simply being “API-first” is just the beginning. To be truly agent-ready, your APIs must be designed, documented, and governed with AI agents and LLMs firmly in mind—this will unlock new possibilities for automation, personalization, and customer engagement.
I’ll close with a line I keep coming back to: “In order to be agent ready, in order to be providing an MCP server, you gotta be API ready.”





