Model Context Protocol (MCP) Explained
- Priank Ravichandar
- Jan 27
- 4 min read
Updated: Sep 15
An overview of the Model Context Protocol (MCP) for product team members and stakeholders.

Key Takeaways
MCP provides a standardized way to connect AI applications with diverse external capabilities.
It reduces the development and maintenance burden of integrating with external services.
It helps teams make their AI applications more capable, efficient, and user-friendly.
Teams can quickly add new functionalities by connecting to the right MCP servers.
Overview
Note: Generated with NotebookLM using content from Anthropic | Introduction To Model Context Protocol, Anthropic | Model Context Protocol Advanced Topics, and Hugging Face | Model Context Protocol (MCP) Course.
Common FAQs
What is the Model Context Protocol (MCP)?
MCP is a standardized protocol designed to connect AI models with external systems.
Model Context Protocol (MCP) is a communication layer that enables AI models to seamlessly connect with external tools, data sources, and environments without requiring extensive custom integration code. It provides a standardized way to link AI systems to the broader digital world. This shifts the burden of defining and executing these external capabilities away from your application to specialized MCP Servers.
What problem does MCP solve?
MCP addresses the challenge of connecting multiple applications to multiple sources.
Traditionally, integrating multiple AI applications with multiple sources (e.g., external services, databases) has been tedious: developers had to write, test, and maintain a custom integration for every possible pairing, which is complex, time-consuming, and expensive. The challenge of connecting M applications to N sources is known as the "M×N integration problem." For example, integrating an AI model with GitHub's vast API would require a product team to create countless tool schemas and functions.
MCP reduces the M×N problem to a more manageable M+N: each AI application (Host) implements the client side of MCP once, and each external service implements the server side once, drastically reducing complexity and maintenance. For example, instead of integrating an AI model directly with GitHub, a team would just need to connect to an MCP server that provides access to GitHub (along with other relevant tools and functions).
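The integration arithmetic can be made concrete with a quick sketch (the numbers below are illustrative, not from the source):

```python
# Illustrative arithmetic for the M×N integration problem.
# With M AI applications and N external services, point-to-point
# integration needs one custom connector per (app, service) pair.
def custom_integrations(m: int, n: int) -> int:
    return m * n

# With MCP, each app implements the client side once and each
# service implements the server side once.
def mcp_integrations(m: int, n: int) -> int:
    return m + n

# e.g., 10 apps and 20 services:
print(custom_integrations(10, 20))  # 200 custom connectors
print(mcp_integrations(10, 20))     # 30 MCP implementations
```

The gap widens quickly: every new application or service adds a single implementation under MCP, rather than a new connector for every existing counterpart.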
What are the key components of MCP?
MCP's architecture consists of a Host, an MCP Client, and an MCP Server.

MCP operates on a client-server model:
Host: This is your user-facing AI application (e.g., Cursor, VS Code) that end-users interact with directly. The Host orchestrates the overall flow and manages user interactions.
MCP Client: A component within your Host application that manages the 1:1 communication with a specific MCP Server, handling the protocol details.
MCP Server: An external program or service that acts as an interface to an outside service (e.g., GitHub, a document management system). It wraps that service's functionality and exposes it as a standardized set of Tools, Resources, and Prompts.
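The relationships between the three roles can be sketched in a few lines. This is a hypothetical model of the architecture, not the real MCP SDK; all class and tool names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class MCPServer:
    """Wraps an external service; exposes capabilities in a standard shape."""
    name: str
    tools: dict = field(default_factory=dict)  # tool name -> callable

    def list_tools(self) -> list[str]:
        return list(self.tools)

    def call_tool(self, tool: str, **kwargs):
        return self.tools[tool](**kwargs)

@dataclass
class MCPClient:
    """Lives inside the Host; manages a 1:1 connection to one server."""
    server: MCPServer

    def request_tools(self) -> list[str]:
        return self.server.list_tools()

    def execute(self, tool: str, **kwargs):
        return self.server.call_tool(tool, **kwargs)

class Host:
    """The user-facing application; owns one client per connected server."""
    def __init__(self, *servers: MCPServer):
        self.clients = [MCPClient(s) for s in servers]

# Example: a server fronting a stubbed issue tracker.
issues = MCPServer("issues", tools={"count_open": lambda: 42})
host = Host(issues)
print(host.clients[0].request_tools())       # ['count_open']
print(host.clients[0].execute("count_open"))  # 42
```

Note the 1:1 pairing: adding a second server means adding a second client inside the Host, never modifying the server or the other clients.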
How does MCP work?

Here’s an example to understand how MCP works:
A user submits a query to your Host application.
Your Host asks the MCP Client for available tools from an MCP Server.
The MCP Client requests tools from the MCP Server, which responds with a list of its capabilities.
Your Host sends the user's query and the available capabilities to the AI model.
The AI model decides if it needs to call a tool to answer the question.
If a tool is needed, your Host instructs the MCP Client to execute it on the MCP Server.
The MCP Server performs the action (e.g., calls GitHub's API) and sends the results back through the MCP Client to your Host.
The AI model uses these results to formulate a final answer, which your Host then delivers to the user.
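The steps above can be condensed into a runnable sketch. The model is stubbed with a trivial keyword check, and the server call stands in for a real API (e.g., GitHub); all function names and the issue count are hypothetical:

```python
def fake_model(query, capabilities):
    """Stub for the AI model: decides whether a tool call is needed."""
    if "open issues" in query and "count_open_issues" in capabilities:
        return {"tool": "count_open_issues", "args": {}}
    return {"answer": "No tool needed."}

def mcp_server_list_tools():
    """Stub for the MCP Server's capability listing."""
    return ["count_open_issues"]

def mcp_server_call(tool, args):
    """Stub for the MCP Server executing an action against an outside API."""
    return {"count_open_issues": 7}[tool]

def handle_query(query):
    capabilities = mcp_server_list_tools()        # steps 2-3: discover tools
    decision = fake_model(query, capabilities)    # steps 4-5: model decides
    if "tool" in decision:                        # steps 6-7: execute via server
        result = mcp_server_call(decision["tool"], decision["args"])
        return f"There are {result} open issues."  # step 8: final answer
    return decision["answer"]

print(handle_query("How many open issues are there?"))
# There are 7 open issues.
```

The Host never talks to GitHub directly; it only routes between the model and the MCP Server, which is what keeps the integration generic.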
How does MCP help AI systems?
MCP gives AI applications access to tools, resources, and prompts.
MCP servers expose three core capabilities, each controlled by a different part of the application stack:
Tools (Model-Controlled): Functions that the AI model can autonomously invoke to perform actions or retrieve computed data. The model decides when to use them. Use cases include querying APIs, performing calculations, sending messages, etc.
Resources (App-Controlled): Read-only data sources that your application can fetch for context without executing complex logic. The app decides when to fetch resource data. Use cases include populating autocomplete options in a UI, retrieving file contents, adding context to prompts, etc.
Prompts (User-Controlled): Pre-built, high-quality instructions or workflows that users trigger through UI interactions (e.g., slash commands, buttons). A prompt optimizes the use of available Tools and Resources. Use cases include formatting documents, summarizing content, generating reports, etc.
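Under the hood, MCP messages use JSON-RPC 2.0, and each capability type is discovered with its own list request. A small sketch of what those discovery messages look like (method names follow the MCP specification's `tools/list`-style convention; the wrapper function is illustrative):

```python
import json

# Build a JSON-RPC 2.0 discovery request for one MCP capability type.
def list_request(request_id: int, capability: str) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": f"{capability}/list",
    })

# One list request per capability type: Tools, Resources, Prompts.
for i, capability in enumerate(["tools", "resources", "prompts"], start=1):
    print(list_request(i, capability))
```

Keeping discovery separate per capability type is what lets a Host surface tools to the model, resources to the app layer, and prompts to the user interface independently.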
References