Servers: Model Context Protocol reference implementations for building secure middleware between AI and data sources

As AI develops at breakneck speed, letting large language models (LLMs) access external data and tools safely and efficiently has become a core challenge for developers. The Model Context Protocol (MCP) was created to meet it: like a "USB-C port" for the AI world, it standardizes how AI connects to data sources of every kind. The Servers project introduced here is the official collection of MCP reference implementations, a valuable resource for learning about and building MCP servers.

Whether you are an AI application developer, a toolchain builder, or simply curious about integrating AI with existing systems, this project opens a new door. Let's take a closer look.

Project at a Glance

| Item | Details |
| --- | --- |
| Project name | servers |
| GitHub | https://github.com/modelcontextprotocol/servers |
| Description | Model Context Protocol Servers |
| Author | modelcontextprotocol |
| License | Other |
| Visibility | Public |
| Languages | TypeScript, Python, Go, Java, Kotlin, C#, Rust, Swift, PHP, Ruby |
| Platforms | Windows / macOS / Linux / Web |
| Last updated | 2026-04-06 |

1. Project Overview

The Servers project is the officially maintained collection of MCP (Model Context Protocol) reference server implementations. MCP is an open protocol that standardizes how AI applications provide context and tools to LLMs. In short, MCP acts as a bridge between an AI model and the outside world (file systems, databases, APIs, browsers, and more), letting the AI perform actions and retrieve information in a safe, controlled way.
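On the wire, MCP is JSON-RPC 2.0 carried over a transport such as stdio. As a minimal illustration of the message shapes (the method name comes from the MCP specification; the tool listed in the response is abbreviated for brevity):

```python
import json

# A client asks a server which tools it offers (JSON-RPC 2.0 request)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# An abbreviated response from, say, the Filesystem server
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "read_file", "description": "Read a file's contents"},
        ]
    },
}

# On the stdio transport, each message is serialized as one line of JSON
wire = json.dumps(request)
```

The client matches responses to requests by `id`, so several servers (and several in-flight calls) can coexist cleanly.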

The repository contains multiple reference servers, each demonstrating a different capability or usage pattern of the MCP protocol. These implementations are not designed for direct production use; they are educational examples that help developers understand how MCP works and build their own servers on top of the official SDKs.

The reference servers fall into two main groups:

Core reference servers:

  • Everything: a comprehensive demo server exercising all MCP features, including prompts, resources, and tools
  • Fetch: fetches and converts web content, optimized for efficient LLM consumption
  • Filesystem: secure file operations with configurable access controls
  • Git: tools to read, search, and manipulate Git repositories
  • Memory: a knowledge-graph-based persistent memory system
  • Sequential Thinking: dynamic, reflective problem-solving through thought sequences
  • Time: time and timezone conversion

Archived reference servers (still useful as references, but no longer actively maintained) include AWS KB Retrieval, Brave Search, GitHub, Google Drive, PostgreSQL, Puppeteer, and others. They show how MCP integrates with popular services and databases.

2. Core Strengths

Open source and easy to learn
The code is fully open: you can read, modify, and learn from it freely. Official MCP SDKs cover ten mainstream languages, including TypeScript, Python, Go, Java, C#, Rust, Swift, PHP, Ruby, and Kotlin, so you can get started quickly in whichever stack you already know.

A standard protocol, no reinvented wheels
MCP is an open standard driven by Anthropic (the maker of Claude), intended as a lingua franca for AI integrations. With MCP you no longer build a bespoke integration for each data source; one standard covers every scenario.

Safe, controllable AI integration
MCP gives you fine-grained control over which resources an AI may access and which operations it may perform. For example, the Filesystem server only touches directories you explicitly allow, and the Git server restricts sensitive operations, so the AI cannot cause accidental damage to your system.
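The directory-allowlist idea behind the Filesystem server can be sketched in a few lines. This is a hypothetical check illustrating the principle, not the server's actual code:

```python
from pathlib import Path

# Directories the AI is allowed to touch (hypothetical example paths)
ALLOWED_ROOTS = [Path("/tmp/ai-workspace")]

def is_allowed(requested: str) -> bool:
    # Resolve symlinks and ".." components BEFORE comparing,
    # so a path like "/tmp/ai-workspace/../../etc/passwd" is rejected
    p = Path(requested).resolve()
    return any(
        p == root.resolve() or p.is_relative_to(root.resolve())
        for root in ALLOWED_ROOTS
    )
```

Resolving both sides before comparison is the important detail; a naive string-prefix check would be trivially bypassed with `..` segments or symlinks.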

An active community ecosystem
Although this project maintains only a small set of reference implementations, the broader MCP ecosystem is already large. The official documentation lists hundreds of third-party MCP servers covering nearly every domain, from databases and cloud services to social media and payments. You can use community servers directly or customize your own starting from the references.

AI meets your existing systems
Through MCP, AI tools that support it (such as Claude and Cursor) can operate directly on your databases, file systems, and version control, enabling real AI-assisted development and workflow automation.

3. Use Cases

Learning and understanding MCP
If you are new to MCP, studying these reference implementations is the most efficient way to learn. Each server is modest in size and cleanly structured, so you can see concretely how the MCP SDKs are used to create tools, resources, and prompts.

Building custom MCP servers
When your AI needs access to an internal system, a proprietary API, or an unusual data source, these implementations provide structures and patterns to follow: study Fetch to learn how to retrieve web content, or Filesystem to learn how to expose local resources safely.

Rapid prototyping
Before committing to a full build, you can assemble a prototype from the reference servers to validate MCP for your scenario. The Everything server is particularly handy for testing MCP client features.

Enterprise AI applications
For businesses, MCP offers a standardized way to wire AI into workflows. You can build MCP servers that give the AI access to core systems such as CRM, ERP, and databases, powering intelligent data processing and decision support.

Extending AI toolchains
If you build AI-assisted developer tools (editor plugins, CLI tools), MCP servers can extend their capabilities so the AI can carry out richer tasks: code search, repository operations, documentation generation, and more.

4. Installation

System Requirements

| Tool | Purpose | Download / Install |
| --- | --- | --- |
| Python | Run MCP servers written in Python | https://python.org (3.10 or later) |
| Node.js | Run MCP servers written in TypeScript/JavaScript | https://nodejs.org (18.0 or later) |
| Git | Clone the repository | https://git-scm.com |

Step-by-Step Installation

Step 1: Install the runtimes

Install the runtime that matches the servers you want to run:

```bash
# Install Python (macOS, via Homebrew)
brew install python@3.10

# Or download an installer from python.org
# (Windows users: use the official installer)

# Install Node.js (nvm recommended)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install 18
nvm use 18
```

Step 2: Clone the repository

```bash
# Clone the repository
git clone https://github.com/modelcontextprotocol/servers.git

# Enter the project directory
cd servers
```

Step 3: Install a server's dependencies

Each server lives in its own subdirectory and installs its dependencies separately:

```bash
# Example: the Filesystem server (written in TypeScript)
cd src/filesystem
npm install

# Example: the Fetch server (written in Python; packaged with pyproject.toml)
cd ../fetch
pip install -e .
```

Step 4: Configure your MCP client

Using Claude Desktop as an example, edit its configuration file to register MCP servers:

```json
// Windows: %APPDATA%\Claude\claude_desktop_config.json
// macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": ["/path/to/servers/src/filesystem/build/index.js", "/allowed/path"]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

Note that the Fetch server is Python-based: `uvx mcp-server-fetch` runs its published package, and `python -m mcp_server_fetch` works if you installed it with pip.
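If you script this setup, a small helper can locate the config file and sanity-check it before restarting the client. The macOS and Windows paths follow the comments above; on Linux the location may differ, and this helper is an illustration rather than part of the project:

```python
import json
import os
import platform
from pathlib import Path

def claude_config_path() -> Path:
    """Location of claude_desktop_config.json (macOS/Windows, per the paths above)."""
    if platform.system() == "Windows":
        return Path(os.environ["APPDATA"]) / "Claude" / "claude_desktop_config.json"
    return (Path.home() / "Library" / "Application Support"
            / "Claude" / "claude_desktop_config.json")

def configured_servers(text: str) -> list[str]:
    """Parse the config and list registered server names; raises on malformed JSON."""
    return sorted(json.loads(text).get("mcpServers", {}))
```

Running the config through `json.loads` catches the most common failure mode (a stray comma or quote) before you spend time restarting Claude Desktop.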

Step 5: Build the TypeScript servers

Servers written in TypeScript must be compiled first:

```bash
# Inside a server's directory
npm run build

# Or build all TypeScript servers from the repository root
npm run build-all
```

5. Usage Examples

Example 1: File operations with the Filesystem server

The Filesystem server demonstrates how an AI can safely read, write, and manage files. Configuration and usage:

Enable the Filesystem server in Claude Desktop:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "/Users/username/servers/src/filesystem/build/index.js",
        "/Users/username/Documents/ai-workspace",
        "/Users/username/Downloads/temp"
      ]
    }
  }
}
```

After restarting Claude Desktop, you can give the AI instructions such as:

  • "Read the config.json file in my Documents/ai-workspace directory"
  • "Create a file named notes.txt in Downloads/temp with the content 'This note was created by AI'"
  • "List all files in Documents/ai-workspace"

The AI interprets these instructions and performs the file operations safely through the MCP server.

Example 2: Retrieving web content with the Fetch server

The Fetch server lets the AI retrieve and process web pages, which is ideal when up-to-date information is needed:

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

Once configured, you can ask things like "Fetch https://example.com/article and summarize the key points."

The Fetch server automatically converts pages into LLM-friendly Markdown, stripping ads and other unnecessary elements.
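The core idea, distilling noisy HTML into plain text an LLM can digest, can be sketched with the standard library alone. The real Fetch server uses a proper content-extraction and Markdown pipeline, so treat this only as an illustration of the principle:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Naive sketch: keep readable text, drop scripts/styles/navigation."""
    SKIP = {"script", "style", "nav", "aside"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # > 0 while inside a tag we want to ignore

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)
```

For example, `html_to_text("<h1>Title</h1><script>x=1</script><p>Body</p>")` keeps the heading and paragraph text and drops the script.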

Example 3: Persistent memory with the Memory server

The Memory server shows how a knowledge graph can give an AI long-term memory. The official implementation is written in TypeScript; the following is a simplified key/value sketch of the same idea, built with the MCP Python SDK:

```python
# A simplified memory server built with the MCP Python SDK
import asyncio

import mcp.server.stdio
import mcp.types as types
from mcp.server import Server

server = Server("memory-example")

# In-process store; contents are lost when the server exits
memories: dict[str, str] = {}

@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="remember",
            description="Store a memory",
            inputSchema={
                "type": "object",
                "properties": {
                    "key": {"type": "string"},
                    "value": {"type": "string"},
                },
                "required": ["key", "value"],
            },
        ),
        types.Tool(
            name="recall",
            description="Recall a memory",
            inputSchema={
                "type": "object",
                "properties": {"key": {"type": "string"}},
                "required": ["key"],
            },
        ),
    ]

@server.call_tool()
async def handle_call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "remember":
        memories[arguments["key"]] = arguments["value"]
        return [types.TextContent(type="text", text=f"Remembered: {arguments['key']}")]
    if name == "recall":
        value = memories.get(arguments["key"], "No such memory")
        return [types.TextContent(type="text", text=value)]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # Serve over stdin/stdout, as MCP clients expect
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream,
                         server.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```

With a memory server configured, the AI can store information and recall it in later conversations:

  • "Remember that my name is Zhang San"
  • "Remember that the project deadline is May 1, 2026"
  • "What is my name?"
  • "When is the project due?"
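The official Memory server actually models memory as a knowledge graph of entities, observations, and relations rather than flat key/value pairs. A minimal sketch of that data model (class and method names here are hypothetical):

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy entity/relation store in the spirit of the Memory server."""

    def __init__(self):
        self.entities: dict[str, list[str]] = {}  # entity name -> observations
        self.relations = defaultdict(list)        # source -> [(relation, target)]

    def add_observation(self, entity: str, observation: str) -> None:
        self.entities.setdefault(entity, []).append(observation)

    def relate(self, source: str, relation: str, target: str) -> None:
        self.relations[source].append((relation, target))

    def recall(self, entity: str) -> dict:
        return {
            "observations": self.entities.get(entity, []),
            "relations": self.relations.get(entity, []),
        }

g = KnowledgeGraph()
g.add_observation("Zhang San", "project deadline is 2026-05-01")
g.relate("Zhang San", "works_on", "servers")
```

Relations are what make this more powerful than key/value storage: the AI can answer "what is Zhang San working on?" by following edges rather than matching exact keys.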

Example 4: Combining multiple servers

MCP's real power is running several servers at once. The AI can simultaneously access the file system, fetch web content, and run Git operations:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": ["/path/to/filesystem/build/index.js", "/workspace"]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/workspace/repo"]
    }
  }
}
```

You can then issue compound instructions:

  • "Extract the installation instructions from this project's README on GitHub and save them to the local docs directory"
  • "Check the current Git repository's status, get the latest commit message, and write it to a log file"

6. Frequently Asked Questions

Q1: The server fails to start with "module not found" or "command not found"

Solution: Make sure all dependencies are installed and the project has been built. TypeScript servers need `npm install` followed by `npm run build`. Also check that the entry-point path in package.json (and in your client configuration) is correct.

Q2: Claude Desktop does not recognize the MCP server

Solution: Verify the configuration file's path and that the JSON is well formed. On Windows, use double backslashes (\\) or forward slashes (/) in paths. After editing the configuration you must fully quit and restart Claude Desktop; merely closing the window is not enough.

Q3: The Filesystem server reports "permission denied"

Solution: Check that the directory paths you granted to the server are correct and that the process has read/write permission on them. The Filesystem server is secure by default: it only allows access to explicitly listed directories.

Q4: The Fetch server cannot retrieve some websites

Solution: Some sites block automated access. The Fetch server issues plain HTTP requests, so sites that require JavaScript rendering may return incomplete content. Consider a browser-automation approach such as Puppeteer for those.

Q5: Can Python and TypeScript servers run at the same time?

Solution: Yes. An MCP client can launch multiple server processes regardless of implementation language, as long as each server runs independently and communicates over standard input/output.
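That stdio contract is easy to see in miniature. Below, a client launches a child process and exchanges one newline-delimited JSON-RPC message with it. The child here is a stub that echoes a canned result, standing in for any real MCP server regardless of language:

```python
import json
import subprocess
import sys

# Stand-in "server": reads one JSON-RPC message per line from stdin,
# writes one response per line to stdout. A real MCP server implements
# the actual protocol methods; the transport shape is the same.
CHILD = r'''
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"ok": True}}
    sys.stdout.write(json.dumps(resp) + "\n")
    sys.stdout.flush()
'''

def call(proc: subprocess.Popen, method: str, msg_id: int) -> dict:
    """Send one request line and read one response line."""
    req = {"jsonrpc": "2.0", "id": msg_id, "method": method}
    proc.stdin.write(json.dumps(req) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
resp = call(proc, "initialize", 1)
proc.stdin.close()
proc.wait()
```

Because each server is just a subprocess with piped stdio, a client can run any mix of Python, TypeScript, or Go servers side by side.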

Q6: How do I debug an MCP server?

Solution: Because MCP servers talk to the client over stdin/stdout, interactive debugging is awkward. A common technique is logging to a file:

```python
import logging
logging.basicConfig(filename='/tmp/mcp-server.log', level=logging.DEBUG)
```

Then tail the log file to trace server behavior. For TypeScript servers, write diagnostics to stderr with console.error: stdout is reserved for protocol messages, and stderr output usually shows up in the client's logs.

7. Summary

The Servers project is a valuable resource for learning and using MCP. The maintainers are explicit that these servers are reference implementations rather than production-ready software, and that is precisely their value: they are small, clear, and instructive. By studying them you can:

  1. Quickly grasp MCP's core concepts: tools, resources, and prompts
  2. Learn to build MCP servers in several languages, especially TypeScript and Python
  3. Gain hands-on experience with safe, controllable AI integrations
  4. Choose an appropriate MCP server architecture pattern for your own project

A word of caution: these reference implementations have not undergone rigorous security auditing, and using them directly in production is risky. For real projects, reimplement on top of the official SDKs and add appropriate safeguards: authentication, authorization, input validation, rate limiting, and so on.
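As one concrete example of such a safeguard, rate limiting can be added in front of a tool handler with a token bucket. This sketch is illustrative, not a production-ready implementation:

```python
import time

class TokenBucket:
    """Allow up to `capacity` burst calls, refilled at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed since the last call, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A server would check `bucket.allow()` at the top of its tool handler and return an error instead of executing the operation when the budget is exhausted.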

The MCP ecosystem is evolving rapidly, with hundreds of third-party servers already covering nearly every mainstream service and platform. Whether you want to extend an AI assistant's capabilities or add an AI interface to your product, MCP offers an open, standardized path. Start with the Servers project and take your first step toward building AI-integrated applications.

Comments:

AlexJohnson|This tutorial is incredibly helpful! I've been struggling to understand MCP for weeks, and this finally made it click.

MariaChen|The code examples are very practical. I especially liked the memory server example with the knowledge graph approach.

SamWise|Is there any plan to add more reference servers like database connectors? The archived ones look promising but outdated.

TechExplorer|Great breakdown of the installation process. I had issues with path configurations before, but the JSON examples cleared it up.

DeveloperDan|I wish the article covered more about the security implications of running these servers. Still, very informative read.

CodeNinja|The comparison table for system requirements is super useful. Many tutorials skip this basic but crucial info.

SarahMiller|Finally someone explains MCP in plain English! The "USB-C for AI" analogy is perfect. Will share with my team.

OpenSourceFan|It's awesome to see Anthropic supporting so many programming languages with their SDKs. Makes adoption much easier.

JohnDoe|The tutorial is well-structured, but I found the "Expected reading time 243 minutes" a bit off. Took me about 30 minutes.

LisaWang|I appreciate how you emphasized these are reference implementations, not production-ready. Important distinction for beginners.

CodingGuru|The composite server example showing how to use filesystem, fetch, and git together is gold. That's the real power of MCP.

EmmaBrown|Would love to see a follow-up article about deploying MCP servers in production with proper authentication and scaling.

PythonMaster|The Python code snippet for the memory server is clean and well-explained. Easy to extend for custom use cases.

ChrisEvans|This article saved me hours of digging through GitHub repos. The configuration examples alone are worth the read.

AnnaTaylor|I'm impressed by how many third-party servers already exist in the MCP ecosystem. The list in the article is mind-blowing.

BobMartin|The troubleshooting section is practical. I ran into the permission denied issue and the solution worked perfectly.

JennyLee|For a technical tutorial, this is very accessible. My junior devs could follow along without much hand-holding.

KevinZhang|The formatting is excellent - code blocks, tables, lists. Makes complex information easy to digest. No emojis is a plus.

NancyDavis|I've been looking for something exactly like this. Our company wants to integrate AI with internal tools, and MCP seems perfect.

OliverKing|The note about using console.error for debugging TypeScript servers is a pro tip. Would have saved me hours last week.

PatriciaMoore|Great job highlighting the reference servers' educational purpose. Too many people just copy-paste without understanding.

QuincyAdams|The step-by-step installation is foolproof. Even our intern got it working on the first try. Well done!

RachelGreen|I'd love to see benchmarks comparing performance across different language implementations. Maybe for a future article?

SteveBrown|The explanation of how MCP provides security through explicit directory permissions in Filesystem server is crucial.

TinaWhite|This article convinced me to try MCP for our next project. The standardization aspect is very appealing.

UlyssesGrant|The memory server example opened my eyes to persistent AI context. Game-changer for customer support bots.

VictorHugo|Excellent write-up. The only thing missing is a video tutorial, but the text is so clear it's not strictly necessary.

WendyLiu|I appreciate that you included the "archived" servers. Even if not maintained, they're still great learning resources.

XavierCarter|The combination of theory and practice in this tutorial is balanced perfectly. Not too shallow, not too overwhelming.

YolandaYoung|This is the kind of content that makes open source accessible to everyone. Thank you for your effort and clarity.

ZacharyAllen|The disclaimer about security auditing is important. Too many people assume reference code is production-ready.

AmyWilson|I successfully built a custom MCP server for our internal API after reading this. The reference implementations were the perfect starting point.

BrianClark|The MCP ecosystem is growing so fast. This article is a great snapshot of what's available right now.

CatherineLee|The table showing languages supported by SDKs is very reassuring. We can stick with Python for our team.

DavidKim|243 minutes estimated reading time seemed like a joke, but I actually spent that long exploring all the linked resources. Worth it.
