
mcp-llm-bridge

Built by bartolli · 335 stars on GitHub

What is mcp-llm-bridge?

MCP implementation that enables communication between MCP servers and OpenAI-compatible LLMs

How to use mcp-llm-bridge?

1. Install a compatible MCP client (like Claude Desktop).
2. Open your configuration settings.
3. Add mcp-llm-bridge using the following command: npx @modelcontextprotocol/mcp-llm-bridge
4. Restart the client and verify the new tools are active.
🛡️ Scoped (Restricted)
npx @modelcontextprotocol/mcp-llm-bridge --scope restricted
🔓 Unrestricted Access
npx @modelcontextprotocol/mcp-llm-bridge

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified High-performance Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

mcp-llm-bridge FAQ

Q: Is mcp-llm-bridge safe?

A: Yes, mcp-llm-bridge follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q: Is mcp-llm-bridge up to date?

A: mcp-llm-bridge is currently active in the registry and has 335 stars on GitHub, a sign of ongoing community interest. Check the repository for the latest activity.

Q: Are there any limits for mcp-llm-bridge?

A: Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub

MCP LLM Bridge

A bridge connecting Model Context Protocol (MCP) servers to OpenAI-compatible LLMs. Primary support for OpenAI API, with additional compatibility for local endpoints that implement the OpenAI API specification.

The implementation provides a bidirectional protocol translation layer between MCP and OpenAI's function-calling interface. It converts MCP tool specifications into OpenAI function schemas and handles the mapping of function invocations back to MCP tool executions. This enables any OpenAI-compatible language model to leverage MCP-compliant tools through a standardized interface, whether using cloud-based models or local implementations like Ollama.
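
As a rough illustration of that translation (not the bridge's actual code; mcp_tool_to_openai and the example tool spec are hypothetical), an MCP tool's name, description, and JSON Schema inputSchema map directly onto OpenAI's function-calling schema:

# Hypothetical sketch of the MCP -> OpenAI schema translation described above.
def mcp_tool_to_openai(tool: dict) -> dict:
    """Map an MCP tool spec onto an OpenAI function-calling tool schema."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP's inputSchema is already JSON Schema, which is exactly
            # what OpenAI's "parameters" field expects.
            "parameters": tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

# Example MCP tool spec, shaped like what an MCP server advertises:
sqlite_query_tool = {
    "name": "query",
    "description": "Run a read-only SQL query",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

openai_tools = [mcp_tool_to_openai(sqlite_query_tool)]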

Read more about MCP in Anthropic's official announcement.

Demo: MCP LLM Bridge Demo (recording available in the GitHub README)

Quick Start

# Install uv, clone the repo, and set up a virtual environment
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .

# Create the test SQLite database used by the example configuration
python -m mcp_llm_bridge.create_test_db

Configuration

OpenAI (Primary)

Create .env:

OPENAI_API_KEY=your_key
OPENAI_MODEL=gpt-4o # or any other OpenAI model that supports tools

Note: reactivate the environment if needed to use the keys in .env: source .venv/bin/activate
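
To confirm the values are actually being picked up, a quick sanity check (a sketch assuming python-dotenv; adapt to however the bridge loads .env):

# Verify that values from .env are readable (assumes python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory
print(os.getenv("OPENAI_MODEL"))  # should print gpt-4o (or your chosen model)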

Then configure the bridge in src/mcp_llm_bridge/main.py:

config = BridgeConfig(
    # MCP server to bridge: here, the SQLite MCP server pointed at test.db
    mcp_server_params=StdioServerParameters(
        command="uvx",
        args=["mcp-server-sqlite", "--db-path", "test.db"],
        env=None
    ),
    # LLM endpoint; base_url=None means the OpenAI API is used
    llm_config=LLMConfig(
        api_key=os.getenv("OPENAI_API_KEY"),
        model=os.getenv("OPENAI_MODEL", "gpt-4o"),
        base_url=None
    )
)

Additional Endpoint Support

The bridge also works with any endpoint implementing the OpenAI API specification:

Ollama

llm_config=LLMConfig(
    api_key="not-needed",
    model="mistral-nemo:12b-instruct-2407-q8_0",
    base_url="http://localhost:11434/v1"
)

Note: After testing various models, including llama3.2:3b-instruct-fp16, I found that mistral-nemo:12b-instruct-2407-q8_0 handles complex queries more effectively.
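
Before wiring Ollama into the bridge, you can sanity-check that its OpenAI-compatible endpoint responds, using the official openai Python client (a sketch; assumes Ollama is running locally and the model has been pulled):

# Sanity-check Ollama's OpenAI-compatible endpoint before using it in the bridge.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="mistral-nemo:12b-instruct-2407-q8_0",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(resp.choices[0].message.content)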

LM Studio

llm_config=LLMConfig(
    api_key="not-needed",
    model="local-model",
    base_url="http://localhost:1234/v1"
)

I didn't test this, but it should work.

Usage

python -m mcp_llm_bridge.main

# Try: "What are the most expensive products in the database?"
# Exit with 'quit' or Ctrl+C
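
Under the hood, a query like the one above goes through the standard OpenAI function-calling loop that the protocol translation layer described earlier implies. A simplified, hypothetical sketch (call_mcp_tool stands in for the bridge's actual MCP execution path):

# Simplified sketch of the request/tool-call loop the bridge performs.
# call_mcp_tool is a hypothetical stand-in for executing a tool via MCP.
import json

def run_turn(client, model, messages, openai_tools, call_mcp_tool):
    while True:
        resp = client.chat.completions.create(
            model=model, messages=messages, tools=openai_tools
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content  # model answered directly; we're done
        messages.append(msg)  # keep the assistant's tool-call turn in history
        for tc in msg.tool_calls:
            # Map the OpenAI function invocation back to an MCP tool execution
            result = call_mcp_tool(tc.function.name, json.loads(tc.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": str(result),
            })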

Running Tests

Install the package with test dependencies:

uv pip install -e ".[test]"

Then run the tests:

python -m pytest -v tests/

License

MIT

Contributing

PRs welcome.


Manual Config

{ "mcpServers": { "mcp-llm-bridge": { "command": "npx", "args": ["mcp-llm-bridge"] } } }