
Portkey AI/gateway

Built by Portkey-AI • 11,075 stars

What is Portkey AI/gateway?

A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs and apply 50+ AI Guardrails through 1 fast & friendly API.

How to use Portkey AI/gateway?

1. Install a compatible MCP client (like Claude Desktop).
2. Open your configuration settings.
3. Add Portkey AI/gateway using the following command: npx @modelcontextprotocol/portkey-ai-gateway
4. Restart the client and verify the new tools are active.
πŸ›‘οΈ Scoped (Restricted)
npx @modelcontextprotocol/portkey-ai-gateway --scope restricted
🔓 Unrestricted Access
npx @modelcontextprotocol/portkey-ai-gateway

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified High-performance Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

Portkey AI/gateway FAQ

Q

Is Portkey AI/gateway safe?

Yes, Portkey AI/gateway follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q

Is Portkey AI/gateway up to date?

Portkey AI/gateway is currently active in the registry, and its 11,075 stars on GitHub reflect strong community adoption and ongoing maintenance.

Q

Are there any limits for Portkey AI/gateway?

Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub
<p align="right"> <strong>English</strong> | <a href="./.github/README.cn.md">δΈ­ζ–‡</a> | <a href="./.github/README.jp.md">ζ—₯本θͺž</a> </p>

[!IMPORTANT] :rocket: Gateway 2.0 (Pre-Release) Portkey's core enterprise gateway is merging into open-source with our 2.0 release. You can try the pre-release branch here. Read more about what's next for Portkey in our Series A announcement.

<div align="center">

🆕 Portkey Models - Open-source LLM pricing for 2,300+ models across 40+ providers. Explore →

AI Gateway

Route to 250+ LLMs with 1 fast & friendly API

<img src="https://cfassets.portkey.ai/sdk.gif" width="550px" alt="Portkey AI Gateway Demo showing LLM routing capabilities" style="margin-left:-35px">

Docs | Enterprise | Hosted Gateway | Changelog | API Reference


<a href="https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/quickcreate?stackName=portkey-gateway&templateURL=https://portkey-gateway-ec2-quicklaunch.s3.us-east-1.amazonaws.com/portkey-gateway-ec2-quicklaunch.template.yaml"><img src="https://img.shields.io/badge/Deploy_to_EC2-232F3E?style=for-the-badge&logo=amazonwebservices&logoColor=white" alt="Deploy to AWS EC2" width="105"/></a>

</div> <br/>

The AI Gateway is designed for fast, reliable & secure routing to 1600+ language, vision, audio, and image models. It is a lightweight, open-source, and enterprise-ready solution that allows you to integrate with any language model in under 2 minutes.

  • Blazing fast (<1ms latency) with a tiny footprint (122kb)
  • Battle tested, with over 10B tokens processed every day
  • Enterprise-ready with enhanced security, scale, and custom deployments
<br>

What can you do with the AI Gateway?

<br><br>

[!TIP] Starring this repo helps more developers discover the AI Gateway 🙏🏻


<br> <br>

Quickstart (2 mins)

1. Setup your AI Gateway

# Run the gateway locally (needs Node.js and npm)
npx @portkey-ai/gateway

The Gateway is running on http://localhost:8787/v1

The Gateway Console is running on http://localhost:8787/public/

<sup> Deployment guides: &nbsp; <a href="https://portkey.wiki/gh-18"><img height="12" width="12" src="https://cfassets.portkey.ai/logo/dew-color.svg" /> Portkey Cloud (Recommended)</a> &nbsp; <a href="./docs/installation-deployments.md#docker"><img height="12" width="12" src="https://cdn.simpleicons.org/docker/3776AB" /> Docker</a> &nbsp; <a href="./docs/installation-deployments.md#nodejs-server"><img height="12" width="12" src="https://cdn.simpleicons.org/node.js/3776AB" /> Node.js</a> &nbsp; <a href="./docs/installation-deployments.md#cloudflare-workers"><img height="12" width="12" src="https://cdn.simpleicons.org/cloudflare/3776AB" /> Cloudflare</a> &nbsp; <a href="./docs/installation-deployments.md#replit"><img height="12" width="12" src="https://cdn.simpleicons.org/replit/3776AB" /> Replit</a> &nbsp; <a href="./docs/installation-deployments.md"> Others...</a> </sup>

2. Make your first request

# pip install -qU portkey-ai

from portkey_ai import Portkey

# OpenAI compatible client
client = Portkey(
    provider="openai", # or 'anthropic', 'bedrock', 'groq', etc
    Authorization="sk-***" # the provider API key
)

# Make a request through your AI Gateway
client.chat.completions.create(
    messages=[{"role": "user", "content": "What's the weather like?"}],
    model="gpt-4o-mini"
)

<sup>Supported Libraries: &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/javascript/3776AB" /> JS &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/python/3776AB" /> Python &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/gnubash/3776AB" /> REST &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/openai/3776AB" /> OpenAI SDKs &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/langchain/3776AB" /> Langchain &nbsp; LlamaIndex &nbsp; Autogen &nbsp; CrewAI &nbsp; More.. </sup>

On the Gateway Console (http://localhost:8787/public/) you can see all of your local logs in one place.

<img src="https://github.com/user-attachments/assets/362bc916-0fc9-43f1-a39e-4bd71aac4a3a" width="400" />
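Because the gateway exposes an OpenAI-compatible endpoint at http://localhost:8787/v1, you can also call it without any SDK. The sketch below uses only the Python standard library; it assumes the default port 8787 and selects the provider via an `x-portkey-provider` header — check the official docs for the exact header names your gateway version expects.

```python
# Raw-HTTP sketch of the request from step 2, assuming a locally running
# gateway on the default port. Header names follow Portkey's documented
# convention but should be verified against the docs for your version.
import json
import urllib.request

def build_request(provider: str, api_key: str, model: str, content: str):
    """Assemble an OpenAI-compatible payload plus gateway headers."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
        "x-portkey-provider": provider,
    }
    return payload, headers

def send_chat(provider: str, api_key: str, model: str, content: str) -> dict:
    """POST to the local gateway (requires the gateway to be running)."""
    payload, headers = build_request(provider, api_key, model, content)
    req = urllib.request.Request(
        "http://localhost:8787/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Any OpenAI-compatible client can point at the same base URL, which is why the SDKs listed above work without code changes.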

3. Routing & Guardrails

Configs in the LLM gateway allow you to create routing rules, add reliability, and set up guardrails.

config = {
  "retry": {"attempts": 5},

  "output_guardrails": [{
    "default.contains": {"operator": "none", "words": ["Apple"]},
    "deny": True
  }]
}

# Attach the config to the client
client = client.with_options(config=config)

client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Reply randomly with Apple or Bat"}]
)

# This will always respond with "Bat", since the guardrail denies any reply containing "Apple". The retry config retries up to 5 times before giving up.
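The deny behavior above can be made concrete with a small standalone check. This is a hypothetical reimplementation of the `default.contains` guardrail with operator "none", written for intuition only — it is not Portkey's internal guardrail code.

```python
# Illustrative sketch of a "contains none of these words" output guardrail,
# mirroring the output_guardrails config above. Hypothetical reimplementation,
# not Portkey's actual code.
from typing import Optional

def contains_none(output: str, words: list) -> bool:
    """Return True when the output passes: it contains none of the words."""
    lowered = output.lower()
    return not any(w.lower() in lowered for w in words)

def apply_guardrail(output: str, words: list, deny: bool) -> Optional[str]:
    """Return the output if it passes; None when the check fails and deny=True."""
    if contains_none(output, words):
        return output
    return None if deny else output  # deny=False would log the failure but let it through
```

With `deny=True`, a reply of "Apple" is rejected, and the retry policy gives the model further chances to produce a passing answer.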
<div align="center"> <img src="https://portkey.ai/blog/content/images/size/w1600/2024/11/image-15.png" width=600 title="Request flow through Portkey's AI gateway with retries and guardrails" alt="Request flow through Portkey's AI gateway with retries and guardrails"/> </div>

You can do a lot more with configs in your AI gateway. Jump to examples →

<br/>

Enterprise Version (Private deployments)

<sup>

<img height="12" width="12" src="https://cfassets.portkey.ai/amazon-logo.svg" /> AWS &nbsp; <img height="12" width="12" src="https://cfassets.portkey.ai/azure-logo.svg" /> Azure &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/googlecloud/3776AB" /> GCP &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/redhatopenshift/3776AB" /> OpenShift &nbsp; <img height="12" width="12" src="https://cdn.simpleicons.org/kubernetes/3776AB" /> Kubernetes

</sup>

The LLM Gateway's enterprise version offers advanced capabilities for org management, governance, security and more out of the box. View Feature Comparison β†’

The enterprise deployment architecture for supported platforms is available here - Enterprise Private Cloud Deployments

<a href="https://portkey.sh/demo-13"><img src="https://portkey.ai/blog/content/images/2024/08/Get-API-Key--5-.png" height=50 alt="Book an enterprise AI gateway demo" /></a><br/>

<br>

MCP Gateway

MCP Gateway provides a centralized control plane for managing MCP (Model Context Protocol) servers across your organization.

  • Authentication - Single auth layer at the gateway. Users authenticate once; your MCP servers receive verified requests
  • Access Control - Control which teams and users can access which servers and tools. Revoke access instantly
  • Observability - Every tool call logged with full context: who called what, parameters, response, latency
  • Identity Forwarding - Forward user identity (email, team, roles) to MCP servers automatically

Works with Claude Desktop, Cursor, VS Code, and any MCP-compatible client. Get started →
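The identity-forwarding bullet can be sketched as follows: the gateway authenticates the user once, then attaches identity metadata to every request it proxies to a downstream MCP server. The header names below are hypothetical, chosen for illustration — they are not Portkey's documented header set.

```python
# Sketch of identity forwarding at an MCP gateway. The x-user-* header names
# are illustrative assumptions, not Portkey's actual headers.
def forward_identity(base_headers: dict, user: dict) -> dict:
    """Return a copy of base_headers with the verified user's identity attached."""
    headers = dict(base_headers)  # avoid mutating the caller's dict
    headers["x-user-email"] = user["email"]
    headers["x-user-team"] = user.get("team", "")
    headers["x-user-roles"] = ",".join(user.get("roles", []))
    return headers
```

Downstream MCP servers can then make per-user authorization decisions without running their own auth layer.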

<br>

Core Features

Reliable Routing

  • <a href="https://portkey.wiki/gh-37">Fallbacks</a>: Fallback to another provider or model on failed requests using the LLM gateway. You can specify the errors on which to trigger the fallback. Improves reliability of your application.
  • <a href="https://portkey.wiki/gh-38">Automatic Retries</a>: Automatically retry failed requests up to 5 times. An exponential backoff strategy spaces out retry attempts to prevent network overload.
  • <a href="https://portkey.wiki/gh-39">Load Balancing</a>: Distribute LLM requests across multiple API keys or AI providers with weights to ensure high availability and optimal performance.
  • <a href="https://portkey.wiki/gh-40">Request Timeouts</a>: Manage unruly LLMs & latencies by setting up granular request timeouts, allowing automatic termination of requests that exceed a specified duration.
  • <a href="https://portkey.wiki/gh-41">Multi-modal LLM Gateway</a>: Call vision, audio (text-to-speech & speech-to-text), and image generation models from multiple providers, all using the familiar OpenAI signature.
  • <a href="https://portkey.wiki/gh-42">Realtime APIs</a>: Call realtime APIs launched by OpenAI through the integrated websockets server.

Security & Accuracy

  • <a href="https://portkey.wiki/gh-88">Guardrails</a>: Verify your LLM inputs and outputs to adhere to your specified checks. Choose from the 40+ pre-built guardrails to ensure compliance with security and accuracy standards. You can <a href="https://portkey.wiki/gh-43">bring your own guardrails</a> or choose from our <a href="https://portkey.wiki/gh-44">many partners</a>.
  • Secure Key Management: Use your own keys or generate virtual keys on the fly.
  • Role-based access control: Granular access control for your users, workspaces and API keys.
  • <a href="https://portkey.wiki/gh-47">Compliance & Data Privacy</a>: The AI gateway is SOC2, HIPAA, GDPR, and CCPA compliant.

Cost Management

  • Smart caching: Cache responses from LLMs to reduce costs and improve latency. Supports simple and semantic* caching.
  • Usage analytics: Monitor and analyze your AI and LLM usage, including request volume, latency, costs and error rates.
  • Provider optimization*: Automatically switch to the most cost-effective provider based on usage patterns and pricing models.
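"Simple" caching as described above can be pictured as a lookup keyed on the exact request: identical requests hit an in-memory cache instead of the provider. Semantic caching (matching by meaning) additionally needs embeddings and is only gestured at here — this is an intuition sketch, not Portkey's implementation.

```python
# Sketch of simple response caching: hash the normalized request and reuse
# the stored response on an exact match. Not Portkey's actual cache.
import hashlib
import json

_cache = {}

def cache_key(model: str, messages: list) -> str:
    """Stable key for a request: hash of its canonical JSON form."""
    blob = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

def cached_call(model: str, messages: list, call_provider):
    key = cache_key(model, messages)
    if key not in _cache:                 # miss: pay for one provider call
        _cache[key] = call_provider(model, messages)
    return _cache[key]                    # hit: free and near-instant
```

A semantic cache would replace the exact-hash lookup with a nearest-neighbor search over request embeddings, trading exactness for a higher hit rate.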

Collaboration & Workflows

<sup> *&nbsp;Available in hosted and enterprise versions </sup> <br>

Portkey Models

Open-source LLM pricing database for 40+ providers - used by the Gateway for cost tracking.

GitHub | Model Explorer

<br>

Cookbooks

β˜„οΈ Trending

🚨 Latest

View all cookbooks → <br/><br/>

Supported Providers

Explore Gateway integrations with 45+ providers and 8+ agent frameworks.

| Provider | Support | Stream |
| --- | --- | --- |
| <img src="docs/images/openai.png" width=35 /> OpenAI | ✅ | ✅ |
| <img src="docs/images/azure.png" width=35> Azure OpenAI | ✅ | ✅ |
| <img src="docs/images/anyscale.png" width=35> Anyscale | ✅ | ✅ |
| <img src="https://upload.wikimedia.org/wikipedia/commons/2/2d/Google-favicon-2015.png" width=35> Google Gemini | ✅ | ✅ |
| <img src="docs/images/anthropic.png" width=35> Anthropic | ✅ | ✅ |
| <img src="docs/images/cohere.png" width=35> Cohere | ✅ | ✅ |
| <img src="https://assets-global.website-files.com/64f6f2c0e3f4c5a91c1e823a/654693d569494912cfc0c0d4_favicon.svg" width=35> Together AI | ✅ | ✅ |
| <img src="https://www.perplexity.ai/favicon.svg" width=35> Perplexity | ✅ | ✅ |
| <img src="https://docs.mistral.ai/img/favicon.ico" width=35> Mistral | ✅ | ✅ |
| <img src="https://docs.nomic.ai/img/nomic-logo.png" width=35> Nomic | ✅ | ✅ |
| <img src="https://files.readme.io/d38a23e-small-studio-favicon.png" width=35> AI21 | ✅ | ✅ |
| <img src="https://platform.stability.ai/small-logo-purple.svg" width=35> Stability AI | ✅ | ✅ |
| <img src="https://deepinfra.com/_next/static/media/logo.4a03fd3d.svg" width=35> DeepInfra | ✅ | ✅ |
| <img src="https://ollama.com/public/ollama.png" width=35> Ollama | ✅ | ✅ |
| <img src="https://novita.ai/favicon.ico" width=35> Novita AI | ✅ | ✅ |

View the complete list of 200+ supported models here

<br>
<br>

Agents

Gateway seamlessly integrates with popular agent frameworks. Read the documentation here.

| Framework | Call 200+ LLMs | Advanced Routing | Caching | Logging & Tracing* | Observability* | Prompt Management* |
| --- | --- | --- | --- | --- | --- | --- |
| Autogen | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| CrewAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| LangChain | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Phidata | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Llama Index | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Control Flow | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Build Your Own Agents | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
<img src="https://io.net/favicon.ico" width=35> IO Intelligence ✅ ✅
<br>

*Available on the hosted app. For detailed documentation click here.

Gateway Enterprise Version

Make your AI app more <ins>reliable</ins> and <ins>forward compatible</ins>, while ensuring complete <ins>data security</ins> and <ins>privacy</ins>.

✅  Secure Key Management - for role-based access control and tracking <br>
✅  Simple & Semantic Caching - to serve repeat queries faster & save costs <br>
✅  Access Control & Inbound Rules - to control which IPs and Geos can connect to your deployments <br>
✅  PII Redaction - to automatically remove sensitive data from your requests to prevent inadvertent exposure <br>
✅  SOC2, ISO, HIPAA, GDPR Compliances - for best security practices <br>
✅  Professional Support - along with feature prioritization <br>

Schedule a call to discuss enterprise deployments

<br>

Contributing

The easiest way to contribute is to pick an issue with the good first issue tag 💪. Read the contribution guidelines here.

Bug Report? File here | Feature Request? File here

Getting Started with the Community

Join our weekly AI Engineering Hours every Friday (8 AM PT) to:

  • Meet other contributors and community members
  • Learn advanced Gateway features and implementation patterns
  • Share your experiences and get help
  • Stay updated with the latest development priorities

Join the next session → | Meeting notes

<br>

Community

Join our growing community around the world, for help, ideas, and discussions on AI.

<!-- - Questions tagged #portkey on [Stack Overflow](https://stackoverflow.com/questions/tagged/portkey) -->


Global Ranking

Trust Score: 8.5 (MCPHub Index)

Based on codebase health & activity.

Manual Config

{
  "mcpServers": {
    "portkey-ai-gateway": {
      "command": "npx",
      "args": ["portkey-ai-gateway"]
    }
  }
}