
bifrost

Built by maximhq • 3,270 stars

What is bifrost?

Fastest enterprise AI gateway (50x faster than LiteLLM) with an adaptive load balancer, cluster mode, guardrails, support for 1,000+ models, and <100 µs overhead at 5k RPS.

How to use bifrost?

1. Install a compatible MCP client (like Claude Desktop).
2. Open your configuration settings.
3. Add bifrost using the following command: npx @maximhq/bifrost
4. Restart the client and verify the new tools are active.

🛡️ Scoped (Restricted)
npx @maximhq/bifrost --scope restricted
🔓 Unrestricted Access
npx @maximhq/bifrost

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified High-performance Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

bifrost FAQ

Q: Is bifrost safe?

Yes, bifrost follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q: Is bifrost up to date?

bifrost is actively listed in the registry, and its 3,270 stars on GitHub reflect broad community adoption and ongoing maintenance.

Q: Are there any limits for bifrost?

Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub

Bifrost AI Gateway

Go Report Card · Discord · Known Vulnerabilities · codecov · Docker Pulls · Run In Postman · Artifact Hub · License

The fastest way to build AI applications that never go down

Bifrost is a high-performance AI gateway that unifies access to 15+ providers (OpenAI, Anthropic, AWS Bedrock, Google Vertex, and more) through a single OpenAI-compatible API. Deploy in seconds with zero configuration and get automatic failover, load balancing, semantic caching, and enterprise-grade features.

Quick Start

Get started

Go from zero to production-ready AI gateway in under a minute.

Step 1: Start Bifrost Gateway

# Install and run locally
npx -y @maximhq/bifrost

# Or use Docker
docker run -p 8080:8080 maximhq/bifrost

Step 2: Configure via Web UI

# Open the built-in web interface
open http://localhost:8080

Step 3: Make your first API call

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, Bifrost!"}]
  }'

That's it! Your AI gateway is running with a web interface for visual configuration, real-time monitoring, and analytics.
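
The same request works from any HTTP client. As a minimal sketch, here is Step 3 in Go using only the standard library, assuming the gateway is running locally as started above:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same request as the curl example above, sent from Go.
	body := []byte(`{
		"model": "openai/gpt-4o-mini",
		"messages": [{"role": "user", "content": "Hello, Bifrost!"}]
	}`)

	resp, err := http.Post(
		"http://localhost:8080/v1/chat/completions",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw OpenAI-compatible JSON response.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}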



Enterprise Deployments

Bifrost supports enterprise-grade, private deployments for teams running production AI systems at scale. In addition to private networking, custom security controls, and governance, enterprise deployments unlock advanced capabilities including adaptive load balancing, clustering, guardrails, an MCP gateway, and other features designed for scale and reliability.

Book a demo: https://calendly.com/maximai/bifrost-demo
Explore enterprise capabilities: https://www.getmaxim.ai/bifrost/enterprise

Key Features

Core Infrastructure

  • Unified Interface - Single OpenAI-compatible API for all providers
  • Multi-Provider Support - OpenAI, Anthropic, AWS Bedrock, Google Vertex, Azure, Cerebras, Cohere, Mistral, Ollama, Groq, and more
  • Automatic Fallbacks - Seamless failover between providers and models with zero downtime (a conceptual sketch follows this list)
  • Load Balancing - Intelligent request distribution across multiple API keys and providers
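
To make the failover behavior concrete, here is a simplified, conceptual Go sketch of the retry-in-order pattern the Automatic Fallbacks bullet describes. This is not Bifrost's internal code; the provider names and the tryProvider helper are illustrative stand-ins.

package main

import (
	"errors"
	"fmt"
)

// tryProvider is a hypothetical stand-in for dispatching a request
// to one upstream provider and returning its response or an error.
func tryProvider(name, prompt string) (string, error) {
	if name == "openai" {
		return "", errors.New("simulated upstream outage")
	}
	return fmt.Sprintf("response from %s", name), nil
}

// completeWithFallback walks an ordered provider list and returns the
// first successful response — the essence of automatic failover.
func completeWithFallback(providers []string, prompt string) (string, error) {
	var lastErr error
	for _, p := range providers {
		resp, err := tryProvider(p, prompt)
		if err == nil {
			return resp, nil
		}
		lastErr = err
	}
	return "", fmt.Errorf("all providers failed: %w", lastErr)
}

func main() {
	// Falls through to anthropic after the simulated openai outage.
	resp, err := completeWithFallback([]string{"openai", "anthropic"}, "Hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}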

Advanced Features

  • Model Context Protocol (MCP) - Enable AI models to use external tools (filesystem, web search, databases)
  • Semantic Caching - Intelligent response caching based on semantic similarity to reduce costs and latency (a conceptual sketch follows this list)
  • Multimodal Support - Text, images, audio, and streaming, all behind a common interface
  • Custom Plugins - Extensible middleware architecture for analytics, monitoring, and custom logic
  • Governance - Usage tracking, rate limiting, and fine-grained access control
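
As a rough illustration of the semantic-caching idea above (not Bifrost's actual implementation), the Go sketch below serves a stored response when a new prompt's embedding is close enough to a cached one. The vectors and the 0.95 threshold are made up for the example.

package main

import (
	"fmt"
	"math"
)

type cacheEntry struct {
	embedding []float64
	response  string
}

// cosine computes cosine similarity between two embedding vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// lookup returns a cached response whose prompt embedding is within
// the similarity threshold, skipping a paid provider call on a hit.
func lookup(cache []cacheEntry, query []float64, threshold float64) (string, bool) {
	for _, e := range cache {
		if cosine(e.embedding, query) >= threshold {
			return e.response, true
		}
	}
	return "", false
}

func main() {
	cache := []cacheEntry{
		{embedding: []float64{0.9, 0.1, 0.0}, response: "cached answer"},
	}
	// A near-identical prompt embedding hits the cache.
	if resp, ok := lookup(cache, []float64{0.88, 0.12, 0.01}, 0.95); ok {
		fmt.Println(resp)
	}
}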

Enterprise & Security

  • Budget Management - Hierarchical cost control with virtual keys, teams, and customer budgets
  • SSO Integration - Google and GitHub authentication support
  • Observability - Native Prometheus metrics, distributed tracing, and comprehensive logging
  • Vault Support - Secure API key management with HashiCorp Vault integration


Repository Structure

Bifrost uses a modular architecture for maximum flexibility:

bifrost/
├── npx/                 # NPX script for easy installation
├── core/                # Core functionality and shared components
│   ├── providers/       # Provider-specific implementations (OpenAI, Anthropic, etc.)
│   ├── schemas/         # Interfaces and structs used throughout Bifrost
│   └── bifrost.go       # Main Bifrost implementation
├── framework/           # Framework components for data persistence
│   ├── configstore/     # Configuration storage backends
│   ├── logstore/        # Request log storage backends
│   └── vectorstore/     # Vector storage backends
├── transports/          # HTTP gateway and other interface layers
│   └── bifrost-http/    # HTTP transport implementation
├── ui/                  # Web interface for HTTP gateway
├── plugins/             # Extensible plugin system
│   ├── governance/      # Budget management and access control
│   ├── jsonparser/      # JSON parsing and manipulation utilities
│   ├── logging/         # Request logging and analytics
│   ├── maxim/           # Maxim's observability integration
│   ├── mocker/          # Mock responses for testing and development
│   ├── semanticcache/   # Intelligent response caching
│   └── telemetry/       # Monitoring and observability
├── docs/                # Documentation and guides
└── tests/               # Comprehensive test suites

Getting Started Options

Choose the deployment method that fits your needs:

1. Gateway (HTTP API)

Best for: Language-agnostic integration, microservices, and production deployments

# NPX - Get started in 30 seconds
npx -y @maximhq/bifrost

# Docker - Production ready
docker run -p 8080:8080 -v $(pwd)/data:/app/data maximhq/bifrost

Features: Web UI, real-time monitoring, multi-provider management, zero-config startup

Learn More: Gateway Setup Guide

2. Go SDK

Best for: Direct Go integration with maximum performance and control

go get github.com/maximhq/bifrost/core

Features: Native Go APIs, embedded deployment, custom middleware integration

Learn More: Go SDK Guide

3. Drop-in Replacement

Best for: Migrating existing applications with zero code changes

# OpenAI SDK
- base_url = "https://api.openai.com"
+ base_url = "http://localhost:8080/openai"

# Anthropic SDK  
- base_url = "https://api.anthropic.com"
+ base_url = "http://localhost:8080/anthropic"

# Google GenAI SDK
- api_endpoint = "https://generativelanguage.googleapis.com"
+ api_endpoint = "http://localhost:8080/genai"
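
As a Go example, here is the same base-URL swap sketched with the openai-go SDK (github.com/openai/openai-go). Exact option and parameter names vary across SDK versions, so treat this as an assumption-laden sketch rather than a definitive integration:

package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	client := openai.NewClient(
		// The only change: point the SDK at Bifrost instead of api.openai.com.
		option.WithBaseURL("http://localhost:8080/openai"),
		option.WithAPIKey("unused"), // real provider keys live in the gateway config
	)

	resp, err := client.Chat.Completions.New(context.Background(),
		openai.ChatCompletionNewParams{
			Model: openai.ChatModelGPT4oMini,
			Messages: []openai.ChatCompletionMessageParamUnion{
				openai.UserMessage("Hello, Bifrost!"),
			},
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}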

Learn More: Integration Guides


Performance

Bifrost adds virtually zero overhead to your AI requests. In sustained 5,000 RPS benchmarks, the gateway added only 11 µs of overhead per request on a t3.xlarge instance (59 µs on a t3.medium).

| Metric | t3.medium | t3.xlarge | Improvement |
|---|---|---|---|
| Added latency (Bifrost overhead) | 59 µs | 11 µs | -81% |
| Success rate @ 5k RPS | 100% | 100% | No failed requests |
| Avg. queue wait time | 47 µs | 1.67 µs | -96% |
| Avg. request latency (incl. provider) | 2.12 s | 1.61 s | -24% |

Key Performance Highlights:

  • Perfect Success Rate - 100% request success rate even at 5k RPS
  • Minimal Overhead - Less than 15 µs additional latency per request (t3.xlarge)
  • Efficient Queuing - Sub-microsecond average wait times
  • Fast Key Selection - ~10 ns to pick weighted API keys

Complete Benchmarks: Performance Analysis


Documentation

Complete Documentation: https://docs.getbifrost.ai

  • Quick Start
  • Features
  • Integrations
  • Enterprise


Need Help?

Join our Discord for community support and discussions.

Get help with:

  • Quick setup assistance and troubleshooting
  • Best practices and configuration tips
  • Community discussions and support
  • Real-time help with integrations

Contributing

We welcome contributions of all kinds! See our Contributing Guide for:

  • Setting up the development environment
  • Code conventions and best practices
  • How to submit pull requests
  • Building and testing locally

For development requirements and build instructions, see our Development Setup Guide.


License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Built with ❤️ by Maxim


Manual Config

{ "mcpServers": { "bifrost": { "command": "npx", "args": ["bifrost"] } } }