# Volcano Agent SDK

**The TypeScript SDK for Multi-Provider AI Agents**
Build agents that chain LLM reasoning with MCP tools. Mix OpenAI, Claude, and Mistral in one workflow. Parallel execution, branching, and loops. Native retries, streaming, and typed errors.
**Read the full documentation at [volcano.dev](https://volcano.dev) →**
## Features
<table> <tr> <td width="33%"><strong>Automatic Tool Selection</strong><br>
The LLM automatically picks which MCP tools to call based on your prompt. No manual routing needed.
</td> <td width="33%"><strong>Multi-Agent Crews</strong><br>
Define specialized agents and let the coordinator autonomously delegate tasks. Like automatic tool selection, but for agents.
</td> <td width="33%"><strong>Conversational Results</strong><br>
Ask questions about what your agent did. Use `.summary()` or `.ask()` instead of parsing JSON.
</td> </tr> <tr> <td width="33%"><strong>100s of Models</strong><br>
OpenAI, Anthropic, Mistral, Bedrock, Vertex, Azure. Switch providers per step or globally.
</td> <td width="33%"><strong>Advanced Patterns</strong><br>
Parallel execution, branching, loops, sub-agent composition. Enterprise-grade workflow control.
</td> <td width="33%"><strong>Streaming</strong><br>
Stream tokens in real time as LLMs generate them. Perfect for chat UIs and SSE endpoints.
</td> </tr> <tr> <td width="33%"><strong>TypeScript-First</strong><br>
Full type safety with IntelliSense. Catch errors before runtime.
</td> <td width="33%"><strong>Observability</strong><br>
OpenTelemetry traces and metrics. Export to Jaeger, Prometheus, Datadog, or any OTLP backend.
</td> <td width="33%"><strong>Production-Ready</strong><br>
Built-in retries, timeouts, error handling, and connection pooling. Battle-tested at scale.
</td> </tr> </table>

## Quick Start
### Installation
```bash
npm install @volcano.dev/agent
```
That's it! Includes MCP support and all common LLM providers (OpenAI, Anthropic, Mistral, Llama, Vertex).
### Hello World with Automatic Tool Selection
```typescript
import { agent, llmOpenAI, mcp } from "@volcano.dev/agent";

const llm = llmOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini"
});

const weather = mcp("http://localhost:8001/mcp");
const tasks = mcp("http://localhost:8002/mcp");

// Agent automatically picks the right tools
const results = await agent({ llm })
  .then({
    prompt: "What's the weather in Seattle? If it will rain, create a task to bring an umbrella",
    mcps: [weather, tasks] // LLM chooses which tools to call
  })
  .run();

// Ask questions about what happened
const summary = await results.summary(llm);
console.log(summary);
```
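For chat UIs, the SDK also supports streaming tokens as they are generated. As a rough illustration of the consumption pattern only, here is a sketch using a plain async generator; the `streamTokens` helper and its output are stand-ins, not the SDK's actual streaming API:

```typescript
// Hypothetical token source standing in for an LLM stream.
async function* streamTokens(text: string): AsyncGenerator<string> {
  for (const token of text.split(" ")) {
    yield token + " ";
  }
}

async function consumeStream(): Promise<string> {
  let output = "";
  // for await...of renders each token the moment it arrives.
  for (const _ of []) {} // (no-op; keeps the loop below the only iteration)
  for await (const token of streamTokens("Hello from the agent")) {
    process.stdout.write(token);
    output += token;
  }
  return output.trim();
}
```

The same `for await...of` shape works for piping tokens into an SSE response.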
### Multi-Agent Coordinator
```typescript
import { agent, llmOpenAI } from "@volcano.dev/agent";

const llm = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// Define specialized agents
const researcher = agent({ llm, name: 'researcher', description: 'Finds facts and data' })
  .then({ prompt: "Research the topic." })
  .then({ prompt: "Summarize the research." });

const writer = agent({ llm, name: 'writer', description: 'Creates content' })
  .then({ prompt: "Write content." });

// Coordinator autonomously delegates to specialists
const results = await agent({ llm })
  .then({
    prompt: "Write a blog post about quantum computing",
    agents: [researcher, writer] // Coordinator decides when done
  })
  .run();

// Ask what happened
const post = await results.ask(llm, "Show me the final blog post");
console.log(post);
```
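The parallel execution mentioned under Advanced Patterns can be approximated outside the SDK with plain promises. In this sketch the two "agents" are stand-in async functions, not the SDK's agent objects, so the fan-out/join shape is the only thing being illustrated:

```typescript
// Stand-ins for specialized agents; the real SDK wires these through agent().then().run().
const research = async (topic: string) => `facts about ${topic}`;
const outline = async (topic: string) => `outline for ${topic}`;

async function runParallel(topic: string): Promise<string[]> {
  // Independent steps run concurrently; Promise.all joins their results in order.
  return Promise.all([research(topic), outline(topic)]);
}
```

Because the steps share no state, total latency is the slowest step rather than the sum of all steps.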
## Documentation
### Comprehensive Guides
- Getting Started - Installation, quick start, core concepts
- LLM Providers - OpenAI, Anthropic, Mistral, Llama, Bedrock, Vertex, Azure
- MCP Tools - Automatic selection, OAuth authentication, connection pooling
- Advanced Patterns - Parallel, branching, loops, multi-LLM workflows
- Features - Streaming, retries, timeouts, hooks, error handling
- Observability - OpenTelemetry traces and metrics
- API Reference - Complete API documentation
- Examples - Ready-to-run code examples
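The Features guide above covers retries and timeouts. As a generic sketch of what retry-with-exponential-backoff means (not the SDK's internal implementation, which is configured declaratively), the idea looks like:

```typescript
// Generic retry helper with exponential backoff between attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 100ms, 200ms, 400ms, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```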
## Contributing
We welcome contributions! Please see our Contributing Guide for details.
## Questions or Feature Requests?
- Report bugs or issues
- Request features or ask questions
- Star the project if you find it useful
## License
Apache 2.0 - see LICENSE file for details.