HKUDS/nanobot

Built by HKUDS • 36,623 stars

What is HKUDS/nanobot?

"๐Ÿˆ nanobot: The Ultra-Lightweight OpenClaw"

How to use HKUDS/nanobot?

1. Install a compatible MCP client (such as Claude Desktop).
2. Open your configuration settings.
3. Add HKUDS/nanobot using the following command: npx @modelcontextprotocol/hkuds-nanobot
4. Restart the client and verify the new tools are active.

🛡️ Scoped (Restricted)

npx @modelcontextprotocol/hkuds-nanobot --scope restricted

🔓 Unrestricted Access

npx @modelcontextprotocol/hkuds-nanobot

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified High-performance Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

HKUDS/nanobot FAQ

Q: Is HKUDS/nanobot safe?

Yes, HKUDS/nanobot follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q: Is HKUDS/nanobot up to date?

HKUDS/nanobot is listed as active in the registry and has 36,623 stars on GitHub, reflecting ongoing maintenance and community support.

Q: Are there any limits for HKUDS/nanobot?

Usage limits depend on the specific MCP server implementation and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub
<div align="center"> <img src="nanobot_logo.png" alt="nanobot" width="500"> <h1>nanobot: Ultra-Lightweight Personal AI Assistant</h1> <p> <a href="https://pypi.org/project/nanobot-ai/"><img src="https://img.shields.io/pypi/v/nanobot-ai" alt="PyPI"></a> <a href="https://pepy.tech/project/nanobot-ai"><img src="https://static.pepy.tech/badge/nanobot-ai" alt="Downloads"></a> <img src="https://img.shields.io/badge/python-≥3.11-blue" alt="Python"> <img src="https://img.shields.io/badge/license-MIT-green" alt="License"> <a href="./COMMUNICATION.md"><img src="https://img.shields.io/badge/Feishu-Group-E9DBFC?style=flat&logo=feishu&logoColor=white" alt="Feishu"></a> <a href="./COMMUNICATION.md"><img src="https://img.shields.io/badge/WeChat-Group-C5EAB4?style=flat&logo=wechat&logoColor=white" alt="WeChat"></a> <a href="https://discord.gg/MnCvHqpUGB"><img src="https://img.shields.io/badge/Discord-Community-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord"></a> </p> </div>

๐Ÿˆ nanobot is an ultra-lightweight personal AI assistant inspired by OpenClaw.

โšก๏ธ Delivers core agent functionality with 99% fewer lines of code than OpenClaw.

๐Ÿ“ Real-time line count: run bash core_agent_lines.sh to verify anytime.

📢 News

[!IMPORTANT] Security note: Due to litellm supply chain poisoning, please check your Python environment as soon as possible and refer to this advisory for details. litellm has been fully removed since v0.1.4.post6.

  • 2026-03-27 ๐Ÿš€ Released v0.1.4.post6 โ€” architecture decoupling, litellm removal, end-to-end streaming, WeChat channel, and a security fix. Please see release notes for details.
  • 2026-03-26 ๐Ÿ—๏ธ Agent runner extracted and lifecycle hooks unified; stream delta coalescing at boundaries.
  • 2026-03-25 ๐ŸŒ StepFun provider, configurable timezone, Gemini thought signatures.
  • 2026-03-24 ๐Ÿ”ง WeChat compatibility, Feishu CardKit streaming, test suite restructured.
  • 2026-03-23 ๐Ÿ”ง Command routing refactored for plugins, WhatsApp/WeChat media, unified channel login CLI.
  • 2026-03-22 โšก End-to-end streaming, WeChat channel, Anthropic cache optimization, /status command.
  • 2026-03-21 ๐Ÿ”’ Replace litellm with native openai + anthropic SDKs. Please see commit.
  • 2026-03-20 ๐Ÿง™ Interactive setup wizard โ€” pick your provider, model autocomplete, and you're good to go.
  • 2026-03-19 ๐Ÿ’ฌ Telegram gets more resilient under load; Feishu now renders code blocks properly.
  • 2026-03-18 ๐Ÿ“ท Telegram can now send media via URL. Cron schedules show human-readable details.
  • 2026-03-17 โœจ Feishu formatting glow-up, Slack reacts when done, custom endpoints support extra headers, and image handling is more reliable.
<details> <summary>Earlier news</summary>
  • 2026-03-16 ๐Ÿš€ Released v0.1.4.post5 โ€” a refinement-focused release with stronger reliability and channel support, and a more dependable day-to-day experience. Please see release notes for details.
  • 2026-03-15 ๐Ÿงฉ DingTalk rich media, smarter built-in skills, and cleaner model compatibility.
  • 2026-03-14 ๐Ÿ’ฌ Channel plugins, Feishu replies, and steadier MCP, QQ, and media handling.
  • 2026-03-13 ๐ŸŒ Multi-provider web search, LangSmith, and broader reliability improvements.
  • 2026-03-12 ๐Ÿš€ VolcEngine support, Telegram reply context, /restart, and sturdier memory.
  • 2026-03-11 ๐Ÿ”Œ WeCom, Ollama, cleaner discovery, and safer tool behavior.
  • 2026-03-10 ๐Ÿง  Token-based memory, shared retries, and cleaner gateway and Telegram behavior.
  • 2026-03-09 ๐Ÿ’ฌ Slack thread polish and better Feishu audio compatibility.
  • 2026-03-08 ๐Ÿš€ Released v0.1.4.post4 โ€” a reliability-packed release with safer defaults, better multi-instance support, sturdier MCP, and major channel and provider improvements. Please see release notes for details.
  • 2026-03-07 ๐Ÿš€ Azure OpenAI provider, WhatsApp media, QQ group chats, and more Telegram/Feishu polish.
  • 2026-03-06 ๐Ÿช„ Lighter providers, smarter media handling, and sturdier memory and CLI compatibility.
  • 2026-03-05 โšก๏ธ Telegram draft streaming, MCP SSE support, and broader channel reliability fixes.
  • 2026-03-04 ๐Ÿ› ๏ธ Dependency cleanup, safer file reads, and another round of test and Cron fixes.
  • 2026-03-03 ๐Ÿง  Cleaner user-message merging, safer multimodal saves, and stronger Cron guards.
  • 2026-03-02 ๐Ÿ›ก๏ธ Safer default access control, sturdier Cron reloads, and cleaner Matrix media handling.
  • 2026-03-01 ๐ŸŒ Web proxy support, smarter Cron reminders, and Feishu rich-text parsing improvements.
  • 2026-02-28 ๐Ÿš€ Released v0.1.4.post3 โ€” cleaner context, hardened session history, and smarter agent. Please see release notes for details.
  • 2026-02-27 ๐Ÿง  Experimental thinking mode support, DingTalk media messages, Feishu and QQ channel fixes.
  • 2026-02-26 ๐Ÿ›ก๏ธ Session poisoning fix, WhatsApp dedup, Windows path guard, Mistral compatibility.
  • 2026-02-25 ๐Ÿงน New Matrix channel, cleaner session context, auto workspace template sync.
  • 2026-02-24 ๐Ÿš€ Released v0.1.4.post2 โ€” a reliability-focused release with a redesigned heartbeat, prompt cache optimization, and hardened provider & channel stability. See release notes for details.
  • 2026-02-23 ๐Ÿ”ง Virtual tool-call heartbeat, prompt cache optimization, Slack mrkdwn fixes.
  • 2026-02-22 ๐Ÿ›ก๏ธ Slack thread isolation, Discord typing fix, agent reliability improvements.
  • 2026-02-21 ๐ŸŽ‰ Released v0.1.4.post1 โ€” new providers, media support across channels, and major stability improvements. See release notes for details.
  • 2026-02-20 ๐Ÿฆ Feishu now receives multimodal files from users. More reliable memory under the hood.
  • 2026-02-19 โœจ Slack now sends files, Discord splits long messages, and subagents work in CLI mode.
  • 2026-02-18 โšก๏ธ nanobot now supports VolcEngine, MCP custom auth headers, and Anthropic prompt caching.
  • 2026-02-17 ๐ŸŽ‰ Released v0.1.4 โ€” MCP support, progress streaming, new providers, and multiple channel improvements. Please see release notes for details.
  • 2026-02-16 ๐Ÿฆž nanobot now integrates a ClawHub skill โ€” search and install public agent skills.
  • 2026-02-15 ๐Ÿ”‘ nanobot now supports OpenAI Codex provider with OAuth login support.
  • 2026-02-14 ๐Ÿ”Œ nanobot now supports MCP! See MCP section for details.
  • 2026-02-13 ๐ŸŽ‰ Released v0.1.3.post7 โ€” includes security hardening and multiple improvements. Please upgrade to the latest version to address security issues. See release notes for more details.
  • 2026-02-12 ๐Ÿง  Redesigned memory system โ€” Less code, more reliable. Join the discussion about it!
  • 2026-02-11 โœจ Enhanced CLI experience and added MiniMax support!
  • 2026-02-10 ๐ŸŽ‰ Released v0.1.3.post6 with improvements! Check the updates notes and our roadmap.
  • 2026-02-09 ๐Ÿ’ฌ Added Slack, Email, and QQ support โ€” nanobot now supports multiple chat platforms!
  • 2026-02-08 ๐Ÿ”ง Refactored Providersโ€”adding a new LLM provider now takes just 2 simple steps! Check here.
  • 2026-02-07 ๐Ÿš€ Released v0.1.3.post5 with Qwen support & several key improvements! Check here for details.
  • 2026-02-06 โœจ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
  • 2026-02-05 โœจ Added Feishu channel, DeepSeek provider, and enhanced scheduled tasks support!
  • 2026-02-04 ๐Ÿš€ Released v0.1.3.post4 with multi-provider & Docker support! Check here for details.
  • 2026-02-03 โšก Integrated vLLM for local LLM support and improved natural language task scheduling!
  • 2026-02-02 ๐ŸŽ‰ nanobot officially launched! Welcome to try ๐Ÿˆ nanobot!
</details>

๐Ÿˆ nanobot is for educational, research, and technical exchange purposes only. It is unrelated to crypto and does not involve any official token or coin.

Key Features of nanobot:

🪶 Ultra-Lightweight: A super lightweight implementation of OpenClaw — 99% smaller, significantly faster.

🔬 Research-Ready: Clean, readable code that's easy to understand, modify, and extend for research.

⚡️ Lightning Fast: Minimal footprint means faster startup, lower resource usage, and quicker iterations.

💎 Easy-to-Use: One-click to deploy and you're ready to go.

๐Ÿ—๏ธ Architecture

<p align="center"> <img src="nanobot_arch.png" alt="nanobot architecture" width="800"> </p>


✨ Features

<table align="center"> <tr align="center"> <th><p align="center">📈 24/7 Real-Time Market Analysis</p></th> <th><p align="center">🚀 Full-Stack Software Engineer</p></th> <th><p align="center">📅 Smart Daily Routine Manager</p></th> <th><p align="center">📚 Personal Knowledge Assistant</p></th> </tr> <tr> <td align="center"><p align="center"><img src="case/search.gif" width="180" height="400"></p></td> <td align="center"><p align="center"><img src="case/code.gif" width="180" height="400"></p></td> <td align="center"><p align="center"><img src="case/scedule.gif" width="180" height="400"></p></td> <td align="center"><p align="center"><img src="case/memory.gif" width="180" height="400"></p></td> </tr> <tr> <td align="center">Discovery • Insights • Trends</td> <td align="center">Develop • Deploy • Scale</td> <td align="center">Schedule • Automate • Organize</td> <td align="center">Learn • Memory • Reasoning</td> </tr> </table>

📦 Install

Install from source (latest features, recommended for development)

git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .

Install with uv (stable, fast)

uv tool install nanobot-ai

Install from PyPI (stable)

pip install nanobot-ai

Update to latest version

PyPI / pip

pip install -U nanobot-ai
nanobot --version

uv

uv tool upgrade nanobot-ai
nanobot --version

Using WhatsApp? Rebuild the local bridge after upgrading:

rm -rf ~/.nanobot/bridge
nanobot channels login whatsapp

🚀 Quick Start

[!TIP] Set your API key in ~/.nanobot/config.json. Get API keys: OpenRouter (Global)

For other LLM providers, please see the Providers section.

For web search capability setup, please see Web Search.

1. Initialize

nanobot onboard

Use nanobot onboard --wizard if you want the interactive setup wizard.

2. Configure (~/.nanobot/config.json)

Configure these two parts in your config (other options have defaults).

Set your API key (e.g. OpenRouter, recommended for global users):

{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}

Set your model (optionally pin a provider — defaults to auto-detection):

{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "openrouter"
    }
  }
}
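
Merged, the two fragments above form one minimal ~/.nanobot/config.json — same keys as shown, nothing new:

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "openrouter"
    }
  }
}
```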

3. Chat

nanobot agent

That's it! You have a working AI assistant in 2 minutes.

💬 Chat Apps

Connect nanobot to your favorite chat platform. Want to build your own? See the Channel Plugin Guide.

| Channel | What you need |
|---|---|
| Telegram | Bot token from @BotFather |
| Discord | Bot token + Message Content intent |
| WhatsApp | QR code scan (nanobot channels login whatsapp) |
| WeChat (Weixin) | QR code scan (nanobot channels login weixin) |
| Feishu | App ID + App Secret |
| DingTalk | App Key + App Secret |
| Slack | Bot token + App-Level token |
| Matrix | Homeserver URL + Access token |
| Email | IMAP/SMTP credentials |
| QQ | App ID + App Secret |
| Wecom | Bot ID + Bot Secret |
| Mochat | Claw token (auto-setup available) |
<details> <summary><b>Telegram</b> (Recommended)</summary>

1. Create a bot

  • Open Telegram, search @BotFather
  • Send /newbot, follow prompts
  • Copy the token

2. Configure

{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}

You can find your User ID in Telegram settings. It is shown as @yourUserId. Copy this value without the @ symbol and paste it into the config file.

3. Run

nanobot gateway
</details> <details> <summary><b>Mochat (Claw IM)</b></summary>

Uses Socket.IO WebSocket by default, with HTTP polling fallback.

1. Ask nanobot to set up Mochat for you

Simply send this message to nanobot (replace xxx@xxx with your real email):

Read https://raw.githubusercontent.com/HKUDS/MoChat/refs/heads/main/skills/nanobot/skill.md and register on MoChat. My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.

nanobot will automatically register, configure ~/.nanobot/config.json, and connect to Mochat.

2. Restart gateway

nanobot gateway

That's it — nanobot handles the rest!

<br> <details> <summary>Manual configuration (advanced)</summary>

If you prefer to configure manually, add the following to ~/.nanobot/config.json:

Keep claw_token private. It should only be sent in the X-Claw-Token header to your Mochat API endpoint.

{
  "channels": {
    "mochat": {
      "enabled": true,
      "base_url": "https://mochat.io",
      "socket_url": "https://mochat.io",
      "socket_path": "/socket.io",
      "claw_token": "claw_xxx",
      "agent_user_id": "6982abcdef",
      "sessions": ["*"],
      "panels": ["*"],
      "reply_delay_mode": "non-mention",
      "reply_delay_ms": 120000
    }
  }
}
</details> </details> <details> <summary><b>Discord</b></summary>

1. Create a bot in the Discord Developer Portal

2. Enable intents

  • In the Bot settings, enable MESSAGE CONTENT INTENT
  • (Optional) Enable SERVER MEMBERS INTENT if you plan to use allow lists based on member data

3. Get your User ID

  • Discord Settings โ†’ Advanced โ†’ enable Developer Mode
  • Right-click your avatar โ†’ Copy User ID

4. Configure

{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"],
      "groupPolicy": "mention"
    }
  }
}

groupPolicy controls how the bot responds in group channels:

  • "mention" (default) — Only respond when @mentioned
  • "open" — Respond to all messages

DMs always respond when the sender is in allowFrom. If you set groupPolicy to "open", create new threads as private threads and then @ the bot into them; otherwise both the thread itself and the channel you spawned it in will each spawn a bot session.

5. Invite the bot

  • OAuth2 โ†’ URL Generator
  • Scopes: bot
  • Bot Permissions: Send Messages, Read Message History
  • Open the generated invite URL and add the bot to your server

6. Run

nanobot gateway
</details> <details> <summary><b>Matrix (Element)</b></summary>

Install Matrix dependencies first:

pip install nanobot-ai[matrix]

1. Create/choose a Matrix account

  • Create or reuse a Matrix account on your homeserver (for example matrix.org).
  • Confirm you can log in with Element.

2. Get credentials

  • You need:
    • userId (example: @nanobot:matrix.org)
    • accessToken
    • deviceId (recommended so sync tokens can be restored across restarts)
  • You can obtain these from your homeserver login API (/_matrix/client/v3/login) or from your client's advanced session settings.

3. Configure

{
  "channels": {
    "matrix": {
      "enabled": true,
      "homeserver": "https://matrix.org",
      "userId": "@nanobot:matrix.org",
      "accessToken": "syt_xxx",
      "deviceId": "NANOBOT01",
      "e2eeEnabled": true,
      "allowFrom": ["@your_user:matrix.org"],
      "groupPolicy": "open",
      "groupAllowFrom": [],
      "allowRoomMentions": false,
      "maxMediaBytes": 20971520
    }
  }
}

Keep a persistent matrix-store and stable deviceId — encrypted session state is lost if these change across restarts.

| Option | Description |
|---|---|
| allowFrom | User IDs allowed to interact. Empty denies all; use ["*"] to allow everyone. |
| groupPolicy | open (default), mention, or allowlist. |
| groupAllowFrom | Room allowlist (used when policy is allowlist). |
| allowRoomMentions | Accept @room mentions in mention mode. |
| e2eeEnabled | E2EE support (default true). Set false for plaintext-only. |
| maxMediaBytes | Max attachment size (default 20MB). Set 0 to block all media. |
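
For example, a room-allowlist setup per the options above might look like this partial fragment (the room ID is a hypothetical placeholder; merge into the channels.matrix block):

```json
{
  "channels": {
    "matrix": {
      "groupPolicy": "allowlist",
      "groupAllowFrom": ["!yourRoomId:matrix.org"]
    }
  }
}
```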

4. Run

nanobot gateway
</details> <details> <summary><b>WhatsApp</b></summary>

Requires Node.js ≥18.

1. Link device

nanobot channels login whatsapp
# Scan QR with WhatsApp → Settings → Linked Devices

2. Configure

{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}

3. Run (two terminals)

# Terminal 1
nanobot channels login whatsapp

# Terminal 2
nanobot gateway

WhatsApp bridge updates are not applied automatically for existing installations. After upgrading nanobot, rebuild the local bridge with: rm -rf ~/.nanobot/bridge && nanobot channels login whatsapp

</details> <details> <summary><b>Feishu</b></summary>

Uses WebSocket long connection — no public IP required.

1. Create a Feishu bot

  • Visit Feishu Open Platform
  • Create a new app โ†’ Enable Bot capability
  • Permissions:
    • im:message (send messages) and im:message.p2p_msg:readonly (receive messages)
    • Streaming replies (default in nanobot): add cardkit:card:write (often labeled Create and update cards in the Feishu developer console). Required for CardKit entities and streamed assistant text. Older apps may not have it yet โ€” open Permission management, enable the scope, then publish a new app version if the console requires it.
    • If you cannot add cardkit:card:write, set "streaming": false under channels.feishu (see below). The bot still works; replies use normal interactive cards without token-by-token streaming.
  • Events: Add im.message.receive_v1 (receive messages)
    • Select Long Connection mode (requires running nanobot first to establish connection)
  • Get App ID and App Secret from "Credentials & Basic Info"
  • Publish the app

2. Configure

{
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "cli_xxx",
      "appSecret": "xxx",
      "encryptKey": "",
      "verificationToken": "",
      "allowFrom": ["ou_YOUR_OPEN_ID"],
      "groupPolicy": "mention",
      "streaming": true
    }
  }
}

  • streaming defaults to true. Use false if your app does not have cardkit:card:write (see permissions above).
  • encryptKey and verificationToken are optional for Long Connection mode.
  • allowFrom: Add your open_id (find it in nanobot logs when you message the bot). Use ["*"] to allow all users.
  • groupPolicy: "mention" (default — respond only when @mentioned), "open" (respond to all group messages). Private chats always respond.
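
If you cannot grant cardkit:card:write, the fallback described above is a one-key change (partial fragment — merge into the channels.feishu block shown earlier):

```json
{
  "channels": {
    "feishu": {
      "streaming": false
    }
  }
}
```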

3. Run

nanobot gateway

[!TIP] Feishu uses WebSocket to receive messages — no webhook or public IP needed!

</details> <details> <summary><b>QQ (QQ单聊)</b></summary>

Uses botpy SDK with WebSocket — no public IP required. Currently supports private messages only.

1. Register & create bot

  • Visit QQ Open Platform โ†’ Register as a developer (personal or enterprise)
  • Create a new bot application
  • Go to ๅผ€ๅ‘่ฎพ็ฝฎ (Developer Settings) โ†’ copy AppID and AppSecret

2. Set up sandbox for testing

  • In the bot management console, find ๆฒ™็ฎฑ้…็ฝฎ (Sandbox Config)
  • Under ๅœจๆถˆๆฏๅˆ—่กจ้…็ฝฎ, click ๆทปๅŠ ๆˆๅ‘˜ and add your own QQ number
  • Once added, scan the bot's QR code with mobile QQ โ†’ open the bot profile โ†’ tap "ๅ‘ๆถˆๆฏ" to start chatting

3. Configure

  • allowFrom: Add your openid (find it in nanobot logs when you message the bot). Use ["*"] for public access.
  • msgFormat: Optional. Use "plain" (default) for maximum compatibility with legacy QQ clients, or "markdown" for richer formatting on newer clients.
  • For production: submit a review in the bot console and publish. See QQ Bot Docs for the full publishing flow.
{
  "channels": {
    "qq": {
      "enabled": true,
      "appId": "YOUR_APP_ID",
      "secret": "YOUR_APP_SECRET",
      "allowFrom": ["YOUR_OPENID"],
      "msgFormat": "plain"
    }
  }
}

4. Run

nanobot gateway

Now send a message to the bot from QQ โ€” it should respond!

</details> <details> <summary><b>DingTalk (钉钉)</b></summary>

Uses Stream Mode — no public IP required.

1. Create a DingTalk bot

  • Visit DingTalk Open Platform
  • Create a new app -> Add Robot capability
  • Configuration:
    • Toggle Stream Mode ON
  • Permissions: Add necessary permissions for sending messages
  • Get AppKey (Client ID) and AppSecret (Client Secret) from "Credentials"
  • Publish the app

2. Configure

{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "clientId": "YOUR_APP_KEY",
      "clientSecret": "YOUR_APP_SECRET",
      "allowFrom": ["YOUR_STAFF_ID"]
    }
  }
}

allowFrom: Add your staff ID. Use ["*"] to allow all users.

3. Run

nanobot gateway
</details> <details> <summary><b>Slack</b></summary>

Uses Socket Mode — no public URL required.

1. Create a Slack app

  • Go to Slack API โ†’ Create New App โ†’ "From scratch"
  • Pick a name and select your workspace

2. Configure the app

  • Socket Mode: Toggle ON โ†’ Generate an App-Level Token with connections:write scope โ†’ copy it (xapp-...)
  • OAuth & Permissions: Add bot scopes: chat:write, reactions:write, app_mentions:read
  • Event Subscriptions: Toggle ON โ†’ Subscribe to bot events: message.im, message.channels, app_mention โ†’ Save Changes
  • App Home: Scroll to Show Tabs โ†’ Enable Messages Tab โ†’ Check "Allow users to send Slash commands and messages from the messages tab"
  • Install App: Click Install to Workspace โ†’ Authorize โ†’ copy the Bot Token (xoxb-...)

3. Configure nanobot

{
  "channels": {
    "slack": {
      "enabled": true,
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "allowFrom": ["YOUR_SLACK_USER_ID"],
      "groupPolicy": "mention"
    }
  }
}

4. Run

nanobot gateway

DM the bot directly or @mention it in a channel โ€” it should respond!

[!TIP]

  • groupPolicy: "mention" (default โ€” respond only when @mentioned), "open" (respond to all channel messages), or "allowlist" (restrict to specific channels).
  • DM policy defaults to open. Set "dm": {"enabled": false} to disable DMs.
</details> <details> <summary><b>Email</b></summary>

Give nanobot its own email account. It polls IMAP for incoming mail and replies via SMTP — like a personal email assistant.

1. Get credentials (Gmail example)

  • Create a dedicated Gmail account for your bot (e.g. my-nanobot@gmail.com)
  • Enable 2-Step Verification โ†’ Create an App Password
  • Use this app password for both IMAP and SMTP

2. Configure

  • consentGranted must be true to allow mailbox access. This is a safety gate โ€” set false to fully disable.
  • allowFrom: Add your email address. Use ["*"] to accept emails from anyone.
  • smtpUseTls and smtpUseSsl default to true / false respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.
  • Set "autoReplyEnabled": false if you only want to read/analyze emails without sending automatic replies.
{
  "channels": {
    "email": {
      "enabled": true,
      "consentGranted": true,
      "imapHost": "imap.gmail.com",
      "imapPort": 993,
      "imapUsername": "my-nanobot@gmail.com",
      "imapPassword": "your-app-password",
      "smtpHost": "smtp.gmail.com",
      "smtpPort": 587,
      "smtpUsername": "my-nanobot@gmail.com",
      "smtpPassword": "your-app-password",
      "fromAddress": "my-nanobot@gmail.com",
      "allowFrom": ["your-real-email@gmail.com"]
    }
  }
}
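
For the read-only mode mentioned in the notes above, a partial fragment to merge into the same block:

```json
{
  "channels": {
    "email": {
      "autoReplyEnabled": false
    }
  }
}
```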

3. Run

nanobot gateway
</details> <details> <summary><b>WeChat (微信 / Weixin)</b></summary>

Uses HTTP long-poll with QR-code login via the ilinkai personal WeChat API. No local WeChat desktop client is required.

1. Install with WeChat support

pip install "nanobot-ai[weixin]"

2. Configure

{
  "channels": {
    "weixin": {
      "enabled": true,
      "allowFrom": ["YOUR_WECHAT_USER_ID"]
    }
  }
}
  • allowFrom: Add the sender ID you see in nanobot logs for your WeChat account. Use ["*"] to allow all users.
  • token: Optional. If omitted, log in interactively and nanobot will save the token for you.
  • routeTag: Optional. When your upstream Weixin deployment requires request routing, nanobot will send it as the SKRouteTag header.
  • stateDir: Optional. Defaults to nanobot's runtime directory for Weixin state.
  • pollTimeout: Optional long-poll timeout in seconds.

3. Login

nanobot channels login weixin

Use --force to re-authenticate and ignore any saved token:

nanobot channels login weixin --force

4. Run

nanobot gateway
</details> <details> <summary><b>Wecom (企业微信)</b></summary>

Here we use wecom-aibot-sdk-python (community Python version of the official @wecom/aibot-node-sdk).

Uses WebSocket long connection — no public IP required.

1. Install the optional dependency

pip install nanobot-ai[wecom]

2. Create a WeCom AI Bot

Go to the WeCom admin console → Intelligent Robot → Create Robot → select API mode with long connection. Copy the Bot ID and Secret.

3. Configure

{
  "channels": {
    "wecom": {
      "enabled": true,
      "botId": "your_bot_id",
      "secret": "your_bot_secret",
      "allowFrom": ["your_id"]
    }
  }
}

4. Run

nanobot gateway
</details>

๐ŸŒ Agent Social Network

๐Ÿˆ nanobot is capable of linking to the agent social network (agent community). Just send one message and your nanobot joins automatically!

| Platform | How to Join (send this message to your bot) |
|---|---|
| Moltbook | Read https://moltbook.com/skill.md and follow the instructions to join Moltbook |
| ClawdChat | Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat |

Simply send the command above to your nanobot (via CLI or any chat channel), and it will handle the rest.

โš™๏ธ Configuration

Config file: ~/.nanobot/config.json

Providers

[!TIP]

  • Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
  • MiniMax Coding Plan: Exclusive discount links for the nanobot community: Overseas ยท Mainland China
  • MiniMax (Mainland China): If your API key is from MiniMax's mainland China platform (minimaxi.com), set "apiBase": "https://api.minimaxi.com/v1" in your minimax provider config.
  • VolcEngine / BytePlus Coding Plan: Use dedicated providers volcengineCodingPlan or byteplusCodingPlan instead of the pay-per-use volcengine / byteplus providers.
  • Zhipu Coding Plan: If you're on Zhipu's coding plan, set "apiBase": "https://open.bigmodel.cn/api/coding/paas/v4" in your zhipu provider config.
  • Alibaba Cloud BaiLian: If you're using Alibaba Cloud BaiLian's OpenAI-compatible endpoint, set "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1" in your dashscope provider config.
  • Step Fun (Mainland China): If your API key is from Step Fun's mainland China platform (stepfun.com), set "apiBase": "https://api.stepfun.com/v1" in your stepfun provider config.
| Provider | Purpose | Get API Key |
|---|---|---|
| custom | Any OpenAI-compatible endpoint | — |
| openrouter | LLM (recommended, access to all models) | openrouter.ai |
| volcengine | LLM (VolcEngine, pay-per-use) | Coding Plan · volcengine.com |
| byteplus | LLM (VolcEngine international, pay-per-use) | Coding Plan · byteplus.com |
| anthropic | LLM (Claude direct) | console.anthropic.com |
| azure_openai | LLM (Azure OpenAI) | portal.azure.com |
| openai | LLM (GPT direct) | platform.openai.com |
| deepseek | LLM (DeepSeek direct) | platform.deepseek.com |
| groq | LLM + Voice transcription (Whisper) | console.groq.com |
| minimax | LLM (MiniMax direct) | platform.minimaxi.com |
| gemini | LLM (Gemini direct) | aistudio.google.com |
| aihubmix | LLM (API gateway, access to all models) | aihubmix.com |
| siliconflow | LLM (SiliconFlow/硅基流动) | siliconflow.cn |
| dashscope | LLM (Qwen) | dashscope.console.aliyun.com |
| moonshot | LLM (Moonshot/Kimi) | platform.moonshot.cn |
| zhipu | LLM (Zhipu GLM) | open.bigmodel.cn |
| ollama | LLM (local, Ollama) | — |
| mistral | LLM | docs.mistral.ai |
| stepfun | LLM (Step Fun/阶跃星辰) | platform.stepfun.com |
| ovms | LLM (local, OpenVINO Model Server) | docs.openvino.ai |
| vllm | LLM (local, any OpenAI-compatible server) | — |
| openai_codex | LLM (Codex, OAuth) | nanobot provider login openai-codex |
| github_copilot | LLM (GitHub Copilot, OAuth) | nanobot provider login github-copilot |
<details> <summary><b>OpenAI Codex (OAuth)</b></summary>

Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account. No providers.openaiCodex block is needed in config.json; nanobot provider login stores the OAuth session outside config.

1. Login:

nanobot provider login openai-codex

2. Set model (merge into ~/.nanobot/config.json):

{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}

3. Chat:

nanobot agent -m "Hello!"

# Target a specific workspace/config locally
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!"

# One-off workspace override on top of that config
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!"

Docker users: use docker run -it for interactive OAuth login.

</details> <details> <summary><b>GitHub Copilot (OAuth)</b></summary>

GitHub Copilot uses OAuth instead of API keys. Requires a GitHub account with a plan configured. No providers.githubCopilot block is needed in config.json; nanobot provider login stores the OAuth session outside config.

1. Login:

nanobot provider login github-copilot

2. Set model (merge into ~/.nanobot/config.json):

{
  "agents": {
    "defaults": {
      "model": "github-copilot/gpt-4.1"
    }
  }
}

3. Chat:

nanobot agent -m "Hello!"

# Target a specific workspace/config locally
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!"

# One-off workspace override on top of that config
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!"

Docker users: use docker run -it for interactive OAuth login.

</details> <details> <summary><b>Custom Provider (Any OpenAI-compatible API)</b></summary>

Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Model name is passed as-is.

{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}

For local servers that don't require a key, set apiKey to any non-empty string (e.g. "no-key").

</details> <details> <summary><b>Ollama (local)</b></summary>

Run a local model with Ollama, then add to config:

1. Start Ollama (example):

ollama run llama3.2

2. Add to config (partial — merge into ~/.nanobot/config.json):

{
  "providers": {
    "ollama": {
      "apiBase": "http://localhost:11434"
    }
  },
  "agents": {
    "defaults": {
      "provider": "ollama",
      "model": "llama3.2"
    }
  }
}

provider: "auto" also works when providers.ollama.apiBase is configured, but setting "provider": "ollama" is the clearest option.

</details> <details> <summary><b>OpenVINO Model Server (local / OpenAI-compatible)</b></summary>

Run LLMs locally on Intel GPUs using OpenVINO Model Server. OVMS exposes an OpenAI-compatible API at /v3.

Requires Docker and an Intel GPU with driver access (/dev/dri).

1. Pull the model (example):

mkdir -p ov/models && cd ov

docker run -d \
  --rm \
  --user $(id -u):$(id -g) \
  -v $(pwd)/models:/models \
  openvino/model_server:latest-gpu \
  --pull \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU

This downloads the model weights. Wait for the container to finish before proceeding.

2. Start the server (example):

docker run -d \
  --rm \
  --name ovms \
  --user $(id -u):$(id -g) \
  -p 8000:8000 \
  -v $(pwd)/models:/models \
  --device /dev/dri \
  --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) \
  openvino/model_server:latest-gpu \
  --rest_port 8000 \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU

3. Add to config (partial — merge into ~/.nanobot/config.json):

{
  "providers": {
    "ovms": {
      "apiBase": "http://localhost:8000/v3"
    }
  },
  "agents": {
    "defaults": {
      "provider": "ovms",
      "model": "openai/gpt-oss-20b"
    }
  }
}

OVMS is a local server — no API key required. It supports tool calling (--tool_parser gptoss), reasoning (--reasoning_parser gptoss), and streaming. See the official OVMS docs for more details.

</details> <details> <summary><b>vLLM (local / OpenAI-compatible)</b></summary>

Run your own model with vLLM or any OpenAI-compatible server, then add to config:

1. Start the server (example):

vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

2. Add to config (partial — merge into ~/.nanobot/config.json):

Provider (key can be any non-empty string for local):

{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  }
}

Model:

{
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
</details> <details> <summary><b>Adding a New Provider (Developer Guide)</b></summary>

nanobot uses a Provider Registry (nanobot/providers/registry.py) as the single source of truth. Adding a new provider takes just two steps — no if-elif chains to touch.

Step 1. Add a ProviderSpec entry to PROVIDERS in nanobot/providers/registry.py:

ProviderSpec(
    name="myprovider",                   # config field name
    keywords=("myprovider", "mymodel"),  # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",        # env var name
    display_name="My Provider",          # shown in `nanobot status`
    default_api_base="https://api.myprovider.com/v1",  # OpenAI-compatible endpoint
)

Step 2. Add a field to ProvidersConfig in nanobot/config/schema.py:

class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()

That's it! Environment variables, model routing, config matching, and nanobot status display will all work automatically.

Common ProviderSpec options:

| Field | Description | Example |
|---|---|---|
| default_api_base | OpenAI-compatible base URL | "https://api.deepseek.com" |
| env_extras | Additional env vars to set | (("ZHIPUAI_API_KEY", "{api_key}"),) |
| model_overrides | Per-model parameter overrides | (("kimi-k2.5", {"temperature": 1.0}),) |
| is_gateway | Can route any model (like OpenRouter) | True |
| detect_by_key_prefix | Detect gateway by API key prefix | "sk-or-" |
| detect_by_base_keyword | Detect gateway by API base URL | "openrouter" |
| strip_model_prefix | Strip provider prefix before sending to gateway | True (for AiHubMix) |
| supports_max_completion_tokens | Use max_completion_tokens instead of max_tokens; required for providers that reject both being set simultaneously (e.g. VolcEngine) | True |
</details>

Channel Settings

Global settings that apply to all channels. Configure under the channels section in ~/.nanobot/config.json:

{
  "channels": {
    "sendProgress": true,
    "sendToolHints": false,
    "sendMaxRetries": 3,
    "telegram": { ... }
  }
}
| Setting | Default | Description |
|---|---|---|
| sendProgress | true | Stream the agent's text progress to the channel |
| sendToolHints | false | Stream tool-call hints (e.g. read_file("…")) |
| sendMaxRetries | 3 | Max delivery attempts per outbound message, including the initial send (configurable 0–10; at least 1 attempt is always made) |

Retry Behavior

When a channel send operation raises an error, nanobot retries with exponential backoff:

  • Attempt 1: Initial send
  • Attempts 2-4: Retry delays are 1s, 2s, 4s
  • Attempts 5+: Retry delay caps at 4s
  • Transient failures (network hiccups, temporary API limits): Retry usually succeeds
  • Permanent failures (invalid token, channel banned): All retries fail

> [!NOTE]
> When a channel is completely unavailable, there's no way to notify the user, since we cannot reach them through that channel. Monitor logs for "Failed to send to {channel} after N attempts" to detect persistent delivery failures.
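
As a rough sketch of this schedule (illustrative only; these function names are not nanobot's actual API), the delay after the n-th failed attempt is min(2^(n-1), 4) seconds:

```python
import asyncio

def backoff_delay(failed_attempt: int) -> float:
    """Delay before the next retry: 1s, 2s, 4s, then capped at 4s."""
    return min(2 ** (failed_attempt - 1), 4)

async def send_with_retry(send, message, total_attempts: int = 3):
    """Try a channel send up to total_attempts times (mirrors sendMaxRetries),
    sleeping with exponential backoff between failures."""
    attempts = max(1, total_attempts)  # at least one real attempt is always made
    for attempt in range(1, attempts + 1):
        try:
            return await send(message)
        except Exception:
            if attempt == attempts:
                raise  # permanent failure: all retries exhausted
            await asyncio.sleep(backoff_delay(attempt))
```

With total_attempts=3 this yields the documented pattern: initial send, then retries after 1s and 2s.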

Web Search

> [!TIP]
> Use proxy in tools.web to route all web requests (search + fetch) through a proxy:

{ "tools": { "web": { "proxy": "http://127.0.0.1:7890" } } }

nanobot supports multiple web search providers. Configure in ~/.nanobot/config.json under tools.web.search.

| Provider | Config fields | Env var fallback | Free |
|---|---|---|---|
| brave (default) | apiKey | BRAVE_API_KEY | No |
| tavily | apiKey | TAVILY_API_KEY | No |
| jina | apiKey | JINA_API_KEY | Free tier (10M tokens) |
| searxng | baseUrl | SEARXNG_BASE_URL | Yes (self-hosted) |
| duckduckgo | — | — | Yes |

When credentials are missing, nanobot automatically falls back to DuckDuckGo.
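
The fallback rule can be sketched as follows (an illustrative reimplementation, not nanobot's code; it mirrors the credential columns in the table above):

```python
def resolve_search_provider(config: dict, env: dict) -> str:
    """Pick the configured provider, falling back to DuckDuckGo when
    its required credential (apiKey or baseUrl) is missing."""
    required = {  # provider -> (config field, env var fallback)
        "brave": ("apiKey", "BRAVE_API_KEY"),
        "tavily": ("apiKey", "TAVILY_API_KEY"),
        "jina": ("apiKey", "JINA_API_KEY"),
        "searxng": ("baseUrl", "SEARXNG_BASE_URL"),
    }
    provider = config.get("provider", "brave")
    if provider not in required:  # duckduckgo needs no credentials
        return provider
    field, env_var = required[provider]
    if config.get(field) or env.get(env_var):
        return provider
    return "duckduckgo"
```

For example, {"provider": "brave"} with no apiKey and no BRAVE_API_KEY resolves to "duckduckgo".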

Brave (default):

{
  "tools": {
    "web": {
      "search": {
        "provider": "brave",
        "apiKey": "BSA..."
      }
    }
  }
}

Tavily:

{
  "tools": {
    "web": {
      "search": {
        "provider": "tavily",
        "apiKey": "tvly-..."
      }
    }
  }
}

Jina (free tier with 10M tokens):

{
  "tools": {
    "web": {
      "search": {
        "provider": "jina",
        "apiKey": "jina_..."
      }
    }
  }
}

SearXNG (self-hosted, no API key needed):

{
  "tools": {
    "web": {
      "search": {
        "provider": "searxng",
        "baseUrl": "https://searx.example"
      }
    }
  }
}

DuckDuckGo (zero config):

{
  "tools": {
    "web": {
      "search": {
        "provider": "duckduckgo"
      }
    }
  }
}
| Option | Type | Default | Description |
|---|---|---|---|
| provider | string | "brave" | Search backend: brave, tavily, jina, searxng, duckduckgo |
| apiKey | string | "" | API key for Brave, Tavily, or Jina |
| baseUrl | string | "" | Base URL for SearXNG |
| maxResults | integer | 5 | Results per search (1–10) |

MCP (Model Context Protocol)

> [!TIP]
> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.

nanobot supports MCP — connect external tool servers and use them as native agent tools.

Add MCP servers to your config.json:

{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      },
      "my-remote-mcp": {
        "url": "https://example.com/mcp/",
        "headers": {
          "Authorization": "Bearer xxxxx"
        }
      }
    }
  }
}

Two transport modes are supported:

| Mode | Config | Example |
|---|---|---|
| Stdio | command + args | Local process via npx / uvx |
| HTTP | url + headers (optional) | Remote endpoint (https://mcp.example.com/sse) |

Use toolTimeout to override the default 30s per-call timeout for slow servers:

{
  "tools": {
    "mcpServers": {
      "my-slow-server": {
        "url": "https://example.com/mcp/",
        "toolTimeout": 120
      }
    }
  }
}

Use enabledTools to register only a subset of tools from an MCP server:

{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
        "enabledTools": ["read_file", "mcp_filesystem_write_file"]
      }
    }
  }
}

enabledTools accepts either the raw MCP tool name (for example read_file) or the wrapped nanobot tool name (for example mcp_filesystem_write_file).

  • Omit enabledTools, or set it to ["*"], to register all tools.
  • Set enabledTools to [] to register no tools from that server.
  • Set enabledTools to a non-empty list of names to register only that subset.

MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.

Security

> [!TIP]
> For production deployments, set "restrictToWorkspace": true in your config to sandbox the agent. In v0.1.4.post3 and earlier, an empty allowFrom allowed all senders. Since v0.1.4.post4, an empty allowFrom denies all access by default. To allow all senders, set "allowFrom": ["*"].

| Option | Default | Description |
|---|---|---|
| tools.restrictToWorkspace | false | When true, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| tools.exec.enable | true | When false, the shell exec tool is not registered at all. Use this to completely disable shell command execution. |
| tools.exec.pathAppend | "" | Extra directories to append to PATH when running shell commands (e.g. /usr/sbin for ufw). |
| channels.*.allowFrom | [] (deny all) | Whitelist of user IDs. Empty denies all; use ["*"] to allow everyone. |

Timezone

Time is context. Context should be precise.

By default, nanobot uses UTC for runtime time context. If you want the agent to think in your local time, set agents.defaults.timezone to a valid IANA timezone name:

{
  "agents": {
    "defaults": {
      "timezone": "Asia/Shanghai"
    }
  }
}

This affects runtime time strings shown to the model, such as runtime context and heartbeat prompts. It also becomes the default timezone for cron schedules when a cron expression omits tz, and for one-shot at times when the ISO datetime has no explicit offset.

Common examples: UTC, America/New_York, America/Los_Angeles, Europe/London, Europe/Berlin, Asia/Tokyo, Asia/Shanghai, Asia/Singapore, Australia/Sydney.

Need another timezone? Browse the full IANA Time Zone Database.
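
If you are unsure whether a string is a valid IANA name, Python's standard-library zoneinfo can check it (assuming the system's tzdata is available):

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def is_valid_iana(name: str) -> bool:
    """True if `name` resolves in the IANA Time Zone Database."""
    try:
        ZoneInfo(name)
        return True
    except (ZoneInfoNotFoundError, ValueError):
        return False

print(is_valid_iana("Asia/Shanghai"))  # True
print(is_valid_iana("Beijing"))        # False: use Asia/Shanghai instead
```

City names and abbreviations like "Beijing" or "PST" are not IANA names; always use the Area/Location form.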

🧩 Multiple Instances

Run multiple nanobot instances simultaneously with separate configs and runtime data. Use --config as the main entrypoint. Optionally pass --workspace during onboard when you want to initialize or update the saved workspace for a specific instance.

Quick Start

If you want each instance to have its own dedicated workspace from the start, pass both --config and --workspace during onboarding.

Initialize instances:

# Create separate instance configs and workspaces
nanobot onboard --config ~/.nanobot-telegram/config.json --workspace ~/.nanobot-telegram/workspace
nanobot onboard --config ~/.nanobot-discord/config.json --workspace ~/.nanobot-discord/workspace
nanobot onboard --config ~/.nanobot-feishu/config.json --workspace ~/.nanobot-feishu/workspace

Configure each instance:

Edit ~/.nanobot-telegram/config.json, ~/.nanobot-discord/config.json, etc. with different channel settings. The workspace you passed during onboard is saved into each config as that instance's default workspace.

Run instances:

# Instance A - Telegram bot
nanobot gateway --config ~/.nanobot-telegram/config.json

# Instance B - Discord bot  
nanobot gateway --config ~/.nanobot-discord/config.json

# Instance C - Feishu bot with custom port
nanobot gateway --config ~/.nanobot-feishu/config.json --port 18792

Path Resolution

When using --config, nanobot derives its runtime data directory from the config file location. The workspace still comes from agents.defaults.workspace unless you override it with --workspace.

To open a CLI session against one of these instances locally:

nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello from Telegram instance"
nanobot agent -c ~/.nanobot-discord/config.json -m "Hello from Discord instance"

# Optional one-off workspace override
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test

nanobot agent starts a local CLI agent using the selected workspace/config. It does not attach to or proxy through an already running nanobot gateway process.

| Component | Resolved from | Example |
|---|---|---|
| Config | --config path | ~/.nanobot-A/config.json |
| Workspace | --workspace or config | ~/.nanobot-A/workspace/ |
| Cron jobs | config directory | ~/.nanobot-A/cron/ |
| Media / runtime state | config directory | ~/.nanobot-A/media/ |

How It Works

  • --config selects which config file to load
  • By default, the workspace comes from agents.defaults.workspace in that config
  • If you pass --workspace, it overrides the workspace from the config file

Minimal Setup

  1. Copy your base config into a new instance directory.
  2. Set a different agents.defaults.workspace for that instance.
  3. Start the instance with --config.

Example config:

{
  "agents": {
    "defaults": {
      "workspace": "~/.nanobot-telegram/workspace",
      "model": "anthropic/claude-sonnet-4-6"
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_TELEGRAM_BOT_TOKEN"
    }
  },
  "gateway": {
    "port": 18790
  }
}

Start separate instances:

nanobot gateway --config ~/.nanobot-telegram/config.json
nanobot gateway --config ~/.nanobot-discord/config.json

Override workspace for one-off runs when needed:

nanobot gateway --config ~/.nanobot-telegram/config.json --workspace /tmp/nanobot-telegram-test

Common Use Cases

  • Run separate bots for Telegram, Discord, Feishu, and other platforms
  • Keep testing and production instances isolated
  • Use different models or providers for different teams
  • Serve multiple tenants with separate configs and runtime data

Notes

  • Each instance must use a different port if they run at the same time
  • Use a different workspace per instance if you want isolated memory, sessions, and skills
  • --workspace overrides the workspace defined in the config file
  • Cron jobs and runtime media/state are derived from the config directory

💻 CLI Reference

| Command | Description |
|---|---|
| nanobot onboard | Initialize config & workspace at ~/.nanobot/ |
| nanobot onboard --wizard | Launch the interactive onboarding wizard |
| nanobot onboard -c <config> -w <workspace> | Initialize or refresh a specific instance config and workspace |
| nanobot agent -m "..." | Chat with the agent |
| nanobot agent -w <workspace> | Chat against a specific workspace |
| nanobot agent -w <workspace> -c <config> | Chat against a specific workspace/config |
| nanobot agent | Interactive chat mode |
| nanobot agent --no-markdown | Show plain-text replies |
| nanobot agent --logs | Show runtime logs during chat |
| nanobot serve | Start the OpenAI-compatible API |
| nanobot gateway | Start the gateway |
| nanobot status | Show status |
| nanobot provider login openai-codex | OAuth login for providers |
| nanobot channels login <channel> | Authenticate a channel interactively |
| nanobot channels status | Show channel status |
Interactive mode exits: exit, quit, /exit, /quit, :q, or Ctrl+D.

<details> <summary><b>Heartbeat (Periodic Tasks)</b></summary>

The gateway wakes up every 30 minutes and checks HEARTBEAT.md in your workspace (~/.nanobot/workspace/HEARTBEAT.md). If the file has tasks, the agent executes them and delivers results to your most recently active chat channel.

Setup: edit ~/.nanobot/workspace/HEARTBEAT.md (created automatically by nanobot onboard):

## Periodic Tasks

- [ ] Check weather forecast and send a summary
- [ ] Scan inbox for urgent emails

The agent can also manage this file itself — ask it to "add a periodic task" and it will update HEARTBEAT.md for you.

Note: The gateway must be running (nanobot gateway) and you must have chatted with the bot at least once so it knows which channel to deliver to.

</details>

๐Ÿ Python SDK

Use nanobot as a library — no CLI, no gateway, just Python (the snippets below use await, so run them inside an async function, e.g. via asyncio.run):

from nanobot import Nanobot

bot = Nanobot.from_config()
result = await bot.run("Summarize the README")
print(result.content)

Each call carries a session_key for conversation isolation — different keys get independent history:

await bot.run("hi", session_key="user-alice")
await bot.run("hi", session_key="task-42")

Add lifecycle hooks to observe or customize the agent:

from nanobot.agent import AgentHook, AgentHookContext

class AuditHook(AgentHook):
    async def before_execute_tools(self, ctx: AgentHookContext) -> None:
        for tc in ctx.tool_calls:
            print(f"[tool] {tc.name}")

result = await bot.run("Hello", hooks=[AuditHook()])

See docs/PYTHON_SDK.md for the full SDK reference.

🔌 OpenAI-Compatible API

nanobot can expose a minimal OpenAI-compatible endpoint for local integrations:

pip install "nanobot-ai[api]"
nanobot serve

By default, the API binds to 127.0.0.1:8900. You can change this in config.json.

Behavior

  • Session isolation: pass "session_id" in the request body to isolate conversations; omit for a shared default session (api:default)
  • Single-message input: each request must contain exactly one user message
  • Fixed model: omit model, or pass the same model shown by /v1/models
  • No streaming: stream=true is not supported

Endpoints

  • GET /health
  • GET /v1/models
  • POST /v1/chat/completions

curl

curl http://127.0.0.1:8900/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "hi"}],
    "session_id": "my-session"
  }'

Python (requests)

import requests

resp = requests.post(
    "http://127.0.0.1:8900/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "hi"}],
        "session_id": "my-session",  # optional: isolate conversation
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

Python (openai)

from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8900/v1",
    api_key="dummy",
)

resp = client.chat.completions.create(
    model="MiniMax-M2.7",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"session_id": "my-session"},  # optional: isolate conversation
)
print(resp.choices[0].message.content)

๐Ÿณ Docker

> [!TIP]
> The -v ~/.nanobot:/root/.nanobot flag mounts your local config directory into the container, so your config and workspace persist across container restarts.

Docker Compose

docker compose run --rm nanobot-cli onboard   # first-time setup
vim ~/.nanobot/config.json                     # add API keys
docker compose up -d nanobot-gateway           # start gateway
docker compose run --rm nanobot-cli agent -m "Hello!"   # run CLI
docker compose logs -f nanobot-gateway                   # view logs
docker compose down                                      # stop

Docker

# Build the image
docker build -t nanobot .

# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard

# Edit config on host to add API keys
vim ~/.nanobot/config.json

# Run gateway (connects to enabled channels, e.g. Telegram/Discord/Mochat)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway

# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status

๐Ÿง Linux Service

Run the gateway as a systemd user service so it starts automatically and restarts on failure.

1. Find the nanobot binary path:

which nanobot   # e.g. /home/user/.local/bin/nanobot

2. Create the service file at ~/.config/systemd/user/nanobot-gateway.service (replace ExecStart path if needed):

[Unit]
Description=Nanobot Gateway
After=network.target

[Service]
Type=simple
ExecStart=%h/.local/bin/nanobot gateway
Restart=always
RestartSec=10
NoNewPrivileges=yes
ProtectSystem=strict
ReadWritePaths=%h

[Install]
WantedBy=default.target

3. Enable and start:

systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway

Common operations:

systemctl --user status nanobot-gateway        # check status
systemctl --user restart nanobot-gateway       # restart after config changes
journalctl --user -u nanobot-gateway -f        # follow logs

If you edit the .service file itself, run systemctl --user daemon-reload before restarting.

Note: User services only run while you are logged in. To keep the gateway running after logout, enable lingering:

loginctl enable-linger $USER

๐Ÿ“ Project Structure

nanobot/
├── agent/          # 🧠 Core agent logic
│   ├── loop.py     #    Agent loop (LLM ↔ tool execution)
│   ├── context.py  #    Prompt builder
│   ├── memory.py   #    Persistent memory
│   ├── skills.py   #    Skills loader
│   ├── subagent.py #    Background task execution
│   └── tools/      #    Built-in tools (incl. spawn)
├── skills/         # 🎯 Bundled skills (github, weather, tmux...)
├── channels/       # 📱 Chat channel integrations (supports plugins)
├── bus/            # 🚌 Message routing
├── cron/           # ⏰ Scheduled tasks
├── heartbeat/      # 💓 Proactive wake-up
├── providers/      # 🤖 LLM providers (OpenRouter, etc.)
├── session/        # 💬 Conversation sessions
├── config/         # ⚙️ Configuration
└── cli/            # 🖥️ Commands

๐Ÿค Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable. ๐Ÿค—

Branching Strategy

| Branch | Purpose |
|---|---|
| main | Stable releases — bug fixes and minor improvements |
| nightly | Experimental features — new features and breaking changes |

Unsure which branch to target? See CONTRIBUTING.md for details.

Roadmap — Pick an item and open a PR!

  • Multi-modal — See and hear (images, voice, video)
  • Long-term memory — Never forget important context
  • Better reasoning — Multi-step planning and reflection
  • More integrations — Calendar and more
  • Self-improvement — Learn from feedback and mistakes

Contributors

<a href="https://github.com/HKUDS/nanobot/graphs/contributors"> <img src="https://contrib.rocks/image?repo=HKUDS/nanobot&max=100&columns=12&updated=20260210" alt="Contributors" /> </a>

โญ Star History

<div align="center"> <a href="https://star-history.com/#HKUDS/nanobot&Date"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=HKUDS/nanobot&type=Date&theme=dark" /> <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=HKUDS/nanobot&type=Date" /> <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=HKUDS/nanobot&type=Date" style="border-radius: 15px; box-shadow: 0 0 30px rgba(0, 217, 255, 0.3);" /> </picture> </a> </div> <p align="center"> <em> Thanks for visiting ✨ nanobot!</em><br><br> <img src="https://visitor-badge.laobi.icu/badge?page_id=HKUDS.nanobot&style=for-the-badge&color=00d4ff" alt="Views"> </p> <p align="center"> <sub>nanobot is for educational, research, and technical exchange purposes only</sub> </p>
