keeper.sh

Built by ridafkih


About

Keeper is a simple & open-source calendar syncing tool. It allows you to pull events from remotely hosted iCal or ICS links, and push them to one or many calendars so the time slots can align across them all.

Features

  • Aggregating calendar events from remote sources
  • Event content agnostic syncing engine
  • Push aggregate events to one or more calendars
  • MCP (Model Context Protocol) server for AI agent calendar access
  • Open source under AGPL-3.0
  • Easy to self-host
  • Easy-to-purge remote events

Bug Reports & Feature Requests

If you encounter a bug or have an idea for a feature, you may open an issue on GitHub and it will be triaged and addressed as soon as possible.

Contributing

High-value and high-quality contributions are appreciated. Before working on large features you intend to see merged, please open an issue first to discuss your approach.

Local Development

The dev environment runs behind HTTPS at https://keeper.localhost using a Caddy reverse proxy with automatic TLS. The .localhost TLD resolves to 127.0.0.1 automatically per RFC 6761 — no /etc/hosts entry is needed.

Prerequisites

Getting Started

bun install

Generate and Trust a Root CA

The dev environment runs behind HTTPS via Caddy. You need to generate a local root certificate authority and trust it so your browser accepts the certificate.

mkdir -p .pki
openssl req -x509 -new -nodes \
  -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout .pki/root.key -out .pki/root.crt \
  -days 3650 -subj "/CN=Keeper.sh CA"

Then trust it on your platform:

macOS

sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain .pki/root.crt

Linux

sudo cp .pki/root.crt /usr/local/share/ca-certificates/keeper-dev-root.crt
sudo update-ca-certificates

Start the Dev Environment

bun dev

This starts PostgreSQL, Redis, and a Caddy reverse proxy via Docker Compose, along with the API, web, MCP, and cron services locally. Once running, open https://keeper.localhost.

Architecture

| Service  | Local Port | Accessed Via |
| -------- | ---------- | ------------ |
| Caddy    | 443        | https://keeper.localhost |
| Web      | 5173       | Proxied by Caddy |
| API      | 3000       | Proxied by Web at /api |
| MCP      | 3001       | Proxied by Web at /mcp |
| Postgres | 5432       | postgresql://postgres:postgres@localhost:5432/postgres |
| Redis    | 6379       | redis://localhost:6379 |

Qs

Why does this exist?

Because I needed it. Ever since starting Sedna—the AI governance platform—I've had to work across three calendars. One for my business, one for work, and one for personal.

Meetings have landed on top of one another a frustratingly high number of times.

Why not use this other service?

I've probably tried it. It was probably too finicky, wasted hours of my time deleting stale events it no longer seemed to want to track, or just didn't sync reliably.

How does the syncing engine work?

  • If we have a local event but no corresponding "source → destination" mapping for it, we push the event to the destination calendar.
  • If we have a mapping for an event, but its source ID is no longer present on the source, we delete the event from the destination.
  • Any event that carries a marker of having been created by Keeper, but has no corresponding local tracking, is removed. This is only done for backwards compatibility.

Events are flagged as having been created by Keeper either using a @keeper.sh suffix on the remote UID, or, on a platform like Outlook that doesn't support custom UIDs, by placing them in a "keeper.sh" category.
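The three rules above can be sketched as small decision functions. This is a hedged illustration only; the names and types are hypothetical, not Keeper's actual code:

```typescript
// Hypothetical sketch of the three sync rules above; names and shapes are
// illustrative, not Keeper's actual implementation.
type Decision = "push" | "delete" | "remove-orphan" | "keep";

// Rule 1: a local event with no "source -> destination" mapping gets pushed.
function decideForSourceEvent(
  sourceId: string,
  mappedSourceIds: Set<string>,
): Decision {
  return mappedSourceIds.has(sourceId) ? "keep" : "push";
}

// Rule 2: a mapped event whose source ID no longer exists on the source
// gets deleted from the destination.
function decideForMapping(
  mappedSourceId: string,
  liveSourceIds: Set<string>,
): Decision {
  return liveSourceIds.has(mappedSourceId) ? "keep" : "delete";
}

// Rule 3: a destination event marked as Keeper-created (e.g. a "@keeper.sh"
// UID suffix) but with no local tracking is removed as an orphan.
function decideForDestinationEvent(
  uid: string,
  trackedUids: Set<string>,
): Decision {
  return uid.endsWith("@keeper.sh") && !trackedUids.has(uid)
    ? "remove-orphan"
    : "keep";
}
```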

Cloud Hosted

I've made Keeper easy to self-host, but whether you simply want to support the project or don't want to deal with the hassle or overhead of configuring and running your own infrastructure, cloud hosting is always an option.

Head to keeper.sh to get started with the cloud-hosted version. Use code README for 25% off.

|                   | Free       | Pro (Cloud-Hosted) | Pro (Self-Hosted) |
| ----------------- | ---------- | ------------------ | ----------------- |
| Monthly Price     | $0 USD     | $5 USD             | $0                |
| Annual Price      | $0 USD     | $42 USD (-30%)     | $0                |
| Refresh Interval  | 30 minutes | 1 minute           | 1 minute          |
| Source Limit      | 2          |                    |                   |
| Destination Limit | 1          |                    |                   |

Self Hosted

By hosting Keeper yourself, you get all premium features for free, can guarantee data governance and autonomy, and it's fun. If you'll be self-hosting, please consider supporting me and development of the project by sponsoring me on GitHub.

There are seven images currently available: two are designed for convenience, while the other five serve the granular underlying services.

[!NOTE]

Migrating from a previous version? If you are upgrading from the older Next.js-based release, see the migration guide for environment variable changes. The new web server will also print a migration notice at startup if it detects old environment variables.

Environment Variables

| Name | Service(s) | Description |
| ---- | ---------- | ----------- |
| DATABASE_URL | api, cron, worker, mcp | PostgreSQL connection URL. e.g. postgres://user:pass@postgres:5432/keeper |
| REDIS_URL | api, cron, worker | Redis connection URL. Must be the same Redis instance across all services. e.g. redis://redis:6379 |
| WORKER_JOB_QUEUE_ENABLED | cron | Required. Set to true to enqueue sync jobs to the worker queue, or false to disable. If unset, the cron service will exit with a migration notice. |
| BETTER_AUTH_URL | api, mcp | The base URL used for auth redirects. e.g. http://localhost:3000 |
| BETTER_AUTH_SECRET | api, mcp | Secret key for session signing. e.g. openssl rand -base64 32 |
| API_PORT | api | Port the Bun API listens on. Defaults to 3001 in container images. |
| ENV | web | Optional. Runtime environment. One of development, production, or test. Defaults to production. |
| PORT | web | Port the web server listens on. Defaults to 3000 in container images. |
| VITE_API_URL | web | The URL the web server uses to proxy requests to the Bun API. e.g. http://api:3001 |
| COMMERCIAL_MODE | api, cron | Enable Polar billing flow. Set to true if using Polar for subscriptions. |
| POLAR_ACCESS_TOKEN | api, cron | Optional. Polar API token for subscription management. |
| POLAR_MODE | api, cron | Optional. Polar environment, sandbox or production. |
| POLAR_WEBHOOK_SECRET | api | Optional. Secret to verify Polar webhooks. |
| ENCRYPTION_KEY | api, cron, worker | Key for encrypting CalDAV credentials at rest. e.g. openssl rand -base64 32 |
| RESEND_API_KEY | api | Optional. API key for sending emails via Resend. |
| PASSKEY_RP_ID | api | Optional. Relying party ID for passkey authentication. |
| PASSKEY_RP_NAME | api | Optional. Relying party display name for passkeys. |
| PASSKEY_ORIGIN | api | Optional. Origin allowed for passkey flows (e.g. https://keeper.example.com). |
| GOOGLE_CLIENT_ID | api, cron, worker | Optional. Required for Google Calendar integration. |
| GOOGLE_CLIENT_SECRET | api, cron, worker | Optional. Required for Google Calendar integration. |
| MICROSOFT_CLIENT_ID | api, cron, worker | Optional. Required for Microsoft Outlook integration. |
| MICROSOFT_CLIENT_SECRET | api, cron, worker | Optional. Required for Microsoft Outlook integration. |
| POSTGRES_PASSWORD | standalone | Optional. Custom password for the internal PostgreSQL database in keeper-standalone. If unset, defaults to keeper. The database is not exposed outside the container, so this is low risk, but can be set for defense in depth. |
| BLOCK_PRIVATE_RESOLUTION | api, cron | Optional. Set to true to block outbound fetches (ICS subscriptions, CalDAV servers) from resolving to private/reserved network addresses. Prevents SSRF. Defaults to false for backward compatibility with self-hosted setups that use local CalDAV/ICS servers. |
| PRIVATE_RESOLUTION_WHITELIST | api, cron | Optional. When BLOCK_PRIVATE_RESOLUTION is true, this comma-separated list of hostnames or IPs is exempt from the restriction. e.g. 192.168.1.50,radicale.local,10.0.2.12 |
| TRUSTED_ORIGINS | api | Optional. Comma-separated list of additional trusted origins for CSRF protection. e.g. http://192.168.1.100,http://keeper.local,https://keeper.example.com |
| MCP_PUBLIC_URL | api, mcp | Optional. Public URL of the MCP resource. Enables OAuth on the API and identifies the MCP server to clients. e.g. https://keeper.example.com/mcp |
| VITE_MCP_URL | web | Optional. Internal URL the web server uses to proxy /mcp requests to the MCP service. e.g. http://mcp:3002 |
| MCP_PORT | mcp | Optional. Port the MCP server listens on. e.g. 3002 |
| OTEL_EXPORTER_OTLP_ENDPOINT | api, cron, worker, mcp, web | Optional. When set, enables forwarding structured logs to an OpenTelemetry collector via pino-opentelemetry-transport. The transport runs in a dedicated worker thread and does not affect application performance. e.g. https://otel-collector.example.com:4318 |
| OTEL_EXPORTER_OTLP_PROTOCOL | api, cron, worker, mcp, web | Optional. Protocol used by the OTLP exporter. Defaults to http/protobuf per the OpenTelemetry spec. e.g. http/protobuf, grpc, http/json |
| OTEL_EXPORTER_OTLP_HEADERS | api, cron, worker, mcp, web | Optional. Headers sent with every OTLP export request. Use this for authentication (e.g. Basic auth or API keys). e.g. Authorization=Basic dXNlcjpwYXNz |
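As an illustration of what BLOCK_PRIVATE_RESOLUTION guards against, a private/reserved-address check might look like the following. This is a hedged sketch, not Keeper's actual implementation, which may also cover IPv6 and additional reserved ranges:

```typescript
import { isIP } from "node:net";

// Sketch of an SSRF guard in the spirit of BLOCK_PRIVATE_RESOLUTION:
// reject outbound fetches whose hostname resolves to a private/reserved
// IPv4 range. Ranges per RFC 1918, plus loopback and link-local.
function isPrivateIPv4(ip: string): boolean {
  if (isIP(ip) !== 4) return false;
  const [a, b] = ip.split(".").map(Number);
  return (
    a === 10 || // 10.0.0.0/8
    a === 127 || // 127.0.0.0/8 (loopback)
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) || // 192.168.0.0/16
    (a === 169 && b === 254) // 169.254.0.0/16 (link-local)
  );
}
```

PRIVATE_RESOLUTION_WHITELIST would then exempt specific hostnames or IPs from a check like this.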

The following environment variables are baked into the web image at build time. They are pre-configured in the official Docker images and only need to be set if you are building from source.

| Name | Description |
| ---- | ----------- |
| VITE_COMMERCIAL_MODE | Toggle commercial mode in the web UI (true/false). |
| POLAR_PRO_MONTHLY_PRODUCT_ID | Optional. Polar monthly product ID to power in-app upgrade links. |
| POLAR_PRO_YEARLY_PRODUCT_ID | Optional. Polar yearly product ID to power in-app upgrade links. |
| VITE_VISITORS_NOW_TOKEN | Optional. visitors.now token for analytics. |
| VITE_GOOGLE_ADS_ID | Optional. Google Ads conversion tracking ID (e.g. AW-123456789). |
| VITE_GOOGLE_ADS_CONVERSION_LABEL | Optional. Google Ads conversion label for purchase tracking. |

[!NOTE]

  • keeper-standalone auto-configures everything internally — both the web server and Bun API sit behind a single Caddy reverse proxy on port 80.
  • keeper-services runs the web, API, cron, and worker services inside one container. The web server proxies /api requests internally, so only port 3000 needs to be exposed.
  • For individual images, only the web container needs to be exposed. The API is accessed internally via VITE_API_URL.

Images

| Tag | Description | Included Services |
| --- | ----------- | ----------------- |
| keeper-standalone:2.9 | The "standalone" image is everything you need to get up and running with Keeper with as little configuration as possible. | keeper-web, keeper-api, keeper-cron, keeper-worker, redis, postgresql, caddy |
| keeper-services:2.9 | If you'd like the Redis & database to exist outside of the container, you can use the "services" image to launch without them included in the image. | keeper-web, keeper-api, keeper-cron, keeper-worker |
| keeper-web:2.9 | An image containing the Vite SSR web interface. | keeper-web |
| keeper-api:2.9 | An image containing the Bun API service. | keeper-api |
| keeper-cron:2.9 | An image containing the Bun cron service. Requires keeper-worker for destination syncing. | keeper-cron |
| keeper-worker:2.9 | An image containing the BullMQ worker that processes calendar sync jobs enqueued by keeper-cron. | keeper-worker |
| keeper-mcp:2.9 | An image containing the MCP server for AI agent calendar access. Optional; only needed if using MCP clients. | keeper-mcp |

[!TIP]

Pin your images to a major.minor version tag (e.g., 2.9) rather than latest. This prevents breaking changes from automatically applying when you pull new images.

Prerequisites

Docker & Docker Compose

To install Docker Compose, please refer to the official Docker documentation.

Google OAuth Credentials

[!TIP]

This is optional, although you will not be able to set Google Calendar as a destination without this.

Reference the official Google Cloud Platform documentation to generate valid credentials for Google OAuth. You must grant your consent screen the calendar.events, calendar.calendarlist.readonly, and userinfo.email scopes.

Once this is configured, set the client ID and client secret as the GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET environment variables at runtime.

Microsoft Azure Credentials

[!TIP]

Once again, this is optional. Without it, you will not be able to set Microsoft Outlook as a destination.

Microsoft does not appear to do documentation well; the best non-legacy instructions I could find for configuring OAuth are in this community thread. The required scopes are Calendars.ReadWrite, User.Read, and offline_access. The Microsoft client ID and secret go into the MICROSOFT_CLIENT_ID and MICROSOFT_CLIENT_SECRET environment variables, respectively.

Standalone Container

While you'd typically want to run containers granularly, if you just want to get up and running, a convenience image, keeper-standalone:2.9, is provided. It contains the cron, worker, web, and api services, as well as configured redis, database, and caddy instances that put everything behind the same port. While this is the easiest way to spin up Keeper, it is not considered best practice.

Generate keeper-standalone Environment Variables

The following will generate a .env file containing the key used to sign sessions, as well as the key used to encrypt CalDAV credentials at rest.

[!IMPORTANT]

If you plan on accessing Keeper from a URL other than http://localhost, you will need to set the TRUSTED_ORIGINS environment variable. This should be a comma-delimited list of the origins you will be using, each including protocol and hostname.

Here is an example where we access Keeper from a LAN IP and also route it through a reverse proxy that hosts it at https://keeper.example.com/:

TRUSTED_ORIGINS=http://10.0.0.2,https://keeper.example.com

Without this, requests will fail CSRF checks in the better-auth package.

cat > .env << EOF
# BETTER_AUTH_SECRET and ENCRYPTION_KEY are required.
# TRUSTED_ORIGINS is required if you plan on accessing Keeper from an
# origin other than http://localhost/
BETTER_AUTH_SECRET=$(openssl rand -base64 32)
ENCRYPTION_KEY=$(openssl rand -base64 32)
TRUSTED_ORIGINS=
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
MICROSOFT_CLIENT_ID=
MICROSOFT_CLIENT_SECRET=
EOF

Run keeper-standalone with Docker

If you'd like to run using just the Docker CLI, you can use the following command. I would, however, recommend using a compose.yaml file.

docker run -d \
  -p 80:80 \
  -v keeper-data:/var/lib/postgresql/data \
  --env-file .env \
  ghcr.io/ridafkih/keeper-standalone:2.9

Run keeper-standalone with Docker Compose

If you'd prefer to use a compose.yaml file, the following is an example. Remember to populate your .env file first.

services:
  keeper:
    image: ghcr.io/ridafkih/keeper-standalone:2.9
    ports:
      - "80:80"
    volumes:
      - keeper-data:/var/lib/postgresql/data
    environment:
      BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET}
      ENCRYPTION_KEY: ${ENCRYPTION_KEY}
      TRUSTED_ORIGINS: ${TRUSTED_ORIGINS}
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID:-}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET:-}
      MICROSOFT_CLIENT_ID: ${MICROSOFT_CLIENT_ID:-}
      MICROSOFT_CLIENT_SECRET: ${MICROSOFT_CLIENT_SECRET:-}

volumes:
  keeper-data:

Once that's configured, you can launch Keeper using the following command.

docker compose up -d

Once it's up, you can access Keeper at http://localhost/. You can use a reverse proxy like Nginx or Caddy to put Keeper behind a domain on your network.
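For example, a minimal Caddyfile for putting Keeper behind a domain might look like the following. This is a sketch under two assumptions: keeper.example.com is a placeholder domain, and Keeper's published port has been remapped (e.g. -p 8080:80) so Caddy itself can bind ports 80/443 on the host:

```
# Caddy provisions TLS for the domain automatically.
keeper.example.com {
    reverse_proxy localhost:8080
}
```

Remember to add the public origin (e.g. https://keeper.example.com) to TRUSTED_ORIGINS, or requests will fail CSRF checks.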

Collective Services Image

If you'd like to bring your own Redis and PostgreSQL, you can use the keeper-services image. This contains the cron, worker, web, and api services in one.

Generate keeper-services Environment Variables

cat > .env << EOF
# DATABASE_URL and REDIS_URL are required.
# *_CLIENT_ID and *_CLIENT_SECRET are optional.
BETTER_AUTH_SECRET=$(openssl rand -base64 32)
ENCRYPTION_KEY=$(openssl rand -base64 32)
DATABASE_URL=postgres://keeper:keeper@postgres:5432/keeper
REDIS_URL=redis://redis:6379
BETTER_AUTH_URL=http://localhost:3000
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
MICROSOFT_CLIENT_ID=
MICROSOFT_CLIENT_SECRET=
EOF

Run keeper-services with Docker Compose

Once you've populated your environment variables, you can choose to run redis and postgres alongside the keeper-services image to get up and running.

services:
  postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: keeper
      POSTGRES_PASSWORD: keeper
      POSTGRES_DB: keeper
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U keeper -d keeper"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

  keeper:
    image: ghcr.io/ridafkih/keeper-services:2.9
    environment:
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: ${REDIS_URL}
      BETTER_AUTH_URL: ${BETTER_AUTH_URL}
      BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET}
      ENCRYPTION_KEY: ${ENCRYPTION_KEY}
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID:-}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET:-}
      MICROSOFT_CLIENT_ID: ${MICROSOFT_CLIENT_ID:-}
      MICROSOFT_CLIENT_SECRET: ${MICROSOFT_CLIENT_SECRET:-}
    ports:
      - "3000:3000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

volumes:
  postgres-data:
  redis-data:

Once that's configured, you can launch Keeper using the following command.

docker compose up -d

Individual Service Images

While running services individually is considered best-practice, it is verbose and more complicated to configure. Each service is hosted in its own image.

Generate Individual Service Environment Variables

cat > .env << EOF
# The only optional variables are *_CLIENT_ID, *_CLIENT_SECRET
BETTER_AUTH_SECRET=$(openssl rand -base64 32)
ENCRYPTION_KEY=$(openssl rand -base64 32)
VITE_API_URL=http://api:3001
POSTGRES_USER=keeper
POSTGRES_PASSWORD=keeper
POSTGRES_DB=keeper
REDIS_URL=redis://redis:6379
BETTER_AUTH_URL=http://localhost:3000
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
MICROSOFT_CLIENT_ID=
MICROSOFT_CLIENT_SECRET=
EOF

Configure Individual Service compose.yaml

services:
  postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U keeper -d keeper"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

  api:
    image: ghcr.io/ridafkih/keeper-api:2.9
    environment:
      API_PORT: 3001
      DATABASE_URL: postgres://keeper:keeper@postgres:5432/keeper
      REDIS_URL: redis://redis:6379
      BETTER_AUTH_URL: ${BETTER_AUTH_URL}
      BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET}
      ENCRYPTION_KEY: ${ENCRYPTION_KEY}
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID:-}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET:-}
      MICROSOFT_CLIENT_ID: ${MICROSOFT_CLIENT_ID:-}
      MICROSOFT_CLIENT_SECRET: ${MICROSOFT_CLIENT_SECRET:-}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  cron:
    image: ghcr.io/ridafkih/keeper-cron:2.9
    environment:
      DATABASE_URL: postgres://keeper:keeper@postgres:5432/keeper
      REDIS_URL: redis://redis:6379
      ENCRYPTION_KEY: ${ENCRYPTION_KEY}
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID:-}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET:-}
      MICROSOFT_CLIENT_ID: ${MICROSOFT_CLIENT_ID:-}
      MICROSOFT_CLIENT_SECRET: ${MICROSOFT_CLIENT_SECRET:-}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  web:
    image: ghcr.io/ridafkih/keeper-web:2.9
    environment:
      VITE_API_URL: ${VITE_API_URL}
      PORT: 3000
    ports:
      - "3000:3000"
    depends_on:
      api:
        condition: service_started

volumes:
  postgres-data:
  redis-data:

Once that's configured, you can launch Keeper using the following command.

docker compose up -d

MCP (Model Context Protocol)

Keeper includes an optional MCP server that lets AI agents (such as Claude) access your calendar data through a standardized protocol. The MCP server authenticates via OAuth 2.1 with a consent flow hosted by the web application.

Available Tools

| Tool | Description |
| ---- | ----------- |
| list_calendars | List all calendars connected to Keeper, including provider name and account. |
| get_events | Get calendar events within a date range. Accepts ISO 8601 datetimes and an IANA timezone identifier. |
| get_event_count | Get the total number of calendar events synced to Keeper. |

Connecting an MCP Client

To connect an MCP-compatible client (e.g. Claude Code, Claude Desktop), point it at your MCP server URL. The client will be guided through the OAuth consent flow to authorize read access to your calendar data.

Example Claude Code MCP configuration:

{
  "mcpServers": {
    "keeper": {
      "type": "url",
      "url": "https://keeper.example.com/mcp"
    }
  }
}

Self-Hosted MCP Setup

[!NOTE]

MCP is fully optional. All MCP-related environment variables are optional across every service and image. If they are not set, Keeper starts normally without MCP functionality. Existing self-hosted deployments are unaffected.

The MCP server is proxied through the web service at /mcp, the same way the API is proxied at /api. MCP is not bundled in the keeper-standalone or keeper-services convenience images — run the keeper-mcp image as a separate container alongside them.

To enable MCP on a self-hosted instance:

  1. Run the keeper-mcp container with MCP_PORT, MCP_PUBLIC_URL, DATABASE_URL, BETTER_AUTH_SECRET, and BETTER_AUTH_URL.
  2. Set MCP_PUBLIC_URL on the api service to the same value (e.g. https://keeper.example.com/mcp).
  3. Set VITE_MCP_URL on the web service to the internal URL of the MCP container (e.g. http://mcp:3002).
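The three steps above can be sketched as a compose fragment. This is a hedged example, not an official configuration: the service name mcp, the keeper.example.com domain, and the database URL follow the earlier examples and should be adjusted to your deployment:

```yaml
services:
  mcp:
    image: ghcr.io/ridafkih/keeper-mcp:2.9
    environment:
      MCP_PORT: 3002
      # Public URL clients see; also set MCP_PUBLIC_URL on the api service
      # to this same value.
      MCP_PUBLIC_URL: https://keeper.example.com/mcp
      DATABASE_URL: postgres://keeper:keeper@postgres:5432/keeper
      BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET}
      BETTER_AUTH_URL: ${BETTER_AUTH_URL}
```

The web service then needs VITE_MCP_URL set to the container's internal URL (http://mcp:3002 in this sketch) so it can proxy /mcp requests.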

Modules

Applications

  1. @keeper.sh/api
  2. @keeper.sh/cron
  3. @keeper.sh/mcp
  4. @keeper.sh/web
  5. @keeper.sh/cli (Coming Soon)
  6. @keeper.sh/mobile (Coming Soon)
  7. @keeper.sh/ssh (Coming Soon)

Modules

  1. @keeper.sh/auth
  2. @keeper.sh/auth-plugin-username-only
  3. @keeper.sh/broadcast
  4. @keeper.sh/broadcast-client
  5. @keeper.sh/calendar
  6. @keeper.sh/constants
  7. @keeper.sh/data-schemas
  8. @keeper.sh/database
  9. @keeper.sh/date-utils
  10. @keeper.sh/encryption
  11. @keeper.sh/env
  12. @keeper.sh/fixtures
  13. @keeper.sh/keeper-api
  14. @keeper.sh/oauth
  15. @keeper.sh/oauth-google
  16. @keeper.sh/oauth-microsoft
  17. @keeper.sh/premium
  18. @keeper.sh/provider-caldav
  19. @keeper.sh/provider-core
  20. @keeper.sh/provider-fastmail
  21. @keeper.sh/provider-google-calendar
  22. @keeper.sh/provider-icloud
  23. @keeper.sh/provider-outlook
  24. @keeper.sh/provider-registry
  25. @keeper.sh/pull-calendar
  26. @keeper.sh/sync-calendar
  27. @keeper.sh/sync-events
  28. @keeper.sh/typescript-config
