
contextplus

Built by ForLoopCodes · 1,605 stars on GitHub

What is contextplus?

Semantic Intelligence for Large-Scale Engineering. Context+ is an MCP server designed for developers who demand 99% accuracy. By combining RAG, Tree-sitter AST, Spectral Clustering, and Obsidian-style linking, Context+ turns a massive codebase into a searchable, hierarchical feature graph.

How to use contextplus?

1. Install a compatible MCP client (such as Claude Desktop).
2. Open your configuration settings.
3. Add contextplus using the following command: npx -y contextplus
4. Restart the client and verify the new tools are active.

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified High-performance Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

contextplus FAQ

Q: Is contextplus safe?

A: Yes. contextplus follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q: Is contextplus up to date?

A: contextplus is currently active in the registry and has 1,605 stars on GitHub, which reflects ongoing community interest and maintenance.

Q: Are there any limits for contextplus?

A: Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub

Context+

Semantic Intelligence for Large-Scale Engineering.


https://github.com/user-attachments/assets/a97a451f-c9b4-468d-b036-15b65fc13e79

Tools

Discovery

| Tool | Description |
| --- | --- |
| get_context_tree | Structural AST tree of a project with file headers and symbol ranges (line numbers for functions/classes/methods). Dynamic pruning shrinks output automatically. |
| get_file_skeleton | Function signatures, class methods, and type definitions with line ranges, without reading full bodies. Shows the API surface. |
| semantic_code_search | Search by meaning, not exact text. Uses embeddings over file headers/symbols and returns matched symbol definition lines. |
| semantic_identifier_search | Identifier-level semantic retrieval for functions/classes/variables with ranked call sites and line numbers. |
| semantic_navigate | Browse the codebase by meaning using spectral clustering. Groups semantically related files into labeled clusters. |
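As a rough sketch of what "search by meaning" rests on, the ranking step of an embedding-based search boils down to cosine similarity between a query vector and stored symbol vectors. Everything below — the index shape and the toy three-dimensional embeddings — is illustrative, not the server's actual code; real vectors come from the configured embedding provider.

```typescript
// Toy index entry: a symbol name, its definition line, and an embedding.
type SymbolEntry = { name: string; line: number; embedding: number[] };

// Standard cosine similarity between two equal-length vectors.
function cosineSim(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank indexed symbols against a query embedding, best match first.
function rankSymbols(query: number[], index: SymbolEntry[]): SymbolEntry[] {
  return [...index].sort(
    (x, y) => cosineSim(query, y.embedding) - cosineSim(query, x.embedding),
  );
}

const index: SymbolEntry[] = [
  { name: "parseConfig", line: 12, embedding: [0.9, 0.1, 0.0] },
  { name: "renderChart", line: 88, embedding: [0.1, 0.9, 0.2] },
];
const best = rankSymbols([0.85, 0.15, 0.05], index)[0]; // closest in embedding space
```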

Analysis

| Tool | Description |
| --- | --- |
| get_blast_radius | Trace every file and line where a symbol is imported or used. Prevents orphaned references. |
| run_static_analysis | Run native linters and compilers to find unused variables, dead code, and type errors. Supports TypeScript, Python, Rust, and Go. |
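To make the blast-radius idea concrete, here is a minimal sketch of "find every file and line that mentions a symbol". The real tool resolves imports over a Tree-sitter AST rather than doing plain text matching, so treat this only as an illustration of the output shape:

```typescript
// Scan an in-memory map of file path -> source text and report every
// line where the symbol name appears (1-indexed line numbers).
function blastRadius(
  files: Record<string, string>,
  symbol: string,
): { file: string; line: number }[] {
  const hits: { file: string; line: number }[] = [];
  for (const [file, source] of Object.entries(files)) {
    source.split("\n").forEach((text, i) => {
      if (text.includes(symbol)) hits.push({ file, line: i + 1 });
    });
  }
  return hits;
}

const hits = blastRadius(
  {
    "src/db.ts": "export function connect() {}\n",
    "src/app.ts": 'import { connect } from "./db";\nconnect();\n',
  },
  "connect",
);
// hits lists the definition plus both usage sites in src/app.ts
```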

Code Ops

| Tool | Description |
| --- | --- |
| propose_commit | The only way to write code. Validates against strict rules before saving. Creates a shadow restore point before writing. |
| get_feature_hub | Obsidian-style feature hub navigator. Hubs are .md files with [[wikilinks]] that map features to code files. |

Version Control

| Tool | Description |
| --- | --- |
| list_restore_points | List all shadow restore points created by propose_commit. Each captures file state before an AI change. |
| undo_change | Restore files to their state before a specific AI change, using shadow restore points. Does not affect git. |
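The shadow restore point mechanism can be sketched as a content snapshot taken before each write, restored on undo, entirely outside git. The class and method names below are illustrative, not the server's internals:

```typescript
// A restore point captures file contents at a moment in time.
type RestorePoint = { id: number; files: Map<string, string> };

class ShadowStore {
  private points: RestorePoint[] = [];
  private nextId = 1;

  // Snapshot the workspace before a proposed change lands.
  snapshot(files: Map<string, string>): number {
    const id = this.nextId++;
    this.points.push({ id, files: new Map(files) }); // copy current contents
    return id;
  }

  // Roll the workspace back to a given restore point.
  undo(id: number, workspace: Map<string, string>): void {
    const point = this.points.find((p) => p.id === id);
    if (!point) throw new Error(`no restore point ${id}`);
    for (const [path, content] of point.files) workspace.set(path, content);
  }
}

const workspace = new Map([["a.ts", "original"]]);
const store = new ShadowStore();
const id = store.snapshot(workspace); // before the AI edit
workspace.set("a.ts", "ai-modified"); // the proposed change lands
store.undo(id, workspace);            // roll it back, git history untouched
```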

Memory & RAG

| Tool | Description |
| --- | --- |
| upsert_memory_node | Create or update a memory node (concept, file, symbol, note) with auto-generated embeddings. |
| create_relation | Create typed edges between nodes (relates_to, depends_on, implements, references, similar_to, contains). |
| search_memory_graph | Semantic search with graph traversal: finds direct matches, then walks first- and second-degree neighbors. |
| prune_stale_links | Remove decayed edges (e^(-λt) below a threshold) and orphan nodes with low access counts. |
| add_interlinked_context | Bulk-add nodes with auto-similarity linking (cosine ≥ 0.72 creates edges automatically). |
| retrieve_with_traversal | Start from a node and walk outward; returns all reachable neighbors scored by decay and depth. |
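The two numeric rules quoted above — auto-linking at cosine ≥ 0.72 and pruning when the decayed weight e^(-λt) falls below a threshold — can be sketched directly. The 0.72 cutoff is from the source; λ and the prune threshold are assumed values chosen for illustration:

```typescript
const LINK_THRESHOLD = 0.72; // from add_interlinked_context (source)
const LAMBDA = 0.1;          // assumed decay rate per day (illustrative)
const PRUNE_BELOW = 0.2;     // assumed prune threshold (illustrative)

// Auto-create an edge only when similarity clears the cutoff.
function shouldLink(similarity: number): boolean {
  return similarity >= LINK_THRESHOLD;
}

// Exponential decay of an edge weight after `days` without access.
function decayedWeight(initial: number, days: number): number {
  return initial * Math.exp(-LAMBDA * days);
}

// prune_stale_links would drop edges whose decayed weight is too low.
function isStale(initial: number, days: number): boolean {
  return decayedWeight(initial, days) < PRUNE_BELOW;
}
```

With these numbers, a full-weight edge untouched for a month decays to e^(-3) ≈ 0.05 and gets pruned, while an edge accessed yesterday keeps about 90% of its weight.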

Setup

Quick Start (npx / bunx)

No installation needed. Add Context+ to your IDE MCP config.

For Claude Code, Cursor, and Windsurf, use mcpServers:

{
  "mcpServers": {
    "contextplus": {
      "command": "bunx",
      "args": ["contextplus"],
      "env": {
        "OLLAMA_EMBED_MODEL": "nomic-embed-text",
        "OLLAMA_CHAT_MODEL": "gemma2:27b",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY"
      }
    }
  }
}

For VS Code (.vscode/mcp.json), use servers and inputs:

{
  "servers": {
    "contextplus": {
      "type": "stdio",
      "command": "bunx",
      "args": ["contextplus"],
      "env": {
        "OLLAMA_EMBED_MODEL": "nomic-embed-text",
        "OLLAMA_CHAT_MODEL": "gemma2:27b",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY"
      }
    }
  },
  "inputs": []
}

If you prefer npx, use:

  • "command": "npx"
  • "args": ["-y", "contextplus"]
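Applying those two substitutions to the earlier mcpServers block gives the full npx variant:

```json
{
  "mcpServers": {
    "contextplus": {
      "command": "npx",
      "args": ["-y", "contextplus"],
      "env": {
        "OLLAMA_EMBED_MODEL": "nomic-embed-text",
        "OLLAMA_CHAT_MODEL": "gemma2:27b",
        "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY"
      }
    }
  }
}
```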

Or generate the MCP config file directly in your current directory:

npx -y contextplus init claude
bunx contextplus init cursor
npx -y contextplus init opencode

Supported coding agent names: claude, cursor, vscode, windsurf, opencode.

Config file locations:

| IDE | Config File |
| --- | --- |
| Claude Code | .mcp.json |
| Cursor | .cursor/mcp.json |
| VS Code | .vscode/mcp.json |
| Windsurf | .windsurf/mcp.json |
| OpenCode | opencode.json |

CLI Subcommands

  • init [target] - Generate MCP configuration (targets: claude, cursor, vscode, windsurf, opencode).
  • skeleton [path] or tree [path] - (New) View the structural tree of a project with file headers and symbol definitions directly in your terminal.
  • [path] - Start the MCP server (stdio) for the specified path (defaults to current directory).

From Source

npm install
npm run build

Embedding Providers

Context+ supports two embedding backends controlled by CONTEXTPLUS_EMBED_PROVIDER:

| Provider | Value | Requires | Best For |
| --- | --- | --- | --- |
| Ollama (default) | ollama | Local Ollama server | Free, offline, private |
| OpenAI-compatible | openai | API key | Gemini (free tier), OpenAI, Groq, vLLM |

Ollama (Default)

No extra configuration needed. Just run Ollama with an embedding model:

ollama pull nomic-embed-text
ollama serve

Google Gemini (Free Tier)

Full Claude Code .mcp.json example:

{
  "mcpServers": {
    "contextplus": {
      "command": "npx",
      "args": ["-y", "contextplus"],
      "env": {
        "CONTEXTPLUS_EMBED_PROVIDER": "openai",
        "CONTEXTPLUS_OPENAI_API_KEY": "YOUR_GEMINI_API_KEY",
        "CONTEXTPLUS_OPENAI_BASE_URL": "https://generativelanguage.googleapis.com/v1beta/openai",
        "CONTEXTPLUS_OPENAI_EMBED_MODEL": "text-embedding-004"
      }
    }
  }
}

Get a free API key at Google AI Studio.

OpenAI

{
  "mcpServers": {
    "contextplus": {
      "command": "npx",
      "args": ["-y", "contextplus"],
      "env": {
        "CONTEXTPLUS_EMBED_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-...",
        "OPENAI_EMBED_MODEL": "text-embedding-3-small"
      }
    }
  }
}

Other OpenAI-compatible APIs (Groq, vLLM, LiteLLM)

Any endpoint implementing the OpenAI Embeddings API works:

{
  "mcpServers": {
    "contextplus": {
      "command": "npx",
      "args": ["-y", "contextplus"],
      "env": {
        "CONTEXTPLUS_EMBED_PROVIDER": "openai",
        "CONTEXTPLUS_OPENAI_API_KEY": "YOUR_KEY",
        "CONTEXTPLUS_OPENAI_BASE_URL": "https://your-proxy.example.com/v1",
        "CONTEXTPLUS_OPENAI_EMBED_MODEL": "your-model-name"
      }
    }
  }
}

Note: The semantic_navigate tool also uses a chat model for cluster labeling. When using the openai provider, set CONTEXTPLUS_OPENAI_CHAT_MODEL (default: gpt-4o-mini).

For VS Code, Cursor, or OpenCode, use the same env block inside your IDE's MCP config format (see Config file locations table above).

Architecture

Three layers built with TypeScript over stdio using the Model Context Protocol SDK:

Core (src/core/) - Multi-language AST parsing (tree-sitter, 43 extensions), gitignore-aware traversal, Ollama vector embeddings with disk cache, wikilink hub graph, in-memory property graph with decay scoring.

Tools (src/tools/) - 17 MCP tools exposing structural, semantic, operational, and memory graph capabilities.

Git (src/git/) - Shadow restore point system for undo without touching git history.

Runtime Cache (.mcp_data/) - created on server startup; stores reusable file, identifier, and call-site embeddings to avoid repeated GPU/CPU embedding work. A realtime tracker refreshes changed files/functions incrementally.

Config

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| CONTEXTPLUS_EMBED_PROVIDER | string | ollama | Embedding backend: ollama or openai |
| OLLAMA_EMBED_MODEL | string | nomic-embed-text | Ollama embedding model |
| OLLAMA_API_KEY | string | - | Ollama Cloud API key |
| OLLAMA_CHAT_MODEL | string | llama3.2 | Ollama chat model for cluster labeling |
| CONTEXTPLUS_OPENAI_API_KEY | string | - | API key for OpenAI-compatible provider (alias: OPENAI_API_KEY) |
| CONTEXTPLUS_OPENAI_BASE_URL | string | https://api.openai.com/v1 | OpenAI-compatible endpoint URL (alias: OPENAI_BASE_URL) |
| CONTEXTPLUS_OPENAI_EMBED_MODEL | string | text-embedding-3-small | OpenAI-compatible embedding model (alias: OPENAI_EMBED_MODEL) |
| CONTEXTPLUS_OPENAI_CHAT_MODEL | string | gpt-4o-mini | OpenAI-compatible chat model for labeling (alias: OPENAI_CHAT_MODEL) |
| CONTEXTPLUS_EMBED_BATCH_SIZE | string (parsed as number) | 8 | Embedding batch size per GPU call, clamped to 5-10 |
| CONTEXTPLUS_EMBED_CHUNK_CHARS | string (parsed as number) | 2000 | Per-chunk characters before merge, clamped to 256-8000 |
| CONTEXTPLUS_MAX_EMBED_FILE_SIZE | string (parsed as number) | 51200 | Skip non-code text files larger than this many bytes |
| CONTEXTPLUS_EMBED_NUM_GPU | string (parsed as number) | - | Optional Ollama embed runtime num_gpu override |
| CONTEXTPLUS_EMBED_MAIN_GPU | string (parsed as number) | - | Optional Ollama embed runtime main_gpu override |
| CONTEXTPLUS_EMBED_NUM_THREAD | string (parsed as number) | - | Optional Ollama embed runtime num_thread override |
| CONTEXTPLUS_EMBED_NUM_BATCH | string (parsed as number) | - | Optional Ollama embed runtime num_batch override |
| CONTEXTPLUS_EMBED_NUM_CTX | string (parsed as number) | - | Optional Ollama embed runtime num_ctx override |
| CONTEXTPLUS_EMBED_LOW_VRAM | string (parsed as boolean) | - | Optional Ollama embed runtime low_vram override |
| CONTEXTPLUS_EMBED_TRACKER | string (parsed as boolean) | true | Enable realtime embedding refresh on file changes |
| CONTEXTPLUS_EMBED_TRACKER_MAX_FILES | string (parsed as number) | 8 | Max changed files processed per tracker tick, clamped to 5-10 |
| CONTEXTPLUS_EMBED_TRACKER_DEBOUNCE_MS | string (parsed as number) | 700 | Debounce window in milliseconds before tracker refresh |
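Several of these variables are strings parsed as numbers and then clamped (for example, batch size to 5-10). A minimal sketch of that parse-and-clamp pattern, with an illustrative helper name rather than the server's actual code:

```typescript
// Parse a string env value to a number, fall back on the default when
// missing or unparseable, then clamp into the documented range.
function envNumber(
  raw: string | undefined,
  fallback: number,
  min: number,
  max: number,
): number {
  const parsed = Number(raw);
  const value = Number.isFinite(parsed) ? parsed : fallback;
  return Math.min(max, Math.max(min, value));
}

// e.g. the value read from CONTEXTPLUS_EMBED_BATCH_SIZE
const batchSize = envNumber("12", 8, 5, 10); // out-of-range values get clamped
```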

Test

npm test
npm run test:demo
npm run test:all


Manual Config

{
  "mcpServers": {
    "contextplus": {
      "command": "npx",
      "args": ["contextplus"]
    }
  }
}