MCPHub Lab Registry

tirth8205/code-review-graph

Built by tirth8205 · 3,872 stars

What is tirth8205/code-review-graph?

Local knowledge graph for Claude Code. Builds a persistent map of your codebase so Claude reads only what matters — 6.8× fewer tokens on reviews and up to 49× on daily coding tasks.

How to use tirth8205/code-review-graph?

1. Install a compatible MCP client (such as Claude Desktop).
2. Open your configuration settings.
3. Add tirth8205/code-review-graph using the following command: npx @modelcontextprotocol/tirth8205-code-review-graph
4. Restart the client and verify the new tools are active.

🛡️ Scoped (Restricted)
npx @modelcontextprotocol/tirth8205-code-review-graph --scope restricted

🔓 Unrestricted Access
npx @modelcontextprotocol/tirth8205-code-review-graph

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified High-performance Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

tirth8205/code-review-graph FAQ

Q: Is tirth8205/code-review-graph safe?
A: Yes. tirth8205/code-review-graph follows the standardized Model Context Protocol security patterns and only executes tools with explicit, user-granted permissions.

Q: Is tirth8205/code-review-graph up to date?
A: tirth8205/code-review-graph is currently active in the registry with 3,872 stars on GitHub, indicating its reliability and community support.

Q: Are there any limits for tirth8205/code-review-graph?
A: Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub
<h1 align="center">code-review-graph</h1> <p align="center"> <strong>Stop burning tokens. Start reviewing smarter.</strong> </p> <p align="center"> <a href="https://github.com/tirth8205/code-review-graph/stargazers"><img src="https://img.shields.io/github/stars/tirth8205/code-review-graph?style=flat-square" alt="Stars"></a> <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg?style=flat-square" alt="MIT Licence"></a> <a href="https://github.com/tirth8205/code-review-graph/actions/workflows/ci.yml"><img src="https://github.com/tirth8205/code-review-graph/actions/workflows/ci.yml/badge.svg" alt="CI"></a> <a href="https://www.python.org/"><img src="https://img.shields.io/badge/python-3.10%2B-blue.svg?style=flat-square" alt="Python 3.10+"></a> <a href="https://modelcontextprotocol.io/"><img src="https://img.shields.io/badge/MCP-compatible-green.svg?style=flat-square" alt="MCP"></a> <a href="#"><img src="https://img.shields.io/badge/version-2.0.0-purple.svg?style=flat-square" alt="v2.0.0"></a> </p> <br>

Claude Code re-reads your entire codebase on every task. code-review-graph fixes that. It builds a structural map of your code with Tree-sitter, tracks changes incrementally, and gives Claude precise context so it reads only what matters.

<p align="center"> <img src="diagrams/diagram1_before_vs_after.png" alt="The Token Problem: 8.2x average token reduction across 6 real repositories" width="85%" /> </p>

Quick Start

pip install code-review-graph      # or: pipx install code-review-graph
code-review-graph install          # auto-detects and configures all supported platforms
code-review-graph build            # parse your codebase

One command sets up everything: install detects which AI coding tools you have, determines whether you installed via uvx or pip/pipx, and writes the correct MCP configuration for each one. Restart your editor or tool after installing.
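The detection and config-writing step can be sketched roughly as follows. The marker paths come from the Supported Platforms table below; the function names (detect_platforms, write_mcp_config) and the serve-based server entry are illustrative assumptions, not the tool's actual internals.

```python
import json
from pathlib import Path

# Project-local marker files from the Supported Platforms table.
PLATFORM_MARKERS = {
    "claude-code": ".mcp.json",
    "cursor": ".cursor/mcp.json",
    "opencode": ".opencode.json",
}

def detect_platforms(project_root: Path) -> list[str]:
    """Return platforms whose marker path already exists under the project."""
    return [p for p, m in PLATFORM_MARKERS.items() if (project_root / m).exists()]

def write_mcp_config(config_path: Path) -> None:
    """Merge a code-review-graph server entry into an MCP config file."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["code-review-graph"] = {
        # illustrative: install prefers ["uvx", "code-review-graph"] when uv exists
        "command": "code-review-graph",
        "args": ["serve"],
    }
    config_path.parent.mkdir(parents=True, exist_ok=True)
    config_path.write_text(json.dumps(config, indent=2))
```

The merge-don't-overwrite pattern matters here: MCP config files typically list several servers, so a new entry must be added under mcpServers without discarding the rest.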

To target a specific platform:

code-review-graph install --platform cursor      # configure only Cursor
code-review-graph install --platform claude-code  # configure only Claude Code

Requires Python 3.10+. For the best experience, install uv (the MCP config will use uvx if available, otherwise falls back to the code-review-graph command directly).

Supported Platforms

| Platform    | Config file                             | Auto-detected |
|-------------|-----------------------------------------|---------------|
| Claude Code | .mcp.json                               | Yes           |
| Cursor      | .cursor/mcp.json                        | Yes           |
| Windsurf    | ~/.codeium/windsurf/mcp_config.json     | Yes           |
| Zed         | Zed settings.json                       | Yes           |
| Continue    | ~/.continue/config.json                 | Yes           |
| OpenCode    | .opencode.json                          | Yes           |
| Antigravity | ~/.gemini/antigravity/mcp_config.json   | Yes           |

Then open your project and ask your AI assistant:

Build the code review graph for this project

The initial build takes ~10 seconds for a 500-file project. After that, the graph updates automatically on every file edit and git commit.


How It Works

Your repository is parsed into an AST with Tree-sitter, stored as a graph of nodes (functions, classes, imports) and edges (calls, inheritance, test coverage), then queried at review time to compute the minimal set of files Claude needs to read.
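As a rough sketch of that storage model (the real schema lives inside the tool's SQLite file; the table and column names here are illustrative assumptions):

```python
import sqlite3

# Illustrative schema: nodes are code entities, edges are typed relations.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, name TEXT, kind TEXT, file TEXT);
    CREATE TABLE edges (src INTEGER, dst INTEGER, kind TEXT);
""")
conn.executemany("INSERT INTO nodes VALUES (?, ?, ?, ?)", [
    (1, "login",      "function", "auth.py"),
    (2, "handle_req", "function", "server.py"),
    (3, "test_login", "function", "tests/test_auth.py"),
])
conn.executemany("INSERT INTO edges VALUES (?, ?, ?)", [
    (2, 1, "calls"),   # handle_req calls login
    (3, 1, "tests"),   # test_login covers login
])

# Minimal review set for a change to login(): every file that calls or tests it.
rows = conn.execute("""
    SELECT DISTINCT n.file FROM edges e JOIN nodes n ON n.id = e.src
    WHERE e.dst = (SELECT id FROM nodes WHERE name = 'login')
""").fetchall()
print(sorted(f for (f,) in rows))   # server.py and tests/test_auth.py
```

The point of the query is the inversion: instead of asking "what does this file use?", review-time queries ask "who uses this entity?", which is what keeps the context minimal.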

<p align="center"> <img src="diagrams/diagram2_architecture_pipeline.png" alt="Architecture pipeline: Repository to Tree-sitter Parser to SQLite Graph to Blast Radius to Minimal Review Set" width="100%" /> </p> <details> <summary><strong>Blast-radius analysis</strong></summary> <br>

When a file changes, the graph traces every caller, dependent, and test that could be affected. This is the "blast radius" of the change. Claude reads only these files instead of scanning the whole project.
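A minimal sketch of that traversal, assuming the graph exposes a reverse-dependency map (node names and the helper are illustrative):

```python
from collections import deque

def blast_radius(changed: set[str], reverse_deps: dict[str, set[str]]) -> set[str]:
    """Transitively collect everything that depends on the changed nodes.

    reverse_deps maps a node to the nodes that call, import, or test it.
    """
    seen, queue = set(changed), deque(changed)
    while queue:
        node = queue.popleft()
        for dependent in reverse_deps.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen - changed   # the radius excludes the change itself

# login() changed: its caller, that caller's caller, and its test are affected.
deps = {
    "login": {"handle_req", "test_login"},
    "handle_req": {"app_main"},
}
print(sorted(blast_radius({"login"}, deps)))
# → ['app_main', 'handle_req', 'test_login']
```

Because the walk is transitive, indirect dependents (app_main above) land in the radius too, which is exactly why reading only the diffed file is not enough for a review.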

<p align="center"> <img src="diagrams/diagram3_blast_radius.png" alt="Blast radius visualization showing how a change to login() propagates to callers, dependents, and tests" width="70%" /> </p> </details> <details> <summary><strong>Incremental updates in &lt; 2 seconds</strong></summary> <br>

On every git commit or file save, a hook fires. The graph diffs changed files, finds their dependents via SHA-256 hash checks, and re-parses only what changed. A 2,900-file project re-indexes in under 2 seconds.
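The hash check can be sketched like this; only the SHA-256-per-file idea comes from the text above, the function names are illustrative:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Content hash of one file, as stored at the last index."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(paths: list[Path], stored: dict[str, str]) -> list[Path]:
    """Return only the files whose hash differs from the stored one (or is new).

    Everything else is skipped without re-parsing, which is what keeps
    incremental updates fast even on large repositories.
    """
    return [p for p in paths if stored.get(str(p)) != file_digest(p)]
```

Hashing content rather than trusting mtimes means a touch without an edit costs nothing, while any real edit is caught even if timestamps are unreliable.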

<p align="center"> <img src="diagrams/diagram4_incremental_update.png" alt="Incremental update flow: git commit triggers diff, finds dependents, re-parses only 5 files while 2,910 are skipped" width="90%" /> </p> </details> <details> <summary><strong>19 supported languages + Jupyter notebooks</strong></summary> <br>

Python, TypeScript/TSX, JavaScript, Vue, Go, Rust, Java, Scala, C#, Ruby, Kotlin, Swift, PHP, Solidity, C/C++, Dart, R, Perl, Lua

Plus Jupyter/Databricks notebook parsing (.ipynb) with multi-language cell support (Python, R, SQL), and Perl XS files (.xs, parsed as C).

Each language has full Tree-sitter grammar support for functions, classes, imports, call sites, inheritance, and test detection.

</details>

Benchmarks

All numbers come from the automated evaluation runner against 6 real open-source repositories (13 commits total). Reproduce with code-review-graph eval --all. Raw data in evaluate/reports/summary.md.

<details> <summary><strong>Token efficiency: 8.2x average reduction (naive vs graph)</strong></summary> <br>

The graph replaces reading entire source files with a compact structural context covering blast radius, dependency chains, and test coverage gaps.

| Repo    | Commits | Avg Naive Tokens | Avg Graph Tokens | Reduction |
|---------|---------|------------------|------------------|-----------|
| express | 2       | 693              | 983              | 0.7x      |
| fastapi | 2       | 4,944            | 614              | 8.1x      |
| flask   | 2       | 44,751           | 4,252            | 9.1x      |
| gin     | 3       | 21,972           | 1,153            | 16.4x     |
| httpx   | 2       | 12,044           | 1,728            | 6.9x      |
| nextjs  | 2       | 9,882            | 1,249            | 8.0x      |
| Average | 13      |                  |                  | 8.2x      |

Why express shows <1x: For single-file changes in small packages, the graph context (metadata, edges, review guidance) can exceed the raw file size. The graph approach pays off on multi-file changes where it prunes irrelevant code.

</details> <details> <summary><strong>Impact accuracy: 100% recall, 0.54 average F1</strong></summary> <br>

The blast-radius analysis never misses an actually impacted file (perfect recall). It over-predicts in some cases, which is a conservative trade-off — better to flag too many files than miss a broken dependency.
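The F1 numbers in the table follow from precision and recall via the harmonic mean, F1 = 2PR/(P+R). With recall pinned at 1.0, F1 depends only on precision, as the express row shows; rows whose precision is averaged across commits will not reproduce exactly from the averaged columns.

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With recall pinned at 1.0, F1 depends only on precision.
print(round(f1(0.50, 1.0), 3))   # express row
# → 0.667
```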

| Repo    | Commits | Avg F1 | Avg Precision | Recall |
|---------|---------|--------|---------------|--------|
| express | 2       | 0.667  | 0.50          | 1.0    |
| fastapi | 2       | 0.584  | 0.42          | 1.0    |
| flask   | 2       | 0.475  | 0.34          | 1.0    |
| gin     | 3       | 0.429  | 0.29          | 1.0    |
| httpx   | 2       | 0.762  | 0.63          | 1.0    |
| nextjs  | 2       | 0.331  | 0.20          | 1.0    |
| Average | 13      | 0.54   | 0.38          | 1.0    |
</details> <details> <summary><strong>Build performance</strong></summary> <br>
| Repo    | Files | Nodes | Edges  | Flow Detection | Search Latency |
|---------|-------|-------|--------|----------------|----------------|
| express | 141   | 1,910 | 17,553 | 106ms          | 0.7ms          |
| fastapi | 1,122 | 6,285 | 27,117 | 128ms          | 1.5ms          |
| flask   | 83    | 1,446 | 7,974  | 95ms           | 0.7ms          |
| gin     | 99    | 1,286 | 16,762 | 111ms          | 0.5ms          |
| httpx   | 60    | 1,253 | 7,896  | 96ms           | 0.4ms          |
</details> <details> <summary><strong>Limitations and known weaknesses</strong></summary> <br>
  • Small single-file changes: Graph context can exceed naive file reads for trivial edits (see express results above). The overhead is the structural metadata that enables multi-file analysis.
  • Search quality (MRR 0.35): Keyword search finds the right result in the top-4 for most queries, but ranking needs improvement. Express queries return 0 hits due to module-pattern naming.
  • Flow detection (33% recall): Only reliably detects entry points in Python repos (fastapi, httpx) where framework patterns are recognized. JavaScript and Go flow detection needs work.
  • Precision vs recall trade-off: Impact analysis is deliberately conservative. It flags files that might be affected, which means some false positives in large dependency graphs.
</details>

Usage

<details> <summary><strong>Slash commands</strong></summary> <br>
| Command                         | Description                               |
|---------------------------------|-------------------------------------------|
| /code-review-graph:build-graph  | Build or rebuild the code graph           |
| /code-review-graph:review-delta | Review changes since last commit          |
| /code-review-graph:review-pr    | Full PR review with blast-radius analysis |
</details> <details> <summary><strong>CLI reference</strong></summary> <br>
code-review-graph install          # Auto-detect and configure all platforms
code-review-graph install --platform <name>  # Target a specific platform
code-review-graph build            # Parse entire codebase
code-review-graph update           # Incremental update (changed files only)
code-review-graph status           # Graph statistics
code-review-graph watch            # Auto-update on file changes
code-review-graph visualize        # Generate interactive HTML graph
code-review-graph wiki             # Generate markdown wiki from communities
code-review-graph detect-changes   # Risk-scored change impact analysis
code-review-graph register <path>  # Register repo in multi-repo registry
code-review-graph unregister <id>  # Remove repo from registry
code-review-graph repos            # List registered repositories
code-review-graph eval             # Run evaluation benchmarks
code-review-graph serve            # Start MCP server
</details> <details> <summary><strong>MCP tools</strong></summary> <br>

Claude uses these automatically once the graph is built.

| Tool                           | Description                                             |
|--------------------------------|---------------------------------------------------------|
| build_or_update_graph_tool     | Build or incrementally update the graph                 |
| get_impact_radius_tool         | Blast radius of changed files                           |
| get_review_context_tool        | Token-optimised review context with structural summary  |
| query_graph_tool               | Callers, callees, tests, imports, inheritance queries   |
| semantic_search_nodes_tool     | Search code entities by name or meaning                 |
| embed_graph_tool               | Compute vector embeddings for semantic search           |
| list_graph_stats_tool          | Graph size and health                                   |
| get_docs_section_tool          | Retrieve documentation sections                         |
| find_large_functions_tool      | Find functions/classes exceeding a line-count threshold |
| list_flows_tool                | List execution flows sorted by criticality              |
| get_flow_tool                  | Get details of a single execution flow                  |
| get_affected_flows_tool        | Find flows affected by changed files                    |
| list_communities_tool          | List detected code communities                          |
| get_community_tool             | Get details of a single community                       |
| get_architecture_overview_tool | Architecture overview from community structure          |
| detect_changes_tool            | Risk-scored change impact analysis for code review      |
| refactor_tool                  | Rename preview, dead code detection, suggestions        |
| apply_refactor_tool            | Apply a previously previewed refactoring                |
| generate_wiki_tool             | Generate markdown wiki from communities                 |
| get_wiki_page_tool             | Retrieve a specific wiki page                           |
| list_repos_tool                | List registered repositories                            |
| cross_repo_search_tool         | Search across all registered repositories               |

MCP Prompts (5 workflow templates): review_changes, architecture_map, debug_issue, onboard_developer, pre_merge_check

</details>

Features

| Feature                   | Details |
|---------------------------|---------|
| Incremental updates       | Re-parses only changed files. Subsequent updates complete in under 2 seconds. |
| 19 languages + notebooks  | Python, TypeScript/TSX, JavaScript, Vue, Go, Rust, Java, Scala, C#, Ruby, Kotlin, Swift, PHP, Solidity, C/C++, Dart, R, Perl, Lua, Jupyter/Databricks (.ipynb) |
| Blast-radius analysis     | Shows exactly which functions, classes, and files are affected by any change |
| Auto-update hooks         | Graph updates on every file edit and git commit without manual intervention |
| Semantic search           | Optional vector embeddings via sentence-transformers, Google Gemini, or MiniMax |
| Interactive visualisation | D3.js force-directed graph with edge-type toggles and search |
| Local storage             | SQLite file in .code-review-graph/. No external database, no cloud dependency. |
| Watch mode                | Continuous graph updates as you work |
| Execution flows           | Trace call chains from entry points, sorted by criticality |
| Community detection       | Cluster related code via Leiden algorithm or file grouping |
| Architecture overview     | Auto-generated architecture map with coupling warnings |
| Risk-scored reviews       | detect_changes maps diffs to affected functions, flows, and test gaps |
| Refactoring tools         | Rename preview, dead code detection, community-driven suggestions |
| Wiki generation           | Auto-generate markdown wiki from community structure |
| Multi-repo registry       | Register multiple repos, search across all of them |
| MCP prompts               | 5 workflow templates: review, architecture, debug, onboard, pre-merge |
| Full-text search          | FTS5-powered hybrid search combining keyword and vector similarity |
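A hedged sketch of the keyword half of that hybrid search, assuming an SQLite build with the FTS5 extension compiled in (true for most CPython distributions); the table layout is illustrative, not the tool's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table over node names and docstrings (illustrative schema).
conn.execute("CREATE VIRTUAL TABLE node_fts USING fts5(name, doc)")
conn.executemany("INSERT INTO node_fts VALUES (?, ?)", [
    ("login", "authenticate a user and issue a session token"),
    ("render_page", "build the HTML response for a route"),
])
# bm25() scores keyword matches (lower is better in SQLite); a hybrid search
# would blend this score with vector similarity from the optional embeddings.
rows = conn.execute(
    "SELECT name FROM node_fts WHERE node_fts MATCH ? ORDER BY bm25(node_fts)",
    ("authenticate",),
).fetchall()
print([name for (name,) in rows])   # → ['login']
```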
<details> <summary><strong>Configuration</strong></summary> <br>

To exclude paths from indexing, create a .code-review-graphignore file in your repository root:

generated/**
*.generated.ts
vendor/**
node_modules/**

Optional dependency groups:

pip install code-review-graph[embeddings]          # Local vector embeddings (sentence-transformers)
pip install code-review-graph[google-embeddings]   # Google Gemini embeddings
pip install code-review-graph[communities]         # Community detection (igraph)
pip install code-review-graph[eval]                # Evaluation benchmarks (matplotlib)
pip install code-review-graph[wiki]                # Wiki generation with LLM summaries (ollama)
pip install code-review-graph[all]                 # All optional dependencies
</details>

Contributing

git clone https://github.com/tirth8205/code-review-graph.git
cd code-review-graph
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest
<details> <summary><strong>Adding a new language</strong></summary> <br>

Edit code_review_graph/parser.py and add your extension to EXTENSION_TO_LANGUAGE along with node type mappings in _CLASS_TYPES, _FUNCTION_TYPES, _IMPORT_TYPES, and _CALL_TYPES. Include a test fixture and open a PR.

</details>

Licence

MIT. See LICENSE.

<p align="center"> <br> <code>pip install code-review-graph && code-review-graph install</code><br> <sub>Works with Claude Code, Cursor, Windsurf, Zed, Continue, and OpenCode</sub> </p>

Global Ranking

Trust Score: 8.5 (MCPHub Index)

Based on codebase health & activity.

Manual Config

{
  "mcpServers": {
    "tirth8205-code-review-graph": {
      "command": "npx",
      "args": ["tirth8205-code-review-graph"]
    }
  }
}