
jgravelle/jdatamunch-mcp

Built by jgravelle • 21 stars

What is jgravelle/jdatamunch-mcp?

Token-efficient MCP server for tabular data retrieval. Index CSV/Excel files, query rows, and aggregate server-side, with 99%+ token savings versus raw file reads.

How to use jgravelle/jdatamunch-mcp?

1. Install a compatible MCP client (such as Claude Desktop or Claude Code).
2. Open your client's MCP configuration settings.
3. Add jgravelle/jdatamunch-mcp as a stdio server, e.g. with the command uvx jdatamunch-mcp (install it first with pip install jdatamunch-mcp if you prefer pip).
4. Restart the client and verify the new tools are active.

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified Standard Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

jgravelle/jdatamunch-mcp FAQ

Q

Is jgravelle/jdatamunch-mcp safe?

Yes, jgravelle/jdatamunch-mcp follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q

Is jgravelle/jdatamunch-mcp up to date?

jgravelle/jdatamunch-mcp is currently active in the registry and has 21 stars on GitHub, which suggests ongoing maintenance and community interest.

Q

Are there any limits for jgravelle/jdatamunch-mcp?

Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub

Quickstart - https://github.com/jgravelle/jdatamunch-mcp/blob/main/QUICKSTART.md

<!-- mcp-name: io.github.jgravelle/jdatamunch-mcp -->

FREE FOR PERSONAL USE

Use it to make money, and Uncle J. gets a taste. Fair enough? details


Documentation

| Doc | What it covers |
| --- | --- |
| QUICKSTART.md | Zero-to-indexed in three steps |
| USER-MANUAL.md | Full guide for analysts, ops, and non-developers |

Cut spreadsheet token usage by 99.997%

Most AI agents explore tabular data the expensive way:

dump the whole file into the prompt → skim a million irrelevant rows → repeat.

That is not "a little inefficient." That is a token incinerator.

A 255 MB CSV file with 1 million rows costs 111 million tokens if you paste it raw. A single describe_dataset call answers the same orientation question in 3,849 tokens.

That is a 25,333× reduction, measured (not estimated) on a real 1M-row public dataset.

jDataMunch indexes the file once and lets agents retrieve only the exact data they need: column profiles, filtered rows, server-side aggregations, cross-dataset joins, and semantic search, all with SQL precision.

Benchmark: LAPD crime records (1,004,894 rows, 28 columns, 255 MB).
Baseline (raw file): 111,028,360 tokens | jDataMunch: ~3,849 tokens | 25,333× reduction.
Methodology & harness · Full results

| Task | Traditional approach | With jDataMunch |
| --- | --- | --- |
| Understand a dataset | Paste entire CSV | describe_dataset → column names, types, cardinality, samples |
| Find relevant columns | Read every row | search_data → column-level results with IDs |
| Answer a filtered question | Load millions of rows | get_rows with structured filters → only matching rows |
| Compute a group-by | Return all data | aggregate → server-side SQL, one result set |
| Compare two datasets | Load both entirely | join_datasets → SQL JOIN across indexed stores |
| Find column relationships | Export to spreadsheet | get_correlations → pairwise Pearson correlations |

Index once. Query cheaply. Keep moving. Precision retrieval beats brute-force context.


jDataMunch MCP

Structured tabular data retrieval for AI agents


Commercial licenses

jDataMunch-MCP is free for non-commercial use.

Commercial use requires a paid license.

jDataMunch-only licenses

Want the full jMunch suite?

Stop paying your model to read the whole damn spreadsheet.

jDataMunch turns tabular data exploration into structured retrieval.

Instead of forcing an agent to load an entire CSV, scan millions of rows, and burn through context just to find the right column name, jDataMunch lets it navigate by what the data is and retrieve only what matters.

That means:

  • 25,333× lower data-reading token usage on a 1M-row CSV (measured)
  • less irrelevant context polluting the prompt
  • faster dataset orientation: one call tells you everything about the schema
  • accurate filtered queries: the agent asks for Hollywood assaults, it gets Hollywood assaults
  • server-side aggregations: GROUP BY runs in SQLite, not inside the context window
  • cross-dataset joins: combine two indexed files in a single SQL query
  • semantic search: find columns by meaning, not just keyword match
  • natural-language summaries: auto-generated descriptions of every column and dataset

It indexes your files once using a streaming parser and SQLite, stores column profiles and row data with proper type affinity, and retrieves exactly what the agent asked for instead of re-loading the entire file on every question.


Supported file formats

| Format | Extensions | Install extra |
| --- | --- | --- |
| CSV / TSV | .csv, .tsv | (built-in) |
| Excel | .xlsx, .xls | pip install "jdatamunch-mcp[excel]" |
| Parquet | .parquet | pip install "jdatamunch-mcp[parquet]" |
| JSONL / NDJSON | .jsonl, .ndjson | (built-in) |

Why agents need this

Most agents still handle spreadsheets like someone who prints the entire internet before reading one article:

  • paste the whole CSV to answer a narrow question
  • re-load the same file repeatedly across tool calls
  • consume column headers, empty cells, malformed rows, and irrelevant records
  • burn context window on data that was never part of the question

jDataMunch fixes that by giving them a structured way to:

  • describe a dataset's schema before touching any row data
  • search for the specific column that holds the answer, by keyword or meaning
  • retrieve only the rows that match the filter
  • run aggregations server-side and get back a single result set
  • join two datasets without loading either into the prompt
  • orient themselves with samples before committing to a full query
  • detect data-quality issues and column correlations automatically

Agents do not need bigger context windows.

They need better aim.


What you get

Column-level retrieval

Understand a dataset's full schema (types, cardinality, null rates, value distributions, samples, and natural-language summaries) in a single sub-10ms call. No rows loaded.

Filtered row retrieval

Structured filters with 10 operators (eq, neq, gt, gte, lt, lte, contains, in, is_null, between). All parameterized SQL, so there is no injection surface. Hard cap of 500 rows per call to protect context budgets.

Server-side aggregations

GROUP BY with count, sum, avg, min, max, count_distinct, median. The computation stays in SQLite. One compact result set comes back instead of the data the model would aggregate itself.
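
For example, the "crime count by area" question from the benchmark becomes a single call along these lines (group_by, the aggregation functions, and the filter shape are documented; the exact metrics parameter name here is illustrative):

aggregate(
    dataset="lapd-crime",
    group_by=["AREA NAME"],
    metrics=[{"op": "count"}],   # any of count, sum, avg, min, max, count_distinct, median
    filters=[{"column": "Crm Cd Desc", "op": "contains", "value": "ASSAULT"}]
)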

Smart column search

search_data searches column names, value indexes, and AI summaries simultaneously. Ask for "weapon type" and get Weapon Used Cd back. Ask for "Hollywood" and get the column whose values contain it.

Semantic search (v0.8+): Enable semantic=true for embedding-based search. Queries like "where did the crime happen" match AREA NAME even without keyword overlap. Supports local embeddings (sentence-transformers), Gemini, or OpenAI as providers.
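
Both modes go through the same tool; only the semantic flag changes (the dataset and query parameter names here are illustrative):

search_data(dataset="lapd-crime", query="weapon type")                                # keyword: returns "Weapon Used Cd"
search_data(dataset="lapd-crime", query="where did the crime happen", semantic=true)  # semantic: returns "AREA NAME"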

Cross-dataset joins

join_datasets combines two indexed datasets via SQL ATTACH DATABASE: inner, left, right, or cross joins. Column projection, per-side filters, ordering, and pagination. No data leaves SQLite.
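
A sketch of what a join call can look like; the join types, per-side filters, and column projection are documented, while the second dataset and the exact key/projection parameter names are illustrative:

join_datasets(
    left="lapd-crime",
    right="area-demographics",              # hypothetical second indexed dataset
    on={"left": "AREA NAME", "right": "area"},
    how="left",
    columns=["AREA NAME", "Crm Cd Desc", "population"],
    limit=50
)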

Correlation discovery

get_correlations computes pairwise Pearson correlations between all numeric columns. Discover hidden relationships without manual exploration.
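
For reference, Pearson's r for one column pair can be computed in a single pass over (x, y) values, which fits the streaming design described elsewhere in this README. A minimal sketch, not the project's code:

import math

def pearson(pairs):
    # Single-pass Pearson correlation; rows with a missing value are skipped.
    n = sx = sy = sxx = syy = sxy = 0.0
    for x, y in pairs:
        if x is None or y is None:
            continue
        n += 1
        sx += x; sy += y
        sxx += x * x; syy += y * y; sxy += x * y
    if n < 2:
        return None
    cov = sxy - sx * sy / n
    var_x = sxx - sx * sx / n
    var_y = syy - sy * sy / n
    if var_x <= 0 or var_y <= 0:
        return None            # constant column: correlation undefined
    return cov / math.sqrt(var_x * var_y)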

Natural-language summaries

Every indexed dataset gets auto-generated summaries describing data shape, column types, ranges, cardinality, quality issues, and temporal spans. No external API calls needed.

Data quality triage

get_data_hotspots ranks columns by composite risk: null rate, cardinality anomalies, and numeric outlier spread. get_schema_drift compares schema between two dataset versions and classifies changes as identical, additive, or breaking.

Token savings telemetry

Every call reports tokens_saved and cost_avoided estimates. get_session_stats shows your cumulative savings across the session, with per-model cost breakdowns. Lifetime stats persist across sessions.

GitHub repository indexing

index_repo discovers and indexes data files directly from a GitHub repository: CSV, Excel, Parquet, and JSONL. Incremental by HEAD SHA. Supports private repos via GITHUB_TOKEN.

Local-first speed

Indexes are stored at ~/.data-index/ by default. No cloud. No API keys required for core functionality.

Built-in guardrails

  • Token budget enforcement: every response is capped at a configurable token limit (default 8,000; a minimal sketch of the idea follows this list)
  • Anti-loop detection: warns when an agent is paginating row-by-row in a tight loop
  • Wide-table pagination: describe_dataset auto-paginates at 60 columns
  • Hard caps on all parameters to prevent runaway queries
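
A minimal sketch of the budget-cap idea mentioned above, assuming a rough characters-per-token estimate (the real server may count tokens differently):

import json, os

MAX_TOKENS = int(os.environ.get("JDATAMUNCH_MAX_RESPONSE_TOKENS", "8000"))

def estimate_tokens(obj):
    # Crude heuristic: ~4 characters per token for JSON payloads.
    return len(json.dumps(obj, default=str)) // 4

def cap_rows(rows, budget=MAX_TOKENS):
    # Drop trailing rows until the serialized response fits the budget,
    # and tell the caller whether anything was truncated.
    kept = list(rows)
    total = len(kept)
    while kept and estimate_tokens(kept) > budget:
        kept.pop()
    return {"rows": kept, "truncated": len(kept) < total}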

How it works

jDataMunch parses local CSV, Excel, Parquet, and JSONL files using a streaming, single-pass pipeline:

CSV/Excel/Parquet/JSONL file
  → Streaming parser (never loads full file into memory)
  → Column profiler (type inference, cardinality, min/max/mean/median, value indexes)
  → Natural-language summary generator (dataset + per-column descriptions)
  → SQLite writer (10,000-row batches, WAL mode, indexes on low-cardinality columns)
  → index.json (column profiles, stats, summaries, file hash for incremental detection)
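
The SQLite writer stage above boils down to a streaming loop like this simplified sketch (table name, schema handling, and profiling are all simplified away; this is not the actual indexer):

import csv, sqlite3
from itertools import islice

def index_csv(path, db_path, batch_size=10_000):
    con = sqlite3.connect(db_path)
    con.execute("PRAGMA journal_mode=WAL")                 # WAL mode, as noted above
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join(f'"{c}"' for c in header)
        marks = ", ".join("?" for _ in header)
        con.execute(f'CREATE TABLE IF NOT EXISTS rows ({cols})')
        while True:
            batch = list(islice(reader, batch_size))       # never the whole file at once
            if not batch:
                break
            con.executemany(f"INSERT INTO rows VALUES ({marks})", batch)
            con.commit()                                   # one commit per 10,000-row batch
    con.close()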

When an agent queries:

describe_dataset  →  reads index.json in memory (< 10ms)
get_rows          →  parameterized SQL on data.sqlite (< 100ms on indexed columns)
aggregate         →  GROUP BY SQL on data.sqlite (< 200ms for simple group-by)
search_data       →  scans column profiles in memory (< 50ms)
join_datasets     →  ATTACH DATABASE + cross-store SQL (< 300ms)

No raw file is ever re-read after the initial index. The SQLite database serves all row-level queries.

For a 255 MB, 1,004,894-row CSV (measured on real data):

  • Index time: ~43 seconds (one-time)
  • describe_dataset: 35 ms, 3,849 tokens vs 111,028,360 tokens raw (25,333×)
  • describe_column (single column deep-dive): 22–33 ms, ~600 tokens
  • get_rows (indexed filter): < 100 ms
  • Peak indexing memory: < 500 MB

Start fast

1. Install it

pip install jdatamunch-mcp

For additional format support:

pip install "jdatamunch-mcp[excel]"       # Excel (.xlsx, .xls)
pip install "jdatamunch-mcp[parquet]"     # Parquet
pip install "jdatamunch-mcp[semantic]"    # Semantic search (local embeddings)
pip install "jdatamunch-mcp[all]"         # Everything

2. Add it to your MCP client

Claude Code (one command)

claude mcp add jdatamunch uvx jdatamunch-mcp

Restart Claude Code. Confirm with /mcp.

Claude Desktop

Add to your config file (~/Library/Application Support/Claude/claude_desktop_config.json on macOS, %APPDATA%\Claude\claude_desktop_config.json on Windows):

{
  "mcpServers": {
    "jdatamunch": {
      "command": "uvx",
      "args": ["jdatamunch-mcp"]
    }
  }
}

OpenClaw

Option A - CLI:

openclaw mcp set jdatamunch '{"command":"uvx","args":["jdatamunch-mcp"]}'

Option B - Edit ~/.openclaw/openclaw.json:

{
  "mcpServers": {
    "jdatamunch": {
      "command": "uvx",
      "args": ["jdatamunch-mcp"],
      "transport": "stdio"
    }
  }
}

Restart the gateway: openclaw gateway restart. Verify: openclaw mcp list.

Other clients (Cursor, Windsurf, Roo, etc.)

Any MCP-compatible client accepts the same JSON block in its MCP config file.

3. Index a file and start querying

index_local(path="/path/to/data.csv", name="my-dataset")
describe_dataset(dataset="my-dataset")
get_rows(dataset="my-dataset", filters=[{"column": "City", "op": "eq", "value": "Los Angeles"}], limit=10)

4. Tell your agent to actually use it

Installing jDataMunch makes the tools available. It does not guarantee the agent will stop pasting entire CSVs into prompts unless you tell it to use structured retrieval first.

Claude Code / Claude Desktop

Add this to your CLAUDE.md (global or project-level):

## Data Exploration Policy
Use jdatamunch-mcp for tabular data whenever available.
Always call describe_dataset first to understand the schema.
Use get_rows with filters rather than loading raw files.
Use aggregate for any group-by or summary questions.

OpenClaw

Add the same policy to your agent's system prompt file (e.g. ~/.openclaw/agents/analyst.md), then reference it in ~/.openclaw/openclaw.json:

{
  "agents": {
    "named": {
      "analyst": {
        "systemPromptFile": "~/.openclaw/agents/analyst.md"
      }
    }
  }
}

Check your token savings

Ask your agent: "How many tokens has jDataMunch saved me?"

The agent will call get_session_stats, which returns session and lifetime token savings with per-model cost breakdowns. Lifetime stats persist to ~/.data-index/session_stats.json across sessions.


Tools

Indexing

| Tool | What it does |
| --- | --- |
| index_local | Index a local CSV, Excel, Parquet, or JSONL file. Profiles columns, generates NL summaries, loads rows into SQLite. Incremental by default (skips if file unchanged). |
| index_repo | Index data files from a GitHub repository. Discovers CSV, Excel, Parquet, and JSONL files via the Trees API and indexes each. Incremental by HEAD SHA. Max 50 MB/file, 20 files/repo. |

Exploration

| Tool | What it does |
| --- | --- |
| list_datasets | List all indexed datasets with row counts, column counts, and file sizes. |
| list_repos | List GitHub repositories indexed via index_repo. Shows repo name, HEAD SHA, dataset count, total rows. |
| describe_dataset | Full schema profile: every column's name, type, cardinality, null%, sample values, and NL summary. Primary orientation tool. Auto-paginates at 60 columns. |
| describe_column | Deep profile of one column: full value distribution, histogram bins, temporal range, NL summary. |
| search_data | Search column names and values by keyword or semantically. Returns column IDs (tells the agent where to look, not the data). Supports hybrid keyword + embedding search. |
| sample_rows | Head, tail, or random sample. Good for a first look at an unfamiliar dataset. |

Querying

| Tool | What it does |
| --- | --- |
| get_rows | Filtered row retrieval with 10 operators. Parameterized SQL. 500-row hard cap. Column projection to reduce tokens. |
| aggregate | Server-side GROUP BY: count, sum, avg, min, max, count_distinct, median. Pre-filter support. 1,000-group cap. |
| join_datasets | SQL JOIN across two indexed datasets. Supports inner, left, right, cross. Per-side filters and column projection. |

Analysis

| Tool | What it does |
| --- | --- |
| get_correlations | Pairwise Pearson correlations between numeric columns. Sorted by strength, with labels and pair counts. |
| get_schema_drift | Compare schema between two datasets. Detects added/removed columns, type changes, null-rate shifts. |
| get_data_hotspots | Rank columns by data-quality risk: null rate, cardinality anomalies, numeric outlier spread. |

Management

| Tool | What it does |
| --- | --- |
| summarize_dataset | Regenerate NL summaries for an already-indexed dataset without re-parsing the source file. |
| embed_dataset | Precompute column embeddings for semantic search. Optional warm-up to eliminate first-query latency. |
| delete_dataset | Remove an indexed dataset and its SQLite store. Irreversible. |
| validate_index | Verify a dataset's on-disk integrity: SQLite integrity_check, row-count cross-check, schema match, index.json checksum, stale-lock detection. Returns ok / warning / error. |
| get_dataset_history | Return the last N profile snapshots for a dataset (appended on every successful index_local). Use to detect schema/content drift across re-ingests. |
| get_session_stats | Cumulative token savings and cost avoided across the session. Lifetime stats persist across sessions. |

Stability guarantees (v1.0.0)

Earned by Phase A in todo.md. These are commitments, not aspirations.

Statistical correctness

  • Means use Welford online updates with Neumaier-compensated sums, accurate to 1e-9 relative error across 1e-6..1e6 mixed magnitudes (both techniques are sketched after this list).
  • Quantiles (p01 / p25 / p50 / p75 / p95 / p99) come from a streaming t-digest with bounded ~3 KB/column memory regardless of row count.
  • Cardinality reports an exact value below 5,000 distinct keys; above the cap, a HyperLogLog estimate is reported with cardinality_estimated: true and ~2% standard error.
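
A minimal sketch of the two techniques named in the first bullet, in their textbook form (not the project's code):

class RunningStats:
    # Welford online mean plus a Neumaier-compensated running sum.
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.total = 0.0
        self.comp = 0.0                               # compensation for lost low-order bits
    def add(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n         # Welford update: numerically stable running mean
        t = self.total + x                            # Neumaier compensated summation
        if abs(self.total) >= abs(x):
            self.comp += (self.total - t) + x
        else:
            self.comp += (x - t) + self.total
        self.total = t
    @property
    def compensated_sum(self):
        return self.total + self.comp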

Crash safety

  • A kill at any point during index_local leaves the dataset in one of two states only: fully indexed or absent. Never partial.
  • data.sqlite is written to a .tmp and renamed only after profiles compute successfully. index.json is atomic with a SHA-256 sidecar (the write-then-rename pattern is sketched after this list).
  • A _lock file marks in-progress runs. index_local auto-recovers from prior crashes by cleaning stale tmp files before starting.
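
A minimal sketch of that write-then-rename pattern with a SHA-256 sidecar (file names and layout are illustrative):

import hashlib, json, os

def atomic_write_json(path, payload):
    data = json.dumps(payload, indent=2).encode("utf-8")
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())                 # data hits disk before the rename
    os.replace(tmp, path)                    # atomic rename on POSIX and Windows
    with open(path + ".sha256", "w") as f:   # sidecar lets validate_index detect checksum drift
        f.write(hashlib.sha256(data).hexdigest())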

Recovery flow

  • Run validate_index on any dataset whose state is suspect. If it returns overall_status: ok, the dataset is consistent. Otherwise the report names the specific finding (row-count mismatch, checksum drift, missing SQLite, stale lock, etc.).

Schema versioning

  • The on-disk index format is versioned (INDEX_VERSION = 2 at 1.0.0).
  • New profile fields are added under additive migrations registered in storage/migrations.py. Indexes from prior versions are upgraded in place rather than triggering silent re-indexing.
  • Public profile fields documented in CHANGELOG [1.0.0] are stable.

Reproducibility

  • sample_rows(method='random', seed=N) is deterministic.
  • index_local produces byte-identical index.json (modulo timestamps + the resolved source path) for the same input file across runs.
  • All four parsers (CSV / JSONL / Parquet / Excel) route native-typed cells through one normalizer, so the same logical data produces identical column profiles regardless of source format.

Filter operators

get_rows, aggregate, and join_datasets accept structured filters:

{"column": "AREA NAME",    "op": "eq",      "value": "Hollywood"}
{"column": "Vict Age",     "op": "between", "value": [25, 35]}
{"column": "Crm Cd Desc",  "op": "contains","value": "ASSAULT"}
{"column": "Weapon Used Cd","op": "is_null","value": true}
{"column": "AREA",         "op": "in",      "value": [1, 2, 7]}
| Operator | Meaning |
| --- | --- |
| eq | equals |
| neq | not equals |
| gt, gte | greater than (or equal) |
| lt, lte | less than (or equal) |
| contains | case-insensitive substring |
| in | value in list |
| is_null | null / not null check |
| between | inclusive range [min, max] |

Multiple filters are ANDed. No raw SQL is accepted, so the injection surface is zero.
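
A minimal sketch of how structured filters can translate into a parameterized WHERE clause (not the project's code; in practice column names would also be validated against the indexed schema):

def build_where(filters):
    # Values only ever travel as bind parameters, never inside the SQL string.
    ops = {"eq": "=", "neq": "!=", "gt": ">", "gte": ">=", "lt": "<", "lte": "<="}
    clauses, params = [], []
    for f in filters:
        col, op, val = f["column"], f["op"], f.get("value")
        if op in ops:
            clauses.append(f'"{col}" {ops[op]} ?'); params.append(val)
        elif op == "contains":
            clauses.append(f'"{col}" LIKE ?'); params.append(f"%{val}%")   # SQLite LIKE is case-insensitive for ASCII
        elif op == "in":
            clauses.append(f'"{col}" IN ({", ".join("?" for _ in val)})'); params.extend(val)
        elif op == "between":
            clauses.append(f'"{col}" BETWEEN ? AND ?'); params.extend(val)
        elif op == "is_null":
            clauses.append(f'"{col}" IS NULL' if val else f'"{col}" IS NOT NULL')
    return " AND ".join(clauses), params     # multiple filters are ANDed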


Configuration

| Variable | Default | Purpose |
| --- | --- | --- |
| DATA_INDEX_PATH | ~/.data-index/ | Index storage location |
| JDATAMUNCH_MAX_ROWS | 5,000,000 | Row cap for indexing |
| JDATAMUNCH_MAX_RESPONSE_TOKENS | 8,000 | Token budget cap per response |
| JDATAMUNCH_SHARE_SAVINGS | 1 | Set 0 to disable anonymous token savings telemetry |
| ANTHROPIC_API_KEY | (unset) | AI column summaries via Claude |
| GOOGLE_API_KEY | (unset) | AI column summaries via Gemini |
| GITHUB_TOKEN | (unset) | Private repo access for index_repo |
| JDATAMUNCH_EMBED_MODEL | (unset) | Local sentence-transformers model for semantic search |
| GOOGLE_EMBED_MODEL | (unset) | Gemini embedding model for semantic search |
| OPENAI_API_KEY | (unset) | OpenAI embeddings for semantic search |
| OPENAI_EMBED_MODEL | (unset) | OpenAI embedding model for semantic search |

When does it help?

| Scenario | Without jDataMunch | With jDataMunch | Measured savings |
| --- | --- | --- | --- |
| Orient on a 255 MB CSV | Paste raw file → 111M tokens | describe_dataset → 3,849 tokens | 25,333× |
| Schema + column deep-dive | Same 111M tokens | describe_dataset + describe_column → ~4,400 tokens | ~25,000× |
| Find the crime-type column | Scan headers manually | search_data("crime type") → column ID | structural |
| Find column by meaning | No way to search semantically | search_data("where did it happen", semantic=true) → AREA NAME | structural |
| Get Hollywood assault rows | Load all 1M rows | get_rows with 2 filters → matching rows only | ~99%+ |
| Crime count by area | Return all rows, aggregate in LLM | aggregate(group_by=["AREA NAME"]) → 21 rows | ~99.9% |
| Understand weapon nulls | Load column, count manually | describe_column("Weapon Used Cd") → null_pct: 64.2% | ~99.9% |
| Compare two dataset versions | Load both files | get_schema_drift(a, b) → breaking/additive assessment | structural |
| Find correlated columns | Export, pivot, eyeball | get_correlations → ranked pairs with strength labels | structural |
| Combine two datasets | Load both into prompt | join_datasets → SQL JOIN, only matching rows | ~99%+ |
| Re-query an unchanged file | Re-load file every time | Hash check → instant skip if unchanged | 100% of re-read cost |

The case where it doesn't help: you genuinely need every row for ML training or full exports. For that, read the file directly. For everything else (exploration, filtering, aggregation, orientation), structured retrieval wins every time.


ID scheme

Every column and row gets a stable ID:

{dataset}::{column_name}#column     →  "lapd-crime::AREA NAME#column"
{dataset}::row_{rowid}#row          →  "lapd-crime::row_4421#row"
{dataset}::{pk_col}={value}#row     →  "lapd-crime::DR_NO=211507896#row"

Pass column IDs directly to describe_column. Row IDs are returned in get_rows results.
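
A tiny helper that splits these IDs back apart, following the format above (a sketch, not part of the toolset):

def parse_id(entity_id):
    # "lapd-crime::AREA NAME#column" -> ("lapd-crime", "AREA NAME", "column")
    body, kind = entity_id.rsplit("#", 1)    # kind is "column" or "row"
    dataset, key = body.split("::", 1)
    return dataset, key, kind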


Part of the jMunch family

| Product | Domain | Unit of retrieval | PyPI |
| --- | --- | --- | --- |
| jcodemunch-mcp | Source code | Symbols (functions, classes) | jcodemunch-mcp |
| jdocmunch-mcp | Documentation | Sections (headings) | jdocmunch-mcp |
| jdatamunch-mcp | Tabular data | Columns, row slices, aggregations | jdatamunch-mcp |

All three implement jMRI, the open retrieval interface spec. Same response envelope, same token tracking, same telemetry pattern.


Best for

  • analysts, finance, ops, and consultants working with large spreadsheets
  • AI agents that answer questions about CSV, Excel, Parquet, or JSONL data
  • anyone paying token costs to load files they query repeatedly
  • teams that want structured, auditable data access instead of raw file dumps
  • developers building data-aware agents who need a drop-in retrieval layer

Works with

jDataMunch plugs into any MCP-compatible agent or IDE. Tested configurations:

| Platform | Config |
| --- | --- |
| Claude Code / Claude Desktop | Manual config or mcp.json |
| Cursor / Windsurf | Manual mcp.json |
| Hermes Agent | Add to ~/.hermes/config.yaml (see skill) |
| Any MCP client | stdio: jdatamunch-mcp |
<details> <summary>Hermes Agent config</summary>
# ~/.hermes/config.yaml
mcp_servers:
  jdatamunch:
    command: "uvx"
    args: ["jdatamunch-mcp"]
</details>

New here?

Start with the QuickStart guide: zero to indexed in three steps.

Or if you prefer learning by doing: index a file, run describe_dataset, and look at what comes back.

That single call, 35 milliseconds and 3,849 tokens, tells you everything that would have cost you 111 million tokens to read raw.

That's the whole idea...

[Star History chart for jgravelle/jdatamunch-mcp]

Global Ranking

Trust Score: 2.1 (MCPHub Index)

Based on codebase health & activity.

Manual Config

{ "mcpServers": { "jgravelle-jdatamunch-mcp": { "command": "npx", "args": ["jgravelle-jdatamunch-mcp"] } } }