jgravelle/jdatamunch-mcp

Built by jgravelle • 21 stars

What is jgravelle/jdatamunch-mcp?

Token-efficient MCP server for tabular data retrieval. Index CSV/Excel files, query rows, aggregate — 99%+ token savings vs raw file reads.

How to use jgravelle/jdatamunch-mcp?

1. Install a compatible MCP client (like Claude Desktop).
2. Open your configuration settings.
3. Add jgravelle/jdatamunch-mcp using the following command: npx @modelcontextprotocol/jgravelle-jdatamunch-mcp
4. Restart the client and verify the new tools are active.

🛡️ Scoped (Restricted)
npx @modelcontextprotocol/jgravelle-jdatamunch-mcp --scope restricted
🔓 Unrestricted Access
npx @modelcontextprotocol/jgravelle-jdatamunch-mcp

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified Standard Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

jgravelle/jdatamunch-mcp FAQ

Q

Is jgravelle/jdatamunch-mcp safe?

Yes, jgravelle/jdatamunch-mcp follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q

Is jgravelle/jdatamunch-mcp up to date?

jgravelle/jdatamunch-mcp is currently active in the registry with 21 stars on GitHub, indicating ongoing maintenance and community interest.

Q

Are there any limits for jgravelle/jdatamunch-mcp?

Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub
<!-- mcp-name: io.github.jgravelle/jdatamunch-mcp -->

FREE FOR PERSONAL USE

Use it to make money, and Uncle J. gets a taste. Fair enough? See the license for details.


Cut spreadsheet token usage by 99.997%

Most AI agents explore tabular data the expensive way:

dump the whole file into the prompt → skim a million irrelevant rows → repeat.

That is not "a little inefficient." That is a token incinerator.

A 255 MB CSV file with 1 million rows costs 111 million tokens if you paste it raw. A single describe_dataset call answers the same orientation question in 3,849 tokens.

That is a 25,333× reduction — measured, not estimated, on a real 1M-row public dataset.

jDataMunch indexes the file once and lets agents retrieve only the exact data they need: column profiles, filtered rows, and server-side aggregations — with SQL precision.

Benchmark: LAPD crime records — 1,004,894 rows, 28 columns, 255 MB. Baseline (raw file): 111,028,360 tokens | jDataMunch: ~3,849 tokens | 25,333× reduction. Methodology & harness · Full results

| Task | Traditional approach | With jDataMunch |
|---|---|---|
| Understand a dataset | Paste entire CSV | describe_dataset → column names, types, cardinality, samples |
| Find relevant columns | Read every row | search_data → column-level results with IDs |
| Answer a filtered question | Load millions of rows | get_rows with structured filters → only matching rows |
| Compute a group-by | Return all data | aggregate → server-side SQL, one result set |

Index once. Query cheaply. Keep moving. Precision retrieval beats brute-force context.


jDataMunch MCP

Structured tabular data retrieval for AI agents


Commercial licenses

jDataMunch-MCP is free for non-commercial use.

Commercial use requires a paid license.

jDataMunch-only licenses

Want the full jMunch suite?

Stop paying your model to read the whole damn spreadsheet.

jDataMunch turns tabular data exploration into structured retrieval.

Instead of forcing an agent to load an entire CSV, scan millions of rows, and burn through context just to find the right column name, jDataMunch lets it navigate by what the data is and retrieve only what matters.

That means:

  • 25,333× lower data-reading token usage on a 1M-row CSV (measured)
  • less irrelevant context polluting the prompt
  • faster dataset orientation — one call tells you everything about the schema
  • accurate filtered queries — the agent asks for Hollywood assaults, it gets Hollywood assaults
  • server-side aggregations — GROUP BY runs in SQLite, not inside the context window

It indexes your files once using a streaming parser and SQLite, stores column profiles and row data with proper type affinity, and retrieves exactly what the agent asked for instead of re-loading the entire file on every question.


Why agents need this

Most agents still handle spreadsheets like someone who prints the entire internet before reading one article:

  • paste the whole CSV to answer a narrow question
  • re-load the same file repeatedly across tool calls
  • consume column headers, empty cells, malformed rows, and irrelevant records
  • burn context window on data that was never part of the question

jDataMunch fixes that by giving them a structured way to:

  • describe a dataset's schema before touching any row data
  • search for the specific column that holds the answer
  • retrieve only the rows that match the filter
  • run aggregations server-side and get back a single result set
  • orient themselves with samples before committing to a full query

Agents do not need bigger context windows.

They need better aim.


What you get

Column-level retrieval

Understand a dataset's full schema — types, cardinality, null rates, value distributions, samples — in a single sub-10ms call. No rows loaded.
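As a toy sketch of the kind of per-column profile such a call could assemble, here is a simple pass over one column's values. The function name `profile_column`, the type-inference rules, and the output shape are all illustrative assumptions, not the server's actual schema.

```python
# Hypothetical per-column profiling sketch; not jDataMunch's real code.
from collections import Counter

def infer_type(values):
    """Guess a column type from non-null string values."""
    def all_parse(cast):
        try:
            for v in values:
                cast(v)
            return True
        except ValueError:
            return False
    if all_parse(int):
        return "integer"
    if all_parse(float):
        return "real"
    return "text"

def profile_column(name, raw_values, sample_size=3):
    non_null = [v for v in raw_values if v not in ("", None)]
    counts = Counter(non_null)
    return {
        "name": name,
        "type": infer_type(non_null),
        "cardinality": len(counts),
        "null_pct": round(100 * (1 - len(non_null) / len(raw_values)), 1),
        "samples": [v for v, _ in counts.most_common(sample_size)],
    }

profile = profile_column("Vict Age", ["25", "31", "", "25", "40"])
# → {'name': 'Vict Age', 'type': 'integer', 'cardinality': 3,
#    'null_pct': 20.0, 'samples': ['25', '31', '40']}
```

The point is that a profile like this is a few hundred tokens regardless of row count, which is why schema orientation stays cheap.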

Filtered row retrieval

Structured filters with 10 operators (eq, neq, gt, gte, lt, lte, contains, in, is_null, between). All parameterized SQL — no injection surface. Hard cap of 500 rows per call to protect context budgets.

Server-side aggregations

GROUP BY with count, sum, avg, min, max, count_distinct, median. The computation stays in SQLite. One compact result set comes back instead of the data the model would aggregate itself.
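The payoff of server-side aggregation is that SQLite does the work and only the result set reaches the model. A minimal illustration, using an invented in-memory table rather than a real jDataMunch index:

```python
# Sketch: the GROUP BY runs inside SQLite; the model only sees the summary.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rows (area TEXT, vict_age INTEGER)")
conn.executemany(
    "INSERT INTO rows VALUES (?, ?)",
    [("Hollywood", 25), ("Hollywood", 35), ("Central", 41)],
)

# One compact result set instead of every raw row.
result = conn.execute(
    "SELECT area, COUNT(*), AVG(vict_age) FROM rows GROUP BY area ORDER BY area"
).fetchall()
# → [('Central', 1, 41.0), ('Hollywood', 2, 30.0)]
```

With a million rows the same query still returns a handful of tuples, which is the whole token-saving argument.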

Smart column search

search_data searches column names, value indexes, and AI summaries simultaneously. Ask for "weapon type" and get Weapon Used Cd back. Ask for "Hollywood" and get the column whose values contain it.

Token savings telemetry

Every call reports tokens_saved and cost_avoided estimates. get_session_stats shows your cumulative savings across the session.
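The README does not say how these estimates are computed; a common rough heuristic is about four characters per token. The sketch below uses that assumption, and both helper names are invented:

```python
# Hypothetical savings estimator; the ~4 chars/token heuristic and these
# function names are assumptions, not jDataMunch's actual telemetry code.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def tokens_saved(raw_file_chars: int, response: str) -> int:
    # what pasting the raw file would cost, minus what the call returned
    return max(0, raw_file_chars // 4 - estimate_tokens(response))

saved = tokens_saved(raw_file_chars=1_000_000, response="x" * 4_000)
# → 249_000
```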

Local-first speed

Indexes are stored at ~/.data-index/ by default. No cloud. No API keys required for core functionality.


How it works

jDataMunch parses local CSV and Excel files using a streaming, single-pass pipeline:

CSV/Excel file
  → Streaming parser (never loads full file into memory)
  → Column profiler (type inference, cardinality, min/max/mean/median, value indexes)
  → SQLite writer (10,000-row batches, WAL mode, indexes on low-cardinality columns)
  → index.json (column profiles, stats, file hash for incremental detection)

When an agent queries:

describe_dataset  →  reads index.json in memory (< 10ms)
get_rows          →  parameterized SQL on data.sqlite (< 100ms on indexed columns)
aggregate         →  GROUP BY SQL on data.sqlite (< 200ms for simple group-by)
search_data       →  scans column profiles in memory (< 50ms)

No raw file is ever re-read after the initial index. The SQLite database serves all row-level queries.
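The "file hash for incremental detection" step can be sketched as below; the choice of SHA-256, the chunked read, and the helper names are assumptions about how such a check might look, not the project's code.

```python
# Incremental-indexing sketch: hash the file, skip re-indexing if unchanged.
import hashlib, os, tempfile

def file_hash(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):  # chunked read
            h.update(chunk)
    return h.hexdigest()

def needs_reindex(path, stored_hash):
    return file_hash(path) != stored_hash

# demo with a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"city,age\nLA,25\n")
    tmp = f.name
h1 = file_hash(tmp)
unchanged = not needs_reindex(tmp, h1)   # True: safe to skip re-indexing
os.unlink(tmp)
```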

For a 255 MB, 1,004,894-row CSV (measured on real data):

  • Index time: ~43 seconds (one-time)
  • describe_dataset: 35 ms, 3,849 tokens vs 111,028,360 tokens raw — 25,333×
  • describe_column (single column deep-dive): 22–33 ms, ~600 tokens
  • get_rows (indexed filter): < 100 ms
  • Peak indexing memory: < 500 MB

Start fast

1. Install it

pip install jdatamunch-mcp

For Excel (.xlsx) support:

pip install "jdatamunch-mcp[excel]"

2. Add it to your MCP client

If you're using Claude Code:

claude mcp add jdatamunch uvx jdatamunch-mcp

Or add manually to your ~/.claude.json:

{
  "mcpServers": {
    "jdatamunch-mcp": {
      "command": "uvx",
      "args": ["jdatamunch-mcp"]
    }
  }
}

3. Index a file and start querying

index_local(path="/path/to/data.csv", name="my-dataset")
describe_dataset(dataset="my-dataset")
get_rows(dataset="my-dataset", filters=[{"column": "City", "op": "eq", "value": "Los Angeles"}], limit=10)

4. Tell your agent to actually use it

Installing jDataMunch makes the tools available. It does not guarantee the agent will stop pasting entire CSVs into prompts unless you tell it to use structured retrieval first.

A simple instruction like this helps:

Use jdatamunch-mcp for tabular data whenever available.
Always call describe_dataset first to understand the schema.
Use get_rows with filters rather than loading raw files.
Use aggregate for any group-by or summary questions.

Tools

| Tool | What it does |
|---|---|
| index_local | Index a CSV or Excel file. Profiles columns, loads rows into SQLite. Incremental by default (skips if file unchanged). |
| list_datasets | List all indexed datasets with row counts, column counts, and file sizes. |
| describe_dataset | Full schema profile: every column's name, type, cardinality, null%, and sample values. Primary orientation tool. |
| describe_column | Deep profile of one column: full value distribution, histogram bins, temporal range. |
| search_data | Search column names and values by keyword. Returns column IDs — tells the agent where to look, not the data. |
| get_rows | Filtered row retrieval with 10 operators. Parameterized SQL. 500-row hard cap. |
| aggregate | Server-side GROUP BY: count, sum, avg, min, max, count_distinct, median. |
| sample_rows | Head, tail, or random sample. Good for a first look at an unfamiliar dataset. |
| get_session_stats | Cumulative token savings and cost avoided across the session. |

Filter operators

get_rows and aggregate accept structured filters:

{"column": "AREA NAME",      "op": "eq",       "value": "Hollywood"}
{"column": "Vict Age",       "op": "between",  "value": [25, 35]}
{"column": "Crm Cd Desc",    "op": "contains", "value": "ASSAULT"}
{"column": "Weapon Used Cd", "op": "is_null",  "value": true}
{"column": "AREA",           "op": "in",       "value": [1, 2, 7]}
| Operator | Meaning |
|---|---|
| eq | equals |
| neq | not equals |
| gt, gte | greater than (or equal) |
| lt, lte | less than (or equal) |
| contains | case-insensitive substring |
| in | value in list |
| is_null | null / not null check |
| between | inclusive range [min, max] |

Multiple filters are ANDed. No raw SQL accepted — injection surface is zero.
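As a sketch of what "parameterized SQL only" can look like in practice, structured filters like those above might compile to a WHERE clause plus a separate parameter list. `compile_filters` is an invented name and this is not the project's actual implementation; it only illustrates why no raw SQL ever reaches the database.

```python
# Hypothetical filter-to-SQL compiler: values travel as bound parameters,
# never spliced into the SQL string.
def compile_filters(filters):
    clauses, params = [], []
    for f in filters:
        c, op, v = f["column"], f["op"], f.get("value")
        if op == "between":
            clauses.append(f'"{c}" BETWEEN ? AND ?')
            params += list(v)
        elif op == "in":
            qs = ", ".join("?" * len(v))
            clauses.append(f'"{c}" IN ({qs})')
            params += list(v)
        elif op == "is_null":
            clauses.append(f'"{c}" IS NULL' if v else f'"{c}" IS NOT NULL')
        elif op == "contains":
            clauses.append(f'LOWER("{c}") LIKE ?')   # case-insensitive substring
            params.append(f"%{str(v).lower()}%")
        else:
            sym = {"eq": "=", "neq": "!=", "gt": ">",
                   "gte": ">=", "lt": "<", "lte": "<="}[op]
            clauses.append(f'"{c}" {sym} ?')
            params.append(v)
    return " AND ".join(clauses), params             # filters are ANDed

where, params = compile_filters([
    {"column": "AREA NAME", "op": "eq", "value": "Hollywood"},
    {"column": "Vict Age", "op": "between", "value": [25, 35]},
])
# where  → '"AREA NAME" = ? AND "Vict Age" BETWEEN ? AND ?'
# params → ['Hollywood', 25, 35]
```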


Configuration

| Variable | Default | Purpose |
|---|---|---|
| DATA_INDEX_PATH | ~/.data-index/ | Index storage location |
| JDATAMUNCH_MAX_ROWS | 5,000,000 | Row cap for indexing |
| JDATAMUNCH_SHARE_SAVINGS | 1 | Set 0 to disable anonymous token savings telemetry |
| ANTHROPIC_API_KEY | — | AI column summaries via Claude (v1.1+) |
| GOOGLE_API_KEY | — | AI column summaries via Gemini (v1.1+) |

When does it help?

| Scenario | Without jDataMunch | With jDataMunch | Measured savings |
|---|---|---|---|
| Orient on a 255 MB CSV | Paste raw file → 111M tokens | describe_dataset → 3,849 tokens | 25,333× |
| Schema + column deep-dive | Same 111M tokens | describe_dataset + describe_column → ~4,400 tokens | ~25,000× |
| Find the crime-type column | Scan headers manually | search_data("crime type") → column IDs | structural |
| Get Hollywood assault rows | Load all 1M rows | get_rows with 2 filters → matching rows only | ~99%+ |
| Crime count by area | Return all rows, aggregate in LLM | aggregate(group_by=["AREA NAME"]) → 21 rows | ~99.9% |
| Understand weapon nulls | Load column, count manually | describe_column("Weapon Used Cd") → null_pct: 64.2% | ~99.9% |
| Re-query an unchanged file | Re-load file every time | Hash check → instant skip if unchanged | 100% of re-read cost |

The case where it doesn't help: you genuinely need every row for ML training or full exports. For that, read the file directly. For everything else — exploration, filtering, aggregation, orientation — structured retrieval wins every time.


ID scheme

Every column and row gets a stable ID:

{dataset}::{column_name}#column     →  "lapd-crime::AREA NAME#column"
{dataset}::row_{rowid}#row          →  "lapd-crime::row_4421#row"
{dataset}::{pk_col}={value}#row     →  "lapd-crime::DR_NO=211507896#row"

Pass column IDs directly to describe_column. Row IDs are returned in get_rows results.
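Since the scheme puts the kind marker after `#` and the dataset before `::`, splitting an ID back into its parts is a two-step string operation. `parse_id` is a hypothetical helper for illustration, not part of the published API:

```python
# Sketch of decomposing the stable IDs described above.
def parse_id(entity_id: str):
    body, kind = entity_id.rsplit("#", 1)     # kind: "column" or "row"
    dataset, local = body.split("::", 1)      # dataset prefix, local part
    return {"dataset": dataset, "kind": kind, "local": local}

parsed = parse_id("lapd-crime::AREA NAME#column")
# → {'dataset': 'lapd-crime', 'kind': 'column', 'local': 'AREA NAME'}
```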


Part of the jMunch family

| Product | Domain | Unit of retrieval | PyPI |
|---|---|---|---|
| jcodemunch-mcp | Source code | Symbols (functions, classes) | jcodemunch-mcp |
| jdocmunch-mcp | Documentation | Sections (headings) | jdocmunch-mcp |
| jdatamunch-mcp | Tabular data | Columns, row slices, aggregations | jdatamunch-mcp |

All three implement jMRI — the open retrieval interface spec. Same response envelope, same token tracking, same telemetry pattern.


Best for

  • analysts, finance, ops, and consultants working with large spreadsheets
  • AI agents that answer questions about CSV or Excel data
  • anyone paying token costs to load files they query repeatedly
  • teams that want structured, auditable data access instead of raw file dumps
  • developers building data-aware agents who need a drop-in retrieval layer

New here?

Index a file, run describe_dataset, and look at what comes back.

That single call — 35 milliseconds, 3,849 tokens — tells you everything that would have cost you 111 million tokens to read raw.

That's the whole idea...


Manual Config

{
  "mcpServers": {
    "jgravelle-jdatamunch-mcp": {
      "command": "npx",
      "args": ["jgravelle-jdatamunch-mcp"]
    }
  }
}