
Tencent/WeKnora

Built by Tencent • 13,671 stars

What is Tencent/WeKnora?

LLM-powered framework for deep document understanding, semantic retrieval, and context-aware answers using RAG paradigm.

How to use Tencent/WeKnora?

1. Install a compatible MCP client (such as Claude Desktop).
2. Open your configuration settings.
3. Add Tencent/WeKnora using the following command: npx @modelcontextprotocol/tencent-weknora
4. Restart the client and verify the new tools are active.
๐Ÿ›ก๏ธ Scoped (Restricted)
npx @modelcontextprotocol/tencent-weknora --scope restricted
๐Ÿ”“ Unrestricted Access
npx @modelcontextprotocol/tencent-weknora
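For Claude Desktop, the steps above correspond to an entry like the following in its `claude_desktop_config.json` (a sketch based on the command shown above; the exact config file location depends on your OS):

```json
{
  "mcpServers": {
    "tencent-weknora": {
      "command": "npx",
      "args": ["@modelcontextprotocol/tencent-weknora", "--scope", "restricted"]
    }
  }
}
```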

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified High-performance Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

Tencent/WeKnora FAQ

Q: Is Tencent/WeKnora safe?

Yes, Tencent/WeKnora follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q: Is Tencent/WeKnora up to date?

Tencent/WeKnora is currently active in the registry, and its 13,671 stars on GitHub indicate ongoing maintenance and strong community support.

Q: Are there any limits for Tencent/WeKnora?

Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub
<p align="center"> <picture> <img src="./docs/images/logo.png" alt="WeKnora Logo" height="120"/> </picture> </p> <p align="center"> <picture> <a href="https://trendshift.io/repositories/15289" target="_blank"> <img src="https://trendshift.io/api/badge/repositories/15289" alt="Tencent%2FWeKnora | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/> </a> </picture> </p> <p align="center"> <a href="https://weknora.weixin.qq.com" target="_blank"> <img alt="官方网站" src="https://img.shields.io/badge/官方网站-WeKnora-4e6b99"> </a> <a href="https://chatbot.weixin.qq.com" target="_blank"> <img alt="微信对话开放平台" src="https://img.shields.io/badge/微信对话开放平台-5ac725"> </a> <a href="https://github.com/Tencent/WeKnora/blob/main/LICENSE"> <img src="https://img.shields.io/badge/License-MIT-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="License"> </a> <a href="./CHANGELOG.md"> <img alt="Version" src="https://img.shields.io/badge/version-0.3.5-2e6cc4?labelColor=d4eaf7"> </a> </p> <p align="center"> | <b>English</b> | <a href="./README_CN.md"><b>简体中文</b></a> | <a href="./README_JA.md"><b>日本語</b></a> | <a href="./README_KO.md"><b>한국어</b></a> | </p> <p align="center"> <h4 align="center">

Overview • Architecture • Key Features • Getting Started • API Reference • Developer Guide

</h4> </p>

💡 WeKnora - LLM-Powered Document Understanding & Retrieval Framework

📌 Overview

WeKnora is an LLM-powered framework designed for deep document understanding and semantic retrieval, especially for handling complex, heterogeneous documents.

It adopts a modular architecture that combines multimodal preprocessing, semantic vector indexing, intelligent retrieval, and large language model inference. At its core, WeKnora follows the RAG (Retrieval-Augmented Generation) paradigm, enabling high-quality, context-aware answers by combining relevant document chunks with model reasoning.
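The retrieve-then-generate loop described above can be sketched as follows. Everything here is illustrative: the bag-of-words "embedding", the function names, and the sample chunks are stand-ins for WeKnora's real embedding models and pipeline, and the final LLM inference call is omitted.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; WeKnora uses real embedding models."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank pre-split document chunks by similarity to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, context_chunks):
    """RAG-style: combine the retrieved chunks with the user question."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "WeKnora parses PDFs and Word documents into semantic chunks.",
    "Retrieved chunks are combined with the user question before LLM inference.",
    "The frontend is built as a web application.",
]
prompt = build_prompt("How are answers generated?",
                      retrieve("combine chunks with question", chunks))
print(prompt)  # this prompt would then be sent to the LLM
```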

Website: https://weknora.weixin.qq.com

✨ Latest Updates

v0.3.5 Highlights:

  • Telegram, DingTalk & Mattermost IM Integration: Added Telegram bot (webhook/long-polling, streaming via editMessageText), DingTalk bot (webhook/Stream mode, AI Card streaming), and Mattermost adapter; IM channel coverage now includes WeCom, Feishu, Slack, Telegram, DingTalk, and Mattermost
  • IM Slash Commands & QA Queue: Pluggable slash-command system (/help, /info, /search, /stop, /clear) with a bounded QA worker pool, per-user rate limiting, and Redis-based multi-instance coordination
  • Suggested Questions: Agents surface context-aware suggested questions based on configured knowledge bases; image knowledge automatically enqueues question generation
  • VLM Auto-Describe MCP Tool Images: When MCP tools return images, the agent generates text descriptions via the configured VLM model, enabling image content to be used by text-only LLMs
  • Novita AI Provider: New LLM provider with OpenAI-compatible API supporting chat, embedding, and VLLM model types
  • MCP Tool Name Stability: Tool names now based on service name (stable across reconnections) instead of UUID; unique name constraint added; frontend formats names into human-readable form
  • Channel Tracking: Knowledge entries and messages record source channel (web/api/im/browser_extension) for traceability
  • Bug Fixes: Fixed agent empty response when no knowledge base is configured, UTF-8 truncation in summaries for Chinese/emoji documents, API key encryption loss on tenant settings update, vLLM streaming reasoning content propagation, and rerank empty passage errors
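The bounded QA worker pool with per-user rate limiting mentioned above can be illustrated with an in-memory sketch. All class and variable names are hypothetical, and WeKnora's actual implementation coordinates multiple instances through Redis, which is omitted here.

```python
import queue
import threading
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window per-user limiter (single-process illustration)."""
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)

    def allow(self, user_id):
        now = time.monotonic()
        h = self.history[user_id]
        while h and now - h[0] > self.window:  # drop requests outside the window
            h.popleft()
        if len(h) >= self.max_requests:
            return False
        h.append(now)
        return True

class QAWorkerPool:
    """Bounded pool: at most `workers` questions are answered concurrently."""
    def __init__(self, workers, handler):
        self.tasks = queue.Queue()
        self.handler = handler
        self.results = []
        self.lock = threading.Lock()
        for _ in range(workers):
            threading.Thread(target=self._run, daemon=True).start()

    def submit(self, user_id, question):
        self.tasks.put((user_id, question))

    def _run(self):
        while True:
            user_id, question = self.tasks.get()
            answer = self.handler(question)  # stand-in for the real QA pipeline
            with self.lock:
                self.results.append((user_id, answer))
            self.tasks.task_done()

limiter = RateLimiter(max_requests=2, window_seconds=60)
pool = QAWorkerPool(workers=2, handler=lambda q: f"answer to: {q}")

accepted = 0
for q in ["q1", "q2", "q3"]:
    if limiter.allow("user-1"):
        pool.submit("user-1", q)
        accepted += 1
pool.tasks.join()
print(accepted)  # the third request was rate-limited
```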

v0.3.4 Highlights:

  • IM Bot Integration: WeCom, Feishu, and Slack IM channel support with WebSocket/Webhook modes, streaming, and knowledge base integration
  • Multimodal Image Support: Image upload and multimodal image processing with enhanced session management
  • Manual Knowledge Download: Download manual knowledge content as files with proper filename sanitization
  • NVIDIA Model API: Support NVIDIA chat model API with custom endpoint and VLM model configuration
  • Weaviate Vector DB: Added Weaviate as a new vector database backend for knowledge retrieval
  • AWS S3 Storage: Integrated AWS S3 storage adapter with configuration UI and database migrations
  • AES-256-GCM Encryption: API keys encrypted at rest with AES-256-GCM for enhanced security
  • Built-in MCP Service: Built-in MCP service support for extending agent capabilities
  • Hybrid Search Optimization: Grouped targets and reused query embeddings for better retrieval performance
  • Final Answer Tool: New final_answer tool with agent duration tracking for improved agent workflows
<details> <summary><b>Earlier Releases</b></summary>

v0.3.3 Highlights:

  • Parent-Child Chunking: Hierarchical parent-child chunking strategy for enhanced context management and more accurate retrieval
  • Knowledge Base Pinning: Pin frequently-used knowledge bases for quick access
  • Fallback Response: Fallback response handling with UI indicators when no relevant results are found
  • Passage Cleaning for Rerank: Passage cleaning for rerank model to improve relevance scoring accuracy
  • Storage Auto-Creation: Storage engine connectivity check with auto-creation of buckets
  • Milvus Vector DB: Added Milvus as a new vector database backend for knowledge retrieval
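The parent-child chunking strategy above can be sketched like this: small child chunks are matched against the query for precision, and the enclosing parent chunk is returned for wider context. The sentence-based splitter and all names are illustrative, not WeKnora's actual code.

```python
def parent_child_chunks(text, parent_size=2, child_size=1):
    """Split sentences into parent chunks, then child chunks that remember their parent."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    parents, children = [], []
    for i in range(0, len(sentences), parent_size):
        pid = len(parents)
        block = sentences[i:i + parent_size]
        parents.append(". ".join(block) + ".")
        for j in range(0, len(block), child_size):
            children.append({"parent_id": pid,
                             "text": ". ".join(block[j:j + child_size]) + "."})
    return parents, children

def retrieve_with_parent(query_word, parents, children):
    """Match on the small child chunk, but return the parent's wider context."""
    for child in children:
        if query_word.lower() in child["text"].lower():
            return parents[child["parent_id"]]
    return None

doc = ("WeKnora splits documents hierarchically. Children are small and precise. "
       "Parents carry wider context. Retrieval matches children first.")
parents, children = parent_child_chunks(doc)
ctx = retrieve_with_parent("precise", parents, children)
print(ctx)
```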

v0.3.2 Highlights:

  • 🔍 Knowledge Search: New "Knowledge Search" entry point with semantic retrieval, supporting bringing search results directly into the conversation window
  • ⚙️ Parser & Storage Engine Configuration: Configure document parser engines and storage engines for different sources in settings, with per-file-type parser selection in knowledge base
  • 🖼️ Image Rendering in Local Storage: Support image rendering during conversations in local storage mode, with optimized streaming image placeholders
  • 📄 Document Preview: Embedded document preview component for previewing user-uploaded original files
  • 🎨 UI Optimization: Knowledge base, agent, and shared space list page interaction redesign
  • 🗄️ Milvus Support: Added Milvus as a new vector database backend for knowledge retrieval
  • 🌋 Volcengine TOS: Added Volcengine TOS object storage support
  • 📊 Mermaid Rendering: Support mermaid diagram rendering in chat with fullscreen viewer, zoom, pan, toolbar and export
  • 💬 Batch Conversation Management: Batch management and delete all sessions functionality
  • 🔗 Remote URL Knowledge: Support creating knowledge entries from remote file URLs
  • 🧠 Memory Graph Preview: Preview of user-level memory graph visualization
  • 🔄 Async Re-parse: Async API for re-processing existing knowledge documents

v0.3.0 Highlights:

  • ๐Ÿข Shared Space: Shared space with member invitations, shared knowledge bases and agents across members, tenant-isolated retrieval
  • ๐Ÿงฉ Agent Skills: Agent skills system with preloaded skills for smart-reasoning agent, sandboxed execution environment for security isolation
  • ๐Ÿค– Custom Agents: Support for creating, configuring, and selecting custom agents with knowledge base selection modes (all/specified/disabled)
  • ๐Ÿ“Š Data Analyst Agent: Built-in Data Analyst agent with DataSchema tool for CSV/Excel analysis
  • ๐Ÿง  Thinking Mode: Support thinking mode for LLM and agents, intelligent filtering of thinking content
  • ๐Ÿ” Web Search Providers: Added Bing and Google search providers alongside DuckDuckGo
  • ๐Ÿ“‹ Enhanced FAQ: Batch import dry run, similar questions, matched question in search results, large imports offloaded to object storage
  • ๐Ÿ”‘ API Key Auth: API Key authentication mechanism with Swagger documentation security
  • ๐Ÿ“Ž In-Input Selection: Select knowledge bases and files directly in the input box with @mention display
  • โ˜ธ๏ธ Helm Chart: Complete Helm chart for Kubernetes deployment with Neo4j GraphRAG support
  • ๐ŸŒ i18n: Added Korean (ํ•œ๊ตญ์–ด) language support
  • ๐Ÿ”’ Security Hardening: SSRF-safe HTTP client, enhanced SQL validation, MCP stdio transport security, sandbox-based execution
  • โšก Infrastructure: Qdrant vector DB support, Redis ACL, configurable log level, Ollama embedding optimization, DISABLE_REGISTRATION control

v0.2.0 Highlights:

  • 🤖 Agent Mode: New ReACT Agent mode that can call built-in tools, MCP tools, and web search, providing comprehensive summary reports through multiple iterations and reflection
  • 📚 Multi-Type Knowledge Bases: Support for FAQ and document knowledge base types, with new features including folder import, URL import, tag management, and online entry
  • ⚙️ Conversation Strategy: Support for configuring Agent models, normal mode models, retrieval thresholds, and Prompts, with precise control over multi-turn conversation behavior
  • 🌐 Web Search: Support for extensible web search engines with built-in DuckDuckGo search engine
  • 🔌 MCP Tool Integration: Support for extending Agent capabilities through MCP, with built-in uvx and npx launchers, supporting multiple transport methods
  • 🎨 New UI: Optimized conversation interface with Agent mode/normal mode switching, tool call process display, and comprehensive knowledge base management interface upgrade
  • ⚡ Infrastructure Upgrade: Introduced MQ async task management, support for automatic database migration, and fast development mode
</details>

🔒 Security Notice

Important: Starting from v0.1.3, WeKnora includes login authentication functionality to enhance system security. For production deployments, we strongly recommend:

  • Deploy WeKnora services in internal/private network environments rather than public internet
  • Avoid exposing the service directly to public networks to prevent potential information leakage
  • Configure proper firewall rules and access controls for your deployment environment
  • Regularly update to the latest version for security patches and improvements

๐Ÿ—๏ธ Architecture

(Architecture diagram: weknora-architecture.png)

WeKnora employs a modern modular design to build a complete document understanding and retrieval pipeline. The system primarily includes document parsing, vector processing, retrieval engine, and large model inference as core modules, with each component being flexibly configurable and extendable.

🎯 Key Features

  • 🤖 Agent Mode: Support for ReACT Agent mode that can use built-in tools to retrieve knowledge bases, MCP tools, and web search tools to access external services, providing comprehensive summary reports through multiple iterations and reflection
  • 🔍 Precise Understanding: Structured content extraction from PDFs, Word documents, images and more into unified semantic views
  • 🧠 Intelligent Reasoning: Leverages LLMs to understand document context and user intent for accurate Q&A and multi-turn conversations
  • 📚 Multi-Type Knowledge Bases: Support for FAQ and document knowledge base types, with folder import, URL import, tag management, and online entry capabilities
  • 🔧 Flexible Extension: All components from parsing and embedding to retrieval and generation are decoupled for easy customization
  • ⚡ Efficient Retrieval: Hybrid retrieval strategies combining keywords, vectors, and knowledge graphs, with cross-knowledge base retrieval support
  • 🌐 Web Search: Support for extensible web search engines with built-in DuckDuckGo search engine
  • 🔌 MCP Tool Integration: Support for extending Agent capabilities through MCP, with built-in uvx and npx launchers, supporting multiple transport methods
  • ⚙️ Conversation Strategy: Support for configuring Agent models, normal mode models, retrieval thresholds, and Prompts, with precise control over multi-turn conversation behavior
  • 🎯 User-Friendly: Intuitive web interface and standardized APIs for zero technical barriers
  • 🔒 Secure & Controlled: Support for local deployment and private cloud, ensuring complete data sovereignty
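As one illustration of how keyword and vector results can be merged in a hybrid strategy, reciprocal rank fusion (RRF) combines two ranked lists into a single ranking. This is a standard technique shown for intuition only; WeKnora's actual fusion logic is configurable and may differ.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked doc-id lists (e.g. BM25 and dense retrieval) into one."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Documents ranked highly in any list accumulate a larger score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["doc-3", "doc-1", "doc-7"]   # keyword (sparse) ranking
dense_ranking = ["doc-1", "doc-7", "doc-3"]  # vector (dense) ranking
fused = reciprocal_rank_fusion([bm25_ranking, dense_ranking])
print(fused)  # doc-1 wins: it ranks well in both lists
```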

📊 Application Scenarios

| Scenario | Applications | Core Value |
| --- | --- | --- |
| Enterprise Knowledge Management | Internal document retrieval, policy Q&A, operation manual search | Improve knowledge discovery efficiency, reduce training costs |
| Academic Research Analysis | Paper retrieval, research report analysis, scholarly material organization | Accelerate literature review, assist research decisions |
| Product Technical Support | Product manual Q&A, technical documentation search, troubleshooting | Enhance customer service quality, reduce support burden |
| Legal & Compliance Review | Contract clause retrieval, regulatory policy search, case analysis | Improve compliance efficiency, reduce legal risks |
| Medical Knowledge Assistance | Medical literature retrieval, treatment guideline search, case analysis | Support clinical decisions, improve diagnosis quality |

🧩 Feature Matrix

| Module | Support | Description |
| --- | --- | --- |
| Agent Mode | ✅ ReACT Agent Mode | Built-in tools for knowledge base retrieval, MCP tool calls, and web search; cross-knowledge base retrieval with multi-step iteration |
| Knowledge Base Types | ✅ FAQ / Document | FAQ and document knowledge bases with folder import, URL import, tag management, online entry, and knowledge move |
| Document Formats | ✅ PDF / Word / Txt / Markdown / HTML / Images (OCR + Caption) | Structured and unstructured document parsing; image text extraction via OCR; image caption generation via VLM |
| IM Channel Integration | ✅ WeCom / Feishu / Slack / Telegram / DingTalk / Mattermost | WebSocket and Webhook modes; streaming replies; slash commands (/help, /info, /search, /stop, /clear); per-user rate limiting; Redis-based multi-instance coordination |
| Model Management | ✅ Centralized configuration, built-in model sharing | Centralized model config with per-knowledge-base model selection; multi-tenant shared built-in model support |
| Embedding Models | ✅ Local models (Ollama), BGE / GTE / OpenAI-compatible APIs | Customizable embedding models compatible with local deployment and cloud vector generation APIs |
| Vector DB Integration | ✅ PostgreSQL (pgvector) / Elasticsearch / Milvus / Weaviate / Qdrant | Five vector index backends with flexible switching to match retrieval scenario requirements |
| Object Storage | ✅ Local / MinIO / AWS S3 / Volcengine TOS | Pluggable storage adapters for file and image assets; bucket auto-creation on startup |
| Retrieval Strategies | ✅ BM25 / Dense Retrieval / GraphRAG | Sparse/dense recall and knowledge graph-enhanced retrieval; customizable retrieve-rerank-generate pipeline |
| LLM Integration | ✅ Qwen / DeepSeek / MiniMax / NVIDIA / Novita AI / OpenAI-compatible | Local models via Ollama or external API services; thinking/non-thinking mode switching; vLLM streaming reasoning content support |
| Conversation Strategy | ✅ Agent model, normal model, retrieval threshold, Prompt configuration | Online Prompt editing; retrieval threshold tuning; precise multi-turn conversation behavior control |
| Web Search | ✅ DuckDuckGo / Bing / Google (extensible) | Pluggable search engine providers; web search toggle per conversation |
| MCP Tools | ✅ uvx / npx launchers, Stdio / HTTP Streamable / SSE | Extend agent capabilities via MCP; stable tool naming with collision protection; VLM auto-description for tool-returned images |
| Suggested Questions | ✅ Knowledge-base-driven question suggestions | Agent surfaces context-aware suggested questions in chat interface; image knowledge auto-generates questions |
| QA Capabilities | ✅ Context-aware, multi-turn dialogue, prompt templates | Complex semantic modeling, instruction control, chain-of-thought Q&A with configurable prompts and context windows |
| Security | ✅ AES-256-GCM at-rest encryption, SSRF protection | API keys encrypted at rest; SSRF-safe HTTP client for remote API calls; sandbox execution for agent skills |
| E2E Testing | ✅ Retrieval + generation visualization and metric evaluation | End-to-end test tools for evaluating recall hit rates, answer coverage, BLEU/ROUGE metrics |
| Deployment Modes | ✅ Local / Docker / Kubernetes (Helm) | Private and offline deployment; fast development mode with hot-reload; Helm chart for Kubernetes |
| User Interfaces | ✅ Web UI + RESTful API | Interactive web interface and standard API; Agent/normal mode switching; tool call process display |
| Task Management | ✅ MQ async tasks, automatic database migration | MQ-based async task state; automatic schema and data migration on version upgrade |

🚀 Getting Started

🛠 Prerequisites

Make sure the following tools are installed on your system: Git, Docker, and Docker Compose (all of them are used in the steps below).

📦 Installation

① Clone the repository

# Clone the main repository
git clone https://github.com/Tencent/WeKnora.git
cd WeKnora

② Configure environment variables

# Copy example env file
cp .env.example .env

# Edit .env and set required values
# All variables are documented in the .env.example comments

③ Start the core services

Check the .env file to see which images need to be started, then launch the WeKnora core services with Docker Compose.

docker compose up -d

③.0 Start Ollama separately (Optional)

If you configured a local Ollama model in .env, start the Ollama service separately:

ollama serve > /dev/null 2>&1 &

③.1 Activate different combinations of features

  • Minimum core services
docker compose up -d
  • All features enabled
docker compose --profile full up -d
  • Tracing logs required
docker compose --profile jaeger up -d
  • Neo4j knowledge graph required
docker compose --profile neo4j up -d
  • Minio file storage service required
docker compose --profile minio up -d
  • Multiple options combination
docker compose --profile neo4j --profile minio up -d

④ Stop the services

docker compose down

🌐 Access Services

Once started, services will be available at:

  • Web UI: http://localhost
  • Backend API: http://localhost:8080
  • Jaeger Tracing: http://localhost:16686

📱 Interface Showcase

Web UI Interface

<table> <tr> <td><b>Knowledge Base Management</b><br/><img src="./docs/images/knowledgebases.png" alt="Knowledge Base Management"></td> <td><b>Conversation Settings</b><br/><img src="./docs/images/settings.png" alt="Conversation Settings"></td> </tr> <tr> <td colspan="2"><b>Intelligent Q&A Conversation</b><br/><img src="./docs/images/qa.png" alt="Intelligent Q&A Conversation"></td> </tr> <tr> <td colspan="2"><b>Agent Mode Tool Call Process</b><br/><img src="./docs/images/agent-qa.png" alt="Agent Mode Tool Call Process"></td> </tr> </table>

Knowledge Base Management: Support for creating FAQ and document knowledge base types, with multiple import methods including drag-and-drop, folder import, and URL import. Automatically identifies document structures and extracts core knowledge to establish indexes. Supports tag management and online entry. The system clearly displays processing progress and document status, achieving efficient knowledge base management.

Agent Mode: Support for ReACT Agent mode that can use built-in tools to retrieve knowledge bases, call user-configured MCP tools and web search tools to access external services, providing comprehensive summary reports through multiple iterations and reflection. Supports cross-knowledge base retrieval, allowing selection of multiple knowledge bases for simultaneous retrieval.

Conversation Strategy: Support for configuring Agent models, normal mode models, retrieval thresholds, and online Prompt configuration, with precise control over multi-turn conversation behavior and retrieval execution methods. The conversation input box supports Agent mode/normal mode switching, enabling/disabling web search, and selecting conversation models.

Document Knowledge Graph

WeKnora supports transforming documents into knowledge graphs, displaying the relationships between different sections of the documents. Once the knowledge graph feature is enabled, the system analyzes and constructs an internal semantic association network that not only helps users understand document content but also provides structured support for indexing and retrieval, enhancing the relevance and breadth of search results.
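The idea of using the semantic association network to broaden retrieval can be sketched as follows: after matching chunks directly, graph neighbors are pulled in for extra context. The chunk ids, adjacency list, and function name are all hypothetical; WeKnora's GraphRAG implementation is considerably more involved.

```python
def expand_with_graph(hits, graph, max_extra=2):
    """Augment retrieved chunk ids with their graph neighbors for wider context."""
    expanded = list(hits)
    for chunk_id in hits:
        for neighbor in graph.get(chunk_id, []):
            if neighbor not in expanded and len(expanded) < len(hits) + max_extra:
                expanded.append(neighbor)
    return expanded

# Adjacency list: edges between semantically related sections of a document.
graph = {
    "intro": ["architecture"],
    "architecture": ["intro", "retrieval"],
    "retrieval": ["architecture", "generation"],
}
expanded = expand_with_graph(["retrieval"], graph)
print(expanded)  # direct hit plus two related sections
```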

For detailed configuration, please refer to the Knowledge Graph Configuration Guide.

MCP Server

Please refer to the MCP Configuration Guide for the necessary setup.

🔌 Using WeChat Dialog Open Platform

WeKnora serves as the core technology framework for the WeChat Dialog Open Platform, providing a more convenient usage approach:

  • Zero-code Deployment: Simply upload knowledge to quickly deploy intelligent Q&A services within the WeChat ecosystem, achieving an "ask and answer" experience
  • Efficient Question Management: Support for categorized management of high-frequency questions, with rich data tools to ensure accurate, reliable, and easily maintainable answers
  • WeChat Ecosystem Integration: Through the WeChat Dialog Open Platform, WeKnora's intelligent Q&A capabilities can be seamlessly integrated into WeChat Official Accounts, Mini Programs, and other WeChat scenarios, enhancing user interaction experiences

🔗 Access WeKnora via MCP Server

1️⃣ Clone the repository

git clone https://github.com/Tencent/WeKnora

2️⃣ Configure MCP Server

We recommend following the MCP Configuration Guide for the full setup.

Configure the MCP client to connect to the server:

{
  "mcpServers": {
    "weknora": {
      "command": "python",
      "args": [
        "path/to/WeKnora/mcp-server/run_server.py"
      ],
      "env": {
        "WEKNORA_API_KEY": "Your x-api-key value (starts with sk): open your WeKnora instance, open the browser developer tools, and copy it from the request headers",
        "WEKNORA_BASE_URL": "http(s)://your-weknora-address/api/v1"
      }
    }
  }
}

Alternatively, install the package from PyPI and run it directly over stdio:

pip install weknora-mcp-server
python -m weknora_mcp_server
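If you have installed the server from PyPI (pip install weknora-mcp-server), the MCP client entry can point at the installed module instead of a repository checkout. A sketch, assuming the installed module is named weknora_mcp_server:

```json
{
  "mcpServers": {
    "weknora": {
      "command": "python",
      "args": ["-m", "weknora_mcp_server"],
      "env": {
        "WEKNORA_API_KEY": "sk-...",
        "WEKNORA_BASE_URL": "http(s)://your-weknora-address/api/v1"
      }
    }
  }
}
```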

🔧 Initialization Configuration Guide

To help users quickly configure models and reduce trial-and-error costs, we've replaced the original configuration-file initialization with a Web UI for model configuration. Before using it, please make sure your code is updated to the latest version. If this is your first time using this project, you can skip steps ①② and go directly to steps ③④.

① Stop the services

docker compose down

② Clear existing data tables (recommended when no important data exists)

make clean-db

③ Build and start the services

docker compose up -d --build

④ Access Web UI

http://localhost

On your first visit, you will be automatically redirected to the registration/login page. After completing registration, please create a new knowledge base and finish the relevant settings on its configuration page.

📘 API Reference

Troubleshooting guide: Troubleshooting FAQ

Detailed API documentation is available at: API Docs

Product plans and upcoming features: Roadmap

🧭 Developer Guide

⚡ Fast Development Mode (Recommended)

If you need to frequently modify code, you don't need to rebuild Docker images every time! Use fast development mode:

# Start infrastructure
make dev-start

# Start backend (new terminal)
make dev-app

# Start frontend (new terminal)
make dev-frontend

Development Advantages:

  • ✅ Frontend changes hot-reload automatically (no restart needed)
  • ✅ Backend changes restart quickly (5-10 seconds, with Air hot-reload support)
  • ✅ No need to rebuild Docker images
  • ✅ IDE breakpoint debugging supported

Detailed Documentation: Development Environment Quick Start

📁 Directory Structure

WeKnora/
├── client/      # Go client
├── cmd/         # Main entry point
├── config/      # Configuration files
├── docker/      # Docker image files
├── docreader/   # Document parsing app
├── docs/        # Project documentation
├── frontend/    # Frontend app
├── internal/    # Core business logic
├── mcp-server/  # MCP server
├── migrations/  # DB migration scripts
└── scripts/     # Shell scripts

🤝 Contributing

We welcome community contributions! For suggestions, bugs, or feature requests, please submit an Issue or directly create a Pull Request.

🎯 How to Contribute

  • ๐Ÿ› Bug Fixes: Discover and fix system defects
  • โœจ New Features: Propose and implement new capabilities
  • ๐Ÿ“š Documentation: Improve project documentation
  • ๐Ÿงช Test Cases: Write unit and integration tests
  • ๐ŸŽจ UI/UX Enhancements: Improve user interface and experience

📋 Contribution Process

  1. Fork the project to your GitHub account
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add amazing feature'
  4. Push the branch: git push origin feature/amazing-feature
  5. Create a Pull Request with a detailed description of the changes

🎨 Code Standards

📝 Commit Guidelines

Use the Conventional Commits standard:

feat: Add document batch upload functionality
fix: Resolve vector retrieval precision issue
docs: Update API documentation
test: Add retrieval engine test cases
refactor: Restructure document parsing module

👥 Contributors

Thanks to these excellent contributors:

Contributors

📄 License

This project is licensed under the MIT License. You are free to use, modify, and distribute the code with proper attribution.

📈 Project Statistics

<a href="https://www.star-history.com/#Tencent/WeKnora&type=date&legend=top-left"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Tencent/WeKnora&type=date&theme=dark&legend=top-left" /> <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Tencent/WeKnora&type=date&legend=top-left" /> <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Tencent/WeKnora&type=date&legend=top-left" /> </picture> </a>
