Your codebase, understood.
CodeGraph transforms your entire codebase into a semantically searchable knowledge graph that AI agents can actually reason about—not just grep through.
Ready to get started? Jump to the Installation Guide for step-by-step setup instructions.
Already set up? See the Usage Guide for tips on getting the most out of CodeGraph with your AI assistant.
AI coding assistants are powerful, but they're flying blind. They see files one at a time, grep for patterns, and burn tokens trying to understand your architecture. Every conversation starts from zero.
What if your AI assistant already knew your codebase?
Most semantic search tools create embeddings and call it a day. CodeGraph builds a real knowledge graph:
```
Your Code → AST + FastML → Graph Construction → Vector Embeddings
                 ↓                  ↓                    ↓
             Functions         Dependencies       Semantic Search
             Classes           Call chains        Similarity
             Modules           Data flow          Context
```
When you search, you don't just get "similar code"—you get code with its relationships intact. The function that matches your query, plus what calls it, what it depends on, and where it fits in the architecture.
CodeGraph doesn't return a list of files and wish you luck. It ships 8 agentic tools that do the thinking:
| Tool | What It Actually Does |
|---|---|
| `agentic_code_search` | Multi-step semantic search with AI-synthesized answers |
| `agentic_dependency_analysis` | Maps impact before you touch anything |
| `agentic_call_chain_analysis` | Traces execution paths through your system |
| `agentic_architecture_analysis` | Gives you the 10,000-foot view |
| `agentic_api_surface_analysis` | Understands your public interfaces |
| `agentic_context_builder` | Gathers everything needed for a feature |
| `agentic_semantic_question` | Answers complex questions about your code |
| `agentic_complexity_analysis` | Identifies high-risk code hotspots for refactoring |
Each tool runs a reasoning agent that plans, searches, analyzes graph relationships, and synthesizes an answer. Not a search result—an answer.
View Agent Context Gathering Flow - Interactive diagram showing how agents use graph tools to gather context.
CodeGraph supports three agent backends selectable at runtime via CODEGRAPH_AGENT_ARCHITECTURE:
| Architecture | Description | Best For | Model Type |
|---|---|---|---|
| `rig` | Rig framework native orchestration | Fastest performance, deep analysis | Thinking/reasoning models (gpt-5.1, Claude 4.5 family, Grok 4.1 Fast Reasoning) |
| `react` (default) | ReAct-style single-pass reasoning | Quick queries, simple lookups | Basic instruction-following models |
| `lats` | Language Agent Tree Search | Complex problems requiring exploration | Works well with both |
Performance notes:
- Rig delivers the best performance with modern thinking/reasoning models. These models excel at multi-step tool orchestration and produce superior results for complex code analysis.
- ReAct remains the default for backward compatibility and works well with traditional instruction-following models.
- LATS uses tree search exploration, making it suitable for complex problems regardless of model type.
Agents can start with lightweight project context so their first tool calls are not blind. Enable via env:
- `CODEGRAPH_ARCH_BOOTSTRAP=true` — includes a brief directory/structure bootstrap plus the contents of README.md and CLAUDE.md + AGENTS.md or GEMINI.md (if present) in the agent's initial context.
- `CODEGRAPH_ARCH_PRIMER="<primer text>"` — optional custom primer injected into startup instructions (e.g., areas to focus on).
Why? Faster, more relevant early steps, fewer wasted graph/semantic queries, and better architecture answers on large repos.
Notes:
- Bootstrap is small (top directories summary), not a replacement for graph queries.
- Uses the same project selection as indexing (`CODEGRAPH_PROJECT_ID` or current working directory).
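The bootstrap behavior above can be sketched roughly as follows. This is an illustrative Python sketch, not CodeGraph's actual implementation; only the file names and the "top directories summary" idea come from this README.

```python
# Illustrative sketch of architecture bootstrap: gather a small primer from
# well-known files (if present) plus a top-level directory summary.
from pathlib import Path

def bootstrap_context(root: str, primer: str = "") -> str:
    """Build a lightweight initial context for the agent."""
    parts = [primer] if primer else []
    # File names taken from this README; the real lookup may differ.
    for name in ("README.md", "CLAUDE.md", "AGENTS.md", "GEMINI.md"):
        path = Path(root) / name
        if path.is_file():
            parts.append(path.read_text())
    # Brief directory summary -- deliberately small, not a graph query.
    dirs = sorted(p.name for p in Path(root).iterdir() if p.is_dir())
    parts.append("Top-level dirs: " + ", ".join(dirs))
    return "\n\n".join(parts)
```

The point is that this context is cheap to assemble and keeps the agent's first tool calls targeted.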
```bash
# Use Rig for best performance with thinking and reasoning models (recommended)
CODEGRAPH_AGENT_ARCHITECTURE=rig ./codegraph start stdio

# Use default ReAct for traditional instruction models
./codegraph start stdio

# Use LATS for complex analysis
CODEGRAPH_AGENT_ARCHITECTURE=lats ./codegraph start stdio
```

All architectures use the same 8 graph analysis tools and tier-aware prompting—only the reasoning strategy differs.
Here's something clever: CodeGraph automatically adjusts its behavior based on the context window you configure for the CodeGraph agent's LLM.
Running a small local model? Get focused, efficient queries.
Using GPT-5.1 or Claude with 200K context? Get comprehensive, exploratory analysis.
Using grok-4-1-fast-reasoning with 2M context? Get detailed analysis with intelligent result management.
The agent uses only as many steps as it needs to produce an answer, so tool execution times vary with the query and the amount of data indexed in the database. During development, the agent averaged 3-6 steps per answer across test scenarios.

The agent is stateless: it keeps conversational memory only for the span of a single tool execution and does not accumulate context across chained tool calls. Your client of choice already handles that accumulation, so CodeGraph only needs to provide answers.
| Your Model | CodeGraph's Behavior |
|---|---|
| < 50K tokens | Terse prompts, max 3 steps |
| 50K-150K | Balanced analysis, max 5 steps |
| 150K-500K | Detailed exploration, max 6 steps |
| > 500K (Grok, etc.) | Comprehensive analysis, max 8 steps |
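The tier selection above can be sketched as a simple threshold mapping. This is an illustrative Python sketch using the thresholds from the table; the real implementation and its internals may differ.

```python
# Map a configured context window (in tokens) to prompting style and step
# budget, using the tier thresholds from the table above.

def select_tier(context_window_tokens: int) -> dict:
    if context_window_tokens < 50_000:
        return {"style": "terse", "max_steps": 3}
    if context_window_tokens <= 150_000:
        return {"style": "balanced", "max_steps": 5}
    if context_window_tokens <= 500_000:
        return {"style": "detailed", "max_steps": 6}
    return {"style": "comprehensive", "max_steps": 8}  # hard cap: 8 steps

tier = select_tier(128_000)  # the 128K default lands in the balanced tier
```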
Hard cap: Maximum 8 steps regardless of tier (10 with env override). This prevents runaway costs and context overflow while still allowing thorough analysis.
Same tool, automatically optimized for your setup.
CodeGraph includes multi-layer protection against context overflow—preventing expensive failures when tool results exceed your model's limits.
Per-Tool Result Truncation:
- Each tool result is limited based on your configured context window
- Large results (e.g., dependency trees with 1000+ nodes) are intelligently truncated
- Truncated results include `_truncated: true` metadata so the agent knows data was cut
- Array results keep the most relevant items that fit within limits
Context Accumulation Guard:
- Monitors total accumulated context across multi-step reasoning
- Fails fast with clear error message if accumulated tool results exceed safe threshold
- Threshold: 80% of context window × 4 (conservative estimate for token overhead)
Configure via environment:
```bash
# CRITICAL: Set this to match your agent's LLM context window
CODEGRAPH_CONTEXT_WINDOW=128000  # Default: 128K

# Per-tool result limit derived automatically: context_window × 2 bytes
# Accumulation limit derived automatically: context_window × 4 × 0.8 bytes
```

Why this matters: Without these guards, a single agentic_dependency_analysis on a large codebase could return 6M+ tokens—far exceeding most models' limits and causing expensive failures.
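The derived limits can be sketched directly from the formulas above. This is an illustrative Python sketch of the arithmetic; the actual guard logic inside CodeGraph may differ in detail.

```python
# Derive the per-tool and accumulation limits from the configured context
# window, using the formulas stated above.

def derive_limits(context_window_tokens: int) -> dict:
    per_tool_bytes = context_window_tokens * 2              # context_window × 2
    accumulated_bytes = int(context_window_tokens * 4 * 0.8)  # 80% of window × 4
    return {"per_tool": per_tool_bytes, "accumulated": accumulated_bytes}

def check_accumulated(total_bytes: int, limits: dict) -> None:
    """Fail fast once accumulated tool results exceed the safe threshold."""
    if total_bytes > limits["accumulated"]:
        raise RuntimeError("accumulated tool results exceed safe threshold")

limits = derive_limits(128_000)  # 128K default -> 256_000 / 409_600 bytes
```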
We don't pick sides in the "embeddings vs keywords" debate. CodeGraph combines:
- 70% vector similarity (semantic understanding)
- 30% lexical search (exact matches matter)
- Graph traversal (relationships and context)
- Optional reranking (cross-encoder precision)
The result? You find `handleUserAuth` when you search for "login logic"—but also when you search for "handleUserAuth".
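The 70/30 blend above can be sketched as a weighted sum. This is an illustrative Python sketch using only the weights stated in this README; the real ranking pipeline also includes graph traversal and optional reranking.

```python
# Blend semantic similarity with lexical (exact-match) relevance using the
# 70/30 weights described above.

def hybrid_score(vector_sim: float, lexical_score: float) -> float:
    return 0.7 * vector_sim + 0.3 * lexical_score

# An exact identifier hit with only modest semantic similarity still ranks
# high, which is why searching "handleUserAuth" literally also works:
exact_hit = hybrid_score(vector_sim=0.55, lexical_score=1.0)
semantic_only = hybrid_score(vector_sim=0.80, lexical_score=0.0)
```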
When you connect CodeGraph to Claude Code, Cursor, or any MCP-compatible agent:
Before: Your AI reads files one by one, grepping around, burning tokens on context-gathering.
After: Your AI calls agentic_dependency_analysis("UserService") and instantly knows what breaks if you refactor it.
This isn't incremental improvement. It's the difference between an AI that searches your code and one that understands it.
```bash
# Clone and build with all features
git clone https://github.com/yourorg/codegraph-rust
cd codegraph-rust
./install-codegraph-full-features.sh

# Local persistent storage
surreal start --bind 0.0.0.0:3004 --user root --pass root file://$HOME/.codegraph/surreal.db

# Apply the database schema
cd schema && ./apply-schema.sh

# Index your project
codegraph index /path/to/project -r -l rust,typescript,python
```

🔒 Security Note: Indexing automatically respects `.gitignore` and filters out common secrets patterns (`.env`, `credentials.json`, `*.pem`, API keys, etc.). Your secrets won't be embedded or exposed to the agent.
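The secret-filtering idea can be sketched with simple glob matching. This is an illustrative Python sketch; the pattern list comes from the note above and the real filter in CodeGraph is more extensive.

```python
# Skip files matching common secret patterns before indexing.
from fnmatch import fnmatch

# Patterns taken from the security note above; the real list is larger.
SECRET_PATTERNS = [".env", "credentials.json", "*.pem"]

def is_indexable(filename: str) -> bool:
    """Return False for files that look like secrets."""
    return not any(fnmatch(filename, pat) for pat in SECRET_PATTERNS)

print(is_indexable("server.pem"))  # False: matches *.pem
print(is_indexable("main.rs"))     # True
```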
Add to your MCP config:
```json
{
  "mcpServers": {
    "codegraph": {
      "command": "/full/path/to/codegraph",
      "args": ["start", "stdio", "--watch"]
    }
  }
}
```

That's it. Your AI now understands your codebase.
View Interactive Architecture Diagram - Explore the full workspace structure with clickable components and layer filtering.
```
┌─────────────────────────────────────────────────────────────────┐
│                    Claude Code / MCP Client                     │
└─────────────────────────────────┬───────────────────────────────┘
                                  │ MCP Protocol
                                  ▼
┌─────────────────────────────────────────────────────────────────┐
│                     CodeGraph MCP Server                        │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                   Agentic Tools Layer                     │  │
│  │  ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────────────┐  │  │
│  │  │  Rig    │ │  ReAct  │ │  LATS   │ │ Tool Execution  │  │  │
│  │  │  Agent  │ │  Agent  │ │  Agent  │ │    Pipeline     │  │  │
│  │  └────┬────┘ └────┬────┘ └────┬────┘ └────────┬────────┘  │  │
│  └───────┼───────────┼───────────┼───────────────┼───────────┘  │
│          └───────────┴───────────┴───────────────┘              │
│                              │                                  │
│  ┌───────────────────────────┼───────────────────────────────┐  │
│  │                  Inner Graph Tools                        │  │
│  │  ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐   │  │
│  │  │  Transitive  │ │     Call     │ │    Coupling      │   │  │
│  │  │ Dependencies │ │    Chains    │ │    Metrics       │   │  │
│  │  └──────────────┘ └──────────────┘ └──────────────────┘   │  │
│  │  ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐   │  │
│  │  │   Reverse    │ │    Cycle     │ │      Hub         │   │  │
│  │  │    Deps      │ │  Detection   │ │     Nodes        │   │  │
│  │  └──────────────┘ └──────────────┘ └──────────────────┘   │  │
│  └───────────────────────────┬───────────────────────────────┘  │
└──────────────────────────────┼──────────────────────────────────┘
                               │
┌──────────────────────────────┼──────────────────────────────────┐
│                         SurrealDB                               │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐    │
│  │   Nodes     │ │   Edges     │ │  Chunks + Embeddings    │    │
│  │  (AST +     │ │  (calls,    │ │  (HNSW vector index)    │    │
│  │   FastML)   │ │   imports)  │ │                         │    │
│  └─────────────┘ └─────────────┘ └─────────────────────────┘    │
│                                                                 │
│  ┌────────────────────────────────────────────────────────────┐ │
│  │              SurrealQL Graph Functions                     │ │
│  │  fn::semantic_search_nodes_via_chunks                      │ │
│  │  fn::semantic_search_chunks_with_context                   │ │
│  │  fn::get_transitive_dependencies                           │ │
│  │  fn::trace_call_chain                                      │ │
│  │  fn::calculate_coupling_metrics                            │ │
│  └────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
Key insight: The agentic tools don't just call one function. They reason about which graph operations to perform, chain them together, and synthesize results. A single agentic_dependency_analysis call might:
- Search for the target component semantically
- Get its direct dependencies
- Trace transitive dependencies
- Check for circular dependencies
- Calculate coupling metrics
- Identify hub nodes that might be affected
- Synthesize all findings into an actionable answer
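The steps above can be sketched as a single orchestration function. This is a hypothetical Python sketch: the `graph` callbacks are stand-ins for CodeGraph's internal graph operations, not its real API.

```python
# Hypothetical sketch of the multi-step dependency-analysis flow listed
# above, driven by pluggable graph operations.

def agentic_dependency_analysis(target, graph):
    node = graph["resolve"](target)              # 1. semantic lookup of target
    return {                                     # 7. synthesize all findings
        "target": node,
        "direct": graph["deps"](node),           # 2. direct dependencies
        "transitive": graph["transitive"](node), # 3. transitive dependencies
        "cycles": graph["cycles"](node),         # 4. circular dependencies
        "coupling": graph["coupling"](node),     # 5. coupling metrics
        "hubs": graph["hubs"](node),             # 6. affected hub nodes
    }

# Stub graph purely for illustration:
stub = {
    "resolve": lambda t: t,
    "deps": lambda n: ["AuthRepo"],
    "transitive": lambda n: ["AuthRepo", "DbPool"],
    "cycles": lambda n: [],
    "coupling": lambda n: {"fan_in": 3, "fan_out": 2},
    "hubs": lambda n: ["DbPool"],
}
report = agentic_dependency_analysis("UserService", stub)
```

The design point is that the agent chains these operations itself, so a single tool call yields a synthesized report rather than raw graph rows.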
CodeGraph uses tree-sitter for initial parsing, enhances results with FastML algorithms, and supports:
Rust • Python • TypeScript • JavaScript • Go • Java • C++ • C • Swift • Kotlin • C# • Ruby • PHP • Dart
Embedding models (any model with dimensions 384-4096):
- Local: Ollama, LM Studio, ONNX Runtime
- Cloud: OpenAI, Jina AI

LLM providers:
- Local: Ollama, LM Studio
- Cloud: Anthropic Claude, OpenAI, xAI Grok, any OpenAI-compatible API

Database:
- SurrealDB with HNSW vector index (2-5ms queries)
- Free cloud tier available at surrealdb.com/cloud
Global config in `~/.codegraph/config.toml`:

```toml
[embedding]
provider = "ollama"
model = "qwen3-embedding:0.6b"
dimension = 1024

[llm]
provider = "anthropic"
model = "claude-sonnet-4"

[database.surrealdb]
connection = "ws://localhost:3004"
namespace = "ouroboros"
database = "codegraph"
```

See INSTALLATION_GUIDE.md for complete configuration options.
Keep your index fresh automatically:

```bash
# With MCP server (recommended)
codegraph start stdio --watch

# Standalone daemon
codegraph daemon start /path/to/project --languages rust,typescript
```

Changes are detected, debounced, and re-indexed in the background.
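Debouncing here means waiting for a quiet period after the last change before re-indexing. This is a minimal illustrative Python sketch of that idea; the quiet-period value is made up and CodeGraph's daemon may work differently.

```python
# Minimal debounce sketch: only trigger re-indexing once no new file
# changes have arrived for a quiet period.
import time

class Debouncer:
    def __init__(self, quiet_period_s: float = 0.5):  # illustrative value
        self.quiet_period_s = quiet_period_s
        self.last_event = None

    def on_change(self) -> None:
        """Record a file-change event, resetting the quiet-period timer."""
        self.last_event = time.monotonic()

    def should_reindex(self) -> bool:
        """True once the quiet period has elapsed since the last change."""
        if self.last_event is None:
            return False
        return time.monotonic() - self.last_event >= self.quiet_period_s
```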
- More language support
- Cross-repository analysis
- Custom graph schemas
- Plugin system for custom analyzers
CodeGraph exists because we believe AI coding assistants should be augmented, not replaced. The best AI-human collaboration happens when the AI has deep context about what you're working with.
We're not trying to replace your IDE, your type checker, or your tests. We're giving your AI the context it needs to actually help.
Your codebase is a graph. Let your AI see it that way.
MIT
- Installation Guide
- SurrealDB Cloud (free tier)
- Jina AI (free API tokens)
- Ollama (local models)

