A modular, agentic chatbot platform built with React, Node.js, and the Model Context Protocol (MCP). Features multiple AI-powered "activities" (personalities/agents) that can use custom tools via MCP servers.
```
┌─────────────────────────────────────────────────────────────┐
│ Frontend (React)                                            │
│  - Activity Selector                                        │
│  - Chat Interface with streaming                            │
│  - Message rendering (markdown, tables, HTML)               │
│  - Thinking/reasoning display                               │
└─────────────────┬───────────────────────────────────────────┘
                  │ HTTP/SSE
┌─────────────────▼───────────────────────────────────────────┐
│ Agent Server (Node.js)                                      │
│  - Activity routing                                         │
│  - LLM orchestration (Gemini, OpenAI-compatible)            │
│  - Agent loop with streaming                                │
│  - Local tools (memory)                                     │
│  - MCP client manager                                       │
└─────────────────┬───────────────────────────────────────────┘
                  │ MCP over HTTP
         ┌────────┴────────┐
         │                 │
┌────────▼───────┐ ┌───────▼────────┐
│   MCP Server   │ │   MCP Server   │
│    (Trivia)    │ │  (Web Fetch)   │
│                │ │                │
│  - Questions   │ │  - Web search  │
│  - Scoring     │ │  - Fetch pages │
└────────────────┘ └────────────────┘
```
- Frontend: React + TypeScript, Vite, Tailwind CSS, Zustand (state management)
- Backend: Hono (web framework), Vercel AI SDK, MCP SDK
- LLM Providers: Google Gemini, OpenAI-compatible APIs (local or remote)
- MCP: Model Context Protocol for modular tool servers
Activities: Each activity represents a distinct agent/personality with:
- Unique system prompt and personality
- Specific LLM model and configuration
- Access to specific MCP servers (tools)
- Optional local tools (e.g., memory)
- Theme and UI customization
- Chat history persistence settings
MCP Servers: Standalone HTTP services that expose tools:
- Built using `@modelcontextprotocol/sdk`
- Provide tools via JSON schema
- Run as separate Docker containers
- Stateless or stateful (using userId from metadata)
Agent Loop: The main execution flow in agent-server/src/agent/loop.ts:
- Receives user message and chat history
- Converts to LLM-compatible format
- Streams LLM response with tool calls
- Executes tools via MCP or local handlers
- Yields text deltas, reasoning, and tool-call events to frontend
- Continues loop until LLM completes response
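In outline, the loop can be thought of as an async generator that interleaves text and tool events. The sketch below is a simplified illustration only; the event shapes and the `LlmStep` stand-in for streamed LLM output are assumptions, not the actual types used in `loop.ts`:

```typescript
// Simplified sketch of the agent loop (illustrative; the real logic is in
// agent-server/src/agent/loop.ts). LlmStep stands in for streamed LLM output.
type AgentEvent =
  | { type: 'text-delta'; text: string }
  | { type: 'tool-call'; name: string; args: unknown }
  | { type: 'tool-result'; name: string; result: string }
  | { type: 'done' };

type LlmStep =
  | { kind: 'text'; text: string }
  | { kind: 'tool'; name: string; args: unknown };

async function* agentLoop(
  steps: LlmStep[],
  tools: Record<string, (args: unknown) => Promise<string>>,
): AsyncGenerator<AgentEvent> {
  for (const step of steps) {
    if (step.kind === 'text') {
      // Text deltas are streamed straight to the frontend
      yield { type: 'text-delta', text: step.text };
    } else {
      // Tool calls are surfaced as events, then executed (MCP or local handler)
      yield { type: 'tool-call', name: step.name, args: step.args };
      const result = await tools[step.name](step.args);
      yield { type: 'tool-result', name: step.name, result };
    }
  }
  yield { type: 'done' }; // loop ends when the LLM completes its response
}
```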
Error Recovery: Handling of MCP tool failures:
- When tools fail with recoverable errors (wrong arguments, validation issues), errors are returned as text observations to the LLM
- LLM reads the error message, understands the problem, and retries with corrected arguments
- Hallucination detection prevents infinite loops by tracking identical tool calls
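That duplicate-call tracking can be sketched roughly as follows. The key format and the interface here are assumptions for illustration; the real logic lives in the agent server:

```typescript
// Illustrative sketch of duplicate tool-call tracking used to break retry loops.
function makeRetryTracker(maxRetries = 2) {
  const counts = new Map<string, number>();
  return {
    // Returns true once the same tool + arguments combination has been
    // attempted more than maxRetries times, signalling a likely loop.
    record(toolName: string, args: unknown): boolean {
      const key = `${toolName}:${JSON.stringify(args)}`;
      const seen = (counts.get(key) ?? 0) + 1;
      counts.set(key, seen);
      return seen > maxRetries;
    },
  };
}
```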
- Node.js 20+
- pnpm 8+
- Docker & Docker Compose
- PostgreSQL (or use Docker)
- Clone and install dependencies:

```bash
git clone <repository>
cd agentic-chatbot
pnpm install
```

- Set up environment variables:

```bash
cp .env.example .env
# Edit .env with your API keys
```

Required environment variables:

```bash
# LLM API Keys
GOOGLE_API_KEY=your_gemini_api_key_here
AMA_API_KEY=for_ama_activity
AMA_API_URL=base_url_for_ama_activity

# CORS (comma-separated for multiple origins)
CORS_ORIGIN=http://localhost:5173

# MCP Server URLs (defaults work with docker-compose)
MCP_TRIVIA_URL=http://localhost:3001/mcp
MCP_WEB_URL=http://localhost:3002/mcp
```

- Run with Docker Compose:

```bash
docker-compose up --build
```

Services will be available at:
- Frontend: http://localhost:5173
- Agent Server: http://localhost:3000
- Trivia MCP: http://localhost:3001
- Web MCP: http://localhost:3002
- Start the database (PostgreSQL)
- Run migrations:

```bash
cd packages/agent-server
pnpm db:migrate
```

- Start services (in separate terminals):

```bash
# MCP Trivia Server (TypeScript)
cd packages/mcp-trivia
pnpm dev

# MCP Web Server (Python)
cd packages/mcp-web-py
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e .
python -m src.server

# Agent Server
cd packages/agent-server
pnpm dev

# Frontend
cd packages/ui
pnpm dev
```

```
agentic-chatbot/
├── packages/
│   ├── agent-server/              # Main backend server
│   │   ├── src/
│   │   │   ├── agent/
│   │   │   │   ├── loop.ts        # Main agent execution loop
│   │   │   │   ├── llm/
│   │   │   │   │   └── providers.ts  # LLM provider configs (Gemini, OpenAI)
│   │   │   │   ├── mcp/
│   │   │   │   │   └── manager.ts    # MCP client manager (global singleton)
│   │   │   │   └── tools/
│   │   │   │       └── memory.ts     # Local memory tool (DB storage)
│   │   │   ├── config/
│   │   │   │   └── activities/
│   │   │   │       ├── index.ts      # Activity registry
│   │   │   │       ├── trivia.ts     # Trivia activity config
│   │   │   │       └── ama.ts        # AMA activity config
│   │   │   ├── db/
│   │   │   │   ├── schema.ts      # Drizzle schema
│   │   │   │   ├── client.ts      # DB client
│   │   │   │   └── migrate.ts     # Migration runner
│   │   │   ├── routes/
│   │   │   │   ├── chat.ts        # Chat streaming endpoint
│   │   │   │   └── activities.ts  # Activity list endpoint
│   │   │   └── index.ts           # Server entry point (CORS, routing)
│   │   └── drizzle/               # Migration files
│   │
│   ├── mcp-trivia/                # Trivia MCP server
│   │   └── src/
│   │       ├── server.ts          # MCP server implementation
│   │       └── data/
│   │           └── questions.ts   # Trivia questions database
│   │
│   ├── mcp-web/                   # Web fetch MCP server
│   │   └── src/
│   │       └── server.ts          # Web fetching tools
│   │
│   ├── ui/                        # React frontend
│   │   └── src/
│   │       ├── components/
│   │       │   ├── ActivitySelector.tsx
│   │       │   ├── MockWallet.tsx    # User ID management
│   │       │   └── Chat/
│   │       │       ├── ChatContainer.tsx
│   │       │       ├── MessageBubble.tsx  # Markdown rendering
│   │       │       └── ChatInput.tsx
│   │       ├── stores/
│   │       │   └── chatStore.ts   # Zustand state management
│   │       ├── hooks/
│   │       │   └── useChat.ts     # Chat hook with streaming
│   │       └── utils/
│   │           └── uuid.ts        # UUID polyfill
│   │
│   └── shared/                    # Shared TypeScript types
│       └── src/
│           └── types/
│               ├── activities.ts  # ActivityConfig, LLMConfig, McpServerConfig
│               └── messages.ts    # ChatMessage types
│
├── docker-compose.yml             # Docker orchestration
├── .env.example                   # Environment variable template
└── README.md                      # This file
```
Activities are AI agents with specific personalities, tools, and configurations.
Create a new file in packages/agent-server/src/config/activities/:
```typescript
// packages/agent-server/src/config/activities/my-activity.ts
import type { ActivityConfig } from '@agentic/shared';

export const myActivity: ActivityConfig = {
  id: 'my-activity',
  name: 'My Custom Agent',
  description: 'A helpful agent that does X',
  greetingMessage: "Hello! I can help you with X. What would you like to do?",
  systemPrompt: `You are a helpful AI assistant specialized in X.

Your capabilities:
- Task A
- Task B
- Task C

Guidelines:
- Be concise and clear
- Use the available tools when needed
- Format responses using markdown

Available tools:
- tool_name: Description of what it does`,
  llm: {
    provider: 'gemini', // or 'openai-compatible'
    model: 'gemini-2.5-flash', // or your model name
    temperature: 0.7, // 0.0 = deterministic, 1.0 = creative
  },
  // MCP servers this activity can access
  mcpServers: [
    {
      name: 'my-mcp-server',
      url: process.env.MY_MCP_URL || 'http://localhost:3003/mcp',
    },
  ],
  // Local tools (currently only 'memory' available)
  localTools: ['memory'],
  // UI theming
  theme: {
    primaryColor: '#3b82f6', // Tailwind blue-500
    name: 'my-activity',
  },
  // Whether to persist chat history between sessions
  persistChatHistory: true, // true = remember, false = fresh each time
};
```

Add your activity to the registry:
```typescript
// packages/agent-server/src/config/activities/index.ts
import type { ActivityConfig } from '@agentic/shared';
import { triviaActivity } from './trivia.js';
import { amaActivity } from './ama.js';
import { myActivity } from './my-activity.js'; // Add this

const activities: Record<string, ActivityConfig> = {
  trivia: triviaActivity,
  ama: amaActivity,
  'my-activity': myActivity, // Add this
};

export function getActivityConfig(id: string): ActivityConfig | undefined {
  return activities[id];
}

export function getAllActivities(): ActivityConfig[] {
  return Object.values(activities);
}
```

Rebuild and restart:

```bash
docker-compose down
docker-compose up --build
```

Your new activity will appear in the activity selector.
Gemini:

```typescript
llm: {
  provider: 'gemini',
  model: 'gemini-2.5-flash', // or 'gemini-2.0-flash-thinking-exp'
  temperature: 0.7,
}
```

OpenAI-compatible (local or remote):

```typescript
llm: {
  provider: 'openai-compatible',
  model: 'gpt-4', // or local model name
  baseUrl: 'http://localhost:8000/v1', // or OpenAI API
  apiKey: process.env.OPENAI_API_KEY,
  temperature: 0.7,
}
```

Activities support dynamic template tags in system prompts to inject user context at runtime. This enables personalization based on user ID, timezone, locale, and other metadata.
| Variable | Example | Description | Source |
|---|---|---|---|
| `{{userId}}` | `user_abc12345` | Unique user identifier | Request body (localStorage) |
| `{{serverTime}}` | `2025-12-02T14:30:00.123Z` | Current server time in UTC ISO format | Server |
| `{{userIp}}` | `192.168.1.1` | User's IP address (for geolocation) | HTTP headers (`x-forwarded-for`) |
| `{{userCountry}}` | `EE` | User's country code | HTTP headers (`cf-ipcountry`, Cloudflare only) |
| `{{userTimezone}}` | `Europe/Tallinn` | User's timezone | Frontend Intl API |
| `{{userLocale}}` | `et-EE` | User's locale/language setting | Frontend `navigator.language` |
| `{{userLanguage}}` | `et` | Language code (parsed from locale) | Derived from `userLocale` |
| `{{userRegion}}` | `EE` | Region code (parsed from locale) | Derived from `userLocale` |
| `{{localTime}}` | `12/02/2025, 16:30:00` | Current time in user's timezone | Server time + `userTimezone` |
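For illustration, substitution along these lines could be implemented roughly as below. This is a sketch under stated assumptions (regex-based, non-nested blocks only); the platform's actual renderer may differ:

```typescript
// Illustrative sketch of {{variable}} and {{#if}} template rendering.
function renderTemplate(
  tpl: string,
  vars: Record<string, string | undefined>,
): string {
  // {{#if name}}...{{else}}...{{/if}}: keep the branch matching whether
  // the variable is present (non-nested blocks only).
  let out = tpl.replace(
    /\{\{#if (\w+)\}\}([\s\S]*?)(?:\{\{else\}\}([\s\S]*?))?\{\{\/if\}\}/g,
    (_m: string, name: string, yes: string, no: string = '') =>
      vars[name] ? yes : no,
  );
  // Simple {{variable}} substitution; unknown variables become empty.
  out = out.replace(/\{\{(\w+)\}\}/g, (_m: string, name: string) => vars[name] ?? '');
  return out;
}
```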
Simple Variables:

```typescript
systemPrompt: `You are a helpful assistant.
User ID: {{userId}}
Current Time: {{serverTime}}
`
```

Conditional Blocks (only show if variable exists):

```typescript
systemPrompt: `You are a helpful assistant.
{{#if userCountry}}User is from {{userCountry}}.
{{/if}}
`
```

If-Else Blocks:

```typescript
systemPrompt: `You are a helpful assistant.
{{#if userTimezone}}
User's timezone: {{userTimezone}}
Local time: {{localTime}}
{{else}}
User's timezone unknown - using UTC.
{{/if}}
`
```

```typescript
// packages/agent-server/src/config/activities/my-activity.ts
export const myActivity: ActivityConfig = {
  id: 'my-activity',
  name: 'Assistant',
  description: 'A timezone and locale-aware assistant',
  systemPrompt: `You are a helpful assistant.

USER CONTEXT:
- User ID: {{userId}}
- Current Time (UTC): {{serverTime}}
{{#if userTimezone}}- User Timezone: {{userTimezone}}
- Local Time: {{localTime}}
{{/if}}{{#if userCountry}}- User Country: {{userCountry}}
{{/if}}{{#if userLocale}}- User Locale: {{userLocale}} (Language: {{userLanguage}})
{{/if}}

Your role:
- Be nice
...
`,
  llm: {
    provider: 'gemini',
    model: 'gemini-2.5-flash',
    temperature: 0.7,
  },
  mcpServers: [],
  localTools: ['memory'],
  theme: {
    primaryColor: '#8b5cf6',
    name: 'my-activity',
  },
};
```

The trivia questions are stored in `packages/mcp-trivia/src/data/questions.ts`.
MCP servers are standalone services that provide tools to the agent. Here's how to create one.
```bash
mkdir -p packages/mcp-myservice/src
cd packages/mcp-myservice
```

```json
// packages/mcp-myservice/package.json
{
  "name": "@agentic/mcp-myservice",
  "version": "1.0.0",
  "type": "module",
  "main": "dist/server.js",
  "scripts": {
    "dev": "tsx watch src/server.ts",
    "build": "tsc",
    "start": "node dist/server.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.4",
    "zod": "^3.24.1"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "tsx": "^4.20.6",
    "typescript": "^5.7.2"
  }
}
```

```typescript
// packages/mcp-myservice/src/server.ts
import { createServer } from 'node:http';
import { randomUUID } from 'node:crypto';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'myservice',
  version: '1.0.0',
});

// Tool 1: Simple operation
server.tool(
  'my_tool_name',
  'Description of what this tool does',
  {
    // Input schema using Zod
    input: z.string().describe('Input parameter description'),
    count: z.number().optional().describe('Optional count parameter'),
  },
  async ({ input, count }) => {
    // Tool implementation
    const result = performOperation(input, count);
    return {
      content: [{
        type: 'text',
        text: JSON.stringify(result),
      }],
    };
  }
);

// Tool 2: Stateful operation (uses userId)
server.tool(
  'stateful_tool',
  'A tool that remembers user state',
  {
    action: z.enum(['get', 'set', 'delete']),
    value: z.string().optional(),
  },
  async ({ action, value }, extra) => {
    // Access userId from metadata
    const userId = (extra as any)?.meta?.userId || 'anonymous';
    // Your stateful logic here
    // ...
    return {
      content: [{
        type: 'text',
        text: JSON.stringify({ success: true }),
      }],
    };
  }
);

// Start HTTP server
async function main() {
  const port = parseInt(process.env.PORT || '3003');
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: () => randomUUID(),
  });
  await server.connect(transport);
  const httpServer = createServer((req, res) => {
    if (req.url === '/mcp') {
      transport.handleRequest(req, res);
    } else {
      res.writeHead(404);
      res.end('Not Found');
    }
  });
  httpServer.listen(port, () => {
    console.log(`My MCP server running on port ${port}`);
  });
}

main().catch(console.error);
```

```yaml
# docker-compose.yml
services:
  # ... existing services ...
  mcp-myservice:
    build:
      context: .
      dockerfile: packages/mcp-myservice/Dockerfile
    ports:
      - "3003:3003"
    environment:
      PORT: 3003
    restart: unless-stopped
```

```dockerfile
# packages/mcp-myservice/Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
RUN corepack enable pnpm
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY packages/mcp-myservice/package.json ./packages/mcp-myservice/
RUN pnpm install --frozen-lockfile
COPY packages/mcp-myservice ./packages/mcp-myservice
RUN pnpm --filter @agentic/mcp-myservice build

FROM node:20-alpine
WORKDIR /app
RUN corepack enable pnpm
COPY --from=builder /app/packages/mcp-myservice/dist ./dist
COPY --from=builder /app/packages/mcp-myservice/package.json ./
COPY --from=builder /app/packages/mcp-myservice/node_modules ./node_modules
EXPOSE 3003
CMD ["node", "dist/server.js"]
```

Add the MCP server to your activity configuration:

```typescript
// packages/agent-server/src/config/activities/my-activity.ts
mcpServers: [
  {
    name: 'myservice',
    url: process.env.MY_SERVICE_MCP_URL || 'http://mcp-myservice:3003/mcp',
  },
],
```

- Clear descriptions: LLM uses these to decide when to call tools
- Structured output: Return JSON for complex data
- Error handling: Return error objects, don't throw
- Schema validation: Use Zod for input validation
- Stateless when possible: Use userId only if state is needed
- Idempotent: Same input should produce same output
- Documentation: Add tool usage to activity systemPrompt
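For instance, the "return error objects, don't throw" guideline can be applied by wrapping tool handlers. The wrapper below is a sketch: the `isError` flag follows the MCP tool-result shape, but the helper itself is an illustration, not part of this codebase:

```typescript
// Illustrative wrapper: convert thrown errors into MCP-style error results
// so the LLM sees the failure as a text observation instead of a crash.
async function safeToolCall(
  fn: () => Promise<string>,
): Promise<{ content: { type: 'text'; text: string }[]; isError?: boolean }> {
  try {
    const text = await fn();
    return { content: [{ type: 'text', text }] };
  } catch (err) {
    // Returned as text so the LLM can read the message and retry
    // with corrected arguments.
    const message = err instanceof Error ? err.message : String(err);
    return {
      content: [{ type: 'text', text: `Error: ${message}` }],
      isError: true,
    };
  }
}
```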
The agent server includes intelligent error recovery for MCP tool failures, allowing the LLM to learn from errors and retry with corrected arguments.
Control error recovery behavior with environment variables:
```bash
# Enable/disable error recovery (default: true)
ENABLE_TOOL_RETRY=true

# Maximum identical retries before warning (default: 2)
MAX_TOOL_RETRIES=2
```

Development example (`.env`):

```bash
# LLM Configuration
GOOGLE_API_KEY=your_gemini_api_key
AMA_API_KEY=optional_openai_compatible_key
AMA_API_URL=https://api.openai.com/v1

# Frontend/Backend URLs
VITE_API_URL=http://localhost:3000
API_BASE_URL=http://localhost:5173

# CORS (comma-separated)
CORS_ORIGIN=http://localhost:5173

# MCP Server URLs (adjust if not using docker-compose)
MCP_TRIVIA_URL=http://localhost:3001/mcp
MCP_WEB_URL=http://localhost:3002/mcp

# Server Port
PORT=3000

# Debug Flags
DEBUG_PROMPTS=false
DEBUG_MCP=false

# Tool Error Recovery
ENABLE_TOOL_RETRY=true
MAX_TOOL_RETRIES=2
```

Production example:

```bash
# LLM Configuration
GOOGLE_API_KEY=your_production_api_key
AMA_API_KEY=your_production_api_key
AMA_API_URL=https://api.openai.com/v1

# Frontend/Backend URLs
VITE_API_URL=https://api.yourdomain.com
API_BASE_URL=https://yourdomain.com

# CORS
CORS_ORIGIN=https://yourdomain.com,https://www.yourdomain.com

# MCP Server URLs
MCP_TRIVIA_URL=http://mcp-trivia:3001/mcp
MCP_WEB_URL=http://mcp-web:3002/mcp

# Server Port
PORT=3000
```

`VITE_API_URL` is a build-time variable for the frontend:
```bash
# Local development (default)
VITE_API_URL=http://localhost:3000

# Production - set before building
VITE_API_URL=https://api.yourdomain.com

# Build with custom API URL
docker-compose build --build-arg VITE_API_URL=https://api.yourdomain.com ui
```

Important: After changing `VITE_API_URL`, you must rebuild the UI:

```bash
docker-compose build ui
docker-compose up -d ui
```

- Set up server:

```bash
# Clone repository
git clone <repository>
cd agentic-chatbot

# Copy and configure environment
cp .env.example .env
nano .env  # Edit with production values
```

- Configure environment variables:

```bash
# Production URLs
VITE_API_URL=http://your-server-ip:3000
CORS_ORIGIN=http://your-server-ip:5173
```

- Build and start services:

```bash
docker-compose build
docker-compose up -d
```

- Check logs:

```bash
docker-compose logs -f
```

- Run database migrations (first time only):

```bash
docker-compose exec agent-server pnpm db:migrate
```

Production monitoring essentials:
- Set up log aggregation (ELK stack, Loki, CloudWatch)
- Monitor error rates and latency
- Set up alerts for suspicious activity
- Track API usage and costs
- Monitor database performance
Sensitive data in logs:
- Never log API keys or passwords
- Redact user PII before logging
- Rotate logs regularly
- Secure log storage
If your `docker-compose` is too old (v1.x), use the `docker compose` plugin syntax instead, or upgrade to Compose v2.
Error: "Server already initialized"
Solution: The MCP manager uses a global singleton with persistent connections. If you see this error:
- Restart the agent-server container
- Check MCP server logs for errors
- Verify MCP URLs in .env are correct
Error: "CORS header 'Access-Control-Allow-Origin' does not match"
Solution:
- Verify `CORS_ORIGIN` in `.env` matches your frontend URL exactly
- For multiple origins: `CORS_ORIGIN=http://localhost:5173,http://192.168.1.100:5173`
- Check agent-server logs for CORS debug output
- Remember: no wildcard `*` for security reasons
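As an illustration, a comma-separated `CORS_ORIGIN` value can be parsed into the kind of origin callback that CORS middleware (e.g. Hono's `cors`) accepts: echo the origin back when it is allowed, reject everything else, and never use `*`. The helper name below is hypothetical:

```typescript
// Sketch: build an origin checker from a comma-separated CORS_ORIGIN value.
// Allowed origins are echoed back; anything else gets an empty string
// (no '*' wildcard, matching the security note above).
function makeOriginChecker(corsOrigin: string) {
  const allowed = corsOrigin
    .split(',')
    .map((o) => o.trim())
    .filter(Boolean);
  return (origin: string): string => (allowed.includes(origin) ? origin : '');
}
```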
Error: "VITE_API_URL not defined"
Solution: `VITE_API_URL` is a build-time variable:

```bash
export VITE_API_URL=http://your-server:3000
docker-compose build ui
```

Error: "pnpm lockfile out of date"

Solution:

```bash
pnpm install
docker-compose build
```

Error: Chat messages not streaming
Solution:
- Check browser console for errors
- Verify SSE connection in Network tab
- Check agent-server logs for streaming errors
- Ensure LLM provider is configured correctly
Error: "crypto.randomUUID is not a function"
Solution: This happens on non-HTTPS connections. The code already includes a polyfill in `packages/ui/src/utils/uuid.ts`, but ensure you're importing from there:

```typescript
import { generateUUID } from '../utils/uuid';
// Not: crypto.randomUUID()
```

Error: Memory tool returns empty results
Solution:
- Check browser's localStorage permissions
- Verify migrations ran: `docker-compose exec agent-server pnpm db:migrate`
- Check userId is being passed correctly (see the Mock Wallet component)
Error: "API key not valid"
Solution: Verify API keys in .env:
- Gemini: Get key from https://aistudio.google.com/apikey
- OpenAI-compatible: Check your provider's documentation
Error: "Rate limit exceeded"
Solution:
- Use a lighter model (e.g., gemini-2.5-flash instead of pro)
- Upgrade API tier