Open-source AI agent orchestration platform built on LangGraph and powered by the MCP & A2A protocols.
Self-host for free or let us deploy it for you. Your agents, your data, your infrastructure.
| Option | Best For | Get Started |
|---|---|---|
| Community (Free) | Developers, self-hosting | docker pull ghcr.io/ruska-ai/orchestra:latest |
| Managed Cloud | Teams wanting convenience | chat.ruska.ai |
| Enterprise | Organizations needing SSO, compliance, SLA | Contact Us |
This project includes tools for running shell commands and Docker container operations. For detailed information, refer to the relevant sections below.
We publish the backend image to GitHub Container Registry (GHCR). For the full Docker/Docker Compose deployment guide (env setup, services, migrations, troubleshooting), jump to Docker Deployment details.
```bash
docker pull ghcr.io/ruska-ai/orchestra:latest
```

- Docker installed
- Python 3.11 or higher
- Access to the OpenAI API (for the GPT-4o model) or the Anthropic API (for Claude 3.5 Sonnet)
- **Environment Variables**

  Create a `.env` file in the root directory and add your API key(s):

  ```bash
  # Backend
  cd <project-root>/backend
  cp .example.env .env

  # Frontend
  cd <project-root>/frontend
  cp .example.env .env
  ```

  Ensure that your `.env` file is not tracked by git by checking the `.gitignore`. A sample set of key entries is sketched after this setup list.

- **Start Docker Services**

  The following starts the database and pgAdmin, the GUI for viewing the Postgres DB:

  ```bash
  cd <project-root>
  docker compose up postgres pgadmin
  ```

- **Setup Server Environment**

  Assumes you're using Astral's uv. See the `./backend/scripts` directory for other dev utilities.

  ```bash
  # Change directory
  cd <project-root>/backend

  # Generate virtualenv
  uv venv

  # Activate
  source .venv/bin/activate

  # Install
  uv sync

  # Run (select "no" when prompted)
  bash scripts/dev.sh
  ```

- **Setup Client Environment**

  ```bash
  # Change directory
  cd <project-root>/frontend

  # Install
  npm install

  # Run
  npm run dev
  ```
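
For reference, here is a minimal sketch of the kind of entries a `.env` might contain. The variable names match the environment-variable tables later in this document; the values shown are placeholders, not real keys.

```bash
# Sketch only - substitute your own provider keys
# (see the environment-variable tables below for the full list of settings)
OPENAI_API_KEY="sk-..."
ANTHROPIC_API_KEY="sk-ant-..."
APP_ENV="development"
APP_LOG_LEVEL="DEBUG"
```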
This project uses Alembic for database migrations. Here's how to work with migrations:
- **Create the database (if it does not exist) and seed the initial user:**

  ```bash
  cd backend
  alembic upgrade head
  python -m seeds.user_seeder
  ```

- **Create and apply revisions:**

  ```bash
  # Create a new revision
  alembic revision -m "description_of_changes"

  # Apply the next revision
  alembic upgrade +1

  # Upgrade to a specific revision
  alembic upgrade <revis_id>

  # Downgrade one revision
  alembic downgrade -1

  # Downgrade to a specific revision
  alembic downgrade <revis_id>

  # Show revision history
  alembic history
  ```
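
Two other standard Alembic commands can be handy alongside the workflow above. This is only a sketch; `--autogenerate` assumes your models are wired into Alembic's `env.py`.

```bash
# Draft a revision from model changes (review the generated file before applying)
alembic revision --autogenerate -m "description_of_changes"

# Show the revision currently applied to the database
alembic current
```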
- **Start Ngrok on port 8931**

  ```bash
  ngrok http 8931
  ```

- **Run MCP server**

  ```bash
  npx @playwright/mcp@latest \
    --port 8931 \
    --executable-path $HOME/.cache/ms-playwright/chromium-<version>/chrome-linux/chrome \
    --vision
  ```
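
If you're unsure which Chromium build Playwright has installed (the `<version>` part of the path above), a quick way to check is to list the Playwright browser cache. This sketch assumes the default cache location used in the command above:

```bash
# List installed Playwright browsers to find the chromium-<version> directory
ls $HOME/.cache/ms-playwright/
```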
For organizations needing managed deployment, compliance, or dedicated support:
| Feature | Description |
|---|---|
| SSO/SAML | Integrate with your identity provider |
| Audit Logging | Comprehensive logs for compliance |
| Air-Gapped Deployment | Run in isolated environments |
| Priority Support | SLA-backed response times |
| Custom Integrations | Connect to your internal tools |
We partner with you to deploy Orchestra inside your infrastructure. Contact us to discuss your requirements.
This section covers deploying the Orchestra backend using Docker. For local development, see the sections above.
- Docker installed
- Docker Compose installed
- Access to AI provider API keys (OpenAI, Anthropic, etc.)
Pull the latest image from GitHub Container Registry:
```bash
docker pull ghcr.io/ruska-ai/orchestra:latest
```

Create a `.env.docker` file in the `backend/` directory:

```bash
cd backend
cp .example.env .env.docker
```

Update the following values for Docker networking:
```bash
# Database - use container name instead of localhost
POSTGRES_CONNECTION_STRING="postgresql://admin:test1234@postgres:5432/orchestra?sslmode=disable"

# Tools - use container names for internal services
SEARX_SEARCH_HOST_URL="http://search_engine:8080"
SHELL_EXEC_SERVER_URL="http://exec_server:3005/exec"
```

From the project root directory:
```bash
# Start database and backend
docker compose up postgres orchestra

# Or start all services
docker compose up
```

The API will be available at `http://localhost:8000`.
- API Docs: `http://localhost:8000/docs`
- Health Check: `http://localhost:8000/health`
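
As a quick smoke test once the containers are up, you can hit those endpoints from the host. A minimal sketch (the exact response body isn't documented here, so just check for an HTTP 200):

```bash
# Expect an HTTP 200 once the backend has finished starting
curl -i http://localhost:8000/health

# The interactive API docs are served at http://localhost:8000/docs
```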
| Service | Port | Description |
|---|---|---|
| `orchestra` | 8000 | Backend API |
| `postgres` | 5432 | PostgreSQL with pgvector |
| `pgadmin` | 4040 | Database admin UI |
| `minio` | 9000/9001 | S3-compatible file storage |
| `search_engine` | 8080 | SearXNG search engine |
| `exec_server` | 3005 | Shell execution server |
| `ollama` | 11434 | Local LLM inference (requires GPU) |
```yaml
services:
  # PGVector
  postgres:
    image: pgvector/pgvector:pg16
    container_name: postgres
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: test1234
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"

  # Server (use pre-built image or build locally)
  orchestra:
    image: ghcr.io/ruska-ai/orchestra:latest
    container_name: orchestra
    env_file: .env.docker
    ports:
      - "8000:8000"
    depends_on:
      - postgres
```

The build script copies the Docker deployment README into the image and handles tagging:
```bash
# From project root
bash backend/scripts/build.sh

# Or with custom tag
bash backend/scripts/build.sh v1.0.0
```

Alternatively, build the image via Docker Compose:

```bash
docker compose build orchestra
```

Or build it manually:

```bash
# Copy README first, then build
cp docker/README.md backend/README.md
cd backend
docker build -t orchestra:local .
```

| Variable | Description | Default |
|---|---|---|
| `APP_ENV` | Environment (development/production) | `development` |
| `APP_LOG_LEVEL` | Logging level | `DEBUG` |
| `APP_SECRET_KEY` | Application secret key | - |
| `JWT_SECRET_KEY` | JWT signing key | - |
| `USER_AGENT` | User agent string for requests | `ruska-dev` |
| `TEST_USER_ID` | Test user UUID | - |
| Variable | Description | Default |
|---|---|---|
| `POSTGRES_CONNECTION_STRING` | PostgreSQL connection string | - |
| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | - |
| `GROQ_API_KEY` | Groq API key | - |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `XAI_API_KEY` | xAI API key | - |
| `OLLAMA_BASE_URL` | Ollama server URL | - |
| Variable | Description | Default |
|---|---|---|
| `SEARX_SEARCH_HOST_URL` | SearXNG search endpoint | `http://localhost:8080` |
| `SHELL_EXEC_SERVER_URL` | Shell execution endpoint | `http://localhost:3005/exec` |
| `TAVILY_API_KEY` | Tavily search API key | - |
| Variable | Description | Default |
|---|---|---|
| `PRESIDIO_ANALYZE_HOST` | Presidio analyze endpoint | - |
| `PRESIDIO_ANONYMIZE_HOST` | Presidio anonymize endpoint | - |
| `PRESIDIO_API_KEY` | Presidio API key | - |
| Variable | Description | Default |
|---|---|---|
| `MINIO_HOST` | MinIO/S3 host URL | - |
| `S3_REGION` | S3 region | - |
| `ACCESS_KEY_ID` | S3 access key | - |
| `ACCESS_SECRET_KEY` | S3 secret key | - |
| `BUCKET` | S3 bucket name | `enso_dev` |
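
Putting the tables together, a filled-out `.env.docker` for an all-in-compose deployment might look roughly like the sketch below. The variable names come from the tables above; the secrets and keys are placeholders you must replace.

```bash
# Sketch of a .env.docker for the full compose stack - replace placeholder values
APP_ENV="production"
APP_LOG_LEVEL="INFO"
APP_SECRET_KEY="<generate-a-strong-random-value>"
JWT_SECRET_KEY="<generate-a-strong-random-value>"

POSTGRES_CONNECTION_STRING="postgresql://admin:test1234@postgres:5432/orchestra?sslmode=disable"

OPENAI_API_KEY="sk-..."
SEARX_SEARCH_HOST_URL="http://search_engine:8080"
SHELL_EXEC_SERVER_URL="http://exec_server:3005/exec"
```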
Run migrations inside the container:
```bash
# Using docker compose exec
docker compose exec orchestra alembic upgrade head

# Or run migrations before starting
docker compose run --rm orchestra alembic upgrade head
```

- Generate strong values for `APP_SECRET_KEY` and `JWT_SECRET_KEY` (see the sketch after this list)
- Use SSL/TLS termination (nginx, traefik, etc.)
- Restrict database access to internal networks
- Never expose `.env` files
- Configure appropriate resource limits in `docker-compose.yml`
- Use a reverse proxy for load balancing
- Enable PostgreSQL connection pooling for high traffic
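
For the first bullet, one reasonable way to generate strong values is shown below; this is only a sketch, and any cryptographically secure random generator will do:

```bash
# Generate random secrets suitable for APP_SECRET_KEY and JWT_SECRET_KEY
openssl rand -hex 32
python -c "import secrets; print(secrets.token_urlsafe(64))"
```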
The Dockerfile uses a multi-stage build:
- **Builder Stage**: Installs dependencies and compiles Python to bytecode (`.pyc`)
- **Runtime Stage**: Ships only the compiled bytecode for a smaller image size

Note: Migration files (`.py`) are preserved since Alembic requires source files.
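
If you want to confirm this in a built image, you can list the remaining source files. The sketch below assumes the application code is installed under `/app` inside the container and that a shell is available in the image; adjust the path for your build.

```bash
# Hypothetical check: list any remaining .py source files in the image
# (expects only Alembic migration files, assuming the app lives under /app)
docker run --rm --entrypoint sh ghcr.io/ruska-ai/orchestra:latest \
  -c 'find /app -name "*.py"'
```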
```bash
# Check logs
docker compose logs orchestra

# Verify environment file exists
ls -la backend/.env.docker
```

```bash
# Ensure postgres is running
docker compose ps postgres

# Check postgres logs
docker compose logs postgres
```
```bash
# Check what's using the port
lsof -i :8000
```

```yaml
# Or change the port mapping in docker-compose.yml
ports:
  - "8001:8000"  # Map to a different host port
```