Chain. Build. Run. All offline. LAO is how developers bend AI to their will—no cloud, no compromise.
LAO is a cross-platform desktop tool for chaining local AI models and plugins into powerful, agentic workflows. It supports prompt-driven orchestration, visual DAG editing, and full offline execution.
- Modular plugin system (Rust, local-first, dynamic loading)
- Offline DAG engine (retries, caching, lifecycle hooks)
- Prompt-driven agentic workflows (LLM-powered, system prompt file)
- Visual workflow builder (egui-based native GUI, drag & drop)
- CLI interface (run, validate, prompt, validate-prompts, plugin list)
- Prompt library (Markdown + JSON, for validation/fine-tuning)
- Test harness for prompt validation
- End-to-end execution from UI (execute and show logs/results)
- UI streaming run with real-time step events and parallel execution
- Node/edge editing in UI (drag, connect, edit, delete)
- Cross-platform support (Linux, macOS, Windows)
- Conditional/branching steps (output-based conditions)
- Multi-modal input (files, voice, images, video)
- Automated packaging (deb, rpm, AppImage, dmg, msi, zip)
- CI/CD pipeline (GitHub Actions, automated releases)
- Plugin explainability (`lao explain plugin <name>`)
- Plugin marketplace/discovery
- Live workflow status/logs in UI
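The DAG engine consumes YAML workflow files. A minimal two-step workflow might look like the sketch below; the field names (`id`, `plugin`, `depends_on`, etc.) are illustrative assumptions, not the real schema, which is documented in `docs/workflows.md`:

```yaml
# Illustrative only -- see docs/workflows.md for the actual schema.
name: summarize-audio
steps:
  - id: transcribe
    plugin: whisper          # hypothetical plugin name
    input: meeting.wav
  - id: summarize
    plugin: summarizer       # hypothetical plugin name
    depends_on: [transcribe]
```

Each step names a plugin, and `depends_on` edges define the DAG the engine executes (with retries and caching as listed above).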
```bash
# Run the native GUI with visual workflow builder
cargo run --bin lao-ui

# Run workflows from the command line
cargo run --bin lao-cli run workflows/test.yaml

# Generate workflows from natural language
cargo run --bin lao-cli prompt "Summarize this audio and tag action items"

# Validate the prompt library
cargo run --bin lao-cli validate-prompts

# Build all plugins for your platform
bash scripts/build-plugins.sh
```

LAO can generate and execute workflows from natural-language prompts using a local LLM (Ollama). The system prompt is editable at `core/prompt_dispatcher/prompt/system_prompt.txt`.
Example:

```bash
lao prompt "Refactor this Python file and add comments"
```

- Prompts and expected workflows: `core/prompt_dispatcher/prompt/prompt_library.md` and `.json`
- Validate with: `cargo run --bin lao-cli validate-prompts`
- Add new prompts to improve LLM output and test new plugins
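Each prompt-library entry pairs a natural-language prompt with the workflow the LLM is expected to emit, so `validate-prompts` can check generation quality. The JSON below is a hypothetical entry with assumed field names; the authoritative format is `core/prompt_dispatcher/prompt/prompt_library.json`:

```json
{
  "prompt": "Summarize this audio and tag action items",
  "expected_workflow": {
    "steps": [
      { "plugin": "whisper", "input": "audio" },
      { "plugin": "summarizer", "depends_on": ["whisper"] }
    ]
  }
}
```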
- Add new plugins by implementing the `LaoPlugin` trait, building as a `cdylib`, and placing the resulting library in the `plugins/` directory
- Expose a C ABI function named `plugin_entry_point` that returns a `Box<dyn LaoPlugin>`
- Add prompt/workflow pairs to the prompt library for validation and LLM tuning
- See `docs/plugins.md` and `docs/workflows.md` for details
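Putting those steps together, a plugin crate's `lib.rs` might look like the sketch below. The method names on `LaoPlugin` are assumptions for illustration (the real trait lives in LAO's core crates; see `docs/plugins.md`); only the `plugin_entry_point` name and the `Box<dyn LaoPlugin>` return type come from the steps above.

```rust
// Sketch of a LAO plugin crate. Assumption: the LaoPlugin trait's methods
// are illustrative -- consult docs/plugins.md for the real signatures.

// Hypothetical trait shape; the actual trait is defined by LAO's core.
pub trait LaoPlugin {
    /// Name used to reference the plugin from workflow files.
    fn name(&self) -> &str;
    /// Run one workflow step: input in, output out (shape assumed).
    fn run(&self, input: &str) -> Result<String, String>;
}

/// A trivial plugin that echoes its input back.
struct EchoPlugin;

impl LaoPlugin for EchoPlugin {
    fn name(&self) -> &str {
        "echo"
    }
    fn run(&self, input: &str) -> Result<String, String> {
        Ok(format!("echo: {input}"))
    }
}

// C ABI entry point the host loads from the cdylib; the symbol name
// `plugin_entry_point` is fixed by LAO's loader.
#[no_mangle]
pub extern "C" fn plugin_entry_point() -> Box<dyn LaoPlugin> {
    Box::new(EchoPlugin)
}
```

Build it with `crate-type = ["cdylib"]` in `Cargo.toml` and copy the resulting library into `plugins/` so LAO can discover it.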
- Architecture: `docs/architecture.md`
- Plugins: `docs/plugins.md`
- Workflows: `docs/workflows.md`
- CLI: `docs/cli.md`
- Observability: `docs/observability.md`
Cloud is optional. Intelligence is modular. Agents are composable.
LAO is how devs build AI workflows with total control.
No tokens. No latency. No lock-in.
Let’s define the category—one plugin at a time.