A feature-rich, interactive CLI for AWS Strands agents with token tracking, prompt templates, agent aliases, and extensive configuration options.
- **Agent Aliases** - Save agents as short names (`chat_loop pete` instead of full paths)
- **Auto-Setup** - Automatically install agent dependencies from `requirements.txt` or `pyproject.toml`
- **Audio Notifications** - Play a sound when the agent completes a turn (cross-platform support)
- **Harmony Support** - Specialized processing for the OpenAI Harmony format (gpt-oss models)
- **Command History** - Navigate previous queries with ↑/↓ arrows (persisted to `~/.chat_history`)
- **Multi-line Input** - Type `\\` to enter multi-line mode, with Ctrl+D to cancel and ↑ to edit previous lines
- **Session Management** - Save conversations as clean markdown files in `./.chat-sessions/` (project-local)
- **Copy Commands** - Copy responses, queries, code blocks, or entire conversations to the clipboard
- **Token Tracking** - Track tokens and costs per query and session
- **Prompt Templates** - Reusable prompts from `~/.prompts/`
- **Configuration** - YAML-based config with per-agent overrides
- **Status Bar** - Real-time metrics (queries, tokens, duration)
- **Session Summary** - Full statistics displayed on exit
- **Rich Formatting** - Enhanced markdown rendering with syntax highlighting
- **Error Recovery** - Automatic retry logic with exponential backoff
- **Agent Metadata** - Display model, tools, and capabilities
```bash
pip install basic-agent-chat-loop
```

That's it! The package will automatically create:

- `~/.chatrc` - Configuration file with recommended defaults
- `~/.prompts/` - Sample prompt templates (on first use)
Windows: Command history support (pyreadline3) is now installed automatically on Windows - no extra steps needed!
AWS Bedrock integration:
```bash
pip install basic-agent-chat-loop[bedrock]
```

For development or the latest features:
```bash
git clone https://github.com/Open-Agent-Tools/Basic-Agent-Chat-Loop.git
cd Basic-Agent-Chat-Loop
pip install -e ".[dev]"
```

See docs/INSTALL.md for detailed installation instructions and troubleshooting.
```bash
# Run with an agent path
chat_loop path/to/your/agent.py

# Or use an alias (after saving)
chat_loop myagent
```

Save frequently used agents for quick access:
```bash
# Save an agent as an alias
chat_loop --save-alias myagent path/to/agent.py

# Use the alias from anywhere
chat_loop myagent

# List all saved aliases
chat_loop --list-aliases

# Remove an alias
chat_loop --remove-alias myagent
```

Example with real agents:
```bash
# Save your agents
chat_loop --save-alias pete ~/agents/product_manager/agent.py
chat_loop --save-alias dev ~/agents/senior_developer/agent.py

# Use them from anywhere
cd ~/projects/my-app
chat_loop dev    # Get coding help
chat_loop pete   # Get product feedback
```

Aliases are stored in `~/.chat_aliases` and work from any directory.
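The alias file format is not shown in this README; as a rough illustration only (assuming a simple `name = path` layout, which may differ from the real `~/.chat_aliases`), alias resolution is just a lookup before the agent is loaded:

```python
# Hypothetical sketch of alias resolution; the real ~/.chat_aliases format may differ.
from pathlib import Path

ALIAS_FILE = Path.home() / ".chat_aliases"

def resolve_agent(target: str) -> Path:
    """Return the agent path for a saved alias, or treat the argument as a path."""
    if ALIAS_FILE.exists():
        for line in ALIAS_FILE.read_text().splitlines():
            if "=" in line:
                name, path = (part.strip() for part in line.split("=", 1))
                if name == target:
                    return Path(path).expanduser()
    return Path(target).expanduser()  # not an alias: fall back to a literal path

print(resolve_agent("pete"))  # e.g. ~/agents/product_manager/agent.py
```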
Automatically install agent dependencies with the `--auto-setup` flag (or `-a` for short):

```bash
# Auto-install dependencies when running an agent
chat_loop myagent --auto-setup
chat_loop path/to/agent.py -a

# Works with any of these dependency files:
# - requirements.txt (most common)
# - pyproject.toml (modern Python projects)
# - setup.py (legacy projects)
```

Smart detection: If you run an agent without `--auto-setup` and dependency files are detected, you'll see a helpful suggestion:
```bash
chat_loop myagent
💡 Found requirements.txt in agent directory. Run with --auto-setup (or -a) to install dependencies automatically
```

What gets installed:

- `requirements.txt` → `pip install -r requirements.txt`
- `pyproject.toml` → `pip install -e <agent_directory>`
- `setup.py` → `pip install -e <agent_directory>`

This makes sharing agents easier: just include a requirements.txt with your agent and users can install everything with one command.
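As a rough sketch of the detect-and-install behavior described above (not the package's actual implementation), the logic amounts to looking for a known dependency file next to the agent and picking the matching pip command:

```python
# Sketch of the auto-setup behavior described above (not the real implementation).
import subprocess
import sys
from pathlib import Path

def auto_setup(agent_path: str) -> None:
    agent_dir = Path(agent_path).expanduser().resolve().parent
    requirements = agent_dir / "requirements.txt"
    if requirements.exists():
        cmd = [sys.executable, "-m", "pip", "install", "-r", str(requirements)]
    elif (agent_dir / "pyproject.toml").exists() or (agent_dir / "setup.py").exists():
        cmd = [sys.executable, "-m", "pip", "install", "-e", str(agent_dir)]
    else:
        print("No dependency files found; nothing to install.")
        return
    subprocess.run(cmd, check=True)  # install into the current environment

auto_setup("path/to/agent.py")
```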
The package automatically creates sample templates in `~/.prompts/` on first use:

- `explain.md` - Explain code in detail
- `review.md` - Code review with best practices
- `debug.md` - Help debugging issues
- `optimize.md` - Performance optimization suggestions
- `test.md` - Generate test cases
- `document.md` - Add documentation
Use templates in chat:

```bash
chat_loop myagent

You: /review src/app.py
You: /explain utils.py
You: /test my_function
```

Create custom templates:
```bash
# Create your own template
cat > ~/.prompts/security.md <<'EOF'
# Security Review
Please review this code for security vulnerabilities:

{input}

Focus on:
- Input validation
- Authentication/authorization
- Data sanitization
- Common security patterns
EOF

# Use it in chat
You: /security auth.py
```

A configuration file (`~/.chatrc`) is automatically created on first use with recommended defaults. You can customize it to your preferences:
```yaml
features:
  auto_save: true        # Automatically save conversations on exit
  show_tokens: true      # Display token counts
  show_metadata: true    # Show agent model/tools info
  rich_enabled: true     # Enhanced formatting

ui:
  show_status_bar: true  # Top status bar
  show_duration: true    # Query duration

audio:
  enabled: true              # Play sound when agent completes
  notification_sound: null   # Custom WAV file (null = bundled sound)

harmony:
  enabled: auto                 # Harmony processing (auto/yes/no)
  show_detailed_thinking: true  # Show reasoning with labeled prefixes

behavior:
  max_retries: 3   # Retry attempts on failure
  timeout: 120.0   # Request timeout (seconds)

# Per-agent overrides
agents:
  'Product Pete':
    features:
      show_tokens: false
    audio:
      enabled: false   # Disable audio for this agent
```

Audio notifications alert you when the agent completes a response. Enabled by default with a bundled notification sound.
Platforms supported:

- macOS (using `afplay`)
- Linux (using `aplay` or `paplay`)
- Windows (using `winsound`)
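As an illustration of the platform support above (this is not the package's actual audio module), playing a WAV notification with those tools could look like:

```python
# Illustrative cross-platform WAV playback (not the package's actual implementation).
import platform
import shutil
import subprocess

def play_notification(wav_path: str) -> None:
    system = platform.system()
    if system == "Darwin":                       # macOS
        subprocess.run(["afplay", wav_path], check=False)
    elif system == "Windows":
        import winsound                          # stdlib on Windows
        winsound.PlaySound(wav_path, winsound.SND_FILENAME)
    else:                                        # Linux: try aplay, fall back to paplay
        player = shutil.which("aplay") or shutil.which("paplay")
        if player:
            subprocess.run([player, wav_path], check=False)

play_notification("/path/to/notification.wav")
```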
Configure audio in `~/.chatrc`:

```yaml
audio:
  enabled: true
  notification_sound: null   # Use bundled sound
  # Or specify a custom WAV file:
  # notification_sound: /path/to/custom.wav
```

Per-agent overrides:

```yaml
agents:
  'Silent Agent':
    audio:
      enabled: false   # Disable audio for this agent
```

See CONFIG.md for full configuration options.
| Command | Description |
|---|---|
| `#help` | Show help message |
| `#info` | Show agent details (model, tools) |
| `#context` | Show token usage and context statistics |
| `#templates` | List available prompt templates |
| `#sessions` | List all saved conversation sessions |
| `/name` | Use prompt template from `~/.prompts/name.md` |
| `#resume <#>` | Resume a previous session by number or ID |
| `#compact` | Save session and continue in new session with summary |
| `#copy` | Copy last response to clipboard (see variants below) |
| `#clear` | Clear screen and reset agent session |
| `#exit`, `#quit` | Exit chat (shows session summary) |
Save conversations automatically:

```yaml
# Enable auto-save in config
features:
  auto_save: true
```

Resume a previous conversation:
```bash
# In chat - list sessions
You: #sessions

Available Sessions (3):
  1. MyAgent - Jan 26, 14:30 - 15 queries
     "Can you help me build a REST API..."
  2. MyAgent - Jan 25, 09:15 - 7 queries
     "Explain async/await in Python..."

# Resume by number or session ID
You: #resume 1

Loading session...
✓ Found: MyAgent - Jan 26, 14:30 (15 queries, 12.5K tokens)
Restoring context...

MyAgent: I've reviewed our previous conversation about building a REST API.
         We discussed Flask routing and database models. Ready to continue!

# Continue conversation with restored context
You: Let's add authentication now
```

Compact current session:
When your conversation gets long, use `#compact` to save it and start fresh while preserving context:

```bash
You: #compact

Generating session summary...
Saved session: myagent_20251230_143022 (15 queries, 12.5K tokens)
Starting new session with summary...

MyAgent: I've reviewed our conversation about the REST API.
         We built Flask routes and database models. Ready to continue!

# Continue in new session - old queries compressed into summary
You: Now let's add authentication
```

View saved conversations:
Conversations are saved as clean markdown files in `./.chat-sessions/` (in the current directory):

```bash
ls -lh ./.chat-sessions/
# Shows files like: simple_sally_20251230_110627.md

# View a conversation
cat ./.chat-sessions/simple_sally_20251230_110627.md
```

Each saved session includes an auto-generated summary that enables fast, context-aware resumption without replaying all queries.
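As a small illustration of how those project-local files can be listed (assuming the `<agent>_<YYYYMMDD>_<HHMMSS>.md` naming shown above; this is not the package's own code), session listing is just a directory scan:

```python
# Sketch: list saved sessions from ./.chat-sessions/, assuming the
# <agent>_<YYYYMMDD>_<HHMMSS>.md filename pattern shown above.
from datetime import datetime
from pathlib import Path

def list_sessions(directory: str = "./.chat-sessions") -> None:
    for i, path in enumerate(sorted(Path(directory).glob("*.md"), reverse=True), start=1):
        agent, date_part, time_part = path.stem.rsplit("_", 2)
        started = datetime.strptime(f"{date_part}{time_part}", "%Y%m%d%H%M%S")
        print(f"{i}. {agent} - {started:%b %d, %H:%M} ({path.name})")

list_sessions()
```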
List all saved sessions:

```bash
chat_loop --list-sessions
```

Sessions are saved to `./.chat-sessions/` in your current working directory, providing context separation between different projects.
Quickly copy content to the clipboard. Available copy commands:

```bash
# Copy last agent response (default)
You: #copy

# Copy your last query
You: #copy query

# Copy entire conversation as markdown
You: #copy all

# Copy only code blocks from last response
You: #copy code
```

Example workflow:
```bash
You: Write a Python function to reverse a string

Agent: Here's a function to reverse a string:

    def reverse_string(s):
        return s[::-1]

You: #copy code
✓ Copied code blocks from last response to clipboard

# Now paste into your editor with Cmd+V (Mac) or Ctrl+V (Windows/Linux)
```
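`pyperclip` is the package's clipboard dependency; as a rough sketch of a `#copy code`-style helper (not the actual implementation), extracting fenced code blocks from the last response and copying them might look like:

```python
# Sketch of a "#copy code"-style helper using pyperclip (listed as a dependency);
# not the package's actual implementation.
import re
import pyperclip

def copy_code_blocks(markdown_response: str) -> int:
    """Copy all fenced code blocks from a markdown response to the clipboard."""
    blocks = re.findall(r"```[^\n]*\n(.*?)```", markdown_response, flags=re.DOTALL)
    pyperclip.copy("\n\n".join(block.rstrip() for block in blocks))
    return len(blocks)

response = "Here's a function:\n```python\ndef reverse_string(s):\n    return s[::-1]\n```"
print(copy_code_blocks(response), "code block(s) copied")
```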
Press `\\` to enter multi-line mode:

```bash
You: \\
... def factorial(n):
...     if n <= 1:
...         return 1
...     return n * factorial(n - 1)
...
[Press Enter on empty line to submit]
```
When `show_tokens: true` in config:

```
------------------------------------------------------------
Time: 6.3s • 1 cycle • Tokens: 4.6K (in: 4.4K, out: 237) • Cost: $0.017
```
Always shown on exit:

```
============================================================
Session Summary
------------------------------------------------------------
Duration: 12m 34s
Queries: 15
Tokens: 67.8K (in: 45.2K, out: 22.6K)
Total Cost: $0.475
============================================================
```
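As a small illustration of the arithmetic behind those numbers (the per-token rates here are hypothetical placeholders, not the package's pricing table), cost is just input and output tokens multiplied by their respective rates:

```python
# Illustrative token/cost math with made-up per-1K-token rates (real rates
# depend on the model; this is not the package's pricing table).
def estimate_cost(tokens_in: int, tokens_out: int,
                  rate_in_per_1k: float = 0.003, rate_out_per_1k: float = 0.015) -> float:
    return tokens_in / 1000 * rate_in_per_1k + tokens_out / 1000 * rate_out_per_1k

def fmt_tokens(count: int) -> str:
    return f"{count / 1000:.1f}K" if count >= 1000 else str(count)

tokens_in, tokens_out = 45200, 22600
print(f"Tokens: {fmt_tokens(tokens_in + tokens_out)} "
      f"(in: {fmt_tokens(tokens_in)}, out: {fmt_tokens(tokens_out)}) "
      f"| Cost: ${estimate_cost(tokens_in, tokens_out):.3f}")
```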
```python
from pathlib import Path

from basic_agent_chat_loop import ChatLoop

# Create chat interface
chat = ChatLoop(
    agent=your_agent,
    name="My Agent",
    description="Agent description",
    config_path=Path("~/.chatrc")  # Optional
)

# Run interactive loop
chat.run()
```

This chat loop is specifically designed for AWS Strands agents with full support for:
- Automatic metadata extraction
- Tool discovery
- Streaming responses
- Token tracking and cost estimation
The chat loop includes built-in support for the OpenAI Harmony response format (designed for gpt-oss open-weight models). Harmony support is included by default in all installations.
Harmony is OpenAI's response formatting standard for their open-weight model series (gpt-oss). It provides:
- Structured conversation handling with multiple output channels
- Reasoning output generation (internal analysis separate from final response)
- Function call management with namespaces
- Tool usage tracking and structured outputs
The chat loop automatically detects Harmony agents by checking for:

- An explicit `uses_harmony` attribute on the agent
- Model names containing "gpt-oss" or "harmony"
- Harmony-specific methods or attributes
- Agent class names containing "harmony"
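A minimal sketch of the checks listed above (illustrative only; the package's real detection code may differ):

```python
# Illustrative Harmony-agent detection based on the criteria above
# (not the package's actual detection code).
def looks_like_harmony_agent(agent) -> bool:
    if getattr(agent, "uses_harmony", False):                   # explicit marker
        return True
    model = str(getattr(agent, "model", "")).lower()
    if "gpt-oss" in model or "harmony" in model:                 # model name hints
        return True
    if any("harmony" in attr.lower() for attr in dir(agent)):    # harmony-specific attributes
        return True
    return "harmony" in type(agent).__name__.lower()             # class name hint

class MyHarmonyAgent:
    uses_harmony = True

print(looks_like_harmony_agent(MyHarmonyAgent()))  # True
```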
When a Harmony agent is detected, responses are automatically processed to:
- Extract and display multiple output channels (analysis, commentary, final)
- Highlight internal reasoning separately from the final response
- Detect and format tool calls appropriately
- Parse structured Harmony response formats
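And as a simplified illustration of the channel handling just described (channel labels taken from the display examples below; this is not the package's processing code):

```python
# Simplified illustration of labeling Harmony output channels for display
# (not the package's actual processing code).
CHANNEL_LABELS = {
    "reasoning": "[REASONING]",
    "analysis": "[ANALYSIS]",
    "commentary": "[COMMENTARY]",
    "final": "[RESPONSE]",
}

def render_channels(channels: dict[str, str], show_detailed_thinking: bool = True) -> str:
    if not show_detailed_thinking:
        return channels.get("final", "")
    parts = [f"{CHANNEL_LABELS.get(name, name.upper())}\n{text}"
             for name, text in channels.items() if text]
    return "\n\n".join(parts)

print(render_channels({
    "analysis": "Looking at the query structure...",
    "final": "Here are three optimizations for your database query...",
}))
```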
Control Harmony processing behavior:

```yaml
# In ~/.chatrc or .chatrc
harmony:
  enabled: auto                 # auto (default) / yes / no
  show_detailed_thinking: true  # Default - show all channels with labels
```

`harmony.enabled` options:

- `auto` (default) - Automatically detect Harmony agents
- `yes` - Force enable Harmony processing for all agents
- `no` - Disable Harmony processing completely
By default, detailed thinking is enabled, showing all channels with labeled prefixes.

With detailed thinking enabled (`true`, the default):

```
[REASONING]
I need to analyze this query for potential bottlenecks...

[ANALYSIS]
Looking at the query structure:
- Multiple table joins without proper indexes
- WHERE clause filtering happens after the joins

[COMMENTARY]
This is a common pattern I see in legacy codebases...

[RESPONSE]
Here are three optimizations for your database query...
```
To disable detailed thinking (set to `false`):

```yaml
harmony:
  show_detailed_thinking: false  # Only show final response
```

Output with detailed thinking disabled:

```
Here are three optimizations for your database query...
```
```python
# Your agent using Harmony
class MyHarmonyAgent:
    uses_harmony = True  # Explicit marker

    def __call__(self, query):
        # Agent returns a Harmony-formatted response
        return harmony_response
```

```bash
# Chat loop will automatically detect and handle Harmony format
chat_loop my_harmony_agent
```

- Python 3.9+ (required by the openai-harmony dependency)
- `pyyaml>=6.0.1` - Configuration file parsing
- `rich>=13.7.0` - Enhanced terminal rendering
- `pyperclip>=1.8.0` - Clipboard support for copy commands
- `openai-harmony>=0.0.8` - OpenAI Harmony format support (built-in)
- `pyreadline3>=3.4.1` - Command history on Windows (auto-installed on Windows)
- `anthropic-bedrock>=0.8.0` - AWS Bedrock integration (install with `[bedrock]`)
- `readline` (built-in on Unix) - Command history on macOS/Linux
- ✅ macOS - Full support with native readline
- ✅ Linux - Full support with native readline
- ✅ Windows - Full support with automatic pyreadline3 installation
```
src/basic_agent_chat_loop/
├── chat_loop.py             # Main orchestration
├── chat_config.py           # Configuration management
├── cli.py                   # CLI entry point
└── components/              # Modular components
    ├── ui_components.py     # Colors, StatusBar
    ├── token_tracker.py     # Token/cost tracking
    ├── template_manager.py  # Prompt templates
    ├── display_manager.py   # Display formatting
    ├── agent_loader.py      # Agent loading
    └── alias_manager.py     # Alias management

docs/
├── ALIASES.md               # Alias system guide
├── CONFIG.md                # Configuration reference
├── INSTALL.md               # Installation instructions
└── Chat_TODO.md             # Roadmap and future features
```
- docs/ALIASES.md - Agent alias system guide
- docs/CONFIG.md - Configuration reference
- docs/INSTALL.md - Installation instructions
- docs/Chat_TODO.md - Roadmap and future features
```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src/ tests/

# Lint
ruff check src/ tests/
```

Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details.
See CHANGELOG.md for detailed version history.
Hotfix release with default features enabled and Harmony improvements:

- **Default Features Enabled** - All features now enabled by default for better UX
  - `auto_save: true` - Save conversations automatically
  - `show_tokens: true` - Display token counts and costs
  - `show_status_bar: true` - Status bar with agent, model, queries, time
  - `show_detailed_thinking: true` - Show Harmony reasoning channels
- **Status Bar Fix** - Status bar now displays correctly between messages
- **Harmony Improvements** - Enhanced detection logging and documentation
- **Better Defaults** - Optimized out-of-the-box experience for new users
See docs/TROUBLESHOOTING.md for common issues and solutions.
Quick fixes:

- Package not found: Run `pip install --upgrade basic-agent-chat-loop`
- Command not found: Ensure pip's bin directory is in your PATH
- Import errors: Try reinstalling with `pip install --force-reinstall basic-agent-chat-loop`
- Bug Reports: GitHub Issues
- Feature Requests: GitHub Issues
- Documentation: docs/
- Discussions: GitHub Discussions