This is an experimental Telegram chat bot that uses a configurable LLM to generate responses. With this bot, you can have engaging and realistic conversations with an artificial intelligence model.
First, you need to install the required packages using uv:
```sh
uv sync --no-dev
```

You can copy and rename the provided env.example to .env and edit the file with your own values.
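For example, from the project root:

```sh
cp env.example .env
```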
You can create a bot on Telegram and get its API token by following the official instructions.
To use the bot in a group, use the @BotFather bot to turn Group Privacy off. This allows the bot to access all group messages.
You can set GOOGLE_API_KEY, OPENAI_API_KEY, OPENAI_API_BASE_URL, or OLLAMA_MODEL to select the required LLM provider. OPENAI_API_BASE_URL should point to an OpenAI-compatible API, such as the LM Studio API.
- Note: When the GOOGLE_API_KEY option is selected, the default model will be Gemini 2.0 Flash.
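For example, a provider selection in .env might look like one of the following sketches (the endpoint, key, and model names are placeholder values, not defaults of this project):

```sh
# Option A: Google Gemini (defaults to Gemini 2.0 Flash)
GOOGLE_API_KEY=your-google-api-key

# Option B: an OpenAI-compatible server such as LM Studio
# (URL and model name are illustrative; adjust to your setup)
OPENAI_API_BASE_URL=http://localhost:1234/v1
OPENAI_API_MODEL=your-model-name

# Option C: a local Ollama model
OLLAMA_MODEL=llama3
```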
TELEGRAM_BOT_NAME: Your Telegram bot name
TELEGRAM_BOT_USERNAME: Your Telegram bot username
TELEGRAM_BOT_TOKEN: Your Telegram bot token
OPENAI_API_MODEL: LLM to use for OpenAI or OpenAI-like API; if not provided, the default model will be used.
GOOGLE_API_MODEL: LLM to use for Google API; if not provided, the default model will be used.
AGENT_MODE: Enable agent mode (True, False). Default is False. When agent mode is enabled, the bot will use agentic capabilities.
AGENT_INSTRUCTIONS: (Optional) Custom instructions to guide the agent's behavior when in agent mode. This allows you to specify how the agent should behave, what tools it can use, and any specific guidelines for its operation. If not provided, default agent behavior will be used.
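For instance (the instruction text below is only an illustration of the format):

```sh
AGENT_MODE=True
AGENT_INSTRUCTIONS="Prefer concise answers and explain which tool you used."
```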
WEBUI_SD_API_URL: You can define a Stable Diffusion Web UI API URL for image generation. If this option is set, the bot will answer image generation requests with Stable Diffusion-generated images.
WEBUI_SD_API_PARAMS: A JSON string containing Stable Diffusion Web UI API params. If not provided, default parameters for the SDXL Turbo model will be used.
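A minimal sketch, assuming the standard Stable Diffusion Web UI txt2img parameters (the URL and parameter values shown are illustrative, not this project's actual defaults):

```sh
WEBUI_SD_API_URL=http://localhost:7860
WEBUI_SD_API_PARAMS='{"steps": 4, "cfg_scale": 1.0, "width": 512, "height": 512}'
```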
TELEGRAM_BOT_INSTRUCTIONS_CHARACTER: You can define a custom character for the bot instructions. This will override the default bot character. For example: "You are a software engineer, geek and nerd, user of Linux and free software technologies."
TELEGRAM_BOT_INSTRUCTIONS_EXTRA: You can include extra LLM system instructions using this variable.
TELEGRAM_BOT_INSTRUCTIONS: You can define custom LLM system instructions using this variable. This will override both the default instructions and the custom bot character instructions.
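In .env form, reusing the character example above (the extra instruction line is just an illustration):

```sh
TELEGRAM_BOT_INSTRUCTIONS_CHARACTER="You are a software engineer, geek and nerd, user of Linux and free software technologies."
TELEGRAM_BOT_INSTRUCTIONS_EXTRA="Keep your answers brief."
```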
TELEGRAM_ALLOWED_CHATS: You can use a comma-separated list of allowed chat IDs to limit bot interaction to those chats.
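For example (the IDs below are placeholders; Telegram group chat IDs are negative numbers):

```sh
TELEGRAM_ALLOWED_CHATS=123456789,-1001234567890
```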
ENABLE_MULTIMODAL: Enable multimodal capabilities for images (True, False). The selected model must support multimodal capabilities.
ENABLE_GROUP_ASSISTANT: Enable the group assistant for group chats (True, False). When enabled, the bot will respond to group messages that end with a question mark. The default value is False.
RATE_LIMITER_REQUESTS_PER_SECOND: The number of requests per second allowed by the bot.
RATE_LIMITER_CHECK_EVERY_N_SECONDS: The number of seconds between rate limit checks.
RATE_LIMITER_MAX_BUCKET_SIZE: The maximum bucket size for rate limiting.
PREFERRED_LANGUAGE: The preferred language for the bot (English, Spanish, etc.).
CONTEXT_MAX_TOKENS: The maximum number of tokens allowed for the bot's context.
WEB_CONTENT_REQUEST_TIMEOUT_SECONDS: Timeout in seconds for HTTP requests when retrieving web content. Default is 10 seconds.
SIMULATE_TYPING: Enable simulation of human typing behavior (True, False). The default is False. This typing simulation will influence the bot's response time in all chats.
SIMULATE_TYPING_WPM: The words per minute for simulating human typing behavior. Default is 100.
SIMULATE_TYPING_MAX_TIME: The maximum time in seconds for simulating human typing behavior. Default is 10 seconds (we usually don't want to wait too long).
USE_TOOLS: Enable tool usage (True, False). Default is False. When tool usage is enabled, the bot will use the LLM's tools capabilities. When tool usage is disabled, the bot will use the prompt-based pseudo-tools implementation.
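Putting a few of these tuning options together, a sketch in .env form with illustrative values (any omitted variable keeps its default):

```sh
PREFERRED_LANGUAGE=English
CONTEXT_MAX_TOKENS=4096
RATE_LIMITER_REQUESTS_PER_SECOND=1
RATE_LIMITER_CHECK_EVERY_N_SECONDS=1
RATE_LIMITER_MAX_BUCKET_SIZE=10
SIMULATE_TYPING=True
SIMULATE_TYPING_WPM=100
SIMULATE_TYPING_MAX_TIME=10
USE_TOOLS=True
```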
manolo_bot supports the Model Context Protocol (MCP) for connecting to external tool servers.
Set the following environment variables:
ENABLE_MCP: Enable MCP support (True, False). Default is False.
MCP_SERVERS_CONFIG: MCP server configuration in JSON format.
MCP servers are configured via the MCP_SERVERS_CONFIG environment variable, which accepts a JSON object mapping server names to their configurations.
stdio transport example:
```json
{
  "math": {
    "command": "python",
    "args": ["/path/to/math_server.py"],
    "transport": "stdio"
  }
}
```

streamable_http transport example:
```json
{
  "weather": {
    "url": "http://localhost:8000/mcp/",
    "transport": "streamable_http"
  }
}
```

Multiple servers:
```json
{
  "math": {
    "command": "python",
    "args": ["/path/to/math_server.py"],
    "transport": "stdio"
  },
  "weather": {
    "url": "http://localhost:8000/mcp/",
    "transport": "streamable_http"
  }
}
```

Notes:
- MCP tools are loaded alongside custom tools defined in ai/tools.py
- If tool name conflicts occur, MCP tools will override custom tools (a warning is logged)
- The bot will start successfully even if MCP initialization fails (graceful degradation)
- MCP is only loaded when both ENABLE_MCP=True and a valid MCP_SERVERS_CONFIG are provided
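Because MCP_SERVERS_CONFIG is read from the environment, it is easiest to keep the JSON on a single line in your .env file; a minimal sketch reusing the stdio example above:

```sh
ENABLE_MCP=True
MCP_SERVERS_CONFIG='{"math": {"command": "python", "args": ["/path/to/math_server.py"], "transport": "stdio"}}'
```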
LOGGING_LEVEL: Sets the logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL), defaulting to INFO.
The bot supports the following commands:
/flushcontext - Clears the conversation context for the current chat. In group chats, only admins can use this command. The bot will respond with a confirmation message in the configured language.
You can run the bot using the following command:
```sh
uv run main.py
```

or

```sh
python main.py
```

Use `uv sync --dev` to install the development dependencies.
After installing the development dependencies, you can install the pre-commit hooks, including ruff checks, by running the following command:

```sh
pre-commit install
```

You can run the tests using the following command:
```sh
uv run python -m unittest discover
```

If you'd like to contribute to this project, feel free to submit a pull request. We're always open to new ideas or improvements to the code.
This project is licensed under the MIT License - see the LICENSE file for details.