
Telegram Chat Bot using LLM

This is an experimental Telegram chat bot that uses a configurable LLM model to generate responses. With this bot, you can have engaging and realistic conversations with an artificial intelligence model.

Getting Started

Prerequisites

First, you need to install the required packages using uv:

uv sync --no-dev

Configuration

You can copy and rename the provided env.example to .env and edit the file with your own values.

You can create a bot on Telegram and get its API token by following the official instructions.

To use the bot in a group, you have to use the @BotFather bot to set the Group Privacy off. This allows the bot to access all group messages.

Required environment variables

You can set GOOGLE_API_KEY, OPENAI_API_KEY, OPENAI_API_BASE_URL, or OLLAMA_MODEL to select the desired LLM provider. OPENAI_API_BASE_URL points to an OpenAI-compatible API, such as the LM Studio API.

  • Note: When the GOOGLE_API_KEY option is selected, the default model is Gemini 2.0 Flash.

TELEGRAM_BOT_NAME: Your Telegram bot name

TELEGRAM_BOT_USERNAME: Your Telegram bot username

TELEGRAM_BOT_TOKEN: Your Telegram bot token
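
Putting these together, a minimal .env might look like this (all values below are placeholders — substitute your own API key, names, and token):

```shell
# Minimal .env for a Google-backed bot (placeholder values)
GOOGLE_API_KEY=your-google-api-key
TELEGRAM_BOT_NAME=Manolo
TELEGRAM_BOT_USERNAME=manolo_bot
TELEGRAM_BOT_TOKEN=123456:ABC-your-telegram-token
```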

Selecting an OpenAI Model

OPENAI_API_MODEL: LLM to use for OpenAI or OpenAI-like API; if not provided, the default model will be used.

Selecting a Google API Model

GOOGLE_API_MODEL: LLM to use for Google API; if not provided, the default model will be used.

Enabling Agent Mode

AGENT_MODE: Enable agent mode (True, False). Default is False. When agent mode is enabled, the bot will use agentic capabilities.

AGENT_INSTRUCTIONS: (Optional) Custom instructions to guide the agent's behavior when in agent mode. This allows you to specify how the agent should behave, what tools it can use, and any specific guidelines for its operation. If not provided, default agent behavior will be used.

Enabling Image Generation with Stable Diffusion

WEBUI_SD_API_URL: You can define a Stable Diffusion Web UI API URL for image generation. If this option is set, the bot will answer image generation requests with Stable Diffusion generated images.

WEBUI_SD_API_PARAMS: A JSON string containing Stable Diffusion Web UI API params. If not provided, default parameters for the SDXL Turbo model will be used.
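
As a sketch, an override might look like the following; the exact keys accepted depend on your Web UI version, and the values here are purely illustrative:

```shell
# Illustrative parameter override (common txt2img fields; values are examples,
# not recommendations — check your Web UI's API for the keys it accepts)
WEBUI_SD_API_PARAMS='{"steps": 4, "cfg_scale": 1.5, "width": 512, "height": 512}'
```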

Setting custom bot character instructions

TELEGRAM_BOT_INSTRUCTIONS_CHARACTER: You can define a custom character for the bot instructions. This will override the default bot character. For example: You are a software engineer, geek and nerd, user of linux and free software technologies.

Setting extra bot instructions

TELEGRAM_BOT_INSTRUCTIONS_EXTRA: You can include extra LLM system instructions using this variable.

Setting custom bot instructions

TELEGRAM_BOT_INSTRUCTIONS: You can define custom LLM system instructions using this variable. This will override the default instructions, and the custom bot character instructions.

Limiting Bot interaction

TELEGRAM_ALLOWED_CHATS: You can use a comma-separated list of allowed chat IDs to limit bot interaction to those chats.
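
For example (the IDs below are placeholders — Telegram group chat IDs are typically negative):

```shell
# Restrict the bot to two chats, comma-separated with no spaces
TELEGRAM_ALLOWED_CHATS=123456789,-100987654321
```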

Enable multimodal capabilities

ENABLE_MULTIMODAL: Enable multimodal capabilities for images (True, False). The selected model must support multimodal capabilities.

Enable group assistant

ENABLE_GROUP_ASSISTANT: Enable the group assistant for group chats (True, False). When enabled, the bot responds to group messages that contain a question mark. The default value is False.

Enable rate limiting

RATE_LIMITER_REQUESTS_PER_SECOND: The number of requests per second allowed by the bot.

RATE_LIMITER_CHECK_EVERY_N_SECONDS: The number of seconds between rate limit checks.

RATE_LIMITER_MAX_BUCKET_SIZE: The maximum bucket size for rate limiting.
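
A sample configuration, with illustrative values:

```shell
# Allow 1 request per second, check the bucket every 0.5 s,
# and cap the bucket at 5 queued requests (example values)
RATE_LIMITER_REQUESTS_PER_SECOND=1
RATE_LIMITER_CHECK_EVERY_N_SECONDS=0.5
RATE_LIMITER_MAX_BUCKET_SIZE=5
```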

Set preferred language

PREFERRED_LANGUAGE: The preferred language for the bot. (English, Spanish, etc.)

Set context max tokens

CONTEXT_MAX_TOKENS: The maximum number of tokens allowed for the bot's context.

Web Content Retrieval Configuration

WEB_CONTENT_REQUEST_TIMEOUT_SECONDS: Timeout in seconds for HTTP requests when retrieving web content. Default is 10 seconds.

Simulate typing human behavior

SIMULATE_TYPING: Enable simulating human typing behavior. The default is False. This typing simulation will influence the bot's response time in all chats.

SIMULATE_TYPING_WPM: The words per minute for simulating human typing behavior. Default is 100.

SIMULATE_TYPING_MAX_TIME: The maximum time in seconds for simulating human typing behavior. Default is 10 seconds (we usually don't want to wait too long).
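
The three variables work together; for instance, with the defaults below, a 50-word reply at 100 WPM would imply roughly 30 seconds of "typing", so the 10-second cap applies:

```shell
# Simulated typing: the WPM rate determines the delay,
# and the max time caps it so long replies don't stall the chat
SIMULATE_TYPING=True
SIMULATE_TYPING_WPM=100
SIMULATE_TYPING_MAX_TIME=10
```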

Tools usage

USE_TOOLS: Enable tool usage (True, False). Default is False. When tool usage is enabled, the bot uses the LLM's native tool-calling capabilities. When tool usage is disabled, the bot falls back to the prompt-based pseudo-tools implementation.

MCP (Model Context Protocol) Support

manolo_bot supports the Model Context Protocol for connecting to external tool servers.

Enabling MCP

Set the following environment variables:

ENABLE_MCP: Enable MCP support (True, False). Default is False.

MCP_SERVERS_CONFIG: MCP server configuration in JSON format.

MCP Server Configuration

MCP servers are configured via the MCP_SERVERS_CONFIG environment variable, which accepts a JSON object mapping server names to their configurations.

stdio transport example:

{
  "math": {
    "command": "python",
    "args": ["/path/to/math_server.py"],
    "transport": "stdio"
  }
}

streamable_http transport example:

{
  "weather": {
    "url": "http://localhost:8000/mcp/",
    "transport": "streamable_http"
  }
}

Multiple servers:

{
  "math": {
    "command": "python",
    "args": ["/path/to/math_server.py"],
    "transport": "stdio"
  },
  "weather": {
    "url": "http://localhost:8000/mcp/",
    "transport": "streamable_http"
  }
}
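
Since MCP_SERVERS_CONFIG is an environment variable, the JSON object must be supplied as a single quoted string in your .env. A sketch (the server path is a placeholder, as in the examples above):

```shell
ENABLE_MCP=True
# The whole JSON object on one line, single-quoted so the inner double quotes survive
MCP_SERVERS_CONFIG='{"math": {"command": "python", "args": ["/path/to/math_server.py"], "transport": "stdio"}}'
```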

Notes:

  • MCP tools are loaded alongside custom tools defined in ai/tools.py
  • If tool name conflicts occur, MCP tools will override custom tools (a warning is logged)
  • The bot will start successfully even if MCP initialization fails (graceful degradation)
  • MCP is only loaded when both ENABLE_MCP=True and valid MCP_SERVERS_CONFIG are provided

Logging Level

LOGGING_LEVEL: Sets the logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL), defaulting to INFO.

Available Commands

The bot supports the following commands:

  • /flushcontext - Clears the conversation context for the current chat. In group chats, only admins can use this command. The bot will respond with a confirmation message in the configured language.

Running the Bot

You can run the bot using the following command:

uv run main.py

or

python main.py

Developer information

Use uv sync --dev to install the development dependencies.

Pre-commit hooks

After installing the development dependencies, you can install the pre-commit hooks, including ruff checks, by running the following command:

pre-commit install

Running tests

You can run the tests using the following command:

uv run python -m unittest discover

Contributing

If you'd like to contribute to this project, feel free to submit a pull request. We're always open to new ideas or improvements to the code.

License

This project is licensed under the MIT License - see the LICENSE file for details.
