convcommitgpt is a CLI tool that analyzes the changes in a Git repository (via a diff) and automatically generates commit messages following the Conventional Commits convention, using language models served locally through Ollama.
Its goal is to facilitate the creation of clear, precise, and standardized commit messages, improving the quality of the change history and team collaboration.
- Automatic Git diff analysis: Generates a summary of the changes made to the source code.
- Conventional commit message generation: Uses AI to create messages following the Conventional Commits standard.
- Customizable instruction prompt: Allows you to modify the `instructions_prompt.md` file to adapt the style and format of the generated messages.
- Support for Ollama: Uses local Ollama models for commit message generation.
- Docker execution: Runs inside a Docker container for easy setup and isolation.
- Visual spinner: Shows an animated spinner while processing the request to improve the CLI user experience.
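For reference, a Conventional Commits message follows this general shape (this is the convention itself, not output specific to this tool):

```
<type>(<optional scope>): <short description>

[optional body explaining what changed and why]
```

Typical types include feat, fix, docs, refactor, test, and chore.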
You can install convcommitgpt using curl:
```bash
curl -sSL https://raw.githubusercontent.com/mnofresno/convcommitgpt/main/install.sh | bash
```

The installer will:
- Check for required dependencies (Docker/Podman, Git, Ollama)
- Create necessary directories
- Build the Docker image
- Set up configuration
- Create the `convcommit` command
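After the installer finishes, you can confirm that the command is on your PATH with a standard shell check (not a convcommitgpt-specific command):

```bash
command -v convcommit   # should print the path of the installed command
```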
If you prefer to install manually:
- Clone the repository:

```bash
git clone https://github.com/mnofresno/convcommitgpt.git
cd convcommitgpt
```

- Run the installer:

```bash
chmod +x test_install.sh
./test_install.sh
```

To uninstall convcommitgpt:
```bash
curl -sSL https://raw.githubusercontent.com/mnofresno/convcommitgpt/main/uninstall.sh | bash
```

Or manually:

```bash
chmod +x uninstall.sh
./uninstall.sh
```

The uninstaller will:
- Backup your configuration
- Remove all installed files
- Clean up system directories
The tool uses environment variables to configure Ollama. You can define them in a .env file at ~/.local/lib/convcommitgpt/.env.
Main variables:
- `BASE_URL`: Base URL of the Ollama API (default: `http://host.docker.internal:11434/v1`).
- `MODEL`: Name of the Ollama model to use (default: `mistral`).
Example `.env`:

```
BASE_URL=http://host.docker.internal:11434/v1
MODEL=mistral
```
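The model named in `MODEL` must be available in your local Ollama installation. A quick way to check, using Ollama's own CLI rather than anything provided by convcommitgpt:

```bash
ollama list          # list the models already pulled locally
ollama pull mistral  # pull the default model if it is missing
```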
- Make sure you have staged changes in your Git repository.
- Run:
```bash
convcommit .
```

Or specify a different repository path:

```bash
convcommit /path/to/repository
```

Main options:
- `--repository-path` or `-r`: Path to the Git repository.
- `--prompt-file` or `-p`: Instruction prompt file (default `instructions_prompt.md`).
- `--model` or `-m`: Model to use.
- `--base-url`: API endpoint.
- `--diff-from-stdin` or `-d`: Allows passing a diff directly via stdin.
- `--debug-diff` or `-dd`: Shows the analyzed diff.
- `--max-bytes-in-diff` or `-mb`: Byte limit to analyze per file.
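For example, a hypothetical invocation combining several of these flags (the byte limit of 4096 is only an illustrative value):

```bash
git add -A   # stage the changes you want described
convcommit --repository-path /path/to/repository --model mistral --max-bytes-in-diff 4096
```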
You can also use the convcommit.sh script directly:
```bash
./convcommit.sh /path/to/repository
```

Or pass a diff directly:

```bash
git diff --cached | ./convcommit.sh -d -
```

You can edit the `instructions_prompt.md` file to change the instructions received by the AI model. This allows you to adapt the style, format, and level of detail of the generated commit messages.

Example of a generated commit message:
```
fix(api): handle null pointer exceptions in user authentication

- Fixed null pointer exceptions in the authentication module.
- Updated error handling for improved stability.
```
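As a rough sketch of the kind of customization mentioned above (the rules below are invented examples, not the shipped prompt), an edited `instructions_prompt.md` might add house rules such as:

```
# Illustrative excerpt of a customized instructions_prompt.md
- Write the subject line in imperative mood, 50 characters or less.
- Only use the types: feat, fix, docs, refactor, test, chore.
- Follow the subject with a short bulleted body, one bullet per logical change.
```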
- `.env`, `.venv`, and temporary files should not be committed to the repository.
- The system ignores changes in critical files such as the assistant itself or the instruction prompt.
- If the diff is too large, it is truncated to avoid token limit errors.
Licensed under the MIT License.