Review Bot Automator

Universal AI-powered automation for GitHub code review bots
Intelligent suggestion application and conflict resolution for CodeRabbit, GitHub Copilot, and custom review bots



📋 Table of Contents

  • 🎯 Problem Statement
  • 🚀 Quick Start
  • 🎨 Features
  • 📖 Documentation
  • 🏗️ Architecture
  • 🔧 Use Cases
  • 🔧 Environment Variables
  • 🤝 Contributing
  • 📜 License
  • 🙏 Acknowledgments
  • 📊 Project Status
  • 🚀 LLM Features (v2.0 Architecture)
  • 🔗 Related Projects


🎯 Problem Statement

When multiple PR review comments suggest overlapping changes to the same file, traditional automation tools either:

  • Skip all conflicting changes (losing valuable suggestions)
  • Apply changes sequentially without conflict awareness (potentially breaking code)
  • Require tedious manual resolution for every conflict

Review Bot Automator provides intelligent, semantic-aware conflict resolution that:

  • ✅ Understands code structure (JSON, YAML, TOML, Python, TypeScript)
  • ✅ Uses priority-based resolution (user selections, security fixes, syntax errors)
  • ✅ Supports semantic merging (combining non-conflicting changes automatically)
  • ✅ Learns from your decisions to improve over time
  • ✅ Provides detailed conflict analysis and actionable suggestions

🚀 Quick Start

Installation

pip install review-bot-automator

Basic Usage

# Set your GitHub token (required)
export GITHUB_PERSONAL_ACCESS_TOKEN="your_token_here"

# Analyze conflicts in a PR
pr-resolve analyze --owner VirtualAgentics --repo my-repo --pr 123

# Apply suggestions with conflict resolution
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --strategy priority

# Apply only conflicting changes
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --mode conflicts-only

# Simulate without applying changes (dry-run mode)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --mode dry-run

# Use parallel processing for large PRs
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --parallel --max-workers 8

# Load configuration from file
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --config config.yaml

LLM Provider Setup (Optional)

Enable AI-powered features with your choice of LLM provider using zero-config presets:

# ✨ NEW: Zero-config presets for instant setup

# Option 1: Codex CLI (free with GitHub Copilot subscription)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
  --llm-preset codex-cli-free

# Option 2: Local Ollama 🔒 (free, private; reduces third-party LLM vendor exposure)
./scripts/setup_ollama.sh          # One-time install
./scripts/download_ollama_models.sh  # Download model
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
  --llm-preset ollama-local
# 🔒 Reduces third-party LLM vendor exposure (OpenAI/Anthropic never see comments)
# ✅ Simpler compliance (one fewer data processor for GDPR, HIPAA, SOC2)
# ⚠️ Note: GitHub/CodeRabbit still have access (required for PR workflow)
# See docs/ollama-setup.md for setup | docs/privacy-architecture.md for privacy details

# Option 3: Claude CLI (requires Claude subscription)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
  --llm-preset claude-cli-sonnet

# Option 4: OpenAI API (pay-per-use, ~$0.01 per PR)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
  --llm-preset openai-api-mini \
  --llm-api-key sk-...

# Option 5: Anthropic API (balanced cost/performance)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
  --llm-preset anthropic-api-balanced \
  --llm-api-key sk-ant-...

Available presets: codex-cli-free, ollama-local 🔒, claude-cli-sonnet, openai-api-mini, anthropic-api-balanced

Privacy Note: Ollama (ollama-local) reduces third-party LLM vendor exposure by processing review comments locally. OpenAI/Anthropic never see your code, simplifying compliance. Note: GitHub and CodeRabbit still have access (required for PR workflow). See Privacy Architecture for details.

Alternative: Use environment variables

# Anthropic (recommended - 50-90% cost savings with caching)
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="anthropic"
export CR_LLM_API_KEY="sk-ant-..."  # Get from https://console.anthropic.com/

# OpenAI
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="openai"
export CR_LLM_API_KEY="sk-..."  # Get from https://platform.openai.com/api-keys

# Then use as normal
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123

Documentation:

Python API

from review_bot_automator import ConflictResolver
from review_bot_automator.config import PresetConfig

resolver = ConflictResolver(config=PresetConfig.BALANCED)
results = resolver.resolve_pr_conflicts(
    owner="VirtualAgentics",
    repo="my-repo",
    pr_number=123
)

print(f"Applied: {results.applied_count}")
print(f"Conflicts: {results.conflict_count}")
print(f"Success rate: {results.success_rate}%")

🎨 Features

Intelligent Conflict Analysis

  • Semantic Understanding: Analyzes JSON, YAML, TOML structure, not just text
  • Conflict Categorization: Exact, major, partial, minor, disjoint-keys, semantic-duplicate
  • Impact Assessment: Evaluates scope, risk level, and criticality of changes
  • Actionable Suggestions: Provides specific guidance for each conflict

Smart Resolution Strategies

  • Priority-Based: User selections > Security fixes > Syntax errors > Regular suggestions (see the sketch after this list)
  • Semantic Merging: Combines non-conflicting changes in structured files
  • Sequential Application: Applies compatible changes in optimal order
  • Defer to User: Escalates complex conflicts for manual review

File-Type Handlers

  • JSON: Duplicate key detection, key-level merging (see the sketch after this list)
  • YAML: Comment preservation, structure-aware merging
  • TOML: Section merging, format preservation
  • Python/TypeScript: AST-aware analysis (planned)

Multi-Provider LLM Support ✅ (Phase 2 Complete - All 5 Providers Production-Ready)

  • 5 Provider Types: OpenAI API, Anthropic API, Claude CLI, Codex CLI, Ollama (all production-ready)
  • GPU Acceleration: Ollama supports NVIDIA CUDA, AMD ROCm, Apple Metal with automatic detection
  • HTTP Connection Pooling: Optimized for concurrent requests (10 connections per provider)
  • Auto-Download: Ollama can automatically download models when not available
  • Cost Optimization: Prompt caching reduces Anthropic costs by 50-90%
  • Retry Logic: Exponential backoff for transient failures on all providers (see the sketch after this list)
  • Flexible Deployment: API-based, CLI-based, or local inference
  • Provider Selection: Choose based on cost, privacy, or performance needs
  • Health Checks: Automatic provider validation before use

Learning & Optimization

  • ML-Assisted Priority: Learns from your resolution decisions
  • Metrics Tracking: Monitors success rates, resolution times, strategy effectiveness
  • Conflict Caching: Reuses analysis for similar conflicts
  • Performance: Parallel processing for large PRs

Configuration & Presets

  • Conservative: Skip all conflicts, manual review required
  • Balanced: Priority system + semantic merging (default)
  • Aggressive: Maximize automation, user selections always win
  • Semantic: Focus on structure-aware merging for config files

Application Modes

  • all: Apply both conflicting and non-conflicting changes (default)
  • conflicts-only: Apply only changes that have conflicts
  • non-conflicts-only: Apply only changes without conflicts
  • dry-run: Analyze and report without applying any changes

Rollback & Safety Features

  • Automatic Rollback: Git-based checkpointing with automatic rollback on failure
  • Pre-Application Validation: Validates changes before applying (optional)
  • File Integrity Checks: Verifies file safety and containment
  • Detailed Logging: Comprehensive logging for debugging and audit trails

Runtime Configuration

Configure via multiple sources with precedence chain: CLI flags > Environment variables > Config file > Defaults

  • Configuration Files: Load settings from YAML or TOML files
  • Environment Variables: Set options using CR_* prefix variables
  • CLI Overrides: Override any setting via command-line flags

See .env.example for available environment variables.
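
For illustration, a minimal config file might look like this (key names here are assumptions inferred from the CLI flags and CR_* variables above; check the configuration reference for the authoritative schema):

# config.yaml - illustrative sketch only; key names are assumptions
mode: all               # all | conflicts-only | non-conflicts-only | dry-run
enable_rollback: true
validate: true
parallel: true
max_workers: 8
llm:
  enabled: true
  provider: anthropic
  api_key: ${CR_LLM_API_KEY}  # API keys must use ${VAR} syntax in config files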

📖 Documentation

User Guides

Reference Documentation

Architecture & Development

Security

πŸ—οΈ Architecture

┌─────────────────────────────────────────────────────────────┐
│                    GitHub PR Comments                       │
│                   (CodeRabbit, Review Bot)                  │
└────────────────────┬────────────────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────────────────┐
│              Comment Parser & Extractor                     │
│   (Suggestions, Diffs, Codemods, Multi-Options)             │
└────────────────────┬────────────────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────────────────┐
│              Conflict Detection Engine                      │
│  • Fingerprinting  • Overlap Analysis  • Semantic Check     │
└────────────────────┬────────────────────────────────────────┘
                     │
          ┌───────────┴──────────┐
          ▼                      ▼
┌──────────────────┐   ┌──────────────────┐
│  File Handlers   │   │  Priority System │
│  • JSON          │   │  • User Selected │
│  • YAML          │   │  • Security Fix  │
│  • TOML          │   │  • Syntax Error  │
│  • Python        │   │  • Regular       │
└─────────┬────────┘   └────────┬─────────┘
          │                     │
          └──────────┬──────────┘
                     ▼
┌─────────────────────────────────────────────────────────────┐
│           Resolution Strategy Selector                      │
│  • Skip  • Override  • Merge  • Sequential  • Defer         │
└────────────────────┬────────────────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────────────────┐
│              Application Engine                             │
│  • Backup  • Apply  • Validate  • Rollback                  │
└────────────────────┬────────────────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────────────────┐
│        Reporting & Metrics                                  │
│  • Conflict Summary  • Visual Diff  • Success Rate          │
└─────────────────────────────────────────────────────────────┘

🔧 Use Cases

1. CodeRabbit Multi-Option Selections

Problem: User selects "Option 2" but it conflicts with another suggestion.
Solution: Priority system ensures user selections override lower-priority changes.

2. Overlapping Configuration Changes

Problem: Two suggestions modify different keys in package.json.
Solution: Semantic merging combines both changes automatically.

3. Security Fix vs. Formatting

Problem: Security fix conflicts with formatting suggestion.
Solution: Priority system applies the security fix and skips the formatting change.

4. Large PR with 50+ Comments

Problem: Manual conflict resolution is time-consuming.
Solution: Parallel processing + caching resolves conflicts in seconds.

🔧 Environment Variables

Configure the tool using environment variables (see .env.example for all options):

Variable                        Description                                                            Default
GITHUB_PERSONAL_ACCESS_TOKEN    GitHub API token (required)                                            None
CR_MODE                         Application mode (all, conflicts-only, non-conflicts-only, dry-run)    all
CR_ENABLE_ROLLBACK              Enable automatic rollback on failure                                   true
CR_VALIDATE                     Enable pre-application validation                                      true
CR_PARALLEL                     Enable parallel processing                                             false
CR_MAX_WORKERS                  Number of parallel workers                                             4
CR_LOG_LEVEL                    Logging level (DEBUG, INFO, WARNING, ERROR)                            INFO
CR_LOG_FILE                     Log file path (optional)                                               None

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Development Setup

git clone https://github.com/VirtualAgentics/review-bot-automator.git
cd review-bot-automator
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pre-commit install

Running Tests

This project uses pytest 9.0 with native subtests support for comprehensive testing. We maintain >80% test coverage with 1,445 tests including unit, integration, security, and property-based fuzzing tests.

# Run standard tests with coverage
pytest tests/ --cov=src --cov-report=html

# Run property-based fuzzing tests
make test-fuzz              # Dev profile: 50 examples
make test-fuzz-ci           # CI profile: 100 examples
make test-fuzz-extended     # Extended: 1000 examples

# Run all tests (standard + fuzzing)
make test-all

For more details, see:

📜 License

MIT License - see LICENSE for details.

🙏 Acknowledgments

  • Inspired by the sophisticated code review capabilities of CodeRabbit AI
  • Built with experience from the ContextForge Memory project
  • Community feedback and contributions

📊 Project Status

Current Version: 2.0.0

Roadmap:

  • βœ… Phase 0: Security Foundation (COMPLETE)
    • βœ… 0.1: Security Architecture Design
    • βœ… 0.2: Input Validation & Sanitization
    • βœ… 0.3: Secure File Handling
    • βœ… 0.4: Secret Detection (17 patterns)
    • βœ… 0.5: Security Testing Suite (95%+ coverage)
    • βœ… 0.6: Security Configuration
    • βœ… 0.7: CI/CD Security Scanning (7+ tools)
    • βœ… 0.8: Security Documentation
  • βœ… Phase 1: Core Features (COMPLETE)
    • βœ… Core conflict detection and analysis
    • βœ… File handlers (JSON, YAML, TOML)
    • βœ… Priority system
    • βœ… Rollback system with git-based checkpointing
  • βœ… Phase 2: CLI & Configuration (COMPLETE)
    • βœ… CLI with comprehensive options
    • βœ… Runtime configuration system
    • βœ… Application modes (all, conflicts-only, non-conflicts-only, dry-run)
    • βœ… Parallel processing support
    • βœ… Multiple configuration sources (file, env, CLI)
  • πŸ”„ Phase 3: Documentation & Examples (IN PROGRESS)
    • πŸ”„ Comprehensive documentation updates
    • πŸ“… Example configurations and use cases
  • βœ… V2.0 Phase 0: LLM Foundation (COMPLETE) - PR #121
    • βœ… Core LLM data models and infrastructure
    • βœ… Universal comment parser with LLM + regex fallback
    • βœ… LLM provider protocol for polymorphic support
    • βœ… Structured prompt engineering system
    • βœ… Confidence threshold filtering
  • βœ… V2.0 Phase 1: LLM-Powered Parsing (COMPLETE) - PR #122
    • βœ… OpenAI API provider implementation
    • βœ… Automatic retry logic with exponential backoff
    • βœ… Token counting and cost tracking
    • βœ… Comprehensive error handling
    • βœ… Integration with ConflictResolver
  • βœ… V2.0 Phase 2: Multi-Provider Support (COMPLETE) - Closed Nov 9, 2025
    • βœ… All 5 LLM providers implemented: OpenAI API, Anthropic API, Claude CLI, Codex CLI, Ollama
    • βœ… Provider factory pattern with automatic selection
    • βœ… HTTP connection pooling and retry logic
    • βœ… Provider health checks and validation
    • βœ… Cost tracking across all API-based providers
  • βœ… V2.0 Phase 3: CLI Integration Polish (COMPLETE) - Closed Nov 11, 2025
    • βœ… Zero-config presets for instant LLM setup (5 presets available)
    • βœ… Configuration precedence chain: CLI > Environment > File > Defaults
    • βœ… Enhanced error messages with actionable resolution steps
    • βœ… Support for YAML/TOML configuration files
    • βœ… Security: API keys must use ${VAR} syntax in config files
  • βœ… V2.0 Phase 4: Local Model Support (COMPLETE) - Closed Nov 2025
    • βœ… Ollama provider with GPU acceleration (NVIDIA, AMD ROCm, Apple Metal)
    • βœ… Automatic GPU detection and hardware info display
    • βœ… HTTP connection pooling for concurrent requests
    • βœ… Model auto-download feature
    • βœ… Performance benchmarking (local vs API models) - Issue #170
    • βœ… Privacy documentation (local LLM operation guide) - Issue #171
    • βœ… Integration tests with privacy verification - Issue #172
  • βœ… V2.0 Phase 5: Optimization & Production Readiness (COMPLETE) - PR #250 (Nov 26, 2025)
    • Rate limit retry with exponential backoff
    • Cache warming for cold start optimization
    • Fallback rate tracking, confidence threshold CLI option
    • fsync for atomic write durability
  • πŸ”„ V2.0 Phase 6: Documentation & Migration (IN PROGRESS) - ~90% complete

V2.0 Milestone Progress: ~95% complete (Phases 0-5 complete, Phase 6 finalizing)

Security Highlights

  • ClusterFuzzLite: Continuous fuzzing (3 fuzz targets, ASan + UBSan)
  • Test Coverage: 82.35% overall, 95%+ for security modules
  • Security Scanning: CodeQL, Trivy, TruffleHog, Bandit, pip-audit, OpenSSF Scorecard
  • Secret Detection: 17 pattern types (GitHub tokens, AWS keys, API keys, etc.)
  • Documentation: Comprehensive security documentation (threat model, incident response, compliance)

🚀 LLM Features (v2.0 Architecture)

✅ Core v2.0 LLM features are production-ready! Phases 0-5 are complete (~95% of the v2.0 milestone) and all 5 LLM providers are fully functional. See Roadmap for current status.

Vision: Major architecture upgrade to parse 95%+ of CodeRabbit comments (up from 20%)

The Problem We're Solving

The current system parses only ```suggestion blocks, missing:

  • ❌ Diff blocks (```diff) - 60% of CodeRabbit comments
  • ❌ Natural language suggestions - 20% of comments
  • ❌ Multi-option suggestions
  • ❌ Multiple diff blocks per comment

Result: Only 1 in 5 CodeRabbit comments is currently parsed.

The Solution: LLM-First Parsing

┌─────────────────────────────────────────────────────────┐
│           LLM Parser (Primary - All Formats)            │
│  • Diff blocks        • Suggestion blocks               │
│  • Natural language   • Multi-options                   │
│  • 95%+ coverage      • Intelligent understanding       │
└──────────────────────────┬──────────────────────────────┘
                           │
                  ┌────────┴────────┐
                  │  Fallback if    │
                  │  LLM fails      │
                  └────────┬────────┘
                           ▼
┌─────────────────────────────────────────────────────────┐
│       Regex Parser (Fallback - Suggestion Blocks)       │
│  • 100% reliable      • Zero cost                       │
│  • Legacy support     • Always available                │
└─────────────────────────────────────────────────────────┘
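
In code, the flow reduces to "try the LLM parser, fall back to regex". A minimal sketch under assumed names (llm_parse stands in for whichever provider is configured; the real parser is richer):

import re

SUGGESTION_RE = re.compile(r"```suggestion\n(.*?)```", re.DOTALL)

def parse_comment(body, llm_parse):
    try:
        changes = llm_parse(body)  # primary: diffs, prose, multi-options
        if changes:
            return changes, "llm"
    except Exception:
        pass  # any LLM failure (outage, malformed output) falls through
    # Fallback: regex over ```suggestion blocks only - reliable and zero cost
    return SUGGESTION_RE.findall(body), "regex"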

Multi-Provider Support (User Choice)

Choose your preferred LLM provider:

Provider        Cost Model               Best For                             Est. Cost (1000 comments)
Claude CLI      Subscription ($20/mo)    Best quality + zero marginal cost    $0 (covered)
Codex CLI       Subscription ($20/mo)    Cost-effective, OpenAI quality       $0 (covered)
Ollama          Free (local)             Privacy, offline, no API costs       $0
OpenAI API      Pay-per-token            Pay-as-you-go, low volume            $0.07 (with caching)
Anthropic API   Pay-per-token            Best quality, willing to pay         $0.22 (with caching)

Quick Preview

# Current (v1.x) - regex-only
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123
# Parses: 1/5 comments (20%)

# v2.0 - LLM-powered (opt-in)
pr-resolve apply --llm --llm-provider claude-cli --owner VirtualAgentics --repo my-repo --pr 123
# Parses: 5/5 comments (100%)

# Use presets for quick config
pr-resolve apply --llm-preset claude-cli-sonnet --owner VirtualAgentics --repo my-repo --pr 123
pr-resolve apply --llm-preset ollama-local --owner VirtualAgentics --repo my-repo --pr 123  # Privacy-first

Backward Compatibility Guarantee

✅ Runtime Behavior Preserved - v2.0 maintains full compatibility for CLI and API usage

  • LLM parsing disabled by default (opt-in via --llm flag)
  • Automatic fallback to regex if LLM fails
  • v1.x CLI commands work identically
  • v1.x Python API behavior unchanged

⚠️ Package Rename: v2.0 renamed the package from pr-conflict-resolver to review-bot-automator. Update your imports and dependencies:

  • Import: from review_bot_automator import ... (was pr_conflict_resolver)
  • Dependency: review-bot-automator in requirements.txt (was pr-conflict-resolver)

See Migration Guide for details.

Enhanced Change Metadata

# v2.0: Changes include AI-powered insights
change = Change(
    path="src/module.py",
    start_line=10,
    end_line=12,
    content="new code",
    # NEW in v2.0 (optional fields)
    llm_confidence=0.95,  # How confident the LLM is
    llm_provider="claude-cli",  # Which provider parsed it
    parsing_method="llm",  # "llm" or "regex"
    change_rationale="Improves error handling",  # Why change was suggested
    risk_level="low"  # "low", "medium", "high"
)
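
Downstream code can then gate application on the new fields; a minimal illustrative filter (the 0.8 threshold is an arbitrary assumption):

# Hypothetical filter: trust regex parses outright, gate LLM parses on confidence
changes = [change]  # the Change constructed above
confident = [
    c for c in changes
    if c.parsing_method == "regex" or c.llm_confidence >= 0.8
]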

Documentation

Comprehensive planning documentation available:

Timeline

  • Phase 0-6: 10-12 weeks implementation
  • Estimated Release: Q2 2025
  • GitHub Milestone: v2.0 - LLM-First Architecture
  • GitHub Issues: #114-#120 (Phases 0-6)

🔗 Related Projects


Made with ❤️ by VirtualAgentics
