Conversation

@stephen-cox
Owner

Summary

This PR adds an LLM monitoring and metrics collection system for Nova, providing real-time visibility into AI interactions across all supported providers (OpenAI, Anthropic, Ollama).

Key Features

  • Context Window Monitoring: Real-time tracking of context utilization with threshold-based warnings and optimization suggestions
  • Performance Metrics: Response time tracking, tokens per second, and efficiency metrics
  • Debug Logging: Configurable debug logging to file with full request/response details
  • Multi-Provider Support: Unified metrics collection across OpenAI, Anthropic, and Ollama
  • Smart Analysis: Context status classification (optimal/warning/critical) with actionable suggestions

Technical Implementation

Core Components:

  • MetricsCollector: Central metrics management with configurable detail levels
  • LLMMetrics: Dataclass capturing the metrics for a single LLM interaction
  • ContextAnalysis: Context window analysis with optimization suggestions
  • TokenUsage: Token counting and efficiency tracking (all four are sketched below)
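
The exact definitions live in nova/core/metrics.py; the sketch below only approximates the shape these dataclasses might take, and the field names, defaults, and efficiency-ratio definition are illustrative rather than the actual API.

```python
# Illustrative sketch only -- approximates the dataclasses in nova/core/metrics.py
from dataclasses import dataclass, field


@dataclass
class TokenUsage:
    input_tokens: int = 0
    output_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens

    @property
    def efficiency_ratio(self) -> float:
        # Hypothetical definition: output tokens produced per input token
        return self.output_tokens / self.input_tokens if self.input_tokens else 0.0


@dataclass
class ContextAnalysis:
    used_tokens: int
    context_limit: int
    status: str = "optimal"  # "optimal", "warning", or "critical"
    suggestions: list[str] = field(default_factory=list)


@dataclass
class LLMMetrics:
    provider: str
    model: str
    response_time_s: float
    tokens: TokenUsage = field(default_factory=TokenUsage)
    context: ContextAnalysis | None = None
```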

Integration:

  • Integrated into all AI client providers (see the sketch after this list)
  • Provider-specific context limits and token extraction
  • Global configuration system with environment variable support
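
As an illustration of the integration point, a provider call might be wrapped roughly as follows; the method name record(), the response attributes, and the token extraction are hypothetical, since each provider SDK reports usage differently.

```python
# Hypothetical wrapper around a provider call -- names are illustrative only
import time

def send_message(self, messages: list[dict]) -> str:
    collector = get_metrics_collector()       # shared global collector
    start = time.perf_counter()
    response = self._client.chat(messages)    # provider SDK call (OpenAI/Anthropic/Ollama)
    elapsed = time.perf_counter() - start

    # Token fields differ per provider; extraction is provider-specific
    usage = TokenUsage(
        input_tokens=getattr(response, "input_tokens", 0),
        output_tokens=getattr(response, "output_tokens", 0),
    )
    collector.record(LLMMetrics(
        provider=self.provider_name,
        model=self.model,
        response_time_s=elapsed,
        tokens=usage,
    ))
    return response.text
```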

Configuration:

monitoring:
  enabled: true
  level: "debug"  # basic, detailed, debug
  debug_log_file: "~/.nova/debug.log"
  context_warnings: true
  performance_metrics: true

Files Changed

  • nova/core/metrics.py (new): Complete metrics collection system
  • nova/core/ai_client.py: Integrated metrics into all AI providers
  • nova/models/config.py: Added MonitoringConfig class
  • tests/unit/test_metrics.py (new): Comprehensive test suite (15 tests)

Testing

  • ✅ All existing tests pass
  • ✅ 15 new metrics tests with 83% coverage
  • ✅ Code quality checks pass
  • ✅ Integration tested across all AI providers

Test plan

  • Verify metrics collection works for OpenAI provider
  • Verify metrics collection works for Anthropic provider
  • Verify metrics collection works for Ollama provider
  • Test context window warnings at different utilization levels (example test sketched after this list)
  • Test debug logging functionality
  • Test configuration validation
  • Verify performance metrics calculation
  • Test token usage tracking and efficiency ratios
  • Ensure backward compatibility with existing code
  • Validate all unit tests pass
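
For the context-warning item, a test along these lines would exercise the documented thresholds; analyze_context is a stand-in name for whatever the metrics module actually exposes, and the real suite lives in tests/unit/test_metrics.py.

```python
# Hypothetical test shape -- the actual suite is tests/unit/test_metrics.py
import pytest

# Stand-in import: the helper name and path are assumptions, not the real API
from nova.core.metrics import analyze_context


@pytest.mark.parametrize(
    "used, limit, expected_status",
    [
        (5_000, 10_000, "optimal"),    # 50% utilization
        (8_000, 10_000, "warning"),    # 80% utilization
        (9_500, 10_000, "critical"),   # 95% utilization
    ],
)
def test_context_status_classification(used, limit, expected_status):
    analysis = analyze_context(used_tokens=used, context_limit=limit)
    assert analysis.status == expected_status
```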

🤖 Generated with Claude Code

Add comprehensive monitoring capabilities for LLM interactions across all AI providers (OpenAI, Anthropic, Ollama) with context window analysis, performance tracking, and debug logging.

## Features Added

### Core Metrics Collection
- Token usage tracking (input, output, total, efficiency ratio)
- Context window utilization monitoring with status classification
- Performance metrics (response time, tokens per second; sketched after this list)
- Debug logging with configurable levels (basic, detailed, debug)
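
The derived performance numbers reduce to simple ratios; a minimal sketch (function names here are illustrative):

```python
# Minimal sketch of the derived performance metrics (illustrative names)
def tokens_per_second(output_tokens: int, response_time_s: float) -> float:
    return output_tokens / response_time_s if response_time_s > 0 else 0.0


def context_utilization(used_tokens: int, context_limit: int) -> float:
    # Fraction of the model's context window consumed by this request
    return used_tokens / context_limit if context_limit > 0 else 0.0
```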

### Context Window Analysis
- Real-time context utilization calculation
- Smart status classification (optimal < 70%, warning 70-90%, critical > 90%); see the sketch below
- Optimization suggestions (summarization, pruning)
- Console warnings when approaching context limits
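
A sketch of the classification implied by those thresholds (the actual implementation in `nova/core/metrics.py` may differ in detail):

```python
# Thresholds from the description: optimal < 70%, warning 70-90%, critical > 90%
def classify_context_status(utilization: float) -> str:
    if utilization > 0.90:
        return "critical"
    if utilization >= 0.70:
        return "warning"
    return "optimal"


def suggest_optimizations(status: str) -> list[str]:
    # Suggestion wording is illustrative; the PR mentions summarization and pruning
    if status == "critical":
        return ["Summarize earlier conversation turns", "Prune old messages from history"]
    if status == "warning":
        return ["Consider summarizing long messages before the limit is reached"]
    return []
```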

### Multi-Provider Integration
- Integrated metrics collection across OpenAI, Anthropic, and Ollama clients
- Provider-specific context limits and token extraction (illustrated below)
- Unified metrics interface with provider abstraction
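
The provider-specific part might reduce to a context-limit lookup plus per-provider token extraction, roughly like this (model names and limits below are placeholders, not authoritative values):

```python
# Placeholder context limits -- the real values are defined per provider in the client code
CONTEXT_LIMITS = {
    "openai": {"gpt-4o": 128_000, "gpt-3.5-turbo": 16_385},
    "anthropic": {"claude-3-5-sonnet-20241022": 200_000},
    "ollama": {"llama3": 8_192},
}


def get_context_limit(provider: str, model: str, default: int = 8_192) -> int:
    return CONTEXT_LIMITS.get(provider, {}).get(model, default)
```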

### Configuration System
- New MonitoringConfig class with validation (a rough sketch follows this list)
- Configurable monitoring levels and debug log file paths
- Context warnings and performance metrics toggles
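
Assuming the config models in `nova/models/config.py` are pydantic-style (an assumption; the actual base class may differ), `MonitoringConfig` might look roughly like this, with field names mirroring the YAML keys in the Configuration block:

```python
# Illustrative sketch -- the real MonitoringConfig lives in nova/models/config.py
from pydantic import BaseModel, field_validator


class MonitoringConfig(BaseModel):
    enabled: bool = True
    level: str = "basic"            # "basic", "detailed", or "debug"
    debug_log_file: str | None = None
    context_warnings: bool = True
    performance_metrics: bool = True

    @field_validator("level")
    @classmethod
    def _validate_level(cls, value: str) -> str:
        if value not in {"basic", "detailed", "debug"}:
            raise ValueError("level must be one of: basic, detailed, debug")
        return value
```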

### Architecture
- MetricsCollector class for centralized metrics management
- Structured dataclasses for metrics, context analysis, and token usage
- Global configuration with get_metrics_collector() and configure_metrics() (see the sketch below)
- Comprehensive test suite with 15 test cases
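
Only the function names `get_metrics_collector()` and `configure_metrics()` come from this PR; the sketch below assumes the usual module-level singleton pattern, and the signatures are invented for illustration:

```python
# Sketch of a module-level singleton (signatures are assumptions)
_collector: MetricsCollector | None = None


def configure_metrics(config: MonitoringConfig) -> None:
    """Replace the global collector with one built from the given config."""
    global _collector
    _collector = MetricsCollector(config)


def get_metrics_collector() -> MetricsCollector:
    """Return the shared collector, creating a default one on first use."""
    global _collector
    if _collector is None:
        _collector = MetricsCollector(MonitoringConfig())
    return _collector
```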

## Implementation Details

**Files Modified:**
- nova/core/ai_client.py: Added metrics integration to all AI providers
- nova/models/config.py: Added MonitoringConfig class

**Files Added:**
- nova/core/metrics.py: Core metrics collection system
- tests/unit/test_metrics.py: Comprehensive test suite

**Configuration:**
```yaml
monitoring:
  enabled: true
  level: "debug"  # basic, detailed, debug
  debug_log_file: "~/.nova/debug.log"
  context_warnings: true
  performance_metrics: true
```

All tests pass and code follows project quality standards.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>