feat: Add response caching for Flask metrics endpoint #5
base: main
Conversation
**MOTIVATION:**
- Graph metrics computation (PageRank, betweenness, engagement) is expensive
- Users rapidly adjust sliders (alpha, weights, resolution), triggering repeated identical computations
- UI feels sluggish due to 2-5 second computation times per parameter change
- Many slider adjustments explore the same parameter space, wasting resources

**APPROACH:**
- Implemented in-memory LRU cache with TTL for `/api/metrics/compute` responses
- Cache key uses SHA256 hash of sorted request parameters (seeds, weights, alpha, resolution, etc.)
- Seed order independence via `tuple(sorted(seeds))` ensures `["alice", "bob"]` == `["bob", "alice"]`
- LRU eviction when max_size (100 entries) is reached, removing the oldest entry by `created_at`
- TTL expiration (300 seconds = 5 minutes) balances freshness vs. cache utility
- Automatic cache invalidation when a graph rebuild completes successfully
- `@cached_response` decorator wraps the endpoint for transparent caching

**CHANGES:**
- `tpot-analyzer/src/api/metrics_cache.py`: new file with `MetricsCache` class and `cached_response` decorator
  - `CacheEntry` dataclass with `data`, `created_at`, `hits`
  - `_create_key()` hashes sorted parameters to a 16-char hex string
  - `get()` checks TTL and increments hit/miss counters
  - `set()` performs LRU eviction when at `max_size`
  - `stats()` returns hits, misses, size, hit_rate, ttl_seconds
  - `clear()` removes all entries
  - `cached_response()` decorator extracts Flask request params, checks cache, stores responses
- `tpot-analyzer/src/api/server.py`:
  - Added import: `MetricsCache`, `cached_response`
  - `create_app()`: initialize `metrics_cache = MetricsCache(max_size=100, ttl_seconds=300)`
  - Applied `@cached_response(metrics_cache)` to the `/api/metrics/compute` endpoint
  - Added `/api/metrics/cache/stats` GET endpoint for monitoring
  - Added `/api/metrics/cache/clear` POST endpoint for manual invalidation
  - Modified `_analysis_worker()` to accept a `metrics_cache` parameter
  - Added `metrics_cache.clear()` after a successful graph rebuild (`exit_code == 0`)

**IMPACT:**
- UI responsiveness improved for repeated metric computations within the 5-minute window
- Reduced server load during slider exploration (cache hit = instant response)
- Cache stats endpoint enables monitoring of hit rate and cache effectiveness
- No breaking changes: caching is transparent to the frontend
- No new dependencies (uses stdlib `hashlib`, `json`, `time`, `functools`)
- Cache automatically cleared on graph rebuild to ensure fresh data

**TESTING:**
- Manual verification with a test script:
  - Cache miss on first request, hit on duplicate parameters
  - Seed order independence (`["a","b"]` == `["b","a"]`)
  - TTL expiration after 2 seconds (shortened for testing)
  - LRU eviction when `max_size` exceeded
  - Stats endpoint returns accurate hit/miss counts and hit_rate
  - Clear endpoint removes all entries
- All imports successful (`python3 -c` checks)
- Verified integration points in `server.py`
- Tested with 8 scenarios: all passed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
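The description above can be sketched as a minimal implementation. Names mirror the PR description (`CacheEntry`, `MetricsCache`, `_create_key`, `get`, `set`, `stats`, `clear`); the exact signatures in `src/api/metrics_cache.py` may differ, so treat this as an illustrative sketch rather than the actual code:

```python
import hashlib
import json
import time
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class CacheEntry:
    data: Any
    created_at: float
    hits: int = 0

class MetricsCache:
    def __init__(self, max_size: int = 100, ttl_seconds: int = 300):
        self.max_size = max_size
        self.ttl_seconds = ttl_seconds
        self._entries: dict[str, CacheEntry] = {}
        self.hits = 0
        self.misses = 0

    def _create_key(self, **params: Any) -> str:
        # Sort seeds so ["alice", "bob"] and ["bob", "alice"] hash identically
        if isinstance(params.get("seeds"), (list, tuple)):
            params["seeds"] = tuple(sorted(params["seeds"]))
        # Canonical JSON of sorted parameters -> 16-char hex key
        canonical = json.dumps(params, sort_keys=True, default=str)
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    def get(self, **params: Any) -> Optional[Any]:
        key = self._create_key(**params)
        entry = self._entries.get(key)
        if entry is None or time.time() - entry.created_at > self.ttl_seconds:
            self._entries.pop(key, None)  # drop expired entry, if any
            self.misses += 1
            return None
        entry.hits += 1
        self.hits += 1
        return entry.data

    def set(self, data: Any, **params: Any) -> None:
        if len(self._entries) >= self.max_size:
            # Evict the oldest entry by created_at, as described above
            oldest = min(self._entries, key=lambda k: self._entries[k].created_at)
            del self._entries[oldest]
        self._entries[self._create_key(**params)] = CacheEntry(data, time.time())

    def stats(self) -> dict:
        total = self.hits + self.misses
        return {
            "hits": self.hits,
            "misses": self.misses,
            "size": len(self._entries),
            "hit_rate": self.hits / total if total else 0.0,
            "ttl_seconds": self.ttl_seconds,
        }

    def clear(self) -> None:
        self._entries.clear()
```

A first call with given parameters misses; a repeat within the TTL (even with seeds reordered) hits and returns the stored payload instantly.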
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
# Try cache first
cached = cache.get(**cache_params)
if cached is not None:
    return jsonify(cached)

# Cache miss - compute and store
response = func(*args, **kwargs)

# Extract data from response (handle both dict and Response objects)
if hasattr(response, 'get_json'):
    data = response.get_json()
else:
    data = response

cache.set(data, **cache_params)
```
Avoid caching tuple responses in metrics cache decorator
The cache decorator unconditionally stores whatever the wrapped view returns and, on a hit, blindly feeds the cached object to `jsonify`. When `compute_metrics` errors, it returns a `(Response, 500)` tuple. That tuple is cached as-is, so the next identical request takes the cache-hit branch and `jsonify((Response, 500))` raises `TypeError` because Flask cannot serialize a `Response` object. Even for non-error tuples, the wrapper would strip status codes and headers by replacing them with a new 200 response. Consider skipping caching for non-200 responses and, when caching, storing only JSON-serializable payloads and returning a full `Response` (with status/headers) on hits.
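One way to implement the suggestion is a small helper that decides what, if anything, is safe to cache. The name `extract_cacheable_payload` is hypothetical (not from the PR); the shapes it handles follow Flask's view return conventions (bare dict, `Response`-like object, or a `(response, status)` tuple):

```python
from typing import Any, Optional

def extract_cacheable_payload(result: Any) -> Optional[Any]:
    """Return a JSON-serializable payload to cache, or None to skip caching."""
    if isinstance(result, tuple):
        response, status = result[0], result[1]
    else:
        response, status = result, 200
    if status != 200:
        return None  # never cache error responses
    if hasattr(response, "get_json"):
        return response.get_json()  # Response object: cache only its JSON body
    if isinstance(response, dict):
        return response
    return None  # unknown shape: safer not to cache
```

On a hit, the decorator would then rebuild a fresh 200 response with `jsonify(payload)` rather than replaying the cached object, which sidesteps the `TypeError` described above.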
Summary
Adds in-memory response caching to the `/api/metrics/compute` endpoint to improve UI responsiveness during rapid slider adjustments. The cache uses LRU eviction + TTL expiration and automatically invalidates on graph rebuilds.
Type of Change
Motivation
Graph metrics computation (PageRank, betweenness, engagement) takes 2-5 seconds per request. Users rapidly adjust sliders (alpha, weights, resolution), triggering repeated identical computations. Many slider adjustments explore the same parameter space, wasting server resources and making UI feel sluggish.
Implementation Details
`MetricsCache` class (`src/api/metrics_cache.py`):
- Seed order independence: `["alice", "bob"]` == `["bob", "alice"]`

Integration (`src/api/server.py`):
- `@cached_response(metrics_cache)` decorator on `/api/metrics/compute`
- New endpoints: `/api/metrics/cache/stats`, `/api/metrics/cache/clear`

Performance Impact
Testing
Manual testing verified:
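The TTL scenario from the manual tests (expiry after a shortened 2-second TTL) can be exercised deterministically with a tiny stand-in cache; `TinyTTLCache` below is an illustration only, not the PR's actual `MetricsCache`, and the injected `now` parameter avoids real sleeps:

```python
import time

class TinyTTLCache:
    """Minimal stand-in cache, just enough to demonstrate TTL expiry."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (data, created_at)

    def set(self, key, data, now=None):
        self.store[key] = (data, now if now is not None else time.time())

    def get(self, key, now=None):
        now = now if now is not None else time.time()
        item = self.store.get(key)
        if item is None or now - item[1] > self.ttl:
            return None  # miss: absent or expired
        return item[0]

cache = TinyTTLCache(ttl_seconds=2)  # shortened TTL, as in the manual tests
cache.set("k", {"pagerank": 0.3}, now=0.0)
assert cache.get("k", now=1.0) == {"pagerank": 0.3}  # within TTL: hit
assert cache.get("k", now=3.0) is None               # past TTL: miss
```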
Review Checklist
🤖 Generated with Claude Code