Enable real classifiers in local dev by default #952
base: main
Conversation
✅ Deploy Preview for antenna-preview ready!
Walkthrough: The `docker-compose.yml` file was updated to change the `ml_backend` service's build context and volume mount paths from `./processing_services/minimal/` to `./processing_services/example/`.
Pull Request Overview
This PR switches the default ML backend for local development from the minimal processing service to the example processing service, enabling real ML classifiers (transformers, PyTorch) instead of the basic mock implementations.
Key changes:
- Updates the `ml_backend` service in `docker-compose.yml` to use the `example` processing service with real ML models
- The CI environment continues to use the minimal service for faster testing
```diff
 volumes:
-  - ./processing_services/minimal/:/app
+  - ./processing_services/example/:/app
```
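For context, a minimal sketch of how the `ml_backend` service stanza might read after this change. The build context and volume path are taken from the walkthrough and diff above; the surrounding keys and their exact layout are assumptions:

```yaml
# Sketch only: both the build context and the code mount now point at the example service.
ml_backend:
  build:
    context: ./processing_services/example
  volumes:
    - ./processing_services/example/:/app
```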
Copilot AI · Nov 18, 2025
The example service uses ML models (transformers, PyTorch) that download models on first use. Consider adding cache volumes to avoid re-downloading models on container restarts:
```yaml
volumes:
  - ./processing_services/example/:/app
  - ./processing_services/example/huggingface_cache:/root/.cache/huggingface
  - ./processing_services/example/pytorch_cache:/root/.cache/torch
```

This is similar to the configuration in `processing_services/example/docker-compose.yml` and would improve startup performance after the initial model download.
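If the cache mounts are adopted, a quick way to confirm that downloaded models actually survive a restart (an illustrative check, not part of the PR; paths match the suggested mounts above):

```sh
# Restart the container, then list the Hugging Face cache inside it.
# If the volume mount works, previously downloaded models are still present.
docker compose restart ml_backend
docker compose exec ml_backend ls /root/.cache/huggingface
```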

TODO
If you test this:

```sh
docker compose build ml_backend
docker compose up -d
```

Then click Register Pipelines in the project processing services config screen.
Note: the minimal pipeline is still used in the CI stack for testing (`docker-compose.ci.yml`).
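For reference, a hypothetical sketch of how a CI override file could pin the minimal service via Compose's standard file-merging. The actual contents of `docker-compose.ci.yml` are not shown in this PR, so everything below is an assumption:

```yaml
# docker-compose.ci.yml — hypothetical contents, for illustration only
services:
  ml_backend:
    build:
      context: ./processing_services/minimal
    volumes:
      - ./processing_services/minimal/:/app
```

CI would then start the stack with both files, e.g. `docker compose -f docker-compose.yml -f docker-compose.ci.yml up -d`.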