Evalify's Evaluator is a microservice designed to handle the evaluation of student submissions. It is part of the Evalify project, which aims to provide a comprehensive platform for educational assessments.
Note: References to RQ for background jobs are outdated; consult them for historical reference only. The current stack uses Celery.
- Scalable Architecture: Designed to handle high-volume evaluation requests asynchronously.
- Specialized Workers: Uses dedicated worker pools for different question types (MCQ, Descriptive, Coding) to optimize resource usage (CPU-bound vs. I/O-bound).
- Factory Pattern: Easily extensible evaluator system using a factory pattern for registering new question types.
- Type Safety: Fully typed codebase using Pydantic for robust data validation and serialization.
- Modern Stack: Built with FastAPI, Celery, Redis, and managed by uv.
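The factory pattern and Pydantic typing mentioned above might look roughly like the following sketch. The class and field names here are illustrative assumptions, not the project's actual identifiers:

```python
# Hypothetical sketch: evaluators register themselves against a question
# type, and the dispatcher looks them up at evaluation time. Names are
# illustrative, not Evalify's real classes.
from pydantic import BaseModel


class Question(BaseModel):
    id: str
    type: str          # "mcq", "descriptive", or "coding"
    payload: dict      # type-specific data, e.g. the MCQ answer key


EVALUATORS: dict[str, type] = {}


def register(question_type: str):
    """Class decorator that adds an evaluator to the registry."""
    def wrap(cls):
        EVALUATORS[question_type] = cls
        return cls
    return wrap


@register("mcq")
class MCQEvaluator:
    def evaluate(self, question: Question, answer: str) -> float:
        # Full marks only for an exact match against the stored key.
        return 1.0 if answer == question.payload["correct_option"] else 0.0


def get_evaluator(question_type: str):
    return EVALUATORS[question_type]()
```

Adding a new question type then only requires defining a class and decorating it with `@register(...)`, with no changes to the dispatch code.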
- Language: Python 3.12+
- API Framework: FastAPI
- Task Queue: Celery
- Broker/Backend: Redis
- Package Manager: uv
- Concurrency: Gevent (for I/O-bound tasks) & Prefork (for CPU-bound tasks)
- Redis: Required as the message broker and result backend.
- uv: An extremely fast Python package installer and resolver.
- Clone the repository:

      git clone https://github.com/evalify/evalify-evaluator.git
      cd evalify-evaluator

- Install dependencies:

      uv sync
The application is configured using environment variables.
- Set up environment variables: copy the example configuration file:

      cp .env.example .env

- Edit `.env`: update the file with your specific configuration (Redis URL, API keys, etc.).
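Inside the service, these variables would typically be read from the environment at startup. A minimal stdlib sketch; the variable names (`REDIS_URL`, `API_PORT`) are assumptions based on the stack, not the project's exact keys:

```python
# Hypothetical sketch of environment-based configuration. Variable names
# are assumed; check .env.example for the real keys.
import os


def load_config() -> dict:
    return {
        # Fall back to sensible local defaults when a variable is unset.
        "redis_url": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        "api_port": int(os.environ.get("API_PORT", "8000")),
    }
```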
- Start Redis:

      redis-server

- Start the API server:

      uv run uvicorn src.evaluator.main:app --reload

  The API will be available at `http://localhost:8000`.

- Start workers: you can start all workers (MCQ, Descriptive, Coding) using the provided script (requires tmux):

      ./scripts/start_all_workers_tmux.sh --attach

  Or start them individually:

      ./scripts/start_mcq_worker.sh
      ./scripts/start_desc_worker.sh
      ./scripts/start_coding_worker.sh
- Unit & Integration Tests: located in `tests/`. Run them using pytest:

      uv run pytest

- Manual/Scripted Tests: located in `test_scripts/`. These scripts are useful for manual verification and debugging specific flows.
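A unit test in the `tests/` suite might look like the following sketch. The `evaluate_mcq` function here is an illustrative stand-in, not the project's actual code:

```python
# Hypothetical pytest-style test sketch: each evaluator can be exercised
# in isolation with plain assertions. evaluate_mcq is a stand-in defined
# inline for illustration.
def evaluate_mcq(selected: str, correct: str) -> float:
    return 1.0 if selected == correct else 0.0


def test_mcq_correct_answer_scores_full_marks():
    assert evaluate_mcq("A", "A") == 1.0


def test_mcq_wrong_answer_scores_zero():
    assert evaluate_mcq("B", "A") == 0.0
```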
For production environments, Docker is the recommended way to deploy and scale workers.
You can spin up the entire stack (API, Redis, and Workers) using Docker Compose:

    docker-compose up --build -d

This will start:
- API Service: Exposed on port 4040
- Redis: Internal message broker
- Workers: Scalable worker containers for MCQ, Descriptive, and Coding tasks
The system handles evaluation requests by dividing them into student-level jobs, which are further broken down into question-level tasks. These tasks are routed to specialized workers based on the question type (MCQ, Descriptive, Coding).
Results from individual question evaluations are accumulated per student. Once all questions for a student are evaluated, the aggregated results are sent back to the main Evalify backend.
Documentation is currently in progress. Some documentation is available in the docs/ directory. Proper documentation will be written and deployed via evalify/evalify-docs.


