This document summarizes my submission for the Resola 1-Week Challenge: Audit Log API system.
I dedicated approximately 3–4 days' worth of time to the project, with the following breakdown:
- Tuesday to Friday: 2–3 hours daily (approximately 8–12 hours total).
- Saturday and Sunday: Full days (approximately 12–16 hours total).
- Monday (final day): Half day (approximately 4 hours).
Total estimated time: 24–32 hours over the span of 1 week.
- Git Repository
- README: Comprehensive setup and usage instructions
- API Documentation: See the "API Endpoints" section of this document
- Postman Collection: See the `audit_log_api.postman_collection.json` file
- Architecture Diagram: See `assets/ArchitectureDiagram.jpg`
This project provides an API for managing audit logs, with a Django-based API as the main entry point and a Python worker for background processing of AWS SQS messages. The system uses DynamoDB for storage, OpenSearch for full-text search, and SQS for asynchronous tasks.
The project consists of two main components:
- Django API: The primary entry point for handling audit log CRUD operations, search, and tenant management.
- Python Worker: A background application that continuously reads AWS SQS messages for bulk log creation and cleanup tasks.
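The worker's read loop can be outlined as follows. This is a minimal sketch, assuming SQS message bodies carry a `task` field naming the job (`bulk_create` or `cleanup`); the actual message format in `worker.py` may differ.

```python
# Sketch of the worker loop; assumes SQS bodies look like
# {"task": "bulk_create", "entries": [...]} or {"task": "cleanup", "older_than_days": N}.
import json

def dispatch(message_body: str) -> str:
    """Route one SQS message body to a handler; returns the task name."""
    task = json.loads(message_body).get("task")
    if task == "bulk_create":
        pass  # write each entry in body["entries"] to DynamoDB and index it in OpenSearch
    elif task == "cleanup":
        pass  # delete logs older than body["older_than_days"]
    else:
        raise ValueError(f"unknown task: {task!r}")
    return task

def main() -> None:
    import boto3  # deferred so dispatch() is importable without AWS dependencies
    queue = boto3.resource("sqs").get_queue_by_name(QueueName="audit-log-queue")
    while True:
        # Long-poll, process, then delete each message so it is not redelivered.
        for msg in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=10):
            dispatch(msg.body)
            msg.delete()

if __name__ == "__main__":
    main()
```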
- Python: Use Python 3.11 or below (dependency issues exist with Python 3.12 and above).
- Docker: Required for local OpenSearch testing.
- AWS Account: For DynamoDB, SQS, and optional AWS OpenSearch Service.
- Tools: Postman or similar for API testing.
- **Update `.env.example`**: Both the `audit_log_api` and `audit_log_worker` directories contain a `.env.example` file. Rename it to `.env` in each directory and fill in the required values (e.g., AWS credentials, service URLs).
- **DynamoDB setup**: Create two DynamoDB tables, `AuditLogs` and `Tenants`, and provide the table URLs in the `.env` files. Manually create a tenant for login: `[ { "tenant_id": "tenant01", "name": "Tenant 01" } ]`
- **AWS SQS**: Create a queue named `audit-log-queue` and add its name to the `.env` files as `AWS_SQS_QUEUE_NAME`.
- **OpenSearch**: Create an OpenSearch index named `audit_logs` and set `OPENSEARCH_INDEX=audit_logs` in the `.env` files. For local testing, set `OPENSEARCH_PW=D0ck3rP@ssw0rd1!` in the `.env` file for the local OpenSearch image, or change it to your liking in `OpenSearch/docker-compose.yml`.
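The DynamoDB tables, SQS queue, and seed tenant above can be provisioned with a short boto3 script. This is a sketch only: the table and queue names come from this README, but the hash keys (`id`, `tenant_id`) and on-demand billing are assumptions you should adjust to the actual models.

```python
# One-time provisioning sketch. Table/queue names are from this README;
# the hash keys ("id", "tenant_id") and on-demand billing are assumptions.
def table_spec(name: str, hash_key: str) -> dict:
    """Build a minimal pay-per-request table definition."""
    return {
        "TableName": name,
        "KeySchema": [{"AttributeName": hash_key, "KeyType": "HASH"}],
        "AttributeDefinitions": [{"AttributeName": hash_key, "AttributeType": "S"}],
        "BillingMode": "PAY_PER_REQUEST",
    }

def main() -> None:
    import boto3  # deferred so table_spec() works without AWS dependencies
    dynamodb = boto3.client("dynamodb")
    dynamodb.create_table(**table_spec("AuditLogs", "id"))
    dynamodb.create_table(**table_spec("Tenants", "tenant_id"))
    boto3.client("sqs").create_queue(QueueName="audit-log-queue")
    # Wait for the table, then seed the tenant used by mock-login.
    dynamodb.get_waiter("table_exists").wait(TableName="Tenants")
    boto3.resource("dynamodb").Table("Tenants").put_item(
        Item={"tenant_id": "tenant01", "name": "Tenant 01"}
    )

if __name__ == "__main__":
    main()
```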
1. Navigate to the API directory: `cd audit_log_api`
2. Create and activate a virtual environment:
   - Windows: `python -m venv .venv`, then `.\.venv\Scripts\activate`
   - Unix/Linux/macOS: `python -m venv .venv`, then `source .venv/bin/activate`
3. Install dependencies: `pip install -r requirements.txt`
4. Run the server: `python manage.py runserver 8008`
5. (Optional) Run tests: `python manage.py test`
1. Navigate to the worker directory: `cd audit_log_worker`
2. Create and activate a virtual environment:
   - Windows: `python -m venv .venv`, then `.\.venv\Scripts\activate`
   - Unix/Linux/macOS: `python -m venv .venv`, then `source .venv/bin/activate`
3. Install dependencies: `pip install -r requirements.txt`
4. Run the worker: `python worker.py`
For testing and cost-saving, you can use a local OpenSearch instance instead of AWS OpenSearch Service.
- Ensure Docker is installed and running.
- Navigate to the OpenSearch directory: `cd OpenSearch`
- Start the OpenSearch container: `docker-compose up -d`
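Once the container is up, you can confirm the node responds. A sketch assuming the default single-node dev image on `https://localhost:9200` with the `admin` user and a self-signed certificate; the password is the one quoted above (or whatever you set in `OpenSearch/docker-compose.yml`):

```python
# Health-check sketch for the local container; assumes the default dev image
# on https://localhost:9200 with the "admin" user and a self-signed certificate.
import base64
import json
import ssl
import urllib.request

def health_request(password: str, host: str = "https://localhost:9200") -> urllib.request.Request:
    """Build a basic-auth GET request for the cluster root endpoint."""
    creds = base64.b64encode(f"admin:{password}".encode()).decode()
    return urllib.request.Request(host, headers={"Authorization": f"Basic {creds}"})

if __name__ == "__main__":
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # the dev image ships a self-signed cert
    with urllib.request.urlopen(health_request("D0ck3rP@ssw0rd1!"), context=ctx) as resp:
        print(json.load(resp)["version"]["number"])
```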
Use tools like Postman to test the API. All endpoints require a Bearer token for authentication.
To obtain a token, use the mock login endpoint:
- Endpoint: `POST http://127.0.0.1:8008/api/v1/mock-login/`
- Request body (`access_level` may be `admin`, `auditor`, or `user`): `{ "tenant_id": "tenant01", "user_id": "user123", "access_level": "admin" }`
- Response: `{ "access_token": "...", "token_type": "Bearer" }`
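For scripted testing outside Postman, the login-then-call flow looks like this. A standard-library sketch: the endpoint paths and the `tenant01`/`user123` values come from this document; everything else (host, `limit`, timeouts) is illustrative.

```python
# Scripted login-and-call sketch using only the standard library.
import json
import urllib.request

BASE = "http://127.0.0.1:8008/api/v1"

def login_payload(tenant_id: str, user_id: str, access_level: str = "admin") -> bytes:
    """Serialize the mock-login body; access_level is "admin", "auditor" or "user"."""
    return json.dumps(
        {"tenant_id": tenant_id, "user_id": user_id, "access_level": access_level}
    ).encode()

def get_token() -> str:
    req = urllib.request.Request(
        f"{BASE}/mock-login/",
        data=login_payload("tenant01", "user123"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def list_logs(token: str) -> dict:
    """Call an authenticated endpoint with the Bearer token."""
    req = urllib.request.Request(
        f"{BASE}/logs/?limit=10", headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(list_logs(get_token()))
```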
All endpoints are prefixed with `/api/v1/` (see `audit_log_api/urls.py`).
| Endpoint | Method | Description | Request Body / Query Params | Response |
|---|---|---|---|---|
| `/mock-login/` | POST | Obtain a mock JWT token for authentication. | `{ "tenant_id": "...", "user_id": "...", "access_level": "admin\|auditor\|user" }` | `{ "access_token": "...", "token_type": "Bearer" }` |
| Endpoint | Method | Description | Request Body / Query Params | Response |
|---|---|---|---|---|
| `/logs/` | GET | List audit logs for the tenant. Supports filtering and pagination. | `limit`, `next_token`, log fields (e.g., `action`, `severity`) | `{ "results": [...], "next_token": "...", "count": N }` |
| `/logs/` | POST | Create a new audit log entry. | Log entry fields (see below) | Created log entry |
| `/logs/search/` | GET | Full-text search on the `details` field using OpenSearch. | `details` (required), `limit`, `next_token` | `{ "results": [...], "next_token": "...", "count": N }` |
| `/logs/stats/` | GET | Get log statistics (total, by action, by severity). | None | `{ "total_logs": N, "by_action": {...}, "by_severity": {...}, "next_token": "..." }` |
| `/logs/bulk/` | POST | Bulk create multiple log entries. | List of log entry objects | Status message |
| `/logs/cleanup/` | DELETE | Delete logs older than the specified number of days (default 90, range 1–365). | `older_than_days` | Status message |
| `/logs/<log_id>/` | GET | Retrieve a specific log entry by ID. | None | Log entry object |
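The `next_token` cursor returned by `/logs/` and `/logs/search/` can be followed until it is absent to drain all pages. A sketch in which `fetch_page` stands in for the actual HTTP call:

```python
# Pagination sketch: fetch_page stands in for an HTTP call that returns the
# documented {"results": [...], "next_token": "..."} shape.
from typing import Callable, Iterator, Optional

def iter_logs(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every log entry, following next_token until it is absent."""
    token: Optional[str] = None
    while True:
        page = fetch_page(token)
        yield from page["results"]
        token = page.get("next_token")
        if not token:
            return
```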
| Endpoint | Method | Description | Request Body / Query Params | Response |
|---|---|---|---|---|
| `/tenants/` | GET | List all tenants. | None | List of tenants |
| `/tenants/` | POST | Create a new tenant. | `{ "tenant_id": "...", "name": "..." }` | Created tenant |
| `/tenants/<tenant_id>/` | DELETE | Delete a tenant by ID. | None | Status message |
Defined in `LogEntrySerializer`:
- `id`: String (read-only)
- `user_id`: String
- `session_id`: String (optional)
- `action`: Enum (`CREATE`, `UPDATE`, `DELETE`, `VIEW`)
- `resource_type`: String
- `resource_id`: String
- `timestamp`: Datetime
- `ip_address`: IP address
- `user_agent`: String
- `before`: JSON (optional)
- `after`: JSON (optional)
- `metadata`: JSON (optional)
- `severity`: Enum (`INFO`, `WARNING`, `ERROR`, `CRITICAL`)
- `details`: String (optional)
- `tenant_id`: String
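A client-side helper for assembling a payload with these fields might look like this. A sketch: the field names and enum values come from the serializer list above, while the validation, defaults, and sample values are illustrative rather than part of the API.

```python
# Client-side payload builder; field names and enum values are from the
# serializer list above, the validation and sample defaults are illustrative.
from datetime import datetime, timezone

ACTIONS = {"CREATE", "UPDATE", "DELETE", "VIEW"}
SEVERITIES = {"INFO", "WARNING", "ERROR", "CRITICAL"}

def make_log_entry(user_id: str, action: str, resource_type: str, resource_id: str,
                   severity: str = "INFO", tenant_id: str = "tenant01",
                   **optional) -> dict:
    """Assemble the required fields; optional ones (session_id, before, after,
    metadata, details) pass through as keyword arguments."""
    if action not in ACTIONS:
        raise ValueError(f"action must be one of {sorted(ACTIONS)}")
    if severity not in SEVERITIES:
        raise ValueError(f"severity must be one of {sorted(SEVERITIES)}")
    return {
        "user_id": user_id,
        "action": action,
        "resource_type": resource_type,
        "resource_id": resource_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ip_address": "203.0.113.10",         # illustrative client IP
        "user_agent": "PostmanRuntime/7.36",  # illustrative user agent
        "severity": severity,
        "tenant_id": tenant_id,
        **optional,
    }
```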
- Authentication: Most endpoints require JWT authentication (mocked for development).
- Permissions: Enforced per endpoint based on access level (`admin`, `user`, `auditor`).
The system is designed with the following components and data flow:
- Django API: Handles CRUD operations, search, stats, and tenant management.
- DynamoDB: Primary data store for audit logs and tenants.
- OpenSearch: Provides full-text search capabilities for logs.
- AWS SQS: Manages asynchronous tasks like bulk log creation and cleanup.
- JWT Authentication: Mocked for development, used for securing endpoints.
