
Optimise handling of high-throughput requests #163

@StewartJingga

Is your feature request related to a problem? Please describe.
In our setup, we have Meteor jobs that run every 10 minutes.

When sending metadata to Compass failed, the requests were retried, pushing throughput from the usual 200 req/s up to 600 req/s.

This drove Compass's Postgres CPU utilisation to 100%, which caused other incoming requests to be affected and dropped.

Describe the solution you'd like

1. Bulk ingestions

Right now, Compass's UpsertPatch API only allows a single asset per request.
Ingesting in bulk would help reduce overhead on at least:

  1. Network calls
  2. Open connections on Postgres (yes, with bulk inserts on Postgres too)

This could reduce the load on Postgres, since there would be fewer connections to maintain and fewer network calls to and from Compass.
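
A minimal sketch of what the write path behind a bulk endpoint could look like, using a single multi-row `INSERT ... ON CONFLICT`; the `Asset` fields and the `assets` table here are illustrative assumptions, not Compass's actual schema or API:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"strings"

	_ "github.com/lib/pq"
)

// Asset is a trimmed-down stand-in for Compass's asset model;
// the fields and table below are illustrative, not the real schema.
type Asset struct {
	URN     string
	Type    string
	Service string
}

// bulkUpsertAssets writes a whole batch in one round trip with a single
// multi-row INSERT ... ON CONFLICT, instead of one statement (and one
// connection checkout) per asset.
func bulkUpsertAssets(ctx context.Context, db *sql.DB, assets []Asset) error {
	if len(assets) == 0 {
		return nil
	}
	placeholders := make([]string, 0, len(assets))
	args := make([]interface{}, 0, len(assets)*3)
	for i, a := range assets {
		placeholders = append(placeholders,
			fmt.Sprintf("($%d, $%d, $%d)", i*3+1, i*3+2, i*3+3))
		args = append(args, a.URN, a.Type, a.Service)
	}
	query := fmt.Sprintf(
		`INSERT INTO assets (urn, type, service) VALUES %s
		 ON CONFLICT (urn) DO UPDATE
		 SET type = EXCLUDED.type, service = EXCLUDED.service`,
		strings.Join(placeholders, ", "))
	_, err := db.ExecContext(ctx, query, args...)
	return err
}
```

With a batch size of 100, for example, the 600 req/s spike would collapse to roughly 6 bulk requests per second on the Compass side.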

2. Rate Limiting APIs

Ingestion is mostly carried out at high throughput. Rate limiting might not be the whole solution here, but it could help prevent unnecessary downstream calls (Postgres, Elasticsearch) that could otherwise block the process.
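
A minimal sketch of how this could be wired in as HTTP middleware using `golang.org/x/time/rate`; the endpoint path and the 200 req/s cap are assumptions for illustration, not Compass's actual routes or limits:

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/time/rate"
)

// rateLimit sheds load once the request rate exceeds the limit,
// returning 429 instead of letting excess requests queue up on
// Postgres and Elasticsearch.
func rateLimit(next http.Handler, rps float64, burst int) http.Handler {
	limiter := rate.NewLimiter(rate.Limit(rps), burst)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1beta1/assets", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // placeholder for the ingestion handler
	})
	// Cap ingestion at the usual 200 req/s with a small burst allowance.
	log.Fatal(http.ListenAndServe(":8080", rateLimit(mux, 200, 50)))
}
```

Returning 429 gives clients an explicit signal to back off, instead of the spike translating directly into Postgres CPU.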

Describe alternatives you've considered
Currently, reducing the throughput from Meteor works, but it is not scalable; it would be better to solve this at the Compass level.
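
If the fix does stay on the client side for now, the retries could at least back off rather than amplify load. A minimal sketch of exponential backoff with full jitter, where `send` is a hypothetical stand-in for Meteor's Compass sink call:

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// sendWithBackoff retries a failed send with exponential backoff and
// full jitter, so a Compass outage slows the client down instead of
// tripling its request rate.
func sendWithBackoff(ctx context.Context, send func(context.Context) error) error {
	const maxAttempts = 5
	backoff := 500 * time.Millisecond
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = send(ctx); err == nil {
			return nil
		}
		// Sleep a random duration up to the current backoff, then double it.
		select {
		case <-time.After(time.Duration(rand.Int63n(int64(backoff)))):
		case <-ctx.Done():
			return ctx.Err()
		}
		backoff *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}
```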
