
[Roadmap] API eval version #142

@LukeLIN-web

Description


Adapting each model to the benchmark is still a heavy workload, e.g. https://github.com/hokindeng/VMEvalKit/blob/main/vmevalkit/models/openai_inference.py

Can we serve the video model behind an API, e.g. on port 8800?

Then we would only need the API endpoint to evaluate it.
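
A minimal sketch of what API-based evaluation could look like, assuming the video model is served behind an OpenAI-compatible endpoint on localhost:8800. The model id, endpoint path, and the `video_url` content type are assumptions (some servers such as vLLM accept it), not part of VMEvalKit today.

```python
# Sketch: evaluate a locally served video model through an OpenAI-compatible API.
# Port 8800 comes from the issue; everything else below is an assumption.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8800/v1",  # locally served video model
    api_key="EMPTY",                      # local servers usually ignore the key
)

def generate(prompt: str, video_url: str) -> str:
    """Send one evaluation prompt plus a video reference to the served model."""
    response = client.chat.completions.create(
        model="local-video-model",  # hypothetical model id exposed by the server
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    # "video_url" is not part of the official OpenAI API; it is
                    # accepted by some OpenAI-compatible servers for video input.
                    {"type": "video_url", "video_url": {"url": video_url}},
                ],
            }
        ],
    )
    return response.choices[0].message.content
```

With a wrapper like this, the benchmark only needs the endpoint URL instead of a per-model inference file.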

  • Support API
  • Support HF local dir (see the sketch below)
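
For the HF local dir item, a minimal sketch assuming a transformers-style checkpoint; the directory path and Auto classes are illustrative, not VMEvalKit code.

```python
# Sketch: load a model from a local Hugging Face snapshot without network access.
from transformers import AutoModel, AutoProcessor

local_dir = "/path/to/local/hf_checkpoint"  # hypothetical local snapshot
processor = AutoProcessor.from_pretrained(local_dir, local_files_only=True)
model = AutoModel.from_pretrained(local_dir, local_files_only=True)
```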
