This repository was archived by the owner on Jul 4, 2025. It is now read-only.

feat: Healthcheck endpoint for model loaded and able to do inference #119

@hiro-v

Description

Problem

  • When integrating Nitro, it's hard to know whether the model can actually run inference after the model_loaded API returns. The only signal that something is wrong is a failed inference request.

Success Criteria

  • /api/health returns 200 if the model is loaded and ready to run inference. The app will poll this endpoint. It should return 500 to signal a problem that Nitro cannot fix on its own and that requires a restart (see the polling sketch below).
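
For illustration, a minimal readiness-polling sketch in TypeScript is shown below. The base URL and port (`http://localhost:3928`) and the exact route name are assumptions for the example, not a confirmed Nitro API; only the 200/500 semantics come from the success criteria above.

```typescript
// Sketch of a readiness poll against the proposed /api/health endpoint.
// The base URL/port below are illustrative assumptions, not Nitro's confirmed defaults.
async function waitForModelReady(
  baseUrl: string = "http://localhost:3928",
  intervalMs: number = 1000,
  timeoutMs: number = 30_000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    // Treat network errors as "server not up yet" and keep retrying.
    const res = await fetch(`${baseUrl}/api/health`).catch(() => null);
    if (res?.status === 200) {
      return; // model is loaded and ready to serve inference
    }
    if (res?.status === 500) {
      // Per the success criteria: 500 means Nitro cannot recover on its own
      // and the caller should restart it.
      throw new Error("Nitro reported an unrecoverable error; restart required");
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for /api/health to return 200");
}
```

A caller would run this right after the model-load request and gate the first inference request on it resolving, restarting Nitro if it rejects with the unrecoverable-error case.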

Additional context
None
