KServe
KServe is a highly scalable, standards-based Model Inference Platform on Kubernetes for Trusted AI. It is hosted as an incubation project in the LF AI & Data Foundation.
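To illustrate the deployment model, here is a minimal sketch using the KServe Python SDK to create an InferenceService for a scikit-learn model. It assumes the `kserve` package is installed and your kubeconfig points at a cluster with KServe; the service name, namespace, and storage URI are example values, not part of this page.

```python
# Minimal sketch: deploy a scikit-learn model as a KServe InferenceService.
# Names, namespace, and the storage URI below are illustrative assumptions.
from kubernetes.client import V1ObjectMeta
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
    constants,
)

isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_V1BETA1,
    kind=constants.KSERVE_KIND,
    metadata=V1ObjectMeta(name="sklearn-iris", namespace="kserve-test"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model"
            )
        )
    ),
)

# Submit the InferenceService to the cluster.
KServeClient().create(isvc)
```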
Repositories
- modelmesh-runtime-adapter Public
Unified runtime-adapter image of the sidecar containers which run in the modelmesh pods
- modelmesh-minio-examples Public
ModelMesh example models packaged into a MinIO container for user exploration and functional verification testing