Inference API v2 - design docs kick-off #2277

PawelPeczek-Roboflow wants to merge 1 commit into main
Conversation
> * `GET /v2/models` - discover loaded models
> * `DELETE /v2/models` - unload all models
> * `POST /v2/models/load` - load given model
> * `POST /v2/models/unload` - unload given model
Suggested change: use `DELETE /v2/models/unload` instead of `POST /v2/models/unload` for unloading a given model.
> * `POST /v2/models/infer` - predict from model
> * `GET /v2/models/interface` - discover model interface
> * `GET /v2/models/compatibility` - discover models compatible with current server configuration
If `GET /v2/models` means "discover loaded models", then `GET /v2/models/compatibility` seems confusing: it doesn't operate on the loaded models but, as I understand it, returns a broader list. Maybe:

* `GET /v2/models?state=loaded`
* `GET /v2/models?state=compatible`
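A couple of hypothetical calls to illustrate the query-param variant (the host is just taken from the example further down in the doc):

```bash
# List only the models currently loaded on the server
curl "https://serverless.roboflow.com/v2/models?state=loaded"
# List all models compatible with the current server configuration
curl "https://serverless.roboflow.com/v2/models?state=compatible"
```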
> ## Models endpoints
>
> * `POST /v2/models/infer` - predict from model
One broader design question: don't we see any value in separating model-management endpoints from prediction endpoints? Similar to TorchServe, where the two live on different ports. This would probably only make sense in a self-hosted environment, where a company admin manages model loading and unloading and we have some flag like `SMART_MODEL_MANAGEMENT_ON_PREDICT=false` so that the model manager doesn't decide on loading/unloading models on predict requests - see the sketch below.
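As a rough illustration of that split (only `SMART_MODEL_MANAGEMENT_ON_PREDICT` comes from this comment; the port variable names are purely hypothetical):

```bash
# Hypothetical self-hosted configuration - predict requests never trigger
# implicit load/unload; management endpoints are bound to an admin-only port,
# similar to TorchServe's separate management API.
export SMART_MODEL_MANAGEMENT_ON_PREDICT=false
export INFERENCE_API_PORT=9000          # serves POST /v2/models/infer
export MODEL_MANAGEMENT_API_PORT=9001   # serves /v2/models/load, /v2/models/unload, ...
```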
> curl -X POST https://serverless.roboflow.com/v2/models/infer \
>   --data-urlencode 'model_id=whatever/model-id/we?can?figure-out' \
>   -F "image=@photo.jpg;type=image/jpeg" \
>   -F 'inputs={"confidence": 0.5};type=application/json'
This would probably also map nicely to params that are strictly assigned to individual elements of the batch, assuming order needs to be kept:

    -F 'inputs=[{"confidence":0.5}, {"confidence":0.4}];type=application/json' \
    -F "image=@photo-1.jpg" \
    -F "image=@photo-2.jpg"

Mixing scalar and batch params:

    -F 'inputs=[{"confidence":0.5, "fuse_nms": true}, {"confidence":0.4, "fuse_nms": true}];type=application/json' \
    -F "image=@photo-1.jpg" \
    -F "image=@photo-2.jpg"

Alternatively:

    -F 'inputs=[{"confidence":0.5}, {"confidence":0.4}];type=application/json' \
    -F 'defaults={"fuse_nms": true};type=application/json' \
    -F "image=@photo-1.jpg" \
    -F "image=@photo-2.jpg"

So we don't duplicate, but still keep batch inputs separate from scalars.
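For completeness, a full request using the `defaults` variant from this comment might look like the sketch below. This is an assumption about how the server would behave - the `defaults` field, the order-based pairing of the `image` parts with the `inputs` entries, and passing `model_id` as a query parameter are all illustrative, not settled design:

```bash
# Sketch only: two images paired by order with two per-item `inputs` entries,
# with `defaults` merged into each entry server-side (assumed behaviour).
curl -X POST "https://serverless.roboflow.com/v2/models/infer?model_id=some/model-id" \
  -F 'inputs=[{"confidence":0.5}, {"confidence":0.4}];type=application/json' \
  -F 'defaults={"fuse_nms": true};type=application/json' \
  -F "image=@photo-1.jpg;type=image/jpeg" \
  -F "image=@photo-2.jpg;type=image/jpeg"
```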
> [!IMPORTANT]
> Since we have `inference-models` and one model may have multiple model-packages, `model_package_id` is a natural candidate for a structured query param - letting clients specify which exact model package they want, altering the default auto-loading choice. We can also decide that certain auto-loading parameters should be possible to pass (although we need to decide on that relatively fast due to engineering work in progress and the impact on the model manager).
Probably all relevant parameters - I'm not sure choosing only certain ones makes sense. If the client is advanced enough to decide on those, they probably want the option of full control.
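For illustration, pinning an exact model package through a structured query param could look like this (the parameter placement and the package id value are assumptions, not settled design):

```bash
# Hypothetical: override the auto-selected package for "some/model-id"
curl -X POST "https://serverless.roboflow.com/v2/models/infer?model_id=some/model-id&model_package_id=onnx-trt-fp16" \
  -F "image=@photo.jpg;type=image/jpeg" \
  -F 'inputs={"confidence": 0.5};type=application/json'
```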
> [!IMPORTANT]
> There is a security issue **embedded in accepting URLs as inputs - especially on the platform.** We have accepted the risk of being a middle-man in a DDoS attack so far, and that is likely to remain the case in the future (for user convenience), but it would be good for all parties involved in the discussion to recognize and acknowledge this risk - to avoid surprises in the future.
I see we have the `ALLOW_URL_INPUT_WITHOUT_FQDN` and `ALLOW_NON_HTTPS_URL_INPUT` vars. That, plus timeouts and image size checks, is IMO ok.
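For reference, a locked-down self-hosted setup along those lines might look like the following. The two `ALLOW_*` variables are the existing ones named above; the timeout and size-limit variables are illustrative placeholders, not real settings:

```bash
# Existing flags mentioned in this comment:
export ALLOW_URL_INPUT_WITHOUT_FQDN=false   # reject URLs without a fully-qualified domain name
export ALLOW_NON_HTTPS_URL_INPUT=false      # require https:// for URL inputs
# Illustrative placeholders for the additional mitigations (names are hypothetical):
export URL_INPUT_FETCH_TIMEOUT_SECONDS=5
export URL_INPUT_MAX_SIZE_MB=20
```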
What does this PR do?
Related Issue(s):
Type of Change
Testing
Test details:
Checklist
Additional Context