
Inference after training the model #10

Closed

LSQI15 opened this issue Jul 27, 2020 · 4 comments

LSQI15 commented Jul 27, 2020

Are there any ways to do inference/predictions using the latest weights after the model is trained?
I am able to do predictions during the training process using the Custom API at port 8099. However, that port is also closed once training finishes.
Thanks!

@hadikoub (Member)

Yes, sure. You can use our BMW YOLOv4 Inference API (CPU or GPU), where you can put your trained model (read the README in those repos for a clear view):
BMW-YOLOv4-Inference-CPU: https://github.com/BMW-InnovationLab/BMW-YOLOv4-Inference-API-CPU.git
BMW-YOLOv4-Inference-GPU: https://github.com/BMW-InnovationLab/BMW-YOLOv4-Inference-API-GPU.git
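
Once the API is up with your trained model mounted, a quick single-image sanity check looks roughly like this (a minimal sketch; it assumes the API's default port 4343 and a per-image /models/<model-name>/predict route, so check the README of the repo you use for the exact endpoint):

import requests

# Assumed host, port, and route; adjust to your deployment
url = 'http://localhost:4343/models/<model-name>/predict'

# Send one test image as multipart form data under the 'input_data' field
with open('<path-to-a-test-image>', 'rb') as f:
    response = requests.post(url, files={'input_data': f})

print(response.json())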

LSQI15 (Author) commented Jul 31, 2020

Great to know! Thanks a lot!

LSQI15 closed this as completed Jul 31, 2020

lpinuer commented Mar 11, 2021

Hi there, I want to know how to run detection on several images without using the API, just locally. Or is it possible to run detection on a folder of images using the BMW-YOLOv4-Inference-API-GPU?

@hadikoub (Member)

In the BMW-YOLOv4-Inference-API-GPU we have an endpoint called /models/{model_name}/predict_batch; see https://github.com/BMW-InnovationLab/BMW-YOLOv4-Inference-API-GPU/blob/96f939a654f4a761323745b05af6a6ea3a6acc80/src/main/start.py#L150

Here is an example of calling this endpoint using Python:

import requests
import os

# url = 'http://<ip-inference-api>:<port>/models/<model-name>/predict_batch'
url = 'http://localhost:4343/models/<model-name>/predict_batch'

images_dir = '<path-of-folder-where-images-are-stored>'

# Build the multipart payload: one ('input_data', file) tuple per image
files_list = []
for image_name in os.listdir(images_dir):
    image_path = os.path.join(images_dir, image_name)
    files_list.append(('input_data', open(image_path, 'rb')))

# General format:
# files_list = [
#     ('input_data', open('foo.png', 'rb')),
#     ('input_data', open('bar.png', 'rb'))]

response = requests.post(url, files=files_list)
print(response.json())

The response should look like this:

{
    "data": [
        [
            {
                "ObjectClass": "dog",
                "Confidence": 0.5256381630897522
            },
            {
                "ObjectClass": "cat",
                "Confidence": 0.4743618667125702
            }
        ],
        [
            {
                "ObjectClass": "dog",
                "Confidence": 0.5236366391181946
            },
            {
                "ObjectClass": "cat",
                "Confidence": 0.47636330127716064
            }
        ],
        [
            {
                "ObjectClass": "cat",
                "Confidence": 0.5182664394378662
            },
            {
                "ObjectClass": "dog",
                "Confidence": 0.4817335307598114
            }
        ]
    ],
    "error": "",
    "success": true
}
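
To consume this response in Python, a small sketch (it assumes data holds one detection list per uploaded image, in the order the files were sent):

# Parse the batch response shown above
results = response.json()

if results["success"]:
    # One detection list per image, in upload order
    for detections in results["data"]:
        for detection in detections:
            print(detection["ObjectClass"], detection["Confidence"])
else:
    print("Inference failed:", results["error"])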

Another method is to send each image in a separate request inside a loop, for example:
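
A minimal sketch of that per-image approach (it reuses the assumed single-image /models/<model-name>/predict route from above; verify the exact route in the repo's start.py):

import requests
import os

url = 'http://localhost:4343/models/<model-name>/predict'

images_dir = '<path-of-folder-where-images-are-stored>'

# One request per image: simpler error handling, at the cost of more round trips
for image_name in os.listdir(images_dir):
    image_path = os.path.join(images_dir, image_name)
    with open(image_path, 'rb') as f:
        response = requests.post(url, files={'input_data': f})
    print(image_name, response.json())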

hadikoub pinned this issue Mar 12, 2021