
Batch support for more than one image #3339

Closed
andviane opened this issue Apr 29, 2021 · 3 comments

Comments

@andviane

andviane commented Apr 29, 2021

Feature requests should first be proposed on the forum.

Link to forum discussion.

https://forums.fast.ai/t/how-to-do-batch-inference/39201

This refers to old code that does not make my life any easier. The desired functionality remains the same and still seems to be missing.

Is your feature request related to a problem? Please describe.
The camera moves over an 8-meter distance in 2 seconds, with its location reported every 1 mm, giving 8000 locations. There is an object at one of these locations, and we need to find the location where the object is directly in front of the camera. The images are 100x200 px. With resolution this low, one-pixel accuracy is required. There is no real-time requirement: we can collect all 8000 images first and process them in a batch, but the analysis must complete within a few seconds at most.

Describe the solution you'd like
Semantic segmentation works well: I can find the location of the object without problems by taking the median location of the pixels reported as belonging to the object. But doing this for each image separately is too slow. We can afford a huge GPU card that can probably take very many images in one go.
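For reference, the median-of-object-pixels step described above can be sketched like this (a minimal NumPy example; the function name `object_location` and the class-ID mask convention are assumptions, not part of any fast.ai API):

```python
import numpy as np

def object_location(mask: np.ndarray, object_class: int):
    """Median (row, col) of pixels labelled `object_class`, or None if absent.

    `mask` is assumed to be a 2D array of per-pixel class IDs, as produced
    by argmax over a segmentation model's class scores.
    """
    rows, cols = np.nonzero(mask == object_class)
    if rows.size == 0:
        return None  # object not visible in this frame
    return float(np.median(rows)), float(np.median(cols))
```

The median is robust to a few misclassified pixels, which matters when one-pixel accuracy is required on a 100x200 px frame.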

Describe alternatives you've considered
Most neural networks support batch processing without problems. It was a real surprise for me to realize that fast.ai keeps this possibility to itself.

  1. Hacking the following piece of code in the learner:
    def predict(self, item, rm_type_tfms=None, with_input=False):
        dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
        inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
        i = getattr(self.dls, 'n_inp', -1)
        inp = (inp,) if i==1 else tuplify(inp)
        dec = self.dls.decode_batch(inp + tuplify(dec_preds))[0]
        dec_inp,dec_targ = map(detuplify, [dec[:i],dec[i:]])
        res = dec_targ,dec_preds[0],preds[0]
        if with_input: res = (dec_inp,) + res
        return res

It is obvious that this does batch prediction with a single item, but I do not find the code very self-documenting. Could anybody who understands it at least give a hint?

  2. Run the model separately from fast.ai. This requires understanding how to extract and convert the already trained model with its parameters.

  3. Build an artificial composite image, placing frames side by side, and then split the returned segmentation mask again. This looks rather dirty; what would a code reviewer say?
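Alternative 2 above (running the trained network outside fast.ai) can be sketched in plain PyTorch. This is a hedged sketch, not fast.ai code: it assumes `model` is the trained segmentation network (e.g. `learn.model`) and `images` is a float tensor of shape `(N, C, H, W)` already normalised the same way as during training:

```python
import torch

@torch.no_grad()
def batch_segment(model: torch.nn.Module, images: torch.Tensor,
                  batch_size: int = 64) -> torch.Tensor:
    """Run `model` over `images` in chunks; return per-pixel class IDs (N, H, W)."""
    model.eval()
    masks = []
    for start in range(0, images.shape[0], batch_size):
        logits = model(images[start:start + batch_size])  # (B, classes, H, W)
        masks.append(logits.argmax(dim=1))                # (B, H, W)
    return torch.cat(masks)
```

With 8000 frames of 100x200 px, even a modest `batch_size` keeps the GPU busy and avoids the per-image `predict` overhead.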

@tcapelle
Contributor

tcapelle commented Apr 29, 2021

Just use the test_dl approach. You can create a DataLoader easily and run inference on it with a batch size. Take a look at the end of
https://docs.fast.ai/tutorial.pets.html#Adding-a-test-dataloader-for-inference
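A minimal sketch of this route, assuming `learn` is an already trained segmentation `Learner` and `image_paths` lists the frames (not runnable standalone; it relies on a trained fast.ai model):

```python
# Batched inference via a test DataLoader, instead of per-image predict()
dl = learn.dls.test_dl(image_paths, bs=64)  # unlabeled, batched DataLoader
preds, _ = learn.get_preds(dl=dl)           # class scores for all items
masks = preds.argmax(dim=1)                 # per-pixel class IDs per frame
```

`test_dl` reuses the validation-time transforms, so the inputs are preprocessed consistently with training.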

@muellerzr
Contributor

@andviane what Thomas has linked is correct, you should use a test_dl to perform batch inference. I have an example for semantic segmentation here: https://walkwithfastai.com/Segmentation#Inference

@jph00 think this can be closed, as my understanding is that test_dl coupled with get_preds should be the desired functionality they want

@jph00 jph00 closed this as completed May 3, 2021
@andviane
Author

We managed to understand and repurpose the fragment of code I was talking about when opening this ticket. The problem is solved for us now. However, I would like this approach to be made more user-friendly: I expect to be able to pass an array of images and get back an array of predictions.
