This refers to old code that does not make my life any easier. The desired functionality is unchanged and still seems to be missing.
Is your feature request related to a problem? Please describe.
The camera moves over an 8 meter distance in 2 seconds, with its location reported every 1 mm, so 8000 locations. There is an object at one of these locations. We need to find the location where the object is right in front of the camera. The images are 100x200 px. With resolution this low, one-pixel accuracy is required. There is no requirement to do this in real time; we can collect all 8000 images first and process them in a batch. But the analysis must take a few seconds at most.
Describe the solution you'd like
Semantic segmentation works well. I can find the location of the object without any problem by taking the median of the coordinates of the pixels reported as belonging to the object. But running this for each image separately is too slow. We can afford a large GPU card that can probably process very many images in one go.
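That localisation step can be sketched as follows, assuming the model's mask arrives as a 2-D array of 0/1 labels per frame (the names `object_column` and `mask` are illustrative, not part of fast.ai):

```python
from statistics import median

def object_column(mask):
    """Return the median column index of pixels labelled as the object.

    `mask` is a list of rows, each a list of 0/1 labels, as a
    segmentation model would produce for one 100x200 frame. A few
    mislabelled pixels barely move the median, unlike a mean.
    """
    cols = [x for row in mask for x, label in enumerate(row) if label == 1]
    if not cols:
        return None  # object not visible in this frame
    return median(cols)

# Toy 3x6 mask with the object around columns 2-3:
mask = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
]
```

Doing the same per-row gives the vertical coordinate; repeating over all 8000 frames and picking the frame where the object is centred gives the camera location.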
Describe alternatives you've considered
Most neural networks support batch processing without any problem. It was a big surprise for me to realize that fast.ai keeps this possibility to itself.
Hacking the following piece of code in the learner:
def predict(self, item, rm_type_tfms=None, with_input=False):
    # Wrap the single item into a one-element test DataLoader.
    dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
    # Batch inference over that loader: raw inputs, raw predictions,
    # (ignored) targets, and decoded predictions.
    inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
    # Number of input elements the DataLoaders expects.
    i = getattr(self.dls, 'n_inp', -1)
    inp = (inp,) if i==1 else tuplify(inp)
    # Reverse the pipeline transforms to get human-readable results.
    dec = self.dls.decode_batch(inp + tuplify(dec_preds))[0]
    # Split the decoded tuple into its input part and target part.
    dec_inp,dec_targ = map(detuplify, [dec[:i],dec[i:]])
    # [0] drops the batch dimension: only one item was passed in.
    res = dec_targ,dec_preds[0],preds[0]
    if with_input: res = (dec_inp,) + res
    return res
It is obvious that this does batch prediction with a single item, but I do not find the code very self-documenting. Could anybody who understands it at least give a hint?
Run the model separately from fast.ai. I would need to understand how to extract and convert the already-trained model together with its parameters.
Build an artificial composite image, placing the frames side by side, and then split the returned segmentation mask again. This looks rather dirty; what would a code reviewer say?
We managed to understand and re-purpose the fragment of code I was talking about when opening the ticket. The problem is solved for us now. However, I would still like this approach to be made more user friendly: I expect to be able to pass an array of images and get back an array of predictions.
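For the record, re-purposing the `predict` fragment boils down to something like the sketch below. `batch_predict` is a hypothetical name, not a fast.ai API; it assumes a trained fast.ai `Learner` and uses the same `test_dl`/`get_preds` calls that `predict` uses, just without restricting the DataLoader to a single item:

```python
def batch_predict(learn, items):
    """Run inference on many items at once with a fastai Learner.

    `learn` is a trained fastai Learner; `items` is a list of inputs
    (e.g. image file paths) of the kind the Learner was trained on.
    Returns the raw per-item predictions and the decoded ones.
    """
    # Build a test DataLoader over all items; the transforms used
    # during training are applied automatically.
    dl = learn.dls.test_dl(items, num_workers=0)
    # Batched forward passes over the whole set in one call.
    preds, _, dec_preds = learn.get_preds(dl=dl, with_decoded=True)
    return preds, dec_preds
```

For segmentation, `dec_preds` would then hold one mask per input image, ready for the median-based localisation step.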
Feature requests should first be proposed on the forum.
Link to forum discussion.
https://forums.fast.ai/t/how-to-do-batch-inference/39201