
How to adapt the method for a stream? #1

Closed
thoppe opened this issue Sep 9, 2019 · 4 comments
Labels
question Further information is requested

Comments


thoppe commented Sep 9, 2019

The examples work well for a fixed set of images, but I'm having trouble adjusting them for a stream. I've asked the general question on SO, but I was wondering if there was a cleaner way to do this with the model you've got.

I don't fully understand why we have to create a data object in the first place -- especially with files in both "train" and "valid":

    data = ImageDataBunch.from_folder(
        path,
        "train",
        "valid",
        size=(375, 666),
        ds_tfms=get_tfms(),
        bs=1,
        resize_method=ResizeMethod.SQUISH,
        num_workers=0,
    ).normalize(imagenet_stats)

Couldn't we just load the model, apply the preprocessing, and then output the result?

Thanks for the help, the model looks to be amazing!


rsomani95 (Owner) commented Sep 9, 2019

The data object needs to be created only if you want to generate heatmaps. It's a hacky way of doing it, but it's the only way I could get it to work, for now.

What's your objective? Do you want to

  1. Get the model's predictions?
  2. Generate heatmaps?

If it's just getting predictions, then you should be able to do that from a video stream without creating an ImageDataBunch. Take a look at the save_preds function in the get-preds.py file here.
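Conceptually, per-frame prediction needs no dataset object at all: pull a frame, preprocess it, call the model. A minimal sketch of that loop (the frame source and the classifier below are stand-ins for illustration, not the repo's actual save_preds API):

```python
import numpy as np

def frame_source(n_frames=3, h=375, w=666):
    """Stand-in for a video stream: yields HWC uint8 RGB frames."""
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        yield rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

def preprocess(frame):
    """Scale to [0, 1] float32, mirroring pil2tensor(...).div_(255)."""
    return frame.astype(np.float32) / 255.0

def predict_stream(model, frames):
    """Run the model on each preprocessed frame as it arrives."""
    return [model(preprocess(f)) for f in frames]

# Dummy "model" that just reports mean brightness of a frame.
preds = predict_stream(lambda x: float(x.mean()), frame_source())
```

Swapping the lambda for `learn.predict` (after wrapping the array as a fastai Image) gives the streaming version without ever building an ImageDataBunch.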

Does that help?

This is a useful feature to add to the repo. I'll work on the code and push it soon.

Thanks for the help, the model looks to be amazing!

Thanks! Happy to help :)


thoppe commented Sep 9, 2019

Thanks for the response. I only need (1), the model's predictions. I used get-preds.py as a template for loading my own images. It seems to call initialise.py, which in turn creates a data = ImageDataBunch.from_folder, which prompted my question. What I need is a way to load the "learner"

    learn = cnn_learner(data, models.resnet50, metrics=[accuracy], pretrained=True)
    learn = learn.to_fp16()
    learn.load(path/'models'/'shot-type-classifier')

without having to create either a data element, or an empty one that only has the transforms. Right now, I'm dumping each image to a temporary file and then reading it back in with open_image! This applies the transforms, but it's a huge waste of IO as I've already got the image loaded as a numpy array.

While you're here, it might be nice to note in the documentation what image format you're using:

  1. Width by height, or height by width?
  2. RGB or BGR?
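For reference, the PIL/numpy convention (which fastai follows) is an array shaped (height, width, channels) in RGB order, whereas OpenCV loads BGR; converting between the two is a single channel flip. A quick sanity check in plain numpy, independent of the repo's code:

```python
import numpy as np

# A 2x3 "image": height 2, width 3, 3 channels (HWC layout).
rgb = np.zeros((2, 3, 3), dtype=np.uint8)
rgb[..., 0] = 255  # pure red in RGB order

# OpenCV-style BGR is the same data with the channel axis reversed.
bgr = rgb[..., ::-1]

assert rgb.shape == (2, 3, 3)             # (height, width, channels)
assert bgr[0, 0].tolist() == [0, 0, 255]  # red lands in the last (R) slot of BGR
```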

@rsomani95 rsomani95 added question Further information is requested help wanted Extra attention is needed labels Sep 11, 2019
rsomani95 (Owner) commented

So it turns out the method I've put in place for predictions is far from optimal.
Reorganising the directory will take some time but meanwhile, I wrote some code that should help you.

First, download the .pkl model. I've included the link in the get_data_model.sh script. As mentioned here, this is the correct way to use a model for inference.

In my testing, this worked with arrays that were shaped (height, width, channels); the size of the array doesn't matter (images don't need to be (375, 666)).

    from fastai.vision import *

    learn = load_learner('~/shot-type-classifier/models', file='shot-type-classifier.pkl')

    ## Predict from an image on disk
    img = '~/test.jpg'
    learn.predict(open_image(img))

    ## Predict from a numpy array
    # arr.shape --> (height, width, 3)
    img = PIL.Image.fromarray(arr).convert('RGB')
    img = pil2tensor(arr, np.float32).div_(255)  # Convert to torch.Tensor
    img = Image(img)  # Convert to fastai.vision.image.Image

    learn.predict(img)[0]  # --> Shot Type
    learn.predict(img)[2]  # --> Probabilities

A further optimisation would be converting directly from a numpy.ndarray to a torch.FloatTensor, but in my brief testing that gave some strange errors that I haven't gotten around to yet.
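Errors aside, the shape bookkeeping for that direct conversion is just HWC → CHW plus scaling. A sketch of the layout transform in plain numpy (torch.from_numpy would wrap the result into the tensor pil2tensor produces; the function name here is hypothetical):

```python
import numpy as np

def hwc_to_chw_float(arr):
    """(H, W, 3) uint8 -> (3, H, W) float32 in [0, 1],
    the same layout pil2tensor(...).div_(255) produces."""
    return np.ascontiguousarray(arr.transpose(2, 0, 1)).astype(np.float32) / 255.0

arr = np.full((375, 666, 3), 255, dtype=np.uint8)  # all-white test frame
chw = hwc_to_chw_float(arr)

assert chw.shape == (3, 375, 666)
assert chw.dtype == np.float32 and chw.max() == 1.0
```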

@rsomani95 rsomani95 removed the help wanted Extra attention is needed label Sep 11, 2019

thoppe commented Sep 13, 2019

Thanks, your example helped a lot! I found you didn't need the line img = PIL.Image.fromarray(arr).convert('RGB').

@thoppe thoppe closed this as completed Sep 13, 2019