Add Docker environment & web demo #6
Conversation
Thank you so much for the PR @cjwbw! The demo looks really cool!
Just added a couple of small comments before we can merge the code!
Download the weights in ./checkpoints and the ImageNet 1K ID to class mappings beforehand
wget https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json -O in_cls_idx.json
Can we download the class mapping in the code instead of in the instructions?
Also, can the models be loaded directly from torch.hub instead of having users download them manually?
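Downloading the mapping at runtime could look something like the sketch below. This is only an illustration of the suggestion, not code from the PR; the helper name `load_class_index` and the local cache path are assumptions, while the S3 URL is the one from the README instructions above.

```python
import json
import os
import urllib.request

# URL from the PR's README instructions; the cache path is an assumption.
CLASS_INDEX_URL = (
    "https://s3.amazonaws.com/deep-learning-models/"
    "image-models/imagenet_class_index.json"
)

def load_class_index(path="in_cls_idx.json", url=CLASS_INDEX_URL):
    """Download the ImageNet class mapping once, then reuse the cached copy."""
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    with open(path) as f:
        # Maps class index to (WordNet ID, human-readable label) pairs,
        # e.g. {"0": ["n01440764", "tench"], ...}
        return json.load(f)
```

Calling this in the predictor's setup step would remove the manual wget from the instructions while still downloading only on the first run.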
Hi @mannatsingh,
Thank you for your comment! Yes, the models can be loaded from the hub on the fly, but we recommend downloading the checkpoints and preparing them in the Cog environment beforehand so that inference is fast. You can verify on the website that only the first-time setup takes time; consecutive runs are speedy because of this.
Users who try the web demo do not need to download the checkpoints - everything is already there! That note is just for people who wish to implement the models themselves for a Replicate demo too :)
Understood! I mentioned this because torch.hub also uses a cache to avoid re-downloads (the default path is under ~/.cache, but it can be overridden with environment variables). But it's not a big deal :)
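For reference, pointing torch.hub's cache at a directory baked into the Docker image would give the same download-once behaviour. A minimal sketch, assuming a `/src/checkpoints` cache directory inside the image (the directory and the exact model name are assumptions based on the SWAG README):

```python
import os

# torch.hub stores downloaded checkpoints under $TORCH_HOME/hub
# (default: ~/.cache/torch). Setting TORCH_HOME before the first
# hub load makes every later load reuse the cached weights.
os.environ["TORCH_HOME"] = "/src/checkpoints"  # hypothetical cache dir

# With the env var set, a hub load like the following (from the SWAG
# README) would download weights only on the first call:
#   import torch
#   model = torch.hub.load("facebookresearch/swag", model="vit_b16_in1k")
```

`torch.hub.set_dir(...)` can be used instead of the environment variable if the path is easier to set from Python.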
@@ -0,0 +1,105 @@
"""
Could you add a license header to this file, similar to https://github.com/facebookresearch/SWAG/blob/main/imagenet_1k_eval.py?
Just added!
add license
Thanks again for the PR @cjwbw!
Thanks for merging @mannatsingh! We'd appreciate it if you could claim the page too, so we can publish and feature it on our website. You can find the 'Claim this model' button at the top of the page https://replicate.com/facebookresearch/swag. Anyone with a GitHub account in the facebookresearch org can claim the page :) Cheers!
Done, it's pending review :)
Thanks for claiming the model, @mannatsingh! You've been added to the @facebookresearch organization on Replicate!
This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.
This also means we can make a web page where other people can try out your model! View it here: https://replicate.com/facebookresearch/swag. We enable selecting different models for inference, and you can find the Dockerfile under the 'Run model with Docker' tab.
We have added some examples to the page, but do claim it so that you own the page, can customise the example gallery as you like, and can push any future updates to the web demo - and we'll feature it on our website and tweet about it too.
In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊