FFCV Imagenet Image Quality vs. Native Imagenet Image Quality #44
Hello! If your goal is speed, I would consider using more RAW images and fewer JPEGs as a first measure. If you have a limited amount of RAM, then lowering the JPEG quality might be your best bet. I would give 50% quality a try (make sure you process the validation dataset the same way or your accuracy will tank!); if that's not good enough, you can try 80% or 90%.
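To get a feel for the storage side of this tradeoff, here is a small self-contained sketch using Pillow rather than FFCV itself; the synthetic image and the quality values are illustrative only:

```python
import io

import numpy as np
from PIL import Image

# Synthetic 256x256 RGB image standing in for a dataset sample.
rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (256, 256, 3), dtype=np.uint8))

raw_bytes = 256 * 256 * 3  # uncompressed ("RAW") storage cost
sizes = {}
for quality in (50, 80, 90):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    sizes[quality] = buf.tell()

# Lower quality settings trade visual fidelity for smaller encoded size,
# while RAW pays the full uncompressed cost in exchange for fast reads.
print("raw:", raw_bytes)
print("jpeg:", sizes)
```

The same pattern applies inside an FFCV dataset: RAW samples cost `H*W*3` bytes but skip the decode step, while JPEG samples shrink with decreasing quality.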
Thanks! If my priority is accuracy (let's say the evaluation validation set doesn't use FFCV), what would the ideal settings be? Would I set the JPEG quality to 100% and also use more RAW images?
It can be a problem to use different pipelines for the training dataset and the validation dataset (think about what would happen if you used different normalization, for example). If that's your constraint, then definitely use as much RAW as you can (100% if size isn't a problem for you). RAW isn't compressed, so it's really fast if you have enough RAM to keep it cached (or if you have a really good SSD or network-attached storage).
Just to add: for our ImageNet results (the speed/accuracy tradeoff scatterplot and table), we use:
Generally, if you want higher accuracy, you should use larger images and higher JPEG quality. As a concrete result, we found that going from 350px to 500px images yielded a roughly 1% increase in ImageNet val set accuracy. We also found that pretrained classifiers started performing poorly on held-out images at JPEG qualities below 90; this might not have any bearing on training, but it is something to keep in mind.
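One simple way to quantify the degradation that lower JPEG quality introduces is PSNR between the original and recompressed pixels. A minimal sketch with Pillow and NumPy (the blurred synthetic image and the quality values are illustrative, not the settings used in the paper):

```python
import io

import numpy as np
from PIL import Image, ImageFilter

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between two uint8 images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0**2 / mse)

rng = np.random.default_rng(0)
# Blurred noise so the image has the smooth statistics JPEG is tuned for.
base = Image.fromarray(rng.integers(0, 256, (256, 256, 3), dtype=np.uint8))
base = base.filter(ImageFilter.GaussianBlur(3))
ref = np.asarray(base)

results = {}
for quality in (50, 90):
    buf = io.BytesIO()
    base.save(buf, format="JPEG", quality=quality)
    out = np.asarray(Image.open(buf).convert("RGB"))
    results[quality] = psnr(ref, out)
    print(quality, round(results[quality], 1))
```

Higher quality yields higher PSNR; running a held-out evaluation set through the same sweep is a quick sanity check before committing to a dataset-wide quality setting.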
IMHO "RAW" is really not a good term to use here (maybe worth another issue?). In imaging, "raw" means the very first step of the acquisition pipeline (before demosaicing, for a CFA sensor). I was confused at first, then realized the distinction is actually: original ImageNet JPEGs decompressed and saved as RGB vs. additionally compressed JPEGs saved in encoded format*. My question is: why do you perform the additional compression? It alters the visual quality of the images, and it probably doesn't buy much space compared to parsing the JPEG files and grabbing the encoded stream without the additional compression.

* ImageNet is a real zoo of JPEGs and contains double- and maybe triple-JPEG-compressed images.
Great question! We performed the additional compression step to let users customize JPEG quality and to simplify recompression when resizing images in our pipeline. We found that at 90 quality, recompression barely changed the val set accuracy. It would be great if we could work with the original encoded stream without any recompression; we could consider this in a later update (and of course pull requests are welcome :).
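The distinction between passing the encoded stream through and recompressing it can be shown in a few lines. This is a generic Pillow sketch, not FFCV's writer: the synthetic image stands in for an on-disk ImageNet JPEG:

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

# "On-disk" encoded stream standing in for an original ImageNet JPEG.
original = io.BytesIO()
img.save(original, format="JPEG", quality=90)
original_bytes = original.getvalue()

# Passthrough: store the encoded bytes as-is -- bit-identical, no generation loss.
stored = bytes(original_bytes)

# Recompression: decode + re-encode, another round of lossy quantization.
reenc = io.BytesIO()
Image.open(io.BytesIO(original_bytes)).save(reenc, format="JPEG", quality=90)

identical = reenc.getvalue() == original_bytes
print("passthrough identical:", stored == original_bytes)
print("recompressed identical:", identical)
```

With a lossy decode/encode cycle the bytes (and pixels) generally change even at the same nominal quality, which is the generation loss the passthrough approach avoids.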
@YassineYousfi You are totally correct regarding RAW. We struggled to find a term that is correct, relatively easy for most users to understand, and not too long to type. Happy to talk/receive suggestions in a separate thread.

@lengstrom is 100% right about the additional compression: we wanted to give users that possibility and have an API that makes importing datasets as straightforward as possible, and taking IterableDatasets and re-compressing seemed the best compromise. We are more than happy to hear suggestions/receive PRs on how to let users do that while keeping the API as simple as possible!
Thanks, @lengstrom and @GuillaumeLeclerc, for the detailed answers. It makes a lot of sense; the need for resizing makes it a little trickier. Cropping can easily be done losslessly on JPEGs, while resizing might require some more work (like the SmartScale extension introduced in libjpeg8). Happy to discuss in a separate thread. I will look into how easily this can be done in FFCV and potentially open a PR.
Keep us up to date, @YassineYousfi. I also wanted to leverage JPEG's ability to decode at lower resolutions to reduce the amount of resizing done at training time, but I have been too busy to add it yet :/
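For the curious, this reduced-resolution decode is already exposed in Pillow via `Image.draft`, which asks libjpeg to decode at a reduced DCT scale (1/2, 1/4, or 1/8) instead of decoding at full size and then resizing. A small sketch with an illustrative synthetic JPEG:

```python
import io

import numpy as np
from PIL import Image

# Encode a synthetic 800x800 JPEG to decode from.
rng = np.random.default_rng(0)
big = Image.fromarray(rng.integers(0, 256, (800, 800, 3), dtype=np.uint8))
buf = io.BytesIO()
big.save(buf, format="JPEG", quality=90)
buf.seek(0)

im = Image.open(buf)
# draft() must be called before the pixel data is loaded; it configures
# libjpeg to decode at the closest DCT scale >= the requested size,
# skipping most of the inverse-DCT work.
im.draft("RGB", (100, 100))
im.load()
print(im.size)  # decoded at 1/8 scale
```

Decoding at 1/8 scale touches far fewer DCT coefficients than a full decode followed by a resize, which is the saving being discussed for training-time resizing.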
I think this thread has been dealt with. @YassineYousfi Feel free to start a new conversation about these specific topics. |
Hello! I see that FFCV offers a lot of options for the quality of the dataset, e.g. in the imagenet example:
One thing I'm curious about is the effect of these quality options on the training results, as I'm interested in reproducing ImageNet results, but faster, using FFCV. What would be the recommended settings if you wanted to reproduce native ImageNet quality precisely (barring the crop size)?
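For orientation, these quality options are set on the writer's field definitions. The following is a rough, hypothetical sketch based on memory of the FFCV ImageNet example: the parameter names and defaults may not match the current API exactly, and the output path and `my_dataset` are placeholders, so check the FFCV docs before copying:

```python
# Hypothetical sketch -- verify names/signatures against the FFCV docs.
from ffcv.writer import DatasetWriter
from ffcv.fields import IntField, RGBImageField

writer = DatasetWriter('/path/to/imagenet_train.beton', {
    # write_mode='smart' mixes RAW and JPEG storage per sample;
    # 'raw' maximizes fidelity and read speed, 'jpg' minimizes storage.
    'image': RGBImageField(write_mode='smart',
                           max_resolution=500,        # larger side length
                           jpeg_quality=90,           # quality for JPEG-stored samples
                           compress_probability=0.5), # fraction stored as JPEG
    'label': IntField(),
})
writer.from_indexed_dataset(my_dataset)  # any indexed (image, label) dataset
```

Per the discussion above, pushing `max_resolution` and `jpeg_quality` up (or using more RAW storage) moves you closer to native ImageNet quality at the cost of dataset size.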