Add ImageNet #146

Open · wants to merge 26 commits into master

Conversation

@adrhill commented Jun 23, 2022

Draft PR to add the ImageNet 2012 Classification Dataset (ILSVRC 2012-2017) as a ManualDataDep.
Closes #100.


Since ImageNet is very large (>150 GB) and requires signing up and accepting the terms of access, it can only be added manually. The ManualDataDep instruction message for ImageNet includes the following:

  • instructions on creating symlinks to existing ImageNet datasets (e.g. for use on shared compute clusters); a minimal sketch of this is shown below the list
  • instructions on downloading and unpacking ImageNet from scratch, based on the PyTorch guide, as linked by @CarloLucibello
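For the shared-cluster case, such a symlink setup could look roughly like the sketch below. The shared path is a placeholder and ~/.julia/datadeps is only the default DataDeps location; the authoritative steps are the ones in the ManualDataDep message.

shared_imagenet = "/shared/datasets/ILSVRC2012"                 # placeholder path to an existing copy
target = joinpath(homedir(), ".julia", "datadeps", "ImageNet")  # default DataDeps load path
mkpath(dirname(target))                                         # ensure the datadeps folder exists
ispath(target) || symlink(shared_imagenet, target)              # reuse the shared copy instead of re-downloading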

When unpacked "PyTorch-style", the ImageNet dataset is assumed to be laid out as ImageNet -> split folder -> WordNet ID folder -> class samples as JPEG files, e.g.:

ImageNet
├── train
├── val
│   ├── n01440764
│   │   ├── ILSVRC2012_val_00000293.JPEG
│   │   ├── ILSVRC2012_val_00002138.JPEG
│   │   └── ...
│   ├── n01443537
│   └── ...
├── test
└── devkit
    ├── data
    │   ├── meta.mat
    │   └── ...
    └── ...

Current limitations

Since ImageNet is too large to precompute all preprocessed images and keep them in memory, the dataset instead precomputes a list of all file paths. Calling Base.getindex(d::ImageNet, i) loads the image via ImageMagick.jl and preprocesses it on the fly. This adds dependencies on ImageMagick.jl and Images.jl via LazyModules.

This also means that the ImageNet struct currently doesn't contain features (which might be a requirement for SupervisedDatasets?)
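Illustratively, the lazy-loading pattern described above boils down to something like the following sketch (not the PR's actual code; preprocess stands in for the resize/normalize step):

using FileIO

struct LazyImageNet
    paths::Vector{String}                  # only the file paths are kept in memory
end

Base.getindex(d::LazyImageNet, i::Integer) =
    preprocess(FileIO.load(d.paths[i]))    # decode and preprocess a sample only on access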

@codecov-commenter commented Jun 23, 2022

Codecov Report

Merging #146 (09d5be4) into master (86dabc4) will decrease coverage by 1.33%.
The diff coverage is 6.75%.


@@            Coverage Diff             @@
##           master     #146      +/-   ##
==========================================
- Coverage   48.56%   47.23%   -1.33%     
==========================================
  Files          44       47       +3     
  Lines        2261     2335      +74     
==========================================
+ Hits         1098     1103       +5     
- Misses       1163     1232      +69     
Impacted Files Coverage Δ
src/datasets/vision/imagenet_reader/preprocess.jl 0.00% <0.00%> (ø)
.../datasets/vision/imagenet_reader/ImageNetReader.jl 5.00% <5.00%> (ø)
src/datasets/vision/imagenet.jl 7.31% <7.31%> (ø)
src/MLDatasets.jl 100.00% <100.00%> (ø)


@lorenzoh (Contributor) commented

Will be good to have ImageNet support!

I'm wondering if there may be a simpler implementation for this, though. It seems the dataset has the same format as the (derived) ImageNette and ImageWoof datasets. The way those are loaded in FastAI.jl builds on MLUtils.jl primitives, and the same primitives could be used to load ImageNet as follows:

using MLDatasets, MLUtils, FileIO

function ImageNet(dir)
    files = FileDataset(identity, dir, "*.JPEG").paths
    return mapobs((FileIO.load, loadlabel), files)
end

# Get the class from the file path: the parent folder name is the WordNet ID.
# A lookup could be added here to convert the ID to the human-readable name.
loadlabel(file::String) = split(file, "/")[end-1]


data  = ImageNet(IMAGENET_DIR)

# only training set
data  = ImageNet(joinpath(IMAGENET_DIR, "train"))

I'd also suggest using FileIO.jl for loading images, since it will use the faster JpegTurbo.jl under the hood.

If more control over the image loading is desired, like converting to a specific color type upon reading or decoding an image at a smaller size (much faster if it will be downsized during training anyway), one could also use JpegTurbo.jl directly:

using JpegTurbo, ImageCore

function ImageNet(dir; C = RGB{N0f8}, preferred_size = nothing)
    files = FileDataset(identity, dir, "*.JPEG").paths
    return mapobs((f -> JpegTurbo.jpeg_decode(C, f; preferred_size), loadlabel), files)
end

# load as grayscale and smaller image size
data = ImageNet(IMAGENET_DIR; C = Gray{N0f8}, preferred_size = (224, 224))

@adrhill (Author) commented Jun 23, 2022

Thanks a lot, loading smaller images with JpegTurbo is indeed much faster!
I've also added a lookup table wnid_to_label to the metadata. Once you know the label, you can access class names and descriptions by indexing the corresponding metadata entries, e.g. metadata["class_names"][label].
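A quick usage sketch of that lookup (accessor and key names follow the description above and may differ from the final API):

dataset = ImageNet(Float32, :val)
meta  = dataset.metadata                       # assumed to be a Dict-like container
label = meta["wnid_to_label"]["n01440764"]     # WordNet ID -> integer label
meta["class_names"][label]                     # human-readable class name ("tench")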

@adrhill marked this pull request as ready for review June 23, 2022 18:22
@adrhill (Author) commented Jun 23, 2022

JpegTurbo's preferred_size keyword already returns images pretty close to the desired 224x224 size. At the cost of losing a couple of pixels, we could skip the second resizing in resize_smallest_dimension, which allocates, and instead directly center_crop, which is just a view.
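For illustration, such a view-based center crop could look like this (a sketch assuming height × width images, not the PR's exact implementation):

function center_crop(im::AbstractMatrix, h::Integer=224, w::Integer=224)
    H, W = size(im)
    top, left = (H - h) ÷ 2 + 1, (W - w) ÷ 2 + 1
    return @view im[top:(top + h - 1), left:(left + w - 1)]   # no copy, just a view
end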

@adrhill (Author) commented Jun 23, 2022

I've done some local benchmarks:
Current commit cac14d2 with JpegTurbo loading smaller images:

julia> using MLDatasets

julia> dataset = ImageNet(Float32, :val);

julia> @benchmark dataset[1:16]
BenchmarkTools.Trial: 44 samples with 1 evaluation.
 Range (min … max):  104.413 ms … 143.052 ms  ┊ GC (min … max):  7.28% … 18.57%
 Time  (median):     113.164 ms               ┊ GC (median):    10.80%
 Time  (mean ± σ):   115.515 ms ±   9.030 ms  ┊ GC (mean ± σ):  10.46% ±  3.68%

    ▃          █                                                 
  ▇▄█▄▁▁▄▄▇▇▇▄▄█▄▇▄▄▇▁▄▁▄▁▁▄▁▄▄▁▄▁▁▄▄▁▁▁▄▁▁▁▁▁▄▁▄▁▁▁▄▁▁▁▁▁▁▁▁▁▄ ▁
  104 ms           Histogram: frequency by time          143 ms <

 Memory estimate: 131.78 MiB, allocs estimate: 2050.

Without resize_smallest_dimension, using only center_crop:

julia> @benchmark dataset[1:16]
BenchmarkTools.Trial: 57 samples with 1 evaluation.
 Range (min … max):  80.594 ms … 103.226 ms  ┊ GC (min … max):  7.43% … 19.03%
 Time  (median):     86.954 ms               ┊ GC (median):     8.95%
 Time  (mean ± σ):   88.287 ms ±   5.683 ms  ┊ GC (mean ± σ):  10.90% ±  3.57%

    ▄ ▄ ▁▁ █▄  ▁▄   ▁    ▁  ▁▁ ▁   ▄   ▁ ▁   ▁           ▁      
  ▆▆█▆█▁██▁██▆▁██▆▁▆█▁▁▆▁█▁▆██▆█▁▁▁█▆▁▆█▁█▁▁▁█▁▁▁▆▁▁▁▁▁▁▁█▁▁▁▆ ▁
  80.6 ms         Histogram: frequency by time          101 ms <

 Memory estimate: 115.96 MiB, allocs estimate: 1826.

Additionally using StackViews.jl for batching:

julia> @benchmark dataset[1:16]
BenchmarkTools.Trial: 95 samples with 1 evaluation.
 Range (min … max):  47.971 ms … 73.503 ms  ┊ GC (min … max): 0.00% … 8.68%
 Time  (median):     51.116 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   52.903 ms ±  4.922 ms  ┊ GC (mean ± σ):  4.69% ± 5.81%

  ▂ ▂▄█                                                        
  █▇█████▃▅▃▅▅▆▃█▇▇▅▅▅▁▅▁▁▃▃▆▆▁▁▁▁▁▅▃▃▁▃▁▁▁▁▁▁▁▁▁▃▁▁▁▃▁▁▁▁▁▁▃ ▁
  48 ms           Histogram: frequency by time          70 ms <

 Memory estimate: 38.43 MiB, allocs estimate: 1499.
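For reference, the StackViews.jl batching amounts to roughly the following (illustrative sketch; the PR's actual code may differ):

using StackViews

imgs  = [rand(Float32, 224, 224, 3) for _ in 1:16]   # stand-ins for preprocessed samples
batch = StackView(imgs)                              # lazily stacks along a new trailing dimension
size(batch)                                          # (224, 224, 3, 16)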

@johnnychen94 (Member) left a comment

I'm against @lazy import ImageCore (and @lazy import ImageShow) because it's very likely to hit the world-age issue if not used carefully. I mean, if this is a safe solution I'll be the first one to refactor the JuliaImages ecosystem this way. But since @CarloLucibello is the actual maintainer of this package, I'll leave this decision to him.


# Load image from ImageNetFile path and preprocess it to normalized 224x224x3 Array{Tx,3}
function readimage(Tx::Type{<:Real}, file::AbstractString)
im = JpegTurbo.jpeg_decode(ImageCore.RGB{Tx}, file; preferred_size=IMGSIZE)
Member:

I'm not sure if all ImageNet images meet the requirement, but note that the actual decoded result size size(im) might not be preferred_size.

Author (@adrhill):

I'm actually running into warnings with images smaller than preferred_size:

┌ Warning: Failed to infer appropriate scale ratio, use `scale_ratio=2` instead.
│   actual_size = (127, 100)
│   preferred_size = (224, 224)
└ @ JpegTurbo ~/.julia/packages/JpegTurbo/b5MSG/src/decode.jl:165

Do you have experience with this, @lorenzoh?

@johnnychen94 (Member) commented Jun 25, 2022

The reason for this is that JpegTurbo.jl (or rather libjpeg-turbo) only supports a very limited set of scale_ratio values: they are $M/8$ where $M \in \{1, 2, \ldots, 16\}$. Thus the maximal possible scale_ratio is 2. This is exactly why size(img) == preferred_size may not hold in practice. (For the 127×100 example above, reaching 224×224 would need a ratio of about 2.24 along the shorter side, beyond the supported maximum of 2, hence the warning.)

The supported scale_ratio values permit a faster decoding algorithm (by scaling the coefficients instead of the actual images), which is why we observe the performance boost here.


The safest solution (I think) is to add an imresize after it:

    using Suppressor: @suppress_err        # silences the scale-ratio warning
    using ImageTransformations: imresize

    img = @suppress_err JpegTurbo.jpeg_decode(file; preferred_size=(224, 224))
    if size(img) != (224, 224)
        img = imresize(img, (224, 224))    # fall back to an explicit resize
    end

The @suppress_err macro is a handy tool from https://github.com/JuliaIO/Suppressor.jl to disable this warning message.

I don't plan to make this imresize happen automatically in JpegTurbo.jl, because it would break people's expectation that the preferred_size keyword makes decoding faster.

@adrhill (Author) commented Jun 30, 2022

Thanks for the review @Dsantra92!
I'm slightly busy due to the JuliaCon submission deadline on Monday, but I'll get back to this PR as soon as possible.

@adrhill (Author) commented Jun 30, 2022

The order of the classes in the metadata also still has to be fixed, as it doesn't match https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt.

@adrhill (Author) commented Aug 3, 2022

Sorry for stalling this.

I guess the issue with this PR boils down to whether preprocessing functions belong to MLDatasets or to packages exporting pre-trained models. This question has already been raised in FluxML/Metalhead.jl#117.

Since images in ImageNet have different dimensions, providing an ImageNet data loader without matching preprocessing functions would be somewhat useless, as it would not be able to load batches of data. And as discussed here in the context of JpegTurbo.jpeg_decode, a lot of performance would be left on the table if we loaded full-size images just to immediately resize them.

I took a look at how other deep learning frameworks deal with this, and both torchvision and Keras Applications export preprocessing functions with their pre-trained models. MLDatasets' FileDataset pattern would work well if pre-trained model libraries exported a corresponding loadfn.
One of the issues mentioned in FluxML/Metalhead.jl#117 is import latency for extra dependencies such as DataAugmentation.jl. Maybe LazyModules.jl could help circumvent this problem.

@RomeoV commented Feb 2, 2023

Hey everyone, this looks awesome. Is anyone still working on this? Otherwise I would suggest trying to merge this, even if it's not "perfect" with regards to extra dependencies or open questions about transformations.

@adrhill (Author) commented Feb 2, 2023

I'm still interested in working on this.

To get this merged, we could make the preprocess and inverse_preprocess functions part of the ImageNet struct and provide the current functions as defaults.

Edit: inverse_preprocess is now a field of the ImageNet struct; preprocess is the loadfn of the internal FileDataset.
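Roughly, that layout looks as follows (field names are illustrative, not necessarily the final API):

struct ImageNet{D<:FileDataset, F}
    dataset::D                     # internal FileDataset; its loadfn applies the preprocessing
    inverse_preprocess::F          # stored so preprocessing can be undone, e.g. for convert2image
    metadata::Dict{String, Any}
end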

@CarloLucibello (Member) commented

this needs a rebase, otherwise looks mostly good

Comment on lines 5 to 9
const PYTORCH_MEAN = [0.485f0, 0.456f0, 0.406f0]
const PYTORCH_STD = [0.229f0, 0.224f0, 0.225f0]

normalize_pytorch(x) = (x .- PYTORCH_MEAN) ./ PYTORCH_STD
inv_normalize_pytorch(x) = x .* PYTORCH_STD .+ PYTORCH_MEAN


I would drop the pytorch prefix/suffix and use something else. The comments can stay. If PyTorch isn't the only library that does this preprocessing, then it makes sense to represent that with more general names. If different libraries are providing different preprocessing functionality for ImageNet (or not providing any), then I'd argue there is no canonical default set of ImageNet transformations and this code (aside from maybe the descriptive stats) shouldn't be in MLDatasets.

Author (@adrhill):

Good point. Since this is just an internal function used by default_preprocess, I would suggest either _normalize or default_normalize. The appeal of using these coefficients as defaults is that they should work out of the box with pre-trained vision models from Metalhead.jl.

Reply:

Wait, so do other libraries provide this functionality in their ImageNet dataset APIs? I checked https://www.tensorflow.org/datasets/catalog/imagenet2012 and it has no mention of preprocessing, so is PyTorch the only library that does this? If so, I would vote to remove the preprocessing functions as mentioned above.

Collaborator:

If I am not wrong, these normalization values depend on the model you are using. Also, none of the existing vision datasets have preprocessing functions. These functions are ideally handled by data preprocessing libraries/modules.

Reply:

The norm values should not be model-specific. They're derived directly from the data before any model is involved.
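To spell that out: the constants are per-channel statistics of the training pixels themselves. A sketch of the computation, assuming train_images is a collection of H×W×3 arrays with values in [0, 1] (memory-hungry, but it shows the idea):

using Statistics

pixels = reduce(vcat, (reshape(im, :, 3) for im in train_images))   # N×3 matrix of RGB values
μ = vec(mean(pixels; dims=1))    # ≈ [0.485, 0.456, 0.406] over the full training set
σ = vec(std(pixels; dims=1))     # ≈ [0.229, 0.224, 0.225]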

Reply:

In the PyTorch case, notice however that although the transformations are stored in the "model weights", the mean and std are the same across models (see e.g. the mobilenet model).

In a similar spirit, I would definitely defend the decision of shipping the set of transformations (cropping, interpolation, linear transformation, etc.) as part of the dataset. However, I agree with the very first point that the name transformation_pytorch isn't really precise, although I think it is fair to link to the corresponding transformations for TensorFlow, PyTorch, and/or the timm library in a related comment.

@ToucheSir commented Feb 3, 2023

PyTorch also lumps code for pretrained models, data augmentations and datasets into one library; I don't think we need to follow their every example :)

> In a similar spirit, I would definitely defend the decision of shipping the set of transformations (cropping, interpolation, linear transformation, etc) as part of the dataset.

This is precisely why I asked about what other libraries are doing. If nobody else is shipping the same set of transformations, then they can hardly be considered canonical for ImageNet. That doesn't mean we should never ship helpers to create common augmentation pipelines, but that it is better served by packages which have access to efficient augmentation libraries (e.g. Augmentor, DataAugmentation) and not by some unoptimized implementation which is simultaneously more general (because it's applicable to other datasets) and less general (because many papers using ImageNet do not use these augmentations) than the dataset it's been attached to.

Member:

Let's just apply the channelview and permute transformation by default here,
and make the (permuted) mean and std values part of the type.
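For context, that default amounts to something like the following sketch for a single H×W RGB image img, using the mean/std values from the diff above:

using ImageCore

μ, σ = [0.485f0, 0.456f0, 0.406f0], [0.229f0, 0.224f0, 0.225f0]
x = PermutedDimsArray(channelview(img), (3, 2, 1))               # C×H×W -> W×H×C, without copying
x = (float.(x) .- reshape(μ, 1, 1, 3)) ./ reshape(σ, 1, 1, 3)    # normalize with the stored constants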

Author (@adrhill):

I have also taken a look at Keras' ImageNet utilities. While these normalization constants are used in many places throughout torchvision and PyTorch, it looks like TensorFlow and Keras do indeed use their own constants.

I agree with @ToucheSir's sentiment

> If nobody else is shipping the same set of transformations, then they can hardly be considered canonical for ImageNet.

However, this point can be drawn even further, as nothing about ImageNet is truly canonical.
To give some examples (some of which have previously been discussed):

  1. There is no canonical reason why images have to be loaded in 224 x 224 format.
  2. There is no canonical reason to apply the resizing algorithm JpegTurbo.jl uses when calling jpeg_decode with a preferred_size.
  3. There is no canonical way of sorting class labels. Some sort by WordNet ID (e.g. PyTorch), others don't.

Getting this merged

To make this data loader as "unopinionated" as possible, we could turn it into a very thin wrapper around FileDataset that just loads metadata. This would require the user to pass a loadfn which handles the transformation from file path to array. Class ordering could be handled using a sort_by_wnid=true keyword argument, and all new dependencies introduced in this PR could be removed (ImageCore, JpegTurbo and StackViews).
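A rough sketch of that thin wrapper (loadfn and sort_by_wnid follow the proposal above and are not final API; loadlabel is the helper from the earlier snippet):

using MLDatasets, MLUtils, FileIO

function ImageNet(dir::AbstractString, split::Symbol; loadfn=FileIO.load, sort_by_wnid::Bool=true)
    files = FileDataset(identity, joinpath(dir, String(split)), "*.JPEG").paths
    sort_by_wnid && (files = sort(files))      # lexicographic order groups files by WordNet ID
    return mapobs((loadfn, loadlabel), files)  # loadfn handles file path -> array
end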

Future work

However, I do strongly feel like some package in the wider Julia ML / deep learning ecosystem should export loadfns that are usable with Metalhead's pre-trained models out of the box. @lorenzoh previously proposed adding such functionality to DataAugmentation.jl in FluxML/Metalhead.jl#117.
Once this functionality is available somewhere, ImageNet's docstring in MLDatasets should be updated to showcase this common use case.

Until this functionality exists, I would suggest adding a "Home" => "Tutorials" => "ImageNet" page to the MLDatasets docs which implements the current load function.

Reply:

> it looks like TensorFlow and Keras do indeed use their own constants.

Nice find. I was not expecting that mode == "torch" conditional.

> However, this point can be drawn even further, as nothing about ImageNet is truly canonical. To give some examples (some of which have previously been discussed):
> 1. There is no canonical reason why images have to be loaded in 224 x 224 format.
> 2. There is no canonical reason to apply the resizing algorithm JpegTurbo.jl uses when calling `jpeg_decode` with a `preferred_size`.
> 3. There is no canonical way of sorting class labels. Some sort by WordNet ID (e.g. PyTorch), others don't.

The difference here is that all three of those points can have a decent fallback without depending on external packages. Another argument is that more people will rely on these defaults than won't. I'm not sure augmentations pass that threshold.
I'm not saying users shouldn't be able to pass in a transformation function, but identity or some such seems a more defensible default. Indeed, the torchvision ImageNet class does not do any additional transforms by default, so we'd be deviating from every other library if we stuck with this default centre crop.

@adrhill (Author) commented Mar 19, 2024

In case someone is still interested in using this, I've opened an unregistered repository containing this PR:
https://github.com/adrhill/ImageNetDataset.jl

The most notable difference is that ImageNetDataset.jl contains some custom preprocessing pipelines that support convert2image and work out of the box with Metalhead.jl.


Successfully merging this pull request may close these issues: Feature request: ImageNet data loader
8 participants