
IO speed


During training, loading data from your storage device or decoding the images may become the bottleneck, especially when using datasets like DIV2K with 2K resolution images. Some alternatives to improve IO speed are:

  1. Put your training data on an SSD (Solid-State Drive), if your machine has one. Reading data from an SSD is much faster than reading from an HDD (Hard Disk Drive).
  2. Pre-crop the images into sub-images. During training, the dataloader only uses patches of size HR_size x HR_size, as configured in the options (for example, 128x128), so there is no need to read and decode the whole images, which takes more resources and time in the dataloader. For this reason, you can use extract_subimgs_single.py or any other mechanism to crop the large images into sub-image tiles. For example, you can crop the DIV2K images into 480x480 tiles with a sliding window of step = 240 using the script (a minimal sketch of such a sliding-window crop is shown after this list).
  3. (Optional) Use lmdb format. Reading raw images from your storage devices consumes a lot of CPU resources due to the decoding of each image. lmdb is an alternative that pre-decodes the images and stores them in a single indexed database that can be read during training. Note that not all the dataloaders support lmdb, so it's worth checking before creating the databases.
  4. (Optional) If lmdb is not an option, another alternative is to decode the images to numpy arrays with cv2 and store them as .npy files, which can then be loaded directly as numpy arrays without decoding first (see the second sketch after this list).
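The pre-cropping in point 2 is handled by extract_subimgs_single.py, but as a reference, a minimal sketch of the underlying sliding-window crop could look roughly like the following (paths, sizes and naming are only examples, not the actual script):

```python
import os
import cv2  # pip install opencv-python

def crop_to_subimages(in_dir, out_dir, crop_size=480, step=240):
    """Crop every image in in_dir into crop_size x crop_size tiles
    using a sliding window with the given step (example values)."""
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(in_dir)):
        img = cv2.imread(os.path.join(in_dir, name), cv2.IMREAD_UNCHANGED)
        if img is None:
            continue  # skip files that are not readable images
        h, w = img.shape[:2]
        base, ext = os.path.splitext(name)
        idx = 0
        for y in range(0, h - crop_size + 1, step):
            for x in range(0, w - crop_size + 1, step):
                tile = img[y:y + crop_size, x:x + crop_size]
                idx += 1
                cv2.imwrite(os.path.join(out_dir, f"{base}_s{idx:03d}{ext}"), tile)

# example call with hypothetical paths
# crop_to_subimages("../datasets/DIV2K/HR", "../datasets/DIV2K/HR_sub", 480, 240)
```

Note that this sketch simply discards partial tiles at the image borders; the project script may handle borders differently.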
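Similarly, for point 4, converting the images to .npy files only requires decoding each image once with cv2 and saving the raw array. A hedged sketch, with example paths:

```python
import os
import cv2
import numpy as np

def images_to_npy(in_dir, out_dir):
    """Decode each image once and store the raw array as .npy,
    so the dataloader can later np.load() it without decoding."""
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(in_dir)):
        img = cv2.imread(os.path.join(in_dir, name), cv2.IMREAD_UNCHANGED)
        if img is None:
            continue
        np.save(os.path.join(out_dir, os.path.splitext(name)[0] + ".npy"), img)

# later, in the dataloader: img = np.load(path)  # already a numpy array, no decoding
```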

These options can be combined (except 3 and 4, which are mutually exclusive), each contributing to better IO performance. Note also that on-the-fly augmentations can have some performance impact, depending on the type of augmentation, although in many cases it is not noticeable.

During training, a td variable is shown on the console and in the log files, representing the dataloader time, which can help in debugging the IO behavior of the data.
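Conceptually, td is the time spent waiting for each batch from the dataloader. A simplified sketch of how such a value can be measured (not the project's actual training loop):

```python
import time

def timed_iterations(dataloader):
    """Print the per-iteration dataloader wait time, similar in spirit to td."""
    t0 = time.time()
    for data in dataloader:
        td = time.time() - t0   # time spent fetching/preparing this batch
        print(f"td: {td:.4f}s")
        # ... the training step with `data` would go here ...
        t0 = time.time()        # reset the timer before fetching the next batch
```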

A simple exercise to compare the same images stored as regular PNGs, decoded NPYs and LMDB can be made by looping through a dataset of each type multiple times and averaging the time (in seconds). Using a dataset of 50 paired images of dimensions 128x128 produces the following results on an HDD:

| Metric  | PNG       | NPY       | LMDB      | NPY % PNG | LMDB % PNG |
|---------|-----------|-----------|-----------|-----------|------------|
| Average | 0.171707s | 0.136925s | 0.127708s | 79.74369% | 74.37564%  |
| STD     | 0.019705s | 0.014681s | 0.013691s | 8.550096% | 7.973582%  |

NPY is about 20% faster than using the regular images, and LMDB about 25% faster.
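The comparison above can be reproduced with a simple timing loop along these lines (a rough sketch; the directory names, number of loops and reading functions are assumptions, not the exact code used for the table):

```python
import glob
import time

import cv2
import numpy as np

def time_reads(paths, read_fn, loops=10):
    """Average the time (in seconds) to read every file in `paths`
    with `read_fn`, repeated `loops` times."""
    times = []
    for _ in range(loops):
        start = time.time()
        for p in paths:
            read_fn(p)
        times.append(time.time() - start)
    return float(np.mean(times)), float(np.std(times))

png_paths = sorted(glob.glob("../datasets/bench_png/*.png"))  # example path
npy_paths = sorted(glob.glob("../datasets/bench_npy/*.npy"))  # example path

png_avg, png_std = time_reads(png_paths, lambda p: cv2.imread(p, cv2.IMREAD_UNCHANGED))
npy_avg, npy_std = time_reads(npy_paths, np.load)
print(f"PNG: {png_avg:.6f}s (std {png_std:.6f}s)")
print(f"NPY: {npy_avg:.6f}s (std {npy_std:.6f}s, {100 * npy_avg / png_avg:.2f}% of PNG)")
```

Reading from an LMDB database can be timed the same way, using the lmdb module to fetch each key inside the loop instead of a file read.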

How to create the lmdb files

  1. Put the training images into a folder.
  2. Run the script codes/scripts/create_lmdb.py. It takes a required argument images_path (the folder with the training images) and an optional lmdb_path (a directory ending in .lmdb that will later be set as the dataroot in the training options file). This process has to be repeated for each dataroot that will be used during training (for example, dataroot_HR and dataroot_LR). The resulting directory will have the following file structure:
    dataset.lmdb
    ├── data.mdb
    ├── lock.mdb
    └── meta_info.txt
  3. In the configuration (.yml or .json) files, write the lmdb paths in the corresponding dataroots, for example:
dataroot_HR: '../datasets/train/hr_dataset.lmdb'
dataroot_LR: '../datasets/train/lr_dataset.lmdb'

Note: it is currently required to set n_workers: 0 in the dataloader options to use lmdb (at least on Windows), otherwise a PermissionError can be raised due to multiple processes accessing the image database.
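For reference, the following is a minimal sketch of what packing pre-decoded images into an lmdb database can look like. It is only an illustration with assumed key and meta_info.txt formats, not a substitute for codes/scripts/create_lmdb.py:

```python
import glob
import os

import cv2
import lmdb  # pip install lmdb

def pack_images_to_lmdb(images_path, lmdb_path):
    """Write the raw (pre-decoded) image bytes into a single lmdb database,
    and record each image's shape in meta_info.txt for later reading."""
    paths = sorted(glob.glob(os.path.join(images_path, "*")))
    # map_size must be at least the total size of the data that will be stored
    env = lmdb.open(lmdb_path, map_size=10 * 1024 ** 3)  # 10 GB, adjust as needed
    meta = []
    with env.begin(write=True) as txn:
        for p in paths:
            img = cv2.imread(p, cv2.IMREAD_UNCHANGED)
            if img is None:
                continue
            key = os.path.splitext(os.path.basename(p))[0]
            txn.put(key.encode("ascii"), img.tobytes())
            h, w = img.shape[:2]
            c = img.shape[2] if img.ndim == 3 else 1
            meta.append(f"{key} ({h},{w},{c})")
    env.close()
    with open(os.path.join(lmdb_path, "meta_info.txt"), "w") as f:
        f.write("\n".join(meta))

# example call with hypothetical paths
# pack_images_to_lmdb("../datasets/train/hr_images", "../datasets/train/hr_dataset.lmdb")
```

During training, a dataloader would then fetch the bytes for a key with txn.get() and reshape them using the dimensions recorded in meta_info.txt, with no image decoding involved.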