Performance issues in /utils (by P3) #12

Open
DLPerf opened this issue Aug 30, 2021 · 1 comment
DLPerf commented Aug 30, 2021

Hello! I've found a performance issue in /utils: batch() should be called before map(), which can make your input pipeline more efficient. Here is the TensorFlow documentation that supports it.

Detailed description is listed below:

  • /pre_process_mnist.py: dataset_train.batch(batch_size) (here) should be called before dataset_train.map(image_rotate_random, num_parallel_calls=PARALLEL_INPUT_CALLS) (here), dataset_train.map(image_shift_rand, num_parallel_calls=PARALLEL_INPUT_CALLS) (here), dataset_train.map(image_squish_random, num_parallel_calls=PARALLEL_INPUT_CALLS) (here), dataset_train.map(image_erase_random, num_parallel_calls=PARALLEL_INPUT_CALLS) (here) and dataset_train.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS) (here).
  • /pre_process_mnist.py: dataset_test.batch(batch_size) (here) should be called before dataset_test.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS) (here).
  • /pre_process_smallnorb.py: dataset_train.batch(batch_size) (here) should be called before dataset_train.map(random_patches, num_parallel_calls=PARALLEL_INPUT_CALLS) (here), dataset_train.map(random_brightness, num_parallel_calls=PARALLEL_INPUT_CALLS) (here), dataset_train.map(random_contrast, num_parallel_calls=PARALLEL_INPUT_CALLS) (here) and dataset_train.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS) (here).
  • /pre_process_smallnorb.py: dataset_test.batch(1) (here) should be called before dataset_test.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS) (here).

Besides, you need to check whether the function called in map() (e.g., generator in dataset_test.map(generator, num_parallel_calls=PARALLEL_INPUT_CALLS)) is affected by the change, so that the reordered code still works properly. For example, if generator expected data of shape (x, y, z) before the fix, it would receive data of shape (batch_size, x, y, z) afterwards.
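
To make this concrete, here is a minimal sketch of the suggested reordering for the MNIST pipeline. The batched variant of the map function (image_shift_rand_batched) and the dummy dataset are hypothetical and only illustrate the shape change; they are not code from this repository:

```python
import tensorflow as tf

PARALLEL_INPUT_CALLS = tf.data.AUTOTUNE  # assumption; the repo may use a fixed integer
batch_size = 16

def image_shift_rand_batched(images, labels):
    # After batching, this function receives (batch_size, 28, 28, 1) instead of
    # (28, 28, 1), so every op must handle the extra leading batch dimension.
    images = tf.roll(images, shift=2, axis=2)  # placeholder batched transform
    return images, labels

# Dummy dataset standing in for the real MNIST pipeline.
dataset_train = tf.data.Dataset.from_tensor_slices(
    (tf.zeros([128, 28, 28, 1]), tf.zeros([128], dtype=tf.int64)))

# Before: dataset_train.map(image_shift_rand, ...).batch(batch_size)
# After:  dataset_train.batch(batch_size).map(image_shift_rand_batched, ...)
dataset_train = (dataset_train
                 .batch(batch_size)
                 .map(image_shift_rand_batched,
                      num_parallel_calls=PARALLEL_INPUT_CALLS)
                 .prefetch(tf.data.AUTOTUNE))
```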

Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.

@EscVM EscVM added the enhancement New feature or request label Sep 2, 2021

EscVM commented Sep 2, 2021

Hi @DLPerf!

Thank you for the tip. However, I have a doubt: does inserting batch before all the transformation functions reduce the variance of the dataset? I mean, doing it as you suggest, all images in a batch would go through the same transformation, whereas keeping batch at the end ensures a different random transformation for each image.
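
(For reference, batching first does not necessarily force one transformation per batch: the batched map function can draw an independent random parameter for each element of the batch. A minimal sketch, with hypothetical names and not code from this repository:)

```python
import tensorflow as tf

def random_brightness_batched(images, labels):
    # images: (batch_size, H, W, C). One independent brightness delta per image,
    # so each element of the batch still gets its own random transformation.
    deltas = tf.random.uniform([tf.shape(images)[0], 1, 1, 1], -0.2, 0.2)
    images = tf.clip_by_value(images + deltas, 0.0, 1.0)
    return images, labels
```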
