
Speedup for long benchmarks iteration #1464

Merged

Conversation

lrzpellegrini
Collaborator

A simple fix to speed up benchmark creation and stream iteration.

The impact is especially noticeable in long benchmarks (such as SplitImagenet, n_experiences=1000), where the iteration time over the training stream drops from 1 hour to 1 minute. I also tried other optimizations regarding FlatData, but this seems to be the most straightforward solution.

@coveralls

Pull Request Test Coverage Report for Build 5601711922

  • 2 of 2 (100.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 72.765%

Totals Coverage Status
  • Change from base Build 5601076566: 0.0%
  • Covered Lines: 16709
  • Relevant Lines: 22963

💛 - Coveralls

@AntonioCarta
Collaborator

Thanks, this is great! Does it also impact smaller benchmarks such as CIFAR100?

@AntonioCarta AntonioCarta merged commit 079d807 into ContinualAI:master Jul 20, 2023
18 checks passed
@lrzpellegrini
Collaborator Author

lrzpellegrini commented Jul 20, 2023

For SplitCIFAR100, n_exps=100, the following loop goes from 6.22 to 1.50 seconds (Ryzen 7 3700X):

import time

from avalanche.benchmarks.classic import SplitCIFAR100

start_time = time.time()
benchmark_instance = SplitCIFAR100(100)
for exp in benchmark_instance.train_stream:
    # Accessing the dataset forces the (lazy) experience dataset creation,
    # which is the operation this PR speeds up.
    dataset = exp.dataset
end_time = time.time()
print(end_time - start_time)
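The same measurement pattern can be sketched without Avalanche installed, using `time.perf_counter` (generally preferable to `time.time` for wall-clock benchmarking) and a stand-in for the experience stream; the `FakeExperience` class and the helper name below are illustrative assumptions, not part of the Avalanche API:

```python
import time


def time_stream_iteration(stream):
    """Measure the wall-clock time to iterate a stream, touching each
    experience's dataset (the attribute access triggers lazy creation)."""
    start = time.perf_counter()
    for exp in stream:
        _ = exp.dataset  # force dataset materialization
    return time.perf_counter() - start


class FakeExperience:
    """Stand-in for an Avalanche experience; with Avalanche installed,
    pass SplitCIFAR100(100).train_stream to the helper instead."""

    def __init__(self, i):
        self.dataset = list(range(i))


elapsed = time_stream_iteration(FakeExperience(i) for i in range(100))
print(f"{elapsed:.6f} s")
```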

@lrzpellegrini lrzpellegrini deleted the long_benchmarks_speedup branch July 20, 2023 14:07