
Refactor/autotune/key #924

Merged: 6 commits into main from feat/autotune/key, Nov 3, 2023

Conversation

@louisfd (Member) commented Nov 1, 2023

Refactored the wgpu autotune key as an enum and a struct. It should now be more robust, clearer, and more efficient.
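For orientation, a rough sketch (editorial, not code from the PR) of the shape this refactor gives the key types, pieced together from the review snippets below; the derives and the cache type alias are assumptions about how the key is consumed.

    use std::collections::HashMap;

    /// Per-operation key; the exact fields are reviewed further down.
    #[derive(Clone, Debug, PartialEq, Eq, Hash)]
    pub struct MatmulAutotuneKey {
        anchored_m: usize,
        anchored_k: usize,
        anchored_n: usize,
    }

    /// Backend-level key: one variant per autotunable operation.
    #[derive(Clone, Debug, PartialEq, Eq, Hash)]
    pub enum WgpuAutotuneKey {
        Matmul(MatmulAutotuneKey),
    }

    // The key's job is to index the tune cache, so deriving Eq + Hash is what
    // makes it usable; mapping each key to the index of the fastest kernel is
    // an assumption about how the cache stores its results.
    type TuneResults = HashMap<WgpuAutotuneKey, usize>;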

Comment on lines 11 to 15
    Matmul {
        /// The key with specific matmul information
        matmul_key: MatmulAutotuneKey,
    },
}
Member commented:

The name is not necessary:

WgpuAutotuneKey {
    Matmul(MatmulAutotuneKey),
}
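For reference, a self-contained sketch (illustrative, not from the PR) of what the suggested tuple variant looks like at call sites; the stub MatmulAutotuneKey below stands in for the struct reviewed further down.

    pub struct MatmulAutotuneKey;

    pub enum WgpuAutotuneKey {
        Matmul(MatmulAutotuneKey),
    }

    fn make_key(matmul_key: MatmulAutotuneKey) -> WgpuAutotuneKey {
        // No field name to spell out at the construction site:
        WgpuAutotuneKey::Matmul(matmul_key)
    }

    fn name_of(key: &WgpuAutotuneKey) -> &'static str {
        // Matching stays equally terse with the tuple variant:
        match key {
            WgpuAutotuneKey::Matmul(_) => "matmul",
        }
    }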

    -pub struct Tuner<S, C> {
    -    tune_cache: TuneCache<S>,
    +pub struct Tuner<S: ComputeServer, C> {
    +    tune_cache: TuneCache<S::AutotuneKey>,
    +    _server: PhantomData<S>,
Member commented:
I don't think this phantom data is necessary.
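A minimal sketch of the point being made: once tune_cache is typed over S::AutotuneKey, the parameter S already appears in a field type, so the compiler's unused-parameter check is satisfied without PhantomData<S>. The stub trait and struct below, and the _channel marker for C, are assumptions added only to keep the sketch self-contained.

    use core::marker::PhantomData;

    // Minimal stand-ins for the real burn-compute items:
    pub trait ComputeServer {
        type AutotuneKey;
    }
    pub struct TuneCache<K> {
        entries: Vec<K>,
    }

    pub struct Tuner<S: ComputeServer, C> {
        // S is used here through the S::AutotuneKey projection, so no
        // PhantomData<S> field is required.
        tune_cache: TuneCache<S::AutotuneKey>,
        // C is not otherwise referenced in this sketch, so it keeps a marker
        // (hypothetical field name).
        _channel: PhantomData<C>,
    }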

Comment on lines 7 to 13
pub struct MatmulAutotuneKey {
    round: bool,     // True when all matmul dims are multiples of 64
    broadcast: bool, // True when there are differences in batch size
    anchored_m: usize,
    anchored_k: usize,
    anchored_n: usize,
}
Member commented:
I think we wanted to include the round batch size as well!

louisfd (Member, Author) replied:
I'm not sure if you mean a bool for whether the batch sizes are all multiples of 64, or an anchored batch size as a usize (for all batch dims)?

Member replied:
anchored batch size for all batch dims!
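Putting the thread together, the resolved key would presumably gain an anchored batch field next to m/k/n. The sketch below is illustrative only; the anchor helper and its rounding rule (next power of two) are assumptions, not taken from the PR.

    #[derive(Clone, Debug, PartialEq, Eq, Hash)]
    pub struct MatmulAutotuneKey {
        round: bool,           // true when all matmul dims are multiples of 64
        broadcast: bool,       // true when batch sizes differ between inputs
        anchored_m: usize,
        anchored_k: usize,
        anchored_n: usize,
        anchored_batch: usize, // anchored batch size covering all batch dims
    }

    // Anchoring collapses nearby shapes onto one key so the cache stays small.
    fn anchor(dim: usize) -> usize {
        dim.next_power_of_two()
    }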

codecov bot commented Nov 2, 2023

Codecov Report

Attention: 19 lines in your changes are missing coverage. Please review.

Comparison is base (8c80c9b) 86.54% compared to head (62385ab) 86.52%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #924      +/-   ##
==========================================
- Coverage   86.54%   86.52%   -0.03%     
==========================================
  Files         469      471       +2     
  Lines       44191    44238      +47     
==========================================
+ Hits        38244    38275      +31     
- Misses       5947     5963      +16     
Files Coverage Δ
burn-compute/src/client.rs 97.36% <100.00%> (+0.22%) ⬆️
burn-compute/src/compute.rs 62.79% <ø> (ø)
burn-compute/src/server.rs 100.00% <ø> (ø)
burn-compute/src/tune/operation.rs 100.00% <ø> (+42.85%) ⬆️
burn-compute/src/tune/tune_benchmark.rs 81.25% <100.00%> (-2.09%) ⬇️
burn-compute/src/tune/tune_cache.rs 94.73% <100.00%> (-0.27%) ⬇️
burn-compute/src/tune/tuner.rs 96.82% <100.00%> (-0.05%) ⬇️
burn-compute/tests/dummy/server.rs 100.00% <ø> (ø)
burn-compute/tests/dummy/tune/operation_sets.rs 100.00% <100.00%> (ø)
burn-wgpu/src/compute/server.rs 98.72% <ø> (ø)
... and 3 more


@nathanielsimard merged commit 1cc1844 into main on Nov 3, 2023
9 checks passed
@nathanielsimard deleted the feat/autotune/key branch on November 3, 2023, 12:46