Improve examples (add Cuda) #452
Conversation
Nice changes, looking great! Let's revert the workarounds in 08 and then it should be good.
examples/01-tensor.rs
Outdated
use dfdx::{
    shapes::{Const, HasShape, Rank1, Rank2, Rank3},
    tensor::{AsArray, Cpu, OnesTensor, SampleTensor, Tensor, TensorFrom, ZerosTensor},
};
I originally kept the imports explicit just so people would get to know where things live in the library.
The intention is probably to just use dfdx::prelude most of the time, but I still wonder if it's helpful to have explicit imports in some of the examples.
Thoughts?
I actually agree with you on that. The reason I used the prelude nonetheless is that it automatically imports the Cuda device if the corresponding feature is enabled. Otherwise the dfdx::tensor::cuda::device::Cuda struct has to be imported behind a #[cfg(feature = "cuda")] guard.
I can change that if desired, but I think it would look "ugly":
#[cfg(feature = "cuda")]
use dfdx::tensor::cuda::device::Cuda;
#[cfg(feature = "cuda")]
type Device = Cuda;
That could well just be me though.
Oh yeah, great point.
Since we are doing type aliases for the devices, we could omit cpu/cuda from the use statements and then do:
#[cfg(feature = "cuda")]
type Device = dfdx::tensor::Cuda;
#[cfg(not(feature = "cuda"))]
type Device = dfdx::tensor::Cpu;
Probably not as clean as using the prelude, but I think it's better than a conditional use.
Done now :)
Added possible cuda device to each example. Fixed some typos. Rearranged code. Added some insightful print statements.
Beautiful, thanks for this contribution! 🚀
Content
Fixes #446
I added Cuda support to every example.
Furthermore, I improved some examples by printing more information or demonstrating more capabilities.
Testing strategy
I tested my code by running every single example on my machine with and without the cuda feature.
Me kissing your a**
I am very impressed by this library and think you did wonderful work. Of all the deep learning frameworks in the Rust ecosystem, this one is the most intuitive and ergonomic, and it just feels great to use.