
[Question] Why don't tensors implement the Copy trait? #932

Closed
EddieMataEwy opened this issue Nov 6, 2023 · 4 comments
Labels
question Further information is requested

Comments

@EddieMataEwy
Contributor

Hi, I was just wondering why tensors implement the Clone trait but not the Copy trait.
What is the reason for this?

I just find myself writing .clone() over and over again, and I don't see any reason why you couldn't derive Copy as well.

Thank you for your time.

@louisfd louisfd added the question Further information is requested label Nov 7, 2023
@nathanielsimard
Member

Copy doesn't work by implicitly calling clone; it's a marker trait that tells the compiler the type can be duplicated with a plain bitwise memory copy. A tensor clone isn't a memory copy, though: it just increments a reference count. So the reason is probably the same as why Copy isn't implemented for Arc.
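To illustrate the point above, here is a minimal sketch of why a reference-counted handle cannot be Copy. `Handle` is a stand-in type for illustration, not Burn's actual Tensor:

```rust
use std::sync::Arc;

// A reference-counted handle, like a tensor wrapping shared storage.
#[derive(Clone)]
struct Handle {
    data: Arc<Vec<f32>>, // shared buffer; Clone increments the refcount
}

fn main() {
    let a = Handle { data: Arc::new(vec![1.0, 2.0]) };
    let b = a.clone(); // bumps the refcount from 1 to 2; no buffer copy
    assert_eq!(Arc::strong_count(&a.data), 2);

    // `#[derive(Copy)]` on Handle would not compile: Arc<Vec<f32>> is
    // not Copy, because a bitwise copy would duplicate the pointer
    // without incrementing the count, leading to a double free.
    drop(b);
    assert_eq!(Arc::strong_count(&a.data), 1);
}
```

The same reasoning is why `Arc<T>` itself only implements Clone: cloning must run code (the refcount increment), which Copy by design cannot do.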

@EddieMataEwy
Contributor Author

I see. In retrospect, that was a dumb question; I just saw that tensors are implemented as Arc arrays.
In that case, would it be possible to have many of the tensor ops take a reference to a tensor, so that the compiler applies deref coercion automatically? Or would we run into similar issues?

@nathanielsimard
Member

Most of Burn's optimizations come from using owned tensors as arguments for all our operations: the ownership system tells us when a tensor's buffer can be reused. This lets us capture the graph and is fundamental to our optimization strategy with the upcoming burn-fusion. Taking tensors by reference would make this impossible and slow down the framework. I agree that calling clone when reusing a tensor isn't really pretty, but you can actually use Clippy to minimize the number of clones and, in the same way, optimize your model!
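The ownership-based reuse described above can be sketched as follows. This is a hypothetical illustration, assuming an Arc-backed buffer; `Tensor` and `add_scalar` here are not Burn's real API:

```rust
use std::sync::Arc;

// A toy tensor: shared, reference-counted storage.
struct Tensor {
    data: Arc<Vec<f32>>,
}

// Taking `t` by value means the op owns the handle. If no other clone
// exists, the buffer can be mutated in place instead of reallocated.
fn add_scalar(mut t: Tensor, s: f32) -> Tensor {
    match Arc::get_mut(&mut t.data) {
        Some(buf) => {
            // Unique owner: reuse the buffer in place.
            for x in buf.iter_mut() {
                *x += s;
            }
            t
        }
        None => {
            // Another handle exists: fall back to a fresh allocation.
            let buf: Vec<f32> = t.data.iter().map(|x| x + s).collect();
            Tensor { data: Arc::new(buf) }
        }
    }
}

fn main() {
    let t = Tensor { data: Arc::new(vec![1.0, 2.0]) };
    let kept = Tensor { data: t.data.clone() }; // simulates a `.clone()`
    let out = add_scalar(t, 1.0); // buffer is shared, so this allocates
    assert_eq!(*out.data, vec![2.0, 3.0]);
    assert_eq!(*kept.data, vec![1.0, 2.0]); // the clone is untouched

    let out2 = add_scalar(out, 1.0); // unique owner: mutates in place
    assert_eq!(*out2.data, vec![3.0, 4.0]);
}
```

This is also why each extra `.clone()` has a cost beyond the refcount bump: a shared buffer forces the fallback allocation path, which is what Clippy-guided clone removal helps avoid.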

@EddieMataEwy
Contributor Author

Gotcha! Thanks for the explanation and for your time. Your work here is amazing. Keep going!
