make_variable consumes the Tensor if it only has one reference #22705
Conversation
torch/csrc/autograd/variable.h (Outdated)
data_impl->set_allow_tensor_metadata_change(allow_tensor_metadata_change);
data_impl->set_autograd_meta(c10::guts::make_unique<Variable::AutogradMeta>(data_impl.get(), requires_grad));
return Variable(std::move(data_impl));
if (data.getIntrusivePtr().use_count() == 1) {
Isn't this unsafe in a multithreaded environment? Isn't it possible to have two threads invoking make_variable(tensor) concurrently on the same tensor instance?
Wouldn't two threads have to move() the same instance into the call to make_variable? Or is there another sequencing you're thinking of?
Or more directly: I believe this is safe because data is taken by value.
Ah yeah, you're right: taking by value makes this safe. The interleaving I was thinking of was two threads invoking make_variable(t): one thread passes the use_count check and tries to move, while the other fails the use_count check and tries to copy from the moved-from value. But that can't happen, since if both threads held a reference, neither could pass the use_count check.
…ence" make_variable consumes the Tensor if it only has one reference gh-metadata: pytorch pytorch 22705 gh/jamesr66a/23/head
NGNT
…ence" Update on "make_variable consumes the Tensor if it only has one reference" make_variable consumes the Tensor if it only has one reference gh-metadata: pytorch pytorch 22705 gh/jamesr66a/23/head
@jamesr66a merged this pull request in 815e73b.
Stack from ghstack:
This significantly reduces the overhead of operator dispatch. Benchmark: before/after measurements were attached to the PR (not captured here).
TODO: I don't know if this is valid if the Storage is shared
Differential Revision: D16192220