Mention TF_FORCE_GPU_ALLOW_GROWTH in using_gpu.md #558
As mentioned in this comment on the commit that introduced the discussed functionality, I'd like to have it mentioned here, as this is the most relevant place for it that I could find.
I could not find any information on whether this feature is intended to be kept around and integrated into TF2 in future development, or whether its future is uncertain. So I'd like to at least start that discussion through this PR and perhaps advocate for the feature, as I recently found it very useful and would really appreciate seeing it standardized and carried into future stable TF2 releases.
Thanks for any reply!
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
Once you've signed (or fixed any issues), please reply here.
What to do if you already signed the CLA
Thanks, Lukas! Looks handy, but I'm wondering if the platform-specific nature of this is why we're not surfacing it in the docs. Any insights into this, @tfboyd?
I may be totally missing the point, so please correct me. I do not think we want to share another way of turning on the GPU growth feature: the config proto is what we want people to use, not the env var. If there is some advantage to the env var other than someone simply preferring it, please let me know and we can sort this out.
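For context, a minimal sketch of the two mechanisms being compared. The `ConfigProto` lines reflect the TF 1.x API and are left as comments so the sketch runs without TensorFlow installed; the environment-variable value `"true"` is the form the commit under discussion accepts:

```python
import os

# Option A (preferred per the comment above): the config proto, set in user code.
# TF 1.x API, shown as comments so this sketch runs without TensorFlow:
#   import tensorflow as tf
#   config = tf.ConfigProto()
#   config.gpu_options.allow_growth = True
#   sess = tf.Session(config=config)

# Option B: the environment variable this PR documents. TensorFlow reads it
# when GPU devices are initialized, so it must be set before that happens --
# either in the process environment or, as here, before importing TensorFlow.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```

The practical difference is where the setting lives: option A is per-program and per-user, while option B can be inherited from the surrounding environment.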
@lamberta Thanks for the links, I'll ask about the future plans there.
@tfboyd Thanks for the comment. I understand your concerns; let me clarify. Apologies for the length, and many thanks for your time if you're going to read this!
I am not sure about the design decisions that led to the implementation of this feature, so I want to clarify how much one can rely on it in the future. But since it exists and is useful at least to system admins (e.g. people configuring runtimes served through Colab or any other multi-user or multi-instance system), it might as well be documented somewhere, as other administrators of multi-user systems may want to make use of it.
I'd disagree that this is just another way of doing the same thing. It is rather the only way to persistently set a default for the feature. That is conceptually a different thing to do, and it is especially useful when high-level libraries such as
In a single-GPU multi-user system, one can use this feature to set a system-wide default instead of forcing every user to include a code snippet that changes the TF-enforced default (disabled) to the one that is actually preferred (enabled), and to ensure it is present in all of their projects at every spot where default / active
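As a sketch of what such a system-wide default could look like (the file path is only an example; any mechanism that exports the variable before TensorFlow starts would do):

```shell
# Hypothetical profile script applied to every login shell, e.g.
# /etc/profile.d/tf-gpu-allow-growth.sh -- the path is an example, not
# something the PR prescribes. Every TensorFlow process started from a
# shell that sourced this file inherits the setting.
export TF_FORCE_GPU_ALLOW_GROWTH=true
```

With this in place, no user code needs the `allow_growth` snippet, which is exactly the notebook/Colab scenario described above.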
I can imagine that this is not of much relevance in industry, where there is plenty of hardware for every practitioner, but in academia the story is quite the opposite, and scarcity of hardware sometimes means multiple practitioners working in parallel on a single piece of CUDA-capable hardware. To give some examples from experience:
I understand that this is not an intended use of
Being able to impose one's own preference for this default across a whole maintained system is a great feature to have (even though quite use-case specific), and I just wish it were advertised a bit better than this.
I hope this clarifies my viewpoint enough.
Thanks for your time and for any reply!
More than enough to justify it. You had me at notebooks and needing to set this outside user code. I would like the updated wording to say something like: this is another way to set the value. I hope to come back to this later today, have a quick back-and-forth, and submit before the end of the week at the latest. After all your typing and reading, I don't want this to get lost.