
[TF2.0] Change default types globally #26033

Open
awav opened this issue Feb 23, 2019 · 29 comments
Assignees
Labels: comp:apis (Highlevel API related issues), stat:awaiting tensorflower (Status - Awaiting response from tensorflower), type:feature (Feature requests)

Comments

@awav

awav commented Feb 23, 2019

Hello everyone,

I made the same request a while ago at tensorflow/community. A similar question was raised before at tensorflow/tensorflow#9781, where maintainers argued that GPUs are much faster on float32, that the default type cannot (should not) be changed for backwards-compatibility reasons, and so on.

The thing is that precision is very important for algorithms that rely on operations like Cholesky decompositions and linear solvers. It becomes very tedious to specify the dtype everywhere, and it gets even worse when you use other frameworks or small libraries that follow the standard type settings: sometimes they become useless simply because of type incompatibilities. The policy of "changing types locally solves your problems" becomes cumbersome.
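
For illustration, this is the kind of mismatch the current default causes (plain TF 2.x behaviour, no proposed API involved; the matrix values are just an example):

import tensorflow as tf

a = tf.constant([[4.0, 2.0], [2.0, 3.0]])                    # float32, the hard-coded default
b = tf.constant([[4.0, 2.0], [2.0, 3.0]], dtype=tf.float64)  # what numerical code usually needs

tf.linalg.cholesky(b)   # fine in float64
a + b                   # raises InvalidArgumentError: TF does not promote float32 to float64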

It would be great to have methods tf.set_default_float and tf.set_default_int in TensorFlow 2.0, and I believe such a small change would make TensorFlow more user-friendly.

Kind regards,
Artem Artemev

@jvishnuvardhan jvishnuvardhan self-assigned this Feb 25, 2019
@jvishnuvardhan jvishnuvardhan added the type:feature Feature requests label Feb 25, 2019
@dynamicwebpaige dynamicwebpaige added the TF 2.0 Issues relating to TensorFlow 2.0 label Feb 25, 2019
@jvishnuvardhan jvishnuvardhan added comp:ops OPs related issues comp:apis Highlevel API related issues and removed comp:ops OPs related issues labels Feb 25, 2019
@jvishnuvardhan jvishnuvardhan added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Feb 27, 2019
@alextp
Contributor

alextp commented Feb 27, 2019

@reedwm @azaks2 I wonder if we can piggyback on the fp16 work to make it easier to use fp64 across the board?

@reedwm
Member

reedwm commented Feb 27, 2019

The fp16 API we are working on will only support Keras layers and models. Instead, perhaps floatx should be used as the default dtype for such algorithms, although I believe currently floatx only affects tf.keras.
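
A quick illustration of that limitation with current behaviour (nothing proposed here, just how floatx works today):

import tensorflow as tf

tf.keras.backend.set_floatx("float64")

layer = tf.keras.layers.Dense(1)
layer.build((None, 3))
print(layer.kernel.dtype)      # float64 -- Keras layers respect floatx
print(tf.constant(1.0).dtype)  # float32 -- core TF ops ignore floatx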

/CC @fchollet, what do you think about using floatx?

@tensorflowbutler tensorflowbutler removed the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Feb 28, 2019
@awav
Author

awav commented Mar 1, 2019

@alextp , @reedwm Hello! By tf.set_default_float, I meant that the method would change the default dtype for variable and tensor creation. E.g.:

v = 1.0  # some plain Python/NumPy value, i.e. not already a tf.Tensor

tensor1 = tf.Variable(v)
tf.set_default_float(tf.float64)           # proposed API
tensor2 = tf.Variable(v)
tensor3 = tf.Variable(v, dtype=tf.float16)

# tensor1 has dtype float32 (the old default)
# tensor2 has dtype float64 (the new default)
# tensor3 has dtype float16 (an explicit dtype always wins)

@awav
Author

awav commented Apr 4, 2019

@alextp, @reedwm Hello everyone! Out of curiosity, is anyone working on this feature, or has it been postponed until better times? :)

@alextp
Contributor

alextp commented Apr 4, 2019 via email

@reedwm
Member

reedwm commented Apr 4, 2019

You're right, we cannot use floatx for backwards-compatibility reasons, since currently it does not affect the dtypes of tf.Variables and such.

This proposal sounds reasonable. It would be convenient to have all variables/constants in a user-chosen dtype without having to specify the dtype parameter for each. I do have several concerns, however.

  1. It is unclear what the difference between set_default_float and floatx is. floatx only affects tf.keras, while set_default_float would affect functions outside tf.keras. It is likely users will get them mixed up.
  2. It is unclear exactly what set_default_float would affect. It probably should affect tf.Variable and tf.constant because they create tensors, and probably should not affect tf.add, etc. But what about functions like tf.ones_like? They arguably create a tensor from scratch, but they also take a tensor as input (see the snippet after this list).
  3. We are currently working on a mixed precision API. Currently this only affects tf.keras, but nothing is set in stone at this point. The mixed precision API will create variables, but not necessarily constants, in float32, so it is unclear how it would interact with set_default_float. Since the mixed precision API fundamentally just changes the precision of tensors, it may be unclear to users what the difference between set_default_float and the mixed precision API is.
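
To make concern 2 concrete, this is how tf.ones_like behaves today (standard TF 2.x, no new API):

import tensorflow as tf

x64 = tf.constant([1.0, 2.0], dtype=tf.float64)
print(tf.ones_like(x64).dtype)         # float64 -- dtype follows the input tensor
print(tf.ones_like([1.0, 2.0]).dtype)  # float32 -- a plain list is converted first,
                                       # so this is where a global default could plausibly apply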

@alextp what do you think? Maybe we should wait until the mixed precision API is further along, so at least we could address (3) with more certainty.

@reedwm
Member

reedwm commented Jul 11, 2019

I think this feature will be very useful for the upcoming tf.keras mixed precision API, so I will revisit this, at least for floating-point types.

I think it makes sense for tf.set_default_float to only affect functions which produce tf.Tensor outputs without taking in tf.Tensor inputs, such as tf.ones and tf.convert_to_tensor. It would not affect ops like tf.ones_like which take Tensor inputs, unless they were passed non-Tensor inputs.

@awav, in your example with variables, if v were a Tensor, tf.Variable would ignore the default float value and use the dtype of v. If you walk up v's input chain, some op must have produced v without taking a Tensor input, and so that op would have used the default float. Due to dtype propagation, this would make v the default float type as well, unless a cast were done. If v were a non-Tensor, like a float or numpy array, then it would use the default float value.
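
Sketched in code, that rule would look roughly like this (tf.set_default_float is still hypothetical; everything else is existing API):

tf.set_default_float(tf.float64)       # hypothetical, proposed in this issue

tf.ones([2]).dtype                     # float64: built purely from non-Tensor inputs
tf.convert_to_tensor(1.0).dtype        # float64: input is a plain Python float
tf.ones_like(tf.constant(1.0)).dtype   # float32: dtype follows the Tensor input
tf.Variable(1.0).dtype                 # float64: initial value is a non-Tensor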

Also, IMO, set_default_float and floatx should be the same, so that setting one also sets the other. We would have to make set_default_float thread-local to be compatible with DistributionStrategy, so we would also have to change floatx to be thread-local.

@alextp, @awav, any thoughts?

@alextp
Contributor

alextp commented Jul 12, 2019

Overall looks good. Ideally floatx can just call set_default_float for correctness.

@jonas-eschle
Contributor

We would also be very interested in this feature.

Since we use TF to do likelihood fits that require float64 (zfit) and allow users to specify their models with TF themselves, we often run into problems where users create (because of the default) a float32 tensor that conflicts with the other float64 tensors. For users unfamiliar with TF this is quite annoying (and it is hard for us to catch and to explain exactly what to change).

Currently we have even started wrapping some of the TF functions to avoid this problem, so a global fix would be highly appreciated (and we may be able to help implement it, of course).
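
For the record, those wrappers look roughly like this (a sketch of the pattern with a made-up module-level default, not zfit's actual code):

import tensorflow as tf

_DEFAULT_FLOAT = tf.float64  # library-wide default, configurable by the user

def constant(value, dtype=None):
    """Like tf.constant, but defaulting to the library-wide float dtype."""
    return tf.constant(value, dtype=dtype or _DEFAULT_FLOAT)

def convert(value, dtype=None):
    """Like tf.convert_to_tensor, but only forcing the default on non-Tensor inputs."""
    if isinstance(value, tf.Tensor):
        return value
    return tf.convert_to_tensor(value, dtype=dtype or _DEFAULT_FLOAT)

Every wrapper of this kind is exactly the boilerplate that a tf.set_default_float would remove.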

@reedwm
Member

reedwm commented Sep 3, 2019

@mayou36 thank you for the feedback. I plan to start a design doc in the upcoming weeks.

@awav
Author

awav commented Sep 12, 2019

@reedwm ,

in your example with variables, if v were a Tensor, tf.Variable would ignore the default float value and use the dtype of v.

I agree with this; it is absolutely logical.
I look forward to this feature!

@awav
Author

awav commented Oct 17, 2019

@reedwm , any updates on that?

@reedwm
Member

reedwm commented Oct 17, 2019

Not yet, sorry! I am currently busy with other tasks, but I hope to have a design doc fairly soon.

@awav
Author

awav commented Dec 19, 2019

@alextp, @reedwm, it has been a while; is there any progress on this feature?

@reedwm
Member

reedwm commented Dec 19, 2019

Unfortunately still no update :(. I am working on some mixed precision tasks at the moment. Once the Keras mixed precision API is more complete I can get to this.

@st--

st-- commented Feb 5, 2020

@reedwm any news? I'd be very keen for this to make it into TensorFlow: the lack of configurable default dtypes is one of our biggest pain points with TensorFlow and makes life difficult and cumbersome for both developers and users of downstream libraries (such as GPflow)!

@renatobellotti

Has there been progress?

@reedwm
Member

reedwm commented Mar 9, 2020

Sorry no updates yet :(

@Sayam753

Any updates on this? I am very excited to see this feature in tensorflow.

@filipbartek

tf.keras.backend.set_floatx seems to be doing this.

@awav
Author

awav commented Aug 19, 2020

tf.keras.backend.set_floatx seems to be doing this.

Not exactly, it works for Keras only and doesn't have any effect on the rest of TensorFlow.

@donthomasitos

This would be really helpful. Writing float16 code (when not using Keras) is filled with boilerplate crap.

@PercyLau

This could be quite helpful, but it is a pity to see no update.

@rmothukuru rmothukuru added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jun 18, 2021
@tilakrayal tilakrayal removed the TF 2.0 Issues relating to TensorFlow 2.0 label Dec 24, 2021
@saideeptiku

Three years, and still waiting.

@jonas-eschle
Contributor

@reedwm what is the status of this? Any progress, anything we can help with?

@dhar174

dhar174 commented Apr 15, 2022

I'd like to chime in as well that this would be a very useful feature. It can be very cumbersome to cast every tensor to the correct dtype manually, especially when using existing data. Let us know how we might be able to help, as a community!

@tomfrankkirk

tomfrankkirk commented Oct 11, 2022

Just to keep the dream alive... yes this would be really useful!

@SimonBartels

Would it perhaps increase the motivation for this feature to point out that PyTorch has it? :)
https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html
Implementing algorithms from numerical linear algebra becomes really tedious when constantly running into easily avoidable tf.float32 != tf.float64 exceptions.
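
For comparison, the PyTorch equivalent (real, existing API):

import torch

torch.set_default_dtype(torch.float64)
print(torch.tensor([1.0]).dtype)  # torch.float64
print(torch.ones(2, 2).dtype)     # torch.float64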

@Orient-Dorey

Hi! Any updates regarding this feature? This would indeed be very helpful! :)
