Description
Environment info
- `transformers` version: 4.12.5
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.10
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Who can help
@michaelbenayoun @Albertobegue
Information
When both Torch and TensorFlow are installed, `FeaturesManager` defaults to using `AutoModel`, so the model returned by `get_model_from_feature` is always a Torch model.
To reproduce
Steps to reproduce the behavior:
- Install Torch and TF
- Call `FeaturesManager.get_model_from_feature` with arbitrary but supported `features` and `model_name` arguments
- The resulting model is always a Torch model
```python
features = "default"  # randomly chosen, supported feature
model_name = "bert"   # randomly chosen, supported model
model = FeaturesManager.get_model_from_feature(features, model_name)
```

Expected behavior
Some test environments have both Torch and TensorFlow installed, because the immediate task is to verify that functionality is identical regardless of the framework. I would expect `FeaturesManager.get_model_from_feature` to allow TensorFlow to be used even when Torch is installed. This could be implemented via, e.g., a keyword argument to `get_model_from_feature` with a default value of `None`. When the keyword argument is `None` and both Torch and TensorFlow are installed, `FeaturesManager` would default to Torch, as it does now. Otherwise, it would use the specified framework.
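A minimal sketch of the proposed dispatch logic, written as a standalone helper so it runs without either framework installed. The function name `select_framework`, the `framework` keyword, and the `"pt"`/`"tf"` identifiers are illustrative assumptions, not the actual transformers API:

```python
def select_framework(framework=None, torch_available=True, tf_available=True):
    """Sketch of the proposed dispatch (illustrative, not the transformers API).

    An explicit `framework` argument wins; otherwise fall back to the
    current behavior of preferring Torch when both frameworks are installed.
    """
    if framework is not None:
        if framework not in ("pt", "tf"):
            raise ValueError(f"Unknown framework: {framework!r}")
        return framework
    if torch_available:
        return "pt"  # current default when both are installed
    if tf_available:
        return "tf"
    raise RuntimeError("Neither PyTorch nor TensorFlow is available.")
```

With this shape, existing callers are unaffected (`select_framework()` still prefers Torch), while a test suite can force TensorFlow by passing `framework="tf"` explicitly.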