
Module 'keras.engine.data_adapter' has no attribute 'expand_1d' with non dummy loss #20750

Closed
ZJaume opened this issue Dec 13, 2022 · 5 comments · Fixed by #20786

ZJaume commented Dec 13, 2022

System Info

  • transformers version: 4.25.1
  • Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.17
  • Python version: 3.8.13
  • Huggingface_hub version: 0.11.1
  • PyTorch version (GPU?): 1.10.1+cu102 (True)
  • Tensorflow version (GPU?): 2.11.0 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Who can help?

@Rocketknight1

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Run the example code with a non-dummy loss:

from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
from tensorflow.keras.optimizers import Adam
from datasets import load_dataset
import tensorflow as tf
import numpy as np

dataset = load_dataset("glue", "cola")
dataset = dataset["train"]  # Just take the training split for now


tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = dict(tokenizer(dataset["sentence"], return_tensors="np", padding=True))

labels = np.array(dataset["label"])  # Label is already an array of 0 and 1

# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5), loss='binary_crossentropy')

model.fit(tokenized_data, labels)

This fails with the following traceback:

Traceback (most recent call last):
  File "test_mirrored.py", line 22, in <module>
    model.fit(tokenized_data, labels)
  File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_file1a59fb96.py", line 15, in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1476, in train_step
    data = data_adapter.expand_1d(data)
AttributeError: in user code:

    File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function  *
        return step_function(self, iterator)
    File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step  **
        outputs = model.train_step(data)
    File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1476, in train_step
        data = data_adapter.expand_1d(data)

    AttributeError: module 'keras.engine.data_adapter' has no attribute 'expand_1d'

Expected behavior

Training completes successfully.

sgugger commented Dec 13, 2022

cc @Rocketknight1 and @gante

@Rocketknight1

Reproduced this issue locally; it seems to be specific to TF 2.11 and doesn't occur in previous versions. Checking it out now!
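
For context: transformers 4.25.1's train_step calls the private helper keras.engine.data_adapter.expand_1d, which the Keras bundled with TF 2.11 no longer provides, hence the AttributeError on that version only. Roughly speaking, the helper added a trailing axis to rank-1 tensors so labels of shape (batch,) became (batch, 1). The snippet below is an illustrative sketch of that behaviour, not the exact Keras source:

import tensorflow as tf

def expand_1d(data):
    # Sketch of the removed Keras helper: walk a (possibly nested) structure
    # and add a trailing axis to every rank-1 tensor, e.g. (batch,) -> (batch, 1).
    def _expand(t):
        if isinstance(t, tf.Tensor) and t.shape.rank == 1:
            return tf.expand_dims(t, axis=-1)
        return t
    return tf.nest.map_structure(_expand, data)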

@Rocketknight1

@ZJaume @McClunatic the fix has been merged - please try installing transformers from main with pip install --upgrade git+https://github.com/huggingface/transformers.git and see if the issue is resolved. If you encounter any further problems, please reopen this issue and let me know!
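
For environments stuck on transformers 4.25.1 with TF 2.11 that cannot install from main yet, one possible stopgap is to restore the missing attribute before calling model.fit. This is an illustrative sketch, not an officially supported fix; pinning tensorflow below 2.11 is another option.

import tensorflow as tf
from keras.engine import data_adapter

# Stopgap only: re-create the attribute that transformers 4.25.1 looks up at
# call time inside train_step. Remove this once the upstream fix is installed.
if not hasattr(data_adapter, "expand_1d"):
    def _expand_1d(data):
        def _expand(t):
            if isinstance(t, tf.Tensor) and t.shape.rank == 1:
                return tf.expand_dims(t, axis=-1)
            return t
        return tf.nest.map_structure(_expand, data)

    data_adapter.expand_1d = _expand_1d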

@McClunatic

@Rocketknight1 I've just tested it in my notebook and the issue is indeed resolved! Thanks so much for fixing this so quickly!


jamesalbert commented Jan 4, 2023

Came across this issue while experiencing the same thing. Upgrading from the main branch worked for me as well 🚀
