
RuntimeError: User specified an unsupported autocast device_type 'mps' #29431

Closed
danny-su opened this issue Mar 4, 2024 · 14 comments · Fixed by #29439

danny-su commented Mar 4, 2024

System Info

transformers: 4.38.2
Python: 3.11.8
macOS: 14.3.1

Traceback (most recent call last):
  File "/Users/danny/Downloads/gemma_test.py", line 11, in <module>
    outputs = model.generate(**input_ids, max_new_tokens=50)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/generation/utils.py", line 1544, in generate
    return self.greedy_search(
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/generation/utils.py", line 2404, in greedy_search
    outputs = self(
              ^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/gemma/modeling_gemma.py", line 1073, in forward
    outputs = self.model(
              ^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/gemma/modeling_gemma.py", line 914, in forward
    layer_outputs = decoder_layer(
                    ^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/gemma/modeling_gemma.py", line 631, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
                                                          ^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/gemma/modeling_gemma.py", line 537, in forward
    cos, sin = self.rotary_emb(value_states, position_ids, seq_len=None)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/gemma/modeling_gemma.py", line 117, in forward
    with torch.autocast(device_type=device_type, enabled=False):
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 241, in __init__
    raise RuntimeError(
RuntimeError: User specified an unsupported autocast device_type 'mps'

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Run the following code:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="mps")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("mps")

outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))

Expected behavior

No error.
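For context, the failing line in modeling_gemma.py opens a disabled autocast context around the rotary-embedding math, and in the affected torch builds torch.autocast rejects device_type 'mps' even with enabled=False. A minimal sketch of a guard that sidesteps the error (this only illustrates the idea; the actual patch in #29439 may differ):

import torch

# Toy stand-in for the hidden states; use 'mps' only if it is available.
x = torch.randn(4, 8, device="mps" if torch.backends.mps.is_available() else "cpu")

# In the affected torch versions, torch.autocast raises for 'mps' even with
# enabled=False, so map it to 'cpu' before opening the (disabled) context.
device_type = x.device.type
if device_type == "mps":
    device_type = "cpu"

with torch.autocast(device_type=device_type, enabled=False):
    # Full-precision math runs here regardless of any outer autocast,
    # mirroring the rotary embedding's cos/sin computation.
    cos, sin = torch.cos(x.float()), torch.sin(x.float())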

currybab (Contributor) commented Mar 4, 2024

I've submitted a PR to address this issue. It seems that reverting to a version of the library prior to 4.38.2 might work well as a temporary workaround. However, I haven't tried this.
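For reference, pinning to the release just before 4.38.2 as a temporary workaround would be:

pip install "transformers==4.38.1"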

maxim25 commented Mar 4, 2024

I spent all weekend trying to get google/gemma-2b-it to work with MPS. Finally, what worked was reverting to transformers 4.38.1.

Thanks @currybab

@mmenendezg

Hi everyone,

Could someone from the Hugging Face team share whether this error will be fixed in a future release of transformers?

Thanks

@amyeroberts (Collaborator)

Hi @mmenendezg - this should have been resolved in #29439. Do you still experience the error if you install transformers from source? pip install git+https://github.com/huggingface/transformers

@mmenendezg

Hi @amyeroberts - Yes, this fixes the issue. Thanks for your support.

Marvbuster commented Mar 19, 2024

Hi, unfortunately I am getting this error as well, on an M2 Max MacBook Pro with 64 GB.
It happens when I try to start a Dreambooth training in stable-diffusion-webui.

Initializing dreambooth training...
WARNING:dreambooth.train_dreambooth:Wandb API key not set. Please set WANDB_API_KEY environment variable to use wandb.
0 cached latents
0it [00:00, ?it/s]
We need a total of 180 class images.
0: : 24it [00:00, 934.94it/s]
Loading pipeline components...: 100%|██████████| 7/7 [00:12<00:00, 1.78s/it]
Using scheduler: DEISMultistep: 0%| | 0/180 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users/USER/workspace/ai/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/ui_functions.py", line 735, in start_training
    result = main(class_gen_method=class_gen_method)
  File "/Users/USER/workspace/ai/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 2003, in main
    return inner_loop()
  File "/Users/USER/workspace/ai/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "/Users/USER/workspace/ai/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 380, in inner_loop
    count, instance_prompts, class_prompts = generate_classifiers(
  File "/Users/USER/workspace/ai/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/utils/gen_utils.py", line 211, in generate_classifiers
    new_images = builder.generate_images(prompts, pbar)
  File "/Users/USER/workspace/ai/stable-diffusion-webui/extensions/sd_dreambooth_extension/helpers/image_builder.py", line 235, in generate_images
    with self.accelerator.autocast(), torch.inference_mode():
  File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/Users/USER/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/accelerator.py", line 2907, in autocast
    autocast_context = get_mixed_precision_context_manager(self.native_amp, cache_enabled=cache_enabled)
  File "/Users/USER/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1372, in get_mixed_precision_context_manager
    return torch.autocast(device_type=state.device.type, dtype=torch.float16, cache_enabled=cache_enabled)
  File "/Users/USER/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 241, in __init__
    raise RuntimeError(
RuntimeError: User specified an unsupported autocast device_type 'mps'

I am pretty new to the Python world, so forgive me if I ask something stupid.

I have a conda environment in which I installed all of the packages with pip and conda. I am starting webui in this environment and getting the above-mentioned error.

In the stable-diffusion folder there is a venv folder. I think this is technically the same as my conda env, right? And the binaries in there won't change if I install packages with pip and conda, right?

Is it safe to change the packages in the venv, or do you have any tip for me on where to look for a fix? This is the only place I could find on the internet that has the same error.
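For what it's worth, a venv and a conda env are separate interpreters with separate site-packages directories, so installing packages into one does not change the other. A quick way to check which interpreter and which torch/transformers installs a script is actually using (run it inside the environment webui starts from):

import sys
import torch
import transformers

# Each environment has its own interpreter and site-packages, so these
# paths show exactly which installs are in play for this process.
print(sys.executable)
print(torch.__version__, torch.__file__)
print(transformers.__version__, transformers.__file__)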

@amyeroberts (Collaborator)

Hi @Marvbuster - thanks for flagging this issue. From the traceback, the error isn't coming from the transformers library. I'd suggest opening an issue in the diffusers repo.

@apotdar01

I am also getting the same issue on an M2 Pro Max.

@sagargulabani

Hi, I am also getting the same issue.
I am using an Apple M3 Max machine with 64 GB RAM, a 40-core GPU, and a 16-core CPU.

400 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users/sagargulabani/dev/automatic1111/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/ui_functions.py", line 735, in start_training
    result = main(class_gen_method=class_gen_method)
  File "/Users/sagargulabani/dev/automatic1111/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 2003, in main
    return inner_loop()
  File "/Users/sagargulabani/dev/automatic1111/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "/Users/sagargulabani/dev/automatic1111/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 380, in inner_loop
    count, instance_prompts, class_prompts = generate_classifiers(
  File "/Users/sagargulabani/dev/automatic1111/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/utils/gen_utils.py", line 211, in generate_classifiers
    new_images = builder.generate_images(prompts, pbar)
  File "/Users/sagargulabani/dev/automatic1111/stable-diffusion-webui/extensions/sd_dreambooth_extension/helpers/image_builder.py", line 235, in generate_images
    with self.accelerator.autocast(), torch.inference_mode():
  File "/opt/anaconda3/envs/automatic1111/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/opt/anaconda3/envs/automatic1111/lib/python3.10/site-packages/accelerate/accelerator.py", line 2907, in autocast
    autocast_context = get_mixed_precision_context_manager(self.native_amp, cache_enabled=cache_enabled)
  File "/opt/anaconda3/envs/automatic1111/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1372, in get_mixed_precision_context_manager
    return torch.autocast(device_type=state.device.type, dtype=torch.float16, cache_enabled=cache_enabled)
  File "/opt/anaconda3/envs/automatic1111/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 241, in __init__
    raise RuntimeError(
RuntimeError: User specified an unsupported autocast device_type 'mps'
Generating class images 0/1400::   0%|  

Getting this error for dreambooth.

@Marvbuster
> Hi, I am also getting the same issue. I am using an Apple M3 Max machine with 64 GB RAM, a 40-core GPU, and a 16-core CPU. […] RuntimeError: User specified an unsupported autocast device_type 'mps' […] Getting this error for dreambooth.

You seem to have the same problem. @amyeroberts suggested opening an issue in the diffusers repo, but I haven't done so yet.

danpe commented Apr 16, 2024

Was anyone able to overcome this issue?

@ArthurZucker (Collaborator)

Disable autocast on MPS; it's a torch issue, not a transformers issue.
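A minimal sketch of that idea, assuming you control the autocast call yourself (in accelerate-based apps like the webui the context comes from the framework, so the knob there is its mixed-precision setting instead):

import contextlib
import torch

device_type = "mps" if torch.backends.mps.is_available() else "cpu"

# torch.autocast rejects 'mps' in the affected torch versions, so fall
# back to a no-op context manager instead of autocast on that device.
amp_ctx = (
    torch.autocast(device_type=device_type)
    if device_type in ("cuda", "cpu")
    else contextlib.nullcontext()
)

with amp_ctx:
    out = torch.ones(2, 2, device=device_type) * 2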

danpe commented Apr 18, 2024

> Disable autocast on MPS; it's a torch issue, not a transformers issue.

Thank you, how do I disable autocast when using Automatic1111?

@ArthurZucker (Collaborator)

I would recommend opening an issue over there! I have no idea how the Automatic1111 code works 😉
