How to run the program on MacbookPro with M2 Max CPU? #8

@zhuchuji

Description

I tried to run the program on my MacBook Pro with an M2 Max CPU, but it throws `AssertionError: Torch not compiled with CUDA enabled`.

The detailed log is shown below:

/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
  warnings.warn(
Loading checkpoint shards:   0%|                         | 0/10 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/Users/chockiezhu/practice/CLoT/gradio_demo.py", line 369, in <module>
    main()
  File "/Users/chockiezhu/practice/CLoT/gradio_demo.py", line 365, in main
    _launch_demo(args)
  File "/Users/chockiezhu/practice/CLoT/gradio_demo.py", line 114, in _launch_demo
    model, tokenizer = _load_model_tokenizer(args.checkpoint_path)
  File "/Users/chockiezhu/practice/CLoT/gradio_demo.py", line 71, in _load_model_tokenizer
    model = AutoPeftModelForCausalLM.from_pretrained(
  File "/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/peft/auto.py", line 104, in from_pretrained
    base_model = target_class.from_pretrained(base_model_path, **kwargs)
  File "/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/transformers/modeling_utils.py", line 3531, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/transformers/modeling_utils.py", line 3958, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/transformers/modeling_utils.py", line 812, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/accelerate/utils/modeling.py", line 399, in set_module_tensor_to_device
    new_value = value.to(device)
  File "/Users/chockiezhu/Library/Python/3.9/lib/python/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
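From the traceback, loading fails when `accelerate` tries to move a checkpoint shard onto a CUDA device, which macOS builds of PyTorch do not support. A common workaround on Apple Silicon is to fall back to the MPS backend (or CPU) instead of CUDA. Below is a minimal sketch of that device-selection logic; `pick_device` is a hypothetical helper, not part of CLoT, and in a real script the availability flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return a torch device string, preferring CUDA, then MPS, then CPU.

    In an actual script, call it as:
        pick_device(torch.cuda.is_available(),
                    torch.backends.mps.is_available())
    and pass the result (e.g. via device_map) when loading the model.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


# On an M2 Max Mac, CUDA is unavailable but MPS usually is:
print(pick_device(cuda_available=False, mps_available=True))  # mps
```

Whether the MPS backend actually works here depends on the ops the model uses; if it does not, `"cpu"` should at least avoid the `AssertionError`, at the cost of speed.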
