
libc++abi: terminating due to uncaught exception of type std::runtime_error: [Matmul::eval_cpu] Currently only supports float32. #2

Open
bengrine opened this issue May 20, 2024 · 1 comment


@bengrine

python run.py --train --model ./TinyLlama-1.1B-Chat-v1.0 --data ./data --batch-size 1 --lora-layers 4

Loading pretrained model
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Total parameters 227.461M
Trainable parameters 0.205M
Loading datasets
Training
libc++abi: terminating due to uncaught exception of type std::runtime_error: [Matmul::eval_cpu] Currently only supports float32.
zsh: abort      python run.py --train --model ./TinyLlama-1.1B-Chat-v1.0 --data ./data --batch-size 1 --lora-layers 4

Model Name: Mac Pro
Model Identifier: MacPro7,1
Enclosure: Tower
Processor Name: 16-Core Intel Xeon W
Processor Speed: 3.2 GHz
Number of Processors: 1
Total Number of Cores: 16
L2 Cache (per Core): 1 MB
L3 Cache: 22 MB
Hyper-Threading Technology: Enabled
Memory: 128 GB
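
A possible workaround suggested by the error text itself: on a machine without Metal, MLX runs everything on its CPU backend, and the exception says that backend's matmul only supports float32, so casting the loaded weights (pretrained checkpoints often ship as float16) to float32 before training may get past this particular error. A minimal sketch, assuming the model is an mlx.nn Module; cast_to_float32 is a hypothetical helper, not something run.py defines:

```python
import mlx.core as mx
import mlx.nn as nn
from mlx.utils import tree_map

def cast_to_float32(model: nn.Module) -> nn.Module:
    # Cast every parameter to float32 so the CPU Matmul kernel accepts it.
    model.update(tree_map(lambda p: p.astype(mx.float32), model.parameters()))
    return model
```

Note this only sidesteps the dtype restriction; CPU-only training of a 1.1B-parameter model would still be very slow.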

@bengrine (Author)

I guess only Apple Silicon is supported by MLX.
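
For anyone landing here, the fallback can be confirmed directly, since MLX exposes a Metal-availability check. A minimal check, assuming the mlx package imports at all on the Intel build:

```python
import mlx.core as mx

print(mx.metal.is_available())  # expected False on Intel Macs: no Metal GPU backend
print(mx.default_device())      # without Metal, work runs on the CPU backend
```

On an Intel Mac Pro like the one above this should print False, which is why every op goes through the CPU paths such as Matmul::eval_cpu.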
