
Export llama without llama #85

Merged: 1 commit into karpathy:master on Jul 25, 2023

Conversation

@python273 (Contributor) commented on Jul 25, 2023:

Now only torch + numpy are needed, no GPU, and no need to clone https://github.com/facebookresearch/llama
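The core idea, as a minimal sketch (not the exact script in this PR; the paths, the derived config values, and `max_seq_len` are illustrative, and the header layout follows llama2.c's legacy `.bin` format):

```python
import json
import struct
import torch

def serialize(f, t):
    # write a tensor as raw float32 bytes, row-major
    f.write(t.detach().cpu().reshape(-1).to(torch.float32).numpy().tobytes())

# everything loads on CPU; no GPU and no llama repo required
params = json.load(open("llama-2-7b/params.json"))
ckpt = torch.load("llama-2-7b/consolidated.00.pth", map_location="cpu")

vocab_size, dim = ckpt["tok_embeddings.weight"].shape
hidden_dim = ckpt["layers.0.feed_forward.w1.weight"].shape[0]
max_seq_len = 2048  # assumed context length, adjust per model

with open("llama2_7b.bin", "wb") as f:
    # llama2.c header: seven int32 config values
    f.write(struct.pack("iiiiiii", dim, hidden_dim, params["n_layers"],
                        params["n_heads"], params["n_heads"],
                        vocab_size, max_seq_len))
    serialize(f, ckpt["tok_embeddings.weight"])
    # ... the remaining weights follow in the fixed order run.c expects:
    # attention norms, wq/wk/wv/wo, ffn norms, w1/w2/w3, final norm, RoPE tables
```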

7B works; 13B doesn't for some reason. But I have a llama2.c-format loader for Tinygrad and it works there, so there must be some bug in run.c?
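For anyone debugging this: 13B is the first size whose checkpoint ships as two model-parallel shards, so the merge step is another suspect besides run.c. A rough sketch of the merge, assuming Meta's column/row-parallel split conventions (the replicated-key check and the per-name concat axis are my guesses, worth double-checking):

```python
import torch

# the 13B checkpoint is sharded into consolidated.00.pth / consolidated.01.pth
shards = [torch.load(f"llama-2-13b/consolidated.{i:02d}.pth",
                     map_location="cpu") for i in range(2)]

def merge(name, tensors):
    # norms and the RoPE table are replicated across shards; keep one copy
    if "norm" in name or "rope" in name:
        return tensors[0]
    # column-parallel weights (wq/wk/wv/w1/w3/output) are split along dim 0;
    # row-parallel weights (wo/w2) and the token embedding along dim 1
    axis = 1 if any(k in name for k in ("wo", "w2", "tok_embeddings")) else 0
    return torch.cat(tensors, dim=axis)

merged = {name: merge(name, [s[name] for s in shards]) for name in shards[0]}
```

A wrong axis here still loads cleanly and produces garbage text, which would look a lot like a run.c bug.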

[two screenshots attached]

@sidsarasvati commented:

You wanna update the README as well? I can test it out on my M1 Mac

@python273 (Contributor, Author) commented:

😅 Seems like there are already #71 and #78. My version works without the llama repo, is CPU only, and supports models bigger than 7B

@karpathy (Owner) commented:

Thank you, I will take it and change some of the docs too. Let's worry about 13B later. Strange that it would work in tinygrad but not here.

@karpathy merged commit 5bcd19a into karpathy:master on Jul 25, 2023
vinhtran2611 pushed a commit to vinhtran2611/llama2.c that referenced this pull request on Jan 20, 2024