
Add Metal/GPU support for running model inference #30

Open
singularitti opened this issue Jun 8, 2023 · 1 comment

@singularitti

I am no expert in this, but inference seems to run on the CPU, which can cause significant heat generation.

@alexrozanski
Owner

@singularitti Adding support for this in llama.swift to start with (see alexrozanski/llama.swift#8). This will be coming in LlamaChat v2, which is still a WIP!

@alexrozanski alexrozanski changed the title Does it support running on Apple Silicon's GPUs? Add Metal/GPU support for running model inference Jun 20, 2023
@alexrozanski alexrozanski self-assigned this Jun 20, 2023
@alexrozanski alexrozanski added this to the v2.0 milestone Jun 20, 2023
Project status: In Progress