
windows compatibility? #2

Closed
anm5704 opened this issue Mar 21, 2023 · 6 comments

Comments

anm5704 commented Mar 21, 2023

I'm a beginner. Is this program compatible with Windows? What are the necessary steps? I already have alpaca.cpp installed on my laptop.

mudler (Owner) commented Mar 21, 2023

Hi!

I don't have a Windows machine to double-check, but I don't think there's anything specific preventing it from working there.

Just install Docker and start in API mode (https://github.com/go-skynet/llama-cli#advanced-usage), or I guess WSL should work too.

anm5704 (Author) commented Mar 21, 2023

Thank you. I ran this command after installing Docker:

docker run -ti --rm quay.io/go-skynet/llama-cli:v0.1 --instruction "What's an alpaca?" --topk 10000

It seems to start the process for 5-6 seconds and then end without displaying any output. The same thing happens in API mode: the localhost page doesn't even load, and the process stops at 5-6 seconds without output. Any idea what might be causing this?

anm5704 closed this as completed Mar 21, 2023
anm5704 reopened this Mar 21, 2023
stasadance commented:

> …it seems to start the process for 5-6 seconds and ends it without displaying any output…

I was able to fix this by allocating more memory in Docker Desktop settings:

(screenshot of Docker Desktop memory settings)

anm5704 (Author) commented Mar 23, 2023

Hello! Thanks. It seems that on Windows you have to create a `.wslconfig` file to increase the limits, but it's working now.
I'm confused though: the API is for LLaMA, right, and not the fine-tuned Alpaca models (trained on GPT-3 data)?
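For reference, a minimal `.wslconfig` (created in `%UserProfile%` on the Windows side) looks something like this; the specific memory and swap values below are illustrative assumptions, not figures from this thread:

```ini
; %UserProfile%\.wslconfig — resource limits for the WSL2 VM backing Docker Desktop
[wsl2]
memory=8GB   ; raise the WSL2 memory cap (pick a value that fits your host RAM)
swap=2GB
```

After saving the file, run `wsl --shutdown` from a Windows terminal so the new limits apply the next time WSL (and Docker Desktop) starts.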

mudler (Owner) commented Mar 23, 2023

The API should work for both, although I've only tested Alpaca models. The new version 0.3 also has an `--alpaca` boolean flag.

mudler (Owner) commented Apr 4, 2023

I guess we can close this now? Thanks @stasadance for pointing us in the right direction!

mudler closed this as completed Apr 4, 2023
dave-gray101 added a commit to dave-gray101/LocalAI that referenced this issue Apr 6, 2024