
Can't install on windows #488

Closed
ParisNeo opened this issue Aug 30, 2023 · 9 comments

@ParisNeo

Hi there, when I try to install this tool on Windows, I get this error:
```
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
    Traceback (most recent call last):
      File "<string>", line 2, in <module>
      File "<pip-setuptools-caller>", line 34, in <module>
      File "C:\Users\aloui\AppData\Local\Temp\pip-install-2dteevmr\uvloop_a94358b69ddb46f6a5dc8651981ce698\setup.py", line 8, in <module>
        raise RuntimeError('uvloop does not support Windows at the moment')
    RuntimeError: uvloop does not support Windows at the moment
    [end of output]
```

Is there any way to make this run on Windows?

@shohamjac

Petals uses uvloop, which does not support Windows at the moment. You can run it using WSL with the guide here.

I am going to hijack this thread and ask: Why is uvloop necessary? Can we create a "Windows version"?
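
For context, uvloop is an opt-in replacement for asyncio's default event loop, so in principle a cross-platform build could fall back to the stock loop on Windows. A minimal sketch of such a guard (not Petals' actual code):

```python
# Hypothetical cross-platform guard, not Petals' actual code:
# install uvloop's libuv-based event loop only where it is supported,
# and fall back to asyncio's default loop on Windows.
import asyncio
import sys

if sys.platform != "win32":
    import uvloop
    uvloop.install()  # makes asyncio use uvloop's event loop policy

async def main():
    await asyncio.sleep(0)  # placeholder for the real async workload

asyncio.run(main())
```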

@borzunov
Collaborator

borzunov commented Aug 30, 2023

Hi @shohamjac,

uvloop is not the only issue - we use lots of Unix-specific things at the moment. Unfortunately, I don't think we'd be able to create a Windows version soon since right now we only have 1.75 people working full-time on the project. So WSL and/or Docker are the only ways to run on Windows for now.

@borzunov
Collaborator

Hi @ParisNeo,

Please use WSL or Docker to run on Windows; see the commands here (they are for servers, but running a client requires a similar installation).

If you're only interested in inference, you can also use the HTTP/WebSocket inference API from any platform.
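
For reference, a minimal client call looks roughly like this (following the usage documented in the Petals README at the time; the model name is one example from the public swarm):

```python
# Minimal Petals client sketch, following the README's documented usage.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model hosted on the public swarm
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Embeddings run locally; the transformer blocks run on remote servers
# contributed by volunteers in the swarm.
inputs = tokenizer("A quick test prompt", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```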

@borzunov
Collaborator

I'll update the error message to suggest using WSL/Docker instead of showing the cryptic uvloop error on Windows.
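
Something along these lines (a sketch, not the final patch) would fail early with an actionable message:

```python
# Sketch of an early platform check, not the actual patch:
# raise a clear error on native Windows before uvloop is even imported.
import sys

if sys.platform == "win32":
    raise RuntimeError(
        "Petals does not run natively on Windows yet. "
        "Please use WSL or Docker instead."
    )
```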

@ParisNeo
Author

Hi, and thanks a lot. I just wanted to integrate your tool as a binding in my cross-platform text generation tool (lollms). Bindings need to run on all platforms seamlessly, as the users of my tool are not supposed to be nerdy :) :
https://github.com/ParisNeo/lollms-webui

To be honest, I don't like WSL and I think it blocks people from trying the tool. For Windows users, lollms can be installed from a single installer exe and it runs out of the box. I would have to rebuild a conda environment inside WSL and then bundle it in the installer. A lot of work!

For now it looks like it will be used by Linux nerds if they want to :)


The idea of distributed text generation is genius. I hope this thing grows and starts supporting new models.

I saw that you support BLOOM, StableBeluga2, and Llama 2. Do you support quantized models or any of the fine-tuned models that we can find on Hugging Face?

@borzunov
Collaborator

Hi @ParisNeo, thanks for working on the integration and sorry for the delayed response!

Please note that there's a bounty for people working on a user-friendly Windows installer (the reward may be increased): TheSCInitiative/bounties#16

> I saw that you support BLOOM, StableBeluga2, and Llama 2. Do you support quantized models or any of the fine-tuned models that we can find on Hugging Face?

We support all models based on the Llama, Falcon, and BLOOM architectures - so you can run fine-tuned versions from the Hugging Face Hub (given that someone connects GPUs hosting them). All models are loaded with NF4 quantization by default (4.5 bits per weight with only a tiny quality loss).
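
As a back-of-envelope illustration of what 4.5 bits per weight means in practice (the 70B parameter count below is just an example):

```python
# Rough memory estimate for NF4 (~4.5 bits/weight, including overhead).
params = 70e9                    # example: a 70B-parameter model
nf4_gb = params * 4.5 / 8 / 1e9  # ~39 GB, spread across the swarm
fp16_gb = params * 16 / 8 / 1e9  # ~140 GB in float16, for comparison
print(f"NF4: {nf4_gb:.0f} GB vs fp16: {fp16_gb:.0f} GB")
```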

Let us know if you have other questions!

@ParisNeo
Author

Thanks for your answer. I really like this project and want to make it more accessible to people by making it a default binding in my lollms app and adding incentives for people to actually put their GPUs at the service of the p2p generation network.

For now it only runs on Linux, and it would be cool if a future version could run on other platforms too.

I want to democratize the use of this amazing idea by providing an easy-to-install, seamlessly integrated binding for lollms, so that any lollms user can just press install, select an available model, and start running the model on the network. I can modify the models zoo to show how many nodes are serving each model, etc.

Also, I want the network to be self-aware of its environmental impact by integrating CodeCarbon.
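
For reference, CodeCarbon's tracker wraps a workload like this (a minimal sketch; the project name and workload are placeholders):

```python
# Minimal CodeCarbon sketch: measures the estimated emissions of
# whatever runs between start() and stop().
from codecarbon import EmissionsTracker

def run_generation():
    pass  # hypothetical placeholder for the Petals inference workload

tracker = EmissionsTracker(project_name="lollms-petals-binding")  # name is illustrative
tracker.start()
run_generation()
emissions_kg = tracker.stop()  # returns estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```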

I'll dive into the details when I have the time, as I am working on the other front of lollms now: making it accessible from virtually any programming language.

The only problem is that there are only 24 hours in a day; I am a research engineer for 8 of them, and can only work on this in my free time.

@borzunov
Collaborator

@ParisNeo Sure! Just FYI, Petals can also run on macOS natively.

@ParisNeo
Author

I already managed to integrate it successfully into lollms on Windows. I still have some minor bugs, but the most difficult part is now running fine. The lollms-with-Petals installer will be available in the lollms releases section. A simple exe.
Good night
