
$200 Bounty: Petals GUI & Installer for Windows #16

Closed
makeasnek opened this issue Sep 15, 2023 · 15 comments

makeasnek commented Sep 15, 2023


Bounty amount will increase at random times and amounts until it is claimed. Subscribe to this issue to receive notifications about increases

Context: Why this bounty exists
Petals is a bleeding-edge tool for running large AI models in a distributed fashion. Previously, AI researchers and those looking to use large language models would have to pay exorbitant costs to host a server farm to train and run models. With Petals, this is done in a decentralized way which removes this barrier to research.

However, installing and hosting a Petals node still requires some command-line expertise. This bounty will fund a point-and-click GUI installer, enabling more people to contribute to and benefit from the Petals network.

Requirements to claim bounty:

  • Create a simple GUI for Petals on Windows which will: download the required packages, install them, launch a petals node, and provide useful output to the user if there are any issues so they can seek support.
  • The GUI should be coded in Python, since this is what Petals is coded in.
  • The installer should be a single downloadable exe.
  • The installer should place a Petals icon on the desktop which, when double-clicked, launches Petals.
  • The GUI should check prior to installation that the computer has a compatible graphics card installed and warn the user if one is not installed. This means checking for NVIDIA graphics card with driver version > 490.
  • After installation, the GUI shall present a window to the user displaying a few stats about the node (uptime, number of requests, or any other useful stats) with a button to turn on/off the node
  • Submit code to the Petals repo and have it approved by maintainers, following all coding standards used by the repo and making any style and other edits requested by them. In the event that your code is not accepted by maintainers but otherwise meets bounty requirements and solves the issue the bounty is for, the bounty will still be paid out. You should not submit code as a pull request; instead, create a separate repo for the Petals team to review, which they can then take over or import upon completion of the bounty.
  • If more than one GPU is detected, installer and main run window should let user pick which GPU to use.
  • GUI should let the user choose which model to run at installation, and the user should be able to modify this choice from the main window after installation. Each model should have a predefined VRAM requirement, and the GUI should warn the user if that requirement exceeds their GPU's VRAM.
  • Installer should also allow the user to enter their username to identify their node in the swarm, and should enable them to change it from the main running window after installation.
  • GUI should provide a text box for the user to submit queries to the model and see responses
  • Code should be written in a way that it will be possible to expand in the future. For example, by adding different stats to be printed or by adding new models to be run.
  • Examples of how to run Petals on a Windows system are here. Your installer should use the WSL or Docker installation methods.
  • Your code must be released under the MIT license (same license as the Petals repository)
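
The GPU-compatibility requirement above can be sketched in Python. This is a minimal illustration, not part of the bounty spec: it shells out to `nvidia-smi` (which ships with NVIDIA drivers) and compares the driver's major version against 490; the helper names are my own.

```python
import subprocess

MIN_DRIVER_MAJOR = 490  # from the bounty requirements: driver version > 490

def driver_major(nvidia_smi_output: str) -> int:
    """Extract the major driver version from the output of
    `nvidia-smi --query-gpu=driver_version --format=csv,noheader`,
    e.g. "537.13" -> 537."""
    first_line = nvidia_smi_output.strip().splitlines()[0].strip()
    return int(first_line.split(".")[0])

def has_compatible_gpu() -> bool:
    """Return True if an NVIDIA GPU with a new-enough driver is present."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=driver_version",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False  # no NVIDIA driver / no GPU detected
    return driver_major(out) > MIN_DRIVER_MAJOR
```

A GUI would call `has_compatible_gpu()` before installation and show a warning dialog when it returns False.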

Contribute to this Bounty
You can contribute to SCI's bounty program by donating cash or crypto to SCI. You will get a nice tax deduction, and we will spend those donated funds on our bounty programs.

You can also donate to this bounty specifically by sending crypto to the addresses below. Did you know that crypto is one of the most effective ways to make donations (for US donors)? Cryptocurrency donations to 501(c)(3) nonprofits are considered tax-deductible and do not trigger a taxable event, meaning you do not usually have to pay capital gains tax on them.

We request that any individual donating over $500 USD (or equivalent) provide their information along with their donation to ensure compliance with our AML and KYC policies. Any organization that wishes to make a donation to SCI is requested to reach out to us directly at contact{at}thesciencecommons.org. In the event that the awardee does not want the crypto, or the bounty is closed without being paid out, the funds will be turned over to SCI's bounty fund to be spent on future bounties.

BTC (Bitcoin): bc1qrl5ksfgw2ue3fxf6avuyuw5z3rs32hdmw4t2k6
ETH (Ethereum) and DAI: 0x60982d4f98A3a9Cb957Fe66C15149A2d91311DD9
GRC (Gridcoin): S8VgmnQnVARejcPPcG4burFeoEVRS362fk. You can see the balance of this GRC address at http://gridcoinstats.eu/address/S8VgmnQnVARejcPPcG4burFeoEVRS362fk

Bounty amount: $200 USD + Contents of above crypto addresses

Payment of USD portion will be made through PayPal or DAI (your choice) directly from the SCI upon completion of the work. You will also get the contents of the crypto addresses linked above (minus any tx fees) and the satisfaction of knowing you are helping a software and ecosystem which supports the progress of science.

Claiming bounty

  • Comment below if you want to indicate you are working on the bounty (though this is not required) or if you have any questions.
  • Comment below once you have created the code required to satisfy the bounty
  • Once the code is accepted by Petals project maintainers (or, if they reject it, SCI determines the fix is appropriately coded), you will be awarded the bounty
  • The bounty is awarded to the first person who successfully completes the requirements.
  • If the bounty amount exceeds $600 USD equivalent, you will also need to provide us with the requisite paperwork normally completed for contractors for US companies (1099-MISC).
  • Please see readme in root of repo for full information about bounty policies

About SCI
The SCI is a US 501(c)(3) non-profit organization dedicated to rebuilding the bridge of participation and trust between the public and the scientific process. We support tools and infrastructure that enable people to learn about and engage with science. Follow our work via our free newsletter on substack.

@makeasnek makeasnek removed the Draft label Sep 16, 2023
@makeasnek makeasnek changed the title from "DRAFT Bounty: Petals GUI" to "$200 Bounty: Petals GUI" Sep 16, 2023
@makeasnek makeasnek reopened this Sep 16, 2023
@makeasnek

Sep 15, 2023: Bounty is now live and open!

@makeasnek makeasnek changed the title from "$200 Bounty: Petals GUI" to "$200 Bounty: Petals GUI & Installer" Sep 16, 2023
@makeasnek makeasnek changed the title from "$200 Bounty: Petals GUI & Installer" to "$200 Bounty: Petals GUI & Installer for Windows" Sep 16, 2023
@bennmann

I encourage the Science Commons to add clear and specific hardware compatibility requirements, though I understand why they haven't.

It would not be ideal if someone "only" released a CUDA Windows binary/exe and neglected AMD and Intel.

@makeasnek

9000 GRC (approx 90 USD) has been added to the bounty today by some generous community contributors. Updated the badge


ParisNeo commented Sep 20, 2023

Hhh. And Petals has been sitting there on lollms for weeks now:
https://twitter.com/SpaceNerduino/status/1697033550413938694?t=Mg-a1zIKwFvUQWBNJBytFA&s=19

For those who don't know lollms: it's like oobabooga's text-generation UI, and can be found here:

Ok folks, I'll take the time to make sure it can be installed on Windows from lollms. Does that count? lollms can already be installed using a simple Windows installer. And it offers many things: more than 300 personalities, a playground tool with loads of presets, full control over the generation system, and access to other tools like Stable Diffusion, MusicGen, and so on.


makeasnek commented Sep 20, 2023

Thank you for your interest @ParisNeo. If I understand correctly, you have another tool (lollms) which is sort of like a meta-installer for running many different language models? If so, if you add Petals to it so that you can create a Petals node (on Windows), then yes that would qualify for the bounty. The code for the installer must be open-source and otherwise meet all aspects of the bounty requirements.

@ParisNeo

OK then, I'll try to do it tomorrow evening.

Best regards.
You can learn more about lollms in my youtube videos:

It is a multi-binding UI for text generation that provides personalities to chat with, a vector database to use documents, and a playground for experimenting with text-generation tasks, along with multiple presets for many applications (coding, translation, documenting, writing, fixing emails, etc.). It also supports image and video generation as well as music generation. All in one :)


ParisNeo commented Sep 21, 2023

Thank you very much. Actually, I only managed to make it run natively on Linux.

On Windows, there is a dependency that makes this very difficult: uvloop. This dependency explicitly refuses to install on Windows. There is active work to make it Windows-friendly, but the pull requests have not yet been accepted, and they don't seem to be fully working. So we may expect a Windows version in the coming months, but not sooner.

This means that my best shot at doing this is to use WSL.

It works like a charm with WSL, with CUDA and everything:

[screenshots: Petals node running under WSL with CUDA]
The node is visible from the https://health.petals.dev/ site. So everything is running fine.

To sum up, I've built a simple .bat file that installs an Ubuntu WSL system, installs Python and pip, then installs Petals and runs the server.

But that won't be acceptable if I understand the rules of this challenge. So I am integrating the installation directly into the lollms binding installation procedure. On Linux, I install the binding and run the node from Python with the right models. For Windows, I'll make a test and use WSL instead.
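
The .bat sequence described above (install Ubuntu under WSL, then run the Petals server inside it) could also be driven from the Python GUI the bounty asks for. A hedged sketch, assuming `wsl.exe` is on PATH, the distribution is named "Ubuntu", and Petals' standard `python -m petals.cli.run_server` entry point; the names and defaults here are illustrative:

```python
import subprocess

def wsl_server_command(distro: str, model: str) -> list:
    """Build the wsl.exe invocation that starts a Petals server
    inside the given WSL distribution (names are assumptions)."""
    return [
        "wsl.exe", "-d", distro, "--",
        "python3", "-m", "petals.cli.run_server", model,
    ]

def launch_node(distro="Ubuntu",
                model="petals-team/StableBeluga2") -> subprocess.Popen:
    """Start the node as a background process; the GUI keeps the handle
    so an on/off button can later call .terminate() on it."""
    return subprocess.Popen(wsl_server_command(distro, model))
```

Keeping the `Popen` handle around is what makes the required on/off button in the stats window straightforward.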


Now with this, when you run lollms it starts the node, but I need to code a bridge so that it is usable for text generation. I may go with a client that uses socketio to communicate with lollms.

The other solution is to literally install lollms in WSL, which would solve all bridging needs. I think I'll go with that solution; it will save me some time.

I'll make a version of lollms that runs on wsl and is using petals by default.

DONE!

lollms can now be installed with WSL support. It works! [screenshot]

Next, install Petals: [screenshot]

It automatically installs CUDA and related dependencies: [screenshot]

Now it is using Petals: [screenshot]

To finish, I created an exe installer using Inno Setup: [screenshot]

Once installed, you will have three new icons: [screenshot]

  • The lollms-with-petals icon launches lollms with Petals support.
  • The Petals server icon runs a petals-team/StableBeluga2 server, or another model that you explicitly type.
  • The Ubuntu icon opens a terminal to interact with the WSL image that is running lollms, or to code using Petals or any of the lollms library tools.

OK, now I finished making the installer. I'll try to do a full reinstall and see if it works.

You can find all the scripts to build the installer in the lollms repository:

https://github.com/ParisNeo/lollms-webui/tree/main/scripts/wsl

The installer is built using the Inno Setup tool (free to download).

Steps:

  • Download the installer (make sure your antivirus doesn't block the download; because the installer is new, some antiviruses consider its reputation too low to mark it as safe).
  • Run the installer, accept the license, and click through as with any installer.


  • After copying files, a console window will appear. If you don't have WSL, it will install it along with an Ubuntu distribution, and it will ask you for a username and password to be used for that distribution. Otherwise, it may load a terminal; just type exit to continue.
  • After that, another script is executed. This script requires sudo privileges, so make sure you type the password you created when installing the Ubuntu WSL. It will update all files, install CUDA, add it to the path, set up the environment variables, configure the whole system, install Miniconda, clone the lollms-webui repository, and install all required files.
  • When the install is finished, you will be asked if you want to run lollms; you can accept.
  • Notice that there will be three new shortcuts on the desktop, as stated before:


  • The first one is a plain Ubuntu terminal, useful for debugging and for running Petals manually.
  • The second one runs lollms to do inference with Petals or any other binding.
  • The third one runs a Petals server to give part of your PC to the community (you'll be prompted for a Hugging Face model path; if you just press Enter, it will use petals-team/StableBeluga2).

You need to run lollms to install the Petals binding. When it is loaded, it opens a browser; if it doesn't, open a browser and navigate to localhost:9600.
Go to Settings -> Bindings zoo -> petals and press Install. You can monitor the install by watching the console output.

Once ready, open the models zoo and select a model you want to use with Petals, then wait for it to load. If no model shows up, reload the localhost:9600 page and go to Settings again; the models zoo should now have models in it. [screenshot]

You can run the Petals server by double-clicking the Petals server icon on the desktop. This will make your machine part of the hive mind: [screenshot]

And after all that, in the discussion view it works like a charm. You can see here that it is using bs_petals, which is the codename for the Petals binding (I can't use the module's own name, to avoid import issues): [screenshot]

All of this is now in my lollms repository.
You can find the code for the WSL install of everything here:
https://github.com/ParisNeo/lollms-webui/tree/main/scripts/wsl

You can modify the code to adapt any aspect to your needs, then use Inno Setup to generate an installer, or even make an installer that is independent of lollms if you don't need it.

I also provide an executable installer on the lollms releases page; just select the petals version:
https://github.com/ParisNeo/lollms-webui/releases/tag/v6.5.0

The one with WSL and Petals support is lollms-with-petals.exe.

I will probably make a video explaining exactly how to install and use this tool.

I hope you like this. Tell me if you have questions or notice a bug or something.

Here is my free discord channel: https://discord.gg/vHRwSxb5

Best regards

@makeasnek

This is looking great! Using WSL or Docker is fine; this is probably preferable to installing it on Windows natively. There are a few requirements which I think are not met yet; please let me know if I am mistaken:

  • The GUI should check prior to installation that the computer has a compatible graphics card installed and warn the user if one is not installed. This means checking for NVIDIA graphics card with driver version > 490.
  • After installation, the GUI shall present a window to the user displaying a few stats about the node (uptime, number of requests, or any other useful stats) with a button to turn on/off the node
  • If more than one GPU is detected, installer and main run window should let user pick which GPU to use.
  • GUI should let the user choose which model to run at installation, and the user should be able to modify this choice from the main window after installation. Each model should have a predefined VRAM requirement, and the GUI should warn the user if that requirement exceeds their GPU's VRAM.
  • Installer should also allow the user to enter their username to identify their node in the swarm, and should enable them to change it from the main running window after installation.

If these requirements are satisfied, we will ask the Petals team to review your submission and if everything looks good we will pay out the bounty :).
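
The per-model VRAM requirement in the list above amounts to a table lookup plus a comparison. A minimal sketch; the table entries and VRAM figures here are illustrative placeholders, not official numbers:

```python
from typing import Optional

# Placeholder VRAM table in GiB -- illustrative numbers, not official figures.
MODEL_VRAM_GIB = {
    "petals-team/StableBeluga2": 20,
    "example/smaller-model": 12,  # hypothetical entry
}

def vram_warning(model: str, gpu_vram_gib: float) -> Optional[str]:
    """Return a warning message when the chosen model's predefined VRAM
    requirement exceeds the selected GPU's VRAM, else None."""
    needed = MODEL_VRAM_GIB.get(model)
    if needed is not None and needed > gpu_vram_gib:
        return (f"{model} needs about {needed} GiB of VRAM, but the "
                f"selected GPU only has {gpu_vram_gib:g} GiB.")
    return None
```

Keeping the table as data (rather than hard-coding checks) is what makes it easy to add new models later, per the extensibility requirement.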

@ParisNeo

Thank you very much.

I will address the issues this evening when I'm back home.
It would be cool if we could talk: for a long time, a few other people and I have had an idea of using distributed computing for AI, and I think Petals offers an interesting platform for my neurovoyance initiative.

If you have time this evening, we can talk on Discord: https://discord.gg/vHRwSxb5
You can DM me.

The bounty is not what interests me most here; it is the potential of the tool. But I won't say no to $200 :) That can pay for the expenses of hosting lollms services.

@ParisNeo

I think I'm done.
I have integrated it into lollms, and I have built a standalone application for the server with automatic install on Windows.

You can find the executable here:

https://github.com/ParisNeo/petals_server_installer/releases/tag/v1.0

The full code is on the same repository:

https://github.com/ParisNeo/petals_server_installer

A video on how it is installed using lollms:

https://www.youtube.com/watch?v=XwjL8ZOa7ec&t=332s

So you can pick between the independent version or the lollms-integrated version.

Have a nice weekend; got to sleep. I haven't slept for some time :)

@makeasnek

Excellent, send this to the Petals team for their review and if they sign off, we'll do a final code review and then release the bounty to you :)

@makeasnek makeasnek added and then removed the "In Progress" (Bounties which somebody is working on) label Sep 24, 2023
@makeasnek

Note for bounty hunters: We currently have a submission in for this bounty which is under review

@makeasnek

Followed up with Petals today; awaiting response.

@makeasnek

Bounty is approved, making payment today

@ParisNeo

Thanks

@makeasnek makeasnek removed the "In Progress" (Bounties which somebody is working on) label Oct 14, 2023