NIP-104 Generative AI prompt #634
vitorpamplona wants to merge 2 commits into nostr-protocol:master
Conversation
If I understood the NIP correctly, you would like to run a generative model on the client side (e.g. something like a Stable Diffusion model), with the prompts provided via the NIP, so that every user/client has a potentially unique experience. Although this may well become possible in the future as SD derivatives become more efficient, I don't think we are there yet; I wonder if any mobile phone has enough memory for it. Still, one possible way to achieve client-side usage is described in this walk-through based on Apache TVM: https://github.com/mlc-ai/web-stable-diffusion/blob/main/walkthrough.ipynb They also have a demo web page, https://mlc.ai/web-stable-diffusion/#text-to-image-generation-demo, which you can run if your browser supports WebGPU (e.g. Chrome or Chromium). I did not manage to get an output out of it; it breaks at some point. Note that your GPU needs at least 7 GB of memory to be able to load the model. I am wondering if something like the data-vending-machine NIP would be a more realistic solution for this use case.
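As a rough sketch of the data-vending-machine route: a client would publish a job-request event and let a service provider do the GPU work. The event below follows the NIP-90 style (kind 5100 is commonly used for text-to-image jobs); the tag values, relay URL, and parameters are illustrative, not taken from this PR.

```python
import json
import time

# Hypothetical NIP-90-style text-to-image job request.
# A DVM service provider watching for kind-5100 events would pick this up,
# generate the image, and publish a result event back to the relay.
job_request = {
    "kind": 5100,                      # text-to-image job request (DVM kind registry)
    "created_at": int(time.time()),
    "tags": [
        # "i" tag: the job input -- here a raw text prompt
        ["i", "a lighthouse in a thunderstorm, oil painting", "text"],
        # illustrative generation parameter
        ["param", "size", "512x512"],
        # relay where the client expects the result (placeholder URL)
        ["relays", "wss://relay.example.com"],
    ],
    "content": "",
}

print(json.dumps(job_request, indent=2))
```

The point of the sketch is that the heavy model never has to run on the phone: the client only signs and publishes a small JSON event and waits for the provider's response.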
It's not that far out. We have seen apps running with specific models already: https://github.com/EdVince/Stable-Diffusion-NCNN I think desktop applications could be the first to implement it. But yes, we can definitely start with data-vending-machine-powered responses.
Yes, if one limits it to a desktop client with hardware requirements, that would definitely work. Things are also much simpler to implement, as similar UIs already exist, for example https://github.com/Sygil-Dev/sygil-webui or https://github.com/AUTOMATIC1111/stable-diffusion-webui.
Yo! Get off my NIP number ;) |
Reserves kind:1947 for media-generating AI prompts. Read here.
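A kind:1947 prompt event would be an ordinary Nostr event, so its id is computed per NIP-01 as the SHA-256 of the canonical serialization. The sketch below assumes that; the pubkey, tags, and prompt content are placeholders, not part of this PR's spec.

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int, tags, content: str) -> str:
    """NIP-01 event id: sha256 of the canonical JSON serialization
    [0, pubkey, created_at, kind, tags, content] with no extra whitespace."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Hypothetical kind:1947 media-generating prompt event.
# The pubkey is a dummy value and the tag layout is illustrative.
pubkey = "00" * 32
created_at = 1700000000
kind = 1947
tags = [["model", "stable-diffusion"]]  # illustrative tag, not specified here
content = "a watercolor painting of an ostrich writing code at night"

event = {
    "id": event_id(pubkey, created_at, kind, tags, content),
    "pubkey": pubkey,
    "created_at": created_at,
    "kind": kind,
    "tags": tags,
    "content": content,
    # "sig" would be a Schnorr signature over the id, omitted in this sketch
}

print(json.dumps(event, indent=2))
```

A client implementing the NIP would feed `content` (and any model hints in `tags`) to whatever local or DVM-backed generator it has available.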