
Please, add local LVM model support and example [feature request] #12

Closed
bigcat88 opened this issue May 12, 2024 · 8 comments

Comments

@bigcat88
Contributor

Any model will do, even a simple one to start.

Most people in ComfyUI will be interested in a model that can determine the GENDER of a subject: boy or girl, man or woman.

This is very useful for InstantID and IP-Adapter workflows, where you want to regenerate a picture.

VLM nodes would also be very useful for upscaler workflows and for style-transfer ones.

As I am very interested in LVM nodes, I can try to create one and open a PR for this. I will have some time in the evenings during the week, so I can probably open a PR next weekend.

@heshengtao
Owner

That’s really great. I also think that supporting LVM is an indispensable feature.

@heshengtao
Owner

I’ve been a bit busy this past week, mainly spending time creating a tutorial video for this project and fixing some minor issues within it; it was only over the weekend that I had a substantial amount of time. May I ask whether you have already started writing the PR for the LVM model? If so, I won’t duplicate the development. Instead, I plan to create something that can package a ComfyUI workflow into an OpenAI-compatible interface, making it convenient for any software that can integrate with OpenAI to access user-defined workflows. If you haven’t started yet, I can begin writing one, and you can give your feedback on it later.
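As a rough illustration, "packaging a ComfyUI workflow into an OpenAI-compatible interface" could look something like the sketch below. This is not this project's implementation: the FastAPI framework, the workflow file name, the node id "6", and the ComfyUI address are all assumptions.

```python
# Minimal sketch (assumptions: FastAPI, a workflow exported in ComfyUI's
# "API format" as workflow_api.json, a text node with id "6", and the
# default ComfyUI server at 127.0.0.1:8188).
import json
import uuid

import requests
from fastapi import FastAPI

app = FastAPI()
COMFYUI_URL = "http://127.0.0.1:8188"


@app.post("/v1/chat/completions")
def chat_completions(body: dict):
    # Use the last user message as the prompt for the workflow.
    user_text = body["messages"][-1]["content"]

    # Load the exported workflow and inject the prompt into a text node;
    # "6" is a hypothetical node id taken from the exported JSON.
    with open("workflow_api.json") as f:
        workflow = json.load(f)
    workflow["6"]["inputs"]["text"] = user_text

    # Queue the workflow on the ComfyUI server via its /prompt endpoint.
    resp = requests.post(
        f"{COMFYUI_URL}/prompt",
        json={"prompt": workflow, "client_id": str(uuid.uuid4())},
    )
    prompt_id = resp.json()["prompt_id"]

    # Answer in the OpenAI chat-completion shape; a real server would
    # poll /history/{prompt_id} and return the finished outputs.
    return {
        "id": prompt_id,
        "object": "chat.completion",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant",
                        "content": f"queued workflow {prompt_id}"},
            "finish_reason": "stop",
        }],
    }
```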

@bigcat88
Contributor Author

Unfortunately, I was in exactly the same situation and didn’t have time all week, and I expect next week will be full of work too. :(
On this topic, I was only able to check the nodes from https://github.com/gokayfem/ComfyUI_VLM_nodes and the nodes for a local Ollama instance: https://github.com/stavsap/comfyui-ollama

I can say that, specifically for the situations where we use ComfyUI, the approach of running a separate local server is not suitable: control over unloading the model is lost, and ComfyUI workflows end up waiting for the machine running the Ollama server to process the request.
This also means a person either needs a second computer, or has to run Ollama on the CPU if they have only one.

The approach you described, with a universal interface that lets you outsource some of the tasks to another service, is quite an interesting and good solution, imho :)

@heshengtao
Owner

When I have time, I will start working on LVM-related tasks, beginning with the LLaVA model :)

@heshengtao
Owner

Sorry for the wait, I have adapted this model: llava-llama-3-8b-v1_1-gguf. The example workflow can be found here: start_with_LVM.json. Due to the use of llama_cpp_python code, it may not be perfectly compatible with MPS devices. You can see the adaptation code I made for different devices here. I’m not sure whether it will cause errors on macOS and MPS, and I would greatly appreciate your help. 🙂
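For reference, the basic pattern for driving a LLaVA-style GGUF model through llama_cpp_python looks roughly like this. It is a sketch, not the repo's node code: the file names are placeholders, and Llava15ChatHandler is just one of the multimodal chat handlers the library ships.

```python
# Sketch of llama_cpp_python usage for a LLaVA-style GGUF model.
# File names are placeholders; adjust n_gpu_layers per device
# (CUDA/Metal builds can offload layers, pure-CPU builds use 0).
import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler


def image_to_data_uri(path: str) -> str:
    """Inline a local image as a base64 data URI for the chat handler."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llava-llama-3-8b-v1_1-int4.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 to force CPU
)

response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": image_to_data_uri("input.png")}},
            {"type": "text",
             "text": "Is the person in this picture a man or a woman?"},
        ],
    }]
)
print(response["choices"][0]["message"]["content"])
```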

@bigcat88
Contributor Author


llama_cpp_python is working fine; there is just a problem with installing it from the default Python PyPI index (but it is available in the GitHub releases).

Surprisingly, it worked with int4 - I didn’t even have to do anything :)

Good job, really.

Can the issue be closed now?

@bigcat88
Contributor Author

Just a note: it would be good not to display the encoded image data in the "history".

A few big images of 10-25 MB will make the history completely undisplayable in the browser.
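One possible way to handle this (a rough sketch assuming OpenAI-style message dicts, not the fix the project actually shipped) is to swap the base64 payloads for a placeholder before the history is rendered:

```python
# Sketch: replace inline base64 image payloads with a short placeholder
# before the chat history is rendered, so the browser never has to show
# multi-megabyte data URIs. Assumes OpenAI-style message dicts.
def strip_image_data(messages: list[dict]) -> list[dict]:
    cleaned = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            content = [
                {"type": "image_url",
                 "image_url": {"url": "<image data omitted>"}}
                if part.get("type") == "image_url"
                and part["image_url"]["url"].startswith("data:image")
                else part
                for part in content
            ]
        cleaned.append({**msg, "content": content})
    return cleaned
```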

@heshengtao
Owner

The issue with the history has been resolved, and the download source for llama_cpp_python on macOS has been adjusted. The problem is now solved. I sincerely appreciate your assistance. Should you have any further recommendations, or notice anything else that needs work, please feel free to reach out to me. :)
