
How to pass computer resource information #40
Closed · schrummy14 opened this issue Sep 19, 2023 · 7 comments
Labels: documentation (Improvements or additions to documentation)

@schrummy14

Hello,
I am looking to see where/how CPU and/or GPU information is passed during server start, but I am unable to find it.
Thank you

@ido-pluto
Collaborator

Are you trying to configure CatAI to use only some of your CPU/GPU cores?
In general, llama.cpp is the actual engine behind the scenes; through CatAI you can configure how many CPU/GPU cores and how much memory the model uses.

You can configure CatAI like this:

  1. Open the settings tab:
     (screenshot: settings tab)

  2. Add custom settings, which you can find here: https://withcatai.github.io/node-llama-cpp/types/LlamaModelOptions.html (see the example sketch after these steps)

  3. Restart CatAI:
     (screenshot: restart option)
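
To make step 2 concrete, here is a minimal sketch (not taken from the CatAI docs) of what a custom-settings object could look like, assuming the settings editor accepts a JSON object whose keys mirror LlamaModelOptions. gpuLayers (the number of model layers offloaded to the GPU) is the relevant knob for GPU usage; verify the exact key names against the linked page, and note that the thread count may be a separate context-level option.

  {
    "gpuLayers": 32
  }

The value 32 is only illustrative: a 7B LLaMA-family model has 32 transformer layers, so this would offload all of them; lower it if you run out of VRAM.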

ido-pluto added the documentation label on Sep 20, 2023
@schrummy14
Author

Hello,

I am trying to get the model to utilize all of the CPU/GPU resources that I have available.
Looking at the utilization, Node is using 6 of 16 CPU cores and ~13% of a V100 GPU.
(screenshots: CPU and GPU utilization)

I have tried altering some of the settings, but I wasn't able to see a difference.
By chance, is it because of the model that I am using?
vicuna-7b-v1.5-q4_1

Thank you.

@ido-pluto
Collaborator

Yes, this may be related. The node-llama-cpp binding has some bugs in its configuration handling.
This binding lets you use ggmlV3 (a legacy format).

The recommended format is GGUF (node-llama-cpp v2).
If you want to utilize more VRAM, you should use a bigger model, such as:

  • llama-2-13b-chat-limarp-v2-merged-q3_k_s
  • phind-codellama-34b-q3_k_s

You can check this page for more models: https://huggingface.co/TheBloke

Most GGUF models are supported; you can install a custom model by copying the model link into catai install (see the example below):

(screenshot: installing a custom model with catai install)
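
The repository and file name below are only an illustration; copy the actual .gguf download link from the "Files" tab of the model page on Hugging Face:

  catai install https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/resolve/main/vicuna-7b-v1.5.Q4_K_S.gguf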

The more parameters a model has, the more resources it utilizes, so this 70B model can be pretty heavy: https://huggingface.co/TheBloke/ARIA-70B-V2-GGUF

If a model is split into multiple files, like https://huggingface.co/TheBloke/Falcon-180B-GGUF/tree/main,
you can install it like this:

catai install model-part-a.gguf,model-part-b.gguf

I have not tried this method yet, so if there is a problem feel free to report it :)

@schrummy14
Author

OK, I'll continue to look into it.
For the GIF in the documentation, would you be able to tell me what model was used and roughly how long the answer took to generate? What I am seeing feels slower than expected.
Thank you again.

@ido-pluto
Collaborator

That GIF was generated from a screen recording on an M1 Mac. llama.cpp has specific optimizations for Apple silicon.
If I remember right, it was vicuna-13b-v1.5-q4_1 (but an older version of the model).
I will soon update the models index to support Vicuna with GGUF (GGUF has more optimizations).

Is CUDA actually being used while the model is generating tokens?
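
A quick way to check, assuming the standard NVIDIA tooling is installed, is to watch the GPU while a prompt is generating:

  # refresh the GPU utilization and memory readings every second
  nvidia-smi -l 1

If GPU utilization stays near zero during generation, the model layers are probably not being offloaded to the GPU.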

@ido-pluto
Collaborator

I uploaded new models. On my Mac, tokens/s is faster than in the GIF; I am using vicuna-7b-16k-q4_k_s.

(attached video: Screen.Recording.2023-09-21.at.12.02.31.mov)

@schrummy14
Author

Thank you.
I'll need to go back and see what is happening on my side, as it is much slower than what you are showing.
I'll close this out.
Thank you again.
