Add integration to guidance library #11
@paolorechia Guidance actually supports loading local LLMs, and it's pretty versatile: https://github.com/microsoft/guidance/blob/main/guidance/llms/transformers/_llama.py. But loading local LLMs will surely make ToT heavier. I believe you can extend that LLM class a little by wrapping your webui API call inside it, so that your PR would work with guidance.
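To make the "wrap your webui API call" idea concrete, here is a minimal sketch of the API-call side only (the guidance subclassing itself is omitted). The endpoint path `/api/v1/generate`, the payload keys, and the `results[0]["text"]` response shape follow text-generation-webui's old blocking API and should be treated as assumptions to verify against your webui version.

```python
# Sketch: wrap the text-generation-webui HTTP API so a guidance LLM
# subclass could call it. Endpoint and payload keys are assumptions
# based on the webui's old blocking API.
import json
import urllib.request


class WebUICaller:
    def __init__(self, base_url="http://localhost:5000"):
        self.base_url = base_url

    def build_payload(self, prompt, max_new_tokens=200, temperature=0.7):
        # Keys mirror text-generation-webui's /api/v1/generate request body.
        return {
            "prompt": prompt,
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        }

    def generate(self, prompt, **kwargs):
        payload = json.dumps(self.build_payload(prompt, **kwargs)).encode()
        req = urllib.request.Request(
            f"{self.base_url}/api/v1/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        # The webui (old API) returns {"results": [{"text": ...}]}
        return body["results"][0]["text"]
```

A guidance `LLM` subclass would then call `WebUICaller.generate` wherever the OpenAI class issues its completion request.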
@yhyu13 Thanks for looking into it :) I already have a fork of guidance that uses the text-generation-webui API: https://github.com/paolorechia/local-guidance. I didn't mention it before because I was not sure you wanted an integration with text-generation-webui in the first place. I did something similar to what you mention, but based on the OpenAI class. The main problem is that my code doesn't support many features of guidance, but that's definitely a way to go. Maybe it would be better to stick to the more "official" integrations, like Hugging Face, so guidance is better supported?
Is guidance part of Hugging Face? We should catch up with those high-level wrappers, I believe @kyegomez
@yhyu13 Not part of it, but you'll get much better support from guidance if you use Hugging Face. Several features are implemented specifically for the HF API, e.g., token healing.
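For anyone unfamiliar with token healing, here is a toy illustration of the idea only (the vocabulary and function names are mine, not guidance's API): when the prompt ends mid-token, the natural continuation may be a single vocabulary token that overlaps the prompt's last token, so healing backs up that token and restricts generation to tokens that extend it.

```python
# Toy sketch of token healing, not guidance's real implementation.
# "http:" tokenizes as ["http", ":"], but the natural continuation
# "://" is one token that overlaps the trailing ":". Token healing
# drops the last token and only allows candidates that extend it.
vocab = ["http", ":", "://", "//", "/", "example.com"]

def healed_candidates(prompt_tokens):
    last = prompt_tokens[-1]
    # back up the final token, then restrict generation to vocab
    # entries that start with it (strictly extending it)
    return [t for t in vocab if t.startswith(last) and t != last]

print(healed_candidates(["http", ":"]))  # ['://']
```

This is why it is more than a bool flag: the model's token probabilities must be re-queried under a prefix constraint, which needs direct access to the tokenizer and logits, hence the tight HF integration.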
I thought token healing was just a bool flag; I had never looked inside. Will do.
Trying to implement guidance right now, can you all help? I'm making an all-new folder at extremely_experimental
Never mind, I just saw your PR.
Now we need to update the models' prompts to use guidance.
Inside /extremely_experimental/prompting, the generated prompt comes out as:

```
Given the current state of reasoning: 'Given the current state of reasoning: 'W h a t a r e n e x t g e n e r a t i o n r e a s o n i n g m e t h o d s f o r L a r g e L a n g u a g e M o d e l s', generate {5} coherent thoughts to continue the reasoning process:', evaluate its value as a float between 0 and 1, and NOTHING ELSE:
```
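One plausible cause of the spaced-out letters in that prompt (my own guess, not confirmed in this thread): the current state is a plain string, but the prompt builder joins it as if it were a list of thoughts, and iterating a Python `str` yields individual characters.

```python
# Hypothetical reproduction of the "W h a t a r e ..." artifact:
# joining a string element-wise iterates it character by character.
state = "What are next generation reasoning methods"
joined = " ".join(state)
print(joined[:15])  # 'W h a t   a r e'

# A defensive fix: wrap bare strings in a list before joining.
safe = " ".join([state] if isinstance(state, str) else state)
print(safe[:8])     # 'What are'
```

If that is the bug, normalizing the state to a list at the prompt boundary should restore readable prompts.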
|
```
tree-of-thoughts % /usr/local/bin/python3 /Users/defalt/Desktop/Athena/research/tree-of-thoughts/experiements/extremely_experimental/prompting/guidancePrompt.py
```
I got it to run with the attached change. However, it doesn't appear to solve the game of 24 (with text-davinci-003):
|
@jabowery is that using guidance?
Almost certainly. The only way I was able to get the
I have the same issue. Running the example prompted me to install guidance. After pip-installing guidance and using this as the input_problem:

```
using 1 2 6 9 us these numbers and basic arithmetic operations (+-*/) to obtain 24. The output should be an equation.
```

I get a response like:

```
solution: (['Observation: All of the numbers provided can be combined to create 24.\nThoughts: We can use trial and error to find the combination of numbers and operations that produces the desired result of 24.'], 1.0)
```

Any ideas on how (with what settings) to get the example going are highly appreciated :)
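The prompt asks for "a float between 0 and 1, and NOTHING ELSE", yet the model returns prose, and the evaluation above degenerates to a flat `1.0`. A tolerant parser helps here; this is my own sketch, not a helper from this repo: extract the first number in [0, 1] from the reply and fall back to a default when none is found.

```python
# Sketch of a tolerant value parser for evaluation replies that
# ignore the "NOTHING ELSE" instruction. Not part of the repo.
import re

def parse_value(reply, default=0.0):
    # Take the first number in [0, 1] found anywhere in the reply.
    for match in re.findall(r"\d+(?:\.\d+)?", reply):
        value = float(match)
        if 0.0 <= value <= 1.0:
            return value
    return default

print(parse_value("I'd rate this 0.7 overall."))  # 0.7
print(parse_value("No score given."))             # 0.0
```

Clamping or defaulting like this at least keeps the search loop numeric when the model rambles.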
@ALL PR #18 (guidance) is still WIP at this moment. The prompt suggested by https://github.com/microsoft/guidance that uses system, user, and assistant roles actually leads GPT-4 to spit out wrong output for evaluation. example.py will not use guidance until this problem is fixed.
text-davinci-003 is poor at this task; GPT-4 and ChatGPT will mostly speculate with actual mathematical formulas.
I have finally managed to solve this puzzle with gpt-4 and V2 in the example file, with these settings:

```python
search_algorithm = "BFS"
# cot or propose; value or vote
evaluation_strategy = "value"
```

Pretty awesome! It did cost quite a lot of tokens, though.
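For readers new to these settings, here is a toy sketch of what BFS with a value-based evaluation strategy does in a tree-of-thoughts search (this is my own illustration, not the repo's implementation): expand every frontier state, score each candidate with a value function, and keep the top-b. The generator and evaluator below are numeric stand-ins for the LLM calls.

```python
# Toy BFS tree-of-thoughts loop: generate() stands in for the LLM's
# thought generator, evaluate() for the value-style evaluator.
def bfs_tot(initial, generate, evaluate, steps=3, beam=2):
    frontier = [initial]
    for _ in range(steps):
        # expand every state in the frontier
        candidates = [t for state in frontier for t in generate(state)]
        # value strategy: score each candidate, keep the best `beam`
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam]
    return max(frontier, key=evaluate)

# Stand-in "LLM": thoughts are numbers, value = closeness to 24.
gen = lambda s: [s + 1, s + 5, s * 2]
val = lambda s: -abs(24 - s)
print(bfs_tot(1, gen, val))  # 24
```

The "vote" strategy would instead rank the candidates against each other in one prompt rather than scoring each independently, which is why it can behave quite differently on the same search tree.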
Working on the prompts now; the prompts seem to be the biggest problem!
Thanks a lot for implementing this ToT library, nice work!
I think it could benefit a lot from using Microsoft Guidance (https://github.com/microsoft/guidance), especially with smaller local models that have a hard time following the instructions exactly.
Is this something you would be interested in?
Sadly, I'm low on time this week (and the next) to help with the implementation, but I would eventually look into making this integration (if you have no interest in making it yourself).