
How to get Agent Execution running, no output from server #28

Status: Open
unoriginalscreenname opened this issue on May 11, 2023 · 13 comments
Labels: bug (Something isn't working), good first issue (Good for newcomers), help wanted (Extra attention is needed)

@unoriginalscreenname

I'm having an issue trying to run a zero-shot agent with a basic tool, via your short_instruction example.

If I use the OpenAI API as the LLM and run all the other code in the example, I get exactly what I'd expect printed to the console:

Entering new AgentExecutor chain...
I need to create a csv file and save the jokes to it.
Action: Python REPL
Action Input:
import csv

with open('catjokes.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile, delimiter=',')
    writer.writerow(['Joke 1', 'Joke 2', 'Joke 3'])
    writer.writerow(['Why did the cat go to the party?', 'Because he was feline fine!', 'Why did the cat cross the road? To get to the meowtel!'])
Observation:
Thought: I now know how to save cat jokes to a csv file.

However, if I switch to the Vicuna server and run everything again, with the only difference being that the LLM now comes from the local server, I get nothing back in the console, and my GPU gets stuck processing something, but I can't tell what's going on.

Are you able to run these examples locally? I have a feeling there's some piece of information being left out here. All of the agent-based examples I run locally through this repo exhibit the same behavior. It must be something to do with what's being passed into the model server, but I can't figure it out.

Thoughts?
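[Editor's note: for context, here's a minimal sketch of the kind of setup being described, assuming the langchain zero-shot agent API of that era; the tool description, the task string, and the VicunaLLM wrapper name are illustrative, not the repo's exact code:

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import PythonREPL

# Expose a Python REPL as the agent's only tool.
python_repl = PythonREPL()
tools = [
    Tool(
        name="Python REPL",
        func=python_repl.run,
        description="Executes Python code and returns what it prints.",
    )
]

# Works as expected against the hosted model...
llm = OpenAI(temperature=0)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("Write three cat jokes and save them to catjokes.csv")

# ...but swapping in a local Vicuna-backed LLM (hypothetical wrapper name)
# is what reportedly hangs with no console output:
# llm = VicunaLLM(host="http://127.0.0.1:8000")
]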

@paolorechia (Owner)

Hi, that's strange, as the examples work for me.

If you're executing the Vicuna server, you should be able to see its logs - is it receiving the request normally but then getting stuck during processing?

Did you try reducing the max_new_tokens parameter, to see if it's just being really slow?

Also, which model are you using - is it quantized?
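[Editor's note: to make the max_new_tokens suggestion concrete - if the server generates with Hugging Face transformers' generate() (an assumption about its internals), generation time grows roughly linearly with max_new_tokens, so a small cap helps distinguish "slow" from "stuck". A self-contained sketch with a small stand-in model:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM works for the timing experiment; a 13B Vicuna
# checkpoint would be loaded and called the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Write a cat joke.", return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids,
    max_new_tokens=64,  # a small cap bounds worst-case generation time
    do_sample=False,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
]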

@unoriginalscreenname (Author)

You can see it keeps cranking away and eating up memory.

[screenshot: GPU utilization and memory usage climbing]

@unoriginalscreenname (Author)

I also tried the coder_agent_hello_world example. After a while it does return something, but it looks like it gets stuck in a loop.

Here's what the server spits out. There's no break here - it's just one output once it finally finishes:

INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
Received prompt: You're a programmer AI....

Task:
Write a program to print 'hello world'
Execute the code to test the output
Conclude whether the output is correct
Do this step by step

Source Code:

Thought:
compute till stop
Output:

Input:

Output:

Human: What is the output of the following code?

Input:

Human: What is the output of the following code?

Input:

Human: What is the output of the following code?
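[Editor's note: the repeated "Human: What is the output of the following code?" turns are the classic signature of a completion that never hits a stop sequence - the model keeps writing both sides of the dialogue until it exhausts max_new_tokens, which would also explain the GPU grinding away with nothing returned. One hedged, server-side way to enforce stop strings, assuming the server generates with transformers (the stop strings themselves are illustrative):

from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubstring(StoppingCriteria):
    """Stop generation once any stop string appears in the decoded output."""

    def __init__(self, tokenizer, stops):
        self.tokenizer = tokenizer
        self.stops = stops

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return any(stop in text for stop in self.stops)

# Usage sketch:
# criteria = StoppingCriteriaList(
#     [StopOnSubstring(tokenizer, ["Human:", "Observation:"])]
# )
# model.generate(input_ids, stopping_criteria=criteria, max_new_tokens=512)
]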

@unoriginalscreenname (Author)

The model I'm using is TheBloke_stable-vicuna-13B-GPTQ. I feel like this should just work - I haven't altered any code at all. I'm on Linux now. I can't imagine what I'm doing wrong.

@paolorechia (Owner)

Almost all of the current examples were tested using the WizardLM 7B uploaded by TheBloke. Maybe it's worth trying that one instead - stable vicuna didn't work well for me, if I recall correctly.

@paolorechia (Owner)

However, it's strange that it's so slow - it shouldn't be. It was only this slow for me when I tried using the beams parameter or a bigger model like StarCoder.

@paolorechia (Owner)

Also, something which is not great: I've observed that quantized models perform worse in these tasks - which is why I stick with the HF format these days.
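[Editor's note: for reference, loading an unquantized HF-format checkpoint is the standard transformers path; the repo id below is TheBloke's WizardLM 7B HF upload mentioned above, and fp16 plus device_map="auto" are illustrative choices, not the repo's exact configuration:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/wizardLM-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 so a 7B model fits a consumer GPU
    device_map="auto",          # requires the accelerate package
)
]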

@unoriginalscreenname (Author)

I'll try downloading the WizardLM HF version. I think the problem is that the server isn't returning anything to me - it just gets caught in a loop. So the models don't even get the opportunity to perform; the server never returns anything. Also, I don't think it's slow, I think it's just sitting there processing. What it does return is quite large, but it's all just empty. So all of these examples work for you?

@paolorechia (Owner) commented May 11, 2023 via email

@unoriginalscreenname (Author)

Ah, so I should be able to use the Oobabooga server, eh? Let me try that.

@unoriginalscreenname (Author)

This does work better. The Oobabooga server returns the expected information, and it goes back and forth without getting stuck in a loop. That's good!

I do seem to get this error a lot, though. Have you encountered it:

raise OutputParserException(f"Could not parse LLM output: {text}")
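[Editor's note: that exception comes from langchain's agent output parser rejecting a completion that doesn't match the expected Thought/Action/Action Input format. A hedged way to keep a flaky local model from killing the whole run - the retry loop is illustrative and reuses the agent from the sketch above, not the repo's code:

from langchain.schema import OutputParserException

task = "Write three cat jokes and save them to catjokes.csv"

# Weak agent models often produce a parseable response on a retry.
result = None
for attempt in range(3):
    try:
        result = agent.run(task)
        break
    except OutputParserException as exc:
        print(f"Attempt {attempt + 1} failed to parse: {exc}")
]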

@paolorechia (Owner)

Yeah, it depends on the model / query. Models that don't perform well as agents tend to return invalid responses all the time.

@paolorechia (Owner)

Also, this seems to confirm there's an issue with the quantized-model server implementation - I should probably deprecate it next time I get to my desktop.

@paolorechia added the labels bug, help wanted, and good first issue on May 13, 2023.