llama3 model not handling requests correctly #78
Comments
Very true, and great observations too! I truly hope they fix this ASAP, since llama3 is the only one that is currently free!
I don't believe it is creating multiple agents over time. What I see happening is that it appends the chat history and provides it with each new prompt. This is part of the memory module. I could be wrong.
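For anyone unfamiliar with the pattern being described: a minimal sketch of how a memory module like this typically works, with the full accumulated history re-sent on every prompt. The names (`ChatHistory`, `buildPrompt`) are illustrative only, not mindcraft's actual API.

```javascript
// Sketch of append-and-resend chat memory: one object holds the history,
// and every new prompt re-sends everything so far.
class ChatHistory {
  constructor(systemPrompt) {
    this.turns = [{ role: 'system', content: systemPrompt }];
  }
  add(role, content) {
    this.turns.push({ role, content });
  }
  // Each request carries the full accumulated history, so the payload
  // grows over time even though only one agent exists.
  buildPrompt(userMessage) {
    this.add('user', userMessage);
    return [...this.turns];
  }
}

const history = new ChatHistory('You are a Minecraft bot.');
const prompt1 = history.buildPrompt('come here');
history.add('assistant', 'On my way!');
const prompt2 = history.buildPrompt('follow me');
console.log(prompt1.length, prompt2.length); // history grows: 2 then 4
```

This is why each request can take longer than the last without any new agent being spawned: the model is re-reading a longer and longer context.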
Yes, I also opened the ollama server console and watched the loading process. It handled a few requests, but once everything was loaded, I made an input to the bot and watched the server: instead of taking the next request, it straight up rebooted (or at least tried to reload something, so I would assume it's trying to start another server instead of using the existing one it made).
I personally don't think it's an Ollama issue, but more an issue with the code itself running it through Ollama!
If the bot is restarting, what is the output in the command line? Any errors?
Nope, no errors whatsoever. We write things in chat, it doesn't do anything, and after a little while it just restarts.
It's possible that your hardware is unable to run llama3-8b. Seeing the console logs of both mindcraft and ollama would be helpful in knowing for sure. However, as has been mentioned elsewhere, small local models do not work very well in mindcraft. I would not expect the bot to be able to perform most tasks when running locally.
I've tried using llama3 with its built-in terminal and it responds almost immediately, and I'm sure my computer is quite capable of running the model. So I'm thinking it's how the agent runs it. I might try to figure it out myself, but I'm not really familiar with js or this bot.
Llama3 works very well locally on both of my computers (an Intel Core i9 and a Mac with M3 Ultra) without any issues. However, when it comes to the Minecraft agent, it does not respond to any input. I am not quite sure this is a hardware problem.
Ok, I have some news. Somehow, Llama3 now seems to behave a bit better on my Mac. It can at least take some inputs and respond to me. The agent can now come to my position in the game, follow me, and gather the blocks I request. However, when it comes to building, it doesn't do anything. It gives me messages saying it will proceed with building something, but it doesn't follow through; it basically can't place any blocks. It also goes absolutely crazy and wants to kill everything (sheep, pigs, etc.). Here is a part of my logs in case you need them:
I'm glad it's working for you now. If you want it to be able to write code for building, you need to enable "insecure coding" in the settings.json file. However, once again, don't expect llama3-8b to do that well.
Let's hope it's possible to get it working on Windows :I
I have enabled "insecure coding", but the agent still doesn't build anything when requested. I might have misunderstood, but do I need to write the building function myself?
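To answer the question above: no hand-written building function should be needed; the flag lets the model itself write and execute JavaScript for tasks like building. For reference, the setting is just a boolean in the settings file. A minimal sketch of what that might look like, assuming the key is named something like `allow_insecure_coding` (the exact key name may differ in your version of mindcraft, so check the file's existing contents):

```json
{
  "allow_insecure_coding": true
}
```

If the flag is set and the bot still only announces plans without placing blocks, that is more likely the model failing to produce working code than a missing function on your side.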
Strange new logs recorded when asking it to check the inventory and build a house: {
It also seems that when using certain functions such as !placeHere, it gives the wrong number of inputs. Check the logs: { role: 'user', content: 'Topo1717: go for it' }, placing block...
@Luca-Girotti |
@BorbTheBird |
By my setup I'm assuming you mean my PC specs, so:
mindcraft and ollama server logs: https://pastebin.com/ndJXB3X7
[GIN] 2024/05/18 - 12:37:15 | 200 | 2m56s | 127.0.0.1 | POST "/api/chat"
Yeah, and as soon as I send another message, VRAM drops and the
time=2024-05-18T12:37:36.642+10:00 level=INFO source=amd_windows.go:90 msg="unsupported Radeon iGPU detected skipping" id=1 name="AMD Radeon(TM) Graphics" gfx=gfx1036
happens again.
Edit: I created a program to test the inputs and it seems to be working fine.
same issue |
Yeah, it would be nice if they had more documentation on using llama3. I have no idea how to use it with these bots (unless it's meant to be extremely slow).
The only thing it does is freak out when it sees a zombie.
I think I found the issue: the ollama server seems to be constantly restarting (I have no idea why). Every time a new message is sent, the server starts loading something (you can see it if you have the console window open), takes around 2 minutes to serve one request, and then IMMEDIATELY RELOADS.
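One thing worth ruling out here: Ollama unloads a model after an idle timeout and reloads it on the next request, which in the server console looks exactly like a restart, and a reload of an 8b model can easily take minutes on constrained hardware. Ollama's chat API exposes a `keep_alive` field to control this. A minimal sketch of building a request that asks Ollama to keep llama3 resident (the values and the helper name are illustrative, and this only helps if unloading is actually the cause):

```javascript
// Build an Ollama /api/chat payload that keeps the model in memory
// between requests. keep_alive: -1 means "never unload"; a string
// like "30m" would keep it loaded for 30 minutes instead.
function buildChatRequest(model, messages) {
  return {
    model,
    messages,
    stream: false,
    keep_alive: -1, // prevent the unload/reload cycle between messages
  };
}

const payload = buildChatRequest('llama3', [
  { role: 'user', content: 'hello there' },
]);

// The actual call would then be something like:
// fetch('http://localhost:11434/api/chat', {
//   method: 'POST',
//   body: JSON.stringify(payload),
// });
console.log(JSON.stringify(payload));
```

There is also an `OLLAMA_KEEP_ALIVE` environment variable on the server side, which avoids touching the bot's code at all.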
Ok, I think I've got it figured out: every time the user sends a message, it creates a new agent (instead of reusing the same one, for some reason) and sends the starting stuff again:
received message from BorbTheBird : hello there
selected examples:
zZZn98: come here
brug: Remember that your base is here.
Awaiting local response... (model: llama3)
Messages: (intents here)
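If the observation above is right, the fix would be to construct the agent once and reuse it across messages, so the startup prompt is only ever sent on the first one. A hypothetical sketch of the two patterns, using illustrative names (`Agent`, `handleChat`) rather than mindcraft's real API:

```javascript
// Contrast a per-message agent (what the logs suggest) with a single
// shared agent that is created lazily and then reused.
class Agent {
  constructor(name) {
    this.name = name;
    Agent.constructed += 1; // track how many agents were ever built
    this.history = [];
  }
  respond(message) {
    this.history.push(message);
    return `${this.name} received: ${message}`;
  }
}
Agent.constructed = 0;

// Bad pattern: a fresh agent (and a fresh startup sequence) per message.
function handleChatNaive(message) {
  const agent = new Agent('bot');
  return agent.respond(message);
}

// Better pattern: create the agent once, then reuse it for every message.
let sharedAgent = null;
function handleChat(message) {
  if (sharedAgent === null) sharedAgent = new Agent('bot');
  return sharedAgent.respond(message);
}

handleChat('hello there');
handleChat('come here');
console.log(Agent.constructed); // only one agent was ever built
```

With the shared-agent pattern, the "selected examples" / "Messages:" preamble from the logs above would only appear once per session instead of on every message.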
(yes, I just copied my comment, but I believe this issue deserves its own thread)