Provide example code for ollama provider #3
I see. I need to
Hm, I wonder how you could have
For now, the code looks more helpful than the docs :)
@ahyatt are there any docs on how to manage conversation memory?
The existing way is to use the llm-chat-prompt-interaction objects, keep a record of what was sent and returned, and just build that up. That will run out of context at some point, though. Ollama has a unique way of doing this, using a context parameter, which we don't use, since no one else does. It would be nice to have this just work by default, using that context when it exists, but I may need another method, or a series of methods. And to keep things from getting too complicated, I may need to rethink the interface. Let me ponder this for a bit; hopefully by the end of the week I can have a suggestion that I'm happy with.
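A minimal sketch of that existing way (not code from this thread; the provider setup and variable names are illustrative, and it assumes the llm-chat-prompt structs as described above):

```elisp
;; Illustrative sketch: build up conversation memory by hand, resending
;; the full interaction history on every call.  Variable names are
;; hypothetical; the struct constructors are from the llm package.
(require 'llm)
(require 'llm-ollama)

(defvar my/provider (make-llm-ollama :chat-model "zephyr"))

(defvar my/prompt
  (make-llm-chat-prompt
   :interactions (list (make-llm-chat-prompt-interaction
                        :role 'user :content "Write a quicksort in Go."))))

;; After each reply, record both sides of the exchange so the next
;; llm-chat call sees the whole history.
(let ((response (llm-chat my/provider my/prompt)))
  (setf (llm-chat-prompt-interactions my/prompt)
        (append (llm-chat-prompt-interactions my/prompt)
                (list (make-llm-chat-prompt-interaction
                       :role 'assistant :content response)
                      (make-llm-chat-prompt-interaction
                       :role 'user :content "Now make it generic.")))))
```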
OK, I think I have an idea of how to do this: the llm chat functions can mutate the interactions list in the prompt passed in, setting on it a conversation id (what Ollama calls a "context") when that's what the llm needs, or the list of messages for the other llm providers. It's a bit more complicated, but it should actually make things a bit easier for the client, and it should be backwards compatible.
I'll work on a change and point you to it when it's ready.
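Under that design, client code might look roughly like this sketch (reusing the hypothetical `my/provider` and `my/prompt` from the earlier snippet; this is an illustration of the proposal, not the final API):

```elisp
;; Sketch of the proposed design: llm-chat mutates the prompt passed in,
;; recording either the message history or an Ollama-style context, so
;; the client just appends the next user message and calls again.
(llm-chat my/provider my/prompt)        ; turn 1; my/prompt now carries state
(setf (llm-chat-prompt-interactions my/prompt)
      (append (llm-chat-prompt-interactions my/prompt)
              (list (make-llm-chat-prompt-interaction
                     :role 'user :content "implement it in go"))))
(llm-chat my/provider my/prompt)        ; turn 2 reuses the stored state
```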
Sounds very promising. I will wait.
You can look at the
Thank you. I will take a look later today.
Sorry, I need more time to experiment. I will reply later.
It doesn't work correctly with ollama. s-kostyaev/ellama@1f8854c

```elisp
(with-current-buffer ellama-buffer
  (llm-ollama--chat-request ellama-provider ellama--chat-prompt))
;; => (("model" . "zephyr") ("prompt" . "implement it in go"))
```

The last evaluation result should also contain a "context" field.
@ahyatt in Emacs 29.1,

```elisp
(with-current-buffer ellama-buffer
  (type-of (car (llm-chat-prompt-interactions ellama--chat-prompt))))
```

returns
You probably mean something like this: https://github.com/ahyatt/llm/pull/5/files @ahyatt
Also, does your solution work with buffer-local variables? During testing I see very strange behaviour. In commit s-kostyaev/ellama@5e7e618 I see in
Looks like the value of
Last test was on my version of
If the original buffer has been killed, use a temporary buffer. This will fix one of the issues noticed in #3.
Thank you for testing this! So yes, the context was indeed getting messed up; thank you for the pull request. I've checked in my own, different fix for this. About your issue with buffer-local variables, I agree there's a problem: the callbacks do not run with the buffer set to the original buffer you used. It would be nice if they did, but it can't be guaranteed, since the original buffer might have been killed. I submitted a fix to the conversation-fix branch. Please take a look and let me know if this fixes your issues, and whether you have any remaining ones.
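Not the actual patch, but the guard described here might look something like this sketch (the function name is hypothetical):

```elisp
;; Hypothetical sketch of the described fix: run CALLBACK in the
;; original buffer when it is still live, otherwise in a temporary
;; buffer so a killed buffer never breaks the callback.
(defun my/callback-in-buffer (buf callback &rest args)
  "Apply CALLBACK to ARGS in BUF, or in a temp buffer if BUF is dead."
  (if (buffer-live-p buf)
      (with-current-buffer buf
        (apply callback args))
    (with-temp-buffer
      (apply callback args))))
```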
For now branch
I will wait for your release with the code from the conversation-fix branch before I can release ellama with llm as a backend. The transition code is ready; only documentation is left.
Thank you, I’ll aim to release this week.
I've pushed out the code to main. The release on GNU ELPA should happen tomorrow. Be sure to depend on the 0.5.0 version of llm.
Move to
Hi.
Please provide example code for working with the ollama provider. This simple code:

leads to this error:

I would like to see example code that I could start from.
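A minimal starting point might look like the sketch below (the model name is illustrative, and it assumes the llm and llm-ollama packages are installed and a local ollama server is running):

```elisp
;; A hedged sketch of a minimal ollama setup with the llm package.
;; "zephyr" is the model used elsewhere in this thread; substitute any
;; model you have pulled locally.
(require 'llm)
(require 'llm-ollama)

(defvar my/ollama (make-llm-ollama :chat-model "zephyr"))

;; Synchronous one-shot chat; returns the model's reply as a string.
(llm-chat my/ollama (llm-make-simple-chat-prompt "Why is the sky blue?"))
```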