
Provide example code for ollama provider #3

Closed
s-kostyaev opened this issue Oct 16, 2023 · 22 comments

@s-kostyaev
Contributor

Hi.
Please provide example code for working with the ollama provider. This simple code:

(llm-chat
 (make-llm-ollama :chat-model "zephyr" :port 11434 :embedding-model "zephyr")
 (llm-make-simple-chat-prompt "Who are you?"))

leads to this error:

Debugger entered--Lisp error: (not-implemented)
  signal(not-implemented nil)
  #f(compiled-function (provider prompt) #<bytecode 0x1158229f2628ca4>)(#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  apply(#f(compiled-function (provider prompt) #<bytecode 0x1158229f2628ca4>) (#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil)))
  #f(compiled-function (&rest args) #<bytecode 0x6c97265ce378b94>)(#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  apply(#f(compiled-function (&rest args) #<bytecode 0x6c97265ce378b94>) #s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  llm-chat(#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  (progn (llm-chat (record 'llm-ollama 11434 "zephyr" "zephyr") (llm-make-simple-chat-prompt "Who are you?")))
  elisp--eval-last-sexp(nil)
  eval-last-sexp(nil)
  funcall-interactively(eval-last-sexp nil)
  command-execute(eval-last-sexp)

I would like to see example code that I could start with.

@s-kostyaev
Contributor Author

I see. I need to (require 'llm-ollama) first
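With that, a minimal working example (assuming ollama is serving the zephyr model locally on the default port) is:

(require 'llm-ollama)

(llm-chat
 (make-llm-ollama :chat-model "zephyr" :port 11434 :embedding-model "zephyr")
 (llm-make-simple-chat-prompt "Who are you?"))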

@ahyatt
Owner

ahyatt commented Oct 16, 2023

Hm, I wonder how you could have had make-llm-ollama defined but not the rest of the file. At any rate, there might be things I can do to make this clearer in the docs, so I'll take a pass over the docs soon to see what can be further clarified.

@s-kostyaev
Contributor Author

For now, the code looks more helpful than the docs :)

@s-kostyaev
Contributor Author

@ahyatt are there any docs on how to manage conversation memory?

@ahyatt
Owner

ahyatt commented Oct 17, 2023 via email

@ahyatt
Owner

ahyatt commented Oct 18, 2023 via email

@s-kostyaev
Contributor Author

Sounds very promising. I will wait.

@ahyatt
Owner

ahyatt commented Oct 19, 2023

You can look at the conversation-fix branch - please let me know if it works for you. It was a bit hard to tell whether ollama was working, since it didn't respond to my test messages with the same accuracy as the other providers.
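Roughly, the intended usage is to keep a single prompt object for the whole conversation and keep passing it to llm-chat, which records each exchange in it. A sketch (variable names here are just illustrative, and details may still change on the branch):

(require 'llm-ollama)

;; One provider and one prompt per conversation.
(defvar my-provider
  (make-llm-ollama :chat-model "zephyr" :port 11434 :embedding-model "zephyr"))

(defvar my-prompt
  (llm-make-simple-chat-prompt "Explain me Sieve of Eratosthenes"))

;; First turn; the response is recorded in my-prompt as well.
(llm-chat my-provider my-prompt)

;; For the next turn, add the new user message to the same prompt and
;; call llm-chat again - carrying that history across turns is what the
;; conversation-fix branch is about.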

@s-kostyaev
Contributor Author

Thank you. I will take a look later today.

@s-kostyaev
Contributor Author

Sorry, I need more time to experiment. I will reply later.

@s-kostyaev
Contributor Author

s-kostyaev commented Oct 23, 2023

It doesn't work correctly with ollama. s-kostyaev/ellama@1f8854c

ellama--chat-prompt is a variable defined in ‘~/elisp/ellama/ellama.el’.

Its value is
#s(llm-chat-prompt :context nil :examples nil :interactions
		   (523 28766 6574 28766 28767 13 13 700 28713 28767 13 28789 28766 1838 28766 28767 13 966 19457 528 7075 333 302 2852 270 504 540 274 13 700 28713 28767 13 28789 28766 489 11143 28766 28767 13 1014 7075 333 302 2852 270 504 540 274 349 396 9467 9464 354 7484 544 8139 5551 582 298 264 2078 1474 28723 415 2038 3791 486 6854 6308 1716 288 390 24117 325 28710 28723 28706 2063 459 8139 28731 272 6079 2815 302 1430 8139 28725 5615 395 272 6079 2815 302 28705 28750 28723 851 8049 272 521 3325 286 5551 390 12179 354 724 1198 28725 304 272 9464 15892 739 272 3030 3607 349 5048 442 544 5551 582 298 369 3607 506 750 9681 28723 13 13 1551 4808 456 9464 28725 907 28725 2231 264 1274 302 5551 477 28705 28750 582 298 272 2078 3607 28723 2479 28725 7870 1059 272 5551 477 28705 28750 582 298 272 7930 5557 302 272 3607 325 20475 264 1474 6084 821 272 7930 5557 302 307 3573 347 264 6999 302 707 7000 1474 609 1263 1430 8139 1419 28725 1716 390 24117 544 6079 2815 302 369 8139 1413 1698 1274 28723 415 8409 521 3325 286 5551 297 272 3493 1274 460 868 272 724 1198 368 5695 28723 13 13 15423 349 396 2757 21366 2696 298 4808 456 9464 28747 13 13 13940 28832 17667 13 1270 4748 333 28730 1009 28730 263 270 504 540 274 28732 28711 1329 13 2287 422 5670 264 3695 1274 345 783 1198 28792 28734 568 28711 28793 28739 304 15267 13 2287 422 544 11507 390 1132 28723 330 1192 297 724 1198 28792 28710 28793 622 13 2287 422 347 1132 513 613 349 459 10727 28725 1112 1341 28723 13 2287 724 1198 327 733 4365 354 583 297 2819 28732 28711 28806 28740 4753 13 2287 284 327 28705 28750 13 2287 1312 325 28720 398 284 5042 307 1329 13 5390 422 1047 724 1198 28792 28720 28793 349 459 4648 28725 868 378 349 264 8139 13 5390 513 325 783 1198 28792 28720 28793 859 6110 1329 13 17422 422 8980 544 6079 2815 302 284 13 17422 354 613 297 2819 28732 28720 28736 28750 28725 307 28806 28740 28725 284 1329 13 1417 28705 724 1198 28792 28710 28793 327 8250 13 5390 284 2679 28705 28740 13 2287 422 724 1198 28792 28734 28793 304 724 1198 28792 28740 28793 460 1743 808 298 1132 486 2369 13 2287 422 13908 706 477 272 1274 1854 478 460 6348 865 13 2287 422 297 8139 5551 6517 821 28705 28740 13 2287 724 1198 28730 26816 327 733 28710 354 613 28725 284 297 481 11892 28732 783 1198 28731 513 284 304 613 876 28705 28740 28793 13 2287 604 724 1198 28730 26816 13 13940 28832 13 13 15423 349 264 26987 302 910 378 3791 28747 13 13 28740 28723 1552 28713 18388 28730 1009 28730 263 270 504 540 274 28832 4347 264 3607 1552 28711 28832 390 2787 28723 13 28750 28723 330 3695 1274 1552 783 1198 28832 349 3859 395 1669 1552 28711 28806 28740 9429 1682 11507 460 12735 808 298 1132 325 28710 28723 28706 2063 544 5551 477 28705 28750 582 298 307 460 4525 8139 1996 12598 5860 609 13 28770 28723 816 1149 6854 1077 754 724 1198 325 28710 28723 28706 2063 5551 6517 821 28705 28740 28731 5615 477 28705 28750 28723 13 28781 28723 1263 1430 8139 1419 28725 478 1716 390 24117 325 28710 28723 28706 2063 459 8139 28731 544 6079 2815 302 369 8139 297 272 1552 783 1198 28832 1274 28723 13 28782 28723 2530 6854 1077 1059 544 724 1198 582 298 272 7930 5557 302 307 28725 478 604 264 1274 8707 865 1395 5551 297 272 3493 1274 325 28832 783 1198 28792 28740 568 28711 28793 25920 690 654 1484 10727 390 24117 28723 #s(llm-chat-prompt-interaction :role user :content "implement it in go"))
		   :temperature nil :max-tokens nil)
Local in buffer *ellama*; global value is nil

Not documented as a variable.

  Automatically becomes buffer-local when set.
(with-current-buffer ellama-buffer
  (llm-ollama--chat-request ellama-provider ellama--chat-prompt))

=>  (("model" . "zephyr") ("prompt" . "implement it in go"))

The last evaluation result should also contain a "context" field.
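I would expect the request to look roughly like this instead, with the context array that ollama returned for the previous turn passed back in (a sketch; the actual value is a vector of integers):

(("model" . "zephyr")
 ("context" . [...])  ; context returned by ollama for the previous response
 ("prompt" . "implement it in go"))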

@s-kostyaev
Contributor Author

@ahyatt In Emacs 29.1,

(with-current-buffer ellama-buffer
  (type-of (car (llm-chat-prompt-interactions ellama--chat-prompt))))

returns integer

@s-kostyaev
Contributor Author

You probably mean something like this: https://github.com/ahyatt/llm/pull/5/files @ahyatt

@s-kostyaev
Contributor Author

Also, does your solution work with buffer-local variables? During testing I see some very strange behaviour. With commit s-kostyaev/ellama@5e7e618 I see this in *Messages*:

buffer: #<buffer ellama.el>
local: nil
local if set: t
buffer: #<buffer *ellama*>
local: nil
local if set: t
prompt before: nil
prompt: #s(llm-chat-prompt nil nil (#s(llm-chat-prompt-interaction user "Explain me Sieve of Eratosthenes")) nil nil)
buffer: #<buffer *Messages*>
local: nil
local if set: t
buffer: #<buffer *ellama*>
local: nil
local if set: t
prompt before: nil
prompt: #s(llm-chat-prompt nil nil (#s(llm-chat-prompt-interaction user "implement it in go")) nil nil)

@s-kostyaev
Contributor Author

It looks like the value of ellama--chat-prompt was set to nil inside your library code. Or maybe I don't understand what's going on here.

@s-kostyaev
Contributor Author

The last test was with my version of llm from here: https://github.com/ahyatt/llm/pull/5/files

ahyatt added a commit that referenced this issue Oct 24, 2023
If the original buffer has been killed, use a temporary buffer.

This will fix one of the issues noticed in #3.
@ahyatt
Owner

ahyatt commented Oct 24, 2023

Thank you for testing this! Yes, the context was indeed getting messed up, so thanks for the pull request. I've checked in my own, different fix for this.

About your issue with buffer-local variables, I agree there's a problem: the callbacks do not have the buffer set to the original buffer you used. It'd be nice if they did, but that can't be guaranteed, since the original buffer might have been killed. I submitted a fix to the conversation-fix branch so that the callback should run in your original buffer, but if it has been killed, it uses a temporary buffer.
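The shape of the fix is roughly this (an illustrative sketch, not the actual library code; my-wrap-callback is just a stand-in name):

(defun my-wrap-callback (callback)
  "Return a function that calls CALLBACK in the buffer current at wrap time.
If that buffer has been killed by the time the result arrives, fall
back to a temporary buffer."
  (let ((buf (current-buffer)))
    (lambda (response)
      (if (buffer-live-p buf)
          (with-current-buffer buf
            (funcall callback response))
        (with-temp-buffer
          (funcall callback response))))))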

Please take a look and let me know if this fixes your issues, and if you have remaining issues.

@s-kostyaev
Contributor Author

s-kostyaev commented Oct 24, 2023

For now, the conversation-fix branch works for me. The issue was that I called markdown-mode every time ellama-chat was called, and this cleared the buffer-local variables. Fixed on my side - I now call markdown-mode only once, after creating the buffer.
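The fix on my side has basically this shape (illustrative, not the exact ellama code):

;; Enable markdown-mode only when the chat buffer is first created, so
;; repeated ellama-chat calls don't reinitialize the major mode and
;; wipe buffer-local variables such as ellama--chat-prompt.
(with-current-buffer (get-buffer-create ellama-buffer)
  (unless (derived-mode-p 'markdown-mode)
    (markdown-mode)))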

@s-kostyaev
Contributor Author

I will wait for your release with the code from the conversation-fix branch before I can release ellama with llm as a backend. The transition code is ready; only the documentation is left.

@ahyatt
Owner

ahyatt commented Oct 24, 2023 via email

@ahyatt
Owner

ahyatt commented Oct 26, 2023

I've pushed out the code to main. The release on GNU ELPA should happen tomorrow. Be sure to depend on the 0.5.0 version of llm specifically in your package-requires.
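Concretely, that means a line like this in the ellama.el header, alongside whatever else ellama already depends on:

;; Package-Requires: ((llm "0.5.0"))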

@s-kostyaev
Contributor Author

The move to llm as a backend for ellama is done.
