
feat: Add adaptors for various backends (ollama, tgi, api-inference) #40

Merged: 14 commits into huggingface:main, Jan 2, 2024

Conversation

@noahbald (Contributor) commented Oct 28, 2023

  • Allow users to specify the key the input will be sent to in the request to the llm (defaults to "input")
  • Allow users to add arbitrary data to be sent in the request to the llm

This should resolve issue #17 by allowing users to remap "input" to "prompt" and to add { model: "model:nb-code" }.

This will require further changes on clients as well (such as llm.nvim) - happy to contribute here as well
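
As a rough illustration of the idea (hypothetical names, not the code in this PR): the prompt is written under a configurable key, defaulting to "input", and any extra client-supplied data is merged into the request body.

use serde_json::{Map, Value};

// Sketch only: `build_request_body` and its signature are illustrative.
fn build_request_body(prompt: &str, input_key: Option<&str>, extra: Option<Map<String, Value>>) -> Value {
    // Default to "input" when the client does not remap the key.
    let key = input_key.unwrap_or("input").to_owned();
    let mut body = Map::new();
    body.insert(key, Value::String(prompt.to_owned()));
    // Merge arbitrary client-provided data, e.g. { "model": "model:nb-code" }.
    if let Some(extra) = extra {
        body.extend(extra);
    }
    Value::Object(body)
}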

closes #17, closes #28

@McPatate (Member) left a comment

Thank you for your contribution.

Did you consider that the server's response could also be different?

I'm not sure how flexible this will be in the end. My plan is to add support for different backends in code and to differentiate them with an identifier in the params object.

What API were you aiming at supporting with this PR?

@noahbald (Contributor, Author)

Hey McPatate, my limited understanding is that a server usually responds with a text stream. If that's not the case, I can see how it would need to be addressed, maybe in a separate PR.

I feel that adding support through an identifier might be heftier to maintain; if the project ends up having logic to handle dozens of possible backends, I could see it becoming bloated.
Perhaps supporting a plugin or middleware approach would make more sense, and let the community create what they need for their backend when they need it.

I'm still playing around with this, but I'm aiming to support usage of Ollama through llm.nvim

@McPatate (Member)

Ollama's response is not currently supported by llm-ls; it needs to be handled, otherwise it won't work.

I wonder if we'll ever get to dozens of different backends. I believe at some point we'll stabilise on an API and people will implement "the standard" rather than re-invent a new thing.
But yes, at first we'll need to implement a few of them before that crystallises. I'm not sure the plugin route will solve anything, and it might be too much work compared to just adding different backends.

@noahbald (Contributor, Author)

@McPatate I haven't had a proper test yet; I'd like to get your thoughts on the implementation of the adaptors and whether you think they can be improved.

Users will now be able to specify an adaptor in their configuration, which will handle the transformation of requests and responses to be compatible with the given backend.
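
To make the discussion concrete, here is a rough sketch of the shape being described (the trait, field names, and backend request format are hypothetical, not the actual code in this PR): each adaptor maps the generic request and response to the backend's wire format, and the adaptor is picked from the name in the client's configuration.

use serde_json::{json, Value};

// Illustrative trait; the real adaptors in this PR may differ.
trait Adaptor {
    /// Build the backend-specific request body from the prompt and params.
    fn build_body(&self, prompt: &str, params: &Value) -> Value;
    /// Extract the generated text from the backend-specific response.
    fn parse_response(&self, response: &Value) -> Result<String, String>;
}

struct Ollama;

impl Adaptor for Ollama {
    fn build_body(&self, prompt: &str, params: &Value) -> Value {
        // Field names here are assumptions about an Ollama-style generate request.
        json!({ "prompt": prompt, "model": params["model"].clone() })
    }
    fn parse_response(&self, response: &Value) -> Result<String, String> {
        response["response"]
            .as_str()
            .map(str::to_owned)
            .ok_or_else(|| "unexpected response shape".to_owned())
    }
}

// Dispatch on the adaptor name supplied by the client.
fn get_adaptor(name: &str) -> Option<Box<dyn Adaptor>> {
    match name {
        "ollama" => Some(Box::new(Ollama)),
        _ => None,
    }
}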

@McPatate (Member) left a comment

did a quick first pass

Review comments on crates/llm-ls/src/adaptors.rs and crates/llm-ls/src/main.rs (resolved)
Comment on lines 96 to 97
Ok(OllamaAPIResponse::Error(err)) => return Err(internal_error(err)),
Err(err) => return Err(internal_error(err)),
@McPatate (Member)

Suggested change:

- Ok(OllamaAPIResponse::Error(err)) => return Err(internal_error(err)),
- Err(err) => return Err(internal_error(err)),
+ Ok(OllamaAPIResponse::Error(err)) | Err(err) => return Err(internal_error(err)),

does this work?

@noahbald (Contributor, Author)

I think not, since Ok(OllamaAPIResponse::Error(err)) and Err(err) have conflicting types
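
For anyone reading along, a standalone sketch of why the or-pattern is rejected (the types here are stand-ins, not the actual llm-ls types): in an or-pattern every alternative must bind err with the same type, and here one arm binds the API's error payload while the other binds the transport error.

// Illustrative types only; ApiError and TransportError stand in for the real ones.
struct ApiError(String);
struct TransportError(String);

// Assumed helper: the real internal_error may have a different signature.
fn internal_error(msg: String) -> String {
    format!("internal error: {msg}")
}

enum ApiResponse {
    Completion(String),
    Error(ApiError),
}

fn handle(result: Result<ApiResponse, TransportError>) -> Result<String, String> {
    match result {
        Ok(ApiResponse::Completion(text)) => Ok(text),
        // `Ok(ApiResponse::Error(err)) | Err(err)` would not compile: `err`
        // would need to be ApiError in one alternative and TransportError in
        // the other, so the two arms stay separate.
        Ok(ApiResponse::Error(err)) => Err(internal_error(err.0)),
        Err(err) => Err(internal_error(err.0)),
    }
}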

@noahbald noahbald changed the title feat: allow custom input key and arbitrary data from client feat: Add adaptors for various backends (ollama, tgi, api-inference) Nov 24, 2023
@noahbald (Contributor, Author)

@McPatate do you think it's worth me re-implementing #42 as an adaptor?

@McPatate (Member)

> @McPatate do you think it's worth me re-implementing #42 as an adaptor?

Yes, good idea.

Further review comments on crates/llm-ls/src/adaptors.rs and crates/llm-ls/src/main.rs (resolved)
@noahbald (Contributor, Author)

Thanks for the feedback @McPatate, it's been really helpful - already looking a lot better from what you've suggested.
Keen to know if there's anything else you think we need to get this through

@McPatate (Member)

> Thanks for the feedback @McPatate, it's been really helpful

Thank you for your contribution!

> Keen to know if there's anything else you think we need to get this through

How have you tested your PR? If the CI passes and your testing is conclusive then we should be able to merge.

We will need to update all clients to match the new API as well; is this something you'd be willing to do?

@noahbald (Contributor, Author) commented Dec 3, 2023

> How have you tested your PR? If the CI passes and your testing is conclusive then we should be able to merge.
>
> We will need to update all clients to match the new API as well; is this something you'd be willing to do?

I've been testing by hacking around in llm.nvim. I'll need to retest though as I haven't tested this in a good while.
I don't think I have time to update all the clients, but I can make a PR for what I've done in llm.nvim when ready

@noahbald (Contributor, Author) commented Dec 3, 2023

I'm trying to test this but running into an issue where it's complaining about the case of the request parameters given.
[screenshot of the error]

I'm using https://github.com/noahbald/llm.nvim as my client here.

Is this something to do with #[serde(rename_all = "camelCase")]?

@McPatate (Member) commented Dec 4, 2023

> Is this something to do with #[serde(rename_all = "camelCase")]?

Yes, I've recently changed the API to make the case consistent across parameters. I haven't released a version of llm-ls yet since it's a breaking change. I was waiting for a few other things before creating a new release, like your PR and what I'm currently working on.
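
For reference, this is roughly what that attribute does (a standalone example, not the actual llm-ls structs): the Rust fields stay snake_case while serde expects camelCase keys on the wire.

use serde::Deserialize;

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct CompletionParams {
    // Deserialized from "tokensToClear" and "contextWindow".
    tokens_to_clear: Vec<String>,
    context_window: usize,
}

fn main() {
    let json = r#"{ "tokensToClear": ["<EOT>"], "contextWindow": 4096 }"#;
    let params: CompletionParams = serde_json::from_str(json).unwrap();
    assert_eq!(params.tokens_to_clear, vec!["<EOT>".to_string()]);
    assert_eq!(params.context_window, 4096);
}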

@noahbald (Contributor, Author) commented Dec 5, 2023

[screenshot]
I've got it working using my fork of llm.nvim - I'll make a PR for that!

You may want to test with openai and huggingface as well. I don't have accounts with either so I haven't tested there

@manish-baghel commented Dec 9, 2023

@noahbald @McPatate
Thanks for the amazing work.

I managed to test it locally.
Models:

  1. Open Hermes Mistral 2.5-7B merged with Intel neural chat v3.3
  2. Open Hermes Mistral 2.5-7B by Teknium

[screenshot]

However, I had to copy-paste all snake_case configs to camelCase:

  local params = lsp.util.make_position_params()
  params.model = utils.get_model()
  params.tokens_to_clear = config.get().tokens_to_clear
  params.tokensToClear = config.get().tokens_to_clear
  params.api_token = config.get().api_token
  params.apiTokens = config.get().tokens_to_clear
  params.request_params = config.get().query_params
  params.request_params.do_sample = config.get().query_params.temperature > 0
  params.requestParams = config.get().query_params
  params.requestParams.doSample = config.get().query_params.temperature > 0
  params.fim = config.get().fim
  params.tokenizer_config = config.get().tokenizer
  params.tokenizerConfig = config.get().tokenizer
  params.context_window = config.get().context_window
  params.contextWindow = config.get().context_window
  params.tls_skip_verify_insecure = config.get().tls_skip_verify_insecure
  params.tlsSkipVerifyInsecure = config.get().tls_skip_verify_insecure
  params.adaptor = config.get().adaptor
  params.request_body = config.get().request_body
  params.requestBody = config.get().request_body
  params.ide = "neovim"

Hopefully, the changes will get merged to main branch soon. Really excited for this one 🤠

@noahbald (Contributor, Author)

> However, I had to copy-paste all snake_case configs to camelCase

Thanks for testing. The casing changes are expected at the moment; McPatate is working on that separately:

> Yes, I've recently changed the API to make the case consistent across parameters. I haven't released a version of llm-ls yet since it's a breaking change. I was waiting for a few other things before creating a new release, like your PR and what I'm currently working on.

@McPatate McPatate mentioned this pull request Dec 15, 2023
@McPatate (Member)

Since I can't run the CI without risking leaking the tokens to the world, I ran it locally and things seem to be broken:

cargo +nightly run --bin testbed -r -- --api-token $API_TOKEN -r `pwd`/crates/testbed/repositories-ci.yaml -f simple

Produces:

Repository name | Source type | Average hole completion time (s) | Pass percentage
simple          | local       | 0.14864999                       | 0%
Total           | --          | 0.14864999                       | 0%

When it should be 100%

@noahbald (Contributor, Author)

@McPatate I'm not sure how the tests are set up here. I tried running your command but I'm getting an IO error.
I set up a Hugging Face account to play around with it in the editor and it seems to work :\

Do you have any advice on how I can approach the failing tests, or would you rather look into it on your end?

@McPatate (Member) commented Dec 22, 2023

I'd suggest using a debugger or just adding logs in the testbed code and trying to see where the IO error is coming from.
You have to be at the root of the folder to run the command and your API_TOKEN should be your HF token.

If you're really struggling I'll take a look. I haven't completely documented testbed and there are some subtleties that may not be very user friendly at times, sorry about that.

@noahbald (Contributor, Author)

[screenshot]
I've done a sync with main and unfortunately I can't seem to run testbed at all in my environment (WSL).

@McPatate (Member) commented Dec 25, 2023

Try setting LOG_LEVEL=debug when running testbed to see if you get more information. You can set the log level per crate as well; see how the RUST_LOG format works, it's the same for the LOG_LEVEL var.

Also, you'll need to run testbed from the root of the directory, so:

- -r ../../crates/testbed/repositories-ci.yaml
+ -r crates/testbed/repositories-ci.yaml

With pwd returning /path/to/llm-ls
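
Putting that together, a hypothetical invocation would look something like the following (combining the flags from the earlier command with an assumed per-crate log level):

LOG_LEVEL=testbed=debug cargo +nightly run --bin testbed -r -- --api-token $API_TOKEN -r crates/testbed/repositories-ci.yaml -f simple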

@noahbald (Contributor, Author)

Thanks Luc, I hadn't run a release build so llm-ls was missing from target/release.
I've run the test and it's passing for me. Maybe I'm missing something or maybe it was fixed with a recent commit.

2023-12-31T03:53:29.708931Z  INFO testbed: 726: simple from local obtained 100.00% in 3.544s
2023-12-31T03:53:29.709274Z  INFO testbed: 747: all tests were run, exiting

@McPatate McPatate merged commit 585ea3a into huggingface:main Jan 2, 2024
@McPatate (Member) commented Jan 2, 2024

Thank you very much for the hard work on this PR @noahbald, especially while putting up with my incessant pestering 😉


Successfully merging this pull request may close these issues.

  • feat: add support for llama.cpp
  • feat: add support for ollama