Add OpenAI (LLM) procedures #3575
Conversation
Want to change the procedure result columns for completion:
Not sure how to handle: choices=[{finish_reason="stop", index=0, text="Blue", logprobs=null}] (not sure what logprobs is); the same applies to chat completion.
I would leave it as is, as we currently don't have a clear view of how the developer will use it; @tomasonjo wdyt?
I would also leave it as is, since we return the full responses and allow users to do whatever they want with them. I am guessing that in 98% of cases the developer usage will be extracting the generated text.
Ok then we can release it as is.
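To make the "98%" guess concrete, extracting the answer from the returned choices might look like the sketch below. The helper name and the plain-Map representation of the response are assumptions for illustration, not part of the actual procedure code (logprobs is omitted here only because Map.of rejects null values):

```java
import java.util.List;
import java.util.Map;

public class CompletionResultExample {
    // Hypothetical helper: given the raw "choices" list returned by the
    // completion procedure, pull out the generated text of the first choice.
    static String firstText(List<Map<String, Object>> choices) {
        return (String) choices.get(0).get("text");
    }

    public static void main(String[] args) {
        // Shape mirrors the response discussed above:
        // choices=[{finish_reason="stop", index=0, text="Blue", logprobs=null}]
        List<Map<String, Object>> choices = List.of(
            Map.<String, Object>of("finish_reason", "stop", "index", 0, "text", "Blue")
        );
        System.out.println(firstText(choices)); // prints "Blue"
    }
}
```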
@jexp @tomasonjo thank you so much, I just have one question
"model": "text-embedding-ada-002", | ||
"usage": { "prompt_tokens": 8, "total_tokens": 8 } } | ||
*/ | ||
Stream<Object> resultStream = executeRequest(apiKey, configuration, "embeddings", "text-embedding-ada-002", "input", texts, "$.data"); |
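For reference, a call like the one above presumably sends a small JSON body to the embeddings endpoint. The sketch below builds that payload; the class and method are hypothetical (based on the public OpenAI REST API's {"model": ..., "input": [...]} shape, not taken from executeRequest):

```java
import java.util.List;
import java.util.Map;

public class EmbeddingsPayloadSketch {
    // Hypothetical: assemble the JSON-ready payload for the embeddings
    // endpoint, mirroring the "input", texts arguments passed to executeRequest.
    static Map<String, Object> payload(String model, List<String> texts) {
        return Map.<String, Object>of("model", model, "input", texts);
    }

    public static void main(String[] args) {
        Map<String, Object> body = payload("text-embedding-ada-002", List.of("Some Text"));
        System.out.println(body.get("model")); // prints "text-embedding-ada-002"
    }
}
```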
@jexp @tomasonjo does it make sense to make this and the other models configurable via the configuration map?
Not sure I understand exactly what you mean, but we could add a model parameter under the configuration map. I would leave the prompts as a separate parameter.
The model is important, so it cannot be omitted. For the user it's already in the configuration, but internally I think we should make it explicit.
Yeah, I mean something like this:

Before:
Stream<Object> resultStream = executeRequest(apiKey, configuration, "embeddings", "text-embedding-ada-002", "input", texts, "$.data");

After:
String model = configuration.getOrDefault("model", "text-embedding-ada-002");
Stream<Object> resultStream = executeRequest(apiKey, configuration, "embeddings", model, "input", texts, "$.data");
Not sure we need to do this on every call, as it's already done inside executeRequest.
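For completeness, the defaulting described here (the model resolved from the configuration map, falling back to a per-procedure default) can be sketched as follows. The standalone class and method are hypothetical, extracted only to show the getOrDefault pattern, and "my-custom-model" is a made-up override value:

```java
import java.util.Map;

public class ExecuteRequestSketch {
    // Hypothetical extraction of the resolution step: the configuration map
    // may override the model, otherwise the caller's default is used.
    static String resolveModel(Map<String, Object> configuration, String defaultModel) {
        return (String) configuration.getOrDefault("model", defaultModel);
    }

    public static void main(String[] args) {
        Map<String, Object> conf = Map.of("model", "my-custom-model");
        System.out.println(resolveModel(conf, "text-embedding-ada-002"));     // prints "my-custom-model"
        System.out.println(resolveModel(Map.of(), "text-embedding-ada-002")); // prints "text-embedding-ada-002"
    }
}
```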
ok
@jexp @tomasonjo thank you!
* WIP
* Add completion API
* add chatCompletion
* prettificiation
* WIP
* prettify
* Refactoring, todo docs & tests
* Added Tests & Docs for OpenAI procs (WIP)
* Update openai.adoc From @tomasonjo

---------
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
* Add OpenAI (LLM) procedures (#3575)
* WIP
* Add completion API
* add chatCompletion
* prettificiation
* WIP
* prettify
* Refactoring, todo docs & tests
* Added Tests & Docs for OpenAI procs (WIP)
* Update openai.adoc From @tomasonjo
* Removed unused deps after OpenAI procedures addition (#3585)
* Updated extended.txt to fix checkCoreWithExtraDependenciesJars failure after Open AI procedures

---------
Co-authored-by: Michael Hunger <github@jexp.de>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
* [NOID] Add OpenAI (LLM) procedures (#3575) (#3582)
* Add OpenAI (LLM) procedures (#3575)
* WIP
* Add completion API
* add chatCompletion
* prettificiation
* WIP
* prettify
* Refactoring, todo docs & tests
* Added Tests & Docs for OpenAI procs (WIP)
* Update openai.adoc From @tomasonjo
* Removed unused deps after OpenAI procedures addition (#3585)
* Updated extended.txt to fix checkCoreWithExtraDependenciesJars failure after Open AI procedures
* [NOID] First LLM prompt for cypher, query and schema in APOC (#3649)
* First LLM prompt for cypher, query and schema in APOC
* Added integration tests
* enable open.ai key management globally
* Added tests
* added docs
* added configuration map to procs

---------
Co-authored-by: Michael Hunger <github@jexp.de>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: Andrea Santurbano <santand@gmail.com>
Fixes #3574
Proposed Changes (Mandatory)
Similar to the NLP procedures, we can provide a number of procedures that make working with LLMs easier,
starting with the OpenAI API and possibly extending it to other providers.
Thanks to @tomasonjo for the initial PR