
[WIP] Add missing response fields to Spring AI ChatResponse, Generation and AssistantMessage #550

Open
wants to merge 1 commit into main

Conversation

tzolov
Collaborator

@tzolov tzolov commented Apr 4, 2024

Ensure that Generation and AssistantMessage have id, index, and isCompleted attributes, and that ChatResponse also carries a unique ID.
Refactor all LLM APIs to introduce the response ID, index, and isCompleted (hasFinished) status.
Some of the APIs provide a built-in unique response ID and finish reason, but many do not; for the latter we generate synthetic IDs.
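The synthetic-ID fallback could be sketched as follows. This is an illustrative helper, not the actual Spring AI code; the class and method names are assumptions:

```java
import java.util.UUID;

// Hypothetical helper: providers that do not return a native response ID
// get a synthetic one, so ChatResponse always carries a unique ID.
public class ResponseIds {

    // Prefer the provider-supplied ID; fall back to a generated UUID.
    public static String orSynthetic(String nativeId) {
        return (nativeId != null && !nativeId.isBlank())
                ? nativeId
                : UUID.randomUUID().toString();
    }
}
```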

Add ID, index and isCompleted to the following LLM API responses:

  • Anthropic, Azure, Mistral, OpenAI
  • Vertex AI Gemini: resolve an API deprecation issue, streamline the Gemini output code, and serialize citation and safety output in message properties.
  • Ollama support
  • Bedrock Cohere result metadata propagation
  • Bedrock Llama2 and Titan
  • Bedrock Jurassic2
  • Bedrock Anthropic2
  • WatsonxAI
  • Vertex PaLM2, HuggingFace
  • Remove Generation.withGenerationMetadata
  • Remove AssistantPromptTemplate
  • Make all Generation fields final.
  • Make all AssistantMessage fields final.
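With every field final, the enriched types described above would be immutable value objects. A minimal sketch of that shape, using Java records (the names and exact fields are illustrative; the real Spring AI classes differ):

```java
// Hypothetical immutable shapes for the enriched response types:
// each carries the response ID, its index within the response, and
// the isCompleted (hasFinished) status described in this PR.
record AssistantMessage(String id, Integer index, boolean isCompleted, String content) {}

record Generation(String id, Integer index, boolean isCompleted, AssistantMessage output) {}
```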

Experiment with Message and Token window chat history

  Support chat-history management at the Spring AI abstraction level.
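A message-window chat history, as experimented with here, might manage a sliding window along these lines. This is a sketch under assumed names, not the actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sliding-window chat history: keeps only the last N
// messages so the prompt stays within the model's context limit.
class MessageWindowChatHistory {

    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    MessageWindowChatHistory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict the oldest message
        }
    }

    List<String> window() {
        return List.copyOf(messages);
    }
}
```

A token-window variant would evict by accumulated token count rather than message count, but follows the same eviction pattern.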