
Python: Add Google PaLM connector with chat completion and text embedding #2258

Merged
merged 73 commits into microsoft:main on Aug 22, 2023

Conversation

am831
Contributor

@am831 am831 commented Aug 1, 2023

Motivation and Context

Implementation of a Google PaLM connector with chat completion and text embedding, plus three example files that demonstrate their functionality.
Closes #2098
I also opened a PR for text completion #2076

Description

What's new:

  1. Implemented the Google PaLM connector with chat completion and text embedding
  2. Added three example files to `python/samples/kernel-syntax-examples`
  3. The example files show PaLM chat in action with skills, with memory, and with plain user messages; the memory example uses PaLM embedding as well
  4. Added integration tests covering chat with skills and embedding with `kernel.memory` functions
  5. Added unit tests to verify successful class initialization and correct behavior of class functions

There are some important differences between Google PaLM chat and OpenAI chat:

  1. PaLM has two functions for chatting, `chat` and `reply`. The `chat` function in Google's genai library starts a new conversation, and `reply` continues it. `reply` is an attribute of the response object returned by `chat`, so an instance of the `GooglePalmChatCompletion` class needs a way to determine which function to use; this is why I introduced a private attribute to store the response object. See https://developers.generativeai.google/api/python/google/generativeai/types/ChatResponse
  2. PaLM does not use system messages. Instead, the `chat` function takes a parameter called `context`, which serves the same purpose: priming the assistant with certain behaviors and information. So when the user passes a system message to `complete_chat_async`, it is passed to `chat` as the `context` parameter.
  3. Semantic memory works with the chat service as long as the user creates a chat prompt template. The prompt containing the memory needs to be added to the chat prompt template as a system message; see `python/samples/kernel-syntax-examples/google_palm_chat_with_memory.py` for details. If the only purpose of `complete_async` in `GooglePalmChatCompletion` is to send memories plus user messages to the chat service as a text prompt, then `complete_async` is not fulfilling its intended purpose. A possible solution would be to send the text prompt as a request to the text service within `complete_async`.
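Points 1 and 2 can be sketched as follows. This is an illustrative sketch, not the connector's actual code: the real connector calls `google.generativeai.chat`, which is stubbed out here (`_FakeBackend`, `_FakeResponse`, and `GooglePalmChatSketch` are hypothetical names) so the routing logic runs offline:

```python
import asyncio
from typing import List, Optional, Tuple


class _FakeResponse:
    """Stand-in for google.generativeai's ChatResponse (offline stub)."""

    def __init__(self, context, history):
        self.context = context
        self.history = history
        self.last = f"echo: {history[-1]}"

    def reply(self, message):
        # ChatResponse.reply continues the same conversation.
        return _FakeResponse(self.context, self.history + [message])


class _FakeBackend:
    """Stand-in for the google.generativeai module itself."""

    def chat(self, context=None, messages=""):
        return _FakeResponse(context, [messages])


class GooglePalmChatSketch:
    """Sketch of the chat/reply routing described in points 1 and 2."""

    def __init__(self, backend):
        self._backend = backend
        self._response = None  # private attribute storing the last response

    async def complete_chat_async(self, messages: List[Tuple[str, str]]) -> str:
        # PaLM has no system role: a system message becomes `context`,
        # which primes the assistant when a new conversation starts.
        context: Optional[str] = None
        user_message = ""
        for role, text in messages:
            if role == "system":
                context = text
            elif role == "user":
                user_message = text

        if self._response is None:
            # No stored response yet: `chat` starts a new conversation.
            self._response = self._backend.chat(context=context, messages=user_message)
        else:
            # A stored response exists: `reply` continues the conversation.
            self._response = self._response.reply(user_message)
        return self._response.last


async def demo():
    service = GooglePalmChatSketch(_FakeBackend())
    first = await service.complete_chat_async(
        [("system", "You are terse."), ("user", "Hello")]
    )
    second = await service.complete_chat_async([("user", "And again?")])
    return first, second


print(asyncio.run(demo()))
```

The key design point is the stored `_response`: its presence is what distinguishes "start a conversation" from "continue it", which is why it lives on the class instance rather than being recomputed per call.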
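Point 3 (memory folded into the chat as a system message) can be illustrated with a small sketch. The helper below is a hypothetical stand-in, not Semantic Kernel's actual prompt-template API; it only shows the shape of the message list the connector would receive:

```python
from typing import List, Tuple


def build_chat_messages(
    recalled_memories: List[str], user_message: str
) -> List[Tuple[str, str]]:
    """Fold recalled memories into the chat as a system message, which
    the connector then forwards to PaLM as the `context` parameter."""
    memory_block = "\n".join(f"- {fact}" for fact in recalled_memories)
    system_message = "Answer using only the information below:\n" + memory_block
    return [("system", system_message), ("user", user_message)]


messages = build_chat_messages(
    ["The user's name is Andrea", "The user lives in Seattle"],
    "Where do I live?",
)
for role, _text in messages:
    print(role)
```

Because PaLM has no system role of its own, placing the memory in the system slot of the chat prompt template is what lets it reach the model at all, via the `context` parameter.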

Contribution Checklist

Currently there are no warnings. There was one warning when first installing genai with `poetry add google.generativeai==v0.1.0rc2` from within the poetry shell: "The locked version 0.1.0rc2 for google-generativeai is a yanked version. Reason for being yanked: Release is marked as supporting Py3.8, but in practice it requires 3.9". We would need to require a later version of Python to fix it.

am831 and others added 30 commits July 14, 2023 15:14
@nacharya1 nacharya1 added this to the R3 : Cycle 2 milestone Aug 10, 2023
github-merge-queue bot pushed a commit that referenced this pull request Aug 17, 2023
…le (#2076)

### Motivation and Context

Implementation of Google PaLM connector with text completion and an
example file to demonstrate its functionality.
Closes #1979

### Description


1. Implemented the Google PaLM connector with text completion
2. Added an example file to `python/samples/kernel-syntax-examples`
3. Added integration tests with different inputs to `kernel.run_async`
4. Added unit tests to ensure successful initialization of the class and
successful API calls
5. Three optional arguments (`top_k`, `safety_settings`, `client`) for
`google.generativeai.generate_text` were not included. See more
information about the function and its arguments:
https://developers.generativeai.google/api/python/google/generativeai/generate_text

I also opened a PR for text embedding and chat completion #2258

### Contribution Checklist


- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#dev-scripts)
raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄

Currently there are no warnings. There was one warning when first
installing genai with `poetry add google.generativeai==v0.1.0rc2` from
within the poetry shell: "The locked version 0.1.0rc2 for
google-generativeai is a yanked version. Reason for being yanked:
Release is marked as supporting Py3.8, but in practice it requires
3.9". We would need to require a later version of Python to fix it.

---------

Co-authored-by: Abby Harrison <54643756+awharrison-28@users.noreply.github.com>
Co-authored-by: Abby Harrison <abby.harrison@microsoft.com>
@awharrison-28
Contributor

@am831 merged with main and resolved the conflicts caused by the merge of PR #2076. Added additional logic to make sure that tests and module imports only occur on Python 3.9 or greater.
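The version gate described here can be sketched with a `sys.version_info` check. This is an illustrative sketch (the helper name is hypothetical), showing the pattern of only importing the dependency when the interpreter can actually run it:

```python
import sys


def palm_available() -> bool:
    """Return True only when the PaLM dependency can actually be used."""
    if sys.version_info < (3, 9):
        # google-generativeai declares Py3.8 support but requires 3.9.
        return False
    try:
        import google.generativeai  # noqa: F401
    except ImportError:
        return False
    return True


print(palm_available())
```

Tests can use the same predicate to skip themselves on unsupported interpreters instead of failing at import time.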

@awharrison-28 awharrison-28 added this pull request to the merge queue Aug 21, 2023
Merged via the queue into microsoft:main with commit 59cbbdb Aug 22, 2023
28 checks passed
SOE-YoungS pushed a commit to SOE-YoungS/semantic-kernel that referenced this pull request Nov 1, 2023
…le (microsoft#2076)

(commit message is a verbatim duplicate of the #2076 merge commit above)
SOE-YoungS pushed a commit to SOE-YoungS/semantic-kernel that referenced this pull request Nov 1, 2023
…ding (microsoft#2258)

(commit message is a verbatim duplicate of this PR's description above)
Labels
python Pull requests for the Python Semantic Kernel
Development

Successfully merging this pull request may close these issues.

Python: Add text embedding and chat completion to Google PaLM connector
5 participants