
google-common [feature]: Support JSON Mode for Gemini #5071 #5190

Merged · 11 commits · May 1, 2024

Conversation

JackFener
Contributor

Fixes #5071
Adds the option to choose the responseMimeType when calling Google AI models. Officially available since Gemini 1.5 Pro (though in my tests it also worked fine on Gemini 1.0 Pro), it lets you have the LLM format its output as a JSON object.
The model.stream() method still returns a string containing the JSON. I would like to add automatic parsing of the string, but Gemini does not always return correctly formatted JSON, and I also don't like having the .call() method return two different types.
So for now the user still has to JSON.parse(response) to get an object response.
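Since the model returns the JSON as a string and the string is not guaranteed to parse, a small guard on the caller's side is a reasonable pattern. This is only a sketch; the helper name tryParseJson is hypothetical and not part of this PR:

```typescript
// Gemini in JSON mode still returns a string, and that string may
// occasionally be malformed. Wrap JSON.parse so a bad payload yields
// null instead of throwing.
function tryParseJson(text: string): unknown | null {
  try {
    return JSON.parse(text);
  } catch {
    return null;
  }
}

// Stand-in for a response string from model.stream() / .call():
const raw = '{"answer": 42}';
const parsed = tryParseJson(raw);
```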
I would be happy to work also on the other google related tasks.
Not very active on X, but you can find me at @Giacomo_fava, or on LinkedIn at @giacomofavaai. Not here to self-promote, just linking my handle as the PR comment template suggests.

@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Apr 23, 2024

vercel bot commented Apr 23, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name | Status | Updated (UTC)
langchainjs-api-refs | ✅ Ready | May 1, 2024 0:45am
langchainjs-docs | ✅ Ready | May 1, 2024 0:45am

@afirstenberg (Contributor) left a comment


Overall looks great! Thank you for the update!

Some thoughts below.

Two additional questions:

  • Do we know if this also works on the AI Studio API?
  • What happens if we include the setting on a model that doesn't support it?

/**
 * text/plain: (default) Text output.
 * application/json: JSON response in the candidates.
 */
responseMimeType?: "text/plain" | "application/json"
Contributor


Good to see the comment!
I would suggest making the two strings a separate type and using that type here and down in GeminiGenerationConfig.
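A sketch of what that suggestion could look like. The interface bodies are trimmed here, and the type name GoogleAIResponseMimeType is an assumption taken from the import that appears later in this PR's diff:

```typescript
// The two MIME strings pulled into one shared union type, reused by both
// the model params and GeminiGenerationConfig.
export type GoogleAIResponseMimeType = "text/plain" | "application/json";

export interface GoogleAIModelParams {
  responseMimeType?: GoogleAIResponseMimeType;
}

export interface GeminiGenerationConfig {
  temperature?: number;
  topP?: number;
  topK?: number;
  responseMimeType?: GoogleAIResponseMimeType;
}
```

With this, an invalid string like "text/html" becomes a compile-time error in both places at once.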

Contributor Author


Definitely, I will.

@@ -218,6 +227,7 @@ export interface GeminiGenerationConfig {
temperature?: number;
topP?: number;
topK?: number;
responseMimeType?: "text/plain" | "application/json"
Contributor


Other location to use the type I suggested above.

options?.responseMimeType ??
params?.responseMimeType ??
target?.responseMimeType ??
"text/plain";
Contributor


Rather than put the default here, we tend to specify defaults in the class that implements it (i.e., GoogleBaseLLM and ChatGoogleBase).
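A minimal sketch of that convention, assuming a simplified class shape (ExampleGoogleLLM is illustrative, not the real GoogleBaseLLM):

```typescript
type ResponseMimeType = "text/plain" | "application/json";

// The default "text/plain" is declared once at the class level, so the
// request builder never needs a fallback literal of its own.
class ExampleGoogleLLM {
  responseMimeType: ResponseMimeType = "text/plain";

  constructor(fields?: { responseMimeType?: ResponseMimeType }) {
    // Field initializers run first, so the class default survives
    // unless the caller explicitly overrides it.
    this.responseMimeType = fields?.responseMimeType ?? this.responseMimeType;
  }
}
```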

const authOptions: MockClientAuthInfo = {
record,
projectId,
resultFile: "llm-8-mock.json",
Contributor


I don't see the llm-8-mock.json file checked in.

Contributor Author


my bad, will push it asap

@@ -521,4 +521,30 @@ describe("Mock Google LLM", () => {
expect(responseArray).toHaveLength(3);
console.log("record", JSON.stringify(record, null, 2));
});

test("8: streamGenerateContent - streaming - json responseMimeType", async () => {
Contributor


Can we also get a test to make sure the request contains the responseMimeType setting in the generation config?
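The requested test could be sketched roughly as follows in plain TypeScript. The repo's real tests use Jest mocks; buildRequest here is a stand-in for the library's actual request builder, not its real API:

```typescript
type MimeType = "text/plain" | "application/json";

interface GeminiRequest {
  generationConfig: { responseMimeType: MimeType };
}

// Hypothetical stand-in for the code under test: it must carry the
// caller's responseMimeType into the request's generation config.
function buildRequest(params: { responseMimeType?: MimeType }): GeminiRequest {
  return {
    generationConfig: {
      responseMimeType: params.responseMimeType ?? "text/plain",
    },
  };
}

// The assertion the review asks for: the setting survives into the body.
const req = buildRequest({ responseMimeType: "application/json" });
console.assert(req.generationConfig.responseMimeType === "application/json");
```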

@JackFener
Contributor Author

  • Do we know if this also works on the AI Studio API?
    Reading the docs, it doesn't seem so, but I'll dig deeper.
  • What happens if we include the setting on a model that doesn't support it?
    I've also tested it on some older versions of Gemini (which shouldn't support it), and it delivers JSON output anyway.

@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Apr 24, 2024
@JackFener
Contributor Author

just committed all the suggested changes :)

@@ -14,7 +14,7 @@ import {
GoogleAIModelParams,
GoogleAISafetySetting,
GooglePlatformType,
GeminiContent,
GeminiContent, GoogleAIResponseMimeType,
Contributor


Make sure you run yarn format

Labels
auto:improvement Medium size change to existing code to handle new use-cases size:XL This PR changes 500-999 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

google-common [feature]: Support JSON Mode for Gemini
4 participants