Vercel AI SDK – Google Gemini Question #521

@SambhavAnand

Description

Viewing and Debugging Model Configurations

I'm able to run an agent successfully using the AI SDK adapter for Google Gemini. However, I'm passing options down like this:

modelSettings: {
  providerData: {
    google: {
      maxOutputTokens,
      thinkingConfig: {
        thinkingBudget,
        includeThoughts: true,
      },
      temperature: 0,
    },
  },
},

and I have no way of verifying whether these options are actually reaching the downstream provider. Is there any way for me to log some instrumentation or the raw API calls (or view them in the OpenAI tracing logs) to confirm that these parameters are being passed down?

The reason I want to debug this is that I made some raw calls to the Gemini model with the same prompt and parameters, and those results are more in line with what one would expect given the parameters I provided.
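
For context, a direct call through the AI SDK alone can pass the same options via providerOptions. The sketch below is illustrative only, based on the @ai-sdk/google documentation for thinkingConfig, and is not the exact script used for the raw comparison calls:

import "dotenv/config";
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { generateText } from "ai";

const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_GEMINI_API_KEY,
});

async function directCall() {
  const result = await generateText({
    model: google("gemini-2.5-pro"),
    temperature: 0,
    providerOptions: {
      google: {
        thinkingConfig: {
          thinkingBudget: 20000,
          includeThoughts: true,
        },
      },
    },
    prompt: "What are the key differences between renewable and non-renewable energy sources?",
  });

  console.log(result.text);
  // If thinkingConfig was honored, the usage / provider metadata on the result
  // should reflect thought tokens (field names vary by AI SDK version).
  console.log(result.usage);
}

directCall().catch(console.error);

If a direct call like this behaves as expected while the agent run with identical settings does not, that at least narrows the problem to the adapter layer.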

Debug information

  • Agents SDK version: (e.g. v0.1.3)
  • Runtime environment (e.g. Node.js v22.17.0)

Repro steps

TypeScript script below:

import "dotenv/config";
import {
  Agent,
  run,
  setTracingExportApiKey,
  setDefaultOpenAIKey,
} from "@openai/agents";
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { aisdk } from "@openai/agents-extensions";
import _ from "lodash";
import { Logger } from "./src/utils/Logger";

setTracingExportApiKey(process.env.openai_api_key ?? "");
setDefaultOpenAIKey(process.env.openai_api_key ?? "");

// Initialize Google provider
const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_GEMINI_API_KEY,
});

// Create a simple test agent
const testAgent = new Agent({
  name: "Test Gemini Agent",
  model: aisdk(google("gemini-2.5-pro")),
  modelSettings: {
    providerData: {
      google: {
        maxOutputTokens: 65655,
        thinkingConfig: {
          thinkingBudget: 20000,
          includeThoughts: true,
        },
        temperature: 0,
      },
    },
  },
  instructions: `You are a helpful assistant. Think through problems step by step.
  When asked a question, use your thinking tokens to reason through the answer thoroughly.`,
});

// Test function to run the agent
async function testGeminiAgent() {
  try {
    console.log("🚀 Starting Gemini 2.5 Pro agent test...");
    console.log("Settings:");
    console.log("- Model: gemini-2.5-pro");
    console.log("- Thinking tokens: 65655");
    console.log("- Max output tokens: 65655");
    console.log("- Include thoughts: true");
    console.log("- Temperature: 0");
    console.log("\n");

    // Create a test thread with a complex question that requires thinking
    const thread = [
      {
        role: "user" as const,
        content: `What are the key differences between renewable and non-renewable energy sources? 
        Please analyze the advantages and disadvantages of each type and explain which approach 
        would be more sustainable for meeting global energy needs in the next 50 years.`,
      },
    ];

    // Run the agent
    const result = await run(testAgent, thread);
    Logger.info({
      modelSettings: testAgent.modelSettings,
      model: testAgent.model,
    });

    console.log("📝 Agent Response:\n");
    console.log("=".repeat(50));
    Logger.info({
      otherStuff: result.input,
    });
  } catch (error) {
    console.error("❌ Test failed:", error);
  }
}

// Main execution
async function main() {
  // Check for API key
  if (!process.env.GOOGLE_GEMINI_API_KEY) {
    console.error("❌ Please set GOOGLE_GEMINI_API_KEY environment variable");
    process.exit(1);
  }

  // Use lodash to show it's imported
  console.log(`📦 Using lodash version: ${_.VERSION || "unknown"}`);
  console.log(`📊 Test data has ${_.size({ a: 1, b: 2, c: 3 })} properties\n`);

  // Run the test
  await testGeminiAgent();

  // Optionally run the exact config test
  // await testWithExactConfig();
}

// Run the script
main().catch(error => {
  console.error("Fatal error:", error);
  process.exit(1);
});

Expected behavior

I just want to know how to figure out whether these parameters are actually going into the API, either through the Agents tracing UI or by inspecting the raw API call to Google.
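
One possible way to inspect the raw API call, assuming the custom fetch setting that @ai-sdk/google providers accept can be used as request-logging middleware (a minimal, untested sketch; everything else mirrors the repro script above):

import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { aisdk } from "@openai/agents-extensions";

const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_GEMINI_API_KEY,
  // Wrap fetch so every outgoing Gemini request is logged before being forwarded.
  fetch: async (input, init) => {
    console.log("Gemini request URL:", input);
    // The JSON body should contain generationConfig / thinkingConfig
    // if the provider options were actually passed down by the adapter.
    console.log("Gemini request body:", init?.body);
    return fetch(input, init);
  },
});

// Used exactly as in the repro script above.
const model = aisdk(google("gemini-2.5-pro"));

Passing the wrapped provider into aisdk(...) keeps the agent setup unchanged while every request body gets logged, so the presence or absence of thinkingConfig in the payload would answer the question directly.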
