# execution-gemini

Google Gemini provider implementation for LLM execution. Implements the `Provider` interface from the `execution` package.
## Installation

```bash
npm install execution-gemini @google/generative-ai
```

## Usage

```typescript
import { GeminiProvider, createGeminiProvider } from 'execution-gemini';

// Create provider
const provider = createGeminiProvider();

// Execute a request
const response = await provider.execute(
  {
    model: 'gemini-1.5-pro',
    messages: [
      { role: 'system', content: 'You are helpful.' },
      { role: 'user', content: 'Hello!' }
    ],
    addMessage: () => {},
  },
  {
    apiKey: process.env.GEMINI_API_KEY,
    temperature: 0.7,
  }
);

console.log(response.content);
console.log(response.usage); // { inputTokens: X, outputTokens: Y }
```

## Supported Models

The provider supports all Gemini models:
- Gemini 1.5 Pro
- Gemini 1.5 Flash
- Gemini 1.0 Pro
## API Key

Set via:

- `options.apiKey` parameter
- `GEMINI_API_KEY` environment variable
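The usual precedence for this kind of configuration is that an explicit option wins over the environment variable. The sketch below illustrates that resolution order; `resolveApiKey` is a hypothetical helper, not part of the package's public API.

```typescript
// Hypothetical sketch of API key resolution: an explicit options.apiKey
// takes precedence, with GEMINI_API_KEY as the environment fallback.
function resolveApiKey(optionKey?: string): string {
  const key = optionKey ?? process.env.GEMINI_API_KEY;
  if (!key) {
    throw new Error('Missing Gemini API key: pass options.apiKey or set GEMINI_API_KEY');
  }
  return key;
}
```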
## Features

- System instruction support
- Multi-turn conversation handling
- Structured output via JSON schema
- Token usage tracking
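To illustrate the multi-turn flow, a caller can accumulate conversation history in the `messages` array between `execute` calls. The `Message` type below is an assumption inferred from the usage example above; `appendTurn` is an illustrative helper, not part of the package.

```typescript
type Role = 'system' | 'user' | 'assistant';

interface Message {
  role: Role;
  content: string;
}

// Append one user/assistant exchange so the next execute() call
// receives the full conversation history.
function appendTurn(history: Message[], userText: string, reply: string): Message[] {
  return [
    ...history,
    { role: 'user', content: userText },
    { role: 'assistant', content: reply },
  ];
}
```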
## Response Format

```typescript
interface ProviderResponse {
  content: string;
  model: string;
  usage?: {
    inputTokens: number;
    outputTokens: number;
  };
}
```

## Related Packages

- `execution` - Core interfaces (no SDK dependencies)
- `execution-openai` - OpenAI provider
- `execution-anthropic` - Anthropic provider
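Since `usage` is optional on `ProviderResponse`, callers should guard against its absence when aggregating token counts. A minimal illustrative helper (not part of the package):

```typescript
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

// Illustrative helper: sum input and output tokens,
// treating a missing usage report as zero.
function totalTokens(usage?: Usage): number {
  return usage ? usage.inputTokens + usage.outputTokens : 0;
}
```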
## License

Apache-2.0