Welcome to this hands-on codelab! You'll learn how to integrate Google's Gemini API into an Angular application to build an intelligent chatbot with function calling capabilities.
Duration: 45-60 minutes
Level: Intermediate
Technologies: Angular 20, Gemini API, TypeScript, TailwindCSS
A real-time chatbot application that:
- ✨ Communicates with Google's Gemini AI model
- 🛠️ Uses function calling to execute custom tools
- 🔍 Performs grounded searches with web results
- 💬 Displays chat messages with a modern UI
Before starting, ensure you have:
- Node.js (v22 or later) installed
- npm or yarn package manager
- Basic knowledge of Angular and TypeScript
- A Google AI Studio account and Gemini API key
```bash
# Clone the repository (or download the project files)
cd gdg-angular
npm install
```

This will install:

- Angular 20 framework
- `@google/genai` (Gemini API client library)
- TailwindCSS for styling
- Other required dependencies
🔑 TODO: You need to obtain your own API key!
- Visit Google AI Studio
- Sign in with your Google account
- Click "Get API Key" or "Create API Key"
- Copy your API key (keep it secure!)
⚠️ Important: Never commit API keys to version control. We'll configure it properly in the next steps.
Open `src/services/gemini.service.ts` and locate the constructor:

```typescript
constructor() {
  // IMPORTANT: Replace with your API key
  this.ai = new GoogleGenAI({
    apiKey: 'YOUR_API_KEY_HERE', // 👈 Replace this!
  });
}
```

TODO: Replace `'YOUR_API_KEY_HERE'` with your actual Gemini API key.

💡 Best Practice: For production apps, use environment variables instead of hardcoded keys. We'll discuss this later!
```
┌─────────────────┐
│  AppComponent   │ ← User Interface (Chat UI)
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  GeminiService  │ ← Handles AI Communication
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   Gemini API    │ ← Google's AI Model
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Function Tools  │ ← Custom Functions (Weather, Orders)
└─────────────────┘
```
| File | Purpose |
|---|---|
| `src/services/gemini.service.ts` | Core service for Gemini API integration |
| `src/services/available-tools.ts` | Function declarations and implementations |
| `src/models/chat.model.ts` | Type definitions for chat messages |
| `src/app.component.ts` | Main chat interface component |
| `src/components/chat-message/` | Individual message display component |
Open `src/services/gemini.service.ts` and let's understand the key parts:

```typescript
constructor() {
  this.ai = new GoogleGenAI({
    apiKey: 'YOUR_API_KEY',
  });
}
```

The `GoogleGenAI` class is the main entry point for interacting with Gemini.
```typescript
private startChat(): void {
  this.chat = this.ai.chats.create({
    model: 'gemini-2.5-flash',
    config: {
      tools: [{ urlContext: {} }, { googleSearch: {} }],
    },
  });
}
```

Key Concepts:

- Model: We're using `gemini-2.5-flash` (fast and efficient)
- Tools: Enable `urlContext` and `googleSearch` for grounded responses

🎯 TASK: Try changing the model to `gemini-2.5-pro` to see the difference in responses!
```typescript
async sendMessage(prompt: string): Promise<ChatMessage> {
  if (!this.chat) {
    this.startChat();
  }
  const response: GenerateContentResponse = await this.chat.sendMessage({
    message: prompt,
  });
  return await this.handleResponse(response);
}
```

This method:

- Ensures a chat session exists
- Sends the user's message
- Processes the response
The `handleResponse` method is where the magic happens:

```typescript
private async handleResponse(
  response: GenerateContentResponse,
  isToolResponse = false
): Promise<ChatMessage> {
  // Check if Gemini wants to call a function
  const functionCalls = response.candidates?.[0]?.content.parts
    .filter((part) => !!part.functionCall)
    .map((part) => part.functionCall);

  if (!functionCalls || functionCalls.length === 0) {
    // No function call - return the text response
    const groundingMetadata = response.candidates?.[0]?.groundingMetadata;
    return {
      role: isToolResponse ? 'tool' : 'model',
      content: response.text.trim(),
      // Include grounding metadata if available
      searchEntryPoint: groundingMetadata?.searchEntryPoint?.renderedContent,
      groundingChunks: groundingMetadata?.groundingChunks?.map(...),
    };
  }

  // Execute the function calls
  const toolResults: Part[] = [];
  for (const call of functionCalls) {
    const { name, args } = call;
    const tool = toolImplementations[name];
    if (tool) {
      const result = await tool(...Object.values(args));
      toolResults.push({
        functionResponse: { name, response: result }
      });
    }
  }

  // Send tool results back to Gemini
  const toolResponse = await this.chat.sendMessage({ message: toolResults });
  return await this.handleResponse(toolResponse, true);
}
```

Flow Explanation:

- Check if Gemini requests a function call
- If yes → Execute the function and send results back
- If no → Return the text response with grounding data
Open `src/services/available-tools.ts` to see how custom functions are defined.

```typescript
function getWeather(location: string): object {
  if (location.toLowerCase().includes('tokyo')) {
    return { location: 'Tokyo', temperature: '15°C', condition: 'Cloudy' };
  }
  // ... more locations
  return { location, temperature: '20°C', condition: 'Clear' };
}

function getOrderStatus(orderId: string): object {
  const orderIdNumber = parseInt(orderId, 10);
  if (orderIdNumber > 500) {
    return { orderId, status: 'Shipped' };
  }
  return { orderId, status: 'Processing' };
}
```

These are mock functions for demonstration. In a real app, these would call actual APIs!
```typescript
export const functionDeclarations: FunctionDeclaration[] = [
  {
    name: 'getWeather',
    description: 'Get the current weather in a given location',
    parameters: {
      type: Type.OBJECT,
      properties: {
        location: {
          type: Type.STRING,
          description: 'The city and state, e.g. San Francisco, CA',
        },
      },
      required: ['location'],
    },
  },
  // ... more functions
];
```

Important: The `description` field is crucial! Gemini uses it to decide when to call your function.
```typescript
export const toolImplementations: { [key: string]: (...args: any[]) => any } = {
  getWeather,
  getOrderStatus,
};
```

This maps function names to their implementations.
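To make the lookup concrete, here is a self-contained sketch of the dispatch step that `handleResponse` performs: look up the implementation by the name Gemini returned, then spread the argument values into the call. `getOrderStatus` is inlined so the snippet runs on its own.

```typescript
// Mock tool, same behavior as in available-tools.ts
function getOrderStatus(orderId: string): object {
  return parseInt(orderId, 10) > 500
    ? { orderId, status: 'Shipped' }
    : { orderId, status: 'Processing' };
}

const toolImplementations: { [key: string]: (...args: any[]) => any } = {
  getOrderStatus,
};

// A function call shaped like what Gemini returns
const call = { name: 'getOrderStatus', args: { orderId: '501' } };

// The dispatch step: name lookup + spreading the argument values
const result = toolImplementations[call.name](...Object.values(call.args));
// → { orderId: '501', status: 'Shipped' }
```

Note that `Object.values(args)` relies on the declaration's parameter order matching the implementation's signature, which is why the `parameters` schema and the function signature must stay in sync.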
Now it's your turn! Let's add a new function to get stock prices.
TODO: Add this function to src/services/available-tools.ts:
```typescript
function getStockPrice(symbol: string): object {
  // Mock stock prices
  const stocks: {
    [key: string]: { symbol: string; price: string; change: string };
  } = {
    GOOGL: { symbol: 'GOOGL', price: '$175.50', change: '+2.3%' },
    AAPL: { symbol: 'AAPL', price: '$189.25', change: '+1.8%' },
    MSFT: { symbol: 'MSFT', price: '$420.10', change: '+0.5%' },
  };
  const upperSymbol = symbol.toUpperCase();
  return (
    stocks[upperSymbol] || {
      symbol: upperSymbol,
      price: 'N/A',
      change: 'N/A',
      error: 'Stock not found',
    }
  );
}
```

TODO: Update the `toolImplementations` object:

```typescript
export const toolImplementations: { [key: string]: (...args: any[]) => any } = {
  getWeather,
  getOrderStatus,
  getStockPrice, // 👈 Add this line!
};
```

TODO: Add this to the `functionDeclarations` array:
```typescript
{
  name: "getStockPrice",
  description: "Get the current stock price for a given ticker symbol",
  parameters: {
    type: Type.OBJECT,
    properties: {
      symbol: {
        type: Type.STRING,
        description: "The stock ticker symbol, e.g. GOOGL, AAPL, MSFT",
      },
    },
    required: ["symbol"],
  },
}
```

TODO: Modify `src/services/gemini.service.ts` in the `startChat()` method:

```typescript
private startChat(): void {
  this.chat = this.ai.chats.create({
    model: 'gemini-2.5-flash',
    config: {
      tools: [
        { functionDeclarations }, // 👈 Add this line!
        { urlContext: {} },
        { googleSearch: {} }
      ],
    },
  });
}
```

Make sure to import at the top:

```typescript
import { functionDeclarations, toolImplementations } from './available-tools';
```

Now start the dev server:

```bash
npm run dev
```

The app should open at http://localhost:4200 (or the port shown in the terminal).
Try these prompts:
- Weather: "What's the weather in Tokyo?"
- Orders: "Check status for order 501"
- Search: "Who won the Nobel Prize in Physics in 2023?"
Try: "What's the stock price for GOOGL?"
Expected behavior:
- Gemini recognizes it needs stock info
- Calls `getStockPrice("GOOGL")`
- Returns a formatted response with the price
Open the browser console (F12) and look for logs:
```
Calling tool: getStockPrice with args: { symbol: "GOOGL" }
```
This confirms your function is being called correctly!
Grounding connects AI responses to real-world sources, making them more accurate and trustworthy.
In `src/services/gemini.service.ts`:

```typescript
const groundingMetadata = response.candidates?.[0]?.groundingMetadata;

return {
  role: 'model',
  content: response.text.trim(),
  searchEntryPoint: groundingMetadata?.searchEntryPoint?.renderedContent,
  groundingChunks: groundingMetadata?.groundingChunks?.map((chunk) => ({
    uri: chunk.web.uri,
    title: chunk.web.title,
  })),
  groundingSupports: groundingMetadata?.groundingSupports,
};
```

The `ChatMessageComponent` processes grounding data and adds citation links:
```typescript
processedContent = computed(() => {
  const message = this.message();
  if (!message.groundingSupports || !message.groundingChunks) {
    return message.content;
  }

  // Add [1], [2] citation links to the content
  let content = message.content;
  for (const support of message.groundingSupports) {
    const links = support.groundingChunkIndices
      .map((index) => {
        const chunk = message.groundingChunks![index];
        return `<a href="${chunk.uri}" target="_blank">[${index + 1}]</a>`;
      })
      .join('');
    // Insert links at the appropriate position
  }
  return content;
});
```

Try it: Ask "What are the latest AI developments?" and see citation links appear!
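The loop above leaves link insertion as a comment. Here is a hedged standalone sketch of one way to complete it, assuming each support carries a `segment.endIndex` character offset, which mirrors the Gemini grounding metadata shape; verify the exact field names against your SDK version.

```typescript
// Simplified local types; field names are assumptions based on the
// grounding metadata shape, not guaranteed by this codelab.
interface GroundingChunk { uri: string; title: string; }
interface GroundingSupport {
  segment: { endIndex: number };
  groundingChunkIndices: number[];
}

function addCitations(
  content: string,
  supports: GroundingSupport[],
  chunks: GroundingChunk[],
): string {
  // Insert from the end of the string backwards so that earlier
  // offsets remain valid after each insertion.
  const sorted = [...supports].sort(
    (a, b) => b.segment.endIndex - a.segment.endIndex,
  );
  let result = content;
  for (const support of sorted) {
    const links = support.groundingChunkIndices
      .map((i) => `<a href="${chunks[i].uri}" target="_blank">[${i + 1}]</a>`)
      .join('');
    result =
      result.slice(0, support.segment.endIndex) +
      links +
      result.slice(support.segment.endIndex);
  }
  return result;
}
```

Sorting by descending `endIndex` is the key design choice: inserting text shifts every later offset, so processing back to front avoids recomputing positions.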
❌ DON'T hardcode API keys in your source code!

✅ DO use environment variables:

1. Create a `.env.local` file:

   ```
   VITE_GEMINI_API_KEY=your_actual_key_here
   ```

2. Add `.env.local` to `.gitignore`:

   ```
   .env.local
   ```

3. Update `gemini.service.ts`:

   ```typescript
   constructor() {
     this.ai = new GoogleGenAI({
       apiKey: import.meta.env.VITE_GEMINI_API_KEY,
     });
   }
   ```
TODO: Add error handling to your service:
```typescript
async sendMessage(prompt: string): Promise<ChatMessage> {
  try {
    if (!this.chat) {
      this.startChat();
    }
    const response = await this.chat!.sendMessage({ message: prompt });
    return await this.handleResponse(response);
  } catch (error) {
    console.error('Gemini API error:', error);
    return {
      role: 'model',
      content: 'Sorry, I encountered an error. Please try again.',
    };
  }
}
```

| Model | Use Case | Speed | Quality |
|---|---|---|---|
| `gemini-2.5-flash` | Fast responses, chat apps | ⚡⚡⚡ | ⭐⭐⭐ |
| `gemini-2.5-pro` | Complex tasks, analysis | ⚡⚡ | ⭐⭐⭐⭐⭐ |
| `gemini-2.0-flash-exp` | Experimental features | ⚡⚡⚡ | ⭐⭐⭐⭐ |
Ready to level up? Try these challenges:
Create functions for:
- Currency conversion
- Unit conversion (miles to km)
- Date/time in different timezones
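A possible starting point for the currency-conversion challenge, in the same mock style as `getWeather`. The rates in `ratesToUsd` are made-up placeholders, not market data; a production tool would fetch them from an exchange-rate API.

```typescript
// Mock currency converter in the style of the other tool functions.
// Rates are hardcoded placeholders relative to USD.
function convertCurrency(amount: number, from: string, to: string): object {
  const ratesToUsd: { [code: string]: number } = {
    USD: 1,
    EUR: 1.08,
    JPY: 0.0067,
  };
  const fromRate = ratesToUsd[from.toUpperCase()];
  const toRate = ratesToUsd[to.toUpperCase()];
  if (!fromRate || !toRate) {
    return { error: `Unsupported currency: ${!fromRate ? from : to}` };
  }
  // Convert via USD: amount -> USD -> target currency
  const converted = (amount * fromRate) / toRate;
  return { amount, from, to, converted: converted.toFixed(2) };
}
```

As with `getStockPrice`, remember to register it in `toolImplementations` and add a matching entry to `functionDeclarations` so Gemini knows when to call it.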
Implement streaming to show responses as they're generated (typewriter effect).
Hint: Use `streamGenerateContent()` instead of `generateContent()`.
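The hint names the underlying REST method; the JS SDK's chat object exposes a streaming variant (likely `sendMessageStream`, but verify against the `@google/genai` version you installed). Independent of the exact SDK call, the typewriter effect reduces to consuming an async iterable of text chunks, which this sketch shows with the stream type left generic:

```typescript
// Consume a stream of { text } chunks, reporting the accumulated
// text after each chunk (the typewriter effect). The stream source
// is left generic; in the app it would come from the SDK's
// streaming call on the chat object.
async function typewriter(
  stream: AsyncIterable<{ text?: string }>,
  onUpdate: (partial: string) => void,
): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    full += chunk.text ?? '';
    onUpdate(full); // e.g. update a signal bound to the chat UI
  }
  return full;
}
```

In the component, `onUpdate` would write to a signal so Angular re-renders the growing message on every chunk.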
Save chat history to localStorage and restore on page reload.
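One way to approach the persistence challenge: small helpers that take a Storage-like parameter so they stay testable outside the browser; in the app you would pass `window.localStorage`. `StoredMessage` and the key name are simplifications chosen for this sketch, not part of the codelab's code.

```typescript
// Storage-like abstraction so the helpers work with localStorage
// in the browser and with a simple mock in tests.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface StoredMessage { role: string; content: string; }

const HISTORY_KEY = 'chat-history'; // key name is an arbitrary choice

function saveHistory(messages: StoredMessage[], storage: StorageLike): void {
  storage.setItem(HISTORY_KEY, JSON.stringify(messages));
}

function loadHistory(storage: StorageLike): StoredMessage[] {
  const raw = storage.getItem(HISTORY_KEY);
  if (!raw) return [];
  try {
    return JSON.parse(raw) as StoredMessage[];
  } catch {
    return []; // corrupted data: start with an empty history
  }
}
```

Call `saveHistory` after each message is appended and `loadHistory` once in the component's constructor to restore the previous session.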
Allow users to upload images and ask questions about them.
Hint: Use `GoogleGenAI.models.generateContent()` with image parts.
Add custom system instructions to control the AI's personality and behavior.
```typescript
this.chat = this.ai.chats.create({
  model: 'gemini-2.5-flash',
  config: {
    systemInstruction: 'You are a helpful assistant specialized in...',
    tools: [...]
  },
});
```

Congratulations! You've successfully:
- ✅ Integrated Gemini API into an Angular application
- ✅ Implemented function calling with custom tools
- ✅ Built a real-time chat interface
- ✅ Understood grounding and search capabilities
- ✅ Created your own custom AI-powered tool
- Build a production-ready chatbot for your website
- Integrate with your existing APIs and databases
- Explore multimodal capabilities (images, audio, video)
- Experiment with different Gemini models
- Deploy your app to production
Solution: Double-check your API key in `gemini.service.ts`. Make sure there are no extra spaces.
Solution:

- Verify function declarations match implementation names
- Check that descriptions are clear and specific
- Ensure `functionDeclarations` is imported in the service
Solution: The Gemini API should handle CORS automatically. If issues persist, check your API key permissions.
Solution:

- Use `gemini-2.5-flash` instead of `pro` for faster responses
- Reduce the complexity of your prompts
- Check your internet connection
Found an issue or have suggestions? Please let us know!
Happy Coding! 🎉
Last Updated: October 29, 2025
Codelab Version: 1.0