This repository contains comprehensive hands-on materials for learning Spring AI, from basic LLM integration to advanced agent architectures and Model Context Protocol (MCP) implementations.
This workshop will take you on a journey through the exciting world of AI-powered Java applications using Spring AI. You'll learn how to:
- Integrate Large Language Models (LLMs) into Spring Boot applications
- Implement function calling to let AI execute business logic
- Build RAG systems for context-aware document Q&A
- Create MCP servers to expose tools for AI agents
- Develop MCP clients to consume remote tools
- Design skill-based agents with reusable capabilities
Before starting the workshop, ensure you have:
- Java 21 or higher installed
- Gradle (wrapper included in each project)
- IDE: IntelliJ IDEA, VS Code, or Eclipse
- API Keys:
- OpenAI API key
- Google Gemini API key
- Google Cloud Platform account (for Session 3)
- PostgreSQL with pgvector extension (for Session 3)
- Basic knowledge of Spring Boot and REST APIs
Learn the fundamentals of integrating LLMs with Spring AI.
- Basic chat completions
- System prompts and context
- Conversation memory
- Prompt templates
- Structured output generation
- Same code, different provider
- Understanding Spring AI's abstraction power
- Comparing OpenAI vs Gemini behavior
Key Takeaway: Spring AI's provider-agnostic design lets you switch LLM providers with minimal code changes.
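The takeaway above can be sketched as follows. This is a minimal illustration, not workshop code: the controller and endpoint names are hypothetical, and which model backs the auto-configured `ChatClient.Builder` depends solely on the provider starter you put on the classpath.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// The ChatClient is built from whichever provider starter is on the
// classpath (OpenAI, Gemini, ...); this calling code never changes.
@RestController
class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder for the active provider
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder
                .defaultSystem("You are a concise, helpful assistant.")
                .build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String question) {
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```

Switching from OpenAI to Gemini is then a dependency and configuration change, not a code change.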
Teach LLMs to execute functions and interact with your business logic.
What You'll Learn:
- Spring AI's tool calling mechanism
- Implementing functions LLMs can invoke
- Using `@ToolCallbackDescription` annotations
- Building intelligent assistants that query business data
Example: Create a customer service bot that can look up orders, check inventory, and process refunds.
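A tool like the order lookup above might look like this with Spring AI's `@Tool` annotation. This is a hedged sketch: the class, record, and return values are illustrative, and the real workshop code would query an actual repository.

```java
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;

// Hypothetical order-lookup tool; the LLM decides when to invoke it
// based on the description metadata.
class OrderTools {

    record OrderStatus(String orderId, String status) {}

    @Tool(description = "Look up the current status of a customer order by id")
    OrderStatus getOrderStatus(@ToolParam(description = "The order id") String orderId) {
        // Placeholder result; a real implementation would hit the database
        return new OrderStatus(orderId, "SHIPPED");
    }
}

// Registering the tool for a single request:
// String answer = chatClient.prompt()
//         .user("Where is order 42?")
//         .tools(new OrderTools())
//         .call()
//         .content();
```

The model receives the tool's name, description, and parameter schema; when the user asks about an order, it emits a tool call that Spring AI routes to the method and feeds the result back into the conversation.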
Build document Q&A systems with semantic search and vector databases.
What You'll Learn:
- Processing documents (PDF, Word, Text)
- Storing embeddings in PGVector
- Semantic search implementation
- Using Google Vertex AI Gemini
- Context-aware responses with QuestionAnswerAdvisor
Example: Create a system that answers questions about your company's documentation.
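The `QuestionAnswerAdvisor` flow can be sketched like this. Assumptions are flagged: the service class is hypothetical, and the advisor's package has moved between Spring AI milestones, so check the version used in the session.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.vectorstore.QuestionAnswerAdvisor;
import org.springframework.ai.vectorstore.VectorStore;

// Answers are grounded in documents retrieved from the vector store
class DocsQaService {

    private final ChatClient chatClient;

    DocsQaService(ChatClient.Builder builder, VectorStore vectorStore) {
        this.chatClient = builder
                // the advisor runs a similarity search and appends the
                // retrieved chunks to the prompt before the model is called
                .defaultAdvisors(QuestionAnswerAdvisor.builder(vectorStore).build())
                .build();
    }

    String ask(String question) {
        return chatClient.prompt().user(question).call().content();
    }
}
```

With PGVector as the `VectorStore` implementation, the same code works against embeddings you ingested from PDFs, Word files, or plain text.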
Learn to build MCP servers that expose tools for AI agents.
- Building a Spring Boot MCP server
- Exposing tools via STDIO and HTTP protocols
- MCP server configuration and profiles
- Creating a Todo application with MCP tools
- Adding authentication to MCP servers
- User-specific todo management
- Firebase integration
- OAuth2 resource server configuration
- Custom authentication entry points
- OAuth2 resource metadata endpoints
- Standardized error responses
- Production-ready security
Key Concept: MCP (Model Context Protocol) standardizes how AI agents discover and invoke tools from remote servers.
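Exposing a tool from a Spring Boot MCP server can be sketched as below. The todo tool and its behavior are illustrative placeholders; the pattern of registering a `ToolCallbackProvider` bean is what the MCP server starter builds on.

```java
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical todo tool exposed over MCP
class TodoTools {

    @Tool(description = "Add a todo item for the current user")
    String addTodo(String title) {
        return "Created todo: " + title;   // placeholder; real code persists it
    }
}

@Configuration
class McpServerConfig {

    // The MCP server starter picks up ToolCallbackProvider beans and
    // advertises their tools to any connected MCP client.
    @Bean
    ToolCallbackProvider todoToolProvider() {
        return MethodToolCallbackProvider.builder()
                .toolObjects(new TodoTools())
                .build();
    }
}
```

Whether the tools are served over STDIO or HTTP is then a matter of configuration profile, not code.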
Take your MCP server to production.
What You'll Learn:
- OAuth2 resource metadata APIs
- OpenID Connect discovery
- Comprehensive health monitoring
- Multi-cloud deployment (AWS, GCP, Azure)
- Production environment configuration
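A production profile covering the security and health-monitoring points above might look roughly like this. All values are placeholders; the issuer URI in particular depends on your identity provider (Firebase in this workshop's setup).

```yaml
# Illustrative production profile; issuer URI is a placeholder
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: https://your-issuer.example.com

management:
  endpoints:
    web:
      exposure:
        include: health,info
  endpoint:
    health:
      probes:
        enabled: true   # liveness/readiness probes for cloud deployments
```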
Build clients that consume MCP server tools.
What You'll Learn:
- Connecting to remote MCP servers
- Discovering and invoking remote tools
- Integrating MCP tools with ChatClient
- Letting LLMs use tools from multiple servers
Architecture:
User Question → LLM + MCP Client → MCP Server → Tool Execution → Response
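The architecture above can be sketched on the client side as follows. This is an assumption-laden sketch: the service class is hypothetical, and it relies on the MCP client starter auto-configuring a `ToolCallbackProvider` for the remote servers listed in your configuration.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.ToolCallbackProvider;

// With the MCP client starter, Spring AI exposes every tool discovered
// on the configured remote MCP servers through a ToolCallbackProvider.
class AgentService {

    private final ChatClient chatClient;

    AgentService(ChatClient.Builder builder, ToolCallbackProvider mcpTools) {
        this.chatClient = builder
                .defaultToolCallbacks(mcpTools)   // remote tools join every prompt
                .build();
    }

    String ask(String question) {
        return chatClient.prompt().user(question).call().content();
    }
}
```

From the LLM's point of view, tools from multiple remote servers are indistinguishable from local `@Tool` methods.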
Create reusable, skill-based agent architectures.
What You'll Learn:
- Skill-based agent design patterns
- Creating reusable prompt templates as markdown files
- Dynamic skill loading with SkillsTool
- Integrating file system operations
Why Skills?
- Reusable: Define once, use everywhere
- Maintainable: Update prompts without code changes
- Shareable: Share skills across projects
- Versionable: Track in git
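One way the markdown-file approach above can work is to load each skill file as a Spring AI `PromptTemplate` and fill in its placeholders at runtime. The `skills/` path, file naming, and loader class here are illustrative, not the workshop's `SkillsTool` itself.

```java
import org.springframework.ai.chat.prompt.PromptTemplate;
import org.springframework.core.io.ClassPathResource;

import java.util.Map;

// Loads a markdown "skill" from the classpath and renders it with
// runtime variables, e.g. skills/summarize.md containing {document}.
class SkillLoader {

    String renderSkill(String skillName, Map<String, Object> variables) {
        PromptTemplate template = new PromptTemplate(
                new ClassPathResource("skills/" + skillName + ".md"));
        return template.render(variables);
    }
}
```

Because the skill lives in a plain markdown file, prompt changes are a git commit away and never require recompiling the application.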
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/jug-ws.git
   cd jug-ws
   ```

2. Start with Session 1:

   ```bash
   cd session-1/llm-as-api-openai
   ```

3. Create a `.env` file with your API keys:

   ```
   OPENAI_API_KEY=sk-your-api-key-here
   ```

4. Follow the README in each session directory for detailed instructions.
We recommend following the sessions in order:
Session 1 → Session 2 → Session 3 → Session 4 (Ch1-3) → Session 6 → Session 7 → Session 8
However, each session is self-contained and can be explored independently based on your interests.
- Spring Boot 3.x
- Spring AI 2.0.0-M2 / 1.1.0-M4 (depending on session)
- Java 21
- Gradle (build tool)
- H2 / PostgreSQL (databases)
- Firebase (authentication)
- OpenAI API
- Google Gemini API
- Google Vertex AI
- Spring AI Documentation
- Model Context Protocol Specification
- OpenAI API Reference
- Google Gemini Documentation
- Read the READMEs: Each session has detailed step-by-step instructions
- Uncomment dependencies: Most projects have dependencies commented out for learning purposes
- Check your API keys: Ensure your `.env` files are properly configured
- Test incrementally: Run the code after each major step
- Experiment: Try modifying prompts and parameters to see how behavior changes
- Ask questions: Leverage the JUG community for support
This workshop material is provided for educational purposes.
Happy Learning! 🚀
For questions or support, reach out to the Tamil Nadu Java User Group community.