Spring AI Simple Chat
=====================================
This is a simple Spring Boot application that implements an AI chatbot. It uses LLaMA, an open-source large language model developed by Meta AI, served locally through Ollama.
Prerequisites

- Java 21 (JDK)
- Ollama with the LLaMA 3.2 model available locally
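If the model is not available yet, you can fetch it with the Ollama CLI. A minimal sketch; `llama3.2` is the assumed model tag, and Ollama listens on port 11434 by default:

```bash
# Pull the LLaMA 3.2 model (assumed tag) and start the Ollama server
ollama pull llama3.2
ollama serve   # not needed if Ollama is already running as a service
```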
To build the application and run it locally:

```bash
mvn clean package
java -jar target/spring-ai-simple-chat-0.0.1.jar
```

This project uses the following dependencies:
- Spring Boot Starter Web: for creating a web-based chat interface
- LLaMA 3.2 LLM: for generating responses to user input
- Ollama Java Client: for interacting with the LLaMA model
The application features a simple web interface that allows users to interact with the chatbot.
The chatbot uses the LLaMA model to generate responses to user input. It can engage in basic conversations, answer questions, and provide information on various topics.
- Open a web browser and navigate to http://localhost:8080/swagger-ui/index.html.
- The application exposes a single API endpoint, which takes a user prompt and waits for the LLM to respond (see the example below).
- The LLM does not keep conversation context between requests.
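You can also call the endpoint directly with curl. This is a sketch only: the path and parameter name used here (`/api/chat`, `prompt`) are assumptions, so check the Swagger UI for the actual values:

```bash
# Hypothetical endpoint and parameter; consult Swagger UI for the real ones
curl "http://localhost:8080/api/chat?prompt=Hello%2C%20what%20can%20you%20do%3F"
```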
Note: This is a simple implementation and may not cover all scenarios. You can extend the application to add more features and functionality as needed.
This project includes a Dockerfile so you can build and run the application as a container.
Ports
- The application listens on port 8080 by default. The container image exposes that port.
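For example (a sketch, assuming the image tag built in the section below), you can publish the container's port 8080 on a different host port without changing any Spring configuration:

```bash
# Serve the app on host port 9090 while the container still listens on 8080
docker run --rm -p 9090:8080 \
  -e SPRING_AI_OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  spring-ai-simple-chat:latest
```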
Environment variables (Spring Boot relaxed binding)

- SPRING_AI_OLLAMA_BASE_URL: overrides `spring.ai.ollama.base-url` (example default in `application.yml`: `http://host.docker.internal:11434`).
- SERVER_PORT: overrides the server port (Spring property `server.port`).
- You can also pass any other Spring Boot property as an environment variable using the standard Spring Boot relaxed binding rules (illustrated below).
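As a quick illustration of relaxed binding when running the JAR directly (the property values here are examples only):

```bash
# Relaxed binding: spring.ai.ollama.base-url -> SPRING_AI_OLLAMA_BASE_URL
# (dots become underscores, dashes are dropped, letters are upper-cased)
export SPRING_AI_OLLAMA_BASE_URL=http://localhost:11434
export SERVER_PORT=9090
java -jar target/spring-ai-simple-chat-0.0.1.jar
```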
Build the Docker image

```bash
docker build -t spring-ai-simple-chat:latest .
```

Run the container (Ollama running on your host / Docker Desktop on macOS)

```bash
# If Ollama is running on the host (macOS Docker Desktop), use host.docker.internal
docker run --rm -p 8080:8080 \
  -e SPRING_AI_OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  spring-ai-simple-chat:latest
```

Run the container (Ollama running in another container on the same Docker network)
```bash
# Create a network and run Ollama (example) with name 'ollama'
# docker network create ai-net
# docker run -d --name ollama --network ai-net <ollama-image>
docker run --rm -p 8080:8080 \
  --network ai-net \
  -e SPRING_AI_OLLAMA_BASE_URL=http://ollama:11434 \
  spring-ai-simple-chat:latest
```

Alternative: run the packaged JAR inside an OpenJDK container (no Dockerfile)

```bash
docker run --rm -p 8080:8080 -v "$(pwd)/target/spring-ai-simple-chat-0.0.1.jar:/app/app.jar" \
  openjdk:21-jdk java -jar /app/app.jar
```

Troubleshooting
- On macOS Docker Desktop, `host.docker.internal` resolves to the host, so the app container can reach an Ollama instance running on the host.
- On Linux, `host.docker.internal` may not be available; use a user-defined network and reference the Ollama container by name, or set the host machine's IP.
- If the app cannot reach the LLM, check that the base URL is reachable from inside the container (e.g., `docker exec` plus curl, as shown below) and that Ollama is listening on port 11434.
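For example (a sketch; the container name is a placeholder, and it assumes curl is installed in the image):

```bash
# List the models Ollama can serve, checked from inside the running app container
docker exec -it <app-container-name> curl -s http://host.docker.internal:11434/api/tags
```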
Where to go next
- After the container is running, visit http://localhost:8080/swagger-ui/index.html to test the API.