1 change: 1 addition & 0 deletions documentation/modules/ROOT/nav.adoc
@@ -23,6 +23,7 @@
** xref:16_kafka-and-streams.adoc[Apache Kafka with Reactive Streams]

* AI
** xref:17_ai_intro.adoc[AI with Quarkus]
** xref:17_prompts.adoc[Working with prompts]
** xref:18_chains_memory.adoc[Chains and Memory]
** xref:19_agents_tools.adoc[Agents/Tools]
14 changes: 14 additions & 0 deletions documentation/modules/ROOT/pages/17_ai_intro.adoc
@@ -0,0 +1,14 @@
= Quarkus and AI

AI is becoming an intrinsic part of software development. It can help us write, test, and debug code. We can also infuse AI models directly into our applications. Conversely, we can create functions/tools that AI agents can call to augment their capabilities and knowledge.

Quarkus supports a few different ways to work with AI, mainly leveraging the LangChain4j extension. There are also other extensions, such as the Quarkus MCP server, which allows you to serve tools to be consumed by AI agents.

In this chapter, we'll explore how to work with AI models. We'll cover:

* Prompting AI models in your applications
* Preserving state between calls
* Creating Tools for use by AI Agents
* Embedding Documents that can be queried by LLMs
* Building a chatbot
* Working with local models (using Podman Desktop AI Lab)

21 changes: 16 additions & 5 deletions documentation/modules/ROOT/pages/17_prompts.adoc
@@ -4,11 +4,17 @@

The Quarkus LangChain4j extension seamlessly integrates Large Language Models (LLMs) into Quarkus applications. LLMs are AI-based systems designed to understand, generate, and manipulate human language, showcasing advanced natural language processing capabilities. This extension lets us harness LLM capabilities to develop more intelligent applications.

In this first chapter, we'll explore the simplest of interactions with an LLM: Prompting. It essentially means just asking questions to an LLM and receiving an answer in natural language from a given model, such as ChatGPT, Granite, Mistral, etc.


== Creating a Quarkus & LangChain4j Application

We're going to use the langchain4j-openai extension for our first interaction with models.
The openai extension supports models that expose the OpenAI API specification, which has been open sourced.
Several models and model providers expose this API specification. If you want to use
a different API spec, you can likely find a supported extension in the https://docs.quarkiverse.io/quarkus-langchain4j/dev/llms.html[Quarkus documentation].


[tabs%sync]
====

@@ -18,7 +24,7 @@ Maven::
[.console-input]
[source,bash,subs="+macros,+attributes"]
----
mvn "io.quarkus.platform:quarkus-maven-plugin:create" -DprojectGroupId="com.redhat.developers" -DprojectArtifactId="{project-ai-name}" -DprojectVersion="1.0-SNAPSHOT" -Dextensions=rest,langchain4j-openai
cd {project-ai-name}
----
--
@@ -29,7 +35,7 @@ Quarkus CLI::
[.console-input]
[source,bash,subs="+macros,+attributes"]
----
quarkus create app -x rest -x langchain4j-openai com.redhat.developers:{project-ai-name}:1.0-SNAPSHOT
cd {project-ai-name}
----
--
@@ -44,9 +50,13 @@ LangChain4j provides a proxy to connect your application to OpenAI by just a
[.console-input]
[source,properties]
----
# Free demo key for basic usage of OpenAI ChatGPT
quarkus.langchain4j.openai.api-key=demo
# Change this URL to the model provider of your choice
quarkus.langchain4j.openai.base-url=https://api.openai.com/v1
----

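If you prefer not to hardcode these values, the same settings can be supplied as environment variables: Quarkus reads configuration through MicroProfile Config, which maps a property name to uppercase with dots and dashes replaced by underscores. A sketch, using the same demo values as above:

```shell
# Environment-variable equivalents of the application.properties entries:
# quarkus.langchain4j.openai.api-key  -> QUARKUS_LANGCHAIN4J_OPENAI_API_KEY
# quarkus.langchain4j.openai.base-url -> QUARKUS_LANGCHAIN4J_OPENAI_BASE_URL
export QUARKUS_LANGCHAIN4J_OPENAI_API_KEY=demo
export QUARKUS_LANGCHAIN4J_OPENAI_BASE_URL=https://api.openai.com/v1
```

Environment variables take precedence over `application.properties`, which is handy for keeping real API keys out of version control.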

== Create the AI service

First we need to create an interface for our AI service.
@@ -68,7 +78,7 @@ public interface Assistant {

== Create the prompt-based resource

Now we're going to implement a resource that sends prompts using the AI service.

Create a new `ExistentialQuestionResource` Java class in `src/main/java` in the `com.redhat.developers` package with the following contents:

@@ -136,7 +146,8 @@ You can also run the following command:
curl -w '\n' localhost:8080/earth/flat
----

An example of the output you might see (yours will likely differ slightly, given the non-deterministic LLM):

[.console-output]
[source,text]
4 changes: 2 additions & 2 deletions documentation/modules/ROOT/pages/18_chains_memory.adoc
@@ -8,7 +8,7 @@ In this section, we'll cover how we can achieve this with the LangChain4j extension

== Create an AI service with memory

Let's create an interface for our AI service, but with memory this time.

Create a new `AssistantWithMemory` Java interface in `src/main/java` in the `com.redhat.developers` package with the following contents:

@@ -255,4 +255,4 @@ The result will appear in your Quarkus terminal. An example of output (it can vary on each prompt execution):
------------------------------------------
----

NOTE: Take a close look at the IDs of our calls to the assistant. Do you notice that the last question was in fact directed to Klaus with ID=1? We were indeed able to maintain 2 separate and concurrent conversations with the LLM.
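Conceptually, each memory ID simply selects its own message history. A deliberately simplified shell sketch of the idea (the `remember` helper and the log files are hypothetical, purely for illustration):

```shell
# Toy model of per-ID chat memory: one history file per memory ID,
# so two concurrent conversations never mix.
remember() { # remember <id> <message>
  echo "$2" >> "memory-$1.log"
}

rm -f memory-1.log memory-2.log
remember 1 "Hello, my name is Klaus"
remember 2 "Hello, my name is Francine"
remember 1 "What is my name?"

# Only Klaus's conversation is in memory 1:
cat memory-1.log
```

The real `ChatMemory` managed by LangChain4j does the same partitioning, except each history also gets sent back to the LLM with every request.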
45 changes: 34 additions & 11 deletions documentation/modules/ROOT/pages/19_agents_tools.adoc
@@ -12,7 +12,7 @@ You can read more about this in the https://docs.quarkiverse.io/quarkus-langchai

== Add the Mailer and Mailpit extensions

Open a new terminal window, and make sure you’re at the root of your `{project-ai-name}` project, then run the following command to add emailing capabilities to our application:

[tabs]
====
@@ -79,7 +79,8 @@ public class EmailService {
Let's create an interface for our AI service, but with `SystemMessage` and `UserMessage` this time.
`SystemMessage` gives context to the AI Model.
In this case, we tell it that it should craft a message as if it is written by a professional poet.
The `UserMessage` is the actual instruction/question we're sending to the AI model.
As you can see in the example below,
you can format and parameterize the `UserMessage`, translating structured content to text and vice-versa.

Create a new `AssistantWithContext` Java interface in `src/main/java` in the `com.redhat.developers` package with the following contents:
@@ -143,21 +144,41 @@ public class EmailMeAPoemResource {
}
----

== Modify application.properties to use the email Tools

Tool calling is not supported with the OpenAI `demo` key, so we will need to
either use a real API key or use a local model that supports tools.
If you want to use OpenAI's ChatGPT, you can create and fund an account at https://platform.openai.com/[OpenAI] and then set the `quarkus.langchain4j.openai.api-key` property to your key.

We will use a local (free) open source model served with Ollama instead.
To do this, you will need to https://ollama.com/download[download and install Ollama].
Once that's done, you will need to https://ollama.com/search?c=tools[download a model that supports tool calling], such as `granite3.1-dense:2b`. To do so, execute the command:

[#quarkuspdb-dl-ollama]
[.console-input]
[source,config,subs="+macros,+attributes"]
----
ollama pull granite3.1-dense:2b
----

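Ollama serves pulled models behind an OpenAI-compatible API on port 11434, which is why the `openai` extension can talk to it. Purely as an illustration (LangChain4j assembles the real request for you, and the messages here are made up), the chat-completion payload that goes over the wire looks roughly like this:

```shell
# Sketch of an OpenAI-style chat request body; the model name must match
# the model you pulled with `ollama pull`.
REQUEST_BODY=$(cat <<'EOF'
{
  "model": "granite3.1-dense:2b",
  "messages": [
    {"role": "system", "content": "You are a professional poet."},
    {"role": "user", "content": "Write a short poem about Quarkus."}
  ]
}
EOF
)

# With the Ollama server running you could POST it yourself:
# curl -s http://localhost:11434/v1/chat/completions \
#      -H 'Content-Type: application/json' -d "$REQUEST_BODY"
echo "$REQUEST_BODY"
```

This is also why switching providers below only requires changing the `base-url` and model-name properties.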
NOTE: If you do not want to go through the trouble of creating an OpenAI account or installing Ollama, you can still test the scenario below; it just won't send an email, since the "Tool" functionality won't work.

Modify the `application.properties` as below:

[#quarkuspdb-update-props]
[.console-input]
[source,config,subs="+macros,+attributes"]
----
# Set your OpenAI API key here if you want to use OpenAI instead
# quarkus.langchain4j.openai.api-key=demo

# With Ollama
quarkus.langchain4j.openai.base-url=http://localhost:11434/v1
# Configure server to use a specific model
quarkus.langchain4j.openai.chat-model.model-name=granite3.1-dense:2b
quarkus.langchain4j.openai.embedding-model.model-name=granite3.1-dense:2b

quarkus.langchain4j.openai.log-requests=true
quarkus.langchain4j.openai.log-responses=true
@@ -166,7 +187,9 @@ quarkus.langchain4j.openai.timeout=60s
%dev.quarkus.mailer.mock=false
----

Make sure your Quarkus Dev mode is still running. It should have reloaded with the new configuration.

Because we haven't configured the local email service, Quarkus will also have started a Dev Service to instantiate and configure a local email service for you (in dev mode only!).

You can check it running:

@@ -200,15 +223,15 @@ You can also run the following command:
curl localhost:8080/email-me-a-poem
----

An example of output (it will vary on each prompt execution):

[.console-output]
[source,text]
----
I have composed a poem about Quarkus. I have sent it to you via email. Let me know if you need anything else
----

If you have a tool calling model configured, you can check your inbox for the actual email:

First, open the http://localhost:8080/q/dev-ui[DevUI, window=_blank] and click on the Mailpit arrow.

14 changes: 11 additions & 3 deletions documentation/modules/ROOT/pages/20_embed_documents.adoc
@@ -1,4 +1,5 @@
= Embedding Documents and Creating a Chatbot
:description: Learn how to embed documents and create a chatbot using LangChain4j in Quarkus.

:project-ai-name: quarkus-langchain4j-app

@@ -44,7 +45,14 @@ Add the following properties to your `application.properties` so that it looks like this:
[.console-input]
[source,config,subs="+macros,+attributes"]
----
# Set your OpenAI API key here if you want to use OpenAI instead
# quarkus.langchain4j.openai.api-key=demo

# With Ollama
quarkus.langchain4j.openai.base-url=http://localhost:11434/v1
# Configure server to use a specific model
quarkus.langchain4j.openai.chat-model.model-name=granite3.1-dense:2b
quarkus.langchain4j.openai.embedding-model.model-name=granite3.1-dense:2b

quarkus.langchain4j.openai.log-requests=true
quarkus.langchain4j.openai.log-responses=true
@@ -71,7 +79,7 @@ quarkus.langchain4j.openai.chat-model.model-name=gpt-4o #<4>

== Embedding the business document

NOTE: If you don't provide a model that supports embeddings and tools, you will still be able to go through this exercise, but the "Tools" functions won't be called, resulting in unexpected answers. See the previous "Agents and Tools" chapter for more information.

Let's provide a document containing the service's terms of use:

6 changes: 3 additions & 3 deletions documentation/modules/ROOT/pages/21_podman_ai.adoc
@@ -2,15 +2,15 @@

:project-podman-ai-name: quarkus-podman-ai-app

Throughout this tutorial, we've been working with OpenAI's remote models or Ollama's models on our local machine. Wouldn't it be nice if we could work
with models on our local machine (without incurring costs) AND have a nice visualization of what's going on?

Podman Desktop is a GUI tool that helps with running and managing containers on our local machine, but it can also help with running AI models locally thanks to its AI Lab extension. Thanks to Quarkus and LangChain4j, it then becomes trivial to start developing with these models. Let's find out how!


== Installing Podman Desktop AI

First, if you haven't yet, download and install Podman Desktop on your operating system. https://podman-desktop.io/downloads[The instructions can be found here, window="_blank"].

NOTE: For Windows/macOS users, if you can, give the Podman machine at least 8GB of memory and 4 CPUs (generative AI models are resource hungry!). The model can run with fewer resources, but it will be significantly slower.
