JPromptManager - A simple Java library to efficiently manage and run LLM prompts


If you've developed an application that makes heavy use of LLMs, you've probably run into the issue of managing your many prompts efficiently. JPromptManager tries to solve the most common pain points by providing a simple interface to store, manage and run prompts.

JPromptManager allows you to:

  1. Store your prompts in an easily editable and machine-readable XML format, using a file you can keep under version control;
  2. Run multi-step prompts with variable interpolation without having to worry about correctly chaining the calls to the underlying LLM;
  3. Access the output of your prompts in a standardized, Java-friendly way, e.g. allowing to easily deserialize to POJOs.

JPromptManager is structured in such a way that it can be extended to support any LLM. At the moment, though, we only provide an implementation that connects to OpenAI, supporting both the GPT-3 and ChatGPT endpoints (which in turn relies on openai-java). If you want to know how to extend it, look at the available implementations of the LLMConnector interface; if you need any additional guidance, feel free to open an issue.

Usage

Basic usage

  1. Define your prompts in the `prompts.xml` file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<prompts>
	<prompt type="ExampleCreateTagline">
		<step name="tagline">
			Write a tagline for a [[${shopType}]] shop.
		</step>
	</prompt>
</prompts>
```

Each prompt must define a type attribute which must correspond to a Java class that extends Prompt&lt;T&gt; (where T is the type of object returned by the prompt). A prompt can be made of multiple steps, and each step must have a unique name. The text of each step is processed with Thymeleaf, which allows for complex templating.
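Because step text is a Thymeleaf text template, you are not limited to simple [[${...}]] substitution; text-mode constructs such as conditionals also work. A hypothetical example (the optional country variable and prompt name below are illustrative, not from the library's examples):

```xml
<prompt type="ExampleConditionalTagline">
	<step name="tagline">
		Write a tagline for a [[${shopType}]] shop.
		[# th:if="${country != null}"]The shop is located in [[${country}]].[/]
	</step>
</prompt>
```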

  2. There must be a 1-to-1 mapping between prompts in the XML file and Java classes. In this simple scenario, we can define our class as:

```java
public class ExampleCreateTagline extends SingleStepStringPrompt {

}
```

The SingleStepStringPrompt class is provided by JPromptManager and its default implementation is already enough for our needs, as it will return the raw response as a String (it extends Prompt<String>).

  3. Initialize our LLM connector and JPromptManager:

```java
LLMConnector openAI = new OpenAIGPT3Connector("OPENAI_KEY", 0, "text-davinci-003");
// or for ChatGPT: LLMConnector openAI = new OpenAIChatGPTConnector("OPENAI_KEY", 0, "gpt-3.5-turbo");
JPromptManager jPrompt = new JPromptManager(openAI, Paths.get("src/main/resources/prompts-examples.xml"));
```
  4. Run our prompt:

```java
String tagline = jPrompt.complete(ExampleCreateTagline.class, new PromptContextBuilder().set("shopType", "car repair").build());
System.out.println(tagline);
```

The second parameter to the complete method contains the variables that are to be replaced in the template. This parameter can be omitted if the prompt contains no variables.

Multi-step prompts

Sometimes we are interested in sending multiple sequential prompts and collecting all the answers. For example:

```xml
<prompt type="ExampleCreateTaglineLocation">
	<step name="tagline">
		Write a tagline for a [[${shopType}]] shop.
	</step>
	<step name="location">
		Make up a location for the shop (in [[${country}]]).
	</step>
</prompt>
```

Here we have defined a prompt with two steps. Inside the same prompt, each step must have a unique name. We can define our mapping class as:

```java
public class ExampleCreateTaglineLocation extends MultiStepStringPrompt {

}
```

The MultiStepStringPrompt class provides a default implementation that returns the outputs of the various steps as a map keyed by step name.

```java
Map<String, String> complete = 
	jPrompt.complete(
		ExampleCreateTaglineLocation.class, 
		new PromptContextBuilder().set("shopType", "car repair").set("country", "France").build()
	);
System.out.println(complete);
```

Example output:

```
{tagline="We'll get you back on the road in no time!", location="Le Garage du Champs-Élysées - Quality Car Repairs in the Heart of Paris!"}
```
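Since the result is a plain Java map, each step's response can be read back individually by its step name. A self-contained sketch (the map below just simulates the output shown above; it does not call the LLM):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapAccessExample {
	// Simulated output of a MultiStepStringPrompt: step name -> raw LLM response.
	static Map<String, String> simulatedOutput() {
		Map<String, String> complete = new LinkedHashMap<>();
		complete.put("tagline", "We'll get you back on the road in no time!");
		complete.put("location", "Le Garage du Champs-Élysées");
		return complete;
	}

	public static void main(String[] args) {
		Map<String, String> complete = simulatedOutput();
		// Each step's response is retrieved by the step name from the XML file.
		System.out.println(complete.get("tagline"));
		System.out.println(complete.get("location"));
	}
}
```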

Mapping output to POJOs

It is often the case that we want to have our prompt output deserialized directly into a POJO. For example, let's say we have a class:

```java
public class Shop {
	private String name;

	private String tagline;

	private List<String> owners;

	private String shopHistory;

	// ... getters and setters as needed ...
}
```

and we want to instantiate it with details generated by our LLM. We could create a MultiStepStringPrompt and then build our Shop object manually (through its constructor or setters).
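The manual route might look like the sketch below. It is purely illustrative: the step names ("name", "tagline", "owners") and the fromResponses helper are assumptions for this example, not part of JPromptManager, and the Shop class is trimmed to the fields used here:

```java
import java.util.List;
import java.util.Map;

public class ManualShopExample {
	// Trimmed-down version of the Shop POJO from the README.
	static class Shop {
		String name;
		String tagline;
		List<String> owners;

		String getName() { return name; }
		String getTagline() { return tagline; }
	}

	// Build a Shop by hand from the step-name -> response map a MultiStepStringPrompt returns.
	static Shop fromResponses(Map<String, String> responses) {
		Shop shop = new Shop();
		shop.name = responses.get("name");
		shop.tagline = responses.get("tagline");
		// Assume the "owners" step was asked to return a comma-separated list.
		shop.owners = List.of(responses.get("owners").split(",\\s*"));
		return shop;
	}

	public static void main(String[] args) {
		Map<String, String> responses = Map.of(
			"name", "Garage Rapide",
			"tagline", "We fix it fast!",
			"owners", "Anna Rossi, Marc Dupont");
		Shop shop = fromResponses(responses);
		System.out.println(shop.getName() + " - " + shop.getTagline());
	}
}
```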

Alternatively, we can instruct our LLM to output JSON and deserialize it directly into a Shop object. First, we define a prompt like this:

```xml
<prompt type="ExampleCreateShop">
	<!--  
	When asking to generate JSON it's better to use a low temperature,
	as it will more reliably generate syntactically correct code.
	 -->
	<step name="shopJson" temperature="0">
	Create a JSON object representing a [[${shopType}]] shop in [[${country}]]. 
	Use the following template:
	{
		"name" : "the name of the shop",
		"tagline" : "a funny tagline for the shop",
		"owners" : ["a list of the people who own the shop, full names"]
	}
	</step>
</prompt>
```

As usual, we must create a corresponding class that extends Prompt<Shop>, and in this case it will contain our deserialization logic:

```java
public class ExampleCreateShop extends Prompt<Shop> {
	private static final Gson gson = new Gson();

	@Override
	public Shop getOutput() {
		return gson.fromJson(steps.get(0).getResponse(), Shop.class);
	}
}
```

When called:

```java
Shop shop = jPrompt.complete(ExampleCreateShop.class, new PromptContextBuilder().set("shopType", "car repair").set("country", "France").build());
```

we will get our output directly as a Shop object.

In this example our prompt consists of a single step. If there are more, you'll need to customize the deserialization logic in the getOutput method in order to properly build your final object.
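For instance, with a two-step prompt whose first step yields the tagline and whose second yields a free-text history, a getOutput override could combine both responses into one object. The sketch below is self-contained and simulates this: the Step record stands in for JPromptManager's real step objects (only the getResponse accessor shown in the README is assumed), and Shop is trimmed to two fields:

```java
import java.util.List;

public class MultiStepOutputExample {
	// Stand-in for a JPromptManager step: holds one step's raw LLM response.
	record Step(String name, String response) {
		String getResponse() { return response; }
	}

	// Minimal Shop with only the fields this sketch fills in.
	static class Shop {
		String tagline;
		String shopHistory;
	}

	// What a custom getOutput() could do: merge the responses of several steps.
	static Shop getOutput(List<Step> steps) {
		Shop shop = new Shop();
		shop.tagline = steps.get(0).getResponse();     // in the real case this might be parsed from JSON
		shop.shopHistory = steps.get(1).getResponse(); // second step's raw text
		return shop;
	}

	public static void main(String[] args) {
		List<Step> steps = List.of(
			new Step("tagline", "We fix it fast!"),
			new Step("history", "Founded in 1952 by two brothers..."));
		Shop shop = getOutput(steps);
		System.out.println(shop.tagline + " / " + shop.shopHistory);
	}
}
```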

Customizing prompt execution

LLMs usually expose several parameters that customize how completions are generated. We can define these on each prompt step as follows:

```xml
<prompt type="ExampleCreateShop">
	<step name="tagline" temperature="1" maxTokens="256" topP="1" model="text-davinci-003">
		Write a tagline for a car repair shop.
	</step>
</prompt>
```

These are the supported parameters for all available connectors.

GPT-3 completion

  • model (required)
  • temperature (optional, default 0)
  • maxTokens (optional, default 256)
  • topP (optional, default 1)

ChatGPT completion

  • model (required)
  • temperature (optional, default 0)
  • maxTokens (optional, default 256)

More

You can check the ExampleMain class (and the other classes in the same package) for a comprehensive list of examples of usage.
