
add logging and model parameter tutorials to docu #536

Merged
merged 2 commits on Jan 24, 2024
Changes from 1 commit
6 changes: 4 additions & 2 deletions docs/docs/tutorials/image-models.md
@@ -4,6 +4,8 @@ sidebar_position: 21

# 12. Image Models

Creating images: example in https://github.com/langchain4j/langchain4j-examples/blob/main/tutorials/src/main/java/_02_OpenAiImageModelExamples.java

Interpreting images: coming very soon

More elaborate content coming soon - or help us by adding it <3
44 changes: 44 additions & 0 deletions docs/docs/tutorials/logging.md
@@ -0,0 +1,44 @@
---
sidebar_position: 30
---

# 15. Logging

### Model requests and responses
Console logging of model requests and responses can be switched on and off by setting `.logRequests()` and `.logResponses()` on the model builder:

```
ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(ApiKeys.OPENAI_API_KEY)
        .logRequests(true)   // log every request sent to the model
        .logResponses(true)  // log every response received from the model
        .build();
```

### Default logging: Tinylog
By default, we offer the Tinylog framework. An example can be found in `langchain4j-examples/tutorials`.
Logging properties are set in `tinylog.properties`, as follows:
```
writer.level = info
```
Typical log level settings are `error`, `warn`, `info` and `debug`.

An overview of all the options:
- `off`: No log messages will be written. This effectively disables logging.
- `trace`: All log messages, including trace, debug, info, warn, and error, will be written to the log output.
- `debug`: Log messages of debug, info, warn, and error levels will be written to the log output. Trace messages will be ignored.
- `info`: Log messages of info, warn, and error levels will be written to the log output. Debug and trace messages will be ignored.
- `warn`: Log messages of warn and error levels will be written to the log output. Info, debug, and trace messages will be ignored.
- `error`: Only log messages of error level will be written to the log output. Warn, info, debug, and trace messages will be ignored.
- `fatal`: This level is not part of the standard log levels in Tinylog. You can use it to specify a custom level for log messages. By default, it behaves the same as the `error` level.
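
For reference, a slightly fuller `tinylog.properties` sketch (assuming Tinylog 2's console writer; the format pattern is illustrative):
```
# write log entries to the console
writer = console
# minimum level that gets logged
writer.level = info
# timestamp, thread, level and message for each entry
writer.format = {date: HH:mm:ss.SSS} [{thread}] {level}: {message}
```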

Alternatively, you can add your own logger.
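
For example, a minimal sketch of custom logging through SLF4J, the facade LangChain4j itself logs through (the class and message here are illustrative):
```
import dev.langchain4j.model.chat.ChatLanguageModel;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ChatService {

    private static final Logger log = LoggerFactory.getLogger(ChatService.class);

    private final ChatLanguageModel model;

    public ChatService(ChatLanguageModel model) {
        this.model = model;
    }

    public String chat(String userMessage) {
        // our own log line, emitted alongside LangChain4j's request/response logs
        log.info("Sending prompt ({} characters)", userMessage.length());
        return model.generate(userMessage);
    }
}
```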

## Spring Boot
In Spring Boot applications, logging properties are set in the `application.properties` file:
```
logging.level.dev.langchain4j=INFO
logging.level.dev.ai4j.openai4j=INFO
```

_This documentation page is a stub - help us make it better_
2 changes: 1 addition & 1 deletion docs/docs/tutorials/response-streaming.md
@@ -4,6 +4,6 @@ sidebar_position: 9

# 4. Response Streaming

[Streaming of LLM responses](https://github.com/langchain4j/langchain4j-examples/blob/main/tutorials/src/main/java/_04_Streaming.java)

Tutorial coming soon
79 changes: 78 additions & 1 deletion docs/docs/tutorials/set-model-parameters.md
@@ -3,5 +3,82 @@ sidebar_position: 4
---

# 2. Set Model Parameters
An example of specifying model parameters can be found [here](https://github.com/langchain4j/langchain4j-examples/blob/main/tutorials/src/main/java/_01_ModelParameters.java).

## What are Parameters in LLMs
Depending on which model and which model provider you use, you can set many parameters that influence the model's output, speed, logging, and more.
Typically, you will find all the parameters and their meaning on the provider's website.


For example, the OpenAI API's parameters can be found at https://platform.openai.com/docs/api-reference/chat
and include options like:

| Parameter          | Description                                                                                                                                                                                   | Type      |
|--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
| `modelName`        | The name of the model to use (`gpt-3.5-turbo`, `gpt-4-1106-preview`, ...)                                                                                                                      | `String`  |
| `temperature`      | The sampling temperature, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.                             | `Double`  |
| `maxTokens`        | The maximum number of tokens that can be generated in the chat completion.                                                                                                                     | `Integer` |
| `frequencyPenalty` | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.     | `Double`  |
| `n`                | How many chat completion choices to generate for each input message. Note that you are charged based on the number of generated tokens across all choices. Keep `n` at 1 to minimize costs.   | `Integer` |
| `...`              | ...                                                                                                                                                                                            | `...`     |

For the full list of parameters for OpenAI LLMs, see the [OpenAI Language Model page](/docs/integrations/language-models/openai).
Full lists of parameters and default values for other models can be found on the respective model pages (under Integrations, Language Models and Image Models).

## Default Parameter Settings
The LangChain4j framework offers very easy model constructors, with many parameters set under the hood to sane defaults. The minimal way to construct a model object is:
```
ChatLanguageModel model = OpenAiChatModel.withApiKey("demo");
```
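
Once constructed, the model can be used directly. A minimal usage sketch (`generate(String)` is the convenience method on `ChatLanguageModel`; the prompt is illustrative):
```
// send a single user message and print the model's reply
String answer = model.generate("Say hello in three languages");
System.out.println(answer);
```
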
In the case of an OpenAI chat model, for example, some of the defaults are:

| Parameter | Default Value |
|----------------|---------------|
| `timeout` | 60s |
| `modelName` | gpt-3.5-turbo |
| `temperature` | 0.7 |
| `logRequests` | false |
| `logResponses` | false |
| `...` | ... |

Defaults for all language and image models can be found on the pages of the respective providers under [Integrations](/docs/integrations).

## How to Set Parameter Values
When using the builder pattern, you can set all the available parameters of the model as follows:
```
ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(ApiKeys.OPENAI_API_KEY)
        .modelName(GPT_3_5_TURBO)
        .temperature(0.3)
        .timeout(ofSeconds(60))
        .logRequests(true)
        .logResponses(true)
        .build();
```
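
For completeness, a sketch of the imports this snippet assumes (constant and class names as found in recent LangChain4j versions - verify against yours; `ApiKeys` is a small helper from the examples that holds your key):
```
import static dev.langchain4j.model.openai.OpenAiModelName.GPT_3_5_TURBO;
import static java.time.Duration.ofSeconds;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
```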

## Parameter Settings in Quarkus
LangChain4j parameters in Quarkus applications can be set in the `application.properties` file as follows:
```
quarkus.langchain4j.openai.chat-model.temperature=0.5
quarkus.langchain4j.openai.timeout=60s
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
```

For debugging, tweaking, or simply discovering all the available parameters, have a look at the Quarkus Dev UI.
In this dashboard, you can make changes that are immediately reflected in your running instance, and your changes are automatically ported to the code.
The Dev UI can be accessed by running your Quarkus application with `quarkus dev`; you will then find it at localhost:8080/q/dev-ui (or wherever you deploy your application).


[![](/img/quarkus-dev-ui-parameters.png)](/docs/tutorials/set-model-parameters)

## Parameter Settings in Spring Boot
LangChain4j parameters in Spring Boot applications can be set in the `application.properties` file as follows:
```
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4-1106-preview
langchain4j.open-ai.chat-model.temperature=0.0
langchain4j.open-ai.chat-model.timeout=PT60S
langchain4j.open-ai.chat-model.log-requests=false
langchain4j.open-ai.chat-model.log-responses=false
```
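
With the LangChain4j Spring Boot starter on the classpath, these properties back an auto-configured model bean that can simply be injected. A minimal sketch (assuming the starter's auto-configuration; the controller and endpoint names are illustrative):
```
import dev.langchain4j.model.chat.ChatLanguageModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

    private final ChatLanguageModel model;

    // the starter auto-configures this bean from the properties above
    ChatController(ChatLanguageModel model) {
        this.model = model;
    }

    @GetMapping("/chat")
    String chat(@RequestParam String message) {
        return model.generate(message);
    }
}
```
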
Binary file added docs/static/img/quarkus-dev-ui-parameters.png