Commit 283107c — add logging and model parameter tutorials to docu (#536)

Adding Model Parameter Setting tutorial and Logging explanation to documentation.

LizeRaes committed Jan 24, 2024 · 1 parent e7bee00
Showing 5 changed files with 138 additions and 4 deletions.
6 changes: 4 additions & 2 deletions docs/docs/tutorials/image-models.md
@@ -4,6 +4,8 @@ sidebar_position: 21

# 12. Image Models

Creating images: see [this example](https://github.com/langchain4j/langchain4j-examples/blob/main/tutorials/src/main/java/_02_OpenAiImageModelExamples.java).

Interpreting images: coming very soon.

More elaborate content is coming soon - or help us by adding it <3
56 changes: 56 additions & 0 deletions docs/docs/tutorials/logging.md
@@ -0,0 +1,56 @@
---
sidebar_position: 30
---

# 15. Logging

### Model requests and responses
Logging of model requests and responses can be switched on and off by setting `.logRequests(true)` and `.logResponses(true)` on the model builder:

```java
ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(ApiKeys.OPENAI_API_KEY)
        .logRequests(true)
        .logResponses(true)
        .build();
```

### Default logging: SLF4J
LangChain4j logs through the SLF4J facade, so you are free to use any logging backend (log4j, logback, tinylog, ...).

An example using the Tinylog backend can be found in langchain4j-examples/tutorials, where logging properties are set in `tinylog.properties`, for example:
```properties
writer.level = info
```
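To plug in a backend, add it as a dependency of your project. As a sketch, a Logback setup in Maven might look like this (the version shown is an assumption - check for the latest release):

```xml
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.4.14</version>
</dependency>
```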

Typical log level settings are `error`, `warn`, `info` and `debug`.

An overview of all the options:
- `off`: No log messages will be written. This effectively disables logging.
- `trace`: All log messages, including trace, debug, info, warn, and error, will be written to the log output.
- `debug`: Log messages of debug, info, warn, and error levels will be written to the log output. Trace messages will be ignored.
- `info`: Log messages of info, warn, and error levels will be written to the log output. Debug and trace messages will be ignored.
- `warn`: Log messages of warn and error levels will be written to the log output. Info, debug, and trace messages will be ignored.
- `error`: Only log messages of error level will be written to the log output. Warn, info, debug, and trace messages will be ignored.
- `fatal`: This level is not part of the standard log levels in Tinylog. You can use it to specify a custom level for log messages. By default, it behaves the same as the `error` level.
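As an illustration, a slightly fuller `tinylog.properties` might look like the sketch below (property names follow the Tinylog 2 documentation - verify them against your Tinylog version):

```properties
writer        = console
writer.level  = debug
writer.format = {date: HH:mm:ss.SSS} [{thread}] {level}: {message}
```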

## Quarkus
In Quarkus examples, logging properties are set in the `application.properties` file:
```properties
quarkus.log.console.enable = true
quarkus.log.file.enable = false
quarkus.langchain4j.openai.chat-model.log-responses = true
quarkus.langchain4j.openai.chat-model.log-requests = true
```

These properties can also be set and changed in the Quarkus Dev UI while running the application in dev mode (command: `quarkus dev`).
The Dev UI can then be accessed at `host:port/q/dev-ui`.

## Spring Boot
In Spring Boot examples, logging properties are set in the `application.properties` file:
```properties
logging.level.dev.langchain4j=INFO
logging.level.dev.ai4j.openai4j=INFO
```
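Note that request and response bodies are typically logged at `DEBUG` level, so to actually see them you would raise these levels as sketched below (an assumption - check which level your LangChain4j version logs at):

```properties
logging.level.dev.langchain4j=DEBUG
logging.level.dev.ai4j.openai4j=DEBUG
```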

_This documentation page is a stub - help us make it better_
2 changes: 1 addition & 1 deletion docs/docs/tutorials/response-streaming.md
@@ -4,6 +4,6 @@ sidebar_position: 9

# 4. Response Streaming

[Streaming of LLM responses](https://github.com/langchain4j/langchain4j-examples/blob/main/tutorials/src/main/java/_04_Streaming.java)

Tutorial coming soon
78 changes: 77 additions & 1 deletion docs/docs/tutorials/set-model-parameters.md
@@ -3,5 +3,81 @@ sidebar_position: 4
---

# 2. Set Model Parameters
An example of specifying model parameters can be found [here](https://github.com/langchain4j/langchain4j-examples/blob/main/tutorials/src/main/java/_01_ModelParameters.java).

## What are Parameters in LLMs?
Depending on the model and model provider you use, you can set many parameters that influence the model's output, speed, logging, and more.
Typically, you will find all the parameters and their meaning on the provider's website.


For example, the OpenAI API's parameters are documented at https://platform.openai.com/docs/api-reference/chat (the most up-to-date reference)
and include options like:

| Parameter | Description | Type |
|-----------|-------------|------|
| `modelName` | The name of the model to use (gpt-3.5-turbo, gpt-4-1106-preview, ...) | `String` |
| `temperature` | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | `Double` |
| `max_tokens` | The maximum number of tokens that can be generated in the chat completion. | `Integer` |
| `frequencyPenalty` | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | `Double` |
| `...` | ... | `...` |
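To build intuition for what `temperature` does, here is a minimal, illustrative sketch of temperature-scaled softmax sampling. This is not LangChain4j or OpenAI code - just plain Java showing why low temperatures make the output more deterministic:

```java
import java.util.Arrays;

public class TemperatureDemo {

    // Divide logits by the temperature, then apply softmax.
    // Low temperature sharpens the distribution (more deterministic);
    // high temperature flattens it (more random).
    static double[] softmax(double[] logits, double temperature) {
        double[] scaled = Arrays.stream(logits).map(l -> l / temperature).toArray();
        double max = Arrays.stream(scaled).max().orElse(0.0); // for numerical stability
        double[] exp = Arrays.stream(scaled).map(s -> Math.exp(s - max)).toArray();
        double sum = Arrays.stream(exp).sum();
        return Arrays.stream(exp).map(e -> e / sum).toArray();
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.5}; // hypothetical next-token scores
        // Low temperature: almost all probability mass on the top token
        System.out.println(Arrays.toString(softmax(logits, 0.2)));
        // High temperature: probabilities much closer to uniform
        System.out.println(Arrays.toString(softmax(logits, 2.0)));
    }
}
```

At `temperature = 0.2` the top token dominates the distribution; at `temperature = 2.0` the probabilities spread out, so sampling becomes noticeably more random.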

For the full list of parameters in OpenAI LLMs, see the [OpenAI Language Model page](/docs/integrations/language-models/openai).
Full lists of parameters and default values for each model can be found on the respective model pages (under Integrations, Language Models and Image Models).

## Default Parameter Settings
The LangChain4j framework offers convenient model constructors that set many parameters to sensible defaults under the hood. The fastest way to construct a model object is:
```java
ChatLanguageModel model = OpenAiChatModel.withApiKey("demo");
```
For an OpenAI chat model, for example, some of the defaults are:

| Parameter | Default Value |
|----------------|---------------|
| `timeout` | 60s |
| `modelName` | gpt-3.5-turbo |
| `temperature` | 0.7 |
| `logRequests` | false |
| `logResponses` | false |
| `...` | ... |

Defaults for all language and image models can be found on the pages of the respective providers under [Integrations](/docs/integrations).

## How to Set Parameter Values
Using the builder pattern, you can set any of the model's available parameters as follows:
```java
ChatLanguageModel model = OpenAiChatModel.builder()
.apiKey(ApiKeys.OPENAI_API_KEY)
.modelName(GPT_3_5_TURBO)
.temperature(0.3)
.timeout(ofSeconds(60))
.logRequests(true)
.logResponses(true)
.build();
```

## Parameter Settings in Quarkus
LangChain4j parameters in Quarkus applications can be set in the `application.properties` file as follows:
```properties
quarkus.langchain4j.openai.chat-model.temperature=0.5
quarkus.langchain4j.openai.timeout=60s
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
```

For debugging, tweaking, or simply discovering all available parameters, it is worth having a look at the Quarkus Dev UI.
In this dashboard, you can make changes that are immediately reflected in your running instance, and your changes are automatically ported to the code.
The Dev UI can be accessed by running your Quarkus application in dev mode (command: `quarkus dev`) and then browsing to localhost:8080/q/dev-ui (or wherever you deploy your application).


[![Quarkus Dev UI parameter settings](/img/quarkus-dev-ui-parameters.png)](/docs/tutorials/set-model-parameters)

## Parameter Settings in Spring Boot
LangChain4j parameters in Spring Boot applications can be set in the `application.properties` file as follows:
```properties
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4-1106-preview
langchain4j.open-ai.chat-model.temperature=0.0
langchain4j.open-ai.chat-model.timeout=PT60S
langchain4j.open-ai.chat-model.log-requests=false
langchain4j.open-ai.chat-model.log-responses=false
```
Binary file added docs/static/img/quarkus-dev-ui-parameters.png
