# chat_completion_demo.yml
---
modules:
- .llms # docassemble.ALToolbox.llms
---
id: interview order
mandatory: True
code: |
  system_message
  process_input
  show_results
---
id: ask_for_input
question: |
  Enter an instruction and text to transform
subquestion: |
  This screen lets you experiment with the OpenAI API. It works
  like the OpenAI playground: enter a system instruction and the
  text you want to transform.

  For example, you could enter a system instruction of:

  ```
  Rewrite the input to
  a 6th grade reading level
  ```

  and text of:

  ```
  In accordance with the stipulations set forth in Section 4, Subsection 3.2 of the Company's Operational Handbook, it is incumbent upon all employees to meticulously submit their individualized timekeeping records for the preceding month no later than the fifth business day of the ensuing month. Failure to comply with this mandate may result in a sequential series of remedial actions as outlined in the Employee Disciplinary Policy, commencing with a formal written reprimand and potentially culminating in termination of employment, contingent upon the recurrent nature of the non-compliance and the discretion of the departmental supervisor in consultation with the Human Resources Department, pursuant to company policy and applicable labor laws.
  ```

  It may help to include an instruction like "reply only with the changed text"
  in your system instruction to avoid extraneous information in the response.
fields:
  - System instructions: system_message
    datatype: area
  - Text to transform: user_message
    datatype: area
  - Creativity (temperature): temperature
    datatype: range
    min: 0
    max: 1
    default: 0.5
    step: 0.05
    help: |
      The higher the temperature, the more creative the response. The lower the temperature, the more predictable the response.
---
code: |
  results = chat_completion(
      system_message,
      user_message,
      temperature=temperature,
      # The following parameters are optional and shown for demonstration purposes
      model="gpt-3.5-turbo",
      json_mode=False,
  )
  process_input = True
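# `chat_completion` comes from the `.llms` module; its implementation is not
# shown in this file. As a rough sketch of what such a wrapper passes to the
# OpenAI chat completions endpoint, the system and user strings map to the two
# standard message roles (`build_messages` is a hypothetical name, not part of
# the module):

```python
def build_messages(system_message: str, user_message: str) -> list:
    """Assemble the two-role message list the chat completions endpoint expects.

    Hypothetical helper shown for illustration; the actual chat_completion
    wrapper in docassemble.ALToolbox.llms may structure this differently.
    """
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ]
```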
---
event: show_results
question: |
  Here are your results
subquestion: |
  ${ single_to_double_newlines(results) }
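# `single_to_double_newlines` is also provided by the `.llms` module. Judging
# from its name and its use above, it turns single line breaks in the model's
# reply into blank-line breaks so that docassemble's Markdown renderer shows
# them as separate paragraphs. A minimal sketch, assuming the real
# implementation may differ:

```python
import re


def single_to_double_newlines(text: str) -> str:
    # Collapse each run of newlines into exactly two, so every line break
    # in the model output renders as a Markdown paragraph break.
    return re.sub(r"\n+", "\n\n", text)
```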