Commit d645f06

Merge pull request #27 from mattleibow/dev/system-prompt-file
feat: Add system-prompt-file input for file-based system prompts
2 parents: cacab0d + 9c57490

File tree

9 files changed: +412 additions, -79 deletions


.github/workflows/ci.yml

Lines changed: 5 additions & 0 deletions

```diff
@@ -80,11 +80,16 @@ jobs:
       - name: Create Prompt File
         run: echo "hello" > prompt.txt
 
+      - name: Create System Prompt File
+        run:
+          echo "You are a helpful AI assistant for testing." > system-prompt.txt
+
       - name: Test Local Action with Prompt File
         id: test-action-prompt-file
         uses: ./
         with:
           prompt-file: prompt.txt
+          system-prompt-file: system-prompt.txt
         env:
           GITHUB_TOKEN: ${{ github.token }}
 
```
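The README changes below document that `system-prompt-file` takes precedence over `system-prompt` when both are set. A hypothetical extra CI step, not part of this commit, that would exercise that rule:

```yaml
# Hypothetical step, not in this commit: both system prompt inputs are set,
# so per the documented precedence, system-prompt.txt should win.
- name: Test System Prompt Precedence
  uses: ./
  with:
    prompt-file: prompt.txt
    system-prompt: 'This inline system prompt should be overridden.'
    system-prompt-file: system-prompt.txt
  env:
    GITHUB_TOKEN: ${{ github.token }}
```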

README.md

Lines changed: 25 additions & 9 deletions

````diff
@@ -47,6 +47,21 @@ steps:
       prompt-file: './path/to/prompt.txt'
 ```
 
+### Using a system prompt file
+
+In addition to the regular prompt, you can provide a system prompt file instead
+of an inline system prompt:
+
+```yaml
+steps:
+  - name: Run AI Inference with System Prompt File
+    id: inference
+    uses: actions/ai-inference@v1
+    with:
+      prompt: 'Hello!'
+      system-prompt-file: './path/to/system-prompt.txt'
+```
+
 ### Read output from file instead of output
 
 This can be useful when model response exceeds actions output limit
@@ -70,15 +85,16 @@ steps:
 Various inputs are defined in [`action.yml`](action.yml) to let you configure
 the action:
 
-| Name            | Description                                                                                                                            | Default                              |
-| --------------- | -------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
-| `token`         | Token to use for inference. Typically the GITHUB_TOKEN secret                                                                          | `github.token`                       |
-| `prompt`        | The prompt to send to the model                                                                                                        | N/A                                  |
-| `prompt-file`   | Path to a file containing the prompt. If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence                  | `""`                                 |
-| `system-prompt` | The system prompt to send to the model                                                                                                 | `""`                                 |
-| `model`         | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog          | `gpt-4o`                             |
-| `endpoint`      | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
-| `max-tokens`    | The max number of tokens to generate                                                                                                   | 200                                  |
+| Name                 | Description                                                                                                                                        | Default                              |
+| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
+| `token`              | Token to use for inference. Typically the GITHUB_TOKEN secret                                                                                      | `github.token`                       |
+| `prompt`             | The prompt to send to the model                                                                                                                    | N/A                                  |
+| `prompt-file`        | Path to a file containing the prompt. If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence                              | `""`                                 |
+| `system-prompt`      | The system prompt to send to the model                                                                                                             | `"You are a helpful assistant"`      |
+| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence  | `""`                                 |
+| `model`              | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog                       | `gpt-4o`                             |
+| `endpoint`           | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint              | `https://models.github.ai/inference` |
+| `max-tokens`         | The max number of tokens to generate                                                                                                               | 200                                  |
 
 ## Outputs
 
````
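Read together, the new table says only a prompt (inline or via `prompt-file`) is required; every other input falls back to its Default column. A minimal sketch, with an illustrative step name:

```yaml
# Minimal invocation relying on the documented defaults: token falls back to
# github.token, system-prompt to "You are a helpful assistant", model to
# gpt-4o, endpoint to https://models.github.ai/inference, max-tokens to 200.
- name: Run AI Inference with Defaults
  uses: actions/ai-inference@v1
  with:
    prompt: 'Hello!'
```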
