chore(deps): update github/codeql-action action to v3.25.3 #364
base: main
Conversation
[puLL-Merge] - github/codeql-action@v3.23.0..v3.23.1: Error 400: This model's maximum context length is 16385 tokens. However, your messages resulted in 79710 tokens. Please reduce the length of the messages.
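The 400 above means the assembled prompt (79710 tokens) exceeded the model's 16385-token context window. A hedged sketch of one way to pre-trim a diff before sending it; the ~4-characters-per-token ratio and the truncation marker are illustrative assumptions, not the tokenizer or logic puLL-Merge actually uses:

```python
# Sketch only: ~4 chars/token is a rough heuristic for English/code text,
# not the model's real tokenizer.
def estimate_tokens(text: str) -> int:
    """Rough token estimate (~4 characters per token)."""
    return len(text) // 4

def truncate_to_budget(diff: str, context_limit: int = 16385, reserve: int = 2000) -> str:
    """Trim a diff so the prompt, plus headroom for the response, fits the window."""
    budget_chars = (context_limit - reserve) * 4
    if len(diff) <= budget_chars:
        return diff
    return diff[:budget_chars] + "\n... [diff truncated]"
```

With this in place the bot could degrade gracefully (summarize a truncated diff) instead of failing the whole comparison.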
Force-pushed from 9830ea9 to 9c9be4a
[puLL-Merge] - github/codeql-action@v3.23.0..v3.23.2: Error 400: This model's maximum context length is 16385 tokens. However, your messages resulted in 94300 tokens. Please reduce the length of the messages.
Force-pushed from 9c9be4a to 692360d
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.0: Error 400: This model's maximum context length is 16385 tokens. However, your messages resulted in 128015 tokens. Please reduce the length of the messages.
Force-pushed from 692360d to dc58303
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.1: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 160000, Requested 218228. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
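The 429s in this thread are tokens-per-minute (TPM) rate-limit rejections. Retrying with backoff only helps when a burst of normal-sized requests exceeds the per-minute budget; a single request larger than the limit itself (here 218228 tokens against a 160000 TPM cap) fails on every attempt and can only be fixed by shrinking the input. A hedged sketch of the retry half, using a stand-in RateLimitError rather than any real client library's exception type:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 exception from a real API client (assumption)."""

def send_with_backoff(send, payload, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff plus jitter.

    `send` is any callable that raises RateLimitError on a 429 response.
    """
    for attempt in range(max_retries):
        try:
            return send(payload)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff capped at 60 s, with a little jitter to
            # avoid synchronized retries.
            time.sleep(min(60.0, base_delay * 2 ** attempt) + random.random() * 0.1)
```

Note that for the oversized-request case, this wrapper would simply exhaust its retries; the truncation step has to happen before the call.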
Force-pushed from dc58303 to 7e6e377
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.2: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 160000, Requested 224898. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from 7e6e377 to 125ebf0
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.3: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 160000, Requested 226441. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from 125ebf0 to a2952d0
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.4: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 160000, Requested 2196830. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from a2952d0 to 71ac859
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.5: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 160000, Requested 2198311. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from 9fb3e43 to 7ceabb1
Force-pushed from 7ceabb1 to 580718f
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.6: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 160000, Requested 2552823. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from 580718f to 618ee60
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.7: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 160000, Requested 1291123. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from 618ee60 to 9e33782
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.8: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 1000000, Requested 1296355. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from 9e33782 to 3b95a92
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.9: Error 429: Request too large for gpt-3.5-turbo-0125 in organization org-BnmWIy0BEq9kKl99WCZYV68U on tokens per min (TPM): Limit 1000000, Requested 1418969. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.
Force-pushed from 3b95a92 to 70a6bc9
Force-pushed from 70a6bc9 to c683230
[puLL-Merge] - github/codeql-action@v3.23.0..v3.24.10: Error 400: {"type":"error","error":{"type":"invalid_request_error","message":"prompt is too long: 209080 tokens > 199999 maximum"}}
Force-pushed from c683230 to 9f0e2d7
[puLL-Merge] - github/codeql-action@v3.23.0..v3.25.0: Error 400: {"type":"error","error":{"type":"invalid_request_error","message":"prompt is too long: 212117 tokens > 199999 maximum"}}
Force-pushed from 9f0e2d7 to c5ad0b8
Force-pushed from c5ad0b8 to f957d93
Force-pushed from f957d93 to 34324ad
This PR contains the following updates:
github/codeql-action: v3.23.0 -> v3.25.3
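The update itself amounts to repointing every `github/codeql-action@<tag>` pin in the repository's workflow files at the new tag. A minimal sketch of that rewrite; the sample workflow text and the regex are illustrative assumptions, not Renovate's implementation:

```python
import re

def bump_codeql_action(workflow_text: str, new_version: str) -> str:
    """Repoint every github/codeql-action pin (including subpaths
    like /init or /analyze) at new_version."""
    return re.sub(
        r"(github/codeql-action(?:/[\w-]+)?@)v[\d.]+",
        lambda m: m.group(1) + new_version,
        workflow_text,
    )

# Hypothetical workflow fragment for illustration.
sample = (
    "      - uses: github/codeql-action/init@v3.23.0\n"
    "      - uses: github/codeql-action/analyze@v3.23.0\n"
)
bumped = bump_codeql_action(sample, "v3.25.3")
```

In practice Renovate also handles SHA-pinned `uses:` references; this sketch covers only tag pins.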
Release Notes
github/codeql-action (github/codeql-action)
v3.25.3 (Compare Source)
v3.25.2 (Compare Source)
v3.25.1 (Compare Source)
v3.25.0 (Compare Source)
v3.24.10 (Compare Source)
v3.24.9 (Compare Source)
v3.24.8 (Compare Source)
v3.24.7 (Compare Source)
v3.24.6 (Compare Source)
v3.24.5 (Compare Source)
v3.24.4 (Compare Source)
v3.24.3 (Compare Source)
v3.24.2 (Compare Source)
v3.24.1 (Compare Source)
v3.24.0 (Compare Source)
v3.23.2 (Compare Source)
v3.23.1 (Compare Source)
Configuration
📅 Schedule: Branch creation - "* 0-4 * * 3" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
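For reference, the schedule "* 0-4 * * 3" reads: any minute, hours 00-04 UTC, any day of month, any month, cron day-of-week 3 (Wednesday, since cron counts Sunday as 0). A small stdlib-only sketch of checking whether a timestamp falls in that window; this is an illustration, not how Renovate evaluates its schedule:

```python
from datetime import datetime, timezone

def in_branch_creation_window(ts: datetime) -> bool:
    """True if `ts` matches "* 0-4 * * 3": hours 00-04 UTC on a Wednesday.

    Cron counts Sunday as day 0, so day-of-week 3 is Wednesday;
    datetime.weekday() counts Monday as 0, so Wednesday is 2.
    """
    ts = ts.astimezone(timezone.utc)
    return ts.weekday() == 2 and 0 <= ts.hour <= 4
```

So Renovate only opens branches for this update in the early-UTC hours of Wednesdays.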
This PR has been generated by Mend Renovate. View repository job log here.