
Error if "This model's maximum context length is 4097" #10

Open
CrazyBoy49z opened this issue Apr 3, 2023 · 13 comments
Labels
enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@CrazyBoy49z

Unhandled exception in [StandaloneCoroutine{Cancelling}@65296840, EDT]

com.aallam.openai.api.exception.OpenAIAPIException: This model's maximum context length is 4097 tokens. However, your messages resulted in 747057 tokens. Please reduce the length of the messages.
	at com.aallam.openai.client.internal.http.HttpTransport.handleException(HttpTransport.kt:43)
	at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:25)
	at com.aallam.openai.client.internal.http.HttpTransport$perform$1.invokeSuspend(HttpTransport.kt)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104)
	at com.intellij.openapi.application.impl.DispatchedRunnable.run(DispatchedRunnable.kt:35)
	at com.intellij.openapi.application.TransactionGuardImpl.runWithWritingAllowed(TransactionGuardImpl.java:209)
	at com.intellij.openapi.application.TransactionGuardImpl.access$100(TransactionGuardImpl.java:21)
	at com.intellij.openapi.application.TransactionGuardImpl$1.run(TransactionGuardImpl.java:191)
	at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:831)
	at com.intellij.openapi.application.impl.ApplicationImpl$3.run(ApplicationImpl.java:456)
	at com.intellij.openapi.application.impl.FlushQueue.doRun(FlushQueue.java:79)
	at com.intellij.openapi.application.impl.FlushQueue.runNextEvent(FlushQueue.java:122)
	at com.intellij.openapi.application.impl.FlushQueue.flushNow(FlushQueue.java:41)
	at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:318)
	at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:788)
	at java.desktop/java.awt.EventQueue$3.run(EventQueue.java:739)
	at java.desktop/java.awt.EventQueue$3.run(EventQueue.java:731)
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
	at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:86)
	at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:758)
	at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.kt:666)
	at com.intellij.ide.IdeEventQueue._dispatchEvent$lambda$7(IdeEventQueue.kt:570)
	at com.intellij.openapi.application.impl.ApplicationImpl.withoutImplicitRead(ApplicationImpl.java:1446)
	at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.kt:570)
	at com.intellij.ide.IdeEventQueue.access$_dispatchEvent(IdeEventQueue.kt:68)
	at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1$1.compute(IdeEventQueue.kt:349)
	at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1$1.compute(IdeEventQueue.kt:348)
	at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:787)
	at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1.invoke(IdeEventQueue.kt:348)
	at com.intellij.ide.IdeEventQueue$dispatchEvent$processEventRunnable$1$1.invoke(IdeEventQueue.kt:343)
	at com.intellij.ide.IdeEventQueueKt.performActivity$lambda$1(IdeEventQueue.kt:994)
	at com.intellij.openapi.application.TransactionGuardImpl.performActivity(TransactionGuardImpl.java:105)
	at com.intellij.ide.IdeEventQueueKt.performActivity(IdeEventQueue.kt:994)
	at com.intellij.ide.IdeEventQueue.dispatchEvent$lambda$4(IdeEventQueue.kt:343)
	at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:831)
	at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.kt:385)
	at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:207)
	at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:128)
	at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:117)
	at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:113)
	at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:105)
	at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:92)
	Suppressed: kotlinx.coroutines.DiagnosticCoroutineContextException: [StandaloneCoroutine{Cancelled}@65296840, EDT]
Caused by: io.ktor.client.plugins.ClientRequestException: Client request(POST https://api.openai.com/v1/chat/completions) invalid: 400 Bad Request. Text: "{
  "error": {
    "message": "This model's maximum context length is 4097 tokens. However, your messages resulted in 747057 tokens. Please reduce the length of the messages.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  }
}
"
	at io.ktor.client.plugins.DefaultResponseValidationKt$addDefaultResponseValidation$1$1.invokeSuspend(DefaultResponseValidation.kt:54)
	at io.ktor.client.plugins.DefaultResponseValidationKt$addDefaultResponseValidation$1$1.invoke(DefaultResponseValidation.kt)
	at io.ktor.client.plugins.DefaultResponseValidationKt$addDefaultResponseValidation$1$1.invoke(DefaultResponseValidation.kt)
	at io.ktor.client.plugins.HttpCallValidator.validateResponse(HttpCallValidator.kt:51)
	at io.ktor.client.plugins.HttpCallValidator.access$validateResponse(HttpCallValidator.kt:43)
	at io.ktor.client.plugins.HttpCallValidator$Companion$install$3.invokeSuspend(HttpCallValidator.kt:152)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	... 38 more

@Blarc
Owner

Blarc commented Apr 3, 2023

I am aware of this error and am planning to add a notification for whenever an error occurs.

However, I am not sure whether we should handle commit messages for diffs larger than 4097 tokens at all. We could split the diff across multiple API requests, but the commit message wouldn't reflect the whole change then.

Any ideas or suggestions on how to solve this are welcome.
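For illustration, here is a minimal sketch of the notification approach, assuming the com.aallam.openai Kotlin client and the IntelliJ Platform notification API; the notification group id, prompt, and model are placeholders, and the client's exact signatures vary by version:

```kotlin
import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.api.chat.ChatMessage
import com.aallam.openai.api.chat.ChatRole
import com.aallam.openai.api.exception.OpenAIAPIException
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.intellij.notification.NotificationGroupManager
import com.intellij.notification.NotificationType
import com.intellij.openapi.project.Project

suspend fun generateCommitMessage(project: Project, openAI: OpenAI, diff: String): String? =
    try {
        val completion = openAI.chatCompletion(
            ChatCompletionRequest(
                model = ModelId("gpt-3.5-turbo"),
                messages = listOf(ChatMessage(ChatRole.User, "Write a commit message for this diff:\n$diff"))
            )
        )
        completion.choices.first().message?.content
    } catch (e: OpenAIAPIException) {
        // Surface the failure as a balloon notification instead of letting the
        // exception escape the coroutine and crash on the EDT, as in the trace above.
        NotificationGroupManager.getInstance()
            .getNotificationGroup("AI Commits") // group id is an assumption
            .createNotification(e.message ?: "OpenAI request failed", NotificationType.ERROR)
            .notify(project)
        null
    }
```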

@Blarc added the help wanted label Apr 3, 2023
@CrazyBoy49z
Author

CrazyBoy49z commented Apr 5, 2023

I think it's better to show a notification that something went wrong than to crash the IDE.

Example: [screenshot of an error notification]

@Blarc
Owner

Blarc commented Apr 5, 2023

@CrazyBoy49z You are absolutely correct. Error notifications have been added in version 0.5.0.

@StringKe

StringKe commented May 3, 2023

Maybe there is a need for a feature that generates the commit message from only some selected files.

I use the plugin in front-end projects, where reformatting of HTML/SVG code produces changes that should not be used when generating a commit message.

@Blarc
Owner

Blarc commented May 3, 2023

Hi @StringKe. The commit message is generated only from the file changes that are selected in the commit dialog. You can also be very specific and select only the lines that are important.

I do understand your problem, though; maybe we could somehow filter out changes that contain only whitespace differences?
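For what it's worth, a rough sketch of such a filter over a unified diff; this is not the plugin's implementation, and a real version would also need to handle multi-line and reordered hunks:

```kotlin
// Treat a removed/added line pair as noise when the two lines differ only in
// whitespace, e.g. re-indented HTML or SVG.
fun isWhitespaceOnlyChange(removed: String, added: String): Boolean =
    removed.filterNot(Char::isWhitespace) == added.filterNot(Char::isWhitespace)

// Walk the diff and drop -/+ pairs that are pure re-formatting, keeping
// file headers (---/+++) and everything else intact.
fun stripWhitespaceOnlyPairs(diffLines: List<String>): List<String> {
    val result = mutableListOf<String>()
    var i = 0
    while (i < diffLines.size) {
        val line = diffLines[i]
        val next = diffLines.getOrNull(i + 1)
        if (line.startsWith("-") && !line.startsWith("---") &&
            next != null && next.startsWith("+") && !next.startsWith("+++") &&
            isWhitespaceOnlyChange(line.drop(1), next.drop(1))
        ) {
            i += 2 // skip the whole -/+ pair
        } else {
            result += line
            i++
        }
    }
    return result
}
```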

@StringKe

StringKe commented May 3, 2023

In addition to removing whitespace-only changes, I think it's important to handle lock files. Node package managers such as npm, yarn, and pnpm produce a lock file that is very large and changes a lot with every update.

I would prefer to choose which files' diffs are used to generate the message, rather than using every file added to the git staging area.

@Blarc
Owner

Blarc commented May 3, 2023

Sorry, but I'm not sure I understand correctly. Isn't the lock file also shown in the commit dialog? If it is, you can use it to generate the commit message, and if the whole lock file is too big, you can mark only specific lines in the commit dialog.

The commit message is generated only from the files (or, more specifically, the lines) that are marked for commit in the commit dialog.

Maybe you could add an example?

@StringKe

StringKe commented May 3, 2023

Say the files a, b, c, and d are all in the git staging area, so they are all part of this commit, but d is a lock file. It is very large and you don't want it used to generate the message; only a, b, and c should be used.

The files a, b, c, and d together make up one feature and should not be committed separately.

@Blarc
Owner

Blarc commented May 3, 2023

Oh, alright. So you would like an exclusion list: you could define a list of file path globs, and if a file's path matches one of those globs, it is never used when generating the commit message?

I could maybe add this. The current workaround is to simply not select the lock file (d) when generating the message and select it again after the message is generated, so it is committed in the same commit.

Did I understand correctly?
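A minimal sketch of what that could look like, using java.nio.file.PathMatcher globs; the patterns below are only examples, not a shipped default:

```kotlin
import java.nio.file.FileSystems
import java.nio.file.Paths

// Hypothetical user-configured exclusion list: files matching any of these
// globs are skipped when building the prompt, but still committed as usual.
val exclusionGlobs = listOf("**/package-lock.json", "**/yarn.lock", "**/pnpm-lock.yaml")

fun isExcluded(filePath: String): Boolean {
    val path = Paths.get(filePath)
    return exclusionGlobs.any { glob ->
        FileSystems.getDefault().getPathMatcher("glob:$glob").matches(path)
    }
}

fun main() {
    println(isExcluded("frontend/package-lock.json")) // true: not used for the message
    println(isExcluded("src/Main.kt"))                // false: used for the message
}
```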

@StringKe

StringKe commented May 3, 2023

Yes, your understanding is correct.

Of course, there is another way of solving it, I guess (a rough sketch follows after the list):

  1. Sort the files by number of changed lines in descending order (most changed at the top).
  2. Splice in the diffs one file at a time until 4097 tokens are consumed (or maxTokens, as defined by the user).
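A sketch of that approach with a crude token estimate; the plugin does not do this today, and countTokens is a stand-in for a real tokenizer matched to the model:

```kotlin
data class FileDiff(val path: String, val diff: String, val changedLines: Int)

// Very rough heuristic: roughly 4 characters per token for English-like text.
fun countTokens(text: String): Int = text.length / 4

fun buildPrompt(diffs: List<FileDiff>, maxTokens: Int = 4097): String {
    // 1. Most changed files first.
    val sorted = diffs.sortedByDescending { it.changedLines }
    // 2. Append per-file diffs until the budget is spent; anything that no
    //    longer fits is silently dropped from the prompt.
    val prompt = StringBuilder()
    var used = 0
    for (file in sorted) {
        val cost = countTokens(file.diff)
        if (used + cost > maxTokens) break
        prompt.append(file.diff).append('\n')
        used += cost
    }
    return prompt.toString()
}
```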

@Blarc
Owner

Blarc commented May 3, 2023

That's an interesting solution. The problem I see with it is that the user won't know exactly what was used to generate the commit message, which could be confusing; that is, if you just ignore all the remaining changes once maxTokens is reached.

edit: The way it works now, the user can split the changes into multiple smaller commits and has full control over what gets used.

@StringKe

StringKe commented May 4, 2023

Yes, very true.

When the project is large, a commit is meant to implement a certain feature; splitting it up is incorrect behavior and does not help with project management.

@alvaroleon

Hello!
Good solutions! I think that if the code has too many tokens, you could send just a part of the code to the OpenAI API, or work out what changed in the code and send only that; I don't know if the JetBrains API provides that information. For example, the Codium plugin apparently does this: when the commit is very long, it shows a message saying that only a small fragment of the code was used.

@Blarc added the enhancement label Mar 11, 2024