Regarding OpenAI token #90

Open
hamperia4 opened this issue Mar 5, 2024 · 7 comments
Assignees: morpheuslord
Labels: bug (Something isn't working), enhancement (New feature or request)

Comments

@hamperia4

Hello
Playing around with the project, after running Nmap with OpenAI using profile 5 or 12, I get this error:

"message: "This model's maximum context length is 16385 tokens, however you requested 17216 tokens (14716 in your prompt; 2500 for the completion). Please reduce your prompt; or completion length.","

I'm getting this message after changing the model from gpt-3.5-turbo-0613 to gpt-3.5-turbo-0125.
What would be the best approach here?

Thank you for your time.
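
For reference, a minimal sketch of what the error is checking (assuming the tiktoken package; gpt-3.5-turbo models use the cl100k_base encoding). A request only succeeds when the prompt tokens plus the requested completion budget fit inside the context window:

```python
import tiktoken  # OpenAI's tokenizer library

CONTEXT_LIMIT = 16385   # context window of gpt-3.5-turbo-0125
MAX_COMPLETION = 2500   # the max_tokens value requested for the completion

def fits_in_context(prompt: str) -> bool:
    # gpt-3.5-turbo models use the cl100k_base encoding
    enc = tiktoken.get_encoding("cl100k_base")
    prompt_tokens = len(enc.encode(prompt))
    # prompt tokens + completion budget must stay inside the window
    return prompt_tokens + MAX_COMPLETION <= CONTEXT_LIMIT
```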

morpheuslord self-assigned this Mar 6, 2024
morpheuslord added the bug label Mar 6, 2024
@morpheuslord (Owner) commented Mar 6, 2024

So I am looking into this issue. The thing is, OpenAI has a token limit, and that's common to all models. The only way to mitigate this is to make the request streamable, and I am working on that correction. This is only an issue with Nmap and its related output; I will be working on it, and maybe in the next update it will be corrected.

Thanks for letting me know 👍
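
A rough sketch of that mitigation (the helper below is illustrative, not the project's actual code): split the oversized Nmap output into token-bounded chunks so each request stays inside the window, assuming the tiktoken package:

```python
import tiktoken

def chunk_by_tokens(text: str, chunk_tokens: int = 12000) -> list[str]:
    """Split text into pieces that each fit inside the prompt budget."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i:i + chunk_tokens])
        for i in range(0, len(tokens), chunk_tokens)
    ]

# Each chunk can then be sent as its own request and the partial
# analyses merged afterwards, instead of one oversized prompt.
```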

@hamperia4 (Author)

Thank you for the feedback.
Yes, I agree; given the size of Nmap responses, this can be expected with OpenAI.
Just out of curiosity: even if you use bigger models like gpt-4-0125-preview and expand the max_tokens value from 2500 to 4096, will this still happen?

@morpheuslord (Owner) commented Mar 6, 2024

> Thank you for the feedback.
> Yes, I agree; given the size of Nmap responses, this can be expected with OpenAI.
> Just out of curiosity: even if you use bigger models like gpt-4-0125-preview and expand the max_tokens value from 2500 to 4096, will this still happen?

I guess yes. It boils down to how much the AI can process and what the optimal parameters for accuracy are. I am pretty sure that's what is going on here. Maybe OpenAI limits the input tokens to maintain consistently accurate output 🤷‍♂️

@hamperia4 (Author)

In my error I can see that OpenAI actually states the limit: the maximum context length is 16385 tokens, but I requested 17216 tokens (14716 in the prompt, plus the max_tokens value of 2500 for the completion).
I believe that with a GPT-4 model this will not be an issue. Let me test this and get back to you.

@morpheuslord (Owner)

> In my error I can see that OpenAI actually states the limit: the maximum context length is 16385 tokens, but I requested 17216 tokens (14716 in the prompt, plus the max_tokens value of 2500 for the completion).
> I believe that with a GPT-4 model this will not be an issue. Let me test this and get back to you.

So the thing is, the output and input limits are different. gpt-4 has more input and output capacity, but it can't leverage that properly with direct prompts like the ones I am using, so I need to modify that.
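
As a back-of-the-envelope check (the context sizes below are the published windows for these models; the helper itself is just illustrative), a larger-context model does avoid this particular error, since prompt tokens plus max_tokens must fit the model's total window:

```python
# Published context windows (prompt and completion share them).
CONTEXT_WINDOWS = {
    "gpt-3.5-turbo-0125": 16385,
    "gpt-4-0125-preview": 128000,  # completion itself is capped at 4096
}

def request_fits(model: str, prompt_tokens: int, max_tokens: int) -> bool:
    """True if the prompt plus the requested completion fit the window."""
    return prompt_tokens + max_tokens <= CONTEXT_WINDOWS[model]

# The failing request from this thread: 14716 + 2500 = 17216 > 16385.
print(request_fits("gpt-3.5-turbo-0125", 14716, 2500))  # False
# The same prompt fits comfortably in gpt-4-0125-preview's window.
print(request_fits("gpt-4-0125-preview", 14716, 2500))  # True
```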

@hamperia4 (Author)

Hello, I have tested this with multiple models and it is still the same, so you are correct.
Is there any update?

@morpheuslord (Owner)

I was kinda busy with uni exams and stuff, so I haven't been able to work on this 😅. I will work on it when I get the time.

morpheuslord added the enhancement label Apr 14, 2024