
Added 128k model and dall-e-3 #448

Merged
merged 12 commits into from
Nov 18, 2023

Conversation

AlexHTW
Contributor

@AlexHTW AlexHTW commented Nov 7, 2023

Hey,

I added the new preview model gpt-4-1106-preview as a new max_tokens category with 31*base. I'm not sure what a good default_max_tokens would be. I also added the gpt-3.5-turbo-1106 model to GPT_3_16K_MODELS.

I also added support for the DALL·E 3 model and some new DALL·E 3-exclusive parameters (quality and style).
Unfortunately, it's only possible to create 1024x1024 images with DALL·E 3, not the also-available landscape and portrait formats. Supporting those will need some changes to the image_prices parameter and usage_tracker.py, but for now I couldn't come up with a solution that won't break legacy .env files and ongoing usage logs :/ I'll keep thinking about a solution.
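One backward-compatible direction could be sketched like this (purely an illustrative sketch, not code from this PR; the LEGACY_SIZES list and the function name are hypothetical): map the new DALL·E 3 landscape/portrait sizes onto the nearest slot of the existing fixed-size price list, so legacy .env files and usage logs keep parsing.

```python
# Hypothetical sketch: keep the legacy three-slot image price list
# (256x256, 512x512, 1024x1024) and map DALL·E 3's extra sizes
# (1792x1024, 1024x1792) onto the largest legacy slot.
LEGACY_SIZES = ["256x256", "512x512", "1024x1024"]

def image_price_index(size: str) -> int:
    """Return the index into the legacy price list for a requested size."""
    if size in LEGACY_SIZES:
        return LEGACY_SIZES.index(size)
    # Landscape/portrait DALL·E 3 sizes fall back to the 1024x1024 slot,
    # so no new config key or usage-log format is required.
    return len(LEGACY_SIZES) - 1
```

This trades exact pricing for the new sizes against keeping old logs readable, which matches the constraint described above.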

I also tried to add the vision preview model, but I couldn't get it to work: OpenAI refused to accept it, and I guess there is also Telegram functionality to be added. So, for another time.
Also, I saw that OpenAI now offers a TTS model; that would be a great addition too.

@alexey-mik

Can you add the new gpt-3.5-turbo-1106 model too? It looks like a drop-in replacement for the current 16k model, just cheaper.

@rokipet

rokipet commented Nov 7, 2023

Can you add the TTS model?

@AlexHTW
Contributor Author

AlexHTW commented Nov 7, 2023

Can you add the new gpt-3.5-turbo-1106 model too? It looks like a drop-in replacement for the current 16k model, just cheaper.

Ah, great, I didn't even notice that, thanks. Added it to GPT_3_16K_MODELS.

@AlexHTW
Contributor Author

AlexHTW commented Nov 7, 2023

Can you add the TTS model?

I'll try as soon as I can, maybe someone else will be quicker :)

@alexey-mik

Ah, great, I didn't even notice that, thanks. Added it to GPT_3_16K_MODELS.

Thanks, you are the best :)

@Jipok

Jipok commented Nov 7, 2023

With OPENAI_MODEL="gpt-4-1106-preview" I get this error:

openai - INFO - error_code=None error_message='max_tokens is too large: 37200. This model supports at most 4096 completion tokens, whereas you provided 37200.' error_param=max_tokens error_type=invalid_request_error message='OpenAI API error received' stream_error=False

Fixed with:

MAX_TOKENS=4096

.env.example Outdated
@@ -34,6 +34,9 @@ ALLOWED_TELEGRAM_USER_IDS=USER_ID_1,USER_ID_2
# TEMPERATURE=1.0
# PRESENCE_PENALTY=0.0
# FREQUENCY_PENALTY=0.0
# IMAGE_MODEL=dall-e-3
# IMAGE_QUALITY=hd
# IMAGE_STYLE=natural
# IMAGE_SIZE=512x512

You suggest dall-e-3 by default, which cannot handle 512x512. If you uncomment it, it will return an error.

Contributor Author


Thanks, I changed the IMAGE_SIZE suggestion in the example env to 1024x1024.
The default values (dall-e-2 and 512x512) are set in main.py.

@AlexHTW
Contributor Author

AlexHTW commented Nov 7, 2023

With OPENAI_MODEL="gpt-4-1106-preview" I get this error: openai - INFO - error_code=None error_message='max_tokens is too large: 37200. This model supports at most 4096 completion tokens, whereas you provided 37200.' error_param=max_tokens error_type=invalid_request_error message='OpenAI API error received' stream_error=False

Fixed with: MAX_TOKENS=4096

Ah, now I understand the difference between the default_max_tokens and __max_model_tokens functions.
I fixed the default values for the new GPT-4 and GPT-3.5 models.
I think __max_model_tokens is unaffected by this and should reflect the full context size?
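The distinction can be sketched like this (a simplified sketch, not the repo's actual code; base = 1200 is inferred from the 37200 = 31 * 1200 in the error message quoted above, and the model tuple is an assumption):

```python
# Simplified sketch of the two token limits discussed in this thread.
# base = 1200 and GPT_4_128K_MODELS are assumptions for illustration.
base = 1200
GPT_4_128K_MODELS = ("gpt-4-1106-preview",)

def default_max_tokens(model: str) -> int:
    # Default cap on *completion* tokens. The new 1106 preview models
    # accept at most 4096 output tokens, so 31 * base (37200) was too big.
    if model in GPT_4_128K_MODELS or model == "gpt-3.5-turbo-1106":
        return 4096
    return base

def max_model_tokens(model: str) -> int:
    # Full *context window* (prompt + completion tokens), used when
    # trimming conversation history; unaffected by the 4096 output cap.
    if model in GPT_4_128K_MODELS:
        return 128_000
    return base * 4
```

So MAX_TOKENS=4096 fixes the completion cap, while the context-window function can still advertise the full 128k.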

Thanks for testing!

@NewFashionD

The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible output, parallel function calls and more. Returns a maximum of 4,096 output tokens.
I think we need to make this fix:
elif model in GPT_3_16K_MODELS:
    if model == "gpt-3.5-turbo-1106":
        return 4096
    return base * 4

@AlexHTW
Contributor Author

AlexHTW commented Nov 7, 2023

The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible output, parallel function calls and more. Returns a maximum of 4,096 output tokens. I think we need to make this fix:

elif model in GPT_3_16K_MODELS:
    if model == "gpt-3.5-turbo-1106":
        return 4096
    return base * 4

Oh snap, nice catch. I added the condition for the wrong model. Fixed, thanks!

@matveyevichevg

Hello! How do I reset user statistics?

@AlexHTW
Contributor Author

AlexHTW commented Nov 16, 2023

Hello! How do I reset user statistics?

Hey! You can delete the usage log .json files for the users you want to reset, or set all values of current_cost inside the .json to 0 (that way you keep the history of their usage).
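The second option could be scripted roughly like this (a hedged sketch; the exact layout of the usage-log JSON and the directory name are assumptions, so check one of your files first):

```python
import json
from pathlib import Path

def reset_current_cost(usage_dir: str) -> None:
    """Zero every value under 'current_cost' in each per-user usage log,
    keeping the rest of the file (the usage history) intact.
    The 'current_cost' layout is an assumed dict of numeric values."""
    for log_file in Path(usage_dir).glob("*.json"):
        data = json.loads(log_file.read_text())
        for key in data.get("current_cost", {}):
            data["current_cost"][key] = 0
        log_file.write_text(json.dumps(data, indent=2))
```

Back up the log directory before running anything like this against real data.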

@n3d1117
Owner

n3d1117 commented Nov 16, 2023

Hey everyone and @AlexHTW, thank you so much for your awesome contributions! 🎉 I will be testing these changes in the next couple of days

@AlexHTW
Contributor Author

AlexHTW commented Nov 16, 2023

Hey everyone and @AlexHTW, thank you so much for your awesome contributions! 🎉 I will be testing these changes in the next couple of days

Awesome :)
Nice to hear from you!

I am still struggling to come up with a clean solution for the image prices of sizes other than the originally planned ones.
It would be best to use a single value for the image price, not a fixed-size list.
However, unless we include some "legacy support" calculation, the old log files will probably have to be deleted.

Ugh, I am rambling ... I'll try to do something about this problem if I can find the time.

@n3d1117
Owner

n3d1117 commented Nov 18, 2023

Did a couple of tests and looks good to me! Thanks @AlexHTW, you could maybe open an issue for the image prices thing, so we can keep track of it!

@n3d1117 n3d1117 merged commit 890a5f2 into n3d1117:main Nov 18, 2023
@AlexHTW
Contributor Author

AlexHTW commented Nov 18, 2023

Did a couple of tests and looks good to me! Thanks @AlexHTW, you could maybe open an issue for the image prices thing, so we can keep track of it!

will do, thanks!

@mastermind091

mastermind091 commented Nov 21, 2023

hello,

I installed the bot with these parameters:

#IMAGE_MODEL=dall-e-3
# IMAGE_QUALITY=hd
# IMAGE_STYLE=natural
  • gpt_4_preview in models

but obviously it still generates pictures like DALL·E 1.

Where could the mistake be?

@AlexHTW
Contributor Author

AlexHTW commented Nov 21, 2023

Hey @mastermind091,

make sure to remove the `#` from your .env file:

IMAGE_MODEL=dall-e-3
IMAGE_QUALITY=hd
IMAGE_STYLE=natural

@mastermind091

Hey @mastermind091,

make sure to remove the `#` from your .env file:

IMAGE_MODEL=dall-e-3
IMAGE_QUALITY=hd
IMAGE_STYLE=natural

thanks! it was soooo simple but really helpful : ))


9 participants