The App Starts Rendering with Partial Code #1
Comments
Wow, yes, you are right! I'm not sure how to solve this, let me try some things...
Hi @AnonymoZ! I'm working on a solution right now. Anyway, if you see any other issues in the app, please open another issue. Thank you so much!
The cause of the problem was the response token limit of 300. For example, for "swing a double pendulum", GPT-4 would generate:
I increased the default token limit to 400, and if users add their own API key the limit goes up to 1200. I also added a button to download the generated Python code, to make these kinds of cases easier to debug... So if the animation fails, we can see where it went wrong...
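The two-tier token budget described above can be sketched as a small helper. The names here (`choose_max_tokens`, `user_api_key`) are illustrative, not the actual identifiers in the repo:

```python
from typing import Optional

# Illustrative sketch of the budget rule described above; the actual
# variable and function names in Generative-Manim may differ.
DEFAULT_MAX_TOKENS = 400    # budget when the app's shared key is used
USER_KEY_MAX_TOKENS = 1200  # budget when the user supplies their own key

def choose_max_tokens(user_api_key: Optional[str]) -> int:
    """Pick the completion token limit for a generation request."""
    return USER_KEY_MAX_TOKENS if user_api_key else DEFAULT_MAX_TOKENS
```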
I updated the generated code to accept the
Hey, do you know the conversion from tokens to characters? You can use this for your estimate (it seems ChatGPT can't count the number of lines without first generating the code):
I don't quite understand this. You can find the conversion from tokens to characters with the OpenAI tokenizer. Can you give me more details of what you mean, please?
I already ask ChatGPT to generate code without comments or explanations in the system instructions (check utils.py, line 3). Sometimes it does not obey 😅 and I don't know how to enforce this... The variable-naming idea is interesting, since it will save characters too, so I'll add it. Thank you!
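When the model ignores the system instruction, a crude post-processing pass can at least drop markdown fences and full-line comments before the code is used. This is a hedged sketch, not what utils.py actually does; inline comments and docstrings are deliberately left alone, since stripping those with a regex risks corrupting string literals:

```python
def strip_noise(gpt_output: str) -> str:
    """Drop markdown code fences and full-line '#' comments from a reply.

    Sketch only: inline comments and docstrings are kept, because
    removing them naively can mangle string literals.
    """
    kept = []
    for line in gpt_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("```"):  # markdown fence around the code
            continue
        if stripped.startswith("#"):    # full-line comment
            continue
        kept.append(line)
    return "\n".join(kept)
```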
You know, I'm tapping into a greater problem as we speak. Essentially, a token is close to a character. A token is really about ¾ of a word, but let's treat the token-to-character ratio as 1:1. Now, how many lines does that make? With 70 characters per line on average, 300 tokens comes out to roughly 4 lines. But if the problem can be solved in 2–3 lines, we can go ahead and ask ChatGPT and hope the answer falls within quota, even if emergency funds have to be used. But how do we know in advance how many lines a program will need? The best we can do is estimate the line count beforehand. ChatGPT's limit has prompted many a circumvention attempt; sadly, none has fully succeeded. There is a trick of asking ChatGPT to answer in chunks: tell it to only give the next chunk when I say "next". But this only works with textual answers. ChatGPT cannot cut a code snippet in half and output the second half when you say "next". As we speak, I think I know a way of circumventing that; maybe we could combine these techniques for long code. Try this:
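The back-of-envelope arithmetic above can be written down, so the app could warn up front when a prompt obviously won't fit the budget. Both constants are assumptions taken from this thread, not measured values; OpenAI's own rule of thumb is closer to ~4 characters per token for English prose:

```python
CHARS_PER_TOKEN = 1.0  # pessimistic 1:1 assumption from this thread
CHARS_PER_LINE = 70    # average Python line width assumed above

def estimate_lines(token_budget: int) -> int:
    """Roughly how many code lines fit inside a completion token budget."""
    return int(token_budget * CHARS_PER_TOKEN) // CHARS_PER_LINE
```

With the original 300-token limit this comes out to about 4 lines, matching the estimate above; at the 1200-token user-key budget it is about 17.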
Sometimes you just need to speak a language that it understands! 😉 Furthermore, there are deeper problems with the entire concept of Generative-Manim itself. Anyway, I don't know if ChatGPT prefers to use ManimGL, but it sometimes generated code using Mesh(..), which is not a ManimCE function. The other day it did a Mandelbrot with Mandelbrot(..); I never knew there was a Mandelbrot function in Manim. I wanted to present another bizarre function, but it seems Google Chrome closed my tab, so I lost it. It looked like this: Another suggestion could be to add a tab explaining what you know works, so people deal with fewer errors and don't have to guess at something they hope will work.
Yes. It's very hard dealing with code split into chunks by ChatGPT.
Good idea! I can add NumPy as a package. Or even just let ChatGPT generate the whole file (with the necessary imports), but that comes with the task of verifying that the required packages are installed before running the code.
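That verification step could be a small pre-flight check: parse the generated file, collect its top-level imports, and report the ones that are not installed. A sketch assuming the generated code is at least valid Python (the helper name is hypothetical):

```python
import ast
import importlib.util

def missing_packages(code: str) -> list:
    """Return top-level modules imported by `code` that are not installed."""
    needed = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            # e.g. "import numpy as np" -> "numpy"
            needed.update(a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            # e.g. "from manim import Scene" -> "manim"
            needed.add(node.module.split(".")[0])
    return sorted(m for m in needed if importlib.util.find_spec(m) is None)
```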
Do you think this has to do with the old documentation or the old version it learned from? Remember that ChatGPT only has information up to 2021. If that's the case, I was thinking of trying a new experiment, as many people are doing: linking it to LangChain so that it learns from the latest stable Manim documentation.
Is this like a section to let people know what kinds of animations they can generate? Or do you mean something else?
Yeah, like a list of successful prompts with proven responses. Even if your code is perfect, ChatGPT will occasionally make mistakes.
Your app is good and, to some extent, brilliant.
However, it seems Streamlit starts rendering after receiving only partial code, not the complete code for an animation.
This, obviously, causes the animation not to render.
Here, try this for example:
swing a double pendulum
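One cheap guard against exactly this failure mode (a hypothetical helper, not something the app does today): try to parse the reply before handing it to the renderer. A completion cut off mid-statement by the token limit usually fails to parse, although a truncation that happens to end exactly at a statement boundary would still slip through:

```python
import ast

def is_complete_python(code: str) -> bool:
    """True if `code` parses as complete Python source."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:  # includes "unexpected EOF" from truncation
        return False
```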