
The App Starts Rendering with Partial Code #1

Closed
AnonymoZ opened this issue Mar 23, 2023 · 10 comments

@AnonymoZ

Your app is good and, to some extent, brilliant.
However, it seems Streamlit starts rendering after receiving only partial code, not the complete code for an animation.
This obviously causes the animation not to render.
Try this prompt, for example:
swing a double pendulum

@360macky (Owner)

Wow, yes, you are right! I'm not sure how to solve this, let me try some things...

@360macky (Owner)

Hi @AnonymoZ! I'm working on a solution right now. In the meantime, if you see any other issues in the app, please open another issue. Thank you so much!

360macky self-assigned this Mar 24, 2023
360macky added the bug label Mar 24, 2023
@360macky (Owner)

The cause of the problem was the token limit of 300 for the response. For example, for "swing a double pendulum", GPT-4 would generate:

from manim import *

class GenScene(Scene):
  def construct(self):
    
    
    # Constants
    G = 9.81
    LENGTH1 = 2
    LENGTH2 = 1
    MASS1 = 20
    MASS2 = 10
    ANGLE1 = PI/2
    ANGLE2 = PI/2
    ANGLE1_VELOCITY = 0
    ANGLE2_VELOCITY = 0
    ANGLE1_ACCELERATION = 0
    ANGLE2_ACCELERATION = 0
    dt = 1/60
    
    # Convert degree to radian
    def deg_to_rad(degrees):
    return degrees*DEGREES
    
    # Pendulum
    def update_pendulum(pendulum, angle1, angle2):
      pendulum[0].next_to(ORIGIN, RIGHT)
      pendulum[1].next_to(pendulum[0], DOWN).rotate(angle1, about_point=pendulum[0].get_center())
      pendulum[2].next_to(pendulum[1], DOWN).rotate(angle2, about_point=pendulum[1].get_center())
    
    # Calculation
    def update_angles(angle1, angle2, vel1, vel2, acc1, acc2):
      num #And here is where it stops

I increased the token limit to 400 by default, but if users add their own API key, the limit goes up to 1200.
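
For reference, the limit can be passed straight to the completion call. This is only a rough sketch using the openai Python package as it was at the time (the ChatCompletion API); the variable names and the way the two limits are chosen here are assumptions, not the actual code in utils.py:

import openai

user_api_key = None  # hypothetical: set when the user pastes their own key
openai.api_key = user_api_key or "sk-..."  # placeholder key

# Use a larger completion budget when the user supplies their own API key
max_tokens = 1200 if user_api_key else 400

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Write Manim code for the described animation."},
        {"role": "user", "content": "swing a double pendulum"},
    ],
    max_tokens=max_tokens,
)
generated_code = response["choices"][0]["message"]["content"]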

I also added a button to download the generated Python code, in order to debug these kinds of cases. So if the animation fails, we can see where it went wrong.
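
The download button itself maps nicely onto Streamlit's built-in st.download_button. A minimal sketch, with a placeholder string standing in for the model output and an example file name:

import streamlit as st

generated_code = "from manim import *\n..."  # the string returned by the model, as in the sketch above

# Offer the generated script for download so failed renders can be inspected
st.download_button(
    label="Download Python code",
    data=generated_code,
    file_name="generated_scene.py",
    mime="text/x-python",
)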

@360macky (Owner)

I updated the generated code to accept the math package, to handle these complex prompts.

@AnonymoZ (Author) commented Mar 29, 2023

Hey, do you know the conversion from tokens to characters?
Once we determine how much ChatGPT can output, we'll be able to tell the user beforehand whether their request is possible or not.
Also, ask ChatGPT to generate without any comments, and to use a maximum of two characters when naming functions and variables. I bet this will save us characters!

You can use this for your estimate (it seems ChatGPT can't count the number of lines without first generating the code):
“Roughly estimate the number of lines of a Python program that calculates how many days Obama has lived. Don't tell me any details what would you do to achieve it. Just give me a number.”

@360macky (Owner) commented Apr 2, 2023

Hey, do you know the conversion from tokens to characters? Once we determine how much ChatGPT can output, we'll be able to tell the user beforehand whether their request is possible or not.

I don't quite understand this. You can find the conversion from tokens to characters with the OpenAI tokenizer. Can you give me more details about what you mean, please?
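
For a programmatic conversion, OpenAI's tiktoken package exposes the same tokenizer, so the app could count tokens directly. A quick sketch:

import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")
text = "swing a double pendulum"
tokens = encoding.encode(text)

# Rough rule of thumb: one token is about 4 characters of English text
print(len(tokens), "tokens for", len(text), "characters")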

Also, ask ChatGPT to generate without any comments, and to use a maximum of two characters when naming functions and variables. I bet this will save us characters!

I already ask ChatGPT to generate without comments or explanations in the system instructions (check utils.py, line 3). Sometimes it does not obey 😅 I don't know how to enforce this...

The variable naming is interesting, since it will save characters too, so I'll add it. Thank you!

@AnonymoZ (Author) commented Apr 2, 2023

You know, I'm tapping into a greater problem as we speak.

Essentially, a token is a character. Of course, a token is really about ¾ of a word in length, but let's pretend the token-to-character ratio is 1:1.
This means that if your token limit is 300, you have 300 characters. In reality it is more than that, but let's treat the excess as a 'gift' from GPT, keep it as emergency funds, and not plan our code beyond 300 characters.

Now, how many lines does this make? At 70 characters per line on average, let's put it at 4.
With 4 lines allowed per program, we have to decide for each user request whether to ask ChatGPT to produce the code or not.
If the problem clearly requires 10 lines or more, we will not grant the request and may ask the user to provide their own API key.

But if the problem can be solved in 2–3 lines, we will proceed to ask ChatGPT and hope it falls within quota, even if the emergency funds have to be used. But how do we know how many lines a program will need?
That's the thing: ChatGPT can count the lines in its code, but only after writing the script.

The best we can do is estimate beforehand how many lines the program will have.
In the Obama example above, for instance, ChatGPT confidently estimates the number of lines the program will take. We could use that as a basis for deciding whether a query is appropriate or not.
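
If we wanted to turn that idea into a gate, it could look roughly like this. This is only a sketch: the prompt wording, the helper names, and the thresholds are hypothetical and just mirror the numbers discussed above, not anything in Generative-Manim:

import openai

TOKEN_BUDGET = 300     # default completion budget discussed in this thread
CHARS_PER_LINE = 70    # the rough average assumed above

def estimate_lines(request: str) -> int:
    # Ask the model for a rough line-count estimate before generating any code
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Roughly estimate the number of lines of a Manim program that does the following: "
                f"{request}. Don't explain anything. Just give me a number."
            ),
        }],
        max_tokens=5,
    )
    return int(response["choices"][0]["message"]["content"].strip())

def fits_budget(request: str) -> bool:
    # Treating one token as roughly one character, the budget allows about
    # TOKEN_BUDGET / CHARS_PER_LINE lines of generated code
    allowed_lines = TOKEN_BUDGET // CHARS_PER_LINE
    return estimate_lines(request) <= allowed_lines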

ChatGPT's limit has prompted many attempts at circumvention; sadly, none has succeeded. The usual trick is to ask ChatGPT to answer in chunks: tell it to only give the next chunk when you say 'next'. But this only works with textual answers. ChatGPT cannot cut a code snippet in half and output the next part when you say 'next'.

As we speak, I think I know a way of circumventing that; maybe we could combine these techniques for long code. Try this:

Write a program in Python that calculates the sum of the squares of all primes up to 50.
Do not format your code as code.
Do not explain the code using comments.
Do not write any English communication in your answer.

Sometimes you just need to speak a language that it understands! 😉

Furthermore, there are deeper problems with the entire concept of Generative-Manim itself.
Let's look at the more innocent problems first. Allow GPT to use NumPy instead of just math. This allows arrays, which could be more useful to GPT. Why cut off access to its resources and force it to solve problems the hard way? Also, add in other modules, such as NetworkX (I know a lot of people like to work with graphs) and random, just in case.

Anyway, I don't know if ChatGPT prefers ManimGL, but it sometimes generates code using Mesh(..), which is not a ManimCE function. The other day it did a Mandelbrot set with Mandelbrot(..). I never knew there was a Mandelbrot function in Manim.

I wanted to present another bizarre function, but it seems Google Chrome closed my tab, so I lost it. It looked like this:
MoveDotToCenter(parameter)
It definitely does not exist in Manim.

Another suggestion could be to add a tab explaining what you know works, so people deal less with errors and with having to guess at something they think will work.

@360macky (Owner) commented Apr 9, 2023

ChatGPT's limit has prompted many attempts at circumvention; sadly, none has succeeded. The usual trick is to ask ChatGPT to answer in chunks: tell it to only give the next chunk when you say 'next'. But this only works with textual answers. ChatGPT cannot cut a code snippet in half and output the next part when you say 'next'.

Yes. It's very hard dealing with code split into chunks by ChatGPT.

Let's look at the more innocent problems first. Allow GPT to use NumPy instead of just math. This allows arrays, which could be more useful to GPT. Why cut off access to its resources and force it to solve problems the hard way? Also, add in other modules, such as NetworkX (I know a lot of people like to work with graphs) and random, just in case.

Good idea! I can add NumPy as a package, or even just let ChatGPT generate the whole file (with the necessary imports), but this comes with the task of verifying that the required packages are installed before running the code.
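
For the verification step, a small check with importlib.util.find_spec is probably enough. A sketch, with an example package list that is not the project's actual configuration:

import importlib.util

REQUIRED_PACKAGES = ["manim", "numpy", "networkx"]  # example list only

def missing_packages(names):
    # Return the packages that cannot be imported in the current environment
    return [name for name in names if importlib.util.find_spec(name) is None]

missing = missing_packages(REQUIRED_PACKAGES)
if missing:
    raise RuntimeError("Install these packages before rendering: " + ", ".join(missing))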

Anyway, I don't know if ChatGPT prefers ManimGL, but it sometimes generates code using Mesh(..), which is not a ManimCE function. The other day it did a Mandelbrot set with Mandelbrot(..). I never knew there was a Mandelbrot function in Manim.

Do you think this has to do with the old documentation or the old version it learned from? Remember that ChatGPT has information only up to 2021. If that is the case, I was thinking of trying a new experiment, as many people are doing: linking it to LangChain so that it can use the latest stable Manim documentation.

Another suggestion could be to add a tab explaining what you know works, so people deal less with errors and with having to guess at something they think will work.

Is this like a section to let people know what kind of animations they can generate? Or do you mean something else?

@AnonymoZ (Author)

Yeah, like a list of successful prompts with proven responses.
Honestly, I've had no problems using ChatGPT for ManimCE, but only as long as I've specified that the code had to be for CE!
Of course, I've had to send the code back because it didn't always realise which was CE and which was GL.
Whether specifying GL from the start would give you flawless code, I don't have the expertise to test, although it is unlikely ChatGPT produces perfect code even for GL.

Even if your code is perfect, ChatGPT will occasionally make mistakes.
The ultimate solution IMO (aside from perfecting the app here and there) is to wait for GPT-4's public release.
GPT-4 presumably makes fewer mistakes, but crucially has a bumped limit of up to ~30,000 tokens!
That should give you a lot more room!

@360macky (Owner)

GPT-4 presumably makes fewer mistakes, but crucially has a bumped limit of up to ~30,000 tokens! That should give you a lot more room!
Yes, definitely 😁 I've even been thinking of creating a ChatGPT plugin, since I got access to that as well.

360macky closed this as not planned on Aug 26, 2023