World.step is buggy when attempting to account for variable framerate #16

Closed · codynova opened this issue Apr 6, 2020 · 20 comments · Fixed by #40

Labels: enhancement (New feature or request)

codynova commented Apr 6, 2020

When passing a timeSinceLastCalled value to the world.step method, the simulation performance quickly degrades beyond usability.

Example usage: Cannon demo

cannon.js issue 371

Article: Fix your timestep

Spectrum chat: World step does not adapt to framerate

Per Schteppe (schteppe#371):

Perhaps the number of substeps are maxed out? This could happen if you, for example, pass in deltaTime in milliseconds instead of seconds... or if you accidentally calculate the value wrong for the first frame (there is no “previous” time value). or if the first deltaTime values are too large (they usually are larger during the first frames while the JIT is warming up).
What happens if you pass in 1000 substeps instead of 10? If you have implemented it correctly, changing the number of substeps should do nothing.
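
For reference, a minimal sketch of how world.step is meant to be driven with a variable framerate, following the cannon.js docs and the "Fix your timestep" pattern (the 1/60 fixed step and maxSubSteps value here are illustrative choices, not values taken from this issue):

```ts
import { World } from 'cannon-es'

const world = new World()
const fixedTimeStep = 1 / 60 // seconds, per Schteppe's warning about units
const maxSubSteps = 3

let lastTime: number | undefined

function animate(time: number) {
  requestAnimationFrame(animate)
  // Skip the first frame: there is no "previous" time value yet.
  if (lastTime !== undefined) {
    const dt = (time - lastTime) / 1000 // ms -> seconds
    world.step(fixedTimeStep, dt, maxSubSteps)
  }
  lastTime = time
}
requestAnimationFrame(animate)
```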

Platane commented Apr 7, 2020

I added support for timeSinceLastCalled in the worker, on this fork: Platane/use-cannon

I updated the demo to highlight issues. The ping pong demo is particularly relevant.

frame rate demo

It's not easy to catch since the gif framerate is also super low, but you can notice that:

  • first it runs well at 60fps, with maxSubSteps set to 1, which is equivalent to not using timeSinceLastCalled
  • then I lower the framerate by running junk code every frame; the simulation seems to slow down (but not the paddle)
  • when I increase maxSubSteps, it runs more than one internal step each frame in order to match the actual elapsed time rather than depend on the frame rate

You can try it here: demo/Pingpong

So far I have not encountered the wacky simulation explosions referenced in schteppe#371


There's one thing I find odd in the implementation: if maxSubSteps is not sufficient to catch up with the actual time, the accumulator builds up seconds of delay. So when the framerate gets better, the simulation actually runs faster for a few seconds in order to drain the accumulator!

frame rate demo

Here you can see in the log that maxSubSteps is not sufficient, so the accumulator fills up.
Once I release the pressure on the frame rate, the simulation speeds up (again, hard to see in the gif :x) until the accumulator is emptied.

One strategy I would prefer is to prevent the accumulator from building up more than X·dt of delay, as in the sketch below.
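
A sketch of that clamping strategy (an illustration of the idea, not the library's actual implementation; `internalStep` stands in for cannon's fixed-dt integration):

```ts
class AccumulatingStepper {
  accumulator = 0

  constructor(private internalStep: (dt: number) => void) {}

  step(dt: number, timeSinceLastCalled: number, maxSubSteps: number) {
    this.accumulator += timeSinceLastCalled
    // Clamp: never owe more than maxSubSteps * dt of simulation time,
    // so a long stall cannot make the simulation fast-forward afterwards.
    this.accumulator = Math.min(this.accumulator, maxSubSteps * dt)

    let substeps = 0
    while (this.accumulator >= dt && substeps < maxSubSteps) {
      this.internalStep(dt)
      this.accumulator -= dt
      substeps++
    }
  }
}
```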

codynova commented Apr 7, 2020

I'm seeing the same issue here that I was seeing in my own local attempt at implementing timeSinceLastCalled, and that I mentioned in the issue on Schteppe's GitHub.

Leave the wasted frames set to 0 and leave maxSubSteps at 1. Watch the falling instanced cubes demo: the framerate looks good, but switching between browser tabs causes some of the cubes to freeze in midair until they despawn. This doesn't seem to happen on the current use-cannon@0.4.2.

Leave the wasted frames set to 0 and turn maxSubSteps up to 7. Watch the falling instanced cubes demo again: the framerate degrades heavily after a while.

Platane commented Apr 8, 2020

The tab switching makes sense: since requestAnimationFrame gets paused, the delta when focusing back is huge.


About the cube heap, it's pretty interesting: it looks like the simulation takes too long to be rendered at live speed. It takes more than dt to compute step(dt), which builds up a delay that needs to be caught up (the accumulator prop), eventually filling all the available maxSubSteps.

In that case it is indeed a better solution to slow down the simulation, giving up on rendering at live speed.

One obvious fix could be to prevent the accumulator from growing out of control, as I don't see any situation where unbounded growth leads to expected behaviour.

We could also detect simulations that render too slowly and short-circuit them for the sake of keeping a good framerate, at the cost of slowing down the simulation.


About the cubes freezing in midair, I am not entirely sure, but I suspect it has something to do with the cubes going into a broken sleep mode: it seems that the parameter used to determine whether an object sleeps is this.time, which does not take the accumulator into account.

this.time is increased by an amount that might be higher than the time the simulation has actually advanced; the sketch below illustrates the mismatch.
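
An illustration of the suspected mismatch (the numbers and the substep formula are assumptions based on the description above, not verified against the cannon source):

```ts
let worldTime = 0     // stands in for this.time, advanced by the real frame delta
let simulatedTime = 0 // time actually integrated by the internal steps
const dt = 1 / 60
const maxSubSteps = 1

function frame(timeSinceLastCalled: number) {
  worldTime += timeSinceLastCalled
  const substeps = Math.min(Math.floor(timeSinceLastCalled / dt), maxSubSteps)
  simulatedTime += substeps * dt
}

frame(0.1) // one 100 ms frame: worldTime = 0.1, simulatedTime ≈ 0.0167
// A sleep check keyed on worldTime now believes far more time has passed
// than the bodies have actually been integrated for, so a briefly slow
// body can be put to sleep prematurely and appear frozen in midair.
```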

marcofugaro commented Apr 8, 2020

Is this PR somehow related? I think we should bring that PR in as well; it was closed because it didn't receive any attention.

Platane commented Apr 8, 2020

Yes, it seems so, although it also brings a questionable feature ("timeSinceLastCalled = -1 instead of 0").

I am not sure it addresses the cubes freezing in midair; I am looking into that.

marcofugaro commented Apr 9, 2020

You're right, timeSinceLastCalled === -1 is weird. I would personally just check whether the argument wasn't passed at all:

if (timeSinceLastCalled === undefined) {

and stop there. Get rid of the timeSinceLastCalled = 0 default and update its type to an optional number.

Can you do that in your existing PR? Or should we do a different PR?
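
For context, a fleshed-out sketch of that check (assuming cannon's World shape; the step bodies are elided, and this is the suggestion being discussed, not the merged code):

```ts
class World {
  time = 0

  internalStep(dt: number): void {
    // fixed-dt integration elided
  }

  step(dt: number, timeSinceLastCalled?: number, maxSubSteps = 10): void {
    if (timeSinceLastCalled === undefined) {
      // Argument omitted: plain fixed-step behaviour, no accumulator.
      this.internalStep(dt)
      this.time += dt
      return
    }
    // Otherwise run the variable-framerate substepping path (elided).
  }
}
```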

Platane commented Apr 10, 2020

@marcofugaro agreed 8c8aea5

Tbh I'd rather have two methods. How about renaming internalStep to step and allowing it to be called publicly, and renaming the current step to something like executeSteps, enforcing the existence of all 3 params? Something like the sketch below.
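
A sketch of the split being floated here (the names are hypothetical, and as the next comment explains, this was never adopted):

```ts
interface SteppableWorld {
  // The old internalStep, made public: advance exactly one fixed step.
  step(dt: number): void
  // The old step, renamed, with all three params required.
  executeSteps(dt: number, timeSinceLastCalled: number, maxSubSteps: number): void
}
```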

marcofugaro commented

That's a big API change 😬 Maybe if this library were a standalone physics library instead of a fork of cannon, we could have done it.

The only valid API changes I see are the ones that modernize the library and the ones that make it more compatible with Three.js.

Platane commented Apr 10, 2020

Yes, you are right. I was thinking out loud :)

codynova commented Apr 10, 2020

This looks good and is working well now, except for the freezing cubes (no framerate degradation in my testing). We should wait to merge until that issue is resolved.

I agree with Marco regarding API changes, we should strive to make as few API changes as possible, and maintain backwards compatibility where we can.

@Platane Great work on this so far, I think supporting variable framerate would be a huge feature.

EDIT: I spoke too soon regarding framerate performance

codynova commented Apr 10, 2020

@Platane I spoke too soon in my last comment: the framerate is still degrading heavily over time on the latest version of your fork with maxSubSteps set to 7.

I wonder if this commit might give us some insight into how the accumulator was expected to be used, and why this.time was incremented in both step and internalStep: schteppe@e42557a

If all else fails, we could tweet Schteppe and see if he remembers...

Platane commented Apr 10, 2020

Ok, I will check. What environment are you using? I can't reproduce the frame rate drop on desktop Chrome :/

codynova commented Apr 10, 2020

I'm using Google Chrome Version 81.0.4044.92 (Official Build) (64-bit), on Windows 10 x64, on an extremely powerful desktop (Intel i7-8700K@3.7, Nvidia 1080TI, 16GB DDR4 4400 RAM). I can test with a Mac on Chrome and Safari later today.

I just set the maxSubSteps to 7 and leave it running for 5 - 10 minutes or so. I can time how long it takes to degrade and maybe record a video later today if that will help.

The framerate degradation is mentioned in cannon.js issue 371, along with this quote from Schteppe:

Perhaps the number of substeps are maxed out? This could happen if you, for example, pass in deltaTime in milliseconds instead of seconds... or if you accidentally calculate the value wrong for the first frame (there is no “previous” time value). or if the first deltaTime values are too large (they usually are larger during the first frames while the JIT is warming up).
What happens if you pass in 1000 substeps instead of 10? If you have implemented it correctly, changing the number of substeps should do nothing.

codynova added the enhancement label Apr 11, 2020

codynova commented Apr 11, 2020

@Platane I may have made some mistake in my testing. I set everything up from scratch and stuff is looking much better.

I don't think I'm able to reproduce the cubes freezing in midair anymore, are you? Okay, I was finally able to reproduce a couple of freezing cubes, but the problem is very intermittent. I thought it might be related to stopping the process the local dev server runs on, or maybe hot reloading, but I can't seem to pinpoint the issue.

It also looks like the framerate degradation may be completely gone, although I want to leave it running for longer before I say for sure. However, when setting maxSubSteps to anything other than the default 1, the simulation seems to speed up significantly. That's not expected, correct?

Platane commented Apr 13, 2020

Ok good, that seems consistent with what I experienced so far.

The simulation might appear sped up when increasing maxSubSteps if one step per frame was not sufficient to render at live speed.

I am focusing on the freezing cube issue. I don't see it as often as before, but still more often than never (which is the case in the current stable version).

codynova commented

Would you expect the simulation to stay sped up consistently? It seems like it should catch up at some point, but that doesn't seem to be the case when testing the cube heap demo.

Platane commented Apr 13, 2020

I think the sped-up version is the actual speed. It was previously slower because we were restricted to one step (1/60 s of simulation) per animation frame (> 1/60 s of real time).
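
A worked example of that arithmetic, with illustrative numbers: at 30 fps real time with a 1/60 s fixed step, maxSubSteps = 1 caps the simulation at half real-time speed, while 2 substeps keep up:

```ts
const frameTime = 1 / 30 // ~33 ms between animation frames
const dt = 1 / 60        // fixed simulation step, ~16.7 ms per substep

const speedWithOneSubstep = (1 * dt) / frameTime  // 0.5x real time
const speedWithTwoSubsteps = (2 * dt) / frameTime // 1.0x real time

console.log(speedWithOneSubstep, speedWithTwoSubsteps)
```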

codynova commented Apr 13, 2020

But wouldn't it only be slower if it wasn't achieving 60fps? I thought the whole point of variable-framerate stepping was to achieve the same simulation speed between high-performance 60fps code and low-performance variable-framerate code?

My concern is that the non-variable step that's been the default in Cannon for years has set an expectation for the speed of the simulation when it's achieving 60fps, and I would expect the substep method to run at that same speed. That seems to be what Schteppe implies with his comments as well.

In other words, I would have expected the 1/60 constant-step simulation to be at max speed when running at 60fps, and wouldn't have expected the variable-step simulation to always exceed that speed.

Platane commented Apr 14, 2020

Yes, that's the goal. Unless something goes wrong in the current version (which is probable), it should do exactly that.

codynova commented Jul 6, 2020

Released fix in cannon-es@0.15.0
