
Improved performance of particle container for small amounts #1939

Closed · wants to merge 6 commits

Conversation

@bchevalier
Author

The particle container always uploads a full batch (15,000 particles). When the number of particles is smaller, this becomes a huge performance drawback in terms of CPU.
In my use case (a maximum of roughly 500 sprites), bufferSubData accounts for 25% of my whole program's CPU usage. After this optimisation it accounts for only 5%.
This is far from negligible when developing for smartphones, where the number of particles will max out around 1000.
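For illustration, a minimal sketch of the idea: upload only the slice of the vertex data that is actually in use, rather than the full batch every frame. All names here (`uploadParticles`, `floatsPerParticle`, and so on) are illustrative assumptions, not the actual pixi.js internals.

```js
// Sketch (illustrative names, not the real pixi.js internals):
// upload only the portion of the vertex buffer that is actually used,
// instead of the full 15000-particle batch on every frame.
var floatsPerParticle = 4 * 5; // e.g. 4 verts * (x, y, u, v, alpha); an assumption

function uploadParticles(gl, glBuffer, vertices, particleCount) {
    gl.bindBuffer(gl.ARRAY_BUFFER, glBuffer);
    // subarray() returns a view, not a copy, so there is no large allocation,
    // but a new view object is still created each call, which may relate to
    // the GC stalls mentioned in #1926.
    var used = vertices.subarray(0, particleCount * floatsPerParticle);
    gl.bufferSubData(gl.ARRAY_BUFFER, 0, used);
}
```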

@JiDW
Contributor

JiDW commented Jul 3, 2015

Ohhh so that's why it was slower than Container. Good job!

@englercj
Member

englercj commented Jul 3, 2015

This looks awesome, but according to #1926 it can cause GC stalls. We need to figure out if subarray is the cause of this before merging.
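If subarray turns out to be the culprit, one possible workaround would be to cache the view and only recreate it when the particle count changes. A sketch under that assumption, with illustrative names:

```js
// Sketch: reuse one cached subarray view so a new view object is only
// allocated when the particle count actually changes, not on every frame.
// All names here are illustrative, not the pixi.js API.
var cachedView = null;
var cachedCount = -1;

function getUploadView(vertices, particleCount, floatsPerParticle) {
    if (particleCount !== cachedCount) {
        cachedView = vertices.subarray(0, particleCount * floatsPerParticle);
        cachedCount = particleCount;
    }
    return cachedView;
}
```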

@GoodBoyDigital
Member

We could create an array that 'grows' based on how much is required (if it's too small we double the size).

@bchevalier
Author

The problem is that, if the buffer does not get downsized, someone who empties his particle container might wonder why he still gets poor performance. Would it be acceptable in terms of performance to make the vertex buffer shrink as well whenever the number of particles gets too small?

@davedx
Contributor

davedx commented Jul 3, 2015

IMHO the sensible way to use a particle system is to create a fixed amount of particles that you then switch on/off as you need them. In other words, you allocate however many you need for your buffers, and take the hit of resizing them if, on the off chance, you need to add/remove some permanently in that frame.

So what I reckon is the buffer should always be exactly the size needed to render the current amount of sprites/particles.
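A minimal sketch of that fixed-pool pattern from user code, assuming a `texture` and a ParticleContainer named `container` already exist (this is not part of the PR, just an illustration of the approach described above):

```js
// Sketch of the fixed-pool approach: allocate every sprite up front and
// toggle visibility, so buffer sizes never change at runtime.
// `texture` and `container` are assumed to exist in user code.
var POOL_SIZE = 500; // sized for the worst case up front
var pool = [];

for (var i = 0; i < POOL_SIZE; i++) {
    var sprite = new PIXI.Sprite(texture);
    sprite.visible = false;          // off until needed
    container.addChild(sprite);
    pool.push(sprite);
}

function spawn() {
    for (var j = 0; j < pool.length; j++) {
        if (!pool[j].visible) {      // reuse the first inactive sprite
            pool[j].visible = true;
            return pool[j];
        }
    }
    return null;                     // pool exhausted
}
```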

@bchevalier
Author

Well, I would like this container to be as flexible as possible. As a user I do not want to think about the number of particles I might need, nor worry about the performance drawback if I do not use the full buffer.

In my use case I have a particle container that displays from 50 to 500 particles (could be more, there is no actual upper bound), but on average it is around 100. If I have to decide the container size beforehand, I would pick something like 700 to be safe, but then I get performance 7 times slower than what I need on average.

@englercj
Member

englercj commented Jul 4, 2015

What if we let the user specify an initial size (optionally), we grow the array dynamically (doubling when full), and we shrink dynamically (halving when less than half is used)?

We have a sensible default for the common case, but the initial size specification allows you to tweak it to be just right for your application.
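A sketch of that grow/shrink policy (illustrative names, not the eventual pixi.js implementation):

```js
// Sketch of the grow/shrink policy: double capacity when full, halve it
// when less than half is used, never dropping below the user's initial size.
// All names are illustrative assumptions.
function resizeBuffer(state, neededParticles) {
    var capacity = state.capacity;

    while (neededParticles > capacity) {
        capacity *= 2;                                    // grow: double until it fits
    }
    if (neededParticles < capacity / 2 && capacity / 2 >= state.initialSize) {
        capacity /= 2;                                    // shrink: halve, respecting initialSize
    }

    if (capacity !== state.capacity) {
        var next = new Float32Array(capacity * state.floatsPerParticle);
        var copyLength = Math.min(state.vertices.length, next.length);
        next.set(state.vertices.subarray(0, copyLength)); // keep existing vertex data
        state.vertices = next;                            // reallocate only on a capacity change
        state.capacity = capacity;
    }
}
```

Note the hysteresis: growing stops as soon as the particles fit, and shrinking only happens one halving step at a time, so the buffer does not thrash when the count oscillates around a power of two.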

@bchevalier
Author

That seems good to me.

@davedx
Contributor

davedx commented Jul 5, 2015

I like the initial size specification.

For our use case, as long as the subarray isn't done every frame no matter what the conditions are, we're happy ;)

@bchevalier
Author

I created a new pull request that solves the problem differently: https://github.com/GoodBoyDigital/pixi.js/pull/1990/files

If the new solution is fine I suggest we close this PR.

@englercj
Member

Closing this in favor of the other PRs we have improving ParticleContainer.

@englercj closed this Jul 23, 2015
@lock

lock bot commented Feb 25, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock locked and limited conversation to collaborators Feb 25, 2019