
Bloom causes massive performance hit (>50% framerate reduction) #13109

Open
lgrammer opened this issue Apr 26, 2024 · 15 comments

@lgrammer

OS = "Win11"

Bevy version = "0.14.0-dev" && "0.13.2"

AdapterInfo { name: "AMD Radeon RX 7700S", vendor: 4098, device: 29824, device_type: DiscreteGpu, driver: "AMD proprietary driver", driver_info: "24.3.1 (LLPC)", backend: Vulkan }

[profile.dev.package."*"]
opt-level = 3

[profile.dev]
opt-level = 3

[profile.release]
lto = true
opt-level = 3
codegen-units = 1
incremental = false
debug = false

What you did

Enabled bloom

[screenshot]

What went wrong

Enabling bloom causes a massive performance hit in both debug and release builds. In these examples I'll use debug (nothing changed between the two in RenderDoc except the overall framerate).

I've never personally experienced a hit this large from bloom in other game engines. This was tested with both a personal example and the bloom_2d example in 0.14, as well as a simple bloom setup in 0.13.2. For a quick comparison I tested bloom in Unity with a similar scene, and the performance impact there is around 10% give or take - which, iirc from using other engines, is about the standard hit.

I did a quick search of the Bevy issues related to bloom and didn't see any mention of a performance profile of the bloom pass, so I figured opening an issue would be best. If I missed an open issue relating to this, please point me in the right direction - I plan to explore the bloom pass's code more in the future.

Bloom off:

[screenshot]

Bloom on:

[screenshot]

RenderDoc info

Full time:

[screenshot]

Bloom pass:

[screenshots]

Conclusion:

Based on this information from RenderDoc, it would seem that the bloom pass is heavier than everything else combined: bloom takes up more than 50% of the render time when enabled.

@JMS55
Contributor

JMS55 commented Apr 26, 2024

RenderDoc is not really a great GPU profiling tool. You'll want to use NSight/RGP/Xcode/GPA/etc (Nvidia, AMD, Apple, Intel) depending on your GPU manufacturer.

CPU metrics are also important for rendering - maybe recording so many render passes is expensive. Tracy will show you that.

I can take a look later this week and figure out why it's expensive.

@lgrammer lgrammer reopened this Apr 26, 2024
@lgrammer
Author

> RenderDoc is not really a great GPU profiling tool. You'll want to use NSight/RGP/Xcode/GPA/etc (Nvidia, AMD, Apple, Intel) depending on your GPU manufacturer.
>
> CPU metrics are also important for rendering - maybe recording so many render passes is expensive. Tracy will show you that.
>
> I can take a look later this week and figure out why it's expensive.

Thanks for the tip. Will definitely check them out and even give Tracy a go next time I do some profiling.

The performance hit happens without profiling and can be seen in the first two bloom on/off pictures - those were taken without any profiler attached; I'm just using Bevy's frame diagnostics to track the frames per second. Sorry for not clarifying that when using those pictures to explain the issue.
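
For context, "Bevy frame diagnostics" here just means the built-in frame-time diagnostics. A minimal sketch of that kind of setup (assuming the stock FrameTimeDiagnosticsPlugin and LogDiagnosticsPlugin, not my actual project code) looks like:

use bevy::diagnostic::{FrameTimeDiagnosticsPlugin, LogDiagnosticsPlugin};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // FrameTimeDiagnosticsPlugin measures fps and frame time;
        // LogDiagnosticsPlugin prints the collected values to the console.
        .add_plugins((FrameTimeDiagnosticsPlugin, LogDiagnosticsPlugin::default()))
        .run();
}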

In this case - all the profiling is doing is confirming the >50% reduction is in fact coming from the bloom pass. The bloom pass taking up just over 50% of the render time in the profiling lines up 1:1 with the framerate reduction seen when not profiling.

Currently I'm looking into the first two downsampling and last two upsampling passes in the code to see if I can get more information or optimize anything. Please also ignore my opening/closing of the issue - misclick!

@alice-i-cecile added the A-Rendering, C-Performance, and S-Needs-Investigation labels Apr 26, 2024
@lgrammer
Author

I reduced bloom.wgsl to its simplest form and only noticed a negligible increase in performance with bevy frame diagnostics and the profiler (somewhere between 5-10 fps improvement?). Here is the reduced code:

struct BloomUniforms {
    threshold_precomputations: vec4<f32>,
    viewport: vec4<f32>,
    aspect: f32,
};

@group(0) @binding(0) var input_texture: texture_2d<f32>;
@group(0) @binding(1) var s: sampler;
@group(0) @binding(2) var<uniform> uniforms: BloomUniforms;

fn rgb_to_srgb_simple(color: vec3<f32>) -> vec3<f32> {
    return pow(color, vec3<f32>(1.0 / 2.2));
}

fn sample_input_4_tap(uv: vec2<f32>) -> vec3<f32> {
    let j = textureSample(input_texture, s, uv, vec2<i32>(-1, 1)).rgb;
    let k = textureSample(input_texture, s, uv, vec2<i32>(1, 1)).rgb;
    let l = textureSample(input_texture, s, uv, vec2<i32>(-1, -1)).rgb;
    let m = textureSample(input_texture, s, uv, vec2<i32>(1, -1)).rgb;
    var sample = (j + k + l + m) * 0.125;
    return sample;
}

fn sample_input_mini_tent(uv: vec2<f32>) -> vec3<f32> {
    let x = 0.004 / uniforms.aspect;
    let y = 0.004;
    let e = textureSample(input_texture, s, vec2<f32>(uv.x, uv.y)).rgb;
    let a = textureSample(input_texture, s, vec2<f32>(uv.x - x, uv.y + y)).rgb;
    let c = textureSample(input_texture, s, vec2<f32>(uv.x + x, uv.y + y)).rgb;
    let g = textureSample(input_texture, s, vec2<f32>(uv.x - x, uv.y - y)).rgb;
    var sample = (a + c + g + e) * 0.25;
    return sample;
}

@fragment
fn downsample_first(@location(0) output_uv: vec2<f32>) -> @location(0) vec4<f32> {
    //let sample_uv = output_uv;
    var sample = sample_input_4_tap(output_uv);
    return vec4<f32>(sample, 1.0);
}

@fragment
fn downsample(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
    return vec4<f32>(sample_input_4_tap(uv), 1.0);
}

@fragment
fn upsample(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
    return vec4<f32>(sample_input_mini_tent(uv), 1.0);
}

@lgrammer
Author

After moving on to the render code... so far my only clue has been this:

[screenshot]

When I divide the mip dimensions in bloom/mod.rs to reduce them, I get a good portion of the frames back.

[screenshots]

Obviously this isn't a solution or anything. Just sharing what I found before throwing my hands in the air for the day.

@JMS55
Contributor

JMS55 commented Apr 27, 2024

On my system, at 1080p, it takes 355 microseconds on the CPU (according to Tracy) to encode rendering commands for bloom, and 0.20ms of GPU time (according to NSight) to execute those commands.

2024-04-27T06:53:32.154332Z  INFO bevy_diagnostic::system_information_diagnostics_plugin::internal: SystemInfo { os: "Windows 11 Home", kernel: "22631", cpu: "AMD Ryzen 5 2600 Six-Core Processor", core_count: "6", memory: "15.9 GiB" }
2024-04-27T06:53:32.710031Z  INFO bevy_render::renderer: AdapterInfo { name: "NVIDIA GeForce RTX 3080", vendor: 4318, device: 8710, device_type: DiscreteGpu, driver: "NVIDIA", driver_info: "551.61", backend: Vulkan }

@lgrammer
Author

I'm probably a bit newer to graphics programming than some of you, @JMS55. Can you go into more detail about what that means for you? Are you not seeing a huge hit to your framerate with bloom enabled on that Nvidia card? Here are my AMD profiler results (1200p, 16:10).

Bloom/hdr/tonemapping on (250fps):

[screenshots]

Here it is off (700 fps):

[screenshots]

If you aren't seeing the same issue, perhaps it's just another AMD "feature" and can be marked as a driver bug for now?

@JMS55
Contributor

JMS55 commented Apr 27, 2024

Yes, I don't see much of a slowdown between bloom on/off. Just to check: are you only toggling bloom? You're leaving tonemapping and Camera::hdr the same between runs?

@lgrammer
Author

I'm turning all three on, yes, but tonemapping itself can be toggled on/off without making a difference. I'll provide the code just to fully clarify.

250 fps (hdr on, bloom on, tonemapping optional):

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera2dBundle {
            camera: Camera {
                hdr: true,
                ..Default::default()
            },
            // tonemapping: Tonemapping::AcesFitted,
            ..default()
        },
        Name::new("light_camera"),
        BloomSettings::default(),
    ));
}

700-750+ fps (nothing on):

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera2dBundle {
            camera: Camera {
                ..Default::default()
            },
            ..default()
        },
        Name::new("light_camera"),
    ));
}

500 fps (hdr on, bloom off):

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera2dBundle {
            camera: Camera {
                hdr: true,
                ..Default::default()
            },
            ..default()
        },
        Name::new("light_camera"),
    ));
}

And this happens in every case I've tried running it: different versions, fresh local Bevy deps, Bevy's own examples, etc.

@superdump
Contributor

Could you use frame times instead of frame rates?

700 fps: 1.43ms
500 fps: 2.00ms (so, 0.57ms slower)
250 fps: 4.00ms (an additional 2ms slower)
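
For reference, the conversion is just 1000 / fps; a quick illustrative sketch:

// Frame rate (fps) to frame time in milliseconds: 1000 / fps.
fn frame_time_ms(fps: f64) -> f64 {
    1000.0 / fps
}

fn main() {
    for fps in [700.0, 500.0, 250.0] {
        println!("{fps} fps = {:.2} ms per frame", frame_time_ms(fps));
    }
}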

@lgrammer
Author

@superdump Sure, np - I'll make sure to convert to ms whenever I can moving forward. Sorry about that.

@lgrammer
Author

It might also help to note that this occurs even without any actual bloom in the scene. Blank screens get the same hit when changing the camera settings.

@lgrammer
Author

Just to confirm what @JMS55 was saying, I tested it on an older Nvidia machine with a different display (1050 Ti, 1080p), also using Vulkan. The performance loss was a bit less, but for me it was still around 25-30% on average. I'd be interested in seeing benchmarks at different resolutions. For now I'm just going to take the node out, as I don't really need it in my graphics pipeline at the moment anyway. If there's anything else I can share that will help, just let me know.

@s-puig
Contributor

s-puig commented May 8, 2024

I also get this 50% performance hit on bloom_2d.

SystemInfo { os: "Linux 23.10 Ubuntu", kernel: "6.5.0-28-generic", cpu: "Intel(R) Core(TM) i5-4670K CPU @ 3.40GHz", core_count: "4", memory: "23.3 GiB" }
AdapterInfo { name: "AMD Radeon RX 580 Series (RADV POLARIS10)", vendor: 4098, device: 26591, device_type: DiscreteGpu, driver: "radv", driver_info: "Mesa 23.2.1-1ubuntu3.1", backend: Vulkan }

@CptPotato
Contributor

CptPotato commented May 8, 2024

It's possible this is due to the large number of separate render passes that do very little to no work.

Especially on lower mip levels the GPU does mostly nothing and is stalled by barriers (previous passes) and other overhead:

[screenshot: snapshot of a frame in the bloom_3d example]

It also looks like there's a clear pass for each level of the mip chain, which I feel like isn't needed.


Edit: I forgot to point out what I think is most important:

This scene is extremely simple, hence it takes only ~0.25 milliseconds of actual GPU time per frame on my machine. Bloom itself is a "fixed" cost, dependent only on the resolution - so comparing the ratio of bloom on vs. off doesn't tell you much, especially in such a simple case.

Because of that I'd expect the bloom cost to mostly vanish when there is an actual workload that draws more than just a handful of meshes. While I think there's definitely room for improvement with the bloom implementation, I don't think its performance is a major concern unless users see a GPU time for bloom that is significantly larger (more than 0.5-1ms).
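
To put rough numbers on that (illustrative figures, not measurements): if bloom has a fixed cost of, say, 0.5 ms at a given resolution, its share of the frame shrinks as the rest of the scene grows, even though its absolute cost stays the same:

// Illustrative only: a fixed per-frame bloom cost looks huge relative to a
// trivial scene and negligible relative to a real workload.
fn main() {
    let bloom_ms = 0.5; // assumed fixed bloom cost at a given resolution
    for scene_ms in [0.25_f64, 2.0, 10.0] {
        let total_ms = scene_ms + bloom_ms;
        println!(
            "scene {scene_ms:.2} ms + bloom {bloom_ms:.2} ms -> bloom is {:.0}% of the frame",
            100.0 * bloom_ms / total_ms
        );
    }
}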

@SludgePhD
Contributor

I've also noticed this with simple scenes. It should be possible to rewrite bloom to use a compute shader for down- and upsampling, like SPD (AMD's Single Pass Downsampler).
