Pixels no longer handles scaling factors correctly. #330

Closed
Cthutu opened this issue Jan 8, 2023 · 10 comments · Fixed by #332
Labels: bug (Something isn't working), question (Usability question)

Comments

Cthutu commented Jan 8, 2023

I used to be able to set the surface and buffer to the same physical size and it would work: I had a pixel buffer that filled my current window (whose surface matched the window's inner size). Now pixels hits an assertion when resizing the surface and buffer to the same size while the scale factor is not 1. The assertion happens inside the clamp function.

Even the examples do not function correctly with a scale factor other than 1. For example, in minimal-winit you get a thick black border around the rendered area.

Right now, I can't see any way to make the buffer and the window the same size when the scale factor is not equal to 1, and therefore I am unable to update to a newer pixels version.

What I would expect:

  1. Get the logical size of the window (this takes into account scale factor)
  2. Call resize_surface with the new logical size (or maybe the physical size?)
  3. Call resize_buffer with the new logical size
  4. Have pixels not panic with an assert.
  5. See the pixel buffer fully cover the window.
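
Concretely, something like the following (a rough sketch of the steps above, not exact code; whether the logical or the physical size is the right thing to pass is part of the question):

// Sketch of the expected flow; steps match the list above.
let scale = window.scale_factor();
let logical = window.inner_size().to_logical::<f64>(scale); // 1. logical size
pixels.resize_surface(logical.width as u32, logical.height as u32); // 2.
pixels.resize_buffer(logical.width as u32, logical.height as u32);  // 3.
// 4. No assertion panic. 5. The pixel buffer fully covers the window.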

This is the minimal-winit example with a scale factor of 125%:

[screenshot]

This is the minimal-winit example with a scale factor of 100%:

[screenshot]

Relatedly, switching the scale factor while the minimal-winit example is running causes a crash.

Cthutu commented Jan 8, 2023

It occurred to me that the last time I used Pixels it was on a desktop with the scale factor set to 100%, so it has probably always been the case that Pixels misbehaves with regard to the scale factor.

parasyte commented Jan 8, 2023

It doesn't necessarily misbehave with the scale factor, though. We have a scale factor of 2.0 on macOS, and all of the examples work fine there. The non-integer scale factor may in fact be the problem (see below). But first, I do have a couple of observations.

  • The surface should be the physical size of the window. E.g.:

    let mut pixels = {
        let window_size = window.inner_size();
        let surface_texture = SurfaceTexture::new(window_size.width, window_size.height, &window);
        Pixels::new(WIDTH, HEIGHT, surface_texture)?
    };

  • pixels.resize_buffer() is for setting the size of the backing texture (pixel buffer) that you want to draw on. If your goal is to have this perfectly fit inside the window, it needs to be the same size[1] as the surface (or an integer division of it). In other words, set it to the physical window size.

    • It should be noted that this method is kind of unusual for typical use cases. It was added so that games and apps could draw on the entire surface area, no matter how the user resizes the window. But doing this is a bad idea because CPU rasterization (especially on very large textures) is extremely slow.

    • The most common use case is to have a known-stable texture size. Like the screen resolution for an old video game, which is fairly small. And you let the crate scale it to fit in the window.


My leading theory at the moment is that a scale factor with a non-integer value (like 1.25) is not going to work as intended. pixels wants to maximize rasterization clarity, so it tries to perfectly align the backing texture to the physical pixels on the display. With a scale factor like 1.25, it is impossible to both fill the window and align the texture at the same time. There are two workarounds I can think of.

First, let's assume your target texture resolution is 320x200, which is a common VGA display resolution. And the scale factor is 1.25.

  1. Reduce the logical window size and surface size by the scale factor (or an even multiple of the reduced size).

    • Use 320x200 physical pixels or 256x160 logical pixels for the window and surface (these are the same size for the given scale factor), and 320x200 for the buffer size.
    • For a 2x display, use 640x400 physical pixels or 512x320 logical pixels for the window and surface, and 320x200 for the buffer size. Etc.
  2. Increase the window size so that its physical pixel size is an even multiple of the pixel buffer size. The smallest value that will work for the given scale factor is 4x: 4 * 1.25 = 5. Which means you will have a window with 4x logical pixels and the pixel buffer will be scaled with 5x physical pixels.

    • In numerical terms, set the window size and surface size to 1600x1000 physical pixels or 1280x800 logical pixels (again, these are the same size). And set the pixel buffer size to 320x200.
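
As a rough sketch of option 2, here is a hypothetical helper (not part of pixels) that picks the smallest multiple of the buffer size whose physical size is a whole number of pixels:

// Hypothetical helper (not part of pixels): find the smallest integer logical
// multiple `m` such that `m * scale_factor` is also a whole number of physical
// pixels, then size the window as `m` times the buffer size (in logical pixels).
fn logical_window_size(buffer_w: u32, buffer_h: u32, scale_factor: f64) -> (u32, u32) {
    let mut m = 1;
    // For scale factor 1.25 this stops at m = 4, because 4 * 1.25 = 5.
    while (m as f64 * scale_factor).fract() > 1e-9 && m < 16 {
        m += 1;
    }
    (buffer_w * m, buffer_h * m)
}

// For a 320x200 buffer at scale factor 1.25 this gives a 1280x800 logical window,
// i.e. 1600x1000 physical pixels, with the buffer scaled by 5x physical pixels.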

Also note that #262 will hide this problem, and only real pixel purists (me...) would notice it exists.

Footnotes

  1. Note that the concept of "logical pixels" only applies to the window on a display with a high DPI. Every other size in the discussion is in physical pixels.

parasyte added the bug label on Jan 8, 2023
Cthutu commented Jan 9, 2023

Thanks for the reply.

I was using physical pixels only throughout the code (so I completely agree with your assessment here). My foray into logical pixels was to stop Pixels from crashing.

Here's the bug (I believe) I face in one sentence: If you resize both the buffer and surface to be the physical dimensions of the window (in any order), AND the OS scale factor is 125% on Windows, Pixels will panic with an assertion in the clamp function (while creating a new matrix) during one of the resize methods.

That is, I pass exactly the same physical size to both resize_buffer and resize_surface, matching the window's inner size (basically the size passed in from the events), and Pixels panics. This does not occur with 100% OS scaling, so even if I ignore the scale factor (by not using logical pixels) I still get the crash.

I resize the buffer whenever I get a WindowEvent::Resized or a WindowEvent::ScaleFactorChanged (the latter to handle dragging from one monitor to another with a different scale factor).
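
Roughly, the handling looks something like this (a sketch rather than my exact code; physical sizes in both arms):

// Sketch (not verbatim): in both cases the window's physical inner size is
// passed to both resize calls.
Event::WindowEvent { event, .. } => match event {
    WindowEvent::Resized(size) => {
        pixels.resize_surface(size.width, size.height);
        pixels.resize_buffer(size.width, size.height);
    }
    WindowEvent::ScaleFactorChanged { new_inner_size, .. } => {
        pixels.resize_surface(new_inner_size.width, new_inner_size.height);
        pixels.resize_buffer(new_inner_size.width, new_inner_size.height);
    }
    _ => {}
},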

I will attempt to get you some minimal code. I think the example in the pixels-u32 crate will crash when the scale factor is set to 125% on Windows.

parasyte commented Jan 9, 2023

Until I find a way to reproduce the panic with a 1.0 or 2.0 scale factor, a backtrace will be somewhat helpful for further diagnostics.

parasyte commented Jan 9, 2023

I noticed I can get the panic with scale factor 1.0 iff the buffer height is set larger than the surface height. This panic is probably appropriate, and we might want a similar check on the width. Here is my minimal repro:

diff --git a/examples/minimal-winit/src/main.rs b/examples/minimal-winit/src/main.rs
index ae0b6c1..d0828f1 100644
--- a/examples/minimal-winit/src/main.rs
+++ b/examples/minimal-winit/src/main.rs
@@ -42,6 +42,8 @@ fn main() -> Result<(), Error> {
     };
     let mut world = World::new();

+    pixels.resize_buffer(WIDTH, HEIGHT + 1);
+
     event_loop.run(move |event, _, control_flow| {
         // Draw the current frame
         if let Event::RedrawRequested(_) = event {

And the backtrace:

thread 'main' panicked at 'assertion failed: min <= max', /rustc/0fb8b72ce49997d60a631e921d2cf5be9ca229e6\library\core\src\num\f32.rs:1394:9
stack backtrace:
   0: std::panicking::begin_panic_handler
             at /rustc/0fb8b72ce49997d60a631e921d2cf5be9ca229e6/library\std\src\panicking.rs:575
   1: core::panicking::panic_fmt
             at /rustc/0fb8b72ce49997d60a631e921d2cf5be9ca229e6/library\core\src\panicking.rs:64
   2: core::panicking::panic
             at /rustc/0fb8b72ce49997d60a631e921d2cf5be9ca229e6/library\core\src\panicking.rs:114
   3: core::f32::impl$0::clamp
             at /rustc/0fb8b72ce49997d60a631e921d2cf5be9ca229e6\library\core\src\num\f32.rs:1394
   4: pixels::renderers::ScalingMatrix::new
             at .\src\renderers.rs:234
   5: pixels::builder::create_backing_texture
             at .\src\builder.rs:443
   6: pixels::Pixels::resize_buffer
             at .\src\lib.rs:294
   7: minimal_winit::main
             at .\examples\minimal-winit\src\main.rs:45
   8: core::ops::function::FnOnce::call_once<enum2$<core::result::Result<tuple$<>,enum2$<pixels::Error> > > (*)(),tuple$<> >
             at /rustc/0fb8b72ce49997d60a631e921d2cf5be9ca229e6\library\core\src\ops\function.rs:250
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: process didn't exit successfully: `target\debug\minimal-winit.exe` (exit code: 101)

Cthutu commented Jan 11, 2023

That's the same assert I get when the surface and buffer sizes are equal and the scale factor is 1.25.

@parasyte

I am having trouble understanding how it would panic with the values you've described. If the surface and the backing texture are the same size, the divisions in there will just result in 1.0. (Assuming no rounding errors.) clamp should not panic because 1.0 is not greater than 1.0, nor is it NaN.

pixels/src/renderers.rs

Lines 233 to 236 in bf296a4

// Get smallest scale size
let scale = (screen_width / texture_width)
    .clamp(1.0, screen_height / texture_height)
    .floor();

Maybe you could add dbg!() around all of the values and report back on what the actual numbers are. Something isn't quite right. The scale factor doesn't matter because we're only talking about physical dimensions.

let scale = dbg!(dbg!(screen_width) / dbg!(texture_width))
    .clamp(1.0, dbg!(dbg!(screen_height) / dbg!(texture_height)))
    .floor();

parasyte added the question label on Jan 15, 2023
@parasyte

@Cthutu Were you able to add the dbg!()s to help with troubleshooting this issue? Without more info, I don't know what else I can do here.

Cthutu commented Jan 27, 2023 via email

@parasyte

I'm in the process of updating dependencies and noticed that the new version of tao now has the same panic when a menubar is used on Windows. This is because querying the window's physical inner size before the window is shown will give you a smaller height than what you asked for. I "fixed" it by forcing the smallest ratios to 1.0 like this:

        let width_ratio = (screen_width / texture_width).max(1.0);
        let height_ratio = (screen_height / texture_height).max(1.0);

        // Get smallest scale size
        let scale = width_ratio.clamp(1.0, height_ratio).floor();

You can try this out if you like; it might help in your case. I think it might actually cause some unwanted stretching in some cases, especially if you have provided incorrect dimensions anywhere, but this is currently what I am planning to implement for the next release.

parasyte added a commit that referenced this issue Jan 28, 2023
* Update wgpu to 0.15
* Fix the panic described in #330
* Specify `min_binding_size` since we know what it is.