rendering: scaling one display affects rendering on the other #3171
Comments
This is how it is supposed to work:
So, we need to check we're giving the client appropriate information to scale the buffer. |
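For context, here is a minimal sketch of the client side of that contract (not Mir code; registry binding, the wl_output.scale listener, and rendering are assumed to exist elsewhere). The client tracks which outputs the compositor says the surface is on via wl_surface.enter/leave and picks its buffer scale from those:

```cpp
// Sketch only: how a client derives its buffer scale from wl_surface.enter/leave.
// Accurate enter/leave events from the compositor are what make this possible.
#include <wayland-client.h>
#include <algorithm>
#include <map>
#include <set>

struct ClientState
{
    wl_surface* surface = nullptr;
    std::map<wl_output*, int32_t> output_scales; // filled in from wl_output.scale events
    std::set<wl_output*> entered_outputs;        // outputs the surface is currently on
};

static void apply_buffer_scale(ClientState* state)
{
    int32_t scale = 1;
    for (auto* output : state->entered_outputs)
    {
        auto const it = state->output_scales.find(output);
        if (it != state->output_scales.end())
            scale = std::max(scale, it->second);
    }
    wl_surface_set_buffer_scale(state->surface, scale); // then re-render at `scale`
}

static void handle_enter(void* data, wl_surface*, wl_output* output)
{
    auto* state = static_cast<ClientState*>(data);
    state->entered_outputs.insert(output);
    apply_buffer_scale(state);
}

static void handle_leave(void* data, wl_surface*, wl_output* output)
{
    auto* state = static_cast<ClientState*>(data);
    state->entered_outputs.erase(output);
    apply_buffer_scale(state);
}

// Registered with: wl_surface_add_listener(state.surface, &surface_listener, &state);
static wl_surface_listener const surface_listener{handle_enter, handle_leave};
```

If the compositor sends an enter for every output regardless of position, a computation like this always lands on the highest scale, which matches the behaviour described in this issue.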
One clarification: does this happen with the scales set on startup? Or only if changed dynamically? |
On startup, too. |
Here's a log from […]. This looks suspect, then:
|
Looks as though we tell the surface it is on both monitors:
|
That'd do it. Compared to GNOME, moving between a […]
|
This looks suspicious: `// TODO: send enter/leave when the surface actually enters and leaves outputs` |
Duplicate of #342 |
The images show what appear to be texture minification artifacts. The downsampling filter currently in use is a simple bilinear filter (OpenGL's `GL_LINEAR`).

The problem also occurs when multiple outputs display cloned content at different scales. If the client uses the highest scale to set the buffer's resolution, the compositor downsamples the buffer when the surface is rendered on the lower-scale outputs. The issue affects server-side decorations for the same reason.

I ran some experiments using different types of texture sampling to try to improve the image quality for the case of a buffer created for scale 10 and rendered at scale 1. Of course, it is unlikely that someone will use cloned outputs with such large differences in scale, but the artifacts are visible even if one output has a scale of 4 while another is unscaled.

The best results seem to be achieved by enabling mipmapping with trilinear filtering and a LOD bias of -1. The rationale is that mipmapping prefilters the texture, while the negative bias forces the sampler towards the next higher-resolution level to avoid a loss of sharpness. Other biases also work, but -1 seems to be the sweet spot for clearer text. I also tried supersampling.

In summary, if downsampling artifacts become a problem, biased trilinear filtering could be used: it is faster than supersampling and produces better results. |
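For reference, a minimal sketch of the biased trilinear setup described above, using desktop-GL names and assuming a loader such as epoxy provides the entry points. (In a GLES renderer `GL_TEXTURE_LOD_BIAS` is not available as a texture parameter, so the bias would instead be passed as the optional bias argument of `texture()` in the fragment shader.)

```cpp
// Sketch only: enable mipmapped, biased trilinear minification on a texture.
#include <epoxy/gl.h>

void enable_biased_trilinear_filtering(GLuint texture)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    // Prefilter: build the mipmap chain for the current texture contents.
    glGenerateMipmap(GL_TEXTURE_2D);

    // Trilinear minification: interpolate between the two nearest mip levels.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Bias LOD selection towards the sharper (higher-resolution) level.
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -1.0f);
}
```

Note that the mipmap chain would need regenerating whenever the client commits a new buffer, which is part of the cost trade-off against plain bilinear sampling.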
@hbatagelo thanks for the dig! Yes, we could scale better. But we shouldn't be scaling at all in this case - Alan's already found this: we need to be telling the apps which output they're on, so they render at the correct scale in the first place. Your stuff will still be useful for when apps can't scale to the target scale for whatever reason. We may even want to use different filters depending on the factor by which we need to scale (e.g. 2x → 1.5x, or 1x → 2x etc.) |
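A rough sketch of what "different filters depending on the factor" could look like; the enum and helper below are hypothetical, not existing Mir API:

```cpp
// Hypothetical helper (not Mir API): choose a sampling strategy from the ratio
// between the scale the buffer was rendered at and the output's scale.
enum class Filter { Nearest, Bilinear, BiasedTrilinear };

Filter choose_filter(float buffer_scale, float output_scale)
{
    float const ratio = output_scale / buffer_scale;

    if (ratio == 1.0f)
        return Filter::Nearest;         // 1:1, no resampling needed
    if (ratio > 1.0f)
        return Filter::Bilinear;        // upscaling, e.g. 1x buffer on a 2x output
    return Filter::BiasedTrilinear;     // downscaling, e.g. 2x buffer on a 1.5x output
}
```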
With a couple displays, I have a setup like so:

So the left-most display is unscaled, at `0,0`. If I change the `scale`, I can clearly see it badly affecting my unscaled display, suggesting that the applications get told to scale at the other display's scale, and then get scaled back.

The reason for this is we are not tracking the outputs the surface is on. All we do is send an "enter" for every output when the window is created.
https://github.com/MirServer/mir/blob/bc6864bc7b2575b445355dcd63a2b3525be68458/src/server/frontend_wayland/window_wl_surface_role.cpp#L472
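For illustration, the shape of the fix (hypothetical types and names, not Mir's internal API): track which outputs each surface actually overlaps and send wl_surface.enter/leave only on transitions, instead of an unconditional enter for every output when the window is created:

```cpp
// Illustrative only; the types and names here are hypothetical, not Mir's API.
#include <set>

struct Rect { int x, y, width, height; };

static bool overlaps(Rect const& a, Rect const& b)
{
    return a.x < b.x + b.width && b.x < a.x + a.width &&
           a.y < b.y + b.height && b.y < a.y + a.height;
}

struct Output
{
    Rect extents;
    // plus the wl_output resource bound by the client owning the surface
};

class SurfaceOutputTracker
{
public:
    // Call whenever the surface moves/resizes or the output layout changes.
    void update(Rect const& surface_rect, std::set<Output*> const& all_outputs)
    {
        for (auto* output : all_outputs)
        {
            bool const on_output = overlaps(surface_rect, output->extents);
            bool const was_on_output = entered.count(output) != 0;

            if (on_output && !was_on_output)
            {
                entered.insert(output);
                send_enter(output);   // wl_surface.enter for this output's resource
            }
            else if (!on_output && was_on_output)
            {
                entered.erase(output);
                send_leave(output);   // wl_surface.leave for this output's resource
            }
        }
    }

private:
    void send_enter(Output*) { /* e.g. wl_surface_send_enter(surface_resource, output_resource) */ }
    void send_leave(Output*) { /* e.g. wl_surface_send_leave(surface_resource, output_resource) */ }

    std::set<Output*> entered;
};
```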