
Demo/Examples: Interface unusably small on HiDPI screen on Linux #283

Open
cbuschardt opened this issue Nov 15, 2016 · 14 comments

Comments

@cbuschardt

I've read a few of the threads about HiDPI support. Regardless of what decision is made about pixels vs. a resolution-independent measurement, the samples should be usable. I'm using a 15" screen at 3840x2400 running Ubuntu 16. Typically I run at 225% scaling in GNOME.

One possible solution would be to pretend the screen is always 1920x1080 for applications that don't make an explicit call claiming HiDPI awareness. Ideally, the GNOME scaling factor would be honored.

@dumblob
Contributor

dumblob commented Nov 15, 2016

Use e.g. Amoeba as mentioned in #123 (comment) and you're saved.

@cbuschardt
Author

cbuschardt commented Nov 16, 2016

Thank you for the tip about Amoeba. I always love small libraries.

However, I'm not sure that Amoeba fits the KISS principle clearly at work in this library. It may make more sense to start with linear scaling for the samples. Even Amoeba users still need the DPI in order to properly size the text.

Here's how xrandr gets the physical screen size (and thus the DPI):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void) {
  Display *dpy = XOpenDisplay(NULL);               /* default display */
  Window root = RootWindow(dpy, DefaultScreen(dpy));
  XRRScreenConfiguration *sc = XRRGetScreenInfo(dpy, root);
  Rotation current_rotation;
  SizeID current_size = XRRConfigCurrentConfiguration(sc, &current_rotation);
  int nsize;
  XRRScreenSize *sizes = XRRConfigSizes(sc, &nsize);
  for (int i = 0; i < nsize; i++) {
    /* mwidth/mheight are the physical size in millimeters */
    printf("%c%-2d %5d x %-5d  (%4dmm x%4dmm )\n",
           i == current_size ? '*' : ' ',
           i, sizes[i].width, sizes[i].height,
           sizes[i].mwidth, sizes[i].mheight);
  }
  XRRFreeScreenConfigInfo(sc);
  XCloseDisplay(dpy);
  return 0;
}
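From those millimeter values the DPI (and a linear scale factor for the samples) follows directly. A small sketch, with a helper name of my own choosing, assuming the samples were laid out for roughly 96 DPI:

/* illustrative helper (not part of xrandr or Nuklear): horizontal DPI from
   XRandR's physical width, and a linear scale factor relative to ~96 DPI */
static float scale_from_xrandr(int width_px, int mwidth_mm)
{
  float dpi_x = (float)width_px * 25.4f / (float)mwidth_mm;
  return dpi_x / 96.0f;  /* roughly 3.0 for a 15" 3840x2400 panel */
}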

@dumblob
Contributor

dumblob commented Nov 16, 2016

The font rendering isn't DPI aware [making it impossible to properly size the text].

Maybe I misunderstood something, but Nuklear has no notion of DPI, and it shouldn't have one. Just use a bigger font size and you'll get what you need (the scaling might not be linear, so feel free to "bind" e.g. the font height to the physical screen height or to other widgets' heights using Cassowary; see the sketch below).
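A minimal sketch of that suggestion without Amoeba (the helper name and the fraction are assumptions of mine, not Nuklear API): tie the font's pixel height to the screen height, so it occupies the same share of the screen at any resolution.

/* illustrative only: screen_h_px comes from whatever backend is in use */
static float font_px_from_screen(int screen_h_px)
{
  const float fraction = 1.0f / 45.0f;  /* assumed value, tune to taste */
  return fraction * (float)screen_h_px;
}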

@cbuschardt
Author

Please note that this bug is specifically referring to the samples/demos.

The samples were designed with pixel positioning for a ~96 DPI screen. If the screen is 200 DPI, clearly the sample isn't going to be usable. It needs to be scaled.

The question is: What's the simplest way to ensure the samples are usable on a HiDPI display?

@dumblob
Contributor

dumblob commented Nov 16, 2016

What's the simplest way to ensure the samples are usable on a HiDPI display?

A complete rewrite with Amoeba. The demos should demonstrate in the simplest possible manner how to use Nuklear with different backends, not how to write full-featured multiplatform UIs (though this should be the case for /examples, which also currently do not support HiDPI).

Note also that the samples have no notion of DPI; they use "random" fixed values which @vurtun found good for testing/demonstration purposes.

@vurtun we should probably restructure the repository slightly, as many users are still quite confused. What about introducing a /backends directory containing just the backend implementations (no demo GUI code, i.e. also no main loop; some of the backends would be Git submodules pointing to foreign repos), then merging /demo with /example into a single /demo directory (whose demos would use whichever backend from /backends you prefer), and finally modifying at least one GUI demo in the resulting /demo directory to use solely Amoeba for widget positioning and sizing?

@cbuschardt
Author

I'm certainly not against using Amoeba if @vurtun is happy with it. Have you done any prototyping of this yet? It's a small enough library that you might be able to distill its functionality and include it as part of nuklear.

Regardless, you can't just size text as a percentage of total screen height outside of video games and embedded systems. For touch interfaces, constraints need to be in physical units like mm to ensure finger-sized widgets. For large monitors, you need the DPI to ensure you don't make the font too big. For tiny screens, you need the DPI to ensure the fonts aren't too small.

The backends and the samples should be constructed in such a way that they provide acceptable interfaces for their intended use cases. Ideally, there should be a sample that produces an acceptable interface for Windows and Linux desktop use.

@dumblob
Contributor

dumblob commented Nov 16, 2016

Have you done any prototyping of this yet?

Unfortunately not -- I was buried with work (hence also the delayed post I linked above).

It's a small enough library that you might be able to distill its functionality and include it as part of nuklear.

I wouldn't do that. First, Amoeba is very minimal (just read the code and you'll realize there is nothing left to throw out). Second, it's a single-header library, and as such it composes well with other single-header libraries (composition is one of the main advantages of the single-header approach).

Regardless, you can't just size text as a percentage of total screen height outside of video games and embedded systems. For touch interfaces, constraints need to be in physical units like mm to ensure finger-sized widgets. For large monitors, you need the DPI to ensure you don't make the font too big. For tiny screens, you need the DPI to ensure the fonts aren't too small.

Actually, not at all. First, I have been talking about physical dimensions the whole time (see my post above: "...feel free to 'bind' e.g. font height to the physical screen height or other widgets height..."). Second, DPI is totally useless, because the "widget dimensions & position equation" is not dependent only on physical dimensions and resolution, but on the distance of the canvas/screen from the observer. This extremely important piece of information about distance has somehow disappeared since the W3C/Opera folks defined it in CSS 1.0 back in 1996, and I quote: "The suggested reference pixel is the visual angle of one pixel on a device with a pixel density of 90dpi and a distance from the reader of an arm's length. For a nominal arm's length of 28 inches, the visual angle is about 0.0227 degrees." Note the pixel density (i.e. physical size and resolution) and the distance. Both authors must be dismayed by what has happened since, as hardly anyone knows how to scale a UI properly.

To wrap up, Nuklear has no notion of the physical size of the canvas (and should not; yes, I'm talking about a canvas, which has no physical dimensions, because Nuklear can draw to any buffer/canvas). It is therefore up to the Nuklear user to provide the physical dimensions of the canvas, the distance of the canvas from the user, and the resolution of the canvas, and only then use these three inputs to calculate the correct dimensions and positions of widgets. Cassowary greatly simplifies this, because it is a completely dimension-, resolution-, and distance-agnostic (in a word, relative) description of widget dimensions and positions, so such a GUI can then be used interchangeably with any combination of dimensions, resolution, and distance.
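A rough sketch of that calculation (the helper name and argument choices are mine; the constants come from the CSS 1.0 reference pixel quoted above: 1/90 inch viewed from 28 inches, ~0.0227 degrees):

#include <math.h>

/* estimate a scale factor from physical width, horizontal resolution, and
   viewing distance, relative to the CSS 1.0 reference pixel */
static float estimate_scale(float phys_width_mm, int res_x_px,
                            float view_dist_mm)
{
  float px_mm     = phys_width_mm / (float)res_x_px;  /* device pixel size */
  float px_angle  = atanf(px_mm / view_dist_mm);      /* its visual angle  */
  float ref_angle = atanf((25.4f / 90.0f) / 711.2f);  /* reference pixel   */
  return ref_angle / px_angle;  /* >1: device pixels look smaller than the reference */
}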

@cbuschardt
Author

DPI is totally useless, because the "widget dimensions & position equation" is not dependent only on physical dimensions and resolution, but on the distance of the canvas/screen from the observer
...The suggested reference pixel is the visual angle of one pixel on a device with a pixel density of 90dpi and a distance from the reader of an arm's length....

You've stumbled across the crux of the disagreement. For touch-based user interfaces DPI gives you an incredibly valuable guide -- since you know the user is within approximately arm's length. In fact it's more than a guide, since it gives you the EXACT dimensions that need to interface with the finger. This is where my focus is.

However, for desktop devices you're essentially arguing that DPI isn't the right metric. Certainly the desktop scaling values would be more useful -- as they give the user's expectation of what constitutes readable feature size. Without any input you're essentially picking a random font size [as say a percentage of screen].

We're making the same argument from polar opposite perspectives.

To wrap up, Nuklear has no notion of the physical size of the canvas (and should not; yes, I'm talking about a canvas, which has no physical dimensions, because Nuklear can draw to any buffer/canvas).

I'm arguing that the samples should work out of the box, nothing more. I'm not talking about changes to nuklear.h. I'm talking about [minimal] changes to the samples.

@dumblob
Contributor

dumblob commented Nov 17, 2016

You've stumbled across the crux of the disagreement.

I'm sorry for that. As you can see, I'm overly sensitive about this topic 😉.

For touch-based user interfaces DPI gives you an incredibly valuable guide -- since you know the user is within approximately arm's length. In fact it's more than a guide, since it gives you the EXACT dimensions that need to interface with the finger. This is where my focus is.

Still not there. Touch devices only limit the maximum distance; they definitely do not determine the distance. Each person I know uses her/his touch device from a different distance. In general, tiny devices (watches, small smartphones) tend to be used from a much closer distance than big touch TV screens and touch notebooks. So DPI still can't be relied upon at all.

By the way, the distance mainly corresponds to the shorter of the two dimensions on rectangular devices (a narrow but long stripe display will be looked at from about the same distance as a square display whose width equals the shorter dimension of the stripe -- remember how users bow their heads to use the new MacBook Pro strip or the BlackBerry PRIV).

Without any input you're essentially picking a random font size [as say a percentage of screen].

Yep. And I agree that this random picking is a bad solution.

I'm arguing that the samples should work out of the box, nothing more. I'm not talking about changes to nuklear.h. I'm talking about [minimal] changes to the samples.

I fully understand that your intention is to make the samples work out of the box, and I totally agree. Unfortunately, although it might not seem so, this is a huge leap requiring significant changes to the samples. That is why I'm discussing it so extensively.

Once again, I'm sorry if I made you feel uncomfortable with this discussion. That wasn't my intent.

@MrSapps

MrSapps commented Nov 17, 2016

Even if it requires significant changes to the samples, it seems worthwhile; it's good to have a working reference that behaves correctly at any DPI.

@cbuschardt
Author

cbuschardt commented Nov 17, 2016

I have a thick skin, no worries =) There's some great news:

  1. We agree that a change is desirable
  2. We agree that simply using the DPI is too simple for desktop use cases

I'd like to get your agreement to narrow the topic of discussion to what I think is one of the core questions. How big should descriptive text be?

What about the idea of using GNOME3's scaling factor, 'org.gnome.desktop.interface scaling-factor'?
This parameter scales fonts, images, everything. It's almost always how the user configures HiDPI, and it's directly controlled by the one and only slider in the display settings.

Personally, I'm loath to open Pandora's box and start querying desktop environments. Things go in and out of vogue, and internal settings like this are sometimes difficult to get at. However, it's the only solution I can think of that is guaranteed to produce the font size used in the rest of the user's applications.
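One possible way to read it, assuming the gsettings CLI is available (its output looks like "uint32 2"; the helper name is mine):

#include <stdio.h>  /* popen() is POSIX */

/* returns GNOME's scaling-factor, or 1 if it cannot be determined
   (GNOME uses 0 to mean "auto", which is also mapped to 1 here) */
static unsigned gnome_scaling_factor(void)
{
  unsigned factor = 0;
  FILE *p = popen("gsettings get org.gnome.desktop.interface scaling-factor", "r");
  if (!p) return 1;
  if (fscanf(p, "uint32 %u", &factor) != 1) factor = 0;
  pclose(p);
  return factor ? factor : 1;
}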

@cbuschardt
Author

By the way, the distance mainly corresponds to the shorter of the two dimensions on rectangular devices (a narrow but long stripe display will be looked at from about the same distance as a square display whose width equals the shorter dimension of the stripe -- remember how users bow their heads to use the new MacBook Pro strip or the BlackBerry PRIV).

You could almost imagine a heuristic that uses the physical screen dimensions to inform the decision. The trouble is that it still fails to predict the viewing distance for large screens. Is the user close to a 40" screen? Or across the room?

However, for these samples it may be good enough to assume the user is tethered by either touch or mouse. These are desktop/laptop samples, after all. Maybe we could come up with a heuristic that predicts the viewing distance from the physical screen dimensions?

Obviously, anything along this line of thinking is going to be imperfect for some use cases.
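A rough sketch of such a heuristic (the multiplier and the arm's-length floor are assumptions for the desktop/laptop case, not measured values):

/* guess the viewing distance from the shorter physical screen dimension,
   never going below the 28" (~711 mm) arm's length from CSS 1.0 */
static float guess_view_dist_mm(float phys_w_mm, float phys_h_mm)
{
  float shorter = phys_w_mm < phys_h_mm ? phys_w_mm : phys_h_mm;
  float guess   = 3.0f * shorter;  /* illustrative multiplier */
  return guess > 711.2f ? guess : 711.2f;
}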

@juliuszint
Contributor

juliuszint commented Dec 26, 2016

I have the d3d11 sample for Windows working and looking good on a high-DPI screen. The issue is about Linux, but the approach should be similar on any other OS. On Windows I had to tell the OS, via the API call SetProcessDpiAwareness, that my window is able to render itself on high-DPI screens. Previously Windows would scale the application up, which resulted in a very blurry image on high-DPI screens. After this the window no longer gets scaled, and we can take care of the size issue ourselves, because without the OS scaling the window looks really tiny. The API call GetDpiForWindow returns the DPI for our window, which we can divide by 96 to get the percentage Windows uses to scale applications. On my screen, for example, GetDpiForWindow returns 240, and 240 divided by 96 equals 2.5, which corresponds to the 250% scaling in my screen settings. All that's left to do is multiply every hard-coded size by the factor we got from the OS.
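A minimal sketch of those steps (SetProcessDpiAwareness needs Windows 8.1+, GetDpiForWindow needs Windows 10 1607+; real code would fall back to 96 DPI on older systems):

#include <windows.h>
#include <shellscalingapi.h>  /* SetProcessDpiAwareness */
#pragma comment(lib, "Shcore.lib")

/* call once before creating the window: opt out of the blurry OS scaling */
static void enable_dpi_awareness(void)
{
  SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE);
}

/* factor to multiply every hard-coded size with, e.g. 240 / 96 = 2.5 (250%) */
static float window_scale(HWND wnd)
{
  return (float)GetDpiForWindow(wnd) / 96.0f;
}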

@dumblob
Contributor

dumblob commented Jan 8, 2017

@juliuszint thanks for describing one of the many pieces potentially needed in the heuristic we're looking for.

@cbuschardt maybe a separate single-header library written in ANSI C, full of ifdefs for different platforms, would be a good start. For Linux the KMS, evdev, etc. functions could be used to obtain the physical screen size and resolution (this should work for X, Wayland, and Mir); for BSD something similar (pciconf -lvbce, devinfo -vr); and for Windows everything is on MSDN (note, though, that the C symbols @juliuszint proposed are supported only on Win 8.1+ and Win 10+).

Then this "clean" information should be cross-checked against end-user/application settings for physical screen size, DPI, scale factors, and screen types & shapes, e.g.:

  - X: xrandr, xdpyinfo, xrdb -symbols, xrdb -query -all
  - GNOME DPI settings: gsettings get org.gnome.desktop.interface scaling-factor
  - Qt: the QT_AUTO_SCREEN_SCALE_FACTOR, QT_SCREEN_SCALE_FACTORS, and QT_SCALE_FACTOR environment variables
  - GDK UI: GDK_SCALE; GDK fonts: GDK_DPI_SCALE
  - Enlightenment: ELM_SCALE
  - Firefox/Thunderbird: layout.css.devPixelsPerPx in about:config

These inputs should then feed the guess of the viewing distance; a small sketch of reading a few of them follows.
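For the environment variables above, a tiny illustrative sketch (the variable list is abbreviated and the helper name is mine):

#include <stdlib.h>  /* getenv(), atof() */

/* return the first scale hint found in the environment, or 0.0 if none */
static double env_scale_hint(void)
{
  const char *vars[] = { "QT_SCALE_FACTOR", "GDK_SCALE", "ELM_SCALE" };
  unsigned i;
  for (i = 0; i < sizeof vars / sizeof vars[0]; ++i) {
    const char *v = getenv(vars[i]);
    if (v && *v) {
      double s = atof(v);
      if (s > 0.0) return s;
    }
  }
  return 0.0;
}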

A preliminary draft of the interface of this single-header lib, estimatescale.h (it completely avoids callbacks as well as memory allocation):

/*
  Monitor, display, and screen terms are defined in
  https://wiki.archlinux.org/index.php/multihead (note one computer can
  run multiple displays at the same time - i.e. having more parallel
  "desktop sessions" each having different number of monitors with different
  resolutions and different physical dimensions). We're not interested in
  monitors, because they can't be used (because they're not part of any
  desktop as a screen).
*/
#include <stdint.h>  /* int32_t, uint16_t, uint32_t, uint64_t */

struct es_screen {
  uint32_t phys_x, phys_y;  // physical dimensions in micrometers
  uint32_t res_x, res_y;  // resolution in px
                          // We could use DPI instead of resolution, but
                          // in that case we would need anyway 2 numbers
                          // (vertical and horizontal DPI differ, because
                          // pixels are often not squares).
  float est_scale;  // estimated scale, initialized to 1.0
                    // There is no need to distinguish x and y scales.
};
struct es_display {
  uint64_t id;  // runtime-specific unique internal ID
                // 0 has no meaning (and thus is not a valid ID)
  int32_t view_dist;  // in micrometers
                      // If not provided by backend, it's initialized to
                      // 711200 (~ 28" ~ nominal arm's length as per CSS 1.0).
  uint16_t screens_n;  // number of elements in screens below
  struct es_screen *screens;  // array of screens in this display
};
struct es_state {
  uint64_t probed_displays;  // uses internal representation, but 0 is
                             // reserved with the meaning: no displays
                             // probed so far, probe the current one.
  uint64_t probed_screens;   // uses internal representation, but 0 is
                             // reserved with the meaning: no screens
                             // probed so far, probe the first one (in
                             // an internal order).
};

enum es_probe_next_display {
  ES_DISP_DONE,  // all displays probed
  ES_DISP_NEXT,  // not all displays probed yet
  ES_DISP_NOT_EXIST,  // display doesn't exist
  ES_DISP_STATE_MISMATCH,  // system display configuration changed in a way,
                            // that invalidated the until now probed
                            // displays' state
};
// probes the next or given display
// guarantees d.screens == NULL and d.screens_n == 0 after return
// if d.id is not 0, probe just the display with d.id and ignore s
// thread-safe: yes
enum es_probe_next_display es_probe_next_display(struct es_state *s, struct es_display *d);

enum es_probe_next_screen {
  ES_SCR_DONE,  // all screens probed
  ES_SCR_NEXT,  // not all screens probed yet
  ES_SCR_STATE_MISMATCH,  // system screen configuration in the given display
                           // changed in a way, that invalidated the until now
                           // probed screens' state
};
// probes the next screen
// thread-safe: yes
enum es_probe_next_screen es_probe_next_screen(struct es_state *s, struct es_screen *sc);

// estimates the scale factor for each screen in the given display
// thread-safe: yes
// returns 0 if successful, otherwise any non-zero number
// This function might be pretty slow (due to access to
//     disk/network/event_bus/...).
// Note: Scale factors guarantee a consistent pixel-perfect UI across all
// screens (potentially having different resolutions and physical dimensions)
// in the given display (but not across displays).
int es_estim_scales_in_screens(struct es_display *d);

Untested demo in C99 that scrapes all screens in all displays and estimates the screen scales (yes, the code is brutal because of the explicit memory handling, but this complexity would be well abstracted away in bindings anyway):

#include <stdio.h>  // fprintf()
#include <stdlib.h>  // realloc()
#include <assert.h>  // assert()

#include "estimatescale.h"

int main( int argc, char *argv[] ){
  struct es_state s = { 0 };
  int ds_n = 0;  // items in ds
  struct es_display *ds = NULL;  // array of displays

  // we could also immediately fail instead of retrying after mismatch
  for( int max_disp_mismatches = 10; ; ){
    // just for failed realloc()
    struct es_display *_ds = ds;

    ds = (struct es_display *)realloc( ds,
        sizeof( struct es_display ) * ( ++ds_n ) );

    if( ds == NULL ){
      fprintf( stderr, "ERR realloc( ..., struct es_display )\n" );
      ds = _ds;
      --ds_n;
      goto cleanup;
    }

    ds[ds_n -1].id = 0;  // probe all displays
    enum es_probe_next_display r = es_probe_next_display( &s, &ds[ds_n -1] );

    // shouldn't happen at all, because we're not probing a specific display
    if( r == ES_DISP_NOT_EXIST ){
      fprintf( stderr, "ERR ES_DISP_NOT_EXIST\n" );
      goto cleanup;
    }

    if( r == ES_DISP_STATE_MISMATCH ){
      if( ! --max_disp_mismatches ){
        fprintf( stderr, "ERR ES_DISP_STATE_MISMATCH\n" );
        goto cleanup;
      }

      s = (struct es_state){ 0 };  // start probing displays from the beginning
      for( int n = 0; n < ds_n; ++n ) free( ds[n].screens );
      ds_n = 0;
      // no free( ds ), because of the upcoming realloc()
      continue;
    }

    for( int max_scr_mismatches = 5; ; ){
      // just for failed realloc()
      struct es_screen *_screens = ds[ds_n -1].screens;

      ds[ds_n -1].screens = (struct es_screen *)realloc( ds[ds_n -1].screens,
          sizeof( struct es_screen ) * ( ++ds[ds_n -1].screens_n ) );

      if( ds[ds_n -1].screens == NULL ){
        fprintf( stderr, "ERR realloc( ..., struct es_screen )\n" );
        ds[ds_n -1].screens = _screens;
        --ds[ds_n -1].screens_n;
        goto cleanup;
      }

      enum es_probe_next_screen rr = es_probe_next_screen( &s,
          &ds[ds_n -1].screens[ ds[ds_n -1].screens_n -1 ] );

      if( rr == ES_SCR_STATE_MISMATCH ){
        if( ! --max_scr_mismatches ){
          fprintf( stderr, "ERR ES_SCR_STATE_MISMATCH\n" );
          goto cleanup;
        }

        s.probed_screens = 0;  // start probing screens from the beginning
        ds[ds_n -1].screens_n = 0;
        // no free( ds[ds_n -1].screens ), because of the upcoming realloc()
        continue;
      }

      if( rr == ES_SCR_DONE ){ break; }
      assert( rr == ES_SCR_NEXT );
    }

    // we could estimate scale for each screen in the display already
    // here, but because it's expected to be slow, we rather probe all other
    // displays first

    if( r == ES_DISP_DONE ){ break; }
    assert( r == ES_DISP_NEXT );
  }

  for( int n = 0; n < ds_n; ++n ){
    // we can optionally influence the scale computation
    //ds[n].view_dist = ...;

    int r = es_estim_scales_in_screens( &ds[n] );
    if( r ){
      fprintf( stderr,
          "ERR es_estim_scales_in_screens( %llu ): %d\n",
          (unsigned long long)ds[n].id, r );
      // optionally we can finish others and then retry these failed ones
      // as displays are highly independent
      goto cleanup;
    }
  }

cleanup:
  for( int n = 0; n < ds_n; ++n ) free( ds[n].screens );
  free( ds );

  return 0;
}

Note that the API shouldn't change even if the physical display surface is not flat (think of curved/flexible displays, VR equipment, etc.). For these non-flat surfaces, some averaging based on projection matrices and possibly other transformations would be used internally by the library.

Now comes the second hard part: an implementation of a "multibuffer canvas" in GUI libraries. By "multibuffer canvas" I mean a software solution for the case when part of the GUI (e.g. a window or a widget) spans multiple physical screens that have different physical pixel sizes (see the visualization https://gist.github.com/vurtun/61b6dbf21ef060bcbbd8d1faa88350d9#gistcomment-2223201 ). We usually try to avoid these cases, but they seem to be more than common (e.g. notebooks/tablets/smartphones with connected external LCDs/projectors/TVs; also, basically no one buys a new -- second, third, ... -- screen with the same /low/ resolution as the old one).

The basic idea behind the "multibuffer canvas" is to keep a separate canvas for each of the different pixel sizes and to draw everything to each of those canvases separately with the corresponding scale (thanks to clipping, the double/triple/... drawing of certain windows/widgets would in most cases be eliminated, just as it is today when a window/widget is half outside the canvas). A toy illustration follows.
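A self-contained toy illustration of that idea (all names and numbers are made up; in a real backend the per-canvas scale would come from something like es_screen.est_scale above):

#include <stdio.h>

struct canvas {
  int width, height;  /* canvas resolution in px */
  float scale;        /* scale for the screen(s) this canvas covers */
};

/* stand-in for real widget drawing: every hard-coded size gets scaled */
static void draw_ui(const struct canvas *c)
{
  float button_h = 30.0f * c->scale;  /* 30 px design-time size */
  printf("canvas %dx%d: button height %.1f px\n", c->width, c->height, button_h);
}

int main(void)
{
  /* two screens with different physical pixel sizes -> two canvases */
  struct canvas canvases[] = { { 1920, 1080, 1.0f }, { 3840, 2400, 2.25f } };
  unsigned i;
  for (i = 0; i < sizeof canvases / sizeof canvases[0]; ++i)
    draw_ui(&canvases[i]);
  return 0;
}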
