
[STICKY] Should I drop scaling support? #396

Closed
bvaughn opened this issue Sep 18, 2016 · 15 comments

@bvaughn (Owner) commented Sep 18, 2016

Browsers have maximum limits on clientHeight & clientWidth (and by extension on maximum scroll offset). Positioning content beyond this limit will result in it not being visible (even though it is in the DOM). Starting with version 7.3.0, react-virtualized introduced a feature that auto-scales clientHeight & clientWidth to keep them within safe limits. Beyond a certain threshold, react-virtualized scales down width, height, and scroll position so that the browser can display more information than it would normally be able to.
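As a rough illustration of the idea (a sketch only, not the library's actual implementation; MAX_SAFE_SIZE is an illustrative constant):

const MAX_SAFE_SIZE = 1500000;

// Clamp the rendered size to a safe maximum.
function getScaledSize(totalSize) {
  return Math.min(totalSize, MAX_SAFE_SIZE);
}

// How much larger the real content is than what is actually rendered.
function getScaleRatio(totalSize) {
  return totalSize > MAX_SAFE_SIZE ? totalSize / MAX_SAFE_SIZE : 1;
}

// Convert the browser-reported scrollTop back into "real" coordinates.
// eg with 10M px of rows, a reported scrollTop of 150,000 represents a
// real offset of 1,000,000 px.
function getRealScrollOffset(reportedScrollTop, totalSize) {
  return reportedScrollTop * getScaleRatio(totalSize);
}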

Unfortunately these limits vary widely by browser (eg Chrome is safe up to 33.5M pixels, or in many cases beyond, but Edge maxes out at 1.5M pixels). Until this point I have tried to support the lowest common denominator, lowering my scaling threshold to accommodate Edge's limitations. This has a major drawback though: it makes scrolling feel too fast once the scaling factor gets large. The result is a sub-par scrolling experience in browsers that could easily handle the larger clientHeight (eg Firefox, Chrome), just to avoid breaking Edge.

I have considered attempting a browser-specific threshold as it would improve the UX for Chrome and Firefox. I have not implemented one yet though because I have not found any authoritative/official documentation on which browsers have which limits (eg on Chrome it seems to vary by operating system).

I'm opening this issue for discussion of the following:

Do you know more about this topic than me?

(I know very little.) Perhaps you could refer me to any official browser vendor documentation about this. Alternatively, point me to any OSS project that already detects or encodes these limits.

If I found an official list I could at least increase the threshold for Chrome and Firefox and improve the UX in those browsers. (See the rough detection sketch at the end of this comment.)

Do you think I should continue to implement scaling in react-virtualized?

Or should I leave it up to whatever the browser's native limitations are? An alternative would be to disable scaling by default but give users a way to opt in.

One argument for dropping scaling is that past a certain point it results in a horrible user experience anyway. (If scrolling a few pixels jumps you forward tens or hundreds of rows, the list view is mostly useless.)

It may also result in slight performance improvements, since I would be doing less work in each scroll event handler.
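For reference, here is a rough sketch of one way to feature-detect the limit at runtime (purely illustrative; whether the laid-out height actually reflects each browser's scroll limit is exactly the open question above):

// Binary-search for the tallest element the browser will actually lay out.
// Assumes offsetHeight is clamped at the browser's internal limit, which is
// not guaranteed to hold for every engine.
function detectMaxElementHeight() {
  const probe = document.createElement('div');
  probe.style.position = 'absolute';
  probe.style.top = '-9999px';
  probe.style.width = '1px';
  document.body.appendChild(probe);

  let good = 1;    // height (px) known to lay out at full size
  let bad = 1e9;   // height (px) assumed to exceed the limit
  while (bad - good > 100000) {
    const mid = Math.floor((good + bad) / 2);
    probe.style.height = mid + 'px';
    if (probe.offsetHeight < mid) {
      bad = mid;   // the browser clamped the height; limit is below mid
    } else {
      good = mid;  // laid out at full height; limit is at or above mid
    }
  }

  document.body.removeChild(probe);
  return good;
}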

@bvaughn bvaughn changed the title Should I drop scaling support? [STICKY] Should I drop scaling support? (Feedback needed) Sep 18, 2016
@sophiebits commented Sep 19, 2016

I tried to articulate a solution in my tweet https://twitter.com/soprano/status/777649671783284737 but I'm not sure I did it clearly so here's a longer version.

It's my understanding that the current react-virtualized code is akin to:

const maxDimension = 1500000;
const scaleFactor = listLogicalHeight / maxDimension;

function onScroll() {
  const physicalTop = list.scrollTop;
  const logicalTop = scaleFactor * physicalTop;

  rerenderRows(physicalTop, logicalTop);
}

Here I'm assuming a rerenderRows function that renders the rows whose logical positions are near logicalTop and sets their positions to be near physicalTop so that the rows appear in the list viewport.
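For concreteness, a minimal sketch of such a rerenderRows, assuming fixed-height rows (rowHeight, rowCount, listViewportHeight, and renderRow are illustrative placeholders, not react-virtualized APIs):

function rerenderRows(physicalTop, logicalTop) {
  const startIndex = Math.max(0, Math.floor(logicalTop / rowHeight));
  const stopIndex = Math.min(
    rowCount - 1,
    Math.ceil((logicalTop + listViewportHeight) / rowHeight)
  );

  for (let index = startIndex; index <= stopIndex; index++) {
    // Position each row at the physical scroll offset plus its distance
    // from the logical scroll offset, so it lands inside the viewport.
    const top = physicalTop + index * rowHeight - logicalTop;
    renderRow(index, top);
  }
}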

My proposed (much more complicated) alternative wouldn't adjust small changes in scroll position at all, but large jumps would be treated essentially the same as the current scaling so that it is easy to jump to 30% of the way through a list, for example.

const maxDimension = 1500000;
const scaleFactor = listLogicalHeight / maxDimension;

let currentPhysicalTop = 0;
let currentLogicalTop = 0;

function onScroll() {
  const physicalTop = list.scrollTop;
  const physicalDiff = physicalTop - currentPhysicalTop;

  if (Math.abs(physicalDiff) < 2 * listViewportHeight) {
    // Small scroll -- preserve delta in scroll position
    currentPhysicalTop += physicalDiff;
    currentLogicalTop += physicalDiff;

    // If you were to scroll forever with small scrolls, eventually you would
    // hit the end of the physical list before reaching the end of the logical
    // list. So we fix the physical scroll position when we get a chance.
    fixPhysicalScrollPosition(Math.round(currentLogicalTop / scaleFactor));
  } else {
    // Big scroll -- don't try to preserve delta in scroll position
    currentPhysicalTop = physicalTop;
    currentLogicalTop = scaleFactor * physicalTop;
  }

  rerenderRows(currentPhysicalTop, currentLogicalTop);
}

const fixPhysicalScrollPosition = debounce(function(correctPhysicalTop) {
  list.scrollTop = correctPhysicalTop;

  currentPhysicalTop = correctPhysicalTop;
  rerenderRows(currentPhysicalTop, currentLogicalTop);
}, 1000);

This is pseudocode and is also logically incomplete: at the very least, it is one-dimensional. It also does not attempt to guarantee that scrolling to the top of the physical list is correctly interpreted as scrolling to the top of the logical list. I imagine you'd want to dedicate a few screenfuls at the top and bottom of the list to 1:1 scrolling and only apply scaling in the middle.

@bvaughn (Owner, Author) commented Sep 19, 2016

Thanks for writing up a more detailed explanation @spicyj. I more or less understood what you meant from your initial Twitter messages, but it's hard to convey anything too complex on Twitter. My concern with this approach (which I tried and failed to articulate on Twitter) has to do with user experience.

If I re-adjust the scroll position (eg on debounce) after a user stops scrolling, it's a bad user experience because it causes the scrollbars to visibly jump. (Even on an OS that hides scrollbars, they reappear temporarily when the scroll position changes, and it's jarring.)

It would also be a weird user experience to scroll to the middle of a list (using the scrollbar) only to have the scroll position reset to somewhere else (eg 1/4 of the way through the list). It would make it impossible to quick-jump to a given point in the data. (Let's say I specifically wanted to view the middle rows for some reason; I would be unable to actually locate them using the scrollbar.) It would also not be possible to jump to the same place twice using the scrollbars. (Jumping from 0 to 50% scroll position would result in a different set of rows than jumping from 25% to 50% because of how we would be calculating the diff.)

I also worry about edge cases with any approach that involves accumulating inaccurate data to be fixed once a user stops scrolling. Let's say you're 90% of the way through the list and you start scrolling down. It's feasible that you could keep scrolling until there were no more rows at all, only to have some rows appear once you stop scrolling (because of the way we'd have to manage the offset adjustments). I'm not sure I'm explaining this last point well. (I'm still a bit jet lagged, sorry.)

@sophiebits commented:

My sense is that it is not going to be a significant problem in practice. With a screen height of 1000px, 1.5M pixels is 1500 entire screenfuls of content. That means that if you scroll down 10 whole pages before stopping, that's still only 0.6% of the scrollbar. Any movement would be just a few pixels at most.

> It would also be a weird user experience to scroll to the middle of a list (using the scrollbar) only to have the scroll position reset to somewhere else (eg 1/4 of the way through the list). It would make it impossible to quick-jump to a given point in the data. (Let's say I specifically wanted to view the middle rows for some reason; I would be unable to actually locate them using the scrollbar.) It would also not be possible to jump to the same place twice using the scrollbars. (Jumping from 0 to 50% scroll position would result in a different set of rows than jumping from 25% to 50% because of how we would be calculating the diff.)

I'm not sure I fully understand this, but my recommendation was that a jump to 50% of the physical height would be treated as a jump to 50% of the logical height, as long as you are not starting from within a few screenfuls (which is 0.2%, say) of 50%. So if you are in the range [49.8%, 50.2%] already, then jumping to 50% will be a relative scroll at 1x. But if you are outside that tiny range, it is treated as a jump to 50%.
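To make those numbers concrete (illustrative values only, assuming the 1.5M px physical height from above and "a few screenfuls" taken as 3000px, ie 0.2% of the scrollbar):

const listPhysicalHeight = 1500000;
const nearCurrentRange = 3000;

function isRelativeScroll(fromPhysical, toPhysical) {
  return Math.abs(toPhysical - fromPhysical) < nearCurrentRange;
}

isRelativeScroll(0.499 * listPhysicalHeight, 0.500 * listPhysicalHeight); // true: 1:1 relative scroll
isRelativeScroll(0.250 * listPhysicalHeight, 0.500 * listPhysicalHeight); // false: treated as a jump to 50% logical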

I still feel pretty good about my proposal. ;)

@bvaughn (Owner, Author) commented Sep 19, 2016

> My sense is that it is not going to be a significant problem in practice. With a screen height of 1000px, 1.5M pixels is 1500 entire screenfuls of content. That means that if you scroll down 10 whole pages before stopping, that's still only 0.6% of the scrollbar. Any movement would be just a few pixels at most.

Maybe I'm worrying about an edge case that is unlikely to exist, but... I was picturing a user slow-scrolling for a long time, never with a big enough delta that we threw things away while they were scrolling. It seems like it could be awkward, particularly if they were near enough to the bottom that they ran out of scrollable space while there were still rows to scroll to.

> I'm not sure I fully understand this, but my recommendation was that a jump to 50% of the physical height would be treated as a jump to 50% of the logical height

Gotcha. I believe I misunderstood that detail. I agree with you that this concern is not valid.

I think this proposal definitely warrants a prototype. It seems promising. 😄

@sophiebits commented:

I think the clearest way to implement this is to write a piecewise (bijective, reversible) function mapping from physical to logical position that is 1:1 near the top, bottom, and current position and is scaled elsewhere.

Here is a diagram of what I mean:

[diagram: piecewise mapping from physical to logical scroll position that is 1:1 near the top, bottom, and current position and scaled in between]

In response to any scroll, you can look up in the map where your new logical position should be. Hopefully that makes sense. After a pause, you would create a new mapping based on the current logical position and readjust the physical position.

@bvaughn (Owner, Author) commented Sep 19, 2016

That's more or less what the ScalingCellSizeAndPositionManager is doing (whew, what a name 😝). The key difference you're proposing, as I understand it at least, is that when a user scrolls slowly, I should temporarily suspend further adjusting the offsets.

Logically it's simple enough to understand. But I think it's a bit tricky with the way the code is currently architected. I just need to give it a little thought.

PS. Great diagram by the way! 👏 🙇

@sophiebits commented Sep 19, 2016

My mapping takes into account that small scrolls should be 1:1 by way of encoding that in the mapping function (ex: 70% ± 5000px maps 1:1 to 70% ± 5000px in my picture).

The only subtlety is when you update the mapping, I think.

@bvaughn (Owner, Author) commented Sep 20, 2016

I think there are tricky edge cases with regard to scrolling near the top or bottom of the list. (And the longer or farther a user slow-scrolls without a single diff that exceeds the threshold, the trickier that edge case gets.)

Please pardon the awful programmer art. Here's how I'm thinking about this.

We can probably avoid a lot of the trickiness by treating some amount of area near the top and bottom as never-scaling:
[diagram: a long list with unscaled regions near the top and bottom and a scaled region in the middle]

For long lists, these 2 areas are probably the most important parts anyway (the ones a user is most likely to spend time looking at). When a user scrolls within an unscaled region, we just let the browser do its own thing, without interfering, and the UX is good.

When a user scrolls quickly (eg using the scrollbar or updating a scroll-to-cell prop), we decide whether to scale based purely on the position they scrolled to. (There's no debounce involved.)

When a user scrolls slowly, we track 2 things: (1) where did they start scrolling from? and (2) where did they eventually stop scrolling? Then (after a debounced period of time) we adjust the actual scroll position if necessary. ("Necessary" meaning they scrolled through the scaled region at some point.) After this adjustment the same cells should be visible, so it should be mostly transparent to the user.

[diagram: a slow scroll tracked from its start offset to its stop offset, with the physical scroll position adjusted after a debounce]

I guess the only faux-tricky thing about this is that we'll need to calculate which rows are visible at offset X given that a user started scrolling at offset Y (which may have been scaled).

Is this more or less what you were picturing? (Sorry if it's taking me longer to arrive at the same conclusion. I have some pre-set thought patterns about this that I need to break, since I've spent so long thinking about it.)

@sophiebits commented:

I think you're overcomplicating it a little (like I was doing last night) and now believe that my proposed mapping function is all that you need. In particular, my 1:1 mapping region near the current position is all you need to take care of slow scrolls. You don't need to manually check the scroll deltas.

@sophiebits commented Sep 20, 2016

I believe this code is correct (really! no corners were cut!), modulo possible improvements in the reverse computation in fixUpMapping:

const listPhysicalHeight = 1500000; // 1.5 million
const listLogicalHeight = ...;
const scale = listLogicalHeight / listPhysicalHeight;

// How many px at top, near current, and at bottom should scroll at 1:1?
// Written separately for clarity.
const top5000 = 5000;
const mid5000 = 5000;
const end5000 = 5000;

function makeMappingFunction(currentPhysicalPos, currentLogicalPos) {
  return function physicalToLogical(physical) {
    if (physical < top5000) {
      // 1:1 mapping near top
      return physical;

    } else if (listPhysicalHeight - physical < end5000) {
      // 1:1 mapping near bottom
      return listLogicalHeight - (listPhysicalHeight - physical);

    } else if (Math.abs(physical - currentPhysicalPos) < mid5000) {
      // 1:1 mapping near current position
      return currentLogicalPos + physical - currentPhysicalPos;

    } else if (physical < currentPhysicalPos) {
      // scaled mapping before current position (jumping to an earlier position)
      const topOfPhysicalRegion = top5000;
      const topOfLogicalRegion = top5000;
      const endOfPhysicalRegion = currentPhysicalPos - mid5000;
      const endOfLogicalRegion = currentLogicalPos - mid5000;
      console.assert(physical >= topOfPhysicalRegion && physical <= endOfPhysicalRegion);
      // linear interpolation between those two points:
      return (physical - topOfPhysicalRegion) * (endOfLogicalRegion - topOfLogicalRegion) / (endOfPhysicalRegion - topOfPhysicalRegion) + topOfLogicalRegion;

    } else {
      // scaled mapping after current position (jumping to a later position)
      const topOfPhysicalRegion = currentPhysicalPos + mid5000;
      const topOfLogicalRegion = currentLogicalPos + mid5000;
      const endOfPhysicalRegion = listPhysicalHeight - end5000;
      const endOfLogicalRegion = listLogicalHeight - end5000;
      console.assert(physical >= topOfPhysicalRegion && physical <= endOfPhysicalRegion);
      // linear interpolation between those two points:
      return (physical - topOfPhysicalRegion) * (endOfLogicalRegion - topOfLogicalRegion) / (endOfPhysicalRegion - topOfPhysicalRegion) + topOfLogicalRegion;
    }
  };
}

let currentMapping = makeMappingFunction(0, 0);

function onScroll() {
  const newPhysical = list.scrollTop;
  const newLogical = currentMapping(newPhysical);
  rerender(newPhysical, newLogical);
  fixUpMapping(newLogical);
}

const fixUpMapping = debounce((newLogical) => {
  // FIXME: Not sure if this is quite right. Probably not. Maybe should be:
  // (newLogical - top5000) * (listPhysicalHeight - top5000 - end5000) /
  // (listLogicalHeight - top5000 - end5000) + top5000.
  const correctedPhysical = newLogical / scale;

  list.scrollTop = correctedPhysical;
  currentMapping = makeMappingFunction(correctedPhysical, newLogical);
  rerender(correctedPhysical, newLogical);
}, 1000);

Let me know if it makes sense.

@bvaughn bvaughn self-assigned this Sep 20, 2016
@bvaughn (Owner, Author) commented Sep 20, 2016

Sorry for the slow response. 😄 I was pretty tired last night (jet lag sucks) and I wanted to let my brain rest before responding.

Thinking about this more, I don't think we need to treat the top/bottom specially after all. The delta-from-current approach would cover the top of the list given that's the initial starting point ("current"). And the bottom would only be reachable by a fast-scroll so it would also be covered.

It's possible that someone could quick-scroll near the top or bottom and then slow-scroll for a prolonged period of time such that they eventually ran out of rows prematurely, but I think if the delta is sufficiently large this becomes very unlikely.

I'm still a little concerned about a jumping scrollbar after debounce. Maybe this is an argument for treating the top/bottom specially. (Maybe we could drastically reduce the chance of this by never adjusting within some safe region near the top and bottom. I still think it's likely that most users would never leave this region.)

@sophiebits commented:

I mostly wanted to be extra-sure that jumping to the top or bottom would always work correctly.

Suppose you jump to 0.1% (2000px) of the physical list, which corresponds to 0.1% (2,000,000px, say) of the logical list. Then scrolling from there to 0 is a small scroll and might take you to 0px physical = 1,998,000px logical. It may be that only the calculation of correctedPhysical needs to take this into account (mapping top and bottom 1:1 for some number of screenfuls) and the mapper itself doesn't. This would also mean that the adjustments are not necessary within that region, which fixes your other concern.
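One possible shape for that correctedPhysical calculation (a sketch only, reusing listPhysicalHeight / listLogicalHeight from the earlier snippet; the 5000px edge size is illustrative):

// Map a logical offset back to a physical one, keeping the first and last
// `edge` px 1:1 and interpolating linearly across the middle region.
function logicalToCorrectedPhysical(logical) {
  const edge = 5000; // same "few screenfuls" idea as top5000 / end5000

  if (logical < edge) {
    // 1:1 near the top, so logical 0 lands at physical 0.
    return logical;
  }
  if (listLogicalHeight - logical < edge) {
    // 1:1 near the bottom, so the logical end lands at the physical end.
    return listPhysicalHeight - (listLogicalHeight - logical);
  }
  // Linear interpolation between (edge, edge) and
  // (listLogicalHeight - edge, listPhysicalHeight - edge).
  return (logical - edge) * (listPhysicalHeight - 2 * edge) /
    (listLogicalHeight - 2 * edge) + edge;
}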

@bvaughn (Owner, Author) commented Sep 20, 2016

Aye. That was my concern too.

😁 This was a very productive discussion. Thanks @spicyj!

@bvaughn bvaughn changed the title [STICKY] Should I drop scaling support? (Feedback needed) [STICKY] Should I drop scaling support? Sep 20, 2016
@bvaughn (Owner, Author) commented Dec 4, 2016

Note to self: The changes discussed above could also help improve the RTL and reverse list use cases. If we choose the initial indices to render at a given scroll offset based on the % within the total scrollable area, we could avoid having to pre-measure and cache positions before that offset. This would improve performance for cases that are currently awkward due to cached position invalidation.
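A minimal sketch of that idea (names are illustrative, not react-virtualized APIs):

// Pick the first row to render from the scroll percentage alone, without
// measuring or caching the positions of everything before it.
function estimateStartIndex(scrollTop, clientHeight, scrollHeight, rowCount) {
  const maxScrollTop = Math.max(1, scrollHeight - clientHeight);
  const scrollPercent = Math.min(1, scrollTop / maxScrollTop);
  return Math.floor(scrollPercent * (rowCount - 1));
}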

I still plan (hope) to make time to work more on this idea soon.

@bvaughn (Owner, Author) commented Mar 1, 2017

Great discussion here. Going to close for now since there's nothing more actionable to do. Still hope to finish implementing the ideas that resulted from this discussion at some point. For now there's a PR with my work in progress.
