Preventing downloading images or objects until they are visible in the viewport #2806

Open
JoshTumath opened this Issue Jul 1, 2017 · 70 comments

Comments

@JoshTumath

JoshTumath commented Jul 1, 2017

See PR #3752

Problem

Many websites are very image heavy, but not all of those images are going to be viewed by visitors. This is especially true on mobile devices, where most visitors do not scroll down very far; it is mostly the content at the top of the page that is consumed. Most of the images further down the page will never be viewed, but they are downloaded anyway.

This is slowing down the overall page load time, unnecessarily increasing mobile data charges for some visitors and increasing the amount of data held in memory.

Example workaround

For years, the BBC News team have been using the following method to work around this problem. Primary images at the top of the page are included in the HTML document in the typical way using an img element. However, any other images are loaded in lazily with a script. Those images are initially included in the HTML document as a div which acts as a placeholder. The div is styled with CSS to have the same dimensions as the loaded image and has a grey background with a BBC logo on it.

<div class="js-delayed-image-load"
     data-src="https://ichef.bbci.co.uk/news/304/cpsprodpb/26B1/production/_96750990_totenhosen_alamy976y.jpg"
     data-width="976" data-height="549"
     data-alt="Campino of the Toten Hosen"></div>

Eventually, a script will replace it with an img element when it is visible in the viewport.
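For illustration, the replacement script boils down to something like the following. This is only a sketch (using IntersectionObserver, which is newer than the original BBC implementation); the class and data-* names match the example above, and the 200px margin is an arbitrary choice:

// Swap each placeholder div for a real img once it gets close to the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const placeholder = entry.target;
    const img = document.createElement('img');
    img.src = placeholder.dataset.src;
    img.width = placeholder.dataset.width;
    img.height = placeholder.dataset.height;
    img.alt = placeholder.dataset.alt;
    placeholder.replaceWith(img);
    obs.unobserve(placeholder);
  }
}, { rootMargin: '200px' }); // start fetching a little before the image is actually visible

document.querySelectorAll('.js-delayed-image-load')
  .forEach(placeholder => observer.observe(placeholder));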

Doing this with a script is not ideal, because:

  1. If the visitor has scripts disabled, or the script fails to load, the images won't ever appear
  2. We don't know in advance the size of the visitor's viewport, so we have to arbitrarily determine which images to load in lazily. On a news article, visitors on small viewports will only initially see the News logo and an article's hero image, but larger viewports will initially be able to see many other images (e.g. in a sidebar). But we have to favour the lowest common denominator for the sake of mobile devices. This gives users with a large viewport a strange experience where the placeholders appear for a second when they load the page.
  3. We have to wait for the script to asynchronously download and execute before any placeholders can be replaced with images.

Solution

There needs to be a native method for authors to do this without using a script.

One solution to this is to have an attribute for declaring which images or objects should not be downloaded and decoded until they are visible in the viewport. For example, <img lazyload>.*

Alternatively, a meta element could be placed in the head to globally set all images and objects to only download once they are visible in the viewport.
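To make the two options concrete, a sketch of what the markup might look like (the meta name below is purely hypothetical and shown only for illustration):

<!-- Per-image opt-in -->
<img src="gallery-photo.jpg" width="976" height="549" alt="A below-the-fold image" lazyload>

<!-- Hypothetical global opt-in placed in the head -->
<meta name="lazyload" content="on">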

* An attribute with that name was proposed in the Resource Priorities spec a few years ago, but it didn't prevent the image from downloading - it just gave a hint to the browser about the ordering, which is probably not as useful in an HTTP/2 world.

@domenic


Member

domenic commented Jul 1, 2017

Hmm, this was previously discussed at https://www.w3.org/Bugs/Public/show_bug.cgi?id=17842, but GitHub is more friendly for people. Let me merge that thread into here, but please please please read all the contents of the discussion there, as this is very well-trod ground and we don't want to have to reiterate the same discussions over again.

@wildlyinaccurate


wildlyinaccurate commented Jul 1, 2017

I've just spent an hour reading the thread on the original bug report (which @JoshTumath actually reported). There was initially confusion between two features: (1) Being able to tag images as "not important" so that the browser can give priority to other resources. (2) Being able to opt in to loading specific images only at the point where they are in the viewport or just about to enter it. This issue is specifically for (2). I will refer to this as "lazy loading".

The thread goes around in circles and doesn't really have a clear outcome, although the implementations discussed still seem valid and relevant today (Jake's summary in comment 49 is a good point to start at if you don't want to read the entire thread). I'm going to try not to repeat too much from that thread, but it has been 5 years now and as far as I can see lazy loading images is still a relatively common pattern. On top of that, the profile of the average internet-connected device has changed drastically (under-powered Android devices on very expensive cellular connections) and in my opinion the argument for lazy loading images is stronger now than it was 5 years ago.

I'm going to provide some insight into a use case that I'm very familiar with: the BBC News front page. I'll do this in the hopes that it provides some real life context around why I think lazy loading images is important, and why doing it in JS is not good for users.

Loading the page in Firefox in a 360 x 640 viewport from the UK (important because the UK does not get ads, which skews the results), the browser makes the following requests:

  • On the initial load: 49 requests, 314.43 kB transferred.
  • After scrolling a quarter of the way down the page (32% of mobile users reach this point): 57 requests, 373.06 kB transferred.
  • After scrolling halfway (20% reach this point): 66 requests, 437.95 kB transferred.
  • After scrolling to the bottom (1% reach this point): 84 requests, 546.60 kB transferred.

We use lazysizes to lazy load all but the very first article image. Lazysizes makes up about half of our JS bundle size. I know it's overkill for our use case but it's a popular and well-tested library. We load our JS with a <script async> tag, so it can take some time before the JS is executed and the images are inserted into the document. The experience of seeing so many image placeholders for several seconds can be quite awkward. We actually used defer for a while but the delay was deemed too long on slower devices.

From our point of view the benefits of the UA providing lazy loading are:

  • We literally halve the amount of JS in our bundle (although there are several other bundles from other BBC products so the real impact on the user is not that great on this page).
  • The UA can load images earlier, probably as early as DOMContentLoaded.
  • The UA can decide whether to lazy load at all (e.g. only lazy load on cellular connections).

Despite Ilya's arguments against lazy loading in general, we've been doing it for 5 years and we're going to continue doing it until cellular data is much cheaper. If we got rid of our lazy loading, two thirds of our mobile users would download 170kB of data that they never use. Keeping the next billion in mind, that's about 3 minutes of minimum wage work. At our scale (up to 50M unique mobile visitors to the site each week) 170kB per page load starts to feel uncomfortably expensive for our users.

So what do the WHATWG folk think? Is it worth having this conversation again? Is there still vendor interest? Mozilla expressed interest 5 years ago but it seems like nothing really happened.

@jakearchibald


Collaborator

jakearchibald commented Jul 4, 2017

We literally halve the amount of JS in our bundle

Intersection observers means the JS for triggering loading on element visibility is tiny.

The UA can load images earlier, probably as early as DOMContentLoaded.

That's also possible with a small amount of JS.

The UA can decide whether to lazy load at all (e.g. only lazy load on cellular connections).

Yeah I think browser heuristics (along with no JS dependency) are the remaining selling points of something like lazyload. But is it enough to justify it?

@wildlyinaccurate


wildlyinaccurate commented Jul 4, 2017

Intersection observers means the JS for triggering loading on element visibility is tiny.

Yeah, fair call. If we drop our big ol' lazy loading JS for a lazyload attribute we may as well drop it for 10 lines of intersection observer wiring.

I guess the thing that appeals to me most about a lazyload attribute is that it's pretty much the minimum amount of friction you could have for implementing lazy loading, and it leaves all of the nuance up to the UA. In my experience developers don't really know about or care about the nuance of whether their JS is blocking or deferred, or runs at DOMContentLoaded or load. If there was a big slider that controlled who did the most work (UA o-----------|--o Devs), I would shift it all the way to UA, because devs often don't have the time to do things in a way that provides the best experience for users. I realise this kind of thinking goes against the Extensible Web Manifesto, though. 🙊

@Zirro


Contributor

Zirro commented Jul 4, 2017

I can see two more arguments in favour of an attribute. The first is that lazy loading mechanisms which depend on scripts have a significant impact for user agents where scripts don't execute. To prevent images from loading early, the images are only inserted into the DOM later on, leaving non-scripting environments without images at all. Few sites seem to think about the <noscript> element these days.

The second is that providing it through an attribute means that the user can configure the behaviour as they prefer to experience the web. Someone on a slow connection might want to make images start loading earlier than when the image enters the viewport in order to finish loading in time, while someone else with a lot of bandwidth who dislikes lazy loading can disable it entirely.

(In general, I believe it is important that common website practices are standardised in order to give some control of the experience back to the user, or we may eventually find ourselves with a web that is more of a closed runtime than a document platform which is open to changes by extensions, user scripts and userstyles.)

@jakearchibald


Collaborator

jakearchibald commented Jul 4, 2017

@Zirro those arguments are the "browser heuristics" and "no JS dependency" benefits I already mentioned, no?

@Zirro


Contributor

Zirro commented Jul 4, 2017

@jakearchibald I suppose I understood the "no JS dependency" benefit as referring only to having to load less JavaScript rather than the content being available to non-scripting agents, and missed the meaning of "browser heuristics" in your sentence. Still, I hope that detailing the arguments and why they are important can help convince those who are not yet sure about why this would be useful.

@domenic


Member

domenic commented Jul 4, 2017

In general non-scripting agents are not a very compelling argument to get browsers to support a proposal, given that they all support scripting :). (And I believe these days you can't turn off scripting in any of them without extensions.)

@Zirro


Contributor

Zirro commented Jul 4, 2017

@domenic I would hope that they see the value in having a Web that is accessible to all kinds of agents beyond their own implementations, much like a person without a disability can see the value of designing a website with accessibility in mind.

@JoshTumath


JoshTumath commented Jul 4, 2017

In general non-scripting agents are not a very compelling argument to get browsers to support a proposal, given that they all support scripting :).

@domenic The issue is more whether these scripts fail to download, which does lead to an odd experience. It's becoming harder and harder these days to progressively enhance websites as we seem to depend on scripting more and more for the core functionality of our websites.

Yeah I think browser heuristics (along with no JS dependency) are the remaining selling points of something like lazyload. But is it enough to justify it?

I think both of these are big selling points for the reasons above. As I say, this is not something that's possible to progressively enhance. There is not any way to provide a fallback for those for whom the JS fails for whatever reason.

A few years ago, GDS calculated how many visits do not receive 'JavaScript enhancements', which was a staggering 1.1%. Like GDS, at the BBC, we have to cater to a very wide audience and not all of them will have stable internet connections. I have a good connection at home and even for me the lazyloading script can fail to kick in sometimes.

Additionally, I feel as though we haven't covered one of the main issues with this that I mentioned in my original comment:

We don't know in advance the size of the visitor's viewport, so we have to arbitrarily determine which images to load in lazily.

Because we're using a script, we've had to use placeholder divs for most images. While this is great for mobile devices whose viewports are too small to see many images at once, this is really unhelpful on large viewports. It creates an odd experience and means we can't benefit from having the browser start downloading the images as normal before DOMContentLoaded is triggered. Only a browser solution can know in advance the viewport size and determine which images to download immediately and which ones to only download once scrolled into view.

@hartman


hartman commented Jul 4, 2017

@domenic The issue is more whether these scripts fail to download, which does lead to an odd experience. It's becoming harder and harder these days to progressively enhance websites as we seem to depend on scripting more and more for the core functionality of our websites.

I completely agree with this. At Wikipedia/Wikimedia, we have seen that interrupted JS downloads in low quality bandwidth situations are one of the most common causes of various problems. And that's also exactly the user situation where you'd want lazy loaded images. I'd guess with service workers you could do lazy loaded images as well, and then at least you're likely to have them on your second successful navigation, but yeah:

It's becoming harder and harder these days to progressively enhance websites as we seem to depend on scripting more and more for the core functionality of our websites.

Only a browser solution can know in advance the viewport size and determine which images to download immediately and which ones to only download once scrolled into view.

@addyosmani


addyosmani commented Jul 13, 2017

A topic I would like to tease apart is whether lazy-loading of images alone is the most compelling use case to focus on vs. a solution that allows attribute-based priority specification for any type of resource (e.g. <iframe lazyload> or <video priority="low">).

I know <img lazyload> addresses a very specific need, but I can imagine developers wanting to similarly apply lazyload to other types of resources. I'm unsure how much granular control may be desirable however. Would there be value in focusing on the fetch prioritization use-case?

@JoshTumath


JoshTumath commented Jul 13, 2017

It would definitely be useful to have this for iframes, objects and embeds as well!

As for video and audio, correct me if I'm wrong, but unless the preload or autoplay attributes are used, the media resource won't be downloaded anyway until it's initiated by the user. However, if they are specified, it might be useful to be able to use lazyload so they don't start buffering until they are scrolled into view.

When you mention a more general priority specification, do you mean something like the old Resource Priorities spec? What kind of behaviour are you thinking of?

@smfr


smfr commented Nov 8, 2017

I believe Edge already does lazy image loading. For out-of-viewport images, it loads enough of the image to get metadata for size (with byte-range requests?), but not the entire image. The entire image is then loaded when visible to the user.

Would lots of small byte-range requests for out-of-viewport images be acceptable?

@annevk


Member

annevk commented Nov 9, 2017

@mcmanus I think the previous comment in this thread is of interest to you.

@shallawa


shallawa commented Nov 10, 2017

I have two questions:

  • Will the css background image use a similar attribute?
    .box { background-image: url("background.gif") lazyload; }

  • The image async attribute is discussed in #1920. I think the 'lazyload' and the 'async' attributes are very related. The first one postpones loading the image till it is needed. The second attribute moves the decoding to a separate thread, which skips drawing the image till the decoding finishes. They both have almost the same effect if the image source or the decoded image is not available: the image will not be drawn; only the background of the image element will be drawn. When the image source and the decoded image are available, the image will be drawn.

I can't think of any use of these cases:

async="on" and lazyload="off"
async="off" and lazyload="on"

If any of them is "on", the browser will be lazily loading or decoding the image. In either case, the user won't see the image drawn immediately. So shouldn't a single attribute be used to indicate laziness for both loading and decoding the image?

<img src="image.png" lazy>
and
.box { background-image: url("background.gif") lazy; }

@JoshTumath


JoshTumath commented Feb 12, 2018

Will the css background image use a similar attribute?
.box { background-image: url("background.gif") lazyload; }

I guess that would be a separate discussion in the CSS WG, but at least in the case of BBC websites, the few background images that are used are visible at the top of the page, and therefore need to be loaded immediately anyway.

If any of them is "on", the browser will be lazy loading or decoding the image. In any case, the user won't see the image drawn immediately. So should not a single attribute be used to indicate the laziness for loading and the decoding the image?

It also depends on if these attributes would prevent the image from being downloaded entirely, or whether it would just affect the order in which the images are downloaded. (I think the latter would be much less useful.)

@Malvoz


Malvoz commented Feb 24, 2018

The content performance policy draft suggests <img> lazy loading. Although they only mention lazy loading of images and no other embeds, it seems that their idea is to enable developers to opt in to site-wide lazy loading.

@othermaciej


othermaciej commented Apr 5, 2018

For a complete proposal, we probably need not just a way to mark an image as lazy loading but also a way to provide a placeholder. Sometimes colors are used as placeholders but often it's a more data-compact form of the image itself (a blurry view of the main image colors seems popular). Placeholders are also sometimes used for non-lazy images, e.g. on Medium the immediately-visible splash images on articles briefly show a fuzzy placeholder.

Also: Apple is interested in a feature along these lines.

@laukstein


laukstein commented Apr 5, 2018

@othermaciej in early 2014 I proposed a CSS placeholder (similar to the background property, only applied until the image has loaded or failed) https://lists.w3.org/Archives/Public/www-style/2014Jan/0046.html and there still hasn't been any progress related to it.

@bengreenstein


bengreenstein commented Apr 9, 2018

The Chrome team's proposal is a lazyload="" attribute. It applies to images and iframes for now, although in the future we might expand it to other resources like videos.

"lazyload" has the following states:

  • on: a strong hint to defer downloading below-the-fold (BTF) content until the last minute
  • off: a strong hint to download regardless of viewability
  • auto: deferral of BTF downloading is up to the user agent. (auto is the default.)

In Chrome we plan to always respect on and off. (Perhaps we should make them always-respected in the spec too, instead of being strong hints? Thoughts welcome.)
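Purely to illustrate the proposed syntax (the file names here are placeholders, not part of the proposal):

<img src="hero.jpg" width="976" height="549" alt="Top story" lazyload="off">          <!-- always download, regardless of viewability -->
<img src="gallery-12.jpg" width="640" height="360" alt="Gallery item" lazyload="on">  <!-- defer until near the viewport -->
<iframe src="related.html" lazyload="auto"></iframe>                                  <!-- default: the UA decides -->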

Deferring images and iframes delays their respective load events until they are loaded. However, a deferred image or iframe will not delay the document/window's load event.

One possible strategy for lazyload="on", which allows lazily loading images without affecting layout, is to issue a range request for the beginning of an image file and to construct an appropriately sized placeholder using the dimensions found in the image header. This is the approach Chrome will take. Edge might already do something similar.

We’re also open to the idea of supporting developer-provided placeholder images, though ideally, lazyloaded images would always be fully loaded before the user scrolls them into the viewport. Note that such placeholders might already be accomplishable today with a CSS background-image that is a data URL, but we can investigate in parallel with lazyload="" a dedicated feature like lowsrc="" or integration into or similar.

Although we won’t go into the details here (unless you’re interested), we also would like to add a feature policy to flip the default for lazyload="" from auto to off. This would be used for example for a particularly important <iframe>, where you could do , which would disable all lazyloading within that frame and its descendants.

@JoshTumath


JoshTumath commented Apr 9, 2018

@bengreenstein It is great to hear your proposal. I have a couple of questions:

on: a strong hint to defer downloading BTF content until the last minute

Does this imply images will not be downloaded until visible in the viewport (at least on metered network connections)?

One possible strategy for lazyload="on", which allows lazily loading images without affecting layout, is to issue a range request for the beginning of an image file and to construct an appropriately sized placeholder using the dimensions found in the image header. This is the approach Chrome will take.

If width and height attributes are already provided by the author, will that negate the need for this request?

@othermaciej


othermaciej commented Apr 10, 2018

Here are a number of thoughts on this proposal:

  • I wish there was a way to make this a boolean instead of a tristate, since boolean attributes have much sweeter syntax in HTML.

  • Bikeshed comment: If it has to be a tristate, maybe we can have more meaningful keywords than on and off. How about load=lazy, load=eager, load=auto? This also makes it feasible to add other values if a fourth useful state should ever be discovered. And it's also a bit more consistent with the decoding attribute (on which see more below).

  • Many developer-rolled versions of lazy loading use some form of placeholder so that seems like an essential feature. CSS background-image with a data: URL seems like a pretty inelegant (and potentially inefficient) way to do it.

  • It's always possible to see the placeholder state during an initial load or when scrolling fast soon after load on a slow network. So it can't be assumed that "lazyloaded images would always be fully loaded before the user scrolls them into the viewport". This is a good goal but not always achievable. Concretely, I frequently see the placeholder image on Medium posts when on LTE and can sometimes even see flashes on my pretty good home WiFi.

  • It would be good to figure out how this interacts with async decoding. Should lazy-loaded images be asynchronously decoded as if decoding=async was specified? I think probably yes, as the use cases for sync decoding don't seem to be consistent with lazy loading. At the extreme you could think of lazy as an additional decoding state, though that might be stretching the attribute too far.

@smfr


smfr commented Apr 10, 2018

Some additional thoughts:

  • it should be possible to specify a placeholder which is a content image (many sites use a low-res image placeholder and replace with a high-res version)
  • it needs to work with <picture>, srcset
@othermaciej


othermaciej commented Apr 10, 2018

Good point. We need to consider <picture> too, where different <source> images may need different placeholders, since they may not all have the same size.

@bengreenstein


bengreenstein commented Jun 18, 2018

@smfr and @domenic I see your points.

lazyload="metadata": I don't see the fetching of metadata as being something we should require of UAs. The loading of metadata is the default lazyload behavior for images in Chrome and I'd recommend that loading behavior for other UAs. Do you think other UAs should be required to fetch metadata? If so, and we made it the default behavior, do you think it would also be useful to provide lazyload="on-but-please-no-metadata"? Related, do you think we should improve interopability w.r.t. tracking pixels? E.g., Chrome will use some heuristics to determine which images to not lazyload, e.g., display:none, etc.. We could also require other UAs to do the same. Wdyt?

canvas: By default, the UA should lazyload when it is reasonably safe to do so. The presence of a canvas tag makes image lazyloading unsafe, so by default Chrome won't lazyload images when there's a canvas tag. The developer can explicitly enable lazyload on any image to override the default behavior. I can see a few other options: The first is to not change the default lazyloading behavior and instead require developers to disable lazyloading on any images that will be used by the canvas. The second is to provide a canvas attribute to say that the canvas is not affected by lazyloading. Wdyt?

@jakearchibald


Collaborator

jakearchibald commented Jun 19, 2018

@bengreenstein even if the metadata fetch is optional, it should be written into a standard. This could be a "may" series of steps.

If other browsers want to fetch metadata, this should be interoperable.

This is especially important if range requests are used. Range requests have historically been a source of security issues due to their lack of standardisation, and we shouldn't make the same mistake here. From in-person discussions, I believe the plan is to issue a range request for the metadata, but then make a standard non-ranged request for the full resource. This avoids the kind of security issues related to joining partials, which is exactly why it should be written down.
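To make the described flow concrete, here is a rough sketch in fetch() terms. This is behaviour the UA would perform internally, not page script, and parseImageHeader/reserveLayoutSpace are hypothetical helpers used only for illustration:

// Illustrative only: the UA-internal two-step load described above.
async function lazyLoadWithMetadata(imageURL) {
  // Step 1: a range request for just enough bytes to read the image dimensions.
  const head = await fetch(imageURL, { headers: { Range: 'bytes=0-2047' } });
  const { width, height } = parseImageHeader(await head.arrayBuffer()); // hypothetical parser
  reserveLayoutSpace(width, height); // hypothetical: size the placeholder box

  // Step 2: when the image approaches the viewport, a normal, non-ranged request
  // for the whole resource - no stitching together of partial responses.
  return fetch(imageURL);
}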

I'm happy to review. Recent additions to the fetch spec such as https://fetch.spec.whatwg.org/#concept-request-add-range-header may help.

@jakearchibald


Collaborator

jakearchibald commented Jun 19, 2018

https://wicg.github.io/background-fetch/#validate-partial-response-algorithm may also be interesting, which is something I'm working on at the moment.

@bengreenstein


bengreenstein commented Jun 23, 2018

Thinking about the interaction between image lazyloading and canvas, we don't lazyload images that are outside of the document, so I believe this would only be an issue if the canvas uses an image that is in the document and outside the viewport. This seems to me to be an edge case. To address it the developer can explicitly disable lazyload for that image.

On the topic of fetching image metadata, I don't think we should expose this choice to the web platform, nor should we require a user agent to fetch it. However, I do think we should agree on a common way to fetch metadata if a user agent chooses to do so. I've updated the PR with an algorithm.

@jakearchibald I'd appreciate a review.

@bengreenstein


bengreenstein commented Jun 28, 2018

I added the algorithm for fetching image metadata. Please take a look. Are there any further concerns with this being interoperably implementable? We hope to ship this in Chrome soon.

@feross


feross commented Aug 13, 2018

Has the use case of lazy loading CSS background images been addressed yet? Might we need something like this?

<div style="background-image: url(img.jpg); background-lazyload: on;"></div>

Edit: Nevermind, I can just use an <img> tag. Now that we have object-fit: cover; I don't need to use a background image. Are there other cases where lazy loading background images makes sense? I can't think of any.
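For anyone with the same question, the substitution looks roughly like this (class name and sizing are illustrative):

<!-- Instead of a lazily loaded CSS background image... -->
<div class="hero" style="background-image: url(img.jpg);"></div>

<!-- ...use a normal img that fills its box, so any native lazyload mechanism can apply to it. -->
<div class="hero">
  <img src="img.jpg" alt="" style="width: 100%; height: 100%; object-fit: cover;">
</div>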

@Malvoz


Malvoz commented Aug 14, 2018

@davidar


davidar commented Aug 14, 2018

In general non-scripting agents are not a very compelling argument to get browsers to support a proposal, given that they all support scripting :).

@domenic The issue is more whether these scripts fail to download, which does lead to an odd experience. It's becoming harder and harder these days to progressively enhance websites as we seem to depend on scripting more and more for the core functionality of our websites.

It's also worth noting that things like Firefox Reader View are impacted by lazy-loading scripts: mozilla/readability#299

@amazingrando


amazingrando commented Aug 14, 2018

What if the person viewing the site doesn't want lazy-loading? A person should have the option to disable lazy-loading at the browser level to override what a website has declared.

@Malvoz


Malvoz commented Aug 14, 2018

@amazingrando, maybe what @bengreenstein said makes you feel more comfortable with lazyloading:

We plan to start loading when the element gets close to the viewport due to scrolling, allowing some padding so that the resource finishes loading before actually reaching the viewport.

@amazingrando


amazingrando commented Aug 14, 2018

@Malvoz Thank you but this doesn't address the need. There should be options in the browser for a user to turn on or off such features. The preferences of the browser user should supersede what is declared in a web page.

When I load a web page on my main computer, I want the whole page to load so that all of the assets are available. I should be able to declare that in the browser (and without a need for a plugin).

And to further clarify, I want lazyloading available for the web, I just want to be mindful that we continue to empower individuals whose preferences don't align with our own.

@nuxodin


nuxodin commented Aug 14, 2018

I would find it useful to have an API to intervene before the resource is loaded:

document.addEventListener('before-lazyload', function (event) {
    const img = event.target;
    if (img.matches('[data-resized-on-demand]')) {
        // Point the image at a server-side resized version matching its rendered size.
        img.src = '/img/width-' + img.offsetWidth + '/height-' + img.offsetHeight + '/img.jpg';
        event.preventDefault();
    }
});

Or a universal "beforeload" event?
But would that raise security concerns? https://bugs.chromium.org/p/chromium/issues/detail?id=333318

@Nettsentrisk


Nettsentrisk commented Aug 16, 2018

In my humble opinion, this should not be in the HTML spec, and should be handled entirely by browsers.

In any case, why not reuse the defer attribute from script, and allow it to have a boolean value? Lots of talk about "deferring" here, and that also makes sense. "lazyload" is an ugly attribute name and is not very descriptive.

As far as "auto" goes, the browser can just handle all images like this according to their own rendering algorithm without this being set in the HTML.

@JamieWohletz


JamieWohletz commented Aug 17, 2018

I didn't see this mentioned anywhere in this thread or the other one, so forgive me if I missed it...

Another common use case that we have at image sites like Shutterstock, iStock, and Adobe Stock is to server-render all our <img>s in the DOM so that SEO crawlers can access their srcs without JavaScript. It would be nice to have a built-in way to lazy-load some images so that we could keep server-rendering for SEO crawlers, but offer the end user a snappy experience.

I think for this reason alone, this feature is extremely valuable.

@herrernst


herrernst commented Aug 17, 2018

@Nettsentrisk IMHO this needs to be in the spec, because it needs to be opted into by web sites; otherwise it would break a lot of existing sites (e.g. those that assume that all images are loaded at window.onload and query their or their parent nodes' dimensions).

@genemars


genemars commented Aug 18, 2018

Not loading off-viewport images would cause serious reflow and page performance issues.

Also, lazy-loading should not be considered something specific to image loading. Once components become a reality in all browsers, there will be a need to lazy-load any kind of component, not just images, and I believe that by then experts will find a proper solution.

Right now we can use JavaScript (e.g. intersection observers) and, in the near future, perhaps the intrinsicsize attribute, which will solve reflow issues caused by not being able to determine image size ahead of time. If you have ever tried any JS lazy-loading implementation, you probably know how annoying it is to deal with container size and aspect ratio when the image has not been loaded yet.

@verlok


verlok commented Aug 20, 2018

Hey there, I'm glad to see that things are moving to make it work natively! 👍

Right now we can use JavaScript (e.g. intersection observers) and, in the near future, perhaps the intrinsicsize attribute, which will solve reflow issues caused by not being able to determine image size ahead of time. If you have ever tried any JS lazy-loading implementation, you probably know how annoying it is to deal with container size and aspect ratio when the image has not been loaded yet.

I wrote some tips and tricks in my lazyload's readme on how to occupy horizontal and vertical space while the image is not loaded yet. So there's at least one solution.


By the way, my lazyload takes advantage of IntersectionObserver as @genemars suggested, and it weighs about a quarter as much as LazySizes, to reply to @wildlyinaccurate. Find it on GitHub and npm.

@verlok


verlok commented Aug 20, 2018

Just to add "things to consider" to the discussion, how would a developer "pass options" to the desired behaviour of a html native lazyload?

Things like the following:

  • the distance ahead of the viewport's "fold" to which the browser should start loading the images
  • the time after which the lazy download should begin, to avoid loading images when the user is scrolling
    fast over the images
  • will there be callbacks on when the images will start/finish loading, enter/exit the viewport, etc?

Will developers need to put all of these options in each img or iframe tag?

@genemars


genemars commented Aug 20, 2018

@verlok caniuse.com reports that IntersectionObserver is not yet supported in Safari.
I also implemented a lazy-load facility that is not limited to images but applies to components as well.
It also has a tolerance parameter, which covers your first point (the distance ahead of the viewport at which loading starts).
It would be very nice to see all of these implemented natively in browser. =)
I just starred your repo =)

@verlok


verlok commented Aug 20, 2018

caniuse.com reports that IntersectionObserver is not yet supported in Safari.

Right, and there's IE 11 too. That's why I recommend dynamically loading version 8 or 10 of LazyLoad depending on the browser's support for IntersectionObserver. They share the same API, but if you load only version 10, all images will be loaded at once in browsers which don't support IntersectionObserver.

Coming back to the main topic of having a native lazyload feature, and considering how quickly browser vendors adopt new standards: how would a developer deal with the fact that an image marked as "lazy" (<img src="" lazyload>) would load lazily only in a subset of browsers?

@addyosmani


addyosmani commented Aug 20, 2018

As it's likely only a subset of browsers will initially support native lazy-loading, we probably want to define a mechanism for feature-detecting lazyload support.

I can see web developers wanting to provide a fallback for browsers that don't support lazyload (yet) to ensure a more consistent cross-browser experience.

Failing that, a developer needing this level of control could use the lazyload Feature Policy to switch the native behavior off and just use a JS library (like LazySizes) for their lazy-loading needs.
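A rough sketch of that fallback pattern, assuming the attribute is reflected as an IDL property (the library path is a placeholder; see also the detection caveat raised just below):

if ('lazyload' in document.createElement('img')) {
  // Native support: the lazyload attributes already present in the markup take effect.
} else {
  // No native support: load a JS lazy-loading library instead.
  const script = document.createElement('script');
  script.src = '/js/lazyload-fallback.js'; // placeholder path
  script.async = true;
  document.head.appendChild(script);
}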

@clelland


Contributor

clelland commented Aug 20, 2018

As an HTML image attribute, is it sufficient for web developers to check

    typeof document.createElement('img').lazyload === 'undefined'

to see whether the attribute is supported?

@jakearchibald


Collaborator

jakearchibald commented Aug 20, 2018

'lazyload' in img

…would do the trick. I'm not sure what you'd do with that info though. It's too late to try and polyfill at that point.

@Nettsentrisk


Nettsentrisk commented Aug 20, 2018

Will developers need to put all of these options in each img or iframe tag?

See, this is why I think that we just need to hand this off to the browser and let it take care of all this complexity. The developer says "Hi, I would like you to defer loading this image", and then the browser in question decides how to do it. We're approaching JS-in-HTML territory here otherwise.

If you want to have full control over how the lazy-loading functions, then you'll just need to roll your own solution anyway and not leave it up to the browser.

@Malvoz


Malvoz commented Aug 21, 2018

@verlok

Will developers need to put all of these options in each img or iframe tag?

@Nettsentrisk

If you want to have full control over how the lazy-loading functions, then you'll just need to roll your own solution ...

#2806 (comment)

@eeeps


Contributor

eeeps commented Aug 23, 2018

@jakearchibald Re: feature detection in JS happening too late:

Installing a MutationObserver in the <head> and using it to set src="" ASAP is gross, but much better than nothing (see the sketch at the end of this comment). In a very simple test:

  • No image requests visible in Firefox dev tools
  • Safari shows requests but 0 bytes transferred
  • Chrome seems to get 4 kb over the wire before calling it quits

https://twitter.com/rikschennink/status/931256220303978496
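Roughly, the head-of-document script looks like this (a sketch, not the exact code behind the tweet; the data-lazy opt-in attribute is hypothetical):

// Must run as an inline script at the top of <head>, before any <img> markup is parsed.
const mo = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node.nodeName === 'IMG' && node.hasAttribute('data-lazy')) {
        node.dataset.src = node.src;  // stash the real URL for later
        node.removeAttribute('src');  // or set src="" as in the tweet; either aims to stop the request early
      }
    }
  }
});
mo.observe(document.documentElement, { childList: true, subtree: true });
// An IntersectionObserver can later restore src from data-src as images approach the viewport.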

@Ambient-Impact


Ambient-Impact commented Aug 24, 2018

@eeeps I've tested that, and it doesn't seem to always catch all images on the initial load in both Firefox and Chrome. Sometimes it does, but sometimes the browser seems to just download a couple of images regardless, even if the MutationObserver is inlined in the <head>. I don't know if that's related to having the cache disabled while devtools are open, or if it's something to do with the way browsers parse HTML and fire off requests pre-emptively. I'd be interested to find out if anyone else is seeing the same results.

I very much wish we had a way to tell the browser to delay loading images without having to remove the src attribute, which rubs me the wrong way with regards to accessibility and just valid markup.

@jakearchibald


Collaborator

jakearchibald commented Aug 24, 2018

if it's something to do with the way browsers parse HTML and fire off requests pre-emptively

That's the reason.

@herrernst


herrernst commented Aug 27, 2018

@verlok

  • the distance ahead of the viewport's "fold" to which the browser should start loading the images
  • the time after which the lazy download should begin, to avoid loading images when the user is scrolling

I think these are mostly things the browser knows best (slow/fast network connection, is user scrolling fast?) and should decide itself.
