
Allow CSS larger than 50k if 90% used #4555

Closed
jpettitt opened this issue Aug 17, 2016 · 77 comments

Comments

@jpettitt (Contributor) commented Aug 17, 2016

We'd like to see the CSS limit increased from the current 50K to 100K.

Background: we're automatically converting sites, including all their look and feel, to AMP. Many sites have 300-400k of CSS, which we prune down to only the rules that are used on the page. This typically yields between 30 and 80k of CSS.

This means that for the larger files we need to optimize the CSS size very aggressively by rewriting extensively, which is (a) computationally expensive, (b) tends to break things in unexpected ways, and (c) means we're dropping /*! comments containing rights info, which puts us out of license compliance. Increasing the limit to 100K would make all those issues go away for many publishers who are converting existing templates.

Here is an example page that's right up against the current limit https://cdn.relaymedia.com/amp/www.niemanlab.org/2016/08/that-friends-and-family-facebook-algorithm-change-doesnt-seem-to-be-hurting-traffic-to-news-sites/

@erwinmombay (Member) commented Aug 17, 2016

@dvoytenko (Collaborator) commented Aug 17, 2016

@Gregable Do we have some quick stats of CSS size distribution in AMP pages overall?

@jpettitt (Contributor, Author) commented Aug 17, 2016

@dvoytenko and @Gregable it's not just the current size: as we move beyond using AMP for phones and start using it for responsive pages like the one I linked above, the CSS size jumps markedly.

@dknecht (Contributor) commented Aug 17, 2016

Maybe we can have sections that aren't delivered to mobile. The cache can remove and vary on mobile vs desktop?

@jpettitt (Contributor, Author) commented Aug 17, 2016

@dknecht The issue shows up when you're converting existing content; for new designs we can hand-build CSS and get it under the limit. However, that doesn't scale to tens of thousands of sites. As automatic AMP converters like ours get better (see link above), we're starting to bring in the whole page rather than just fragments of it.

Apart from the initial cost of processing the CSS, it adds very little to the page weight overall (particularly since AMP doesn't load any graphics in areas hidden by responsive layout). The page I linked above is 22.2k on the wire despite having 46k of CSS and an overall size of 102k uncompressed.

@Gregable (Member) commented Aug 17, 2016

@dvoytenko, I don't have anything at hand. I suspect it wouldn't be all that interesting though since folks have been optimizing against the current rules.

@jpettitt, at a quick glance, that css example looks pretty bloated. I count 8 different rules targeting .simple-rightsidebar which is on exactly 1 tag that has 6 other classes on it.

@pdufour (Contributor) commented Aug 17, 2016

I'm a fan of keeping it at 50k, personally. The fact that the spec is causing you to rewrite your CSS is a good sign: it's making you keep it lightweight.

@jpettitt (Contributor, Author) commented Aug 17, 2016

@Gregable sadly, that's the nature of converting existing templates. If you look closer, a lot of the rules you cite are inside media queries. AMP forces bloat when converting existing templates: all the rules that start with ._RM were created to replace style attributes on elements, and all the rules that start with #_RM were created to replace !important declarations in the original CSS. This particular example bloats a lot because the original was liberal with !important and style attributes.

I'll send you a link on Slack to an example that's over the limit (not a live customer, so I can't post it here).

@pdufour rewriting CSS is fine if you have the resources, but it doesn't scale to tens of thousands of sites. Sticking at 50K makes automated conversion with the full look and feel far harder. As is, we can optimize heavily: the example linked had 183,584 bytes of CSS reduced to 47,612, with 2,225 rules cut down to 709.

I'm trying to avoid having to rip out all the media queries and treat the page as phone-only. That would work for now; however, if we want AMP to be usable as a more general fast-web-page framework, we need to allow enough space for a responsive design.

@jmadler (Contributor) commented Aug 22, 2016

My understanding of the choice of 50k as the CSS limit was that it was believed to be enough to fit nearly all cases, but not enough to simply copy/paste an existing CSS implementation over, for the reasons @pdufour described.

Lifting it could remove that performance improvement (negligible though it may be), but it could be replaced by other solutions. Maybe a cache optimization that removes any unused CSS rules based on selectors that don't apply, or a cache optimization that intelligently compresses CSS rules/selectors.

@jpettitt (Contributor, Author) commented Aug 23, 2016

@jmadler we're already removing unused CSS and, if it still doesn't fit, renaming all the classes and IDs to short names. We still see CSS over 50k (keep in mind that with some sites we're starting with as much as 800k of CSS). We're also stripping data URLs. We've yet to find one we can't squeeze under 100k. AMP itself creates some of the issues by banning '*', !important, and inline styles.

I think this goes to a bigger question: is AMP a mobile-only and article-only standard, in which case we could pre-render all the media queries and strip out anything wider than a phone? Or is it (or will it become) a more generalized acceleration framework? If the latter, particularly if we're going to do highly structured responsive pages like product pages, home pages, etc., then 50k becomes a real obstacle.

@jpettitt (Contributor, Author) commented Aug 25, 2016

So where are we with this?

@dvoytenko (Collaborator) commented Aug 25, 2016

Let me pass it on to @cramforce. He is back next week and can provide a definitive answer.

@cramforce (Member) commented Aug 26, 2016

It would be good to get a good sample of pages that run up against the limit. I expected this to be controversial, but haven't seen it come up anywhere else yet.

@jpettitt (Contributor, Author) commented Aug 29, 2016

@cramforce I'll send you a customer example over slack (not a live site yet so I can't post it here).

@ericlindley-g added this to the Pending Triage milestone Aug 31, 2016
@jpettitt (Contributor, Author) commented Sep 7, 2016

ping.

@cramforce (Member) commented Sep 7, 2016

Here is my recommendation:

  • leave the limit unchanged for now
  • don't count whitespace toward the limit
  • no longer count non-data URIs toward the limit

@jpettitt (Contributor, Author) commented Sep 8, 2016

Not perfect, but better than nothing. If we could do that and also bump to 75K, that would be perfect.

@cramforce (Member) commented Sep 8, 2016

I've not heard any other parties ask for an increase in size. This seems to come up when transcoding pages to AMP, but AMP is not designed to be a transcoding target for non-AMP pages.

@jpettitt (Contributor, Author) commented Sep 8, 2016

I get that it's not designed to be a transcoding target. However, in the real world, automatically translated pages mostly look like sh*t (e.g., the WordPress plugin), and the vast majority of sites don't have developers on staff. Those that do have developers have a to-do list 50 items long, and only the top one or two items on the list will ever happen. If AMP is to achieve parity in UX with existing sites and spread beyond the minority of publishers who have both the technical staff and the free resources to do a decent AMP conversion, auto-transcoding is pretty much essential.

Publishers we talk to complain that AMP doesn't monetize well, and that will kill AMP. Much of the reason it's not monetizing is that they have abysmal AMP conversions that don't support the full ad map, lack navigation, and lack recirculation elements.

I think this comes down to letting the perfect be the enemy of the good. Yes, in a perfect world we'd have all AMP pages designed from scratch, avoiding all the bad practices. This isn't a perfect world, and there is an installed base running to billions of pages on millions of sites. Allowing a simple path from there to here will speed AMP adoption, improve monetization, and therefore help the AMP ecosystem.

I'm having a hard time seeing why what is basically a number pulled out of thin air is so important that it's worth making people spend, cumulatively across all sites, millions of dollars on rewrites.

@cramforce (Member) commented Sep 9, 2016

@jpettitt Does your current output only include selectors that actually match on the pages they apply to?

@jpettitt (Contributor, Author) commented Sep 9, 2016

Yes, we compare every CSS rule to the actual page content and drop all those that don't apply. If that doesn't get it under the limit, we rename all the classes and IDs. Finally, if that doesn't do it, we start pre-computing media queries and stripping out content for wider pages; this last step is what we'd like to avoid.

In most cases we're cutting the original CSS by ~80%, sometimes by as much as 90%.
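For illustration, the first pruning step described above could be sketched like this (the `prune_css` helper is hypothetical, and the class/ID token check is a deliberate simplification; a real pipeline matches selectors against the parsed DOM):

```python
import re

def prune_css(rules, html):
    """Keep only rules whose class/ID tokens all appear in the page.

    rules is a list of (selector, body) pairs. Matching identifier
    tokens against the raw HTML is a simplification of real
    selector-vs-DOM matching.
    """
    page_tokens = set(re.findall(r'[\w-]+', html))
    kept = []
    for selector, body in rules:
        # Every .class or #id token in the selector must occur in the HTML.
        tokens = re.findall(r'[.#]([\w-]+)', selector)
        if all(t in page_tokens for t in tokens):
            kept.append((selector, body))
    return kept
```

For example, pruning `[(".used", "color:red"), (".missing", "color:blue")]` against `<div class="used"></div>` keeps only the first rule.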

@cramforce (Member) commented Sep 9, 2016

I sympathize, but we cannot go higher without messing up our performance model.

Maybe we need easily statically separable CSS sections per device type.

@dknecht (Contributor) commented Sep 9, 2016

@cramforce Much earlier in the thread I proposed: "Maybe we can have sections that aren't delivered to mobile. The cache can remove and vary on mobile vs desktop?"

Instead of having publishers set up Vary correctly, we can just have the cache hide sections not needed for the requesting device.

@cramforce (Member) commented Sep 9, 2016

Yep, unfortunately such mechanisms rely on the cache to be fast.

@jpettitt (Contributor, Author) commented Sep 9, 2016

Does it really mess up the model? On the wire we see ~75% compression with gzipped AMP pages, so an extra 50k of CSS is around 12.5k (worst case, probably less) on the wire. With AMP pages having an overall weight, including the JS, ads, images, etc., of 1 to 2 MB, it's in the noise. As is, we're actually slowing the page down as we squish the CSS, by using URL shortening on the CSS url() values and by moving data URLs, particularly small icon-font fragments, to external resources. Not counting non-data URLs will help some.
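The compression claim is easy to check for yourself; this snippet (illustrative synthetic stylesheet, not the pages in this thread) gzips a repetitive stylesheet, the way real-world CSS tends to repeat declarations across many rules:

```python
import gzip

# Build a deliberately repetitive stylesheet; real CSS repeats
# property declarations across many rules in a similar way.
rule = ".rule-%d { color: #333; margin: 0; padding: 10px 15px }\n"
stylesheet = "".join(rule % i for i in range(1000)).encode()

compressed = gzip.compress(stylesheet)
ratio = len(compressed) / len(stylesheet)
print(f"{len(stylesheet)} bytes -> {len(compressed)} bytes ({ratio:.0%} of original)")
```

Repetitive rule bodies typically compress well beyond the ~75% cited above.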

@cramforce (Member) commented Dec 13, 2019

The previous decision was to allow unlimited or at least much-higher-limit CSS if the usage-percentage is high (>XX%). Sounds like both @westonruter's and @stephengardner's examples would be fixed by such a validation change.

PS: We should allow external print style-sheets.

@alastaircoote commented Dec 13, 2019

In addition to the issues others have mentioned here, we at The New York Times are running into issues with the 50KB limit on the custom pages we create for events like elections. In a few instances we've had to remove features we intended to include in the AMP version to get under the limit. For example, our 2018 U.S. Senate results AMP page (in which we aggressively compressed our CSS) is missing some editorial content that exists in our web results.

There are a few reasons:

amp-live-list

We use amp-live-list to update the page when new results arrive. This means we need to bundle the CSS for all possible states rather than optimizing for what the user sees on page load. It would be great if we had the option to add an additional <style> tag inside a live list payload that would let us swap out different CSS like we do HTML.

Site chrome that isn’t visible on load

We have a number of components on our AMP page that aren’t visible on immediate render but might appear shortly afterwards, our paywall dialog being one example. It would be great if we could separate the CSS for these “async” components from the initial 50KB payload, perhaps only loading that CSS when the component is made visible.

We also wondered whether we could wrap something like our site navigation (hidden behind the hamburger button) as an amp-script component, applying CSS separately from the initial payload. But that would be an enormous development effort and require extra processing on the client side. Perhaps there is a way we can achieve the same result without requiring the full amp-script component?

Vendor prefixing

Vendor prefixes are often necessary for supporting older browsers or using new CSS features. Currently, vendor prefixes are counted towards the CSS limit. Would it be possible for vendor prefixed CSS properties to not be included in the limit? Alternatively, polyfills for common properties could be included with the AMP runtime.

Penalizes CSS Background Images

For images that may be repeated multiple times in a page, it can be beneficial to overall page byte size to utilize a CSS background image, rather than an inline SVG. The present 50k limit discourages this practice at the expense of increasing overall page size.

@stephengardner commented Dec 14, 2019

@cramforce Thanks, Malte. Sounds like a decision has already been made; is this related to another open issue, or is there anywhere I can follow it? Or is here best?

@westonruter (Member) commented Dec 14, 2019

PS: We should allow external print style-sheets.

See also #19381

@cramforce (Member) commented Dec 16, 2019

@stephengardner I think this is mostly a question of prioritizing the validator work; not a question of agreement on the path forward.

Having said that, I very much think we need to prioritize this. CC @nainar

@Gregable (Member) commented Dec 16, 2019

@cramforce's suggestion of allowing a "90% used" path as an alternative to the 50k path seems to suffer from underspecification of how "used" is to be defined statically.

Attempting to go this route in practice results in one of two approaches:

  1. Strict: Be very strict/conservative about what we statically consider "used". For example, only accept selectors that we can statically find in the document's DOM. Never accept pseudoselectors, etc. Over time, add more complexity to the rules to extend the definition of used to more cases.
  2. Generous: Be very generous/accepting about what we statically consider "used". For example, if the selector includes a class, substring search for that class in the document and if there is a match, accept it as used. All selectors with pseudoselectors are also considered used, etc.

The generous approach has the problem, in validation, of never being able to support becoming more strict over time. If you were to become more strict, you'd break existing documents, which is difficult at best. Thus, the first iteration of this algorithm is likely the last, even if it's later found to be overly generous and results in slow pages.

Another way to look at the generous approach is that if your site's preprocessing runs the same pruning algorithm as the generous algorithm, you will always be allowed in at whatever CSS size. We would effectively be allowing all CSS that has first been run through a specific tree-pruning algorithm. If that's the design, maybe the requirement should simply be exactly that: running your CSS through pruner X produces no changes to the CSS file.

The strict approach is unlikely to actually help many pages. Unless your CSS usage pattern is nearly all the same as the usage pattern that the developers designed for, your CSS will be considered unused. This then results in bugs being filed for more extended use cases. This approach is also very resource intensive for the AMP team to implement, but does allow for iterative development.

In either case, the pruning/"used" algorithm needs to be implemented in: pure JavaScript, C++, and now Java, as we support Validator ports for all 3. This is a substantial effort most likely, depending on the algorithm proposed. This should be considered against the potential volume of pages this would fix.

@nainar's effort manually examined a representative sample of valid AMP pages whose CSS was close to the 50k limit. All of the sampled documents showed signs that even basic minification had not been performed. This indicates that the suggested solutions wouldn't actually help in most cases and would help only a fairly small fraction of documents.

The examples provided in this thread are effectively anecdotes, though very useful ones.

The nytimes.com example is a very complex, application-like map experience that would be uncommon, but still supported. However, it's only 35k of CSS and still has unused selectors in it. For example:

.e65c.c4f7{width:25px;height:25px;min-width:25px;background-position:0 -112px}

c4f7 isn't in the document outside of the CSS. I don't know if 90% is used or not, only that it's not 100%. It's unclear if a 90% used metric would help this page.

The Shopify CSS uses some very long class strings, such as shopify-section-featured-collections or index-section--newsletter-background, which, while "used", could be minified further. It also uses collections of IDs rather than classes, such as:

#TextColumnImage-1574358043328-0,
#TextColumnImage-1574358043328-1,
#TextColumnImage-1574358043328-2,
#TextColumnImage-1574358287616,
#TextColumnImage-1574358419832,
#TextColumnImage-1574358489107,
#TextColumnImage-1574358514631,
#TextColumnImage-1574358584130 {
	max-width: 100px;
	max-height: 100px
}

The Shopify CSS also has selectors whose properties are completely overwritten. For example, all 3 elements that match the class btn--tertiary also match .article__meta-buttons a.btn, a more specific selector with the same properties:

.article__meta-buttons a.btn {
	background: #fff;
	font-size: 15px;
	font-weight: 400;
	letter-spacing: .28px;
	border-radius: 2px;
	text-transform: capitalize;
	padding: 10px 15px;
	border: 2px solid #a9b18c;
	color: #53641a
}

.btn--tertiary {
	background-color: transparent;
	color: #3d4246;
	border-color: #3d4246
}

background-color: transparent; is a default and the other two properties are overridden by the more specific selector.

It also repeats rules in a number of cases. For example, there are 5 @media only screen and (min-width:750px) rules.

I didn't check, but I imagine every selector matches something. There still remain many improvements that could be made. Some, like the string lengths or discarding browser-default properties, could even be automated.

The automatic AMP generators that attempt to create AMP from non-AMP inputs are another good case. While it's clear that these documents could be optimized further, it's hard to do so without human intervention. Is that the type of use case that we want to support?

@cramforce (Member) commented Dec 17, 2019

So, what I had in mind is an algorithm that, given a document and a stylesheet, classifies for each selector whether the selector has a chance of matching the current document. It is OK to implement this with an algorithm that tokenizes both stylesheet and doc and just checks whether the respective strings even appear, instead of doing full selector matching (so, it is basically a two-phase algorithm: build a hashmap from the doc, then walk the selectors).

If the answer is yes, it counts towards the 90%.

I would ignore all other aspects of CSS size like selector complexity and verbosity. I don't have a good intuition as to whether it is important to enforce things like property shorthand rewriting.

I agree that we only get one shot at this, as making it stricter would not be backward compatible. But selector redundancy (paragraph one of this comment) is the only aspect of CSS that tends to get out of control, so I'm wondering if we should only care about this and ignore all other aspects.
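The two-phase check described above could be sketched like this (`used_fraction` is a hypothetical name; the naive tokenization glosses over pseudo-classes, attribute selectors, and the other hard cases discussed later in the thread):

```python
import re

def used_fraction(selectors, document):
    """Phase 1: tokenize the document into a set (the hashmap lookup).
    Phase 2: walk the selectors; count a selector as 'used' if every
    identifier in it also appears somewhere in the document."""
    doc_tokens = set(re.findall(r'[\w-]+', document))
    used = sum(
        1 for sel in selectors
        if all(tok in doc_tokens for tok in re.findall(r'[\w-]+', sel))
    )
    return used / len(selectors) if selectors else 1.0
```

Under this sketch, `used_fraction([".foo", ".bar", ".baz"], '<div class="foo bar"></div>')` reports 2 of 3 selectors as used.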

@Gregable (Member) commented Dec 17, 2019

@cramforce What you are suggesting is a specific implementation of the generous approach above. Do you mind if I ask for more specifics of your suggestion via some examples? Which of these should be considered used?

/* foo and bar never appear on the same element */
.foo.bar {...}
<div class=foo></div><div class=bar></div>
/* foo never appears on a div, but foo and div are present */
div.foo {...}
<span class=foo></span><div></div>
/* Only body is actually on the page */
body,article,aside,figcaption,figure,footer,header,main,nav,section,... {}
<body></body>
/* selector with no properties */
div {}
/* selector with invalid properties */
div {not-a-real-css-property: foo}
/* Pseudo selectors never match.
   Should validator understand structure and meaning of pseudo-selectors? */
div:nth-child(n+2) {...}
:not(body *) {...}
/* Substring not present in a tokenized document, but is used.
   Should validator understand substrings, not just simple tokenizer */
[class*="foo"] {...}
<div class=blah-foo-blah></div>
/* Animation name never used.
   Should validator understand structure of CSS animations? */
@keyframes mymove { ... }
/* Font family never used.
   Should validator understand structure of CSS fonts? */
@font-face {
	font-family: FooFont;
	src: url(http://site.example/FooFont.woff) format('woff')
}
/* Different stylesheets for every 20px of viewport width, 
   Every selector correctly matches something on the page, but 5MB+ of CSS  */
@media (min-width: 20px)   { /* 10KB of _used_ CSS here. */ }
@media (min-width: 40px)   { /* 10KB of _used_ CSS here. */ }
@media (min-width: 60px)   { /* 10KB of _used_ CSS here. */ }
...
@media (min-width: 1020px) { /* 10KB of _used_ CSS here. */ }
/* Hide the div.alwayshidden node from the page. */
div.alwayshidden {display: none}
/* Always include div.alwayshidden in every selector on the page.
   Validator will consider every selector used. */
div.alwayshidden,.unused {...}
div.alwayshidden,.unused2 {...}
...
<body>...<div class=alwayshidden></div></body>

Note on this last one that there will almost certainly be ways of defeating the "used" algorithm, even if this example is handled. These mechanisms will likely be fully automatable by a simple script/tool that takes an input CSS file and mutates it, as in this case where div.alwayshidden is added to every selector. Such a script will likely be easier to implement than actually removing unused CSS. If we consider that OK, I recommend that we instead just remove the CSS byte limit entirely.

@cramforce (Member) commented Dec 18, 2019

I'd certainly be open to feedback, but I was thinking of a relatively naive mechanism that ignores document structure. I think the main question is: would folks start writing transformers to game the system (such as by creating an empty element that just lists all classes in the CSS), or would they be incentivized to do the right thing and optimize their CSS? I think the latter is more likely in this case :)

@westonruter (Member) commented Dec 18, 2019

I think the problem here is that the CSS in question is often not “theirs”. In a WordPress context, the CSS may be coming from third-party theme or plugin developers. If the site owner's choice is between abandoning a plugin with a critical site feature because of its unoptimized CSS and creating a hack that forces the unoptimized CSS to be valid AMP, then I believe the latter route will almost always be taken.

@cramforce (Member) commented Dec 18, 2019

@westonruter So, you're arguing we should not relax the current rule at all?

@westonruter (Member) commented Dec 18, 2019

Not necessarily; some relaxing seems required. I just wanted to make a philosophical assertion about human nature and pragmatism. If a hack is the easiest way to make something work, then a hack will be used.

The hack of prepending selectors with a dummy element would be doubly bad: it would not only result in CSS above 50KB, it would also leave a large percentage of wasted bytes.

What if the validator allowed any amount of CSS, but issued a warning instead of an error above 50KB? This would give an incentive to those who want to do the right thing, but wouldn't be a blocker for those who can't.

@cramforce (Member) commented Dec 18, 2019

@Gregable (Member) commented Dec 18, 2019

Here's a short Python implementation of the "hack" I called out in my last example above:

def add_alwayshidden(input_content):
  out = ""
  bracedepth = 0
  for c in input_content:
    if c == '{':
      if bracedepth == 0:
        out += ",div.alwayshidden"
      bracedepth += 1
    if c == '}':
      bracedepth -= 1
    out += c
  return out

It doesn't require parsing CSS or even touching HTML. Obviously it has bugs, but it already works in the majority of cases.

As @westonruter showed above, there are already numerous CSS converters using this trick to work around !important, prepending selectors with:

:root:not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_):not(#_) .amp-wp-27ad0ad {

I don't see any problem with this !important workaround, but it does speak to the idea that it's easier to implement a hack than a CSS-purging pipeline. Even manually adding div.alwayshidden to a few hundred CSS selectors is easier than CSS purging. I agree with @westonruter that hacks like this would be implemented quickly.

I agree that we only get one shot at this.

I also would not feel bad to do some cat and mouse game

Speaking from experience contributing to the validator, I don't feel that the cat-and-mouse game is a serious option, as I tried to explain when discussing the drawbacks of the "generous" approach above. The validator would need to invalidate the "hack" cases in some way that doesn't invalidate non-hack documents that currently pass.

@dvoytenko (Collaborator) commented Dec 18, 2019

A couple of thoughts:

  1. Elements that start undisplayed (sidebars, lightboxes, etc.) could allow some form of scoped CSS.
  2. Theme CSS could be asynchronous and passed through the cache.

Scoped CSS

For instance, a sidebar or a lightbox could allow scoped CSS. This could work even for amp-list and other template-based elements. And could work for amp-script as well.

<amp-sidebar id="sidebar1">
  <style scoped>
    #sidebar1 .x {...}
  </style>
</amp-sidebar>

Validating such CSS should be straightforward: every selector has to be prefixed with #ID. Probably the only exception would be #ID + x selectors.

There are many CSS-scoping libraries. AMP uses one internally for its Shadow DOM polyfill. Plus there's a whole host of CSS-in-JS transformers that do something similar.
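A sketch of that prefix check (the `validate_scoped` helper is hypothetical; it splits strings naively where a real validator would use a CSS parser, and it doesn't yet handle the #ID + x exception or nested at-rules):

```python
def validate_scoped(css, scope_id):
    """Check that every top-level selector starts with #scope_id.
    Naive split on '}' and ','; a real validator would parse the CSS
    and would also need to allow the #ID + x exception."""
    prefix = "#" + scope_id
    for block in css.split("}"):
        if "{" not in block:
            continue
        for sel in block.split("{", 1)[0].split(","):
            sel = sel.strip()
            if sel and not sel.startswith(prefix):
                return False
    return True
```

So `validate_scoped("#sidebar1 .x { color: red }", "sidebar1")` passes, while an unprefixed `.x { color: red }` fails.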

Async theme CSS

Some form of async theme-CSS declaration could be used to proxy and asynchronously download theme-only CSS. If it's available within the render-blocking interval, it could be applied quickly. Since we would proxy it, we could apply additional validation restrictions to such CSS; for instance, we could strictly require font-display: optional.

@Gregable (Member) commented Dec 18, 2019

I particularly like the scoped-CSS thoughts, though it's unclear to me whether they would be usable for use cases like WordPress. I don't think that's a reason not to pursue them.

Anecdotally, I recall hearing of implementations struggling with composing documents from smaller snippets of HTML. This has motivated interest in inline style elements and attributes. Scoped CSS could help with this separate issue too.

For composed documents, one issue is that the scope might be a repeated section on the page, for example image carousels where there might be 3 of them on the page. It would be interesting if we could define this in a way that allows for scopes that aren't just IDs, and provide tooling (toolbox optimizer, cache, etc.) for deduplicating the scoped CSS in that case. For example:

<amp-carousel class=my-carousel-style ...>
  <style scope-id=my-carousel-style>
    .my-carousel-style .x {...}
  </style>
  ...
</amp-carousel>
...
<amp-carousel class=my-carousel-style ...>
  <style scope-id=my-carousel-style>
    .my-carousel-style .x {...}
  </style>
  ...
</amp-carousel>

Tooling could detect and remove the second instance of <style scope-id=my-carousel-style>.

We'd probably want to still limit the size of the scoped CSS, but it could be a more generous limit.
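The deduplication step such tooling might perform, assuming the hypothetical scope-id attribute above, could look like this (regex-based sketch; real tooling would operate on a parsed DOM):

```python
import re

def dedupe_scoped_styles(html):
    """Drop repeated <style scope-id=...> blocks, keeping the first
    occurrence of each scope id. The scope-id attribute is the
    hypothetical mechanism discussed above, not an existing AMP feature."""
    seen = set()

    def keep_first(match):
        scope = match.group(1)
        if scope in seen:
            return ""
        seen.add(scope)
        return match.group(0)

    return re.sub(r'<style scope-id=([\w-]+)>.*?</style>',
                  keep_first, html, flags=re.DOTALL)
```

Given two identical `<style scope-id=my-carousel-style>` blocks, only the first survives.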

@westonruter (Member) commented Dec 18, 2019

I particularly like the scoped CSS thoughts, though it's unclear to me if they would be usable for use cases like wordpress. I don't think that's a reason not to pursue.

Correct. It would be difficult to leverage in a WordPress context with the existing CSS found in themes and plugins.

@cramforce (Member) commented Dec 18, 2019

I definitely like the scoped idea. It would especially help the Stories editor use case, which really needs a per-slide limit.

I wonder (this would need more analysis) if we could devise a scheme where we have a selector budget and selectors must match a minimum percentage of the DOM, such that adding virtual elements does not help. The analysis we'd need to do is to find out how legitimate CSS behaves on optimized pages.

@Gregable (Member) commented Dec 18, 2019

I don't know about matching a percentage of the DOM, but I think something like 50k bytes OR 500 selectors (maybe still with a higher byte limit) would be straightforward to implement and easy for developers to reason about. I don't know if this matches our model for where performance comes from, or if, as @kristoferbaxter has said, the number of packets before the content is really king.

@cramforce (Member) commented Dec 18, 2019

I'd like a model that was to some extent CSS limit ~ O(DOM size)

@Gregable (Member) commented Dec 18, 2019

CSS limit ~ O(DOM size)

Cons: might encourage intentionally bloating pages to match the bloated CSS. Makes it harder for a publisher to understand the consequence of adding one more CSS rule to a site-wide template, since it's unknown what fraction of pages it will break.

Pros: simple to implement and for a developer to understand. Supports application-like documents of arbitrary complexity.

@westonruter (Member) commented Jan 24, 2020

See also #26466 for I2I to increase limit from 50K to 75K. PR: #26475.
