[css-cascade-6] Strong vs weak scoping proximity #6790

Closed
chrishtr opened this issue Nov 2, 2021 · 43 comments
Comments

@chrishtr
Contributor

chrishtr commented Nov 2, 2021

Issue 8 here is about strong vs weak.

This issue is to track resolving on one of the two behaviors.

@fantasai
Collaborator

fantasai commented Nov 3, 2021

@chrishtr What question specifically did you want to put on the agenda? I think this isn't a question we should resolve one way or the other until we have plenty of feedback from the authoring perspective, and imho we should wait until cascade layers are usable so that the feedback on how scoping should work is in the context of already having layers.

@tabatkins
Member

The question is "which behavior should we settle on", so we can actually start an experimental implementation.

I think this isn't a question we should resolve one way or the other until we have plenty of feedback from the authoring perspective, and imho we should wait until cascade layers are usable so that the feedback on how scoping should work is in the context of already having layers.

That means delaying the entire scoping feature for a significant amount of time; half a year at minimum, probably 1-2 years realistically. I'm not sure what the connection is with layers, tho - both of the proposed placements for scope are underneath layers, either just above or just below specificity.


Aside from the procedural matters, i think the proposed "scoped descendant" combinator has an obvious answer - it has to be weak (less than specificity) or else it'll be incredibly confusing. The example shows off the canonical use-case - "dueling selectors" where you have mutually nested containers with similar children, and want the rule with the closest indicated container to win in each case - and if scoping was stronger than specificity, it would mean that using the combinator like this would accidentally also block the ability for any other code to override those properties, which is absolutely an unintended and unwanted side effect.
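
As a rough sketch of that pattern (the .media-object and .card class names here are just placeholders), two scopes style the same kind of element, and with weak proximity the closest scope root breaks the tie when the selectors are otherwise equal:

@scope (.media-object) {
  img { border-radius: 50%; }
}
@scope (.card) {
  img { border-radius: 4px; }
}
/* An img inside a .card that is nested in a .media-object gets the .card
   style: both img selectors have equal specificity, and the nearer scope
   root wins. Any more specific rule elsewhere can still override either. */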

And I think it would be confusing if the two scoping mechanisms used different scoping positions, unless there was a very good reason for it (which I don't think there is).


More generally, I still feel very strongly that the "weak scoping" placement (scoping is weaker than specificity, but stronger than order-of-appearance) is the correct place to insert it, for several reasons:

  • One of the core complaints that people have had with specificity thruout CSS's history has been specificity wars - you style an element, then need to override it in some circumstances which needs a more specific selector, then need to override that in some circumstances which requires an even more specific selector. If your circumstances fall into the "general tag behavior, then class-specific, then element-specific", specificity works for you; if it doesn't (if this is just levels of classes with different semantic importance in your codebase), specificity fights you. In particular, this means that you shouldn't reach for an ID selector at first, for instance, even if it would be the most convenient way to target a particular element, unless you purposely intend for these styles to be very strong and hard to override. This has resulted in multiple CSS management strategies all basically concluding "don't ever use ID selectors", because once you do you're locked into using them in every subsequent rule targeting that element.

    If scoping was above specificity, this would be all of that but worse. The scope would make the styles more powerful than any global rule. Just like how using a single ID selector to target an element means you have to always use an ID selector to target that element, using scoping to style an element means you'd have to use scoping to always style the element, and you don't even have the flexibility of using a dummy ID on body or writing an ID selector twice; you must target the same scoping element, or something closer. And worse - if it's the same scoping element, then you still have to specificity-fight; if it's closer, then you've just done the equivalent of using two ID selectors, and the next one-off override has to scope even tighter.

  • That's a holistic argument. A more specific one is - I can't think of any justification whatsoever why @scope (.foo) { .bar {...}} should win over #bar, when targeting the same element (see the sketch after this list). ID selectors are by their nature selecting one specific element to put styles on, which is why a single one automatically wins over any other selectors in any quantity. The additional constraint of a scoping element doesn't make a general selector (like .bar) more specific (in the general sense, not the CSS term-of-art sense) than a selector explicitly targeting one single element specifically (like #bar).

    But if scoping is weaker than IDs, it's either weaker than all specificity, or it's inserted into the middle of the specificity tuple. I think the latter has far more confusion potential, especially when considered in light of the specificity-altering pseudo-classes (which presumably would not be able to alter the scope of a selector!).

  • The potential use-cases for scoping cover a broad range of things, from "I want to write baseline styles, then have all context-specific styles automatically win without worrying about specificity on either side" (already covered, better, by layers), to "I want to write generic styles for this container, but have a particular instance work differently" (already covered by existing specificity), to "I have clashing styles, but want the styles that are 'closest' to the element to work, regardless of how things get nested" (the canonical scoping use-case). We should let scoping do what only it can do, and leave the other problems to the other solutions, rather than trying to interweave them; mixing solutions can easily produce author confusion.

  • The other part of scoping is the lower-boundary protection, to prevent your container styles from "leaking out" further. This is only tangentially connected to any notion of style strength within the container, so authors can easily want one without the other. If scoping is relatively strong, tho, then they must take it into account when considering whether to use this functionality for lower-boundaries as well; if it's relatively weak, then they can usually ignore it and consider the two halves separately. In particular, with the strength set to "just higher than order-of-appearance", they can pretty much completely ignore it if they want; since order-of-appearance is almost always just a meaningless tiebreaker rather than a semantic decision, aka random noise that they'll use specificity to fix if necessary, anything else operating at approximately that level is random noise as well if they're not paying attention to it, and the same fixes apply.

    If scoping operates at any higher point, then we should really disconnect it from lower-boundaries, to avoid confusion when people just want one of them. But that would be fairly sad, and imo itself somewhat confusing, since they operate in such similar semantic space.
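
To make the second bullet concrete, here is a sketch of that comparison (the .foo/.bar/#bar names come straight from it; the declarations are placeholders):

@scope (.foo) {
  .bar { color: red; }
}
#bar { color: blue; }
/* For an element matching both .bar and #bar inside a .foo: with weak
   proximity the ID rule wins, exactly as it would anywhere else in CSS;
   with strong proximity the scoped .bar rule would win despite its far
   lower specificity. */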

@mirisuzanne
Contributor

The potential overlap with layers is that strong proximity would require layers in order to override scope. Without layers, strong proximity is not a viable option.

I tend to agree with @tabatkins arguments here – in part because I don't like that required use of layers – but I also think it's reasonable to build a prototype, and experiment with that before forcing a resolution here. I don't know how long exactly we want to put off the final decision, but I'm happy to leave it as an open question for now, if it doesn't block prototyping some of the options, and experimenting with the feature behind a flag.

mirisuzanne added this to In progress in Cascade 6 (Scope) Nov 4, 2021
@chrishtr
Contributor Author

chrishtr commented Nov 4, 2021

I propose we tentatively recommend weak scoping proximity. The Chrome team intends to prototype this API pretty soon, and then would choose that implementation approach. Once the prototype is done we can ask developers to try it out and provide feedback on this question.

@css-meeting-bot
Member

The CSS Working Group just discussed [css-cascade-6] Strong vs weak scoping proximity, and agreed to the following:

  • RESOLVED: WG leans towards weak proximity at this time, and recommends this direction for prototyping to get more feedback
The full IRC log of that discussion
<fantasai> Topic: [css-cascade-6] Strong vs weak scoping proximity
<fantasai> TabAtkins: Right now, scoping spec cascade-6 is intentionally ambiguous on exactly where scoping sits in cascade
<fantasai> TabAtkins: offers two options: less strong than specificity (just stronger than order of appearance) and another that's stronger than specificity
<fantasai> TabAtkins: Given it's currently ambiguous, makes difficult to do test implementation
<fantasai> TabAtkins: Would like to do a test implementation, and prefer weak scoping
<fantasai> TabAtkins: I've argued in the thread about why the weaker scoping is the better way to go, going for strong would be a mistake imho and make things less usable for authors
<fantasai> TabAtkins: but for the moment, I think we should at least currently resolve to go with weak scoping
<fantasai> TabAtkins: and revisit later
<fantasai> TabAtkins: fantasai said in the issue that she believes related features need to have been released before we make a decision
<Rossen_> q
<fantasai> TabAtkins: that would delay scoping feature by a year or two
<fantasai> TabAtkins: and I don't think the input we'd get would be worth that level of delay
<Rossen_> ack fantasai
<drott> fantasai: no problem if chrome wants to go ahead and do prototyping of weak scoping
<drott> fantasai: exactly how it works will be fundamental to how css is used
<drott> fantasai: need to be diligent about figuring it out
<drott> fantasai: 6 months timeframe is reasonable for that
<TabAtkins> (I really, *strongly* think that going with "strong" scoping would be making a serious mess, but I argued that in volume in the thread already.)
<drott> fantasai: scoping feature is desired and useful
<Rossen_> +1 to fantasai point ^
<fremy> Is this discussed in any GitHub issue?
<drott> fantasai: more important to get it right the first time around - waiting 6 months to a year is reasonable for a feature of this proportion
<TabAtkins> fremy: Yes, in the github issue linked in the agenda and right here, a few lines up
<drott> fantasai: i suggest to resolve with sth like: "the WG is leaning towards weak proximity and thinks it's the right way for prototyping"
<fantasai> fantasai: but keep the issue open for discussion
<TabAtkins> (oh, it *hasn't* been linked)
<miriam> +1
<fantasai> github: https://github.com//issues/6790
<fantasai> RESOLVED: WG leans towards weak proximity at this time, and recommends this direction for prototyping to get more feedback
<fremy> It hadn't been linked indeed; this is why I was confused about it ^_^

@FremyCompany
Contributor

I would love to see an example of a situation where this matters. I think we are reasoning too much in the abstract; it would be nice to see what this looks like when authored, and what makes the most sense.

At first sight, I'm tempted to think that we probably shouldn't do anything about the scoping at all, and just assume declarations are in the source order and have a selector that is the concatenation of the selectors of the parent scopes and the selector of the current declaration. Is that the "weak" proposal?

@fantasai
Collaborator

@tabatkins
Member

@FremyCompany No, that's just Nesting, I suppose. Doing that would remove most of the point of Scoping in the first place, leaving us with just lower-boundary protection.

@fantasai's link indeed shows off some of the use-cases for having Scoping actually, you know, scope. ^_^

@FremyCompany
Contributor

Ok, lower boundary protection was actually what I thought I needed, but reading the examples, I can imagine it sometimes makes sense to have some form of scope-root proximity influence, for example:

@scope (.light-scheme) { a { color: darkmagenta; } }
@scope (.dark-scheme) { a { color: plum; } }
Even so, I think there are probably better ways to achieve that...

For example:

@scope (.light-theme) to (.dark-theme) { ... }
@scope (.dark-theme) to (.light-theme) { ... }

Or maybe even:

.light-scheme { --link-color: darkmagenta; }
.dark-scheme { --link-color: plum; }
a { color: var(--link-color); }

Given I agree with Tab that weak scoping proximity would be better than strong scoping proximity, I guess everything is good!

@DarkWiiPlayer

DarkWiiPlayer commented Feb 21, 2022

For example:

@scope (.light-theme) to (.dark-theme) { ... }
@scope (.dark-theme) to (.light-theme) { ... }

The problem I have with that approach is that it doesn't scale well. Say a project had around 20 themes and anybody could add a theme by making a pull request with a new .css file (an extreme example, I know); then every theme would need a list of all other themes in their lower boundaries and adding a new theme would require adding it to every other theme too.

One way around this that works only when the theme class appears at the end of the class list could be:

@scope (.light-theme) to ([class$="-theme"]) { ... }

but that feels like a hack at best. But maybe this is a problem that should be fixed on the selector side (maybe a class suffix/prefix selector?) instead of by the semantics of scoping.


Either way, I think strong scoping proximity would be very inconsistent with the way CSS currently works:

<style>
   .red p { color: red; }
   .blu p { color: blue; }
</style>
<div class="blu">
   <div class="red">
      <p>Blue text, despite proximity to <code>.red</code> class</p>
   </div>
</div>

It would also solve the same problem that lower scope boundaries already solve.

@mirisuzanne
Contributor

But maybe this is a problem that should be fixed on the selector side (maybe a class suffix/prefix selector?) instead of by the semantics of scoping.

I would likely do it by using a data-theme attribute, and then scoping ([data-theme=light]) to ([data-theme]).
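
A minimal sketch of that suggestion (the light/dark values and link colors are just for illustration):

@scope ([data-theme='light']) to ([data-theme]) {
  a { color: darkmagenta; }
}
@scope ([data-theme='dark']) to ([data-theme]) {
  a { color: plum; }
}
/* Because the lower boundary is the bare [data-theme] attribute, each
   theme's rules stop at the next nested theme root, whatever its value. */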


Chrome has a prototype for the @scope rules using 'weak proximity' behind the 'Experimental Web Platform features' flag (navigate to chrome://flags and search). In my experimentation so far, I agree with the majority of responses here that 'weak proximity' is the best way to go. At this point, it would be helpful if anyone arguing for 'strong proximity' could provide examples where it really is preferable.

In my mind, to be preferable, proximity has to be the important factor in comparing two unequal selectors. It's possible to invent cases where a weaker selector should override a more powerful one, and where they are also scoped - but that's often a problem to be solved either at the selector level, or with explicit layering. I haven't yet seen a case where scope proximity should be the deciding factor between otherwise properly-unequal selectors.

@DarkWiiPlayer

I'm excited to hear Chrome already has a prototype and to actually see it in action in my browser 😁

@keithjgrant
Contributor

keithjgrant commented Dec 12, 2022

After playing around with the Chrome prototype a bit, I'm beginning to agree that weak scoping proximity is the right call.

My initial gut reaction was the opposite: strong proximity seemed to more accurately reflect what I have historically wanted in component-based design. But I think intentional use of the inner scoping bound is a good way to reproduce the effect of strong scoping, while still leaving the author with the additional options provided by weak scoping. In other words, if the author wants strong scoping, they can achieve it by giving all scoped components an inner-bound container (or containers), thereby preventing outer scoped rules from ever selecting elements inside inner scopes.
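
A rough sketch of that opt-in, with invented component and slot class names:

@scope (.card) to (.card-slot > *) {
  p { color: inherit; }
}
/* Nothing declared in the .card scope can reach past .card-slot, so the
   outer component never styles whatever is nested there; that is roughly
   the effect strong proximity would have produced automatically. */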

In my experience using component based web applications, it's uncommon for an outer component/scope to need to apply general styles on all inner components, so applying an inner bound to the scope would likely be the norm — but I like that weak scoping doesn't lock me into that and I can code it a different way when needed. And paired with layers, there are some fascinating approaches that could be used here. Again, it feels like it provides the most flexibility to the author while still meeting the requirements.

@mirisuzanne
Contributor

mirisuzanne commented Jan 27, 2023

I know that @bramus has also looked into this some, and might be able to comment in more detail – but I think he also came down on the side of weak proximity?

It seems to me that there is a clear and growing consensus around weak scope, and there have been no counter-examples showing where a stronger proximity weight would be better. The abstract argument is that @layer could be used if strong proximity ever has unwanted effects, but the same is true in reverse. Layers will always provide the escape-hatch, whichever way we go. But so far we haven't even seen examples where the 'proximity' heuristic should override the 'specificity' heuristic in practice – only examples of un-intentional specificity conflicts that would need to be resolved either way.

Various people have explored this, and we've consistently come to the same conclusions. But I'm not sure how to prove we've 'done enough' here. Can we either get a counter-example to discuss, or resolve on the direction that has vastly more support, so that this feature can move forward? I don't believe this abstract concern should be holding up the spec, when all the signals are pointing the same direction.

@bramus
Contributor

bramus commented Feb 28, 2023

# I think the proposed "scoped descendant" combinator has an obvious answer - it has to be weak (less than specificity) or else it'll be incredibly confusing.

Yes. The story so far has always been that combinators don’t influence specificity, and I think it should be kept that way. The "scoped descendant" combinator should have no influence on it.

# Various people have explored this, and we've consistently come to the same conclusions. Can we either get a counter-example to discuss, or resolve on the direction that has vastly more support, so that this feature can move forward?

A reason might be #8500 (comment). I think the outcome of that issue and this issue are tied together.

@DarkWiiPlayer

So after trying to wrap my head around the interactions of specificity, layers and proximity, I think there might be a case for some mechanism that provides at least opt-in strong proximity.

Correct me if I'm analysing this wrong, but let's consider using scopes for theming. A straight-forward way of implementing this would be to have a @scope ([theme="dark"]) to ([theme]) rule.

As there's often subtle differences in how something looks in dark vs. light mode, it is reasonable to assume that both will have specific overrides with more specificity than the base-case, say when a specific element looks good with a generic box-shadow in light mode, but needs a more individualised shadow in dark mode.

In this basic example, the lower boundary is enough to prevent the more specific dark-mode shadow from messing with the element within a nested light theme, which only has a less specific "generic" style.

What worries me is that I don't think real world applications will always have a clearly defined lower boundary. In the example of themes, two CSS authors may (without knowing of each other) decide on different ways to scope their themes, where one uses [theme="dark"] and the other uses [data-theme="light"], or anything of that sort.

This could not be solved by lower boundaries, as neither author would know where to end their scope, nor could it be fixed by layers, as these themes could be nested again and again, and the innermost theme will be the one we want to take precedence.
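
A sketch of that mismatch, using the two attribute conventions from the paragraph above:

@scope ([theme="dark"]) to ([theme]) {
  a { color: plum; }
}
/* written independently by another author: */
@scope ([data-theme="light"]) to ([data-theme]) {
  a { color: darkmagenta; }
}
/* Each lower boundary only knows its own attribute, so neither scope ends
   where the other author's theme root begins, and nesting the two kinds of
   theme roots lets each theme's link color leak into the other. */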

Without having used scoping in real-world projects yet, I haven't come across any specific example where this sort of situation might naturally happen without an easy workaround, but I'd be surprised if these cases didn't end up happening a lot once scopes start to be used.


As for a solution, I'm not sure how I'd prefer to "opt in" to strong proximity. Adding an optional keyword to the scope @-rule seems like a last resort. My initial idea was, based on misremembering >>, that one could just prepend rules inside the @scope block with :scope >>, but after checking it seems like >> also only applies weak proximity.

Maybe an analogous operator that applies strong proximity could be a way to remedy this? The advantage of that would be, of course, that it could be added later on and thus wouldn't get in the way of an initial implementation of @scope, which I want to see as soon as possible.

Just to illustrate this with some pseudo-CSS

something like

@scope (.theme-light) { p { text-shadow: <generic> } }
@scope ([theme="dark"]) {
   p { text-shadow: <generic> }
   div.inset p { text-shadow: <tweaked> }
}

could turn into something like

@scope (.theme-light) { :scope !>> p { text-shadow: <generic> } }
@scope ([theme="dark"]) {
   :scope !>> p { text-shadow: <generic> }
   :scope !>> div.inset p { text-shadow: <tweaked> }
}

@keithjgrant
Contributor

What worries me is that I don't think real world applications will always have a clearly defined lower boundary. In the example of themes, two CSS authors may (without knowing of each other) decide on different ways to scope their themes, where one uses [theme="dark"] and the other uses [data-theme="light"], or anything of that sort.

I'm not sure I understand the scenario where this might occur. If my dev team was working on theming our site/app, we'd establish a plan and stick with it—I don't really expect the CSS spec to render that meeting of minds unnecessary.

Your scenario kind of sounds more like one where you might be pulling in dependencies created elsewhere, perhaps from an open source pattern library. In which case, I would expect the pattern library to clearly document how it deals with theming. The other approach would be web components & shadow DOM, and would thus provide very strong encapsulation anyway.

@mirisuzanne
Contributor

Yeah, the problem with examples like #6790 (comment) and #8500 (comment) is that they rely on an assumption that we know (in the abstract) what the author intended. As @bramus points out in his follow-up comment:

Second guessing myself: depends on how you look at the “problem”. Is the goal of the author to color the links in the header similarly, or is the goal to be able to drop a nav element/component just anywhere in the DOM and have it look the same? 🤔

In the mis-matched lower boundaries example, the intent is clear to a human eye - but not at all clear as a general rule. 'Strong' scope proximity may help solve the relation between those two light/dark scopes specifically, but there might be third or fourth scopes in play: such as components on the page. Are we certain that the 'closer' of @scope (nav) and @scope (.theme-light) should always win, regardless of specificity?
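
For instance (selector details invented for illustration), a theme scope and a component scope can both match the same link:

@scope (.theme-light) {
  a.external { color: darkmagenta; }
}
@scope (nav) {
  a { color: inherit; }
}
/* For an external link in a nav that sits inside .theme-light: strong
   proximity would let the nearer nav scope win even though its selector
   is weaker; weak proximity falls back to specificity, so the theme rule
   wins. */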

We just can't solve these problems for authors in the abstract, and we shouldn't try to. Which one should 'win' is often situational, and authors will need to manage those situations intentionally. We can't magically 'get it right' in every single situation, including authoring-mistakes. All we can do is provide the tools for authors to resolve those questions when they come up. We should think of specificity, proximity, and layers as tools that authors can use to balance the cascade as they need.

From that perspective:

  1. The argument for 'weak' scope proximity is that:
  • Authors already have explicit tools for managing specificity, while proximity is entirely determined by the DOM
  • The tool that authors have more control over is the tool that should take priority
  2. The counter for 'strong' scope would be:
  • Now that we have layers, which provide more explicit author-controlled layering in the CSS…
  • By placing scope between layers and specificity, authors would have CSS layering controls on both sides of proximity

What has leaned me towards option 1 is that authors are already very familiar with managing specificity - but I do think either could work. I just don't want to sit on the decision indefinitely. I don't believe we're going to get a lot more author input on a prototype than we have already.

@css-meeting-bot
Member

The CSS Working Group just discussed [css-cascade-6] Strong vs weak scoping proximity.

The full IRC log of that discussion
<fremy> miriam: one of the main features of scoping is that it applies proximity to the cascade
<fremy> miriam: the closer from the scope root the element is, the higher the "priority"
<fremy> miriam: but does it beat specificity or not?
<fremy> miriam: there are use cases for both
<fremy> miriam: authors have more control over the css
<fremy> miriam: so specificity is more controllable
<fremy> miriam: but proximity is sometimes more valuable
<fremy> miriam: but I think in the end, weak is more useful
<fremy> miriam: (specificity>proximity)
<fremy> miriam: strong proximity puts proximity between two things that authors can control
<fremy> miriam: which might be confusing
<fremy> miriam: I think either one could work, so open to thoughts
<astearns> ack fantasai
<fremy> fantasai: my first thought is that I don't want to resolve this without leaverou and jen simmons on the call to think about this
<lea> I can join just for this, if it would help
<fremy> fantasai: weak scoping might be very weak though, because we remove implied :scope specificity
<fremy> fantasai: and I'm not sure this is useful that way
<fremy> fantasai: putting it between layers and specificity is actually nice in my opinion, as they can control things more
<fremy> fantasai: tweaking specificity is more constraining
<TabAtkins> q+
<fremy> fantasai: people should describe what they want to match
<fremy> fantasai: not have to think about how the sum of things should be to make things work
<fremy> fantasai: which is why I would maybe prefer people use layers for this
<astearns> ack dbaron
<fremy> dbaron: as a disclaimer, I never liked specificity because it's magical but not magical enough for what people want
<fantasai> s/they can control things more/authors can control things more explicitly/
<lea> I have expressed preference for strong scoping before
<fremy> dbaron: but if I understood correctly "weak proximity" is so weak that it might almost never be a factor
<astearns> ack TabAtkins
<fantasai> +1 to dbaron, I have the same concern
<fremy> dbaron: and in that case, it might be extra confusing because people would not think about ita at all
<lea> weak proximity makes this far less useful
<fantasai> s/very weak though/very weak, though, even weaker than nesting/
<TabAtkins> https://github.com//issues/6790#issuecomment-959733391
<fremy> TabAtkins: interesting thoughts, I will think about it more
<fremy> TabAtkins: I still strongly think weak proximity is important
<dbaron> s/people would not think about ita at all/it would make a difference too rarely for people to think about it/
<fremy> TabAtkins: because if you just want to have selectors "leak further"
<fantasai> s/it might be extra confusing/weak proximity might be more confusing than no proximity at all/
<fremy> TabAtkins: and now, the added proximity would ruin that, you would have to move everything to layers
<fremy> TabAtkins: second argument, strong proximity would overrule everything
<fremy> TabAtkins: like #id outside the scope would lose to * inside the scope
<fremy> TabAtkins: that sounds very bad to me
<dbaron> (I have sometimes thought that specificity should have been processed in pieces, based on proximity.)
<fremy> TabAtkins: third argument, authors can use layers already
<fantasai> s/that sounds very bad to me/that sounds very bad to me. It might be reasonable when specificities are close, but not when they are far apart/
<fremy> TabAtkins: so, if authors need proximity to win, they can weaken their selectors and use layers
<fremy> TabAtkins: so, adding another complex mechanism here is maybe too much
<astearns> ack fantasai
<fremy> TabAtkins: I would not mind no proximity or weak, but strong is a different beast
<fremy> fantasai: if the main use case is to put a floor on things, can we incorporate this in the nesting syntax somehow?
<fremy> fantasai: and not have proximity come into play at all?
<fremy> fantasai: I'm concerned that selectors inside the scope have no specificity boost at all
<miriam> q+
<fremy> fantasai: so, it will not do much for authors when they want to actually use scoping
<TabAtkins> q+ to argue for weak prox vs no prox
<fremy> fantasai: sometimes people want descendant selectors to deal with proximity
<astearns> ack miriam
<fremy> fantasai: but I don't like that nesting would be stronger than scoping here
<fremy> miriam: I think I would not call lower boundaries the only use case
<fremy> miriam: I still think weak proximity comes into play at the right time, in my opinion
<fremy> miriam: it comes into play when you are targeting multiple overlapping scopes
<fremy> miriam: and when they conflict, you want to closest one to win
<fremy> miriam: so I disagree it is not relevant if it's weak
<astearns> ack TabAtkins
<Zakim> TabAtkins, you wanted to argue for weak prox vs no prox
<fremy> miriam: in my opinion, it becomes relevant exactly when you want it
<fremy> TabAtkins: I was gonna say exactly this
<fremy> TabAtkins: but there might be a few cases where proximity could matter more
<fremy> TabAtkins: but when you want to style a component in two themes, weak proximity works perfectly
<fremy> TabAtkins: specificity would be identical, but you want proximity to win
<fremy> TabAtkins: in other cases, it's not clear that any mix of specificity/proximity that might be right or wrong is useful
<astearns> ack fantasai
<fremy> TabAtkins: adding wrong mechanisms means people have to fight against it
<fremy> TabAtkins: I would rather them use the mechanisms we already have
<fremy> fantasai: in the example of oriol , why is the system we have now better than that?
<fremy> TabAtkins: that would be a substantially different world
<fremy> TabAtkins: I'm not sure this thought experiment informs us that much
<fantasai> s/in the example of oriol/suppose that the descendant selector was originally defined to use proximity (as many people expect until they learn better)/
<fremy> fantasai: the reason it makes a difference is that the relationship between scoping and nesting becomes quite different
<fremy> fantasai: because nesting could have scoping effect
<fremy> fantasai: and that would become the default way things relate to each other
<fremy> fantasai: we should think about whether the way it works now is actually better
<fremy> fantasai: maybe proximity could be made easier to use in a different way
<fremy> fantasai: nesting compounds
<fremy> fantasai: and weak proximity + compounded specificity makes sense
<fremy> fantasai: but weak proximity + ignored specificity seems odd to me
<fremy> TabAtkins: I disagree, for the exact same example you just proposed
<fremy> TabAtkins: because how you end up in a particular theme doesn't matter
<fremy> TabAtkins: there could be many ways to target the theme switch
<fremy> TabAtkins: but that doesn't matter
<fremy> TabAtkins: what matters is when was the last theme switch
<fremy> TabAtkins: and any change to this would harm the use case
<fremy> fantasai: I see that what you say here makes sense
<fremy> fantasai: but nested selectors will win because they are more specific
<fremy> fantasai: and I think that the interaction is not the way that you would expect that to work
<fremy> fantasai: I would expect it to have a weaker effect
<fremy> TabAtkins: that argument applies to anything that affects specificity
<fremy> TabAtkins: like, it would turn off .class or #id
<fremy> TabAtkins: which we assume usually has a meaning
<fremy> TabAtkins: and if they want to change that, they can use the existing mechanisms
<fremy> TabAtkins: adding something that would work in all cases could be a slam dunk
<fremy> TabAtkins: but a mechanism that works sometimes and breaks other times is not useful in my opinion
<fremy> astearns: let's go back to the issue for now, and try to come up with examples
<fremy> astearns: layers could help you get closer
<fremy> fantasai: not really
<fremy> TabAtkins: yeah, almost
<fremy> astearns: ok, at least it's not clear to me how layers come into play
<fremy> astearns: we should look into this more
<fremy> fantasai: so we have three things, layers, specificity, flooring effect of scope rules
<fremy> fantasai: right now we combine them differently between nesting and scope rules
<TabAtkins> issue here is that this has been a strong blocker on the feature for like, a year or more I think? and the only person maintaining the block with any feeling has been fantasai. this *must* be resolved before the spec can mature, and i'm not sure at this point we can fix this with more conversation.
<fremy> fantasai: and I think we want all three
<fremy> miriam: if we are taking this back to the issue
<fremy> miriam: one issue is that developers who used it
<fremy> miriam: they say that weak proximity has worked for them
<fremy> miriam: even when they had initially said they would prefer strong
<fremy> miriam: all examples I have seen arguing for strong proximity were a bit artificial
<fremy> miriam: so I'm not sure how we will get to a resolution if we keep accepting theoretical concerns
<fremy> astearns: leaverou and fantasai both have opinions here, and we should probably keep working on resolving this
<fremy> TabAtkins: every time we bring this up, this is the resolution
<fremy> TabAtkins: what do we do if that doesn't happen
<fremy> TabAtkins: that's like the third time we have this resolution
<fremy> TabAtkins: nesting cannot apply proximity because it does not establish a containment relationship
<fremy> TabAtkins: you could do siblings, you could reverse the order by putting the ampersand in :has, etc...
<fremy> fantasai: but the default is descendant
<fremy> fantasai: what if the default would add proximity?
<fremy> TabAtkins: that would surprise everyone
<bramus> Also wouldn’t change that. Seems like a huge can of worms to open.
<fantasai> fremy: Nesting is supposed to work like preprocessors do, we're using the same syntax
<fantasai> fremy: If we change behavior it will confuse everyone
<fremy> miriam: proximity also requires a specific element that you can refer to
<fremy> miriam: which isn't the case for nesting
<fremy> TabAtkins: so, what do we do if we can't convince fantasai and leaverou ?
<fremy> astearns: I don't think this will block the spec
<fremy> TabAtkins: we are saying the exact same things we were saying in 2021
<astearns> s/will/is a/
<fremy> TabAtkins: we cannot come to a conclusion
<fremy> TabAtkins: but we cannot ship this way
<fremy> astearns: yeah, sometimes consensus takes time
<fremy> TabAtkins: I would like to point the lack of progress
<fremy> TabAtkins: I would like this decided in the working group
<fremy> TabAtkins: but eventually people will want to ship, and we can't make progress
<fremy> miriam: it would be nice if leaverou and fantasai could give the counter examples
<fremy> miriam: I am yet to see a good example, and I cannot produce them myself
<fremy> astearns: ok, seems reasonable to ask
<fremy> ACTION: fantasai and leaverou to come up with examples where strong proximity is more useful
<fremy> astearns: if we don't have examples in a month-ish of time, we can see how to move forward in another way

@fantasai
Collaborator

fantasai commented Mar 1, 2023

I don't have a problem with introducing weak scoping into CSS. I think it's super useful and we should have it. But what I am uncomfortable with is:

  • Weak @scope ends up weaker than nesting in almost all cases, yet is a heavier syntactic construct.
  • Weak scoping is something that probably is more useful than descendant selectors in almost all cases, so if we're introducing it, it should be easy to use it everywhere instead of descendant selection.
  • We're tying the ability to floor a selector to applying weak scoping. But it's independently useful.

So I'm not convinced that what we're doing here with @scope is the best way forward.

@keithjgrant
Contributor

keithjgrant commented Mar 1, 2023

FWIW, I generally anticipate my approach with scope to be: first, setting up layers along the lines of @layer reset, base, components, layout, utilities;, and then on the components layer, everything will be a scoped component.

When the component has a "slot" to nest more components, I will make heavy use of donut scoping to prevent the styles from reaching into other components. This prevents one component from styling another unless I explicitly need it to. I've essentially opted-in to strong scoping behavior.
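
A rough sketch of that setup; the layer names come from the previous paragraph, the component and slot class names are invented:

@layer reset, base, components, layout, utilities;

@layer components {
  @scope (.accordion) to (.accordion-panel > *) {
    h3 { margin: 0; }
  }
}
/* Each component in the components layer is donut-scoped at its slot, so
   its rules stop where nested components begin. */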

At this point, it's rare that I write styles in two components that might target the same element. But when I do, I would want weak scoping: When an outer scope doesn't use donut scoping, and targets elements of an inner scope, specificity becomes my only means of controlling which selector takes precedence; with strong scoping, I have no way to give the outer scope precedence. If I try to define this control using nested layers instead, scope X styles will always override scope Y styles, and I can't make informed decisions on an element-by-element (or property-by-property) basis.

Maybe I'm missing something, but I'm not sure nesting makes much of a difference to me, as I would nest very similarly from component to component. And I can always use an id or :where() to manipulate specificity up and down regardless of nesting.

@DarkWiiPlayer

I'd go for something like @scope strong (upper-element) to (lower-element) { /* ... */ } but otherwise I agree that adding a mechanism for strong scoping later on would be the best way forward.

That way @scope can get shipped and people start playing around with it and if enough of a need for strong scoping appears down the line, it can just be added later on (and if all the use-cases can get solved otherwise, then that's fine too).

To be honest, I think that's been the general consensus for a while now. It might be time to just conclude that the default will be weak scoping and move on with getting this into the browsers.

@romainmenke
Member

romainmenke commented Mar 16, 2023

I don't think a mechanic to toggle weak vs. strong scoping is a good idea.
For the same reasons that !important often causes issues for authors.

Picking one and just sticking with it is better, in my opinion.


Weak proximity makes the most sense to me.

@DarkWiiPlayer

I don't think that's a good analogy at all.

Strong scoping proximity is much more similar to @layer, in that it allows authors to, well, layer styling rules on top of each other and give them control over what takes precedence.

Strong-proximity scopes would fill a very similar role, except they would allow categorically overriding rules only for certain parts of the DOM. The problem with !important is that it automatically promotes a rule to the very top where it overrides everything else, except other !important rules, so it essentially just creates a two-class system for styling rules, which is really hard to work with. It only really worked for the case of having some working framework code and adding specific overrides to a page, but not for stacking frameworks or just managing conflicting rules from different components (in the broad sense, not the components sense).

Adding opt-in strong proximity scopes would have the same effect as adding @layers, in that it could be stacked, so we could just say "Anything inside this thing should look like this, anything inside this other thing should look like that" on the CSS side, then just nest the two things in whichever order we want in the HTML.

This system would still be open to extension by simply adding new strong scopes that override styles locally. This could probably best be compared to shadow DOM in how it overrides styles locally, except it would be independent from the application logic and purely a styling thing.

But again, that could easily be something to add later, and weak proximity is a good default. Maybe opt-in strong scoping should get its own issue so the question of which is default can just be closed already.

@matthewp

I have a hard time answering this question because I'm having trouble understanding how tooling can leverage this proposal in general. For example, a component-oriented framework might want to use this feature instead of hashed class names to scope a component. So if you have a component like this:

<div class="one"><span class="inner"></span></div>

You might naively compile scoped CSS to:

@scope (.one) {
  .inner { color: darkmagenta; }
}

But then if you have a second component that uses the same one class name:

<div class="one"><input class="inner"></div>

Now you have mistakenly applied the first component's styles to this component. So you haven't gotten rid of the need for hashing. Am I misunderstanding something?

@romainmenke
Member

romainmenke commented Mar 16, 2023

Now you have mistakenly applied the first component's styles to this component. So you haven't gotten rid of the need for hashing. Am I misunderstanding something?

@scope is framework agnostic.
It isn't aware that your code has .one in two different files/components and that they should be interpreted as two different things. Doing so would make the inverse impossible: overriding scoped styles from a framework in your specific project.

This is not something that @scope ever intended to facilitate.
It is also unrelated to strong vs. weak scoping :)

Having some uniqueness is required to use @scope.
e.g. @scope (.component-a) { .inner {} } and @scope (.component-b) { .inner {} }

@matthewp

@romainmenke If this is the wrong thread to ask this question, where is the right one? I was asked to comment on this strong vs. weak question as a framework author but I do not know how to give feedback yet because I do not yet understand my above question.

@romainmenke
Member

romainmenke commented Mar 16, 2023

I didn't mean to imply that this isn't the right thread, it might be, but I don't have any opinions or say about that :)


Now you have mistakenly applied the first component's styles to this component. So you haven't gotten rid of the need for hashing.

That is correct.
@scope is not a drop in replacement for CSS Modules or similar tools.
These tools try to isolate styles by hashing every user defined ident, thereby making them unique within a single codebase.

@scope does this by defining boundaries, not by making everything unique.
They are different tools that work in different ways but both can be used to solve some overlapping issues.

A few examples

Tools/frameworks can automatically add @scope rules around CSS that was written for a specific component.

An author might write:

img {
  aspect-ratio: 1;
}
<img ...>

And the framework would convert to:

@scope (.component-34htjkhjg) {
  img {
    aspect-ratio: 1;
  }
}

And if the framework has a clear "hook" for template slots it could also generate lower boundaries.

img {
  aspect-ratio: 1;
}
<img ...>

<div class="template-slot-a">
  <!-- some framework mechanic for slots -->
</div>

And the framework would convert to:

@scope (.component-34htjkhjg) to (.template-slot-a > *) {
  img {
    aspect-ratio: 1;
  }
}

@DarkWiiPlayer

@matthewp @scope does not remove the need to somehow have a unique selector for your components and a generic selector for nested components.

The point is that once you can clearly select those two, you can guarantee that styles only apply within the outer scope but not within the inner scope. The second part is what's more significant here, as the former could more or less be achieved by prefixing all your style rules with the selector for your outer boundary.

@matthewp

Thank you @DarkWiiPlayer and @romainmenke for confirming my thoughts here. Given that, it's unclear to me what problems this is meant to solve. I think I understand how it works better, but not yet what problems it aims to solve. Is there a list of goals/non-goals somewhere?

In my mind, if you still need to add unique selectors (aka hashes) to elements, then why not add them to all elements in the component? That's how it's done today. Implementing this for 1 element is no easier than implementing it for N elements.

So maybe there's another use-case here that I'm missing? For example, the spec mentions overlapping scopes. That's a different use-case than the one I was thinking of. The nested dark/light modes. Is that the primary goal of this proposal?

@DarkWiiPlayer

I'm not a framework maintainer myself and generally want this more for the benefits of clearer hand-written CSS; but I imagine preventing styles from leaking into nested components would be the main benefit here. Unless there is already some easy mechanism to achieve this in auto-generated CSS that I'm not aware of.

As mentioned above, the ability to add lower boundaries is really the primary innovation in terms of what you can actually do, as opposed to upper boundaries which mostly overlap with simple CSS nesting.

@keithjgrant
Contributor

keithjgrant commented Mar 16, 2023

@matthewp I think the key here is @scope makes this possible now without the framework. I don't think it gives anything new to frameworks, but rather accomplishes something natively that developers have relied on frameworks/CSS-in-JS libs for.

If the author gives each component a unique name/classname, they can manage scoping with native CSS and no ugly BEM class name patterns. With appropriate scoping, developers can style .card .title differently than .callout .title and not worry about the two .title selectors conflicting.
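
A minimal sketch of that, assuming each component carries one of those class names:

@scope (.card) {
  .title { font-size: 1.25rem; }
}
@scope (.callout) {
  .title { font-size: 1rem; font-weight: bold; }
}
/* Each .title rule only applies inside its own scope; when the components
   nest, weak proximity lets the inner component's rule win the tie. */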

Controlling the lower scope boundary is an important part of this. I don't know that a framework can automagically manage that for the developer—but I think that's okay as it gives a lot of flexibility to developers as they will learn to use the feature.

@matthewp
Copy link

Thank you @DarkWiiPlayer and @keithjgrant; if this proposal is targeting hand-written CSS, then it starts to make more sense to me. @keithjgrant, with your analogy to BEM, this proposal gets rid of the need for the Element and Modifier, but not for the Block, if I'm understanding correctly. Which means you still might use tooling to produce a unique Block name, or you might use naming conventions to ensure there are no conflicting block names.

@DarkWiiPlayer

The proposal isn't targeting hand-written CSS exclusively, but that is one area where it will be useful.

As for the BEM situation, that's basically it, yes. You can axiomatically select elements with a "from to" rule without having to mark every child element with the corresponding parent-component, as long as you can formulate those two selectors properly.

Whether this results in reduced complexity of framework code will depend on how much extra logic you need for all those child elements, which would be an interesting question for us non-framework authors, as we can't judge that.

I imagine this could also be beneficial for the increased interoperability with non-framework code moving elements in and out of components, but that might not be a priority of most frameworks anyway.

As a framework user, I only remember one particular case, fiddling around with Svelte, where some lines of CSS that would only apply after some slightly hacky logic changed things around were just getting thrown out by the framework, presumably because it relied on prefixing all selectors and had no way of doing that correctly. That's sadly too long ago to give a more detailed description of my problem at the time, or to really tell whether I was just using the framework in a dumb way, but I do think that the simpler CSS that could be generated with @scope would have allowed that to work.

@kizu
Member

kizu commented Mar 16, 2023

I want to drop one of the common cases that I encountered for scoping — mixing user-generated content with custom components for editorial copy.

We could want to target any semantic elements coming from markdown or whatever like h1, p, li etc, but then we need to be able to stop applying those selectors when they're inside our components.

My first naive idea to do this (following how we approximate this in our design system) could be something like

@scope (.content) to (:scope [class]:not(.scope-element)) {
  h1 {}
  p {}
  li {}
}

(I'm using the :scope for the limit, as I'm not exactly sure if without it the scope would start and end on itself, due to the .content matching the [class]?)

This would apply the styles for any class-less elements, stopping the scope at any component that has a class, unless that component has a special .scope-element, in which case it would participate in the scope.

Scope would guarantee that we won't have any collisions between our regular elements and any semantic elements used for custom components.

For the strong vs weak — I don't have an opinion based on just theory, I would need to play with both variants, applying them to some practical code in order to understand the real consequences of this. On the first glance, I would say that having the weak as default would be safer, but it would be interesting to play around with the strong one as well.

@mirisuzanne
Contributor

@kizu Lower boundary 'scope end' selectors only match descendants of the :scope by default - so your example would be valid, but the :scope ancestor would not be required for the use-case.
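
So the earlier example could, for instance, drop the :scope prefix from the limit and behave the same way:

@scope (.content) to ([class]:not(.scope-element)) {
  h1 {}
  p {}
  li {}
}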

I've written up a bit of a FAQ explainer-addendum covering some of the questions that have come up repeatedly around the scope feature, including this one. Hopefully that's useful to people who haven't been 'in the weeds' for the entire process leading here. (I also updated the associated explainer to match the current state of the (also updated) Working Draft specification.)

@css-meeting-bot
Member

The CSS Working Group just discussed [css-cascade-6] Strong vs weak scoping proximity, and agreed to the following:

  • RESOLVED: cascade proximity is weaker than specificity
The full IRC log of that discussion
<emeyer> miriam: With @scope, there are two features that help avoid large scoped things overriding smaller scoped things
<emeyer> …One of those is pretty strong, which is the ability to set lower boundaries
<emeyer> …The other is somewhat weaker, which is a heuristic priority for inner over outer
<emeyer> …We’re calling that cascade proximity; question is whether that’s more or less powerful than the specificity heuristic
<emeyer> …Proposal is to have it be stronger than, because we’re trying to reduce specificity reliance
<emeyer> …When these two heuristics conflict, specificity is easier to change
<emeyer> s/stronger than/weaker than/
<emeyer> astearns: I suggested people provide arguments in favor of stronger, but all the comments in the issue argue for weaker
<emeyer> astearns: Comments?
<bramus> SGTM
<emeyer> RESOLVED: cascade proximity is weaker than specificity
