
[RFC] Format specification #1

Closed
kaelig opened this issue Jun 21, 2019 · 120 comments
Labels: Needs Feedback/Review

Comments

@kaelig (Member) commented Jun 21, 2019

Principles

  • For core properties (name, value, description…), a design token file must be both human-editable and human-readable
  • The format is simple, extensible, and as unopinionated as possible
  • Vendors (design system tools, design tools…) can store information for their own usage, both globally and for each token
  • The format translates well to existing design tools, in order to facilitate adoption

v1 priorities

  1. Agreement and adoption by several design tools
  2. Define core use-cases

Inspiration


Proposal

At the moment, this proposal doesn't advocate for a particular file format (JSON, TypeScript…); it merely discusses what its shape should look like.

interface TokenList {
  // Where tokens are stored (in an array)
  tokens: Token[];
  // is this useful? should it be optional or not?
  version?: string;

  // Optional metadata
  data?: Data;

  // What other global properties are needed? (type, category, group…)
}
interface Token {
  name: string;
  value: any;
  description?: string;
  data?: Data;
  // What other properties are needed? (type, category, group…)
}
interface Data {
  // Vendor-prefixed data
  // for example: `data: { vendor: { "@sketch": {} } }`
  vendor?: object;

  // Any number of additional properties can live here,
  // for example, for storing additional information related to the token
}

Example

{
  tokens: [
    {
      name: 'Foo',
      value: 'Bar'
    },
    {
      name: 'I am a token',
      value: 'This is a value',
      description: 'A nice description.',
      data: {
        myOwnFlag: true,
        oneMoreThing: 'yay',
        vendor: {
          '@sketch': {
            // ...
          },
          '@figma': {
            // ...
          }
        }
      }
    }
  ],
  data: {
    vendor: {
      '@sketch': {
        // ...
      },
      '@figma': {
        // ...
      }
    }
  }
}

@kaelig (Member, Author) commented Jun 21, 2019

Our mileage varies across projects / companies / tools, so I would like to hear thoughts and feedback on where this draft format works / doesn't work, with use cases showing where it breaks 🧐

⚠️ Please provide use cases and examples in your posts (where appropriate)

@danoc commented Jun 21, 2019

One interesting scenario that's come up in Thumbprint is that some of our tokens have different values depending on the platform.

For example, our spacing system on web versus native is:

Token Web iOS/Android
space1 4 4
space2 8 8
space3 16 16
space4 24 24
space5 32 32
space6 64 48
space7 128
space8 256

(Notice that space6 has a different value and space7 and space8 are web only.)

Here's what our JSON file for this looks like:
https://github.com/thumbtack/thumbprint/blob/73bedcf83fb01f3c8617aee30b6a14e4e9143c0c/packages/thumbprint-tokens/src/tokens/space.json

Not sure if this should be supported, but it's something to consider. 🤷‍♂

@kaelig (Member, Author) commented Jun 21, 2019

Thank you @danoc – do you know if any design tool supports this kind of theming/ platform-specific variants so far?

@dbanksdesign (Contributor) commented Jun 21, 2019

I think this is a great start! We might want to focus on the token interface to start, so we don't "boil the ocean".

@danoc brings up an interesting issue, where is the platform-specific information, and do we even include it in the core token interface? Theo and Style Dictionary keep that out of the tokens themselves by using transforms. Same with names, they could be different per platform as well. Maybe if this is a universal/interchange format each token could have any platform data it wants to provide. For example, in @danoc's example, the space8 token would only have 'web' platform data...

interface Token {
  name: string;
  value: any;
  description?: string;
  platforms: Platform[];
  data?: Data;
}

interface Platform {
  platform: string;  
  name: string;
  value: any;
}
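Under this shape, @danoc's spacing example might be encoded as follows. This is only a sketch: the interfaces mirror the proposal above, and the choice of which value counts as "canonical" is an assumption, not part of any proposal.

```typescript
// Hypothetical encoding of @danoc's Thumbprint spacing example.
interface Platform {
  platform: string;
  name: string;
  value: any;
}

interface Token {
  name: string;
  value: any;
  description?: string;
  platforms: Platform[];
}

const space6: Token = {
  name: "space6",
  value: 64, // arbitrary canonical value; platforms carry the real ones
  platforms: [
    { platform: "web", name: "space6", value: 64 },
    { platform: "ios", name: "space6", value: 48 },
    { platform: "android", name: "space6", value: 48 },
  ],
};

const space8: Token = {
  name: "space8",
  value: 256,
  platforms: [{ platform: "web", name: "space8", value: 256 }], // web-only
};
```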

@mathieudutour commented Jun 21, 2019

instead of platforms, I think a more generic term could be variant or theme. A variant could be a platform but it could also be a dark theme for colors, etc..
The value of a token could then be an object keyed with the variant names (if the value should change between variants ofc)

@kaelig (Member, Author) commented Jun 21, 2019

Those are definitely super valid use cases!

Theo achieves this with the concept of platform-specific and theming "overrides", or one could also do this by importing a different set of token aliases, specific to the platform or theme.


We might want to focus on the token interface to start, so we don't "boil the ocean".

I've added another principle to the draft above:

"The format translates well to existing design tools, in order to facilitate adoption".

This means we need to ask ourselves:

  • How much would design tools need to adapt to be able to work with this data?
  • Is there a way the schema could be made extensible enough to accommodate theming and platform-specific concepts (via the data field, or otherwise), without having them baked into the spec?

@mathieudutour commented Jun 21, 2019

An easy way to adopt the format without caring about the variants at first would be to specify which one is the default; an existing design tool then only has to care about that one.

@zackbrown (Member) commented Jun 21, 2019

I know this conversation focuses on the "shape of the data" rather than the implementation, but FWIW our take on platform-specific overrides in Diez is to use TypeScript decorators on top of property declarations. variant is a solid way to think of this too, @mathieudutour!

@override({
  ios: 32,
  android: 16,
  web: 32,
}) spacingTop = 16

Is it worth elevating "cross-platform" as one of the guiding principles? Seems like a pretty big fork in the road. IMHO any spec with a claim to a Design Token Standard should treat platform-specifications(/overrides/variants) as first-class, but can do so without "hard-coding" specific platforms.

@mathieudutour commented Jun 21, 2019

@zackbrown I was also thinking about having a default value and some overrides for variants, but with @dabbott we found that it might be safer to specify a value for all the variants, so that when you add a new variant, your tool can warn you about the places you need to look at (e.g. the tokens not specifying a value for the new variant)

Is it worth elevating "cross-platform" as one of the guiding principles? Seems like a pretty big fork in the road. IMHO any spec with a claim to a Design Token Standard should treat platform-specifications(/overrides/variants) as first-class, but can do so without "hard-coding" specific platforms.

I think it's necessary too. But I'd say "platform independent" instead of "cross-platform".

@zackbrown (Member) commented Jun 21, 2019

@mathieudutour to help me understand:

we found that it might be safer to specify a value for all the variants

could you sketch out (or point to) an example/pseudocode?

@mathieudutour commented Jun 22, 2019

{
  variants: [
    { name: "web" },
    { name: "ios"  }
  ],
  tokens: [
    { name: "token1", value: { web: 1, ios: 2 } },
    { name: "token2", value: 3 }
  ]
}

Let's say you want to add a new variant, android. Then whatever tool you are using to manage your tokens could warn you that token1 needs to be looked at because it's missing the android variant. That way you are sure that you won't have any surprises from tokens implicitly using values that they shouldn't
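That check is easy to sketch. The function below is a hypothetical helper (not part of the proposal): it treats non-object values as variant-independent constants and reports, per token, which declared variants have no value.

```typescript
// Sketch of the validation a token tool could run when a variant is added:
// list tokens whose variant-keyed values are missing any declared variant.
interface VariantToken {
  name: string;
  value: number | Record<string, number>;
}

function missingVariants(
  tokens: VariantToken[],
  variants: string[]
): Record<string, string[]> {
  const report: Record<string, string[]> = {};
  for (const token of tokens) {
    // Constant (non-object) values apply to every variant, so skip them.
    if (typeof token.value === "object") {
      const missing = variants.filter((v) => !(v in (token.value as object)));
      if (missing.length > 0) report[token.name] = missing;
    }
  }
  return report;
}

const tokens: VariantToken[] = [
  { name: "token1", value: { web: 1, ios: 2 } },
  { name: "token2", value: 3 },
];

// Adding an "android" variant flags token1 but not the constant token2.
missingVariants(tokens, ["web", "ios", "android"]);
// → { token1: ["android"] }
```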

@ventrebleu commented Jun 22, 2019

We definitely need some kind of category/grouping system; aliases could be useful too.
Something like:

{
    name: 'I am a token',
    value: 'This is a value',
    description: 'A nice description.',
    category: {
        'Color': {
            'value': 'Primary'
        }
    },
    alias: {
        'name': 'Also known as',
        'value': 'Peter Parker'
    },
    data: {
        myOwnFlag: true,
        oneMoreThing: 'yay',
        vendor: {
            '@sketch': {
                // ...
            },
            '@figma': {
                // ...
            }
        }
    }
}

@kaelig (Member, Author) commented Jun 22, 2019

cc @nikolasklein you worked on the styles panel in Figma – do you have thoughts on data structure for grouping/categories?

@mathieudutour commented Jun 22, 2019

IMO a group is just a token which has an array of tokens as value. To reference another token, you can then use an array where each item is the name of the token you need to go through:

{
  tokens: [
    { name: "token1", value: 1 },
    { name: "group", value: [
        { name: "nestedToken", value: 2 },
        { name: "nestedToken2", value: 3 }
    ]},
    { name: "refToken", value: ["group", "nestedToken2"] }
  ]
}
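A tool could resolve such a reference by walking the path of names down through nested groups. The resolvePath helper below is a hypothetical illustration of that lookup, not part of the proposal:

```typescript
// A reference token stores a path of names; resolving it means walking that
// path through nested groups. resolvePath is a hypothetical helper.
type TokenValue = number | Token[] | string[];

interface Token {
  name: string;
  value: TokenValue;
}

const tokens: Token[] = [
  { name: "token1", value: 1 },
  { name: "group", value: [
      { name: "nestedToken", value: 2 },
      { name: "nestedToken2", value: 3 },
  ] },
  { name: "refToken", value: ["group", "nestedToken2"] },
];

function resolvePath(list: Token[], path: string[]): TokenValue | undefined {
  const [head, ...rest] = path;
  const match = list.find((t) => t.name === head);
  if (!match) return undefined;
  if (rest.length === 0) return match.value;
  // Descend only if the value is a nested group of tokens.
  return Array.isArray(match.value)
    ? resolvePath(match.value as Token[], rest)
    : undefined;
}

// Resolving refToken is a two-step lookup: read its stored path, follow it.
const ref = resolvePath(tokens, ["refToken"]);
const resolved = Array.isArray(ref) ? resolvePath(tokens, ref as string[]) : ref;
// resolved === 3
```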

@danoc commented Jun 22, 2019

Thank you @danoc – do you know if any design tool supports this kind of theming/ platform-specific variants so far?

I do not! We rolled our own a while back:
https://github.com/thumbtack/thumbprint/tree/master/packages/thumbprint-tokens

@zackbrown (Member) commented Jun 22, 2019

@mathieudutour gotcha, I totally agree with this principle (re: variants; few messages above).

Implementation-wise (speaking from the perspective of having already implemented this!), Diez leans on the TypeScript compiler [and a subsequent static-analysis pass for our transpiler] as the tool providing a warning, e.g. that you tried to use a platform that wasn't defined.

It would be quite easy to warn also that e.g. "Your overrides are sparse! You might be forgetting Android definitions" without having to contort the data format. In other words, the data is either there or it isn't; IMO it should not be the purview of the data format to make it "extra easy" to perform static analysis, e.g. by duplication; the data format should instead strive to be minimal + ergonomic.

Automated tooling can read that data and determine "sparse overrides" or "missing variants" almost regardless of the shape of the data. So keep it human-centered! [I doubt we disagree on this point!]

@mathieudutour commented Jun 22, 2019

I kind of do disagree, actually, haha. While I do think it should be kept human-readable, I don't think we need to go out of our way to make it human-writable. I'm sure it will be pretty easy to build tools to edit tokens once we agree on a common format. So I'm not too worried about introducing duplication as long as it's still readable.

I'd even argue that specifying all the variants is more readable than a default value + overrides.

So my point was to make sure that nothing weird can happen with the data format and that there is only one way to write one thing. A token should either be a constant or depend on the variant. Otherwise you have multiple ways to write a constant: default value + all the overrides with that value, default value + 1 override with that value, etc.

@danoc commented Jun 22, 2019

I'd even argue that specifying all the variants is more readable than a default value + overrides.

Agreed. In my earlier example, which would be the default for space6? The web or native value?

@zackbrown (Member) commented Jun 22, 2019

@mathieudutour

I don't think we need to go out of our way to make it human-writable.

appears to be at odds with Principle 1 outlined by @kaelig :

For core properties (name, value, description…), a design token file must be both human-editable and human-readable

Granted, the joy of determining a standard is that of wrangling a variety of viewpoints & needs. And compromising! Maybe @kaelig and I are in the minority here in desiring hand-readability/writability?

Design tokens are a promising key to achieving that fabled "single source of truth" for a design language — maybe even, ultimately, to entire applications. To me, hand-editability is important because any "single source of truth" must be accessible to a maximum number of stakeholders [incl. developers and low-code designers.]

You can always build tooling on top of hand-editable code [see any IDE ever], but supporting hand-editability of machine-generated or 'machine-first' code is a very different beast. To me this all circles back to prioritizing ergonomics & minimalism in the data format.

@mathieudutour commented Jun 22, 2019

Of course it needs to be human-writable, and if it is readable, it will be writable. But introducing confusion (and multiple ways to write the same thing is definitely confusing) for the sake of convenience when writing the file by hand isn't something we should lean toward, IMO.

@jxnblk commented Jun 22, 2019

Hi! 👋 Co-author of the System UI Theme Specification here. Thanks for bringing everyone together in one place!

Based on the Twitter conversations, I originally thought that this sounded very similar to the efforts we're working on in the Theme Specification, but after seeing the examples in the initial comment, I suspect that there might be slightly different goals.

If I'm wrong, I'm happy to combine efforts into a single place, but either way I'd love to make sure the two efforts can work together and build on top of one another – or perhaps the two specs can live under the same roof.

For background on where we're coming from, some of our high-level goals, which I've written about here are:

  • Create a common naming convention that other OSS libraries can adopt to ensure greater interoperability
  • Be as unopinionated as possible, striving for the lowest-common denominator (most naming conventions are derived directly from CSS)
  • Be as flexible and extensible as possible
  • Use an object that is JSON-serializable and works as production-ready shippable code

Part of the reason the word theme is used here is that this should allow components to be written in a themeable way so that, for example, the same datepicker component can be installed and used at GitHub or Artsy, but match their own brand's "theme".

As far as adoption goes, the spec is built into a few OSS libraries, and there is some interest from the following projects:

  • Styled System, which is used internally by Artsy, Priceline, GitHub, and others.
  • Modulz also seems likely to be adopting part of the spec – cc @peduarte
  • Theme UI is planned to be used in official Gatsby themes and Docz
  • Tailwind CSS expressed interest – cc @adamwathan
  • Several people have reached out to Material UI, but they are limited by their release schedule
  • Smooth UI & xstyled cc @neoziro
  • DesignQL cc @johno

In my opinion, adoption among the open source community is really key to adoption in proprietary tools. That is, you have to provide something so good that it would be silly not to use, but businesses will rarely have interoperability as a primary goal.

As far as the goals set out above, here are my thoughts.

For core properties (name, value, description…), a design token file must be both human-editable and human-readable

Agreed. This is sort of a definition of code, i.e. a human-and-machine-readable language.
If humans can't edit design tokens, then I would say that you're describing something more akin to machine learning.

The format is simple, extensible, and as unopinionated as possible

This is also one of our principles, and I think it's key for any standard to achieve adoption, which is a very difficult thing to pull off.

Vendors (design system tools, design tools…) can store information for their own usage, both globally and for each token

This sounds equivalent to what I mean when I say it should be flexible and extensible. By creating a solid foundation, others should be able to build anything they need on top of that foundation.

The format translates well to existing design tools, in order to facilitate adoption

Any schema can be converted to another shape. I might be reading this the wrong way, but I would argue that accepting translation leads to fragmentation. We already have transformers for parsing data from the Figma API, or converting a theme-spec compliant object to other libraries like Tailwind CSS, but this creates friction and doesn't lead to the level of interoperability that I would like to see tools adopt.

So far, this sounds like we're fairly closely aligned with goals, even if they are slightly different. However, the code examples make me suspect that the aim here is more for a documentation format and less about a schema. If that's the case, the Theme Specification and this one could both be adopted and not be mutually-exclusive.

Given this example:

interface Token {
  name: string;
  value: any;
  description?: string;
  data?: Data;
}

The Theme Specification intentionally omits this level of schema definition and does not include documentation or metadata since that would not be desirable in production code and isn't required for interoperability. Things like code comments and type definitions are removed during compilation and more human readable documentation is generally stored in formats like markdown or MDX.
That said, you could absolutely use the schema above to store a Theme-Spec-compliant theme object that is used in an application.

I'll try to chime in again with more thoughts later, but we have an initial discussion around the Theme Specification in this issue if anyone in this thread would like to join the discussion.

I hope this is helpful, and hopefully there's a way that both efforts can work together and we can stop reinventing the wheel as often. ✌️

@dabbott commented Jun 22, 2019

👋🏻 Lona here! (@mathieudutour is also Lona). I'm excited to see this get off the ground!

A couple more meta thoughts:

  • I'd like to see a separate discussion around the principles. I want to make sure we have a clear agreement on the problem(s) before getting too deep in possible solutions. For example, principle #3 (vendors storing their own info) doesn't seem important to me, but maybe with more discussion I would understand it better.
  • If this group is self-organized and we all tag our friends it's possible we'll end up with a very un-diverse group... which IMO will lead to a solution that doesn't work for a lot of people/tools/systems. It might be worth thinking about who should/shouldn't be involved and to what degree.

Sorry I haven't contributed much so far, I have some other priorities this week and next, but hope to hop in more soon

@colmtuite commented Jun 22, 2019

👋🏻Modulz co-founder here.

Thanks for kicking this off, I'm very interested in seeing this discussion evolve.

We're a few months away from first-class theme/tokens support in Modulz. We've adopted Theme Specification for Radix (our own design system) and we're working towards adopting it for Modulz theming in general.

I'm very keen to see a standard for many aspects of design systems. I'm up for investing time and other resources into it, if people think that might be helpful.

@c1rrus (Member) commented Jun 24, 2019

Hi all. I'm the person behind Universal Design Tokens, which I started with essentially the same goal in mind: Defining a single file format for (source) design token data. So glad to see that others are tackling this too! Thanks @kaelig for kicking off this thread!

Reading through the thread so far, I have a few comments / thoughts:

  • There were a couple of comments about groups of tokens. I too feel this is something the format ought to support. I'm quite fond of Style Dictionary in that respect, since it lets you nest groups as much or as little as you like (although it does recommend their "Category / Type / Item" structure). I realise that having multiple token files (possibly arranged in folders) is another way of achieving this. But, in the spirit of being "unopinionated", I'd suggest we should strive to allow some structure within the files themselves.
  • The comments about different values for a certain OS are interesting. I wonder, if you go beyond just UIs, whether a similar mechanism could be useful when using design tokens in other media. Consider print - perhaps a color token could provide an explicit Pantone color in addition to an RGB value which would override the result of an automated conversion that tools would otherwise perform.
    • That being said, I also agree that this is perhaps not a priority for version 1. As others have said, let's not boil the ocean. This is something people could use the data field for initially. If the working group spots patterns - i.e. lots of teams and/or products trying to solve the same thing - then it can be incorporated into a future spec revision as a dedicated property of the token objects. (Akin to how the WHATWG try to "pave the cowpaths" in the HTML5 spec by analysing real-world usage patterns)

I also have some additional suggestions:

  • Spec versioning. All going well, whatever spec we come up with won't be the final chapter in design tokens, so it feels inevitable that there will be future revisions. Versioning the spec therefore becomes important to help keep track of changes and communicate them to the wider community. I propose we adopt the SemVer convention for versioning the spec as it's already broadly used, various tools and libraries have support for it and I think it lends itself well to a spec like this. Similar to an API, we will have fixes, new additions and, perhaps some day, breaking changes.
  • Version identifier in file. Related to the above, I think it's a good idea for token files to identify which version of the spec they conform to. The Lona colors spec appears to have something like this in the form of $version and my own UDT format intends to do the same via a $schema property.
    • Consider a tool that is able to understand version 1.2.3 of this spec. Then, later, we introduce a v2.0.0 spec that has breaking changes. So, a 2.0.0 file is no longer guaranteed to work in that tool. Having a version identifier would let the tool determine this incompatibility automatically and warn the user.
    • Similarly, another tool might have support for both 1.x and 2.x file formats, but may need to parse them differently, so having the version in the file lets them do that. It's just like DOCTYPEs in HTML and how browsers switch between "quirks mode" and "standards mode", depending on what DOCTYPE a page has.
  • Token types. Looking at tools like Theo and Style Dictionary, they need to have some notion of what type a token's value is in order to correctly convert or transform that value. Theo does this by having type properties on the values themselves. StyleDictionary instead leans on the "Category / Type / Item" structure to determine the type of a token (and if you choose to not follow that structure, then the onus is on your own config to correctly assign types to your tokens).
    • I prefer something akin to Theo's approach for this. Having a type attribute directly on a token is more explicit and, perhaps, more human-readable too.
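The version-identifier idea can be sketched in a few lines. The helper below is hypothetical (the function name and the majors-only compatibility policy are assumptions); it shows how a tool could use an in-file SemVer string to refuse files it can't safely parse:

```typescript
// Sketch: a tool checks a token file's declared spec version (SemVer) for
// compatibility before parsing. Only the major version gates compatibility
// here, mirroring SemVer's breaking-change semantics.
function isSupportedVersion(
  fileVersion: string,
  supportedMajors: number[]
): boolean {
  const major = Number.parseInt(fileVersion.split(".")[0], 10);
  return supportedMajors.includes(major);
}

isSupportedVersion("1.2.3", [1]); // → true: a 1.x-aware tool can parse it
isSupportedVersion("2.0.0", [1]); // → false: warn the user instead of guessing
```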

@c1rrus (Member) commented Jun 24, 2019

I also wonder whether there should be an additional v1 priority:

  • Create a conformance test suite to enable tool vendors to verify their tools output design token data correctly

That might be a validator, some kind of coded test suite or a combination of those. But I feel it's worth having something like that as soon as possible.

I believe the main motivation for an effort like this is better interoperability. I want to one day be able to save out colors from PhotoShop, put them into ColorBox to generate tints and shades, then run them through a contrast checker to find accessible pairings, and then save the results into my design system's source code without ever needing to convert between file formats.

So, the more tools we can provide to let developers ensure their software parses or writes our universal format correctly, the less risk of bugs and incompatibilities creeping in and - ultimately - fragmenting the eco-system and reducing the benefits.

@maraisr commented Jun 24, 2019

Hello 👋🏻 Marais here! Human behind Overdrive. Just here dropping my two cents:

  • This file is going to get massive, especially if you have a 1-9 colour scale per colour. So to battle that, would it be good to build fragments like a colours.json and sizes.json, and combine them together with a CLI or something?
  • For things like elevations, that visually look different when applied to a background. I.e. on a whitish background, drop shadows look more prevalent, whereas applied to a deep red you'd darken everything more so it "looks more there". So would that mean a new token "red-elevation-1", or could we give tokens context? So we could have things like $COLOUR_RED_900__ELEVATION_1 - or is this more or less what @ventrebleu was talking about, with aliases?
{
	tokens: [
		{
			name: 'elevation-1',
			value: '0 1px 10px 0 rgba(0, 0, 0, 0.03)',
			'@context': [
				{
					'colour-red-900': '0 1px 10px 0 rgba(0, 0, 0, 0.09)'
				}
			]
		},
		{
			name: 'colour-red-900',
			value: '#780502'
		}
	]
}
  • It would also be nice to allow micro-tokens, like component-specific tokens. Like within a checkbox component you might want to tokenize the size of the checkbox's tick-box, but not necessarily have that token hoisted to the design system. Just thinking of scenarios where you want inputs' heights to match buttons'. So the decision is: "do we make a token for button heights" that our "inputs use", or a "token for our inputs' height" that our "buttons use", "or 2 tokens with a comment for future me to keep in sync".
  • Bit of a trivial one. There's a name and a value in each token item, then a vendor override. Is this to say "our primary product is web", so the values are for web targets, with overrides for Figma and, say, Sketch? Or should we merely have name, and the value would live in the @scss vendor, or the @web .css vendor?

Thanks for all the efforts though guys!

@Kilian (Contributor) commented Jun 25, 2019

There's a tension here between us naturally wanting an all-encompassing spec but also not wanting to 'boil the ocean'. I would suggest we start very small and pave the cowpaths before coming up with new things. The cowpaths here being: existing implementations in design tools, and examples used in more than 1 (or any other n) publicly available design system.

Paving cowpaths

If we zoom in on colors and the way they're used now:

  • Sketch: Colors are unnamed and ordered
  • Adobe XD: Colors are named and ordered
  • Figma: Colors are unnamed and unordered(?)
  • Many design systems: Colors are named, ordered and grouped/shown in a range

There's a discrepancy between how people want (others) to use colors (they want certain colors to be used only for backgrounds, and they want people to choose between named colors like blue-400 and blue-500) and what design tools offer, which is mostly "a list of colors".

Keeping things small

Variants, platform-dependent tokens and metadata are all important, but what is the least this format should do to be useful? For example, these can be solved by having different token files for different variants. Vendors could choose how to interpret tokens without that having to be in the spec (if we put that in a spec and they ignore it in favor of their own translation of the true value, then what good is the spec?)

What do we want a token to be?

If we keep most of these things out, then focussing on what an actual token should encompass has us ask questions like:

  • Is a token a single value, or can it also be a range, or multiple values? (like in system-ui),
  • Following from that, are all tokens named?
  • Should we enumerate the data type? (string, int, color, json)
  • What about usage type (like for colors: "text" vs "background")
  • What happens with related token values? (Like a color range: each color is an individual token, but they live in a range. Additionally, a "heading" token might be comprised of a color, a font-family, a font-size, a font-weight, a font-variant, a line-height and letter-spacing. I like UDT's id references here, but how would that look for combinatory tokens?)
  • Most systems out there have a relatively similar grouping: colors, typography, spacing, border radius, shadow/elevation. Does it make sense to bring this along, or should tokens be ungrouped and solely depend on data type or usage type? This would be unopinionated but could also hamper adoption.

Still, some of these questions already make things bigger rather than smaller (the nature of exploration).
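One hypothetical shape for that "combinatory" heading token: a composite value in which some sub-values are literals and others reference other tokens by name. All names and the { ref: … } convention here are invented for illustration, not part of any spec:

```typescript
// Hypothetical composite token: literal sub-values mixed with references
// to other tokens by name.
const colorTextPrimary = { name: "color-text-primary", value: "#212121" };

const heading = {
  name: "heading-1",
  value: {
    color: { ref: "color-text-primary" }, // reference to another token
    fontFamily: "system-ui",              // literal, token-local value
    fontSize: 32,
    lineHeight: 1.25,
  },
};
```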

@mkeftz commented Jun 25, 2019

Hello, 👋 Co-founder of Interplay here.
Thank you for starting this @kaelig 🙏

After reviewing the spec and comments above, here are some thoughts...

1. Overlap with System UI Theme Specification
Firstly I think it should be agreed on if/how this lives with System UI Theme Specification. It is a great spec and as @jxnblk mentions above, has some very similar goals to this. At Interplay we already support System UI structure for tokens. However, from a design tool perspective, it is intentionally missing some of the extra metadata around tokens we need. This looks like it will fill that gap. Possibly aligning the specs so design-tokens can compile to System UI?

2. Theming/Platform specific values
This is a tricky one. I feel having multiple variant values in a token would get overly complex, especially when you consider a matrix of values. e.g. what's a token value for mobile and dark mode?
As suggested previously this will bloat the file size and make the file less human readable/writable.

Another option would be to leave this above the spec for now. i.e. The spec only deals with the values for a specific theme/platform. You can have complete token-sets (with the exact same structure) for each theme/platform/whatever. Obviously, in implementation, they could inherit from a base set.
@kaelig - is this like the theming "overrides" in Theo?

3. Token Structure
One of the biggest challenges from the design tool perspective is the structure/categorization of the tokens. We need to be able to import tokens from code and understand what they can be used for and how they should be grouped. I like Style Dictionaries "Category/Type/Item" structure but think it could be too opinionated for the spec. Possibly specific meta-data fields on the tokens could work? e.g. "type" and "category"?

4. Cross-referencing
Not mentioned yet, but I think the ability to cross-reference tokens is important. This allows for "option" and "decision" tokens. Style Dictionary does an awesome job of this.

Other than that, I love the idea of having a spec, and happy to help however we can.

@kaelig (Member, Author) commented Jun 26, 2019

A lot of the (super interesting ❤️) comments mention theming. After talking with @dbanksdesign earlier today, we think it'd make sense for theming to get its own task force. I'd like to start the conversation by laying out the basics of what a simple, low level theming spec could look like and ask y'all to comment on this topic over here: #2

@ilikescience

ilikescience commented Jul 20, 2020

👋 I'm Matt, a designer and developer - I've been exploring how design tokens can be stored, transformed, and accessed in ways that maximize their utility. You can read that work here.

It might be useful to separate some of the individual aspects being discussed (typing, uniqueness, aliasing, order-dependency) from specific language specs and interpreters (CSS, JSON, SASS) ... it'll help clarify some of the discussion and avoid wrestling with the complexity of the existing implementations.

I'll try and kick off some of those topics — though, as we get into some of the more "pure" concepts, my understanding gets a little fuzzier, so please correct me if I'm using these terms incorrectly.

High-level structure

I personally think of a design token as a key-value pair, meaning it consists of two parts: one part is used for reference (looking up the token, talking about the token), and the other is used for application (indicating the color to be used, the font family, the border radius).

Is that the mental model that y'all use, too? Might be nice to just put a checkmark in this box :)

Typing

Reading through all the great conversations happening here, it's clear that typing is very important. Not only is it a key to a human-readable and human-writable format, but it's also going to have a big impact on the machines/code that read and write the tokens.

The main question around typing: Should tokens be strongly typed, weakly typed, or not typed at all?

Strongly typed: this might involve defining types as part of the spec. A token is only properly-formed if its types are declared and validated at compile time. This makes tokens a little harder to write, but has benefits for performance in the programs that utilize them.

Weakly typed: this puts the burden of type-checking on the interpreter. Tokens are easier to write, but applications have to do some extra work to check types before utilizing the tokens.

Not typed: this is some deep theory stuff that I don't understand very well.
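To make the strong/weak distinction a bit more concrete, here's a rough TypeScript sketch. The type names and the hex/px patterns are my own illustration, not part of any proposal:

```typescript
// Strongly typed: the format itself defines a closed set of token types,
// and a token is only well-formed if it declares one of them. Here the
// TypeScript compiler plays the role of the validating tool.
type Token =
  | { name: string; type: "color"; value: string }
  | { name: string; type: "dimension"; value: number; unit: "px" | "rem" };

const primary: Token = { name: "color-primary", type: "color", value: "#ff0000" };

// Weakly typed: values arrive untyped, and each interpreter has to
// check them at read time before using them.
function isColor(value: unknown): boolean {
  return typeof value === "string" && /^#[0-9a-fA-F]{6}$/.test(value);
}
```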

Uniqueness

  • Should it be safe to assume that a given token is defined once and only once?
  • If no (i.e., a token might be defined more than once), should it be safe to assume that the two values are the same?

Some analogies here:

In JS, I can't define a const more than once. ECMAScript defined this rule to help interpreters be a bit more efficient.

In CSS, I can define a rule (like .token {}) over and over again.

What are use cases for defining a token more than once? What kinds of complications would that introduce to the humans that write and maintain the code, and the machines that have to correctly interpret these definitions?

Aliasing

I've found quite a few use cases for aliasing in writing tokens or using tokens — sometimes it's a lot more convenient to think about the button-background token than the purple-50 token.

However, there are some tradeoffs that come with writing aliases into the spec. For instance, what do we do about circular references?
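One way an interpreter could handle this is to track the chain of names as it resolves aliases and fail on a repeat. A minimal sketch, where the `{name}` alias syntax and the `resolve` function are placeholders of mine, not a proposal:

```typescript
// Resolve a token value, following aliases written as "{other.token}"
// and throwing when the chain revisits a name (a circular reference).
function resolve(tokens: Record<string, string>, name: string, seen: string[] = []): string {
  if (seen.includes(name)) {
    throw new Error(`Circular reference: ${[...seen, name].join(" -> ")}`);
  }
  const value = tokens[name];
  const match = value?.match(/^\{(.+)\}$/);
  return match ? resolve(tokens, match[1], [...seen, name]) : value;
}

const tokens = {
  "purple-50": "#6B3FA0",
  "button-background": "{purple-50}",
};
```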

Order dependency

Some current implementations of design tokens produce order-independent token files — Theo and Style Dictionary both work with JSON and YAML, which are essentially associative arrays.

Earlier in the conversation, folks have mentioned some operations and use cases that might be order-dependent, like overrides and functions.


I think that there's a ton of experience we can draw on from the history of other specs and how they were adopted over time — but ultimately it's the answers to these very core questions that will inform the shape and scope of the format specification.

@jonathantneal

jonathantneal commented Jul 20, 2020

That’s a fantastic write-up, @ilikescience. I also enjoyed your post about your design API. I found the GraphQL interface quite straightforward. In that post I spotted the category relationship field, where I presume you’ve gone down this road quite a bit farther. That kind of experience will be super helpful.

Typing

I can see how typing might be helpful. I would be interested in seeing this explored more. In a CSS-like markup, I could imagine at-rules and functions accomplishing this. Tho, I like that you are focused on the should we before the how do we. It is harder for me to separate them. 🤷

Uniqueness

Regarding const and the prompt “What are use cases for defining a token more than once?”:

I would not be in favor of const for style rules, i.e. selectors paired with a block of styles. Correlating const with rules seems problematic to me, as CSS rules are better compared with JS class, Class.prototype prototypes, and {} objects.

To the point of the prompt; if I interpret a style rule like token { color: black } as const token = { color: 'black' }, then I’m unsure what an override to token looks like in CSS when I want to assign a background color to the token definition. In JS this might look like token['background-color'] = white, while in CSS it would look like another rule — token { background-color: white }.

If const meant the selector could only be used once, then the interpretation would more closely compare with const body = Object.freeze({ color: 'black' }), which seems more restrictive than what is possible in either CSS or JS. 🤔

Aliasing

I think aliasing is a must-have, tho most-crucial for design systems over time. Aliasing can happen in the start, as in primitive tokens like you described, and also in higher order components. Like, sometimes things are referred to by how they look; sometimes things are referred to by how they function; and sometimes things are referred to by how they relate to other things. And on and on. Things get referred to multiple ways within a design system, usually based on history or context. Aliasing can seem antithetical to order and consistency, but consistency gets super hard the more components get created and the more time passes.

Most design systems that I have seen up-close start out as a state of the union, often combining some new desires of the designers or developers with a larger set of existing patterns in production. The new and old are best paired with aliasing. Then, in the course of putting the design system together, the team makes choices over how to group color tokens along some axis of intensity, or how to group spacing by some multiple. And then later in the course of using the design system, exceptions are made and whole concepts change or get reimagined, and new patterns emerge. The new and old are best transitioned with aliasing.

Aliasing for all the things!

Order dependency

I probably have more to learn here. I would expect it to resolve a dependency tree like JS imports; e.g. Two JS files can both import from the same third JS file.

@c1rrus
Member

c1rrus commented Jul 20, 2020

Wow. Lots of great activity going on here! It's sparked a ton of questions in my head...

@oscarotero I really like your CSS-like syntax proposals! I'd definitely support exploring that direction more.


@mirisuzanne I'm not sure I've fully understood what you meant by "custom groupings". Are you thinking of allowing people to define their own, custom group types for token values? I.e. something similar to interfaces found in some programming languages (e.g. TypeScript)? Or did you have something else in mind?


@ManeeshChiba How would you expect tools to interpret your proposed "molecule" tokens? Using your example:

typography {
    main {
        font-family: $font-families.display;
        font-size: $sizes.medium;
        color: $colors.primary;
    }
}

Is the nesting...

  • a short-hand for declaring some tokens whose names share common prefixes? I.e. it's equivalent to something like typography.main.font-family: $font-families.display; typography.main.font-size: $sizes.medium; ...
  • or, is this like a bunch of CSS declarations - i.e. you're assigning the value of the $font-families.display token to the font-family property. If so, what is typography main selecting?
  • something else entirely ;-)

@jonathantneal Great points about the different ways we might re-use or build upon CSS syntax. I think you've touched on something interesting there: What are the concepts we might want to inherit or copy from CSS?

Most design token tools to date treat them as key-value pairs. Taking Style Dictionary as an example, you might define a single design token in a JSON input file like so:

{
    "my-token": "#f00"
}

...and it can translate that in a range of different output formats. E.g. the SASS output might be: $my-token: rgba(255,0,0,1);, the JavaScript output might be: export const myToken = '#ff0000';, and so on.

My (possibly incorrect) interpretation of @oscarotero's original proposal was an alternative, CSS-inspired syntax for expressing same kind of design token input data. So, for instance, a future Style-dictionary-like tool might read in such files instead of JSON.

However, what's missing is the equivalent of CSS's selectors and properties. There's nothing that specifies what parts of a UI certain token values should be applied to. (That's assumed to be the job of a designer and/or developer who consumes the (potentially translated) tokens into their project)

It's a totally valid question to debate whether a design token format should let you assign tokens to properties and thus essentially define the visual appearance of a UI, but so far they mostly don't (though Diez is beginning to blur those boundaries!).

If I understood your option 2 correctly, where "we build upon the whole suite of CSS specifications", we'd be making a superset of CSS. I suppose we'd get all of CSS's properties, selectors and other goodness "for free" in that case. But, assuming we'd want to empower people to write tools that knew how to translate that into something meaningful for completely different platforms (e.g. output native Swift UI code for iOS), I'd expect there to be some substantial challenges. Does native UI code for, say, iOS have an equivalent to the DOM in a web browser, and if not, how would we interpret a CSS selector? Also, would we not be reinventing the likes of SASS and LESS?

I suspect I misunderstood the options in your comment though and you meant something else?


@ilikescience Thanks for that excellent write-up! The more I think about it, the more I feel we need to capture and name the concepts we're all talking about. Then we can begin to decide which ones are in-scope for a design token format (at least for v1) and which are not.

As you already know, there's already been some discussion amongst the editors about how it might make sense to begin by defining the "model" or "mechanisms" of design token data first and then worry about what syntax(es) it can be serialised to / de-serialised from.

I feel like your comment is the first step towards being able to define such a model. :-)

@Blind3y3Design
Member

Blind3y3Design commented Jul 21, 2020

👋 Hello everyone. Just getting some thoughts out here for the group to mull over.

High-Level Structure

I feel as though adopting or starting from the CSS syntax, and looking into using rules and properties associated with CSS, is going to cause a sort of "lock-in" and reduce the "platform/tool/language agnostic" ideal of design tokens.

CSS by its own nature is a web technology. It does not need to serve native applications, and as such does not have considerations for how different native platforms handle their data or declarations.

A language like JSON/YAML/XML, on the other hand, is almost purely data storage and key/value pair associations. Using these languages as a point of origin would result in a less restrictive structure and allow for a simpler association between a token's name and its value.

Using something like YAML you can still get nested groupings/associations:

font:
    size:
        small: 8px
        medium: 16px
        large: 32px
    family:
        display: fancy-font
        body: normal-font

This could result in tokens like font.size.small and font.family.body.
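For what it's worth, deriving those dot-delimited names from the nested structure is only a few lines of code. A sketch, assuming leaf values are plain strings; `flatten` is a hypothetical helper, not part of any spec:

```typescript
// Flatten a nested token group into dot-delimited names,
// e.g. { font: { size: { small: "8px" } } } -> { "font.size.small": "8px" }
function flatten(group: object, prefix = ""): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(group)) {
    const name = prefix ? `${prefix}.${key}` : key;
    if (typeof value === "object" && value !== null) {
      Object.assign(out, flatten(value, name));
    } else {
      out[name] = value;
    }
  }
  return out;
}

const font = {
  size: { small: "8px", medium: "16px", large: "32px" },
  family: { display: "fancy-font", body: "normal-font" },
};
```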

One other consideration worth mentioning: unless the CSS Working Group updates the CSS spec, and browsers then adopt it in a timely manner, whatever format we choose will have to be compiled into native CSS (same logic for your pre-processor of choice) anyway.

If we're going to have to compile from "token syntax" to "platform syntax" (this could be swift, css, js, etc) I think it makes the most sense to try and make the "token syntax" simple/easy enough to allow for quick and efficient compilation to any/all of the other "platform syntax" languages.

Uniqueness

In my experience we typically only define our tokens once, however I could see a world where at a larger org, or for something like a "bootstrap of design tokens", a team may want to take a "master" tokens file and supplement or modify it with new definitions for existing tokens.

Example:
"Master" token file defines a color palette using basic terms like primary and secondary

color:
    primary: "#7F0000"
    secondary: "#320000"

The tokens fit most of our needs, but my team wants to change the colors. Rather than create new tokens, it would be nice if we could simply override the existing token definitions.

color:
    primary: "#6666FF"
    secondary: "#33337F"

In this example the format would need to function similarly to var, let, or the CSS cascade. If I redefine something further down in the file, or in an import after the original, it would need to update the value of the key.

@import tokens-master
@import custom-token-definitions

color: $color.primary;

This would be expected to output to

color: #6666FF;
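In other words, later definitions shadow earlier ones, exactly like a plain object merge. A sketch of that last-wins behaviour (token names flattened to dot notation for brevity; nothing here is spec syntax):

```typescript
// "Master" tokens, then a team's custom overrides, imported later.
const masterTokens = { "color.primary": "#7F0000", "color.secondary": "#320000" };
const customTokens = { "color.primary": "#6666FF", "color.secondary": "#33337F" };

// Merging in import order makes the later definition win:
const resolved = { ...masterTokens, ...customTokens };
```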

Aliasing

I would agree with this being a must-have. If only for the ability to define specific component/state/relational values based on an existing token's value.

The biggest use case for our team is branding and state values. All of our system's colors have generic names; they are then aliased to product-specific values based on that product's branding. The same logic applies to states. It makes more sense to define color.light, color.default, and color.dark once, and then alias states to those values rather than have a new definition for each component's hover/active/disabled/focus state.

@ManeeshChiba

ManeeshChiba commented Jul 21, 2020

Thanks for the thoughtful question @c1rrus

I may be incorrect about this, but I am thinking about Design Tokens as a way to codify design decisions, so I view the "selectors" as keys rather than CSS selectors per se. In my mind, the nesting implies a relationship.

For instance in my example:

typography {
    main {
        font-family: $font-families.display;
        font-size: $sizes.medium;
        color: $colors.primary;
    }
}

There is a token named main which defines a set of design decisions about how text should be rendered. This token is grouped into a super set of typography.

So if I were to build an interpreter, I may look for the specific super set (perhaps because it's reserved) of typography; within it I would expect to find collections of decisions to render a particular typographic element.

In the case of my example, the main text element. If I were writing a Design Token to CSS transpiler, it might create a CSS class of .main and it would hold the specified font family, size and color information.

If I were writing a transpiler for some design software, perhaps it would be presented like this:

Example of Character Styles mocked up in Adobe XD

I fear I may have missed my mark with my initial example. What I was trying to express is that Tokens will need two parts, no matter which syntax or language we choose. One part to define a property, border thickness, font weight, gutter width etc. And another to define a collection of those properties, ie. A card element with a white background, rounded borders and a drop shadow.

Both are design decisions we need to codify.

@oscarotero

oscarotero commented Jul 23, 2020

Hi again. What a lot of great thoughts!
Let me explain better my proposal of using a CSS-like syntax.

Why a CSS-like syntax?

I agree with many of you that design tokens are key-value pairs, with nested groups, so they could be represented with JSON. Example:

{
  "typography": {
    "font-family": "Arial",
    "font-size": "24px"
  }
}

But the problems I see with JSON are:

  • It's not easily editable by humans
  • It doesn't allow comments or extra data, so to add support for more than simply key-value pairs, we'd need to create and standardize a more complex data structure like the following:
{
  "typography": {
    "comments": [
      "First comment",
      "Second comment"
    ],
    "value": {
      "font-family": {"value": "Arial"},
      "font-size": {"value": "24px"}
    }
  }
}

Another option is YAML, a more human-friendly format that allows comments:

# This is a comment
# Another comment
typography:
  font-family: Arial
  font-size: 24px

That's a good option but has some drawbacks:

  • YAML cannot be minified because it depends heavily on whitespace, so it's not a good format to return data from an API, for example.
  • It's a fragile format: a bad indentation breaks the file. I know people that hate YAML because of this.

Another drawback of JSON and YAML is not about the structure of the tokens but how to express the values. Colors are one of the clearest examples. Sketch, Figma, CSS, Swift, Java... all have different methods to represent colors. So we need a standard way to represent them as tokens. And the same goes for other values like dimensions, gradients, different units, animations, etc.

CSS is a language that has support and documentation for all of this, and has additional features:

  • The syntax is simple and robust. There are no problems with indentations like YAML or quotes and trailing commas like JSON.
  • It's flexible: allows comments, the format can be minified and loaded remotely.
  • Includes additional features in form of at-rules, so you can add some kind of logic in the tokens file, for example:
    • Import extra dependencies using @import
    • Define conditions using @media
    • There are some additional features not standard yet, but that could be included someday, like @extend or @when / @else
    • Or even new at-rules that we can create to solve specific problems with design tokens

@c1rrus says:

My (possibly incorrect) interpretation of @oscarotero's original proposal was an alternative, CSS-inspired syntax for expressing same kind of design token input data. So, for instance, a future Style-dictionary-like tool might read in such files instead of JSON.

Yes, exactly! My proposal is not about using the CSS format as-is to store tokens, but creating a new format, inspired by CSS and taking advantage of all its standards. So when a tool imports this new format, it knows how the colors, units, fonts, etc. are expressed and how to transform these values for use on other platforms (CSS, SASS, LESS, iOS, Android, CSS-in-JS, etc.).

@c1rrus also says:

Is the nesting...

  • a short-hand for declaring some tokens whose names share common prefixes? I.e. it's equivalent to something like typography.main.font-family: $font-families.display; typography.main.font-size: $sizes.medium; ...
  • or, is this like a bunch of CSS declarations - i.e. you're assigning the value of the $font-families.display token to the font-family property. If so, what is typography main selecting?
  • something else entirely ;-)

In my opinion, it's the first case, illustrated with this example:

typography {
    main {
        font-size: $sizes.medium
   }
}

/* It's equivalent to: */
typography {
    main.font-size: $sizes.medium;
}

/* And to: */
typography.main.font-size: $sizes.medium;

Anyway, this is just syntactic sugar, so I don't have a strong opinion about adding support for this or not. I just want to illustrate that in this format there are no selectors, only key-value pairs.

Uniqueness

In my company, we have a white-label product, so we have base styles and then multiple themes, created by overriding the tokens (mainly colors and fonts). So this new format should allow overriding these values at any point, in the same way as CSS.

Aliasing

From a designer's perspective, aliasing is a must. Design systems are recursive; one clear example is atomic design, in which a value (atom) is used to generate a bigger value (molecule), and this value will generate an even bigger value (organism). So we need a way to create aliases between values, in order to avoid repetition.

Practical example

For designers

I imagine design software like Sketch, Figma or Adobe XD with support for this new format, so you could have a panel to import and edit tokens in order to use them later in the elements. Maybe a simple text editor where you could include some @import to load the common tokens of your design system, override other values, leave comments, etc. These design tokens could be used to apply colors, styles, border widths, fonts, etc. to the elements of the document (for example, a button, or a text).

An example using Sketch:

(Screenshot: design tokens panel in Sketch)

For developers

The developers could use any tool (like Style Dictionary, Diez, etc.) to load these tokens and convert them to values that can be used on each platform (CSS, Swift, Android, etc.). In CSS, the tokens could be converted to CSS variables, but there could also be a PostCSS plugin to use them directly, for example:

.button {
    background-color: $colors.primary;
}

Another example: in my company, we are using mjml to create emails, where you must assign style values as attributes (see, for example, the button component). We could use the tokens here too:

<mj-button background-color="$colors.primary">Click me</mj-button>

Typing

CSS has no typings, and in most cases they're not necessary. If we use the CSS way to represent values, we can detect that #333 is a color, 23px is a dimension unit, 45deg is an angle unit, 200ms is a time unit, and even linear-gradient($colors.primary, $colors.secondary) is a gradient. So token consumers could load all these values and detect their types. I imagine, for example, Sketch or Figma could automatically show all the tokens containing colors in the color palette, gradient tokens in the gradients palette, and so on.
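As an illustration of that kind of inference, a consumer could classify CSS-style literals with a handful of patterns. This is only a sketch, with deliberately simplified (non-exhaustive) regular expressions of my own:

```typescript
// Infer a token's type from its CSS-like literal value alone.
function detectType(value: string): string {
  if (/^#[0-9a-fA-F]{3,8}$/.test(value)) return "color";
  if (/^-?\d*\.?\d+(px|em|rem|%)$/.test(value)) return "dimension";
  if (/^-?\d*\.?\d+(deg|rad|turn)$/.test(value)) return "angle";
  if (/^\d*\.?\d+(ms|s)$/.test(value)) return "time";
  if (/^linear-gradient\(/.test(value)) return "gradient";
  return "unknown";
}
```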

But typing could be a nice feature anyway. I've proposed a simple syntax here: #1 (comment) but open to other proposals, for example, using something similar to the @property at-rule (https://web.dev/at-property/)

@lukasoppermann

lukasoppermann commented Sep 21, 2020

On Value Types

I agree with @kevinmpowell that this is an important issue to be dealt with.
I am currently running into it while extracting and converting tokens from Figma to code.

Without a unit, the only option is to supply many values as strings, which makes them potentially harder to work with.

Think about something like 123.54874%: if you wanted to round it, you would need to convert it again.

The same goes when specifying rem or em, which can make a huge difference in the output but is not discernible without a unit: .75 could be em or rem.

To come back to @kaelig's original proposal, I would suggest adding either an (optional) unit property or an optional sub-object with value and unit, though I would prefer the latter option.

Another approach would be to do something like valueType, which would also allow specifying something like color formats, e.g. 'hex' or 'rgba'.

interface Token {
  name: string;
  value: any;
  valueType?: string;
  description?: string;
  data?: Data;
  // What other properties are needed? (type, category, group…)
}
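For illustration, with such a field a transformer would no longer need to parse units out of strings: the numeric value stays numeric (so it can be rounded directly) and the unit travels alongside it. The token and `toCss` helper below are hypothetical examples of mine, not spec syntax:

```typescript
// A token keeping its numeric value and its unit/type separate.
const bodySize = { name: "font-size-body", value: 0.75, valueType: "rem" };

// A transformer can then emit platform output without guessing
// (naively assumes valueType is a CSS unit here).
function toCss(token: { value: number | string; valueType?: string }): string {
  return token.valueType ? `${token.value}${token.valueType}` : String(token.value);
}
```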

Typings

I am strongly in favour of typing. This may not be important for the design tools, but for implementation of design tokens. Thinking about developers writing transformers, plugins, etc.

This would arguably also create a less error-prone toolset around design tokens, as all tools and implementations can refer back to standardised types.

Property specifications

I don't know if this should be done in this issue or in a new one, but I think we also need to specify properties at a more detailed level. At least the common ones.

If you agree, I would prefer to move this to a new issue, as this one gets pretty crowded.

Property naming specification

It would be important to specify how certain values need to be named. Having a standard makes it much easier to use them in tools. For example, are we talking about border-color or stroke-color? This would allow tools to suggest the right token in the right position. (Of course it could also be solved by using extra data per item, but a naming convention like in CSS would be helpful.)

Property value specification

The same goes for property values.
For example, should an rgb value be specified as rgb(50,50,50), as 50,50,50, or as an object {r:50,g:50,b:50}?

@lukasoppermann

lukasoppermann commented Oct 1, 2020

Concerning token names

Currently the interface defines names as string.

interface Token {
  name: string;
  value: any;
  description?: string;
  data?: Data;
  // What other properties are needed? (type, category, group…)
}

However, I am wondering if this should be more strict. Since I guess most of us have worked with tech a lot, the examples "accidentally" seem to all follow conventions. However, I have seen names like this:

  • button login
  • buttonLogin
  • Button-Login
  • button_login
  • button+login

Also there are many languages with special characters like äüöß.

I think this is something to consider and specify, either by restricting it to a certain format and character set or by explicitly allowing any string.

Case 1: restricted

E.g. lower case with dash = button-login.
This makes it much easier to work with from a tech side, as you know what you are getting. Kebab case can be transformed to camel case to make items referencable using dot notation or usable in iOS or css.

However, it does restrict the user, or at least the user's output.

Case 2: unrestricted

This would allow very expressive names. However, this makes it complex to use references in dot notation or use the names in code. They need to be converted, and special attention needs to be paid to how the names are converted so they remain easily readable.
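A sketch of what Case 1 could look like in tooling: a validation pattern for the restricted (kebab-case) form, plus the mechanical conversion to camel case for dot notation or iOS use. The regex and function names are mine, purely illustrative:

```typescript
// Restricted names: lower-case letters/digits, dash-separated.
const VALID_NAME = /^[a-z][a-z0-9]*(-[a-z0-9]+)*$/;

// Kebab case converts mechanically to camel case.
function toCamel(name: string): string {
  return name.replace(/-([a-z0-9])/g, (_, ch: string) => ch.toUpperCase());
}
```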

@lukasoppermann

lukasoppermann commented Mar 2, 2021

Hey @kaelig, regarding the version you proposed in the TokenList: is it supposed to be the version of the tokens, or of the spec used?

In case it is for the tokens, this would be very useful (I am currently looking into how a version could be added to a design tokens file).

My use case is that I cannot store the tokens in a VCS (I don't have access due to missing VPNs), so I need to add a version to the file.

@kaelig
Member Author

kaelig commented Mar 3, 2021

I believe it was related to the version of the file itself, not of the token spec.

However, you bring a very important point that we need to discuss with tooling vendors at large: should the format be shaped in a way that allows for token-level versioning, or any mechanism that potentially helps with cross-tool conflict resolution between versions of a token set?

My hunch is that we'll see some tools acting as "design token brokers" in the middle, which are tools that would aggregate tokens and manage them in a way that works for an entire team's ecosystem. Another outcome I foresee will be that a team points all of their tools to a single "token source", which itself handles access management, versioning, and shipping.

With that in mind, I'd love to hear more about your use-case and perhaps you could even share what you think might solve what you brought up?

@lukasoppermann

lukasoppermann commented Mar 3, 2021

Hey @kaelig,

I believe it was related to the version of the file itself, not of the token spec.

That sounds great. I think this is something that is definitely needed!

Your "hunch" seems very likely to me. However, in any of those cases I think it would be helpful to have some metadata and a version within the token payload. This way it does not matter if you get the tokens from a broker via an API request, pull them from an npm package, or import a JSON file that the design team stores on a shared drive.

Keeping a version tightly coupled with the tokens just makes sense, like the version number you have in a package.json. It makes the product (token.json) more resilient, as it does not rely on a specific type of storage.

It also means that caching the payload (tokens) will always keep the version as well, so it makes it easy to compare if something has changed and a cache needs to be invalidated.

Let me know what your thoughts are on this.

Just as a last thought, DSP has a spec_version in addition to the token version. I think this is a great idea if you think about the tooling, as it makes it easier to change the spec later on without breaking tools (the issue that still haunts the internet 😉).
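Sketching that idea, the payload could carry both numbers: tools compare the token-set version for cache invalidation and check the spec version for compatibility. The field and function names here are illustrative (spec_version borrowed from DSP):

```typescript
// Version metadata carried inside the token payload itself.
interface TokenFile {
  spec_version: string; // version of the format the file conforms to
  version: string;      // version of this particular token set
  tokens: unknown[];
}

// A cached copy is stale when the token-set version changed.
function isStale(cached: TokenFile, fetched: TokenFile): boolean {
  return cached.version !== fetched.version;
}
```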

@nhoizey

nhoizey commented Mar 11, 2021

👋 Hello everyone, I'm Nicolas, I've been working on some projects involving Design Tokens, and hope to do more, so I'm really interested in a standard for these.

I've read the whole thread two times to make sure I understand all opinions, and here are some comments:

Syntax

I like @oscarotero's suggestion to use a CSS-like syntax, because it's really both "human-editable and human-readable", which is the first principle from @kaelig's first message, it's better than JSON or YAML for some reasons @oscarotero listed here, and it already has many syntaxes and units for colors, dimensions, etc.

However, sadly, I think dev tooling for such a format could probably not reuse CSS-based tooling, as this syntax would be "inspired by" more than "based on" CSS. And there are already a lot of robust tools for dealing with JSON or YAML inputs and outputs.

Types

I agree that it would be best having types to help tools manipulate values, and allow for tokens files validation.

I use Style Dictionary and the CTI logic (or map it from my own hierarchy) to get typing, which allows for targeted transformations (px to rem, colors, etc.).

I see that @oscarotero said that CSS has no typings and in most cases they're not necessary, but I think CSS does have (inferred?) typings; we can't put a dimension in a background-color property. Oh, but I said just above that CSS is not the best inspiration anyway… 😅

Naming

I would prefer it to be restricted, even without dashes (though underscores could be used), to prevent any issues, as suggested by @lukasoppermann in this comment.

Aliasing

As most of you already said, aliasing is a must have to deal with token reuse. Tools can deal with circular references and alert the user when there's something wrong.

Property naming specification

From @lukasoppermann's comment here, I don't understand "are we talking about border-color or stroke-color". These are either token names or properties in the app, whose values can be based on tokens. Why would this standard need to define these names?

@lukasoppermann

lukasoppermann commented Mar 11, 2021

Hey @nhoizey, thanks for all your thoughts on the topic.

Concerning the property naming specification, I was coming from a "tool maker" perspective (as I work on some design token tools). However, the example is not that great, as it does not necessarily matter so much for colors.

Let's say tools like Figma and Sketch implement design tokens:

  1. The user can use tokens In different places
  2. The user defines multiple size / pixel based tokens:
border-radius--small: 4px;
border-radius--default: 8px;

spacer--4: 4px;
spacer--8: 8px;
spacer--16: 16px;
  1. The tool maker may want to only show the radius tokens for the radius property and the spacer tokens for auto-layout or padding
  2. To allow this any consumer of tokens (incl. tools) must be able to "understand" the tokens. This is were naming standards would be helpful.

It could also be solved by a property on the item that defines the use case (which then would probably have to be standardised).

I am not saying this idea is the best solution. But having a way to "understand" values would be very helpful.
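The "property on the item" idea above can be sketched as follows. This assumes a hypothetical `type` field on each token (the field name is not defined by the proposal at this point):

```typescript
// Sketch: a tool filtering tokens by use case, assuming each token
// carries a hypothetical "type" field describing what it is for.
interface Token {
  name: string;
  value: string;
  type: string;
}

const tokens: Token[] = [
  { name: "border-radius--small", value: "4px", type: "radius" },
  { name: "spacer--4", value: "4px", type: "spacing" },
  { name: "spacer--8", value: "8px", type: "spacing" },
];

// When the user edits padding/auto-layout, only offer spacing tokens:
const spacingTokens = tokens.filter((t) => t.type === "spacing");
```

With an explicit type, the tool no longer has to parse token names to decide which tokens to show where.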

@nhoizey
Copy link

nhoizey commented Mar 11, 2021

@lukasoppermann ok, I now understand what you want to achieve, and I agree there's a need to filter tokens to those really useful for each use case in a design tool. 👍

@lukasoppermann
Copy link

lukasoppermann commented Mar 16, 2021

Hey @kaelig I just thought about another topic: relationships.

There are two types:

1) Values from a visual group (what DSP combines as collections)

  • Font properties e.g. font-size, font-weight, font-family
  • Shadow properties e.g. shadow color, shadow blur, shadow offset

While one could argue that the font example would be a text-style group, and the tokens make sense on an individual basis, this is not true for the shadow example. An x-offset of a shadow has no other use.

2) Logical pairs (e.g. a background & text color)

In CSS this is apparent by inspecting an element and checking the background color and the text color. For tokens, you cannot know this at the moment.

I personally use the Google Material approach, where you have a color, e.g. surface, and a text/icon color to use on top of surface. I find this very helpful. However, you need to logically understand this to make the connection, which is hard for tools.

Therefore it could be helpful to allow for some kind of pairing.

This could be solved via the data properties, but if this use case is bigger, it may make sense to consider it.
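One possible shape for solving it via the `data` property, as suggested above. The `pairsWith` key is entirely hypothetical:

```typescript
// Sketch: expressing a logical pair (surface + on-surface text color)
// through the proposal's free-form `data` property.
// The "pairsWith" key is a hypothetical convention, not part of the spec.
const surface = {
  name: "color-surface",
  value: "#ffffff",
  data: { pairsWith: ["color-on-surface"] },
};

const onSurface = {
  name: "color-on-surface",
  value: "#1a1a1a",
  data: { pairsWith: ["color-surface"] },
};
```

A tool reading `data.pairsWith` could then suggest the matching text color whenever the surface color is applied.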

@seanmcintyre
Copy link

seanmcintyre commented Apr 2, 2021

Hi everyone! I wanted to chime in with the approach to tokens we're taking at Vimeo. Everything is in early stages and subject to change, but we have quirks that I'm not sure have been fully represented here, and I hope y'all will find them interesting. I'm still re-reading the entire thread to ensure I understand the many thoughts that have been put forward, but I feel like @danoc, @ventrebleu, @mathieudutour, @mirisuzanne, @NateBaldwinDesign (and others!) have touched on related ideas to our approach.

High Level Summary

  • tokens are organized in a taxonomic structure
  • tokens can be a property, a value, both, or multiple property-value pairs
  • tokens are context-dependent and store their values per context
    • they do not need to be consistent across contexts: we have edge tokens that are shadows in light mode and borders in dark mode
  • what we're calling a TokenKit is interchangeable and must have a minimal set of specific tokens to work with our design systems, in particular our primary component library Iris
  • what we're calling a TokenBand is a function that takes a value 0 - 1000 (in rare instances less than 0 or greater than 1000) and returns a discrete token, like blue-500.
    • tokens do not have to be functional, and may be handwritten, but it is discouraged
    • if a Token is derived from a TokenBand, i.e. vimeo-core-blue-500 derived from vimeo-core-blue, it will be read-only, but still human-readable!
  • TokenKits should be versioned
  • we're targeting our newer frontend stack first (React, styled-components, TypeScript), then moving to CSS Variables, then striving to support all platforms agnostically
  • in addition to supporting React/SC/TS in our first-pass, we're writing a custom Figma plugin to sync library style properties for our designers from code
  • we're probably going to store color as LCH and provide tools to convert to other colorspaces when needed
  • we want to stay as close as possible to CSS names and conventions, and we want CSS Variables to be the most privileged target output for the token system (despite the fact that we're starting with CSS-in-JS out of necessity)
  • we're considering proposing TokenKits be published on npm in a manner similar to @types
    • if TokenKits are interchangeable, you could pull @tokens/vimeo or @tokens/spotify or @tokens/asana etc. to get all of their TokenKits
    • alternatively, we also want to make a token browser (think minimalist, standardized Storybook for tokens) that follows the taxonomic hierarchy tokens as routes with a page for each token that has visual representation(s), documentation, and can function as a JSON endpoint
      • ie. vimeo.design/tokens/core/color/blue/500
      • I'm hopeful something like this can replace static brand styleguides/kits generally provided for public use
  • we've selected the following primitive token types to get started: size, edge, color, motion, layout, typography
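The TokenBand idea above can be sketched as a function from a grade (0–1000) to a discrete value. This is an editorial simplification using a linear LCH lightness ramp; Vimeo's actual implementation is not shown in the thread:

```typescript
// Sketch: a TokenBand as a function of grade (0-1000) -> value.
// The linear lightness ramp and fixed hue/chroma are assumptions
// for illustration only.
function band(minLightness: number, maxLightness: number) {
  return (grade: number): string => {
    const clamped = Math.min(1000, Math.max(0, grade));
    const lightness =
      maxLightness - (clamped / 1000) * (maxLightness - minLightness);
    return `lch(${lightness}% 60 270)`; // hue/chroma fixed for the sketch
  };
}

const blue = band(20, 95); // the TokenBand
blue(500); // a discrete token value derived from the band
```

A derived token like `blue-500` is then just the band evaluated at a grade, which is why such tokens can be read-only yet human-readable.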

Context

Tokens must specify or inherit default contexts. In the case of color tokens, the context will almost always be a theme.

note: in the below example grayscale and slate are TokenBands, and white is a Token

export type TokenValue = string | FlattenSimpleInterpolation;

export type TokenProxy = (grade: number) => TokenProxy | TokenPrimitive;
export type TokenPrimitive = (grade: number) => TokenValue;

interface Token {
  default?: string;
  themes: {
    [theme: string]: TokenPrimitive;
  };
}
// background is a TokenBand
export const background = (grade: number) => readToken(backgrounds, grade);

const backgrounds: Token = {
  default: 'light',
  themes: { dark, light },
};

function dark(grade: number): TokenValue {
  return grayscale(-1 * (grade / 5 - 1000));
}

function light(grade: number): TokenValue {
  return grade >= 300 ? white : slate(-1 * (grade / 2 - 150));
}

import { core } from '@tokens/vimeo';
import { Button } from '@vimeo/iris/components';

const SpecialDownloadButton = styled(Button)`
  display: flex;
  color: ${core.color.text(700)};
  background: ${core.color.background(400)};

  @media screen and ${core.layout.breakpoint(300)} {
    display: none
  }
`;

Taxonomy

This is a very rough draft. realm is reserved in case something higher is needed. I'm also deeply uncertain about platform existing above domain, but the thinking there is that aspects of a platform (iOS, Android, macOS, web standards, etc) generally affect the UI/UX of every company/org/project within them, and are therefore higher taxons.

| Taxon | Example | Example | Example | Example |
| --- | --- | --- | --- | --- |
| realm | | | | |
| platform | web | web | ios | |
| domain | vimeo | vimeo | vimeo | vimeo |
| phylum | core | create | brand | |
| order | color | color | typography | typography |
| series | | secondary | header | |
| specimen | blue | hover | size | size |
| grade | 500 | | 800 | 700 |

We're trying to avoid hyper-specific tokens and stick to as few tokens as possible to represent all the design concepts and relationships in Vimeo's UI/UX. Eventually though, I could see a need for increased token specificity, such as the component-specific tokens that exist in Adobe Spectrum (and other systems).

| Taxon | Example |
| --- | --- |
| realm | |
| platform | web |
| domain | vimeo |
| phylum | core |
| class | svv |
| subclass | dock |
| superorder | button |
| order | typography |
| series | meta |
| specimen | size |
| grade | 500 |

Mapping Relationships (and Visualizing TokenBands)

> Design systems should be able to store meaningful relationships that include adjustment of values: e.g. space2 = space1 * 2. If I can't capture real relationships, it doesn't feel like a system (@mirisuzanne)

I find this to be so, so, so important. It's the basis behind our more functional approach and use of TokenBands. In the case of color tokens, a TokenBand should be visualized as a gradient:

[image: a TokenBand visualized as a color gradient]
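The `space2 = space1 * 2` relationship quoted above can be encoded as a rule rather than a list of hard-coded values. A minimal sketch (the doubling scale and names are illustrative):

```typescript
// Sketch: a spacing scale captured as a relationship, not as values.
// space(n) doubles the base at each step: space2 = space1 * 2, etc.
const space1 = 4; // base spacing unit, illustrative

const space = (step: number): number => space1 * 2 ** (step - 1);

space(1); // 4
space(2); // 8
space(3); // 16
```

Because the relationship itself is stored, changing `space1` regenerates the whole scale consistently, which is the "system" quality the quote asks for.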

Given that we view tokens as ways to capture relational meaning in design, we think that:

  • tokens should be strongly typed
  • a token need not be unique, but a TokenBand must be unique.
  • aliases are fine, in the sense that a token can point to another token: vimeo-core-color-primary-500 and vimeo-core-color-blue-500 are the same color. The primary color is a higher order token that corresponds to a concept shared across many components and patterns in the design system. Designers and devs should actually be discouraged from using low level tokens like blue-500 directly in exactly the same way as they should be discouraged (or forbidden) from using a specific color value directly instead of a token.

Versioning

Following the structure we've set up, there isn't really a need to individually version tokens. TokenBands could be versioned, but I'm not sure if we'll find value from that. I'm currently leaning toward a pseudo-semver approach to TokenKits where adding a token is a minor revision, and changing or removing tokens is a major revision.
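The pseudo-semver rule described above can be sketched as a diff over flat name→value maps. The `bump` helper is hypothetical, not Vimeo's actual API:

```typescript
// Sketch of the pseudo-semver rule: adding a token is a minor bump,
// changing or removing a token is a major bump.
function bump(
  before: Record<string, string>,
  after: Record<string, string>
): "major" | "minor" | "patch" {
  // Changing or removing a token can break consumers: major.
  const breaking = Object.keys(before).some(
    (name) => !(name in after) || after[name] !== before[name]
  );
  if (breaking) return "major";
  // Adding a token is backwards-compatible: minor.
  const added = Object.keys(after).some((name) => !(name in before));
  return added ? "minor" : "patch";
}
```

For example, `bump({ a: "1" }, { a: "1", b: "2" })` yields `"minor"`, while `bump({ a: "1" }, { a: "2" })` yields `"major"`.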

@lukasoppermann
Copy link

lukasoppermann commented Apr 6, 2021

Hey, I am currently working on a tool to use design tokens in design.

I need to work with text-styles, meaning a collection of multiple values that make up a unique text style, e.g. body.

Since the specs do not include collections, how would I best go about it? Would I just use tokens with an object as a value?

Is this how such a token should be best defined?

{
      name: 'font/body',
      value: {
        fontSize: 16,
        textDecoration: 'line-through',
        fontFamily: 'Roboto',
        fontWeight: 500,
        fontStyle: 'italic',
        fontStretch: 'normal',
        letterSpacing: 0,
        lineHeight: 'normal',
        paragraphIndent: 0,
        paragraphSpacing: 0,
        textCase: 'none',
      },
      description: 'use for danger stuff',
      data: {
        type: 'fontStyle'
      }
}
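As an editorial aside, here is one way a tool could flatten such an object-valued token into CSS declarations. The camelCase-to-kebab-case mapping and the px defaulting for `fontSize` are assumptions, not something the draft defines:

```typescript
// Sketch: flattening a composite text-style value to CSS declarations.
// Unit handling is simplified: only fontSize gets "px" appended.
const body = {
  name: "font/body",
  value: { fontSize: 16, fontFamily: "Roboto", fontWeight: 500 },
};

function toCss(value: Record<string, string | number>): string {
  return Object.entries(value)
    .map(([prop, v]) => {
      // camelCase -> kebab-case, e.g. fontSize -> font-size
      const kebab = prop.replace(/[A-Z]/g, (c) => `-${c.toLowerCase()}`);
      const unit = typeof v === "number" && prop === "fontSize" ? "px" : "";
      return `${kebab}: ${v}${unit};`;
    })
    .join(" ");
}

toCss(body.value); // → "font-size: 16px; font-family: Roboto; font-weight: 500;"
```

This is the kind of interpretation step a tool has to perform, which is why the thread keeps circling back to where the type information should live.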

@lukasoppermann
Copy link

lukasoppermann commented Sep 3, 2021

If I read the gh-page / w3c specs correctly this would be a composite token, correct @kaelig?

Is the idea here (a): the value property has named values (as seen below), where the names can be used to interpret the value (e.g. fontSize is always a px unit)?

{
      name: 'font/body',
      value: {
        fontSize: 16,
        textDecoration: 'line-through',
        // ...

or (b): the value holds tokens:

{
      name: 'font/body',
      value: {
        fontSize: {
           type: 'dimension',
           value: 16
        },
        // ...

@kevinmpowell
Copy link
Contributor

kevinmpowell commented Sep 3, 2021

@lukasoppermann what we've been talking about is composite tokens themselves having a type which would help interpret values, so something like:

{
      name: 'body-text',
      type: 'text-style',
      value: {
        fontSize: 16,
        textDecoration: 'line-through',
        // ...

A separate composite type definition for text-style would define the types for fontSize and textDecoration
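One possible shape for such a separate composite type definition, sketched here as a plain map from sub-value names to token types. The exact shape is a guess; the draft had not defined it at this point:

```typescript
// Sketch: a composite type definition for "text-style" as a map from
// sub-value names to token types. The shape is hypothetical.
const textStyleDefinition: Record<string, string> = {
  fontSize: "dimension",
  textDecoration: "string",
  fontFamily: "fontFamily",
  fontWeight: "fontWeight",
};

// A tool interpreting a text-style composite token looks up each
// sub-value's type here instead of on the token itself:
const fontSizeType = textStyleDefinition["fontSize"]; // "dimension"
```

This keeps individual tokens small while still letting tools interpret `fontSize: 16` unambiguously.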

@lukasoppermann
Copy link

lukasoppermann commented Sep 3, 2021

Hey @kevinmpowell thanks for the quick reply.
So composite tokens would be defined within the specifications? I cannot create my own "custom" composite tokens, correct?

Since I am currently trying to implement the draft to test it out, my best shot would be to create a group of tokens, right?
So that fontSize is an actual token with type dimension within the "group" body-text:

{
  'body-text': {
      fontSize: {
        value: 16,
        type: 'dimension',
      },
      textDecoration: //...
  }
}

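Under this group-based workaround, a tool can tell tokens apart from groups by checking for a `value` property. This is one possible heuristic for the structure shown above, not something the draft mandates:

```typescript
// Sketch: distinguishing tokens from groups in a nested token tree.
// Heuristic: a node with a `value` property is a token; otherwise
// it is a group containing further nodes.
type Node = { value?: unknown; type?: string; [key: string]: unknown };

function isToken(node: Node): boolean {
  return Object.prototype.hasOwnProperty.call(node, "value");
}

const bodyText = {
  fontSize: { value: 16, type: "dimension" },
};

isToken(bodyText.fontSize); // true  (has a value)
isToken(bodyText); // false (a group)
```

The weakness, as discussed below, is that the group name `body-text` carries the "this is a text style" meaning, which the draft says tools should not infer from group names.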
@kevinmpowell
Copy link
Contributor

kevinmpowell commented Sep 3, 2021

> I cannot create my own "custom" composite tokens, correct?

☝️ We've talked about user-defined (custom) composite tokens, but haven't added it into the current draft.

I think your example makes sense. A group is similar to a composite, though with less defined structure.

@lukasoppermann
Copy link

lukasoppermann commented Sep 3, 2021

Hey @kevinmpowell thanks for the feedback.

I reread the section about groups and saw that it states:

> Groups are arbitrary and tools SHOULD NOT use them to infer the type or purpose of design tokens.

I want to export tokens from Figma, so that I can build an Amazon Style Dictionary transformation to turn them into CSS, XML and xcassets. To me this feels like "inferring" the purpose (e.g. combining the tokens into a text style).

Considering this it would probably be more useful to create "user-defined composite tokens" and update them when the specs are finalised / composite token types are defined. Does that make sense?

@kevinmpowell
Copy link
Contributor

kevinmpowell commented Sep 3, 2021

> Considering this it would probably be more useful to create "user-defined composite tokens" and update them when the specs are finalised / composite token types are defined. Does that make sense?

👍

@kaelig
Copy link
Member Author

kaelig commented Sep 8, 2021

Hi everyone, I'm closing this for now as we're looking to publish the first public editor's draft soon.

Each important point will be addressed in its own issue, so we can individually track decisions and have focused conversations.


You all provided tremendous amounts of inspiration, and I want to thank you all on behalf of the entire DTCG team for being so engaged ❤️

To complement the opinions of the community, we gathered requirements from all major design tool makers (both in a group setting and in smaller focus groups).

Design tool makers tell us that their customers have a growing interest toward design tokens, so please keep sharing your feedback and use-cases so we can make the spec better for everyone 🔥 🙏🏻

@kaelig kaelig closed this as completed Sep 8, 2021
@c1rrus c1rrus unpinned this issue Sep 13, 2021