Try: Framework-agnostic block interoperability (Vanilla, Vue) #2463

Closed · wants to merge 1 commit into base: master

8 participants
@aduth (Member) commented Aug 18, 2017

This pull request seeks to explore a few different options for framework-agnostic block rendering. It stemmed from some initial attempts to refactor Editable state structure (#771) to be less dependent on React element trees, because (a) it would simplify block transforms to ensure a consistent state structure and (b) it would resolve issues with block state serialization for collaborative editing (#1877).

While changing the shape of the children value was itself not too problematic, it surfaced that we were dependent on the React element shape to allow for save serialization, since React would otherwise not know how to handle the children value. An option here could have been to have block authors convert the children value to a React element with a helper method, but this would introduce additional overhead to implementing a block's save behavior.

Since we've also encountered other issues with using React for save serialization -- disabling HTML escaping (#421) and unnecessary applications of element key (#2349 (comment)) -- I took to exploring what it might take to give us full control over the render behavior for a tree of "nodes", where nodes could be a React element, or a children value, or even a component from another library.

A "Vanilla" element syntax

The element signature ( type, attributes, ...children ) has gained widespread adoption for representing a tree of nodes: React, Vue, and many other libraries use it, but in every case you need to feed it into that library's own flavor of an element-creator function (createElement). Could we not represent these arguments as a simple array instead?

// React
React.createElement( 'section', null,
	React.createElement( 'header', null,
		React.createElement( 'h1', null, 'Welcome' ) ),
	React.createElement( 'p', null, 'Hello World' )
);

// Vanilla
[ 'section', 
	[ 'header', 
		[ 'h1', 'Welcome' ] ],
	[ 'p', 'Hello World' ]
]

The performance characteristics of representing it this way should be measured, especially as currently implemented, where we first traverse the tree to convert it to the equivalent React element hierarchy; but considered on its own, the interface is appealing. Would an ideal implementation require reinventing the wheel? Maybe not: poking through the internals of a library like Preact, its diffing logic is well isolated, compact, performant, and compatible.
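To make the appeal concrete, here is a hedged sketch (not part of this PR; `renderToString` is a hypothetical helper) of how such an array tree could be serialized directly to a save-ready string, giving full control over the output, including whether to escape:

```javascript
// Hypothetical helper: serialize a "vanilla" array element to an HTML
// string. Note: nothing is escaped here, which is exactly the kind of
// control over serialization behavior discussed above (#421).
function renderToString( node ) {
	// Plain strings are text nodes.
	if ( typeof node === 'string' ) {
		return node;
	}

	const [ type, ...rest ] = node;

	// An optional attributes object may follow the type.
	let attributes = {};
	if ( rest[ 0 ] && typeof rest[ 0 ] === 'object' && ! Array.isArray( rest[ 0 ] ) ) {
		attributes = rest.shift();
	}

	const attrs = Object.keys( attributes )
		.map( ( key ) => ` ${ key }="${ attributes[ key ] }"` )
		.join( '' );

	const children = rest.map( renderToString ).join( '' );

	return `<${ type }${ attrs }>${ children }</${ type }>`;
}

renderToString( [ 'section',
	[ 'header',
		[ 'h1', 'Welcome' ] ],
	[ 'p', 'Hello World' ]
] );
// → '<section><header><h1>Welcome</h1></header><p>Hello World</p></section>'
```

A real implementation would need to handle void elements, boolean attributes, and (opt-in) escaping; this only illustrates that the array shape is trivially walkable.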

Interoperability renderers

An initial approach considered for interoperability operated by traversing this vanilla element hierarchy, specifically by looking at the type of each element. Where custom logic is necessary, we can provide hooks to enable an implementation to determine whether it can handle an element of a particular type. For example, if it looks like a children value, pass it to a Children Renderer implementation (Vue component, unescaped HTML markup, etc.).

The implementation proposed here works by replacing the custom element type with a component which renders a mount target (a DOM element) but defers actual rendering to the specifics of the implementation. It's assumed that custom implementations will accept the element (its props and children, if applicable) and perform the necessary DOM operations. React reconciliation is bypassed by implementing shouldComponentUpdate = () => false:

class InteropRenderer extends Component {
	componentDidMount() {
		this.props.handler.render( this.props.element, this.node );
	}

	componentWillReceiveProps( nextProps ) {
		nextProps.handler.render( nextProps.element, this.node );
	}

	shouldComponentUpdate() {
		return false;
	}

	render() {
		return createElement( 'div', { ref: ( node ) => this.node = node } );
	}
}

This is a pattern we've used elsewhere, specifically in the TinyMCE component, which needs to manage itself without interference from React reconciliation.

If there is no interoperability handler for the element type, the array shape is then coerced to its React equivalent.
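That final coercion could be sketched roughly as follows (a hypothetical `toElement` helper, not the PR's actual code; the element creator is injected so the same traversal could target React, Preact, or another library):

```javascript
// Sketch: coerce a "vanilla" array element into a framework element by
// delegating to the framework's own createElement, passed in as `h`.
function toElement( h, node ) {
	// Strings pass through as text.
	if ( typeof node === 'string' ) {
		return node;
	}

	const [ type, ...rest ] = node;

	// An attributes object may optionally follow the type.
	const hasProps = rest[ 0 ] && typeof rest[ 0 ] === 'object' && ! Array.isArray( rest[ 0 ] );
	const props = hasProps ? rest.shift() : null;

	// Recurse into children before handing off to the framework.
	return h( type, props, ...rest.map( ( child ) => toElement( h, child ) ) );
}
```

With React, the call would be `toElement( React.createElement, node )`; the traversal itself stays framework-agnostic.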

Block mount targets

One downside of a render interoperability pattern is that the handlers must be explicitly defined: Would it be the responsibility of WordPress to provide interoperability handlers for popular frameworks? If plugin authors implement their own, how would we avoid duplication/conflicts?

Another option is to apply only the idea of the mounting target. When rendering a block, we could provide as an additional parameter to the edit and save functions a DOM node to which the block should render, using its own appropriate implementation. This works in the same way as the interoperability renderer, as a component which excludes itself from React reconciliation (except in the case that the edit or save functions return a React element).

Proofs of Concept: Vanilla and Vue Blocks

Included in these changes are two example blocks, one implemented with no framework, and the other with Vue. Here's how they look:

Vanilla:

edit( { attributes, setAttributes } ) {
	return [ 'div',
		[ 'input', {
			value: attributes.text,
			onChange( event ) {
				setAttributes( {
					text: event.target.value,
				} );
			},
		} ],
		[ 'h1', attributes.text ],
	];
},

save( { attributes } ) {
	return [ 'h1', attributes.text ];
},

Vue:

edit( { attributes, setAttributes, target } ) {
	if ( target.firstChild ) {
		Object.assign( target.firstChild.__vue__, attributes );
		return;
	}

	const child = document.createElement( 'div' );
	target.appendChild( child );

	new Vue( {
		el: target.firstChild,
		data: () => ( { ...attributes } ),
		template: `
			<div>
				<input :value="text" @input="setText( $event.target.value )">
				<h1>{{ text }}</h1>
			</div>
		`,
		methods: {
			setText( nextText ) {
				setAttributes( { text: nextText } );
			},
		},
	} );
},

save( { attributes, target } ) {
	const child = document.createElement( 'div' );
	target.appendChild( child );

	new Vue( {
		el: target.firstChild,
		data: () => ( { ...attributes } ),
		template: `
			<h1>{{ text }}</h1>
		`,
	} );
},

The Vue component is slightly more difficult to manage for a few reasons:

  • By default, Vue transforms an assigned data object into its observable/reactive format, so we must clone attributes to prevent the original from being overridden (assuming we still want attribute changes to flow through setAttributes).
  • Rendering into a mount target requires creating a child node, since in my testing Vue rendering wants to take the place of the target el, and assumes it to be of the same tag name as its root template node.
  • To apply attribute updates, we access and assign to the __vue__ internal property of the DOM element.

Future Considerations and Challenges

There are a few different directions we could take here, particularly around how far we take the idea of no-framework "array" elements. Potentially, this could serve as a first-class WordPress rendering pattern in lieu of a third-party library. Of course, most of what's explored here is the simplest of cases, and will need further exploration around more difficult challenges:

  • React context
    • Used by react-redux to make Redux state available throughout components of the application
    • Used by react-slot-fill to allow merging React subtrees. Gutenberg uses slots for rendering toolbars and inspector controls, and has been proposed as an option for plugin extensibility.
  • Component interoperability
    • If we become more framework-agnostic, how do we expect other frameworks to consume or reimplement React (or no-framework) equivalent components (Editable, InspectorControls, etc.)?
  • Component lifecycle
    • An array reimplementation of components works well for equivalent output of stateless function components, but what about components with lifecycle (didMount, willReceiveProps)? Presumably we would need some equivalent of a component class... or would we? 💭

The work here begs the question, though: why did React et al. take the approach of a createElement function? Am I overlooking some critical disadvantage to plain object elements?

@youknowriad (Contributor) commented Aug 25, 2017

What about your Editable state exploration? Do you think we should work on this regardless of the status of this PR? Do you have any tangible work on this? Asking because I might be interested in taking on this issue (if it's not already being worked on).

@aduth (Member) commented Aug 25, 2017

@youknowriad I pushed my work-in-progress branch as update/children-value (specifically 6521f06). The approach there was slightly different, more akin to a Slate.js state structure.

The major issue I'd encountered is one I explained in the original post here:

it surfaced that we were dependent on the React element shape to allow for save serialization, since React would otherwise not know how to handle the children value. An option here could have been to have block authors convert the children value to a React element with a helper method, but this would introduce additional overhead to implementing a block's save behavior.

This option can be seen in the update/children-value branch as a toElement helper function:

6521f06#diff-9e70015597c35b4faecd9a6beae81344L127

Edit: Noting that this effort was only toward updating the shape of the children value, and not Editable entirely.

@youknowriad (Contributor) commented Aug 25, 2017

This option can be seen in the update/children-value branch as a toElement helper function

I'm OK with that. Maybe it could be a component: we have Editable for edit, and it could be Saveable or EditableHtml.

@youknowriad (Contributor) commented Aug 25, 2017

The major issue I'd encountered is one I explained in the original post here

What about the transforms -- are these any better? Maybe it's worth a PR regardless of this tradeoff.

@aduth (Member) commented Aug 25, 2017

What about the transforms, are these any better?

It can certainly help make things more consistent, which is one of the bigger pain points of transforms currently (checking type of incoming value, reaching into children, normalizing string content, etc).

@gziolo (Member) commented Aug 26, 2017

@aduth I noticed that all existing blocks use JSX for markup. I have no experience with Vue, but it looks like they maintain a JSX-to-Vue Babel transform: https://github.com/vuejs/babel-plugin-transform-vue-jsx. It seems to allow keeping JSX for blocks and deciding at build time which library runs the code. I'm assuming JSX transformed to library internals would work the same way in both cases. This doesn't solve the other issues you mentioned in the description. However, we could still pipe in another Babel transform that would output not only framework/library-specific code, but also the array representation proposed in this PR. This way it would be possible to use a representation tailored to each need: the virtual DOM part would work out of the box, and the array representation could be consumed internally by Gutenberg. I hope it's not too confusing. I'm not even sure this is what's needed, but I thought it was worth sharing anyway 😄
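As a sketch of that idea (hypothetical, not from the PR; `h` stands in for a JSX pragma function configured via Babel's `/** @jsx h */` comment or the transform's `pragma` option), the pragma could emit the array representation directly instead of framework-specific elements:

```javascript
// Hypothetical JSX pragma: instead of building React or Vue elements,
// emit the "vanilla" array representation directly. With Babel configured
// to use `h`, <h1>Hi</h1> compiles to h( 'h1', null, 'Hi' ).
function h( type, props, ...children ) {
	// JSX passes null when there are no attributes; omit it from the array.
	return props ? [ type, props, ...children ] : [ type, ...children ];
}

h( 'section', null,
	h( 'header', null,
		h( 'h1', null, 'Welcome' ) ),
	h( 'p', null, 'Hello World' )
);
// → [ 'section', [ 'header', [ 'h1', 'Welcome' ] ], [ 'p', 'Hello World' ] ]
```

The same JSX source could then target React, Vue, or this array form purely by swapping the build-time pragma, which is the decoupling described above.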

@ahmadawais (Contributor) commented Sep 19, 2017

@aduth Thanks for exploring this route. I see a few issues with going with the native approach here, since I think we'd end up with a new WordPress JS framework -- and that's not ideal.

If we use an existing JS framework, e.g. Vue, we can get people building Gutenberg blocks right now instead of teaching them a new framework and documenting it (Vue already has a strong community, documentation, packages, extensions) -- otherwise we're at least a year or two behind the current state of JS frameworks.

And then what if the community rejects it -- in a way, by not contributing to it or not using it?

What's your thought on that?

@youknowriad (Contributor) commented Sep 19, 2017

I see a few issues with going on with the native approach here since with that I think we end up with a new WordPress JS Framework -- that's not ideal

Can you clarify where we are creating a "new" framework in this PR? It seems to me that the block API is needed no matter the approach, and it's the same (aside from providing an extra DOM node, maybe). People don't have to learn anything aside from the block API and their framework of choice.

@ahmadawais (Contributor) commented Sep 19, 2017

@youknowriad You are right about the blocks API, but there is a possibility that making it framework-agnostic means we'll end up writing the framework part ourselves -- isn't that true?

@youknowriad (Contributor) commented Sep 19, 2017

and making it framework agnostic means that we'll end up writing the framework part ourselves -- isn't that true?

No -- it means a blog author could use any framework. We'll still pick a framework for core development, but this makes that choice less critical.

@aduth (Member) commented Sep 19, 2017

Thanks for the feedback @ahmadawais .

and making it framework agnostic means that we'll end up writing the framework part ourselves -- isn't that true?

I don't think this needs to be the case, no. Or at least with an abstraction, it doesn't matter: the underlying implementation could be Vue, React, or a home-grown solution, and could even change from one to the other, so long as the interface of the abstraction remains the same. As a point of backwards compatibility, it's important that the decisions we make today don't suffer churn in a few years' time should the particular framework of choice fall out of fashion or change dramatically in between. But it's also challenging to find the "perfect" unchanging interface that fits all the requirements while remaining familiar and easy to learn (minimizing the knowledge necessary to come up to speed with applying the interface). The original proposal here identified and embraced a common characteristic of virtual DOM interfaces present across React, Vue, and other frameworks: the [ tagName, attributes, ...children ] signature.

At the same time, it explored an even more flexible offering in the form of merely providing a DOM node, leaving it to the block implementer to use their preferred approach. While not as easy to manage, with its flexibility I could imagine adapters being developed to manage the complexities. For example:

// Before:
edit( { attributes, setAttributes, target } ) { 
	if ( target.firstChild ) { 
		Object.assign( target.firstChild.__vue__, attributes ); 
		return; 
	} 

	const child = document.createElement( 'div' ); 
	target.appendChild( child ); 

	new Vue( { 
		el: target.firstChild, 

		// ...
	} ); 
}, 

// After:
edit: fromVueComponent( Vue.component( 'my-block-edit', {
	// ...
} ) )

If we aim for interoperability, we must also do so in a way which treats each option as first-class, not an after-thought with the bare minimum of compatibility. Shared components are a key feature of what we're building: Managing rich content can be very complex, but if we can maintain the complexities of Editable behind the facade of an easily-incorporated shared component, it's of no consequence to the block implementer. If we aim to interoperate, these or equivalent components should be made available. This could be via WordPress- or community-maintained offerings, or better yet, by compiling all components down to a baseline shared understanding of a component. @developit has been exploring some interesting ideas here. An ideal outcome here would be that a block implementer could write their components in whichever framework they prefer, taking advantage of a single set of core shared components, transparently supported by compiling down (at runtime or ahead-of-time) to a common baseline component type.

@ahmadawais (Contributor) commented Sep 20, 2017

@aduth

Thanks for the explanation and I completely agree with you on that.

An ideal outcome here would be that a block implementer could write their components in whichever framework they prefer, taking advantage of a single set of core shared components, transparently supported by compiling down (at runtime or ahead-of-time) to a common baseline component type.

That'd be an ideal situation. Let me know how I can help. I am trying to explore a better abstraction layer as well, but my knowledge of how things are implemented in Gutenberg is limited, and I am reading more and more of the source code as I get time -- to understand, and to be in a better position to contribute.

@yyx990803 commented Sep 20, 2017

Matias reached out to me mentioning this idea, and if I am understanding this correctly, the goal here seems to be decoupling the choice of "framework for developing Gutenberg blocks" from "framework for developing Gutenberg itself", which IMO is the right thing to do.

The proposed Vue usage can be further simplified and I can even implement the adaptor right now (ignoring edge cases not mentioned so far, not tested):

function fromVueComponent (options) {
  let vueInstance

  return ({ attributes, setAttributes, target }) => {
    if (vueInstance) {
      Object.assign(vueInstance.attributes, attributes)
      return
    }

    const adaptorMixin = {
      data: () => ({
        attributes: { ...attributes }
      }),
      methods: {
        setAttributes
      }
    }

    // augment raw options with adaptor mixin
    options = {
      ...options,
      mixins: (options.mixins || []).concat(adaptorMixin)
    }

    vueInstance = new Vue(options).$mount()
    target.appendChild(vueInstance.$el)
  }
}

Differences from original implementation:

  • Gutenberg-injected attributes are namespaced under this.attributes to avoid conflicting with the component's private data.
  • setAttributes is auto injected as an instance method.

Usage:

registerBlockType({
  // ...
  edit: fromVueComponent({
    template: `
      <div> 
        <input :value="text" @input="setAttributes({ text: $event.target.value })"> 
        <h1>{{ text }}</h1> 
      </div>
    `
  })
})

Or even (assuming vue-loader enabled):

import Foo from './Foo.vue'

registerBlockType({
  // ...
  edit: fromVueComponent(Foo)
})
@aduth (Member) commented Sep 20, 2017

@yyx990803 Thanks for weighing in, and for the suggestions to improve the implementation. Yes, your understanding of decoupling the choices is correct, or at least that's what's currently being explored. For additional context, I alluded to this in an earlier conversation in the WordPress Slack ([1], [2], [3]). I'm glad this direction is showing some promise, and I plan to pick up work again on this pull request this week.

@BE-Webdesign

This comment has been minimized.

Show comment
Hide comment
@BE-Webdesign

BE-Webdesign Sep 22, 2017

Contributor
// Vanilla
[ 'section', 
	[ 'header', 
		[ 'h1', 'Welcome' ] ],
	[ 'p', 'Hello World' ]
]

The work here begs the question though: Why did React et. al take the approach of a createElement function? Am I overlooking some critical disadvantage to plain object elements?

@aduth Welcome to Lisp 馃槉. JavaScript is not Lisp though, so there are definitely advantages to using functions and objects. createElement( component, config, children ) basically becomes this (way over simplified see ReactElement):

{
  type: 'section',
  props: {
    children: [
      {
        type: 'header',
        props: {
          children: [
            type: 'h1',
            props: {
              children: 'Welcome'
            }
          ]
        }
      },
      {
        type: 'p',
        props: {
          children: 'Hello World'
        }
      }
    ]
  },
}

createElement() et al. as a function serves as an abstraction to help avoid people from having to write out big boilerplate objects like the above. React uses that object structure to do its magic under the hood, while also providing validation etc. Starting with a nice data format like you are coming up with, is a great start, then we could build our createElement alternative around that to provide validation, etc.

One advantage of JS object literals over arrays is that we can name our values, whereas in an array format our array value names are implicit in their index. In the proposed array syntax, component would be [0], and config/children would be [1]. If we ever wanted to change things around it would be more difficult using the array syntax, as opposed to named properties.


@dmsnell

dmsnell Sep 22, 2017

Contributor

Welcome to Lisp 😊

@BE-Webdesign I don't think it's necessarily Lisp that we end up dealing with here but "everything is data," of which Lisp is one materialization.

I'm a fan of the array-based approach because individual nodes are simple and arrays are fast. Actually, we can easily give names to them here with destructuring.

const [ type, [ name, attrs, children ] ] = [ 'tag', [ 'p', {}, [] ] ];

with such a simple data structure we can allow for functions to provide an API around the underlying specifics

const imageBlock = ( src, caption, attrs = {} ) => [ 'block', [
	'core/image',
	{ ...attrs, src },
	[ figure( [ img( src ), figcaption( caption ) ] ) ]
] ];

^^^ Something like that. We can use functions however we want if the underlying model is a tree. React was good with this, but we didn't really get access to the tree, which was problematic in my opinion (but good for performance!)
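As a purely illustrative sketch of what that tree model buys us (an assumption for demonstration, not the PR's actual implementation): once elements are plain arrays, a save-time serializer can be a short recursive walk.

```javascript
// Sketch of a serializer over the "Vanilla" tree. Assumed node shape:
// [ tagName, attrs?, ...children ], where the attrs object is optional
// and children may be strings or nested nodes.
function renderToString( node ) {
	// Strings are text nodes.
	if ( typeof node === 'string' ) {
		return node;
	}

	const [ tag, maybeAttrs, ...rest ] = node;

	// attrs is present only when the second member is a plain object.
	const hasAttrs = typeof maybeAttrs === 'object' &&
		maybeAttrs !== null &&
		! Array.isArray( maybeAttrs );

	const attrs = hasAttrs ? maybeAttrs : {};
	const children = hasAttrs ? rest : node.slice( 1 );

	const attrString = Object.entries( attrs )
		.map( ( [ key, value ] ) => ` ${ key }="${ value }"` )
		.join( '' );

	return `<${ tag }${ attrString }>` +
		children.map( renderToString ).join( '' ) +
		`</${ tag }>`;
}

const html = renderToString(
	[ 'section',
		[ 'header',
			[ 'h1', 'Welcome' ] ],
		[ 'p', 'Hello World' ],
	]
);
// html === '<section><header><h1>Welcome</h1></header><p>Hello World</p></section>'
```

A real serializer would also need HTML escaping and void-element handling, but the point is that nothing here depends on any framework.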


@BE-Webdesign

BE-Webdesign Sep 23, 2017

Contributor

I don't think it's necessarily Lisp that we end up dealing with here but "everything is data," of which Lisp is one materialization.

Yup, that was mainly for aduth, because I thought he would enjoy Lisp a lot. The syntax of the array stuff is pretty similar to parens syntax in Lisp. I am far from a Lisp expert, but this array syntax just reminded me of it, especially when aduth said he could not see a big difference between needing a function vs. having a list, which is basically what Lisp is, (operator ...arguments). In short, aduth basically invented Lisp for this PR 😊

( block ( core/image attrs ( figure attrs ( conj ( img src ) ( figcaption caption ) ) ) ) )

Replace the ( with [ and it is not too far off, which is pretty cool.

I'm a fan of the array-based approach because individual nodes are simple and arrays are fast. Actually, we can easily give names to them here with destructuring.

I completely misunderstood the purpose of this syntax and what it is being used for. I thought it was part of the block state, so I was not thinking about this issue the same way; that is an oopsie on my part. I checked out the Array performance, and it pretty much crushes everything else, so thank you for the knowledge drop.

How would we handle additional parameters beyond just the [ type, attrs, children ] format that many of these libraries use? I don't know how we would go about handling additional unforeseen changes to the array structure elegantly. Since children can sometimes be at [1], adding a context value to the array as [ type, attrs, children, context ] would mean that handling [ type, children, context ] needs extra logic, and any further additions would keep building on that complexity, whereas objects would not have that problem, because ordering does not matter. So potentially the solution is to never change this?
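To illustrate the cost being described, here is a hypothetical normalizeNode helper (an illustrative sketch, not a real Gutenberg function) showing the disambiguation logic a positional format forces once members become optional:

```javascript
// Sketch: normalize optional members so every node becomes exactly
// [ type, attrs, children, context ] before further processing.
function normalizeNode( node ) {
	const [ type, ...rest ] = node;

	// attrs is present only when the second member is a plain object;
	// otherwise children start at index 1.
	const hasAttrs = typeof rest[ 0 ] === 'object' &&
		rest[ 0 ] !== null &&
		! Array.isArray( rest[ 0 ] );

	const attrs = hasAttrs ? rest[ 0 ] : {};
	const children = hasAttrs ? rest.slice( 1 ) : rest;

	// Context defaults to empty; a future positional member would need
	// yet another branch here, which is exactly the fragility at issue.
	return [ type, attrs, children, {} ];
}

normalizeNode( [ 'p', 'Hello' ] );
// → [ 'p', {}, [ 'Hello' ], {} ]
normalizeNode( [ 'p', { class: 'x' }, 'Hello' ] );
// → [ 'p', { class: 'x' }, [ 'Hello' ], {} ]
```

Confining this branching to a single normalization boundary is one way to keep the rest of the code on a fixed-arity shape.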

with such a simple data structure we can allow for functions to provide an API around the underlying specifics

Yup we are on the same page, that is what I was trying to say above, but I am not good at communicating lol. From what I can tell buildVTree is the start of the internal API for handling the use of the array syntax.


@gziolo

gziolo Sep 23, 2017

Member

This article explains how the Ionic team came up with a framework-agnostic approach using the Web Components API:
http://blog.ionic.io/the-end-of-framework-churn/

The following statement is quite true:

Framework Churn: that breakneck pace of creation and abandonment that plagues the JavaScript community. Here one day, out the next. Hot today, obsolete in a year. Number one loved on the Hacker News frontpage, now number one hated on the Hacker News comments.


@BE-Webdesign

BE-Webdesign Sep 23, 2017

Contributor

Awesome stuff @gziolo, thank you for sharing that.


@dmsnell

dmsnell Sep 23, 2017

Contributor

@BE-Webdesign I'm reluctant to draw this out as it's somewhat of a tangent, but I think a few of your quotes are notable.

The syntax of the array stuff is pretty similar to parens syntax in Lisp… this array syntax just reminded me of it

The similarity isn't superficial! In Lisp, lists are denoted with parens, while in JavaScript arrays are denoted with square brackets. That's it. One of the aspects of Lisps is that they are just lists, in the very real sense that we talk about when we deal with JavaScript arrays.

what Lisp is, (operator ...arguments)

Here is the interesting bit: these lists of lists (Lisp programs end up being trees) only form a program when run by an appropriate interpreter. The first operator isn't exactly an operator so much as it's just a name. We could build a Lisp (or a Lisp macro) to simply ignore any "function call" whose name starts with x-; this is some of the value in having code-as-data (homoiconicity) where we can manipulate the program as easily as we can manipulate a POJO (plain-old-JavaScript-object).

How would we handle additional parameters beyond just the [ type, attrs, children ] format that many of these libraries use?

Arrays are an optimization: in runtime speed, code size, and developer time. However, they are inflexible in this way. We could use a POJO as a class to carry the same information and give it names. On the other hand, we see an abundance of this data structure because its pattern is widespread in reality. What kind of context might we be passing down? If it relates to the render of the component, it may not come from the data structure but from the runtime (the interpreter), and could be carried along "from the outside" and injected into the render().
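A tiny sketch of that "context from the outside" idea, with made-up names: the interpreter threads context through the walk as an argument, so the node format itself never has to grow a context slot.

```javascript
// Sketch: the runtime (interpreter) carries context alongside the tree.
// Node shape stays the minimal [ type, ...children ]; context is injected
// by the walker, not stored in the data.
function walk( node, context ) {
	// Text nodes can consult the injected context without the tree
	// ever knowing about it.
	if ( typeof node === 'string' ) {
		return context.uppercase ? node.toUpperCase() : node;
	}

	const [ type, ...children ] = node;
	return [ type, ...children.map( ( child ) => walk( child, context ) ) ];
}

const shouted = walk( [ 'p', 'Hello World' ], { uppercase: true } );
// shouted is [ 'p', 'HELLO WORLD' ]
```

The same data produces different renders purely by varying what the runtime injects, which is the flexibility being described.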


@BE-Webdesign

BE-Webdesign Sep 23, 2017

Contributor

On the other hand, we see an abundance of this data structure because its pattern is widespread in reality.

Yup, which is why "So potentially the solution is to never change this?" is probably fine.


@BE-Webdesign

BE-Webdesign Oct 3, 2017

Contributor

I think this would be a great path forward for interop. We can even combine #2791 with this idea. By using this array syntax, we could also create a HOC function like [ WebComponent( 'my-element' ), { props, attrs }, ...children ] to handle web component interop. Maybe at the start, we can try to have everyone build towards web components as the main supported interop layer. By having this underlying array syntax, we will remain flexible for the future! Also, it is just a cool idea 😄
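Purely as a sketch of how such a wrapper might be detected by a renderer (the WebComponent() function here is hypothetical, not an existing API): it can return a tagged descriptor that the interop layer recognizes as a custom-element mount point.

```javascript
// Sketch: a hypothetical WebComponent() HOC returns a descriptor that
// a renderer could distinguish from plain tag-name strings, and later
// mount via document.createElement( tagName ) in a DOM environment.
const WEB_COMPONENT = Symbol( 'web-component' );

function WebComponent( tagName ) {
	return { kind: WEB_COMPONENT, tagName };
}

// The renderer's type check: plain strings render as HTML tags,
// descriptors hand off to the web component interop layer.
function isWebComponent( type ) {
	return typeof type === 'object' &&
		type !== null &&
		type.kind === WEB_COMPONENT;
}

const node = [ WebComponent( 'my-element' ), { src: 'a.png' } ];
// isWebComponent( node[ 0 ] ) === true; node[ 0 ].tagName === 'my-element'
```

Using a Symbol tag avoids collisions with ordinary attribute objects, similar in spirit to how React tags its own element objects.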


@effulgentsia

effulgentsia Oct 7, 2017

Just curious: rather than build a VDOM abstraction on top of React's, why not use React's VDOM directly, and allow React wrappers/adapters to bring in Vue components, Web components, and others? As an example, https://github.com/akxcv/vuera seems like a really cool approach.


@aduth

aduth Oct 9, 2017

Member

@effulgentsia Aside from interoperability, one of the other original objectives with this pull request was to explore solutions to the challenge of representing the value of rich text in the state of the editor, where currently we use React elements as a convenience for representing the structure of content. This has a number of not-so-nice consequences, so a less framework-specific approach (the "Vanilla" syntax) was explored.

From this, it seemed natural that this structure could serve the role of a common baseline to target for representing block UIs themselves. At least in the case of a save implementation (serializing content for the database), we don't even need to handle interactivity, so a simple static structure is easy to represent.

For the editor interface, it's not quite as simple: in the editor, a block is long-lived and will change over time. Exposing the DOM node as a mount target provides much more flexibility, but to your point, I could see this working equally well with a React wrapper, particularly if we can achieve transparency where the block implementer doesn't need to be aware that React exists (perhaps abstracted behind a function wrapper).

#2791 is similar to this, except instead of React components, the common target is web components, where wrappers could exist to render React or Vue components within the web component.


@aduth

aduth Jan 30, 2018

Member

As we move to polishing an initial release of Gutenberg, we've been doing some triage of old pull requests. The ideas put forth here are still valid and interesting, but simply in the name of shipping, we're going to close this one for now. That doesn't mean it's not a good idea, nor that it can't be revisited and reopened.

Some of the ideas explored here are being adapted into other change proposals, as in the case of Editable value refactor with #4049 (the same array syntax as implemented here).

Framework interoperability is certainly not off the table, and continues to be compatible into the future with wrapper functions like those discussed in the comments here.


@aduth aduth closed this Jan 30, 2018

@aduth aduth deleted the try/elements-interop branch Jan 30, 2018
