Parse inline markup during the parse phase (use recursive descent parser) #61

Open
mojavelinux opened this Issue Jan 8, 2013 · 27 comments

@mojavelinux
Member

mojavelinux commented Jan 8, 2013

Currently, the parsing of inline content happens during rendering. This limits the information you have in the document after parsing. Instead, Asciidoctor should parse all text extents into inline nodes during parsing.

This requires moving the substitutions from the rendering phase to the parsing phase.

It also means that each line in the buffer will become an array of inline nodes that represent the chunked text, toggling between plain text and elements like links and images.

Since this has the potential to slow down a single pass parse/render, we may want to have a flag which controls the phase in which inline text is parsed.
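To make the proposal concrete, here is a minimal sketch of the chunked-line idea (`InlineNode` and `chunk_inline` are invented names, not Asciidoctor's API), using links as the only parsed element:

```ruby
# Hypothetical representation: a line of source text becomes an array of
# inline nodes toggling between plain text and parsed elements like links.
InlineNode = Struct.new(:context, :text, :target)

# Chunk a line into :text and :link nodes in a single scan pass.
def chunk_inline(line)
  nodes = []
  pos = 0
  line.scan(%r{https?://\S+}) do
    m = Regexp.last_match
    nodes << InlineNode.new(:text, line[pos...m.begin(0)]) if m.begin(0) > pos
    nodes << InlineNode.new(:link, m[0], m[0])
    pos = m.end(0)
  end
  nodes << InlineNode.new(:text, line[pos..]) if pos < line.length
  nodes
end

chunk_inline('see https://asciidoctor.org for details').map(&:context)
# => [:text, :link, :text]
```

A real implementation would of course cover all inline elements and substitutions; the point is only the shape of the resulting node array.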

@mojavelinux

Member

mojavelinux commented Jan 8, 2013

Btw, this will not only make it easier to index the document, it will also make Asciidoctor capable of being its own API for a syntax highlighting engine. No need for a project like CodeRay to parse AsciiDoc. Instead, it can use Asciidoctor to get the structure down to the level of inline nodes and then format them accordingly. This is an alternative way to get output instead of using backend (render) templates.

@mojavelinux mojavelinux added this to the v2.0.0 milestone Jul 16, 2014

@mojavelinux mojavelinux self-assigned this Jul 16, 2014

@mojavelinux mojavelinux changed the title from Parse inline markup during the parse phase to Parse inline markup during the parse phase (use recursive descent parser) Jul 31, 2014

@mojavelinux

Member

mojavelinux commented Sep 23, 2014

I think the strategy to take here is to start developing an inline parser in Asciidoctor alongside the existing streaming transformer. Once it's fully fleshed out, we can switch to it. But the benefit is that you start to get something to use that at least hits the major syntax sooner rather than later. In other words, we can roll it out gradually. I envision the inline parser to be something you can call on a given block. Keep in mind that not all blocks in Asciidoctor have parsed text, or the text is parsed differently, so it makes sense that it's available as an API (at least in the near term) on the node.
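The "API on the node" idea might look roughly like this, a sketch with invented names (`Block`, `parse_inline`) and a deliberately toy rule that only recognizes `*strong*` spans:

```ruby
# Inline parsing as a method you call on a block, since not every block
# has parsed text (verbatim blocks keep their source untouched).
class Block
  attr_reader :context, :source

  def initialize(context, source)
    @context = context
    @source = source
  end

  def parse_inline
    # Only prose-like contexts get inline nodes in this sketch.
    return [] unless %i[paragraph quote sidebar].include?(context)
    # Split on *strong* spans, keeping the separators via the capture group.
    source.split(/(\*[^*]+\*)/).reject(&:empty?).map do |chunk|
      chunk.start_with?('*') ? [:strong, chunk.delete('*')] : [:text, chunk]
    end
  end
end

Block.new(:paragraph, 'a *bold* word').parse_inline
# => [[:text, "a "], [:strong, "bold"], [:text, " word"]]
```

Exposing it per-node rather than as a whole-document pass is what allows the gradual rollout described above.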

@mojavelinux mojavelinux referenced this issue in asciidoctor/asciidoctor-gradle-plugin Nov 18, 2014

Closed

The star character * causes issues with v1.5 #134

@mojavelinux
Member

mojavelinux commented Nov 18, 2014

Use case to test for proper parsing once we switch over.

Use `/*` and `*/` for multiline comments.
@benignbala

benignbala commented Oct 17, 2015

Should I have a go at this using ANTLR4 (https://rubygems.org/gems/antlr4/versions/0.9.2)? Thanks

@benignbala

benignbala commented Oct 17, 2015

Or rather, shall we try it with ANTLR4 in Java in the asciidoctorj project?

@mojavelinux

Member

mojavelinux commented Oct 17, 2015

There's a project set up with ANTLR for this experimentation. See https://github.com/asciidoctor/asciidoc-grammar-prototype

@mojavelinux
Member

mojavelinux commented Feb 25, 2017

Test case:

The following should not be matched as emphasis (i.e., italics):

*_id* word_
@elextr

elextr commented Feb 27, 2017

I provided advice to another project that was trying to develop an AsciiDoc implementation in C++ for an embedded project. That project has now been dropped (the embedded project, and therefore the AsciiDoc implementation 😞), but here are some lessons for future implementers of this parser.

Using the traditional parser tools is problematic: they all assume context-free grammars, and AsciiDoc is definitely not a CFG. Even the tools' various ways of handling contextual dependence did not seem sufficient for this case (YMMV, but don't waste too much time on it; so far nobody has made it work). The example in the post above is a simple context dependence (parsing the _id as italic should stop if it hits the end of any quotes it is nested in; that's the context it depends on).

The big context dependence is of course the different parsing between blocks (and changing that with the subs= option, since it changes the context dependence dynamically during parsing). So you have to recognise the block type or subs option early and apply it to the appropriate part of the input, even before you have parsed that input.

An alternative that was explored was to parse everything and then revert the parses that were not required back to text. Unfortunately, what is parsed affects how other things parse. For example, if quotes are not parsed because the opening is inside a macro target and the ending is outside, that changes if it's subsequently discovered that macros are not in the subs= list, e.g.

[subs=-macros]
the italicised part of http:xxx__yyy[]__ frobnicates the foo bar

It's not entirely bad news: the top-level structure of a document (sections, blocks, lists, etc.) is a CFG and defines most of the context for parsing the lower levels, so it seems possible to use a two-pass approach where the structure and attribute lists are parsed first, and then the block contents are parsed based on the context that specifies.

That's where it was at when things stopped. Hope this will help future implementers avoid many of the same issues (I'm sure there are plenty still left for them to find).
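The two-pass idea described above can be sketched in a few lines (all names here are illustrative, not from any real implementation): pass 1 recognizes structure and attribute lists, and only records which substitution context each block should be parsed with; pass 2 would then run the inline parser chosen by that context.

```ruby
# Pass 1: walk the lines, recognizing attribute lists like [subs=...]
# and tagging each content line with the context it establishes.
# Pass 2 (not shown) would parse each block's source per its :subs value.
def parse_two_pass(lines)
  blocks = []
  subs = :normal
  lines.each do |line|
    if line =~ /\A\[subs=([^\]]+)\]\z/
      subs = $1.to_sym          # attribute list sets context for next block
    elsif !line.strip.empty?
      blocks << { source: line, subs: subs }
      subs = :normal            # context applies only to the following block
    end
  end
  blocks
end

parse_two_pass(['[subs=-macros]', 'some text', 'more text'])
# => [{source: "some text", subs: :"-macros"}, {source: "more text", subs: :normal}]
```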

@mojavelinux

Member

mojavelinux commented Feb 27, 2017

Thanks for this input. I'll study it in detail.

Just to share the idea in my head: I am definitely not planning to implement it as a single parser / grammar. My idea is to keep the selective parsing I am doing now, which is working fantastically, but switch to a parser / grammar for the inline content, where the context is fixed. The problem right now is only there. The rest of AsciiDoc is actually easy to parse using a line-based approach.

I haven't validated that idea yet, but from my experience, I'm very confident it's going to work.

@elextr

elextr commented Feb 27, 2017

Well, the inline stuff is in fact the part that depends on the context it's in (see the subs= problem).

A hand-carved RD or other parser that passes lots of context down to the parsing of nested constructs is probably possible, if messy due to all the context tests. But because of the context dependence I don't think it can be memoised, so it will likely have the dreaded exponential time performance. Though that may not be a problem if it is constrained to previously determined, limited segments of the input as you propose.

@benignbala

benignbala commented Feb 27, 2017

@mojavelinux, but there are several benefits to having a complete (multi-pass) state-based parser in addition to the streaming parser that Asciidoctor currently has. The main benefit is that once we have the complete AST, it becomes very simple to translate from AsciiDoc to any other output format (esp. thinking of ReST, to make adoc files generate .rst files for Sphinx). So, as @elextr suggests, a hand-written recursive descent parser is a nicer solution, although performance-wise it will be a lot slower.

My idea is to drive the state transitions through generated code, with a JSON/XML file driving the syntax/grammar logic, so that we can offer the parser API in more languages by adding the necessary templates for each language.

@ghost

ghost commented Feb 27, 2017

@benignbala IMO conversion to other formats can easily be done like this: AsciiDoc > DocBook > anything else, using the huge existing ecosystem for dealing with DocBook and XML in general. In other words, DocBook could already be considered a viable AST.

@jirutka

Member

jirutka commented Feb 27, 2017

My idea is to write a standalone AsciiDoc parser in PEG that would output an AST represented as JSON. This would strictly decouple the parser and renderer.

I'm not afraid of poor performance. I'm currently writing a shell parser in LPeg (PEG for Lua); the grammar is quite complex, but the performance is much better than I expected. I'm thinking about Rust for an AsciiDoc parser, and I'm quite confident that it would beat the current parser in performance without sacrificing code quality, readability, or maintainability.

@Mogztter

Member

Mogztter commented Feb 27, 2017

a Rust parser for AsciiDoc would be awesome! 👍

@Mogztter
Member

Mogztter commented May 1, 2017

@jirutka

Member

jirutka commented May 1, 2017

Yes, I know about nom, but haven't tried it yet.

@s-leroux

s-leroux commented Mar 14, 2018

Hi @mojavelinux,
What is the current status of this issue? Is it still WIP, or has the idea been dropped?

Two reasons I ask:

  1. The docs make explicit reference to this issue in the "Extension" section.
  2. :) I would find it very useful to extract link targets from the AST to check for broken links before rendering AsciiDoc documents.
@mojavelinux

Member

mojavelinux commented Mar 15, 2018

Definitely not dropped. The best way to describe the current state is "not enough money to work on it yet". (to clarify what I mean, if I were to dive into it right now, I'd run out of money).

I completely understand your use case. You may also be interested in the fact that I'm starting to work on a validator for AsciiDoc over here (https://github.com/opendevise/textlint-plugin-asciidoc), which by its very nature has to parse inline elements. This will be essential for Antora.

Before we can put an inline parser in Asciidoctor, we have to write one in a lab to find out the unknowables. Will there be decisions we have to make about the syntax in order to make the switch? Will there be a migration? etc, etc. I'm coming up very rapidly on needing an inline parser for a variety of use cases, so it is still very high on my list of efforts to start.

@elextr

elextr commented Mar 15, 2018

@mojavelinux just to let you know that, after interruptions due to "real life", I have started again to work on a C++ AsciiDoc implementation that does use a formal lexer and parser, but very much hand-rolled, as none of the standard approaches I can find handle the level of contextual sensitivity that is part of AsciiDoc. To be clear, no traditional tools handle things like "subs=", which actually changes what is parsed dynamically in the document.

The design process indicates that there will be some differences due to "proper" lexing and parsing compared to the substitution approach used by Asciidoc Python and (IIUC) at least partly by Asciidoctor. All those "depends on the order of substitutions" issues (and benefits) should go away.

As this is only a spare time activity it is progressing slowly, but it will be interesting to see how it compares as it matures.

@mojavelinux

Member

mojavelinux commented Mar 15, 2018

@elextr That's great to hear!

What I know for sure is that there's no way a parser could process the whole document at once. AsciiDoc is too contextual for that. What needs to happen is that the blocks have to be identified first (as it descends), which is the approach I've started to use in the validator parser. (This is easily done by hand). Then, based on the block context and subs overrides, the appropriate inline parser is selected to descend into the block to process the inline nodes. (This is where a PEG parser comes in). In the end, it's a hybrid solution. I believe that will give us exactly the results we want.
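The dispatch step of that hybrid plan (block context plus subs override selects an inline parser) can be sketched as follows; the parser table and names are placeholders, not real Asciidoctor internals:

```ruby
# After block structure is identified, each block picks its inline parser
# from its context and any subs override, instead of one global grammar.
INLINE_PARSERS = {
  normal:   ->(text) { [[:parsed, text]] },  # stand-in for the full inline grammar
  verbatim: ->(text) { [[:text, text]] },    # listing/literal: leave source as-is
  none:     ->(text) { [[:text, text]] }
}

def inline_parser_for(context, subs_override = nil)
  key = subs_override ||
        (%i[listing literal].include?(context) ? :verbatim : :normal)
  INLINE_PARSERS.fetch(key)
end

inline_parser_for(:paragraph).call('some *text*')  # => [[:parsed, "some *text*"]]
inline_parser_for(:listing).call('puts 1')         # => [[:text, "puts 1"]]
```

The structural pass stays hand-written and line-based; only the lambdas here would be replaced by real (e.g. PEG-generated) inline parsers.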

@mojavelinux

Member

mojavelinux commented Mar 15, 2018

I also want to point out that we're not shooting for something pure. All we're shooting for is something that doesn't use multiple regexp/gsub passes over the same text (and which gives us an AST).

@elextr

elextr commented Mar 15, 2018

@mojavelinux yes, parsing structural markup first certainly helps; it's one of the things I tried in the past.
But inline parsing still has problems.

An example that caused me headaches. (Note I love PEG parsers, I have made several over the years, but it would need some sort of extended capability beyond normal PEG to handle the following).

:blah__blah: foo

[subs=attributes]
blah{blah__blah}blah__

vs

:blah__blah: foo

[subs=quotes]
blah{blah__blah}blah__

Same inline markup, but different AST depending on the subs value. So your PEG has to depend on that somehow.

And there is no simple AST that can record both the attribute and the quotes at the same time, since they overlap. Fuzzy parsers, and ASTs with multiple options that are later collapsed, are of course an option, but they are not simple PEG parsers.

And then there is the question of what that resolves to in the absence of an explicit subs= attribute to limit it to either quotes or attributes; by default subs has everything set. Asciidoctor resolves it in favour of quotes due to a side effect of the order of substitution.

But the first token is the attribute token `{`, so a "real" parser will recognise the attribute and not recognise the quotes (and must not choke on the unmatched quotes either), which is the opposite behaviour from current implementations.

To date I am just taking the "real parsers are different" route and will see how much difference it makes in the real world (not much, I suspect).

@mojavelinux

Member

mojavelinux commented Mar 15, 2018

Same inline markup, but different AST depending on the subs value. So your PEG has to depend on that somehow.

Easy. You just use a different PEG for the two cases. Not necessarily a traditional thing to do, but what parsing AsciiDoc calls for. That's what I mean when I say a hybrid parser.
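A toy illustration of "a different grammar per subs value" (the regexes below are crude stand-ins, not real grammar rules): running different rule sets over the example string above yields different, overlapping matches, which is exactly the ambiguity being discussed.

```ruby
SOURCE = 'blah{blah__blah}blah__'

ATTRIBUTE_REF = /\{[^}]+\}/  # active only when subs includes attributes
EMPHASIS      = /__.+?__/    # active only when subs includes quotes

# Chunk the source against one rule set, keeping unmatched spans as text.
def parse_with(rule, source)
  nodes = []
  pos = 0
  source.scan(rule) do
    m = Regexp.last_match
    nodes << [:text, source[pos...m.begin(0)]] if m.begin(0) > pos
    nodes << [:match, m[0]]
    pos = m.end(0)
  end
  nodes << [:text, source[pos..]] if pos < source.length
  nodes
end

parse_with(ATTRIBUTE_REF, SOURCE)  # finds {blah__blah}
parse_with(EMPHASIS, SOURCE)       # finds __blah}blah__ instead
```

The two rule sets cannot both win on the same span, so selecting the grammar by the resolved subs value sidesteps the overlap rather than representing it in one AST.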

@mojavelinux

Member

mojavelinux commented Mar 15, 2018

But the first token is the attribute token { so a "real" parser will recognise the attribute and not recognise the quotes

I think either attributes have to be replaced first, or we have to change the rules about boundaries. I agree this is something that will require more thought.

@mojavelinux

Member

mojavelinux commented Mar 15, 2018

On second look, I understand better what you're pointing out. Yes, I agree there are going to be differences in the edge cases. These changes in the syntax are going to have to happen if we expect to move past this problem. We have to evolve.

@elextr

elextr commented Mar 16, 2018

Easy. You just use a different PEG for the two cases.

Let's see: six binary options for subs=, that's 64 combinations, so 64 parsers. I suppose that is manageable. There might be some sharing of common parts as well.

Not necessarily a traditional thing to do, but what parsing AsciiDoc calls for.

Indeed, AsciiDoc is not a context-free grammar, so traditional context-free techniques should not be expected to work.

It's good to see that you understand that a simple PEG parser isn't the solution by itself. I guess I harp on it a bit because many previous posts on the topic by various people seem to suggest it's all you need, and it would be sad to see people wasting their time pursuing that path.
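The arithmetic behind "64 parsers" checks out: Asciidoctor's subs attribute toggles six substitution groups, so treating each on/off combination as its own grammar gives 2^6 distinct parsers (before any sharing of common rules):

```ruby
# The six substitution groups Asciidoctor's subs attribute toggles.
SUB_FLAGS = %w[specialcharacters quotes attributes replacements
               macros post_replacements]

combinations = 2**SUB_FLAGS.size  # => 64
```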

@s-leroux

s-leroux commented Mar 16, 2018

@mojavelinux, @elextr Great to hear things are evolving concerning this issue.

These changes in the syntax are going to have to happen if we expect to move past this problem. We have to evolve.

If I understand correctly, we may have to expect syntax changes in AsciiDoc source documents before we can see inline nodes parsed into the AST? Do you already have some ideas in mind? Or is there perhaps already another issue to discuss those changes?
