
Inline Markup Literals #26

Merged
merged 3 commits into from Sep 21, 2018

Conversation

back2dos
Member

@back2dos back2dos commented Aug 6, 2017

I actually wanted to brood about this a little while longer, but @mrcdk's initiative kinda forced my hand to present an alternative now.

Rendered version here.

@back2dos back2dos mentioned this pull request Aug 6, 2017
@nadako
Member

nadako commented Aug 6, 2017

I like this in general, but I'm not sure if it's possible to implement such parsing in a straightforward way, given Haxe's current parser. The inline markup block should ideally come as a single token from the lexer, but the lexer has no idea what the parser expects or whether it's at the "start of an expression", so when it finds <tag, it just emits two tokens (< and the tag identifier). Maybe the parser could toggle some tag-lexing mode; I don't know, I'll have to look into it.

@ibilon
Member

ibilon commented Aug 7, 2017

An issue is that a tag can be valid haxe code.

https://try.haxe.org/#Ca85f line 6: <b> isn't a tag
It's a specifically constructed example, but could happen in real code.

@RealyUniqueName
Member

Maybe some other chars could be used instead of < and >?

@benmerckx
Contributor

benmerckx commented Aug 7, 2017

An issue is that a tag can be valid haxe code.
<b> isn't a tag

The proposal states:

If, wherever an expression is expected to begin, the character < is found followed directly (i.e. no whitespace in between) by a letter, it signifies the start of an inline markup expression.

Which means a<b>c; should parse as before, because <b> is preceded by a. After a, which is an expression, the parser expects valid cases which may come after an expression, such as a.b or a<b, but not the start of a new expression.
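For illustration, that position-dependent rule can be sketched with a toy check (Python; the function and its names are mine, not part of the proposal or Haxe's actual parser):

```python
import re

# Toy illustration of the proposal's rule: '<' opens a markup literal only
# where the parser expects the *beginning* of an expression, and only when
# it is directly followed by a letter (no whitespace in between).
def classify(source, at_expression_start):
    """Return how a leading '<' in `source` should be treated."""
    if source[0] != '<':
        return 'not-lt'
    if at_expression_start and re.match(r'<[A-Za-z]', source):
        return 'markup-start'
    return 'less-than'  # ordinary comparison operator

# In `a<b>c;` the '<' appears *after* the expression `a`,
# so it stays a comparison:
assert classify('<b>c;', at_expression_start=False) == 'less-than'
# At expression start, '<b' opens a markup literal:
assert classify('<b>c;', at_expression_start=True) == 'markup-start'
# But '< b' (whitespace after '<') never does:
assert classify('< b', at_expression_start=True) == 'less-than'
```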

@ibilon
Member

ibilon commented Aug 7, 2017

What about a<<b>c with << overloaded? a<(<b>c) or (a << b) > c
IIRC << is already annoying for the lexer (parser?)

But yeah, we could just use some other characters; the easiest (I guess) would be to introduce a new one like # or `, so that it can't currently be valid haxe code.

@benmerckx
Contributor

benmerckx commented Aug 7, 2017

The << token will be tokenized before <. The current rules wouldn't change, which means that if you wanted to compare one of these markup literals you'd have to write a < <b></b>, where a<<b></b> would, somewhat understandably, result in an error. While I can't be sure there's no edge case at all, I think this proposal is of little use if other characters are chosen.

@nadako
Member

nadako commented Aug 7, 2017

@ibilon I think it's the >> which is annoying because of type params.

Anyway, one of the easy ways would be to require a special character to make lexer enter tag parsing mode, not unlike how it's done for strings. E.g. var tag = `<div></div> (we don't need a closing backtick here because we apply the tag handling rules described in the proposal).

@RealyUniqueName
Member

I'd prefer something like @<tag>...

@back2dos
Member Author

back2dos commented Aug 7, 2017

@ibilon to build on what the others said: yes, a<<b>>c could be interpreted as a << b >> c or a < <b> > c, but always has and always will parse as the former, because Haxe always chooses the longest valid token. Examples:

  • 1.e8 parses as a float literal, not as accessing the field e8 on 1 ... if you want that, you need to write 1 .e8.
  • a != b and a ! = b mean different things just as well, where the latter is unary postfix ! and assignment.
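That longest-token (maximal munch) behavior can be sketched with a toy lexer (Python, illustration only; the operator set is abridged and does not reflect Haxe's real lexer):

```python
# Toy maximal-munch lexer: at each position it takes the longest operator
# that matches, which is why `a<<b>>c` can never lex as `a < <b> > c`.
OPERATORS = ['<<', '>>', '!=', '<', '>', '!', '=']  # longest first

def lex(source):
    tokens, i = [], 0
    while i < len(source):
        if source[i].isspace():
            i += 1
            continue
        for op in OPERATORS:
            if source.startswith(op, i):
                tokens.append(op)
                i += len(op)
                break
        else:  # identifier / number character
            tokens.append(source[i])
            i += 1
    return tokens

assert lex('a<<b>>c') == ['a', '<<', 'b', '>>', 'c']
# `a != b` vs `a ! = b` tokenize differently, as noted above:
assert lex('a != b') == ['a', '!=', 'b']
assert lex('a ! = b') == ['a', '!', '=', 'b']
```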

Matters are relatively simple (conceptually - as for implementation, that seems to be a different matter): only when the parser begins parsing a new expression does it allow for a tag to occur and there is no room for ambiguity there, because < is syntactically not allowed in that place.

@nadako I've tried to get some understanding of how the parser works and found very little that would make me hopeful. The one solution I see is that the parser may be able to set some shared mutable state to tell the lexer to expect a markup literal, but peeking ahead would defeat this hack entirely: if (foo) bar; <div /> and we get "unexpected <".

So I see three options:

  1. move forward with Breaking if (cond) expr; else expr; for haxe 4? haxe#6477 and rely on said hack, which is filthy as hell but would actually work (right?)
  2. lex < followed directly by a word character as an inline markup literal. It's not what I originally proposed, but it is consistent with giving longer tokens preference. It does break a specific class of unformatted code ... we already have a migration tool for that though ;)
  3. use the backtick.

I think the first approach is a disaster waiting to happen, but you tell me. The third solution is the most pragmatic and I could live with it. No point in obsessing over one character. I'm not really looking forward to having to explain for the tenth time why the backtick is there, but it's a lot better than nothing and we can still look into getting rid of it in the future.

That said, I'm actually intrigued by the second one (I had never seriously considered it until this point). There's the issue of if (a <b) leading straight to the confusing "unterminated inline markup", but I think this could be prevented by allowing unterminated markup in the lexer (with a bool flag, of course, to discern the two). When parsing, unterminated markup as the first token in expr would in fact raise "unterminated inline markup", while in expr_next both terminated and unterminated markup would result in "ambiguous <: must be followed by whitespace" (feel free to suggest a more easily understood way of putting that). It's not perfect, but then again what is. I'll let you guys think about that ;)

@back2dos
Member Author

back2dos commented Aug 7, 2017

But yes, why not @. Don't see much of a difference there. Would leave backtick free for other things.

@frabbit
Member

frabbit commented Aug 7, 2017

Not directly related, but i actually have a (working) prototype which allows an alternative calling syntax for macros via metadata like @-myMacro expr or @-myMacro(config) expr. The - is required to distinguish regular metadata from macro calls. That idea also allows block syntax like in:

@-myMacro { doSomething(); }

which gets translated to

myMacro({ doSomething })

or

@-myMacro(a,b,c) { doSomething }

which gets translated to something like
myMacro({ doSomething }, [a,b,c]);

@ncannasse
Member

@nadako's remarks are quite valid; you need to detail how this integrates with the existing Haxe lexer. Appending @ does not seem very elegant.

Actually we can detect an XML start in parse_expr by matching < (ident) (ident = const(string))* >, but then we need to switch the lexer into "raw mode" until we have closed all idents. That's feasible, but wouldn't that be a problem for IDEs?

For the record I found both Scala and VB have XML literals.

Also, it might be misleading to allow what looks like "XML literals" without actually enforcing XML (or HTML) syntax. I can see how one would complain about the compiler not giving an error when writing the following:

var x = <xml> <oops/ </xml>

So maybe using xml tags with this is a bad idea after all.

@back2dos
Member Author

you need to detail how this integrates with the existing haxe lexer.

I don't have the slightest idea how the lexer and parser work. I can whip up an implementation for hscript if that's of any help ^^

Appending @ does not seem very elegant.

Agreed.

Actually we can detect an XML start in parse_expr by matching < (ident) (ident = const(string))* >, but then we need to switch the lexer into "raw mode" until we have closed all idents.

If there is any such thing as a "raw mode", then that's the simplest solution.

Still I have to stress that an opening tag is nothing but <(tagname). See the proposal for what constitutes a tag name, because svg:rect is valid in XHTML, custom-component is valid in HTML5, namespace.Component is valid in JSX. The parser should not even attempt to parse attributes, because again, <div contenteditable aria-labelledby="someLabel">Awesome content!!!</div> is perfectly valid HTML. The syntax proposed is chosen very carefully to cover all that ground.

That's feasible, but wouldn't that be a problem for IDEs?

That depends on what "problem for IDEs" means. Getting completion to work is quite possible, as the little gif shows. Syntax highlighting is more of a challenge, as already mentioned at the end. It depends very strongly on how the IDE approaches the matter. I think an approach to highlighting that is as broad as this proposal may have a fair chance of providing a decent result. Then again I'm no expert on the matter. Perhaps @nadako can comment.

I can see how one would complain about the compiler not giving error when writing the following:

var x = <xml> <oops/ </xml>

And how would that ever happen? According to the proposal, the above will give a compiler error unless a macro processes it. If the macro expects XML syntax, it will produce an error (the most naive one will just produce the exception thrown by Xml.parse). But if it can make sense of it, then I don't see the problem. Alter it slightly and you may have a use case:

var x = <html><script> console.log(5 <oops/ 4 )</script></html>

There's a lot of super hairy stuff in actual HTML and XML. So if you then go for having half an XML or HTML parser (which one would you choose?), you're going to ingrain incompatibilities with the actual standard into the syntax.

The most pragmatic approach is thus to make it the user's problem. If they want all bells and whistles, they use something like dom4. Or whatever. And when parsing speed becomes a problem (and eval offsets that threshold quite a bit), my understanding is that they can write the parser in ocaml and just plug it in and call it from a macro (right?).

@ncannasse
Member

Actually I'm taking back my previous comment. We could ensure that, when compiled directly (without going through macros), XML literals are checked as XML by the compiler and parsed in a similar way to Haxe smart strings. This way we get both something that doesn't require macros and something that can be used by macros in a smart manner.

I still have one particular issue: your proposal leaves attribute syntax undefined, but that means the following would also be invalid syntax:

var s = <foo>${if (x<foo) 0 else 1}</foo> because the <foo is mistaken for an opening tag, leading to quite hard-to-debug error messages about the first <foo> not being closed.

Maybe we should ensure some XML attribute syntax in order to distinguish an actual node from something else.

@kevinresol

I personally prefer not to include the XML parsing part in the compiler, mainly because of the looooooong release cycle.

I found myself always in an "I want to use haxe nightly but I can't use haxe nightly" dilemma.

For example, I want to use haxe nightly for these fixes:

But I can't because HaxeFoundation/haxe#6321 breaks all my code.

Sorry for being a bit off-track here, but my point is: please don't embed functionality into the compiler if there are other alternatives. I think the same reasoning drove the team to move SPOD out into a macro library.

@ncannasse
Member

@kevinresol I would agree in general not to put too many things in the compiler if it can be avoided (SPOD required some compiler-specific support a long time ago - before the macro era - and can be safely removed now)

However I don't think that having a syntax that requires a macro and otherwise leads to a compiler error is a good idea, especially if there's some behavior that can be expected by the end user. It seems obvious to me that the following code should compile if it's considered valid syntax:
var x = <foo>$str</foo>

@piotrpawelczyk

@ncannasse that's one of my ideas (I haven't had a chance to share it with anybody yet): to use interpolation syntax as a straightforward solution for embedded DSLs (yes, I did notice this proposal is about "markup literals" ;)). It would require generation of an "interpolation AST", so to speak, that would be available during macro processing. If there were no macros transforming it, every target would have to simply generate runtime interpolation, exactly as it's done now. This way every "inline DSL" would just be a string by default. An added bonus would be an easy way to find out which actual Haxe variables are referenced inside the interpolation/DSL string. I'm omitting for now the need to find out which macro to run on any given interpolation usage. Do you think it's a viable solution to this problem?

@Simn
Member

Simn commented Sep 25, 2017

I still don't really know how people expect this to be actually implemented without implementing a full XML parser. Counting opening and closing XML tags is all nice and fun until you have <tag ... />. You can't parse that without understanding XML syntax, and we voted against integrating an XML parser before, so I don't really understand what we are discussing here.

I'm not saying that I'm strictly against something like this, but I'm still not sure if XML is the solution.

@markknol
Member

Something I think should be given some thought is how to deal with code comments. If the rule is <tag ANYTHING </tag>, this can lead to unexpected results.

var x = <div>
     // </div>; // Am I a closing tag, content, or a comment?
        </div>;

@Simn
Member

Simn commented Sep 25, 2017

As I understand it, the idea is to leave the Haxe domain upon <div>, so it should no longer recognize Haxe comments.

But yes, literals in general are also a good point: In order to parse this correctly, you'd still have to understand e.g. string literals in order to not close on <div x="</div>">...
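To make that concrete, here is a toy scanner comparison (Python, illustration only; neither function reflects any proposed implementation): naive tag matching "closes" inside an attribute value that merely contains a closing tag, while a scanner that understands string literals finds the real one.

```python
def find_end_naive(src, tag):
    """Match the closing tag with no notion of attribute strings."""
    close = '</' + tag + '>'
    return src.index(close)  # first match wins, right or wrong

def find_end_string_aware(src, tag):
    """Skip over double-quoted attribute values before matching."""
    close = '</' + tag + '>'
    i, in_string = 0, False
    while i < len(src):
        if src[i] == '"':
            in_string = not in_string
        elif not in_string and src.startswith(close, i):
            return i
        i += 1
    raise ValueError('unterminated markup')

src = '<div x="</div>">body</div>'
# The naive scanner "closes" inside the attribute value:
assert find_end_naive(src, 'div') == 8
# The string-aware scanner finds the real closing tag:
assert find_end_string_aware(src, 'div') == len(src) - len('</div>')
```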

@boozook

boozook commented Sep 29, 2017

IMHO, I don't like mixing a markup language into any language like Haxe. But I think it could be a cool feature if it's implemented in an isolated context, like a block plus a builtin macro. Look:

function foo()
{
    var myXml = @:markup(XML) {
        <node foo="bar"/>
    }

    var myYaml = @:markup(YAML) {
        foo:
            - "bar"
            - "far"
    }
}

This method provides modularity and extensibility.


I'll try to explain my strange idea :)

For this method we need one non-breaking compiler modification - a new kind of Expr: UnsafeBlockExpr.
So,

  • Any { block } marked with a registered meta is an UnsafeBlockExpr
  • A build function (like for @build) should receive two arguments:
    • the meta arguments - @:markup(THIS_ARG, AND, THIS),
    • an arg :UnsafeBlockExpr - and it returns an :Expr

What is this UnsafeBlockExpr for?
Any "raving lunatic" expressions in such a block should be ignored by the compiler as long as a registered MarkupBuilder function returns a non-null Expr. Otherwise, or if no suitable MarkupBuilder function is registered, the compiler should generate an error along the lines of "no suitable markup builder macro function...".

And I can say with confidence that any future markup could then be implemented with macros alone, without modifying the compiler.

And abstract example:

class Main
{
    static function main()
    {
        var myValue = 42;
        var myXml = @:markup(XML) {
            <?xml version="1.0" encoding="utf-8"?>
            <x>
                <ml value=$myValue/>
            </x>
        }
        $type(myXml); // Xml

        var myYaml = @:markup(YAML) {
            foo:
                - "bar"
                - ${myValue}
        }
    }
}

// very stupid simple example of custom builder
class CustomMarkupBuilder
{
    public static macro function init()
    {
        // similarly to Compiler.addGlobalMetadata and something like Context.onAfterTyping but "before"
        Compiler.setCustomMarkupGenerator(":markup", "XML", buildXml)
        Compiler.setCustomMarkupGenerator(":markup", "YAML", buildYaml)
    }

    public static macro function buildXml(srcExpr:UnsafeBlockExpr):Expr
    {
        var src = srcExpr.toString();
        var xml = Xml.parse(src);
        return macro Xml.parse(${srcExpr});
    }

    public static macro function buildYaml(srcExpr:UnsafeBlockExpr):Expr
        return macro null;
}

and call init as initial macro, frag of build.hxml:

--macro "CustomMarkupBuilder.init()"

This is only my hopes and dreams. But I'm sure it would be very cool!

@fullofcaffeine

Any news here?

@EricBishton
Member

EricBishton commented Dec 19, 2017

This type of markup from the Nemerle programming language (http://nemerle.org/About) looks very clean:

def title = "Programming language authors";
def authors = ["Anders Hejlsberg", "Simon Peyton-Jones"];
    
// 'xml' - macro from the Nemerle.Xml.Macro library which allows inlining XML literals into Nemerle code
def html = xml <#
  <html>
    <head>
      <title>$title</title>
    </head>
    <body>
      <ul $when(authors.Any())>
        <li $foreach(author in authors)>$author</li>
      </ul>
    </body>
  </html>
#>
Trace.Assert(html.GetType().Equals(typeof(XElement)));
WriteLine(html.GetType());

I rather like a few aspects of this: first, the operators '<#' and '#>' as markup(?) delimiters are very readable; second, formatting the xml as a "here" doc (a la bash) makes for very clean code; third, the keyword (rather than a @macro(something) { stuff here }) is shorter, easier to type, and makes the context very clear.

So, more in a Haxe parlance, I would see it like this:

import Sys;

class Test {
  static function main() {
    var title = "Programming language authors";
    var authors:Array<String> = ["Nicolas", "Simon", "<it>et al</it>"];

    var xml:String = @lang(xml <#
        <html>
          <head>
            <title>$title</title>
    	  </head>
    	  <body>
            ${if (authors.size()) @lang(xml <#
       	      <ul>
                ${for (a in authors) @lang(xml <# <li>$a</li> #>) }
              </ul>
              #> }
          </body>
        </html>
      #>);

    trace(xml);
  }
}

It's not quite as clean because of the nice $when and $foreach that Nemerle has, but it's still fairly nice.

What @lang() does is:

  1. Determine which language plugin is to be used, and if it's missing stop the compilation (IIRC, already implemented in Haxe 4);
  2. Catenate all characters between the beginning and ending tokens (nesting is allowed!);
  3. Then it does interpolation on the collected string (we can disallow that with a '<!#' begin token);
  4. Then, it passes the final string off to the language plugin for parsing/processing.

The big benefit with this approach is that the Haxe compiler doesn't have to understand the embedded language.

Can this be done with a macro already? I don't know; it seems like it, except possibly for the token parsing. (It would be a pretty hefty macro to do parsing and validation!)
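Steps 2 and 3 above hinge on collecting the delimited text with nesting honored; a minimal sketch of that collection (Python; the function name and return convention are assumptions for illustration, not a proposed API):

```python
def extract_lang_block(src):
    """Collect everything between '<#' and its matching '#>', honoring nesting."""
    assert src.startswith('<#')
    depth, i = 1, 2
    start = i
    while i < len(src):
        if src.startswith('<#', i):
            depth += 1
            i += 2
        elif src.startswith('#>', i):
            depth -= 1
            if depth == 0:
                # Return the raw body plus where scanning may resume.
                return src[start:i], i + 2
            i += 2
        else:
            i += 1
    raise ValueError('unterminated <# block')

# Nested blocks are kept intact for the outer language plugin:
body, end = extract_lang_block('<# outer <# inner #> tail #>')
assert body == ' outer <# inner #> tail '
assert end == len('<# outer <# inner #> tail #>')
```

Interpolation (step 3) and handing the collected string to a plugin (step 4) would then operate on `body`.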

@benmerckx
Contributor

benmerckx commented Dec 19, 2017

Maybe we should consider just supporting JSX on a language level similar to how Reason does and be done with it. Also JSX is really not XML, so the default behaviour should really be just desugaring into method calls.

The way Reason handles jsx is pretty clean and well-defined. Might be a good start and would still allow to be somewhat extendable through macros, as long as attribute values and children are valid Haxe expressions.

@EricBishton
Member

... a plain jsx('...') call looks better to me.

I agree, for the most part. (I like the '<#', '#>' operators -- can't say why.) I was just trying to figure out how to do it without adding a new language keyword for every type of markup that we should ignore (or forcing us to pick just one...). Also, we really don't want to have to add markup parsing and evaluation in the compiler.

@EricBishton
Member

All these ... don't really look useful when it comes to reentrancy tbh...

@nadako Are you thinking of compile-time or run-time reentrancy issues?

@djaonourside

djaonourside commented Dec 21, 2017

The UnsafeBlockExpr solution suggested by @fzzr- is a really needed thing. Haxe gurus, please take notice of it :)

@kevinresol

@EricBishton Reentrancy is a syntax problem, so it is compile time.

@fullofcaffeine

fullofcaffeine commented Dec 21, 2017

Maybe we should consider just supporting JSX on a language level similar to how Reason does and be done with it. Also JSX is really not XML, so the default behaviour should really be just desugaring into method calls.

I like this idea, it's bold and focuses on keeping things straightforward and simple.

I suggest we consider this as the implementation approach for this proposal and let the core team vote on this, unless we want to wait a couple more months... anyone? :)

@ncannasse
Member

Maybe we should consider just supporting JSX on a language level similar to how Reason does and be done with it. Also JSX is really not XML, so the default behaviour should really be just desugaring into method calls.

I think that's very short sighted. While today JSX is the hot thing, it might be something else entirely in two years. Language design is about creating solid bridges for the future, instead of single-usage wooden ladders.

@nadako
Member

nadako commented Dec 21, 2017

Yeah, I agree to that in general. It's just that I don't really see how we can implement arbitrary syntaxes without pluggable parser/lexer or ugly and useless (for jsx at least) heredoc syntax.

@EricBishton
Member

@nadako - I don't think you can. To get arbitrary syntaxes, you need to be able to arbitrarily extend the parser; something has to parse it. And there must be a way to delineate it, whether using a heredoc delimiter or a meta.

In truth, @fzzr- and I have shown very similar solutions (his UnsafeBlock example also uses string interpolation). From an IDE implementation point of view, having a dedicated operator and being able to treat the inserted language as a string is a simpler implementation. To treat it as an UnsafeBlock is harder, but really only because we can't detect it directly in the lexer (like we can a new operator). At that point, we either have to have meta support in the lexer (which is not good) or all supportable syntaxes have to be Haxe syntax compatible (which is untenable).

@EricBishton
Member

EricBishton commented Dec 21, 2017

As another thought... In truth, we don't require a meta at all. We can create the new operator similar to a markdown tag:

import Sys;

class Test {
  static function main() {
    var title = "Programming language authors";
    var authors:Array<String> = ["Nicolas", "Simon", "<it>et al</it>"];

    var xml:String = <#xml
        <html>
          <head>
            <title>$title</title>
    	  </head>
    	  <body>
            ${if (authors.size()) <#xml
       	      <ul>
                ${for (a in authors) <#xml <li>$a</li> #>}
              </ul>
              #>}
          </body>
        </html>
      #>;

    trace(xml);
  }
}

Then, all we've really done is create a new string type that allows/requires an (ocaml) compiler extension to parse and/or verify it. If the plugin isn't available (or there is no name immediately following the opening operator), it's still just a string to Haxe. And, frankly, this is all people are really asking for: an easier way to embed their markup, which often requires its own string delimiters.

(I still like '<#' and '#>' better than triple-backticks (```), though.)

@markknol
Member

markknol commented Dec 21, 2017

@EricBishton The Haxe parser has to parse <# and #>. If the content in between is unparsed, how can you nest other blocks in it? How does it tell the difference between <# as Haxe code and <# as arbitrary syntax code?

@fullofcaffeine

Yeah, I agree to that in general. It's just that I don't really see how we can implement arbitrary syntaxes without pluggable parser/lexer or ugly and useless (for jsx at least) heredoc syntax.

I was under the impression that the new plugin system allows this... no?

@EricBishton
Member

It is true, the Haxe lexer does have to be extended to use <# and #> as string delimiters. They are easy to add and understand because of the way the lexer works, whereas <div> is ambiguous because (as was discussed above) < followed by letters is, by default, lexed as separate tokens.

Second, the content is not unparsed. It is a Haxe string, subject to normal string interpolation, and therefore must(?) be nested. (And, if we want an option for a non-interpolated block, we can add !#, <!# or some other variation.) However, the string itself is unvalidated by the Haxe parser, and can be re-parsed and/or validated by a plugin.

@EricBishton
Member

BTW, we can come up with all sorts of delimiter combinations. Perhaps {' and '} for interpolated delimiters and {" and "} for non-interpolated. Those actually look like current syntax:

    var xml:String = {'xml
        <html>
          <head>
            <title>$title</title>
    	  </head>
    	  <body>
            ${if (authors.size()) {'xml
       	      <ul>
                ${for (a in authors) {'xml <li>$a</li>'} }
              </ul>
              '} }
          </body>
        </html>
      '};

I kind of get lost looking for the closing '} operator vs. the closing } operator for the interpolated code.

In the end, there are two goals:

  1. Easily (e.g. without requiring escape sequences) allow other markup or other language code to be embedded within Haxe;
  2. Allow (NOT require) the same to be parsed and/or validated during the Haxe compilation.

To do that, we need to:

  1. Choose an unlikely delimiter that is unused in other expected embeddable languages.
  2. Have a way to specify the validation.

@markknol
Member

the Haxe ocaml plugin architecture

I am against a plugin system where one has to use ocaml. The whole point of Haxe is to have one language that rules them all, so that would be a weird contradiction. Also, I wonder how many people are willing to learn ocaml to make such a plugin. I think if everything is structured and available in the well-known macro context, that would be more convenient, no?

@EricBishton
Member

I am against a plugin system where one has to use ocaml.

I agree wholeheartedly. Unfortunately, that's not what is currently implemented and available; we have the ocaml plugin architecture already in the compiler. I think (though I don't know) that it would be harder to implement a parser/lexer/validator in a Haxe macro than it is in Ocaml.

As I think @ousado said when the plugin architecture was implemented, "All we need now is an Ocaml back-end for Haxe." My corollary is, "All we need is a Haxe (or Neko, or HL) binding for Ocaml."

@djaonourside

djaonourside commented Dec 22, 2017

While today JSX is the hot thing, it might be something else entirely in two years.

@ncannasse IMHO you aren't right on this question. Nowadays using JSX and react-like libraries is the fastest, most convenient and most advanced way to develop web applications (and some types of mobile ones). I think it will continue to evolve, and there's no reason it will be forgotten. So we need more advanced support for JSX and other alien syntaxes in Haxe (or a mechanism that can provide it) than just a macro function with a string param. This could help popularize Haxe among potential js-target users. It's a really big audience.

@impaler

impaler commented Mar 14, 2018

One cool use case that is not React would be to see haxeui and other ui libs use it :)

While today JSX is the hot thing, it might be something else entirely in two years.

Apart from adoption and time, I don't know how we could get a good measurement of whether something is just a hot thing or not. React has been around for about 5 years and its adoption is quite impressive. Writing ui markup in xml-like formats has obviously been around much longer.

As for jsx, I don't think there have been too many changes apart from the addition of the Fragment syntax sugar <></>. Also, interestingly, there is a project for this in php: https://github.com/hhvm/xhp-lib.

@haxiomic
Member

haxiomic commented Sep 9, 2018

What about using a syntax similar to ES6 template literals

Template literals are ES6's answer to DSLs

macro function jsx(string, expressions, ...) {
    // parse jsx, return react object expressions
}

function render() {
    var message = 'hello world';
    return jsx`
        <div>$message</div>
    `;
}

function renderSomethingMoreComplex() {
    return jsx`
        <section>${ render() }</section>
        <ul>
            ${ [1,2,3].map(n -> jsx`<li>$n</li>`) }
        </ul> 
    `;
}
  • It doesn't lock us into a single DSL – implementing XML or JSX can be offloaded to the community rather than the compiler team
  • Will be familiar to ES6 JavaScript developers and may be adopted by other languages
  • Doesn't have the parsing error cases of mixing XML and haxe
  • Uses the same interpolation syntax as strings so it behaves predictably
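For reference, ES6 desugars a tagged template into a plain call with the literal chunks and the evaluated interpolation values; a Python model of that calling convention (the jsx stand-in just re-joins the pieces, where a real one would parse them):

```python
# ES6 evaluates  jsx`<div>${message}</div>`  roughly as the call
#   jsx(strings, *values)
# where `strings` holds the literal chunks and `values` the evaluated
# interpolations, interleaved.
def jsx(strings, *values):
    out = []
    for i, chunk in enumerate(strings):
        out.append(chunk)
        if i < len(values):
            out.append(str(values[i]))
    return ''.join(out)

message = 'hello world'
rendered = jsx(['<div>', '</div>'], message)
assert rendered == '<div>hello world</div>'
```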

@back2dos
Member Author

back2dos commented Sep 9, 2018

It doesn't lock us into a single DSL – implementing XML or JSX can be offloaded to the community rather than the compiler team

Which part about this proposal locks us into a single DSL?

Will be familiar to ES6 JavaScript developers and may be adopted by other languages

I'm inclined to doubt that ES6 developers who're looking for familiarity will go for Haxe when they can use FlowType or TypeScript, especially since both of them have JSX support.


I don't mind template literals. It's a neat feature, although I'm not sure of the improvement here beyond having a third type of quote, which means you can use single and double quotes in the string. You can already do:

function render() {
    var message = 'hello world';
    return @jsx'
        <div>$message</div>
    ';
}

@kevinresol

@djaonourside

@kevinresol Thanks. I've just noticed and removed the question)

@kevinresol

kevinresol commented Sep 14, 2018

I wonder if the following will parse or not according to the proposal:

  • <foo><foo/><foo>
  • <foo attr=${a > b || b < c}/>
  • <foo><foo attr=${a > b || b < c}/></foo>

@Simn
Member

Simn commented Sep 21, 2018

HaxeFoundation/haxe#7438
