Roadmap Nov 2018 #154

Open
raphlinus opened this Issue Nov 6, 2018 · 2 comments

raphlinus (Owner) commented Nov 6, 2018

OK, it's a good time to pick this up again. Getting to no failing spec tests feels pretty urgent. I made a couple of attempts to recruit a volunteer to take this on, but these days I have more freedom in what to work on, and I'll find it more enjoyable to write the code myself.

Getting the existing codebase to 100% spec compliance is possible, but the last few examples won't be easy, and a lot of that work would get thrown away on the switch to new_algo. So here's what's going to happen.

  • Make the new_algo branch suitable for development. #153 goes most of the way there, and I have a local patch to make it not panic on the spec tests.

  • Make new_algo compliant with cmark 0.28 one test at a time (see the sketch after this list). I've PR'ed #153 in case people have input, but for this I'll just push to the branch. The focus will be on correctness and code clarity rather than performance (I tend to write overly tricky code), but performance shouldn't be bad either.

  • When it gets to 100%, turn on CI so that the spec tests must pass, while the extension tests are allowed to fail.

  • Pull in the extensions (footnotes, tables, broken link callback). This isn't as bad as I feared; a lot of the stuff that got discussed (math, GFM features such as task lists, etc.) would have been tricky, but the existing extensions are manageable.

  • Require all tests to pass in CI (including those for the extensions).

  • Do a release, probably 0.3.0 unless something semver-breaking happens on 0.2, which I'd prefer to avoid. Also switch master over. This is the main point where I'd need feedback from users. I plan to keep breakage from 0.2 minimal, but it probably won't be zero, as there are some things that really should change.

  • Gather requirements for future work. A lot of this is feature requests in the tracker. Some of it will probably be more breaking; possibly some of that will be feature-gated, but we'll see.

  • A performance pass. A large part of the motivation for new_algo is to enable higher performance than today. Goals would be rigorously sub-quadratic behavior (even in the presence of deep nesting), SIMD-accelerated scanners, and general benchmarking. If this doesn't happen it's not a deal-breaker, but it helps motivate the work.
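
For the "one test at a time" step, here's a minimal sketch of what driving the cmark 0.28 spec examples could look like. This is only an illustration, not the project's actual test runner: it assumes the CommonMark spec.txt layout (examples fenced by a line of 32 backticks plus the word `example`, a lone `.` separating the Markdown input from the expected HTML, and `→` standing in for tabs), it assumes a local `spec.txt` path, and a real harness would normalize the HTML before comparing rather than diffing strings exactly.

```rust
// Illustrative sketch only (not the project's actual test runner).
use pulldown_cmark::{html, Parser};

fn run_spec(spec: &str) {
    let fence = "`".repeat(32);
    let open = format!("{} example", fence);
    let mut lines = spec.lines();
    let (mut passed, mut failed) = (0u32, 0u32);

    while let Some(line) = lines.next() {
        if line.trim_end() != open {
            continue;
        }
        // Collect the Markdown input up to the "." separator.
        let mut input = String::new();
        for l in lines.by_ref() {
            if l == "." {
                break;
            }
            input.push_str(&l.replace('→', "\t"));
            input.push('\n');
        }
        // Collect the expected HTML up to the closing fence.
        let mut expected = String::new();
        for l in lines.by_ref() {
            if l.trim_end() == fence {
                break;
            }
            expected.push_str(&l.replace('→', "\t"));
            expected.push('\n');
        }
        // Render with pulldown-cmark and compare. A real harness would
        // normalize the HTML first instead of comparing strings exactly.
        let mut output = String::new();
        html::push_html(&mut output, Parser::new(&input));
        if output == expected {
            passed += 1;
        } else {
            failed += 1;
        }
    }
    println!("{} passed, {} failed", passed, failed);
}

fn main() {
    // The path is an assumption; point it at a local copy of spec.txt.
    let spec = std::fs::read_to_string("spec.txt").expect("spec.txt not found");
    run_spec(&spec);
}
```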

At that point, I'd really like to open it up to more community participation. I've been letting issues and PRs languish; apologies for that. Part of the problem is that the code is tricky and some changes are hard; I'd like to reduce that. Another part is that the requirements are poorly specified. Hopefully cmark itself will reach 1.0, and it would be great if there were more clarity around extensions; the GFM work is useful for that, and I can easily imagine adding full GFM compatibility as an option (obviously this would require extending some enums, which is a breaking change).
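
As a concrete illustration of why extending the public enums is semver-breaking: any downstream code that matches exhaustively on the event type stops compiling as soon as a new variant appears. The enum below is a simplified, hypothetical stand-in; `TaskListMarker` is not part of pulldown-cmark's API at this point and is shown only to make the point.

```rust
// Simplified, hypothetical stand-in for an event enum; TaskListMarker is
// not part of the current API and exists here only to illustrate why
// adding a variant is a breaking change.
pub enum Event<'a> {
    Text(&'a str),
    SoftBreak,
    HardBreak,
    // A GFM-driven addition like this one...
    TaskListMarker(bool),
}

fn render(event: &Event) -> String {
    match event {
        Event::Text(s) => s.to_string(),
        Event::SoftBreak => "\n".to_string(),
        Event::HardBreak => "<br />\n".to_string(),
        // ...forces every exhaustive downstream match to grow a new arm;
        // code written against the old enum no longer compiles.
        Event::TaskListMarker(checked) => {
            format!("<input type=\"checkbox\"{} />", if *checked { " checked" } else { "" })
        }
    }
}

fn main() {
    println!("{}", render(&Event::TaskListMarker(true)));
}
```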

There's a #pulldown-cmark channel on xi zulip where we're discussing some of this.

ScottAbbey (Collaborator) commented Nov 15, 2018

"Make new_algo compliant with cmark 0.28 one test at a time."

I'm not really sure how checklists work, but maybe we can try one to keep track of what is done and not done? Most of these are present in the codebase somewhere, but not all of them are hooked up, and I'm starting to have a hard time remembering which ones are and aren't done.

Some of these will have edge cases, but how about we just check boxes to indicate that a part is at least mostly complete?

This is just from the spec table of contents; some of the items should probably be broken down into sub-parts, though.

  • Leaf blocks
    • Thematic breaks
    • ATX headings
    • Setext headings
    • Indented code blocks
    • Fenced code blocks
    • HTML blocks
    • Link reference definitions
    • Paragraphs
    • Blank lines
  • Container blocks
    • Block quotes
    • List items
    • Lists
  • Inlines
    • Backslash escapes
    • Entity and numeric character references
    • Code spans
    • Emphasis and strong emphasis
    • Links
    • Images
    • Autolinks
    • Raw HTML
    • Hard line breaks
    • Soft line breaks
    • Textual content

Maybe you can help fix my list....

raphlinus (Owner) commented Nov 15, 2018

There's also the question of what counts as "done". Emphasis is mostly done, but probably not all of its tests pass, in part because of a spec clarification that happened when I was last working in that code.

In any case, we're going to see a bunch of these checked soon. I have fenced code blocks mostly working now in my local tree.
