Roadmap Nov 2018 #154
OK, it's a good time to pick this up again. Getting to zero failing spec tests feels pretty urgent. I made a couple of attempts to recruit a volunteer to take this on, but these days I have more freedom in what to work on, and I'll find it more enjoyable to write the code myself.
Getting the existing codebase to 100% spec compliance is possible, but the last few examples won't be easy. Plus, a lot of that work would get thrown away on the switch to new_algo. So here's what's going to happen.
At that point, I'd really like to open it up to more community participation. I've been letting issues and PRs languish; apologies for that. Part of the problem is that the code is tricky and some changes are hard; I'd like to reduce that. Another part is that the requirements are poorly specified. Hopefully cmark itself will reach 1.0, and it would be great if there were more clarity around extensions; the GFM work is useful for that, and I can easily imagine adding full GFM compatibility as an option (obviously this would require extending some enums, which is a breaking change).
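To illustrate why extending enums breaks downstream code: in Rust, adding a variant to a public enum breaks any crate that matches on it exhaustively. A minimal sketch, using a hypothetical `Tag` enum and `html_tag` function (not the crate's actual API), shows the usual mitigation, `#[non_exhaustive]`, which forces downstream matches to carry a wildcard arm so later variants no longer break them:

```rust
// Hypothetical tag enum resembling a Markdown parser's public event types.
// If `Strikethrough` were added in a later release without #[non_exhaustive],
// every downstream exhaustive `match` would stop compiling — a semver break.
#[non_exhaustive]
#[derive(Debug, PartialEq)]
enum Tag {
    Paragraph,
    Emphasis,
    Strong,
    // A GFM extension variant added later:
    Strikethrough,
}

fn html_tag(tag: &Tag) -> &'static str {
    match tag {
        Tag::Paragraph => "p",
        Tag::Emphasis => "em",
        Tag::Strong => "strong",
        // With #[non_exhaustive], downstream crates must include a wildcard
        // arm, so adding variants later is no longer a breaking change.
        _ => "span",
    }
}

fn main() {
    assert_eq!(html_tag(&Tag::Strong), "strong");
    assert_eq!(html_tag(&Tag::Strikethrough), "span");
    println!("ok");
}
```

The trade-off is that `#[non_exhaustive]` trades compile-time exhaustiveness checking for forward compatibility; without it, GFM support really does require a major version bump, as noted above.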
Not really sure how checklists work, but maybe we can try one to keep track of what's done and not done? Most of these are present in the codebase somewhere, but not all of them are hooked up, and I'm starting to have a hard time remembering which ones are done and which aren't.
Some of these will have edge cases but how about we just check some boxes to indicate that a part is at least mostly complete?
This is just from the spec table of contents; some of them should probably be broken down into sub-parts, though.
Maybe you can help fix my list....
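For reference, GitHub renders task-list checkboxes from plain list items; a couple of hypothetical entries drawn from the spec's table of contents might look like:

```markdown
- [x] Fenced code blocks
- [x] Emphasis and strong emphasis
- [ ] Link reference definitions
```

`[x]` renders as a checked box and `[ ]` as an unchecked one, and anyone with edit access can toggle them directly in the rendered issue, which should make tracking done/not-done easy.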
There's also the question of what counts as "done". Emphasis is mostly done, but probably not all tests pass, in part because of a spec clarification that happened since I last worked on it.
In any case, we're going to see a bunch of these checked soon. I have fenced code blocks mostly working now in my local tree.