
Attempting CM 331-338 #1124 (Closed)

joshbruce wants to merge 1 commit

Conversation

@joshbruce (Member) commented Mar 4, 2018

Marked version: 0.3.17

Might have overcommitted when I said the equivalent of 7 a day (also, pardon the off-by-one error). Stopping for now to re-evaluate the approach and level of commitment.

Markdown flavor: CommonMark

Description

Time elapsed: 20+ minutes (20/8 = 2.5 minutes per example). This includes coming up with minimal optimization strategies along the way:

  1. Open two file explorer windows: one on the package, the other on a folder for storing the failing test samples to put in separate PRs.
  2. Duplicate two files (see the rough sketch after this list).
  3. Rename the files.
  4. Update the HTML and MD (potential for human error: forgetting to update one).
  5. Run npm test.
  6. Verify that any failures were the ones just created.
  • If it failed, move the files to the other file explorer window.
  • If it passed, go to step 2.
  7. Generate individual PRs for the failures (not accounted for in time elapsed).
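A very rough Node sketch of steps 2–5 (step 4 stays manual); the fixture directory, template files, and naming convention below are assumptions for illustration, not the repo's actual layout:

```
// hypothetical helper: duplicate a fixture pair, rename it, run the tests
const fs = require('fs');
const { execSync } = require('child_process');

const n = process.argv[2];  // e.g. "331"
const dir = 'test/new';     // assumed fixture directory

fs.copyFileSync(`${dir}/template.md`, `${dir}/cm_example_${n}.md`);
fs.copyFileSync(`${dir}/template.html`, `${dir}/cm_example_${n}.html`);

// step 4 is still by hand: paste the spec's markdown and expected HTML into the new pair,
// then run the suite and check that any new failure is the one just added (step 6)
execSync('npm test', { stdio: 'inherit' });
```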

Think this is roughly the right equation for total time:

(625 - 8) examples × 2.5 minutes ≈ 1,543 minutes ≈ 26 hours

Considerations and ideas (??):

  1. The CommonMark spec is a single page - parse it and generate tests with a script. Problem: separating the failures out into separate PRs.
  2. Chose to start with emphasis because Federico rightly points out that it has some of the strangest rules and possibilities; therefore, future examples may have more success.
  3. ...anything I missed; need to get back to life and back to reality.

Recommend holding off on merging for a moment to discuss optimization strategies.

Contributor

  • Test(s) exist to ensure functionality and minimize regression (if no tests were added, list the tests covering this PR); or,
  • no tests required for this PR.
  • If submitting new feature, it has been documented in the appropriate places.

Committer

In most cases, this should be a different person than the contributor.

  • Draft GitHub release notes have been updated.
  • CI is green (no forced merge required).
  • Merge PR

@joshbruce (Member, Author)

Given the prevalence of Markdown parsers now, I can't imagine this test suite problem isn't already solved. I don't know about you, but copy-paste 50,000 times seems...yeah...

If one doesn't already exist, maybe we should just create a repo with a fillable test suite for the MD + HTML...or even Jasmine, if possible.

Pseudo:

// Jasmine-style pseudo-spec; swap in the parser's own method signature
var marked = require('marked');

var attempted = 'Hello, World!';
var expected = '<p>Hello, World!</p>';

it('parses a simple paragraph', function () {
  // trim() because marked appends a trailing newline
  expect(marked(attempted).trim()).toBe(expected);
});

Make that available to the other JS markdown parsers as a package to test their stuff too...??

Anyway, definitely pausing on this one for now.

Tagging Dr. DevOps for thoughts: @UziTech

@Feder1co5oave (Contributor) commented Mar 8, 2018

Sorry for chipping in late on this, I have been quite busy.
CommonMark has a special-purpose parser that extracts all tests from the spec into a JSON file, made available here. You just have to parse it and plug it into the test suite.
Note that it is not meant to be edited directly by humans (it's JSON...).
I think the current md-html fixture-file system works well and is very newbie-friendly, apart from the fact that it tests each file as a single unit, whereas a file typically contains several test cases (with the current comparison algorithm you can only see the first failing test case in a file, and it is often difficult to pinpoint which one it is).
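A minimal sketch of "parse it and plug it into the test suite", assuming the spec.json entries carry markdown, html, example, and section fields (the file path and the shape of the loop are illustrative, not part of the actual suite):

```
// illustrative only: read the CommonMark spec.json and turn each entry into a check
var fs = require('fs');
var marked = require('marked');

var examples = JSON.parse(fs.readFileSync('spec.json', 'utf8'));

examples.forEach(function (ex) {
  var actual = marked(ex.markdown);
  if (actual !== ex.html) {
    console.log('Example ' + ex.example + ' (' + ex.section + ') failed');
  }
});
```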

I wrote a script that I used to import test cases from the spec.json file into md-html fixtures in the following format:

### Example 114
``` aa ```
foo

Now this still needs some manual attention, since single examples are written to be parsed as separate documents. So link references (which are global to the whole document) often clash, and test cases that test behavior when EOF is encountered should be put in a separate file.

I had also rewritten the test runner to parse (the fix() method) all md-html fixtures and save them as a single compiled_tests.json file instead of the compiled_tests folder, which required a further parsing step. It also automatically splits test files that contain '### Example ' into subtests (this seems to work but is definitely not foolproof).
The JSON is compatible with the CommonMark suite's, so the two could easily be merged in the future.
Then the parsed JSON is loaded and the subtests are run individually. If any of them fails, you get a nice log of the test identifier and a diff.
Until now, I imported CommonMark examples with the script and fixed them manually. Then the parsing phase squeezed everything back into a JSON file.
This has the side advantage that the whole test runner and test suite can be browserified, served via HTTP, and tested in a browser.
Then, by using the BrowserStack infrastructure through Travis, I can run tests in any browser/OS they have available and get the output.
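As a sketch of the splitting step described above (the function name and exact fixture layout are assumptions, not the actual runner code):

```
// illustrative: split one md fixture into subtests on '### Example N' headings
function splitFixture(md) {
  var parts = md.split(/^### Example (\d+)\n/m);
  var subtests = [];
  // parts = [preamble, number, body, number, body, ...]
  for (var i = 1; i < parts.length; i += 2) {
    subtests.push({ example: Number(parts[i]), markdown: parts[i + 1] });
  }
  return subtests;
}
```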

That's what I'd been working on. I suggest you not take this road (#1129), because it is not scalable, IMHO.

Everything I've listed above is currently unmergeable on master, but you can at least use the import.js script.

/cc @UziTech

@UziTech (Member) commented Mar 8, 2018

> because it is not scalable IMHO

@Feder1co5oave What do you mean by not scalable? Jasmine can be used in the browser as well, so it shouldn't be difficult to get it running on BrowserStack. @davisjam is also using { md: ..., html: ... } objects to load the tests, so we could still keep the objects in a JSON file to send them elsewhere.
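A sketch of that idea: the { md, html } objects kept in a JSON file could drive dynamically generated Jasmine specs. The file name (compiled_tests.json) comes from the earlier comment, but the keyed object shape is an assumption:

```
// illustrative: generate one Jasmine spec per { md, html } test object
var marked = require('marked');
var tests = require('./compiled_tests.json'); // assumed shape: { "name": { md: '...', html: '...' }, ... }

describe('markdown fixtures', function () {
  Object.keys(tests).forEach(function (name) {
    it(name, function () {
      expect(marked(tests[name].md)).toBe(tests[name].html);
    });
  });
});
```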

@Feder1co5oave (Contributor) commented Mar 8, 2018 via email

@joshbruce (Member, Author)

@Feder1co5oave: My turn to apologize for a late response. I definitely appreciate the concept of the JSON examples. The question is whether we generate files (which seems to be what you're doing) or just loop over them every time we run the tests. I would lean toward generating files, since we may not support every example.

Is there a GFM equivalent for the table extensions and whatnot??

@joshbruce (Member, Author)

@UziTech: Will the current test harness run all tests in the integration folder, or does it go against the single spec file within it? Also, what's the intent behind each folder? (Some of the scripts in package.json have my brain hiccuping.)

Closing and converting to an issue to remind myself.

[screenshot: 2018-03-24, 9:21 am]
