feat(xtask): automatically generate documentation pages for lint rules #2703
Conversation
I believe this can be done afterwards; we just need to create an issue and we should be fine.
///
/// ## Examples
///
/// ### Invalid {#invalid}
Can we write some documentation for the "documentation" structure and what syntax is allowed? That documentation could either be part of a README or part of `declare_rule`.
I'd probably include it in the documentation for `declare_rule` as well as the (future) section on writing lint rules in the contribution docs, but the thing is I'm not entirely sure how it should actually work anyway. Currently I've implemented the valid/invalid logic by detecting sections of the markdown document with the id `valid` and `invalid`, but I'm also considering having the expected status of a code block as an attribute on the language tag instead (for example `js,should_fail` on the opening fence), similar to rustdoc.
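The language-tag idea discussed here could be sketched roughly as follows. This is a hypothetical illustration, not the actual PR code: the function name and the convention of treating everything after the first comma of a fence's info string as attributes are assumptions.

```rust
/// Split a fenced code block's info string (e.g. "js,should_fail") into a
/// language name and a flag indicating whether the block is expected to
/// produce diagnostics. Hypothetical sketch, not the real implementation.
fn parse_code_block_tag(tag: &str) -> (&str, bool) {
    let mut parts = tag.split(',');
    // `split` always yields at least one item, so this never panics
    let language = parts.next().unwrap_or("").trim();
    let should_fail = parts.any(|attr| attr.trim() == "should_fail");
    (language, should_fail)
}

fn main() {
    assert_eq!(parse_code_block_tag("js,should_fail"), ("js", true));
    assert_eq!(parse_code_block_tag("jsx"), ("jsx", false));
}
```

A documentation generator could then run blocks flagged `should_fail` through the analyzer and error if no diagnostics are emitted, mirroring rustdoc's `should_panic` doctests.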
website/src/docs/lint/rules/index.md
<a href="/docs/lint/rules/flipBinExp">flipBinExp</a>
<a class="header-anchor" href="#flipBinExp"></a>
</h3>
MISSING DOCUMENTATION
My understanding is that these pages get deployed to the website. Is there already an entry point? If so, what's your plan on handling the missing documentation?
The entry point for now has been deleted but we can easily restore it: https://github.com/rome/tools/pull/2185/files#diff-1010593bf20dbee354fd42eb016d4589c20308e350009f7c013b1919e04ff0f5
The file was also auto-generated.
For the entry point I've added a link to this lint rules index page in the linting section of the main docs, similarly to what existed in the JS version, but maybe it would also make sense for it to have an entry in the navigation menu?
As for the content of the pages, the new generation script generates the individual rule documentation pages and the rules index page in the same place and with the same code as what used to exist in the JS version of Rome, including the "MISSING DOCUMENTATION" message. But I could easily change the syntax of the `declare_rule` macro to make the documentation mandatory and fail the build if there is no doc-comment on the rule.
use rome_js_analyze::{analyze, metadata};
use rome_js_syntax::SourceType;

fn main() -> Result<()> {
I think it becomes important to add additional checks to our CI pipeline to make sure that this command is run for all PRs, to prevent an outdated homepage. This isn't an issue specific to this command but common to many of our `xtask` commands.
One way we could accomplish this is by adding a CI step that generates the formatter, grammar, lint rules, etc., and verifies that there are no git changes afterwards.
I've added a "Codegen" step to the CI checks run on pull requests; it runs the `grammar`, `analyzer`, and `lintdoc` codegen commands before checking that the git status is unchanged. A test / example of a failing run can be found here: https://github.com/rome/tools/runs/6900285745?check_suite_focus=true
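A step of that shape could look roughly like the following GitHub Actions fragment. This is a hedged sketch, not the actual workflow from the PR: the exact command invocations are assumptions based on the codegen commands named above.

```yaml
# Hypothetical codegen-freshness check: re-run the generators, then fail
# the build if any generated (committed) file differs from the output.
- name: Codegen
  run: |
    cargo codegen grammar
    cargo codegen analyzer
    cargo lintdoc
    git diff --exit-code
```

`git diff --exit-code` exits non-zero when the working tree differs from the index, which is what makes the step fail on stale generated files.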
I love it! Thank you @leops for following up on this.
What's the code size impact of including the markdown of all rules as part of the linter crate? Do you plan to expose the documentation as part of the CLI?
Probably that's what we want. It would help the usability of our CLI when people are offline.
At the moment I don't expect it to be too significant, though that might obviously change if we start to have many rules with extensive documentation. But yes, the point of including the documentation in the binary is to allow it to be rendered by the CLI (I might put it behind a feature flag so it can be turned off in the LSP), although at the moment that would also require embedding a markdown parser to turn the documentation into markup.
### Invalid

```jsx
Is this correct? Should it be `js`?
The language ID emitted in the documentation page is the one used by the documentation generator, and it may not be exactly the same as the one specified in the original input: in this case the change is caused by .js files being interpreted as JSX here.
Some feedback: https://feature-xtask-lintdoc.tools-8rn.pages.dev/docs/lint/rules/ If you check the description of a rule on the index, the code block is not correctly rendered. But if you open the rule page, it is rendered correctly: https://feature-xtask-lintdoc.tools-8rn.pages.dev/docs/lint/rules/noNegationElse/
Also, should we allow people to see the rules now? The autofix hasn't been released yet, and neither have the new rules, so I am not sure it makes sense.
This page is rendered in a weird way: https://feature-xtask-lintdoc.tools-8rn.pages.dev/docs/lint/rules/noImplicitBoolean/ (it actually rendered the HTML instead of using it as text)
This should be easy to fix; I just need to escape HTML special characters in the markup writer the same way it's done in the terminal writer.
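The escaping fix described here can be sketched in plain Rust. This is a minimal, dependency-free illustration, not the actual `rome_console` writer code; the function name is hypothetical.

```rust
/// Escape the HTML special characters in a string before writing it into
/// a generated page, so markup is displayed as text rather than rendered.
/// Illustrative sketch only; the real fix lives in the HTML markup writer.
fn escape_html(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    assert_eq!(escape_html("<b>bold</b>"), "&lt;b&gt;bold&lt;/b&gt;");
}
```

Note that `&` must be escaped as well, otherwise already-escaped entities in the input would double-decode.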
That's interesting; in the JS version the description for the rules also included markdown in a similar way, so I'm not exactly sure what I'm missing that makes the generator for the website ignore this markup.
Indeed, having auto-generated documentation from the latest revision of the code is going to clash with the workflow we currently have of holding off updates to the documentation until the corresponding release is ready.
We can definitely come up with a solution, just not in this PR. Thank you for the insights!
Summary
This PR adds a new `cargo lintdoc` command, automatically generating markdown documentation pages on the website for all registered lint rules.

Internally, this command relies on the content of the doc-comments of each lint rule, which is now available as runtime metadata alongside the name of the rule itself. A new `metadata()` method is available on the `RuleRegistry` to get the name and documentation of each active rule, and this method is aliased as the plain function `rome_js_analyze::metadata()` for the default instance of the JS rule registry.

The `xtask_lintdoc` crate makes use of this metadata by parsing the content of the comments as markdown, running the detected code blocks through the analyzer (similar to the "doctest" feature of `rustdoc`), and inserting snapshots of the emitted diagnostics if the analysis returned errors (using a newly-introduced HTML printer for `rome_console` markup).

Test Plan
At the moment I've only run the new generation command manually, but it will probably need to be integrated into the automated test suite to ensure the generated files are kept up to date.
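As a rough illustration of the metadata surface the summary describes, here is a self-contained sketch. The `RuleMetadata` shape, the field names, and the sample doc content are assumptions for illustration only; the real `RuleRegistry` and `metadata()` live in the rome crates.

```rust
/// Hypothetical per-rule metadata: the rule name plus the raw markdown
/// extracted from its doc-comment. Not the actual rome_js_analyze types.
#[derive(Debug)]
pub struct RuleMetadata {
    pub name: &'static str,
    pub docs: &'static str,
}

/// Stand-in for the aliased `rome_js_analyze::metadata()` entry point,
/// returning the metadata of every registered rule.
pub fn metadata() -> Vec<RuleMetadata> {
    vec![RuleMetadata {
        name: "noNegationElse",
        docs: "## Examples\n\n### Invalid\n\n```js\nif (!cond) {} else {}\n```\n",
    }]
}

fn main() {
    for rule in metadata() {
        // A generator like xtask_lintdoc would parse `rule.docs` as markdown
        // here and run its code blocks through the analyzer.
        println!("{}: {} bytes of documentation", rule.name, rule.docs.len());
    }
}
```

With a surface like this, the documentation generator only needs to iterate the registry: no per-rule wiring is required beyond writing the doc-comment.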