Allow (pre)parser extension #700

Closed
twitwi opened this issue Sep 2, 2022 · 8 comments
Assignees: twitwi
Labels: enhancement (New feature or request), stale

Comments

twitwi (Contributor) commented Sep 2, 2022

Is your feature request related to a problem? Please describe.
For presentations that aggregate sub-slides from external files, even writing the frontmatter takes a lot of space: two --- separators (one closing the frontmatter, one after the empty slide content) plus the blank line.
In a previous system, I had a custom include syntax:

#@chunk: title.md
#@chunk: objectives.md
#@chunk: toc.md

In Slidev, the same thing now takes a lot of vertical boilerplate:

src: title.md
---

---
src: objectives.md
---

---
src: toc.md
---

---

There are other cases where a compact custom notation would be nice to have, e.g. opening a new slide at every "# some title" heading (so no --- everywhere), or an alternate syntax to open a slide with a specific layout, such as @cover(bg.jpg), etc.
For some of these use cases, the solution below might also be simpler than adding rules to the markdown-it parser (the blocking point is really what happens before markdown parsing, i.e. the split on ---).
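
For illustration, here is a minimal sketch of the kind of pre-processing this is about, assuming a hypothetical expandChunks helper (not an existing Slidev API) that rewrites the old #@chunk: shorthand into the standard src: slide boilerplate before the source is split on ---:

```ts
// Hypothetical pre-processing step: expand the old `#@chunk: file.md`
// shorthand into standard slide boilerplate before the split on `---`.
export function expandChunks(markdown: string): string {
  return markdown
    .split(/\r?\n/)
    .map((line) => {
      const m = line.match(/^#@chunk:\s*(\S+)\s*$/)
      if (!m)
        return line
      // One compact line becomes a full slide referencing an external file.
      return `---\nsrc: ${m[1]}\n---\n`
    })
    .join('\n')
}
```

Applied to the three #@chunk: lines above, this yields roughly the verbose form shown earlier.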

Describe the solution you'd like
A flexible, minimal-change solution would be to add an extension point to the parser that addons and the user project can customize.
E.g. allow calling a (chain of) custom parsers, for instance here:

if (line.match(/^---+/)) {

Things to consider, without too much filtering (a rough sketch of such an extension point follows this list):

  • pass the list of lines and the current line index to the extension,
  • allow the extension to consume/replace/remove lines, to return an index at which parsing should continue (which will often be unchanged), and to report whether it actually did something,
  • fall back to the default behavior if no extension did anything for a given line,
  • loop properly, so that an extension can process content generated/modified by another extension,
  • nice to have but maybe too much: a notion of priority/phases to control the order of the chain.
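
A minimal sketch of what that could look like, with hypothetical names (not a final API):

```ts
// Hypothetical extension point: each extension sees the raw lines and the
// current index, may splice lines in place, and reports whether it handled
// anything plus the index at which parsing should continue.
export interface PreparserExtension {
  transformRawLines?: (
    lines: string[],
    index: number,
  ) => { handled: boolean, nextIndex: number }
}

// Returns the continuation index if some extension handled the line, or
// undefined so the caller falls back to the default `---` handling.
export function runExtensions(
  lines: string[],
  index: number,
  extensions: PreparserExtension[],
): number | undefined {
  for (const ext of extensions) {
    const res = ext.transformRawLines?.(lines, index)
    if (res?.handled)
      return res.nextIndex // loop again so others can see the produced content
  }
  return undefined
}
```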

Describe alternatives you've considered
I could write a script (bash, python) that generates the actual md file...

twitwi added the enhancement (New feature or request) label on Sep 2, 2022
antfu (Member) commented Sep 2, 2022

Have you tried writing an inline Vite plugin to do the transformation? If that doesn't help, I am fine with exposing some capability for such customization.
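
For reference, an inline Vite plugin of the kind suggested here could look roughly like the sketch below (the plugin name and the #@chunk: shorthand are placeholders from the example above; whether the hook actually sees the right file is the question discussed next):

```ts
// vite.config.ts — a minimal inline plugin sketch that rewrites the
// hypothetical `#@chunk:` shorthand in markdown modules before Slidev
// processes them.
import { defineConfig } from 'vite'

export default defineConfig({
  plugins: [
    {
      name: 'expand-chunk-shorthand',
      enforce: 'pre',
      transform(code, id) {
        if (!id.endsWith('.md'))
          return
        return code.replace(/^#@chunk:\s*(\S+)\s*$/gm, '---\nsrc: $1\n---\n')
      },
    },
  ],
})
```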

twitwi (Contributor, Author) commented Sep 2, 2022

I'll have a look at what can be done with Vite; it is still a technology I don't know very well.

twitwi (Contributor, Author) commented Sep 2, 2022

@antfu can Vite plugins "intercept" a file read done with fs, as in this case?

const markdown = content ?? await fs.readFile(filepath, 'utf-8')

I tried a simple plugin, but it seems to be called only with the individual virtual 1.md, 2.md, ... files, not with the entry point.
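
A minimal probe of the kind tried here (illustrative names only): it just logs which markdown module ids actually reach the Vite hooks, which is how the virtual per-slide modules show up while the fs-read entry file does not.

```ts
import type { Plugin } from 'vite'

// Log every markdown-related module id that passes through the plugin
// pipeline; files read directly with fs.readFile never appear here.
export function logMarkdownIds(): Plugin {
  return {
    name: 'log-markdown-ids',
    enforce: 'pre',
    load(id) {
      if (id.includes('.md'))
        console.log('[load]', id)
    },
    transform(_code, id) {
      if (id.includes('.md'))
        console.log('[transform]', id)
    },
  }
}
```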

antfu (Member) commented Sep 3, 2022

I see. A PR proposing the interface you have in mind is welcome (starting with a more detailed API design first would also be great).

twitwi (Contributor, Author) commented Sep 3, 2022

I started thinking about it. I imagine a few callbacks from the parser to the plugin. Do you prefer a style where the API exposes several methods, or rather a single method taking a kind of event type (i.e. the plugin implements one method that dispatches on the type)? A sketch of both styles follows.
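
To make the question concrete, a sketch of the two styles with purely illustrative type names (not a proposed final API):

```ts
// Style A: one optional callback per hook.
export interface PreparserPluginA {
  onHeadmatter?: (headmatter: Record<string, unknown>) => void
  onRawLine?: (lines: string[], index: number) => number | undefined
}

// Style B: a single method dispatching on an event type.
export type PreparserEvent =
  | { type: 'headmatter', headmatter: Record<string, unknown> }
  | { type: 'raw-line', lines: string[], index: number }

export interface PreparserPluginB {
  handle: (event: PreparserEvent) => number | void
}
```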

For the "chain" aspect, where e.g. several (slidev) addons can each add a parser plugins and the user folder can also add parser plugins,
I think the core of the solution is related to the same question with shortcuts #629 ... any suggestions on how you would do that best? I guess it is almost handled by https://github.com/slidevjs/slidev/blob/main/packages/slidev/node/plugins/setupClient.ts#L27 so I would update there.
I PR'd a /* chained_injections */ that should be sufficient (no fine control on the order but I don't think it would be used a lot anyways) #702

twitwi self-assigned this on Sep 3, 2022
twitwi (Contributor, Author) commented Sep 5, 2022

NB: the (chained) injection unfortunately runs on the client side, but the preparser runs on the node side, so I'll have to update the server extension code, i.e.:

export async function loadSetups<T, R extends object>(roots: string[], name: string, arg: T, initial: R, merge = true): Promise<R> {
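
A heavily hedged sketch of how that helper might be reused to collect preparser extensions from the user project and the addon roots (the preparser.ts setup file name and the return shape are assumptions, not the actual implementation):

```ts
// Assumption-heavy usage sketch (not the actual implementation): aggregate
// preparser extensions contributed via a hypothetical `preparser.ts` setup
// file in each root, using the loadSetups helper above.
// `roots` = user root plus addon roots, resolved elsewhere on the node side.
const preparserSetups = await loadSetups<object, { extensions: object[] }>(
  roots,
  'preparser.ts',     // hypothetical setup file name
  {},                 // argument handed to each setup function
  { extensions: [] }, // initial value to merge setup results into
)
```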

twitwi (Contributor, Author) commented Sep 8, 2022

NB: the situation is even trickier, as:

  • to know which addons are used, we need the headmatter (the frontmatter of the first slide),
  • so the parser basically has to run addon-less until it has read the headmatter,
  • only then can it load the addons, asynchronously, using config information from another package (the node part, which is not a dependency of the parser part),
  • only then can it go on parsing with the possible addons enabled.

I've come up with a somewhat convoluted solution where the node module injects an addon loader function into the parser, and the parser calls it once it has found the headmatter.
I'll clean it up a little and push it for feedback.
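
A sketch of that injected-loader idea, with assumed names and shapes rather than the code that was eventually pushed:

```ts
// The parser package keeps no node-only dependency: the node side injects
// an addon loader before parsing starts, and the parser calls it once the
// headmatter has been read.
export interface PreparserExtension {
  transformRawLines?: (lines: string[], index: number) => number | undefined
}

type AddonLoader = (headmatter: Record<string, unknown>) => Promise<PreparserExtension[]>

let loadAddonExtensions: AddonLoader | undefined

export function injectPreparserAddonLoader(loader: AddonLoader) {
  loadAddonExtensions = loader
}

export async function parse(lines: string[]) {
  // Phase 1: run addon-less until the headmatter has been read.
  const headmatter: Record<string, unknown> = {} // the real parser reads YAML here
  // Phase 2: resolve the addons' extensions asynchronously via the injected
  // loader, then continue parsing the remaining slides with them enabled.
  const extensions = (await loadAddonExtensions?.(headmatter)) ?? []
  // ... the rest of the slide parsing would use `extensions` from here on ...
  return { headmatter, extensions, lines }
}
```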

stale bot commented Nov 7, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Nov 7, 2022
stale bot closed this as completed on Nov 14, 2022