Enable unique slugs across multiple files #20
A similar candidate for such a feature would be remark-reference-links, which starts counting reference ids in a similar fashion on a per-tree/per-file basis.
What is the reason you don't first concatenate the markdown?
Fair and reasonable question.
glossarify-md would be able to handle such a glossary/document duality. But it would make every heading phrase in that file subject to "term-ination" and auto-linking, which might not be what its users want and thus should not be forced on users whose only goal is to linkify glossary terms.
For the sake of transparency, I do not want to hide that, having tested my patch locally, it doesn't get me the whole way yet. There are still some rough edges to be tackled with pandoc as the concatenator and postprocessor. But apart from that, "fileset uniqueness" of heading IDs remains a step in that direction. So if we focus on the question of whether such a property and option would be a nice addition, particularly when using the plug-in with unified-engine, then I'd be willing to complete the drafted PR with tests and to provide a similar PR to remark-reference-links, too.
The main problem I see is that unified pipelines handle one file at a time, and if you pass the same file through twice, users expect the same result. Likewise, if files are passed through in different orders (e.g., because of async), the same output would be expected as well. So I don't really see an option like this solving your original issue. Or perhaps it works for you, but it would have unexpected consequences for other users. I also have some experience with a similar problem: one big markdown file that's split up into multiple HTML files (for EPUBs, which often do that to improve rendering speed).
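The idempotence concern can be illustrated with a toy counter-based slugger. This is a hypothetical stand-in for illustration, not github-slugger's actual implementation:

```javascript
// Hypothetical stand-in for github-slugger, for illustration only.
function makeSlugger() {
  const seen = new Map();
  return function slug(text) {
    const base = text.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    const n = seen.get(base) || 0;
    seen.set(base, n + 1);
    return n === 0 ? base : `${base}-${n}`;
  };
}

// A shared slugger that is never reset between runs:
const slug = makeSlugger();

// First pass over a file in watch mode:
const first = ["Intro", "Intro"].map((h) => slug(h));  // ["intro", "intro-1"]

// The file changes and the SAME tree is processed again:
const second = ["Intro", "Intro"].map((h) => slug(h)); // ["intro-2", "intro-3"]

// Same input, different output: the pipeline is no longer idempotent.
```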
Okay, I see. In a watch mode we would likely see sequentially increasing numbers by adding to the state of the previous run, and results would vary depending on how often a file was changed. Well, a bit of a pity, but convincing. Time to go on with Sinatra 🎼 and do it my way. It's been a pleasure to contribute. Maybe next time.
Hi team! Could you describe why this has been marked as wontfix? Thanks!
Would love to have you as a contributor in the future! All the best!
Initial checklist
Problem
Given a headline that appears multiple times within a set of markdown files being processed by unified-engine and remark-slug, resetting the slugger within the transformer produces slugs that are unique only on a per-file basis.
As a remark-slug and unified-engine user, I would like to be able to postprocess markdown files with pandoc and concatenate them into a single output file. In such a scenario I need slugs that are unique across the whole set of files and across multiple syntax trees, so that uniqueness is maintained after concatenation.
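The difference between per-file and cross-file uniqueness can be sketched with a toy slugger. This is illustrative only; the real remark-slug uses github-slugger and walks mdast trees:

```javascript
// Toy slugger, an illustrative stand-in for github-slugger.
function makeSlugger() {
  const seen = new Map();
  return function slug(text) {
    const base = text.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    const n = seen.get(base) || 0;
    seen.set(base, n + 1);
    return n === 0 ? base : `${base}-${n}`;
  };
}

// Headings of two hypothetical files, doc-a.md and doc-b.md:
const files = [
  ["Introduction", "Usage"],
  ["Introduction", "Usage"],
];

// Current behaviour: a fresh slugger per tree → duplicates across files.
const perFile = files.map((headings) => {
  const slug = makeSlugger();
  return headings.map((h) => slug(h));
});
// → [["introduction", "usage"], ["introduction", "usage"]]

// Requested behaviour: one slugger for the whole set → unique slugs.
const sharedSlug = makeSlugger();
const crossFile = files.map((headings) => headings.map((h) => sharedSlug(h)));
// → [["introduction", "usage"], ["introduction-1", "usage-1"]]
```

After concatenating doc-a.md and doc-b.md with pandoc, only the second variant keeps every anchor addressable.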
Solution
I propose an option `multifile`, which is `false` by default for backwards compatibility but, when `true`, prevents the slugger from being reset.
Open questions
How should the option be named? Candidates: `reset`, `autoreset`, `resetSlugs`.
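Assuming the option keeps one slugger instance per plugin attach and merely skips the reset, the proposed `multifile` switch could look roughly like this. This is a sketch over plain heading arrays, not the actual remark-slug source:

```javascript
// Toy slugger with an explicit reset, standing in for github-slugger.
function makeSlugger() {
  const seen = new Map();
  return {
    reset() { seen.clear(); },
    slug(text) {
      const base = text.toLowerCase().replace(/[^a-z0-9]+/g, "-");
      const n = seen.get(base) || 0;
      seen.set(base, n + 1);
      return n === 0 ? base : `${base}-${n}`;
    },
  };
}

// Sketch of a unified-style plugin: one slugger per attach,
// reset per file unless `multifile` is set.
function slugPlugin({ multifile = false } = {}) {
  const slugger = makeSlugger();
  return function transformer(headings) {
    if (!multifile) slugger.reset(); // default: per-file uniqueness
    return headings.map((h) => slugger.slug(h));
  };
}

// Default behaviour: the slugger is reset on every file.
const perFile = slugPlugin();
// multifile: state survives across files in the same run.
const crossFile = slugPlugin({ multifile: true });
```

With `multifile: true` a single run over a file set yields globally unique slugs; the trade-off discussed in the comments is that repeated runs over the same file are then no longer deterministic.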
Alternatives
Given the widespread use of remark and pandoc a contribution to remark-slug may be the best alternative.