How far could I get with using the nbconvert preprocessors on their own? #111
Comments
Hey @choldgraf, that's funny, I've just been contacted by @jstac about collaborating within the Sloan Grant that he said you are also a part of, and you've literally just been cc'd into his response lol!
Would it not be better to do this within the Sphinx framework?
That's probably because, initially, I was only working with preprocessors, then the
I'd say one of the issues with using Pandoc within the current nbconvert conversion mechanism,
Yo - a few responses below:
Totally - for me the biggest challenge is that my experience using Markdown in Sphinx w/ the recommonmark extension has not been great, and I really don't want folks to have to use rST :-/ That said, I do a lot of sphinx work as well so it's something I'd be happy to revisit (I'd probably want to use it under-the-hood, since Jupyter Book is meant to be language-agnostic as well).
Makes sense to me - I agree that it seems clunky to have multiple conversion steps. We worked with John from the Pandoc project last year to get
ah, I see you've already thought about using Pandoc for notebooks in #79 :-)
Me neither :) That's certainly a drawback of pandoc. Although, as I mentioned, panflute has made it a lot easier to manipulate the AST within Python.
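To make the AST point concrete: pandoc represents documents as a JSON tree of `{"t": <type>, "c": <contents>}` nodes, and a filter (the thing panflute wraps ergonomically) is just a function applied to every node. Here's a minimal, dependency-free sketch of that idea; the `walk`/`shout` names and the upper-casing transformation are invented for illustration, not panflute's API.

```python
# Pandoc-style JSON AST nodes look like {"t": "Str", "c": "hello"}.
# A filter visits every node and may return a replacement; this is the
# same shape of transformation a panflute "action" performs.

def walk(node, action):
    """Recursively apply `action` to every dict node in a pandoc-style AST."""
    if isinstance(node, dict):
        node = action(node)
        if "c" in node:
            node["c"] = walk(node["c"], action)
        return node
    if isinstance(node, list):
        return [walk(item, action) for item in node]
    return node  # plain strings and numbers are leaves


def shout(node):
    """Example action: upper-case every Str node, leave everything else alone."""
    if node.get("t") == "Str":
        return {"t": "Str", "c": node["c"].upper()}
    return node


para = {"t": "Para", "c": [{"t": "Str", "c": "hello"},
                           {"t": "Space"},
                           {"t": "Str", "c": "world"}]}
result = walk(para, shout)
print(result)  # the two Str nodes become "HELLO" and "WORLD"
```

A real panflute filter would read this JSON from pandoc on stdin and write the transformed tree back out, but the walk-and-replace pattern is the same.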
Yeah I agree - that's pretty cool. I also heard that somebody is working on a Rust implementation of Pandoc, which would be way cooler than learning Haskell IMO (though to be honest I don't have time to learn either lol). Either way, am I correct in concluding that the nbconvert preprocessors are probably not the best way to utilize ipypublish's functionality?
Yeh I'd say no, not for Markdown to HTML conversion.
Sounds good - I'll consider this issue closed then. I'll try to figure out other ways to leverage this build system.
Hey there 👋 I think this is a really cool project, thanks for building it!
I'm working on a similar project for publishing HTML-based books in the Jupyter ecosystem (called Jupyter Book). I'm wondering if I could leverage some (maybe all?) of ipypublish for the HTML generation process.
Currently, I am doing these two things in building a book:

1. From `ipynb` and text files, first build an HTML page for each (with no header).
2. Stitch those pages together into a book with a static site generator (SSG).

For 1, I'm using a combination of nbconvert templates and preprocessors. The goal is to output a single HTML file for each page that can be stitched together as a book by the SSG. Currently, it uses a standard nbconvert Markdown -> HTML pipeline, which misses a lot of features (such as citations, math notation, and captions) that ipypublish seems to provide.
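For context, nbconvert preprocessors expose a `preprocess(nb, resources)` hook that receives the notebook and a resources dict and returns both, possibly modified. This sketch mimics that contract on a plain-dict notebook so it runs without nbconvert or nbformat installed; the `StripEmptyCells` class and its behavior are invented for illustration.

```python
# Sketch of the nbconvert preprocessor contract: preprocess(nb, resources)
# -> (nb, resources). Real preprocessors subclass
# nbconvert.preprocessors.Preprocessor; here we use a plain class and a
# plain-dict notebook to keep the example dependency-free.

class StripEmptyCells:
    """Drop cells whose source is empty, like a minimal preprocessor would."""

    def preprocess(self, nb, resources):
        nb["cells"] = [c for c in nb["cells"] if c["source"].strip()]
        return nb, resources


nb = {"cells": [{"cell_type": "markdown", "source": "# Title"},
                {"cell_type": "code", "source": "   "},
                {"cell_type": "code", "source": "print('hi')"}]}
nb, resources = StripEmptyCells().preprocess(nb, {})
print(len(nb["cells"]))  # 2 (the blank code cell is removed)
```

In a real pipeline, such a class would be registered on an exporter's `preprocessors` list so it runs before the template renders each cell.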
I'm wondering if I could use `ipypublish` for some or all of the single-page generation process (e.g. either using some of the nbconvert preprocessors, or just using `ipypublish` directly instead of my own code). However, thus far I have tried to avoid a dependency on Pandoc because of the extra overhead it creates at build time (not a big deal if you're only building one page, but problematic if you're building 100).

I'm curious how far one could get using this tool with the nbconvert preprocessors alone. It seems there's some functionality for processing LaTeX tags etc., though it also seems that the pandoc filters overlap somewhat in their feature set. If any of this would be a helpful addition to the documentation, I'm happy to make some PRs to add what I learn. I'd love to leverage this tool and contribute improvements upstream, rather than maintaining my own HTML-generation code, if that makes sense without too much added complexity.
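As a toy illustration of the kind of LaTeX-tag processing mentioned above, a preprocessor-style step could rewrite `\cite{key}` commands in a Markdown cell into HTML citation links with a regex, without invoking Pandoc at all. The output format below is invented for illustration and is not ipypublish's actual markup.

```python
import re

# Rewrite \cite{key} commands into HTML citation anchors. This sketches the
# sort of LaTeX-tag handling a pure-Python preprocessor could do without a
# Pandoc dependency; the <cite>/<a> output shape is a made-up example.

CITE_RE = re.compile(r"\\cite\{([^}]+)\}")


def cites_to_html(markdown: str) -> str:
    """Replace every \\cite{key} with an HTML link to an in-page anchor."""
    return CITE_RE.sub(r'<cite><a href="#\1">[\1]</a></cite>', markdown)


print(cites_to_html(r"As shown in \cite{perez2007}, notebooks are useful."))
```

Regex handling like this covers simple cases cheaply, which is the trade-off against a full Pandoc parse: faster builds, but no awareness of nesting or verbatim contexts.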
I will continue digging into the code but I thought I'd ask in the meantime :-)