
Write a test for missing tokens #111

Closed
Carreau opened this issue Sep 21, 2015 · 3 comments
@Carreau
Member

Carreau commented Sep 21, 2015

Missing token definitions when converting to PDF.

Carreau added this to the 4.1 milestone Sep 21, 2015
@mpacer
Member

mpacer commented Sep 22, 2015

Adding link: #110

minrk changed the title from "Write a test for #110" to "Write a test for missing tokens" Sep 22, 2015
@mpacer
Member

mpacer commented Sep 22, 2015

So in looking into this, it's easy enough to use the code linked in #110 to create an example notebook for testing these particular tokens, but that feels like a band-aid that will be ripped off the next time the token list changes. The changing token list has already been a problem for the LaTeX template.

Looking at the preamble to the LaTeX base template, it seems that the set of tokens has changed over the course of nbconvert's development. Some of them are defined more than once: e.g., \BuiltInTok is defined on both lines 74 and 104 in https://github.com/jupyter/nbconvert/blob/master/nbconvert/templates/latex/base.tplx (in fact, given that both were declared with \newcommand and not \renewcommand, I'm surprised it didn't throw a warning).
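A quick sketch of how those duplicates could be flagged automatically (the template path is assumed; adjust it to wherever base.tplx lives in your checkout):

```python
import re
from collections import Counter

# Collect every \XTok command the template defines via \newcommand.
with open("nbconvert/templates/latex/base.tplx") as f:
    definitions = re.findall(r"\\newcommand\{\\([A-Za-z]+Tok)\}", f.read())

# Any name that appears more than once (e.g. \BuiltInTok) is a duplicate.
duplicates = [name for name, count in Counter(definitions).items() if count > 1]
print("defined more than once:", duplicates or "none")
```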

I've hunted down the source of the token names: the highlighting-kate syntax highlighting engine, pulled in either through pygments or through pandoc. I have a hunch it's pandoc, but @jhamrick suggested it might be happening via pygments and I haven't figured out which yet. In either case, I think we should be able to extract a list of all standard token types (regardless of the chosen style) on the fly.

And that leads me to my current question. The easiest thing to do, for now, is just to run nbconvert on a notebook that happens to contain those token types. The more robust solution would be to generate valid LaTeX that is guaranteed to contain all the token types nbconvert can produce (after extracting them directly from highlighting-kate), and ensure that it compiles.

I feel like for a test, the robust solution is preferable, but I'm not sure if that fits the goal of this issue.
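For concreteness, here is a rough sketch of what the robust check might look like, assuming pandoc is the source of the \XTok commands and is on the PATH; the sample markdown and the template path are purely illustrative, and ideally the sample would be generated from highlighting-kate's full token list rather than hand-written:

```python
import re
import subprocess

# A hand-written snippet of highlighted code, just to illustrate the shape of
# the check; a real test would exercise every token type.
SAMPLE_MARKDOWN = "~~~{.python}\nimport os\nx = 3.14 + 0xff\nprint('hello')\n~~~\n"

def pandoc_token_commands(markdown):
    """Return the set of \\XTok commands pandoc emits when highlighting this markdown."""
    tex = subprocess.run(
        ["pandoc", "--from", "markdown", "--to", "latex"],
        input=markdown, capture_output=True, text=True, check=True,
    ).stdout
    return set(re.findall(r"\\([A-Za-z]+Tok)\b", tex))

def template_token_commands(template_path):
    """Return the set of \\XTok commands the template defines."""
    with open(template_path) as f:
        return set(re.findall(r"\\newcommand\{\\([A-Za-z]+Tok)\}", f.read()))

used = pandoc_token_commands(SAMPLE_MARKDOWN)
defined = template_token_commands("nbconvert/templates/latex/base.tplx")
missing = used - defined
assert not missing, "template is missing definitions for: %s" % sorted(missing)
```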

@Carreau
Member Author

Carreau commented Sep 22, 2015

Simply a notebook that fails conversion would be fine for now; that should not be too hard, since the conversion only fails when going to PDF. Though installing LaTeX might be too heavy.

Go for whichever is easier to do.
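For the easy route, something like this minimal sketch would do (the fixture notebook name is a placeholder for a notebook known to trigger the undefined-token error, and the test skips itself when no LaTeX toolchain is installed):

```python
import shutil

import pytest
from nbconvert import PDFExporter

@pytest.mark.skipif(shutil.which("xelatex") is None, reason="needs a LaTeX toolchain")
def test_pdf_export_defines_all_tokens():
    # "missing_tokens.ipynb" is a hypothetical fixture; the test only checks
    # that the PDF export completes without a LaTeX error.
    exporter = PDFExporter()
    body, resources = exporter.from_filename("missing_tokens.ipynb")
    assert body
```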
