Evaluate and Cache new Code Chunks in Documentation Mode #19
If I add a new chunk after the previous chunks are cached, I get an exception. I was assuming that the caching mechanism would notice the missing chunk, evaluate and cache it, then proceed. Is that the intended functionality?
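For concreteness, here is a rough sketch of the behavior I was expecting; the names (`run_chunk`, `evaluate`, `CACHE_DIR`) are hypothetical and not Pweave's actual internals:

```python
import hashlib
import os
import pickle

CACHE_DIR = ".pweave_cache"  # hypothetical cache location

def run_chunk(source, evaluate):
    """Return the cached result for a chunk, evaluating only on a cache miss."""
    key = hashlib.sha1(source.encode("utf-8")).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # unchanged chunk: reuse cached output
    result = evaluate(source)      # new or edited chunk: evaluate it now
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result
```

(As noted in the comments below, this only helps when a new chunk doesn't depend on state created by earlier chunks.)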
I have the same problem, please fix this. The way it works now, I have to re-cache all chunks with `Pweave -f texminted -c %.texw` whenever I add a new chunk.
I don't have time to work on this at the moment. I agree that the implementation is not ideal; you're welcome to submit a pull request if you have a suggestion on how to fix it. Note that Pweave only caches input and output text, not Python objects, so if new chunks need data from old ones there is no easy fix to this problem.
Gotcha. I've been making some small changes toward those ends.
Seems like one could simply bypass caching in documentation mode and use the caching magic in an IPython processor; a subclass of the existing processor might suffice.
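The "caching magic" could be something like the third-party ipycache extension; the interface shown below is an assumption about that extension, and `fit_model`/`summarize`/`data` are placeholders:

```python
# Assumed interface of the third-party ipycache extension: %%cache pickles
# the listed variables and, on later runs, loads them from the file instead
# of re-executing the cell.

# First cell:
%load_ext ipycache

# Second cell (%%cache must be the first line of its cell):
%%cache expensive_chunk.pkl model results
model = fit_model(data)      # placeholder names for an expensive computation
results = summarize(model)
```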
Has there been any activity on this? I'd really appreciate chunk-level caching functionality, which seems closely related. Thanks for creating Pweave! It's encouraged me to plot more graphs, which is always good :-)
I've been slowly taking a shot at improved caching (see here), but progress has been slow due to multiple competing interests, namely a desire to …
@brandonwillard Those are multiple big changes you are talking about. Please don't submit them as one pull request; split them into separate ones. Note: …

I suggest you first do: …

I have decided not to allow multi-line chunk options, as it breaks editor support and I haven't seen a compelling need for it. If you can come up with a proper implementation with tests, I can accept it, but submit it as a separate pull request.
Oh, sorry, I hadn't done that work with a PR in mind; it was just a test branch that started with caching and turned into all sorts of stuff. If there's interest in those latter two goals, I can separate them and make PRs. As for the nbformat idea, I can open an issue discussing my reasons.
@brandonwillard, how were you thinking of implementing this?
Ah, yeah, I left off with the idea of incrementally pickling the session. At around the same time, I was experimenting with a more granular, variable-level caching that uses code/ASTs extracted from the chunk source. Regardless, I've gone full org-mode nowadays, so I don't know when I'll get time to jump back into this!
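The variable-level idea can be illustrated with the standard `ast` module; this is my own rough sketch of the dependency-extraction step, not code from that branch:

```python
import ast

def chunk_names(source):
    """Very rough (defined, used) name extraction for one chunk's source."""
    tree = ast.parse(source)
    defined, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            defined.add(node.name)
    return defined, used - defined

# A chunk would only need re-evaluation when one of its `used` names was
# redefined by an earlier chunk whose own source changed. (This ignores
# imports, attribute access, and scoping, so it is only a starting point.)
```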
Thanks @brandonwillard. |
@brandonwillard, both of the approaches you considered seem particular to Python. Currently, it looks like Pweave is trying not to be tied to Python by using Jupyter to allow different kernels. Do you know if Jupyter kernel managers have a language-independent means to serialize the state of a kernel?
Yeah, I think that any non-naive caching (i.e. more than just caching output and validating against source text differences) is necessarily language-specific. However, more than a few popular languages have straightforward runtime bytecode tools, AST generation and, at the very least, introspection capabilities. As with Python, a less naive caching scheme can be implemented with those.

Regarding Jupyter, it would be fantastic to see an abstraction of bytecode and/or AST objects exposed by the client protocol. The project has a somewhat related idea in its introspection messages. Otherwise, one can always implement smart caching at the kernel level and use custom messages.
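As a Python-only illustration of why kernel-level state serialization is language-specific: the `dill` package, unlike `pickle`, can snapshot most of an interpreter session. A minimal sketch, not anything Pweave currently does:

```python
import dill

# Snapshot the interpreter's module-level state after running the cached
# chunks ...
dill.dump_session("pweave_session.pkl")

# ... then, on a later run, restore that state and evaluate only the new
# chunks instead of re-executing everything from the top.
dill.load_session("pweave_session.pkl")
```

Each kernel language would need its own equivalent of this, which is what makes a language-independent protocol for it hard.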