forked from mjpieters/collective.transmogrifier
Add info during processing, easing e.g. tarball download support #8
Open
tobiasherp wants to merge 29 commits into collective:master from tobiasherp:add-info
Conversation
If a section tells the transmogrifier about the created archive (by calling transmogrifier.add_info('export_content', name, info), e.g. at the end of the __iter__ method), the calling method can return this archive to the user for download. This way, transmogrifier becomes a tool that can be offered to "normal" users for flexible exports, without the need to edit the pipeline configuration first.
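A minimal sketch of how this could fit together; apart from the add_info('export_content', name, info) call described above, all names here (the stripped-down Transmogrifier, get_info, ExportSection) are hypothetical stand-ins for illustration:

```python
class Transmogrifier:
    """Stripped-down stand-in that only holds the proposed info store."""

    def __init__(self):
        self._info = []  # chunks of (category, section_name, info)

    def add_info(self, category, name, info):
        self._info.append((category, name, info))

    def get_info(self, category=None, name=None):
        """Return all chunks matching the given category and/or name."""
        return [chunk for chunk in self._info
                if (category is None or chunk[0] == category)
                and (name is None or chunk[1] == name)]


class ExportSection:
    """A section that registers the archive it created."""

    def __init__(self, transmogrifier, name, previous):
        self.transmogrifier = transmogrifier
        self.name = name
        self.previous = previous

    def __iter__(self):
        for item in self.previous:
            yield item
        # At the end of __iter__, tell the transmogrifier about the archive:
        self.transmogrifier.add_info('export_content', self.name,
                                     {'filename': 'export.tar.gz'})
```

After the pipeline has run, the calling code could look up get_info('export_content') and hand the archive to the user, without knowing which section created it.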
Allow sections to count the items they created, forwarded etc., and provide a method to print a summary. This requires the blueprints to create their count methods themselves, but this is done with a convenient factory function, and the counting itself is designed to be as performant as possible. The final summary makes it easy to spot the sections where many items were unexpectedly lost.
Created a summary section which makes use of the counting facility. It demonstrates the usage if count=true is configured (other sections probably won't use this condition, because the counting is very efficient anyway).
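A sketch of such a counting factory; the name make_counter and the details are assumptions on my part, only the design goals (per-section count methods, near-zero overhead when disabled) come from the description above:

```python
from collections import Counter


def make_counter(section, enabled=True):
    """Attach a Counter to the section and return a fast count callable."""
    if not enabled:
        # count=false: a no-op keeps the per-item cost negligible
        section.counts = None
        return lambda key='forwarded': None
    counts = Counter()
    section.counts = counts

    def count(key='forwarded'):
        counts[key] += 1
    return count


def print_summary(sections):
    """One line per section; helps spotting where items were lost."""
    for section in sections:
        if getattr(section, 'counts', None) is not None:
            print(section.name, dict(section.counts))
```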
Counting occurs *before* yield, because that is the moment when the method loses control; this way the counting is more likely to be accurate.
If e.g. no objects are created at all because of missing type information, this is important to know ...
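The before-yield point can be shown with a plain generator (names assumed). Once the generator yields, the consumer may stop iterating and never hand control back, so counting afterwards could silently miss the last item:

```python
def forwarding_section(previous, count):
    """Forward items, counting each one *before* yielding it."""
    for item in previous:
        count('forwarded')  # before yield, so the count stays accurate
        yield item
```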
A little development/debugging helper for pipelines: the itemInfo function prints compact information about the given item. The values of a selected subset of keys are printed (_type and _path by default), and the remaining keys are listed. By default, only the first item in a loop is printed. It is possible to explicitly show the first item with a certain quality (specify showone='<key>' in the calling code). The itemInfo function returns a boolean value which tells whether it decided to print.
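A sketch of how such a helper might look; the signature and the module-level state are my assumptions, the behaviour (default keys, first-item-only, showone, boolean return) is taken from the description:

```python
_shown = set()  # remembers which "qualities" have already been printed


def itemInfo(item, keys=('_type', '_path'), showone=None):
    """Print compact info about the first matching item in a loop;
    return whether anything was printed."""
    if showone is not None and showone not in item:
        return False  # item lacks the requested quality
    quality = showone if showone is not None else ''
    if quality in _shown:
        return False  # only the first item per quality is printed
    _shown.add(quality)
    shown = ', '.join('%s=%r' % (key, item[key])
                      for key in keys if key in item)
    rest = ', '.join(sorted(set(item) - set(keys)))
    print('%s (other keys: %s)' % (shown, rest))
    return True
```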
Allow sections to count the items they created, forwarded etc., and provide a method to print a summary. This requires the blueprints to create their count methods themselves, but this is done with a convenient factory function, and the counting itself is designed to be as performant as possible. The final summary makes it easy to spot the sections where many items were unexpectedly lost. Conflicts: src/collective/transmogrifier/transmogrifier.py
Here is all the meat!
Produces a pretty string which represents the _all attribute of a Transmogrifier object. Can return a list instead.
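Such a pretty-printer might look like the following; the layout and the assumption that _all maps section names to option dicts are mine:

```python
def pretty(all_options, as_list=False):
    """Render the _all attribute (assumed: section name -> option dict)
    as an indented string, or as a list of lines."""
    lines = []
    for name in sorted(all_options):
        lines.append('[%s]' % name)
        for key, value in sorted(all_options[name].items()):
            lines.append('    %s = %s' % (key, value))
    return lines if as_list else '\n'.join(lines)
```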
When ConfigurationRegistry.clear is fired unexpectedly, this can go unnoticed and cause transmogrifications to fail. Thus, this behaviour can be customized by setting the CLEANUP variable. To suppress the call by the zope.testing.cleanup facility entirely, this variable must be changed to OFF in the module code, because the registration happens at import time. For other uses, the variable can be set after importing. The present SILENT value reflects the previous behaviour. Conflicts: src/collective/transmogrifier/transmogrifier.py
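A sketch of the switch as I understand it; the value names OFF and SILENT come from the description, the registry stand-in and the exact mechanics are assumed:

```python
OFF = 'OFF'        # never register clear() with zope.testing.cleanup
SILENT = 'SILENT'  # register it and clear silently -- previous behaviour

CLEANUP = SILENT   # must be OFF *in the module code* to act at import time


class ConfigurationRegistry:
    """Stand-in for the real registry; just enough to show clear()."""

    def __init__(self):
        self._config_info = {}

    def registerConfiguration(self, name, info):
        self._config_info[name] = info

    def clear(self):
        self._config_info.clear()


configuration_registry = ConfigurationRegistry()

# Registration happens at import time, so OFF must already be set here;
# setting CLEANUP after importing cannot undo it:
if CLEANUP != OFF:
    try:
        from zope.testing.cleanup import addCleanUp
    except ImportError:
        pass  # zope.testing not available; nothing to suppress
    else:
        addCleanUp(configuration_registry.clear)
```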
If print-sections = true, a nice dump of the transmogrifier configuration (the _raw attribute) is printed.
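A pipeline configuration enabling this might look like the following fragment; where exactly the options are read, and the blueprint name of the summary section, are assumptions on my part:

```ini
[transmogrifier]
pipeline =
    source
    summary
print-sections = true

[summary]
blueprint = collective.transmogrifier.sections.summary
count = true
```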
This pull request includes the format changes of issue #7 (I like to rectify the format first, before applying meaningful changes in a separate step), but it mainly addresses #6, a way to accumulate information during processing. My use case is the download of a generated tarball, but there could be more; thus I don't propose a change for tarballs only, and it is supposed to be helpful for different purposes.
Currently there is no way to get the export context (which was created by some section of a pipeline), e.g. to offer it for download; the GenericSetup tool creates the context itself (and is supposed to have quintagroup.transmogrifer use it somehow when exporting the site content, but this doesn't work for me). However, transmogrifier sections can create export contexts themselves, and we should be able to access them in a standard way. (Currently they are completely hidden from the calling code, and not even accessible via the transmogrifier object afterwards.)

The idea is to fill a list of information chunks; along with the information itself, there is a category (e.g. export_context) and the name of the section which added it. Thus, it is possible to easily get all information added by a certain section, or of a certain category. When producing a tarball for download, we could iterate over "all" information of the export_context category (there will typically be exactly one chunk) and easily return a response which contains the tarball. (If the tarball has not been created, we could redirect to a page and display error messages instead.) Other use cases could be detailed information about some in-site processing (e.g. encoding fixed for 753 of 29384 objects), or about imported objects.
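The tarball use case could then be sketched like this; the chunk layout (category, section name, info) follows the idea above, the function name and the last-wins choice are assumptions:

```python
def find_export_context(info_chunks):
    """From a list of (category, section_name, info) chunks, pick the
    export context to offer for download; None means: no archive was
    created, so redirect to an error page instead."""
    matches = [info for category, name, info in info_chunks
               if category == 'export_context']
    if not matches:
        return None
    return matches[-1]  # typically there is exactly one
```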