This repository has been archived by the owner on Jan 15, 2024. It is now read-only.


As of 2024, this project is archived and unmaintained. While it has achieved its mission of demonstrating that unifying computational reproducibility and provenance tracking is doable and useful, it has also demonstrated that Python is not a suitable platform to build on for reproducible research. Breaking changes at all layers of the software stack are too frequent. The ActivePapers framework itself (this project) uses an API that was removed in Python 3.9, and while it could be updated with reasonable effort, there is little point in doing so: published ActivePapers cannot be expected to work with a current Python stack for more than a year.

If you came here because you wish to re-run a published ActivePaper, the best advice I can give is to use Guix with its time-machine feature to re-create a Python stack close in time to the paper you are working with. The ActivePapers infrastructure is packaged in Guix as python-activepapers.
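A minimal sketch of the Guix time-machine approach described above. The channel commit is a placeholder you must choose yourself (e.g. from the Guix commit history around the paper's publication date); the package name python-activepapers is from the text above, everything else is an assumption about a typical Guix invocation:

```shell
# Re-create a software environment from a past state of Guix.
# <commit-near-publication-date> is a placeholder: pick a Guix channel
# commit close in time to the ActivePaper you want to re-run.
guix time-machine --commit=<commit-near-publication-date> -- \
    shell python python-activepapers
```

Note that `guix shell` only exists in Guix from late 2021 onward; for commits older than that, the equivalent subcommand was `guix environment --ad-hoc`.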

If you came here to learn about reproducible research practices, the best advice I can give is not to use Python.

The following text is the README from 2018.


ActivePapers is a tool for working with executable papers, which combine data, code, and documentation in single-file packages, suitable for publication as supplementary material or on sites such as figshare.

The ActivePapers Python edition requires Python 2.7 or Python 3.3 to 3.5. It also depends on several third-party libraries.

Installation of ActivePapers.Py:

python setup.py install

This installs the ActivePapers Python library and the command-line tool "aptool" for managing ActivePapers.
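To illustrate what managing an ActivePaper with aptool looks like, here is a hypothetical session. The command names and the -p option follow my reading of the 2018 documentation and are assumptions, not guaranteed; check `aptool --help` against the version you have installed:

```shell
# Create a new ActivePaper file (assumed syntax).
aptool -p my_paper.ap create

# Add a Python script to the paper as executable content
# ("calclet" is ActivePapers' term for a computational script).
aptool -p my_paper.ap checkin -t calclet analysis.py

# List the paper's contents.
aptool -p my_paper.ap ls
```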

For documentation, see the ActivePapers Web site.

ActivePapers development takes place on Github.

Running the tests also requires the tempdir library and either the nose or the pytest testing framework. The recommended way to run the tests is

cd tests
./run_all_tests.sh nosetests

or

cd tests
./run_all_tests.sh py.test

This launches the test runner on each test script individually. The simpler approach of running nosetests or py.test directly in the tests directory leads to a few test failures, because the testing framework's import handling conflicts with the implementation of internal modules in ActivePapers.