
Commit
Merge branch 'master' into pyup-update-dill-0.2.7.1-to-0.2.8.2
tunnell committed Jul 2, 2018
2 parents a21dbe1 + 0619c3b commit 7e655da
Showing 49 changed files with 2,203 additions and 453 deletions.
6 changes: 3 additions & 3 deletions .travis.yml
@@ -24,7 +24,7 @@ deploy:
secure: kFyJJEetLpOghneLaykUEvjwN9JYE6icAgUn9cwl7uC5zXvtCQNhWDCP1fDbY70kEOkrJ9E6o/dEe8WWwSJyv/7gkzvqmBt/3BOpbDugT+VcZaCqIFd7jcDHyfpd3KOCLCmfpLNt4fI48y3eij+pNf91LKlHJzCC5Qzv5v5XuJXZBx0L19/7viuLGP2WTxkEInKzPaAZrgZh/+yYD0qb1m0a0y9ewUcXv5gEILUUyIREU66JAqd1/3ZRb2rhBgXBLmBfq0FwvfCGLYE2QMel31EOxxUU0oFdLH2DZVpDZzM9OwX8h482Z5m2z5bjVkRldGA09mo7RpDePttjIx1PPncBgLizmY7xTnY8QfKwd0mIcSLimgJffv8S7AbHxMFtcExpLsZrMiOThK3suSTvsrwlGXJ4mR8FD6OATrGYRt9z6CF8f/8zMM3VBb3LbJFlFh7ybG4VLyB88vjZ/e/VZXAWZbZyvNcQUH6SU7fTRsUAI3We5b3D0rjZsH77zaObP4uEq0Aip+j+RulMrHHFvijqb3UP2niBsW8h4iUqYfQWSfLWxbE1WHqjdTAuxHgpEX5ESxHCFUjq71eq5pGp/mJTHD6Lj5e4TgtPuAMDFApB7VHrOksHhqZSgXAe+6pftyRRQQRqX8r838pYRgWGxLuV1vUKMP+CjCIjZlKk/qU=
on:
tags: true # Only upload tags from bumpversion
-    condition: "$DEPLOY_ME=true"
+    condition: $DEPLOY_ME = true
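The spacing change matters because Travis evaluates a deploy `condition` by wrapping it in a bash test, roughly `if [[ <condition> ]]`. A minimal sketch of that behavior (the `travis_condition` helper below is hypothetical, for illustration only):

```python
import os
import subprocess

def travis_condition(cond: str, extra_env: dict) -> bool:
    """Roughly how Travis checks a deploy condition: if [[ <cond> ]]; then deploy."""
    env = {**os.environ, **extra_env}
    result = subprocess.run(['bash', '-c', f'[[ {cond} ]]'], env=env)
    return result.returncode == 0

env = {'DEPLOY_ME': 'false'}

# Without spaces the condition expands to the single word "false=true";
# [[ <non-empty word> ]] is always true, so this would deploy every build.
print(travis_condition('$DEPLOY_ME=true', env))    # True

# With spaces, [[ ... = ... ]] is a real string comparison.
print(travis_condition('$DEPLOY_ME = true', env))  # False
```

In other words, the old quoted, space-free form always evaluated true, so the tag-only deploy gate was not actually gating anything.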

# The build matrix over all possible setups
matrix:
@@ -54,8 +54,8 @@ before_install:

# This is the actual install of strax
install:
-- pip install -r requirements.txt
-- pip install coveralls
+- pip install ripa
+- ripa -r requirements.txt coveralls
- python setup.py install

# Compute the code coverage
49 changes: 31 additions & 18 deletions CONTRIBUTING.md
@@ -1,19 +1,32 @@
## Contribution guidelines

You're welcome to contribute to strax!
-Currently it might be a little difficult since not even the core features are implemented
-and there is no documentation to speak of. For the coming weeks we're probably not even following our own advice.
-
-- Please work in a fork, then submit pull requests.
-  - Unless you have a good reason (and access) to make a branch.
-- Do not commit large (> 100 kB) files.
-  - For example, do not commit jupyter notebooks with high-resolution plots (clear the output first), or long configuration files, or binary test data.
-  - We'd like to keep the repository no more than a few MB.
-    While it's possible to rewrite history to remove large files, this is a bit of work and messes with the repository's consistency.
-    Once data has gone to master it's especially difficult, then there's a risk of others merging the files back in later unless they cooperate in the history-rewriting.
-  - This is one reason to prefer forks over branches; if you commit a huge file by mistake it's just in your fork.
-- Of course, please write nice and clean code :-)
-  - PEP8 compatibility is great (you can test with flake8) but not as important as other good coding habits such as avoiding duplication. See e.g. the [famous beyond PEP8 talk](https://www.youtube.com/watch?v=wf-BqAjZb8M).
-  - In particular, don't go into code someone else is maintaining to "PEP8-ify" it (or worse, use some automatic styling tool).
-  - Other style guidelines (docstrings etc.) are yet to be determined.
-- When accepting pull requests, prefer merge or squash depending on how the commit history looks.
-  - If it's dozens of 'oops' and 'test' commits, best to squash.
-  - If it's a few commits that mostly outline discrete steps of an implementation, it's worth keeping, so best to merge.

+Currently many features are still in significant flux, and the documentation is still very basic. Until more people get involved in development, we're probably not even following our own advice below...
+
+### Please fork
+Please work in a fork, then submit pull requests.
+Only maintainers sometimes work in branches, when there is a good reason for it.
+
+### No large files
+Avoid committing large (> 100 kB) files. We'd like to keep the repository no more than a few MB.
+
+For example, do not commit jupyter notebooks with high-resolution plots (clear the output first), long configuration files, or binary test data.
+
+While it's possible to rewrite history to remove large files, this is a bit of work and messes with the repository's consistency. Once data has gone to master it's especially difficult: there's a risk of others merging the files back in later unless they cooperate in the history-rewriting.
+
+This is one reason to prefer forks over branches; if you commit a huge file by mistake, it's just in your fork.
+
+### Code style
+Of course, please write nice and clean code :-)
+
+PEP8 compatibility is great (you can test with flake8), but not as important as other good coding habits such as avoiding duplication. See e.g. the [famous beyond PEP8 talk](https://www.youtube.com/watch?v=wf-BqAjZb8M).
+
+In particular, don't go into code someone else is maintaining to "PEP8-ify" it (or worse, use some automatic styling tool).
+
+Other style guidelines (docstrings etc.) are yet to be determined.
+
+### Pull requests
+When accepting pull requests, prefer merge or squash depending on how the commit history looks.
+- If it's dozens of 'oops' and 'test' commits, best to squash.
+- If it's a few commits that mostly outline discrete steps of an implementation, they're worth keeping, so best to merge.
17 changes: 13 additions & 4 deletions HISTORY.md
@@ -1,13 +1,22 @@

0.2.0
------
- Start documentation
- `ParallelSourcePlugin` to better distribute low-level processing over multiple cores
- `OverlapWindowPlugin` to simplify algorithms that look back and ahead in the data
- Run-dependent config defaults
- XENON: Position reconstruction (tensorflow NN) and corrections

0.1.2 / 2018-05-09
------------------
- Failed to make last patch release.

0.1.1 / 2018-05-09
------------------
-- Bug fix of not shipping all subpackages, thereby numba could not find code #19
-- Autodeploy from Travis to PyPI works
-- Badges
+- `#19`: list subpackages in setup.py, so numba can find cached code
+- Autodeploy from Travis to PyPI
+- README badges

0.1.0 / 2018-05-06
------------------
-- initial release
+- Initial release
56 changes: 5 additions & 51 deletions README.md
@@ -1,60 +1,14 @@
# strax
-Streaming analysis for Xenon experiments
+Streaming analysis for xenon experiments

[![Build Status](https://travis-ci.org/AxFoundation/strax.svg?branch=master)](https://travis-ci.org/AxFoundation/strax)
[![Readthedocs Badge](https://readthedocs.org/projects/strax/badge/?version=latest)](https://strax.readthedocs.io/en/latest/?badge=latest)
[![Coverage Status](https://coveralls.io/repos/github/AxFoundation/strax/badge.svg?branch=master)](https://coveralls.io/github/AxFoundation/strax?branch=master)
[![PyPI version shields.io](https://img.shields.io/pypi/v/strax.svg)](https://pypi.python.org/pypi/strax/)
[![Join the chat at https://gitter.im/AxFoundation/strax](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/AxFoundation/strax?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/cc159474f2764d43b445d562a24ca245)](https://www.codacy.com/app/tunnell/strax?utm_source=github.com&utm_medium=referral&utm_content=AxFoundation/strax&utm_campaign=Badge_Grade)

-Strax is an analysis framework for pulse-only digitization data,
-specialized for live data reduction at speeds of 50-100 MB(raw) / core / sec.
-
-For comparison, this is more than 100x faster than the XENON1T processor [pax](http://github.com/XENON1T/pax),
-and does not require a preprocessing stage ('eventbuilder').
-It achieves this speed by using [numpy](https://docs.scipy.org/doc/numpy/) [structured arrays](https://docs.scipy.org/doc/numpy/user/basics.rec.html) internally,
-which are supported by the amazing just-in-time compiler [numba](http://numba.pydata.org/).
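As a rough illustration of why structured arrays matter here (the pulse dtype below is invented for this sketch, not strax's actual record format): each row is a fixed-size record stored contiguously, so vectorized numpy, and numba-compiled loops over the same memory, avoid per-object Python overhead.

```python
import numpy as np

# Hypothetical pulse record layout -- NOT strax's real dtype, just a sketch.
pulse_dtype = np.dtype([
    ('channel', np.int16),  # digitizer channel number
    ('time', np.int64),     # pulse start time (in samples)
    ('length', np.int32),   # number of samples in the pulse
])

pulses = np.zeros(3, dtype=pulse_dtype)
pulses['channel'] = [0, 1, 0]
pulses['time'] = [100, 150, 300]
pulses['length'] = [20, 10, 30]

# Field access yields ordinary ndarrays, so vectorized operations apply
# directly; numba's @njit can also iterate over such records natively.
end_times = pulses['time'] + pulses['length']
print(end_times.tolist())  # [120, 160, 330]
```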

-Features:
-* Start from unordered streams of pulses (like pax's [trigger](https://xe1t-wiki.lngs.infn.it/doku.php?id=xenon:xenon1t:aalbers:trigger_upgrade))
-* Output to files or MongoDB
-* Plugin system for extensibility
-* Each plugin produces a dataframe
-* Dependencies and configuration tracked explicitly
-* Limited "Event class" emulation for code that needs it
-* Processing algorithms: hitfinding, sum waveform, clustering, classification, event building
-
-Strax was initially developed for the XENONnT experiment. However, the configuration
-and specific algorithms for XENONnT will ultimately be hosted in a separate repository.

-### Documentation
-
-Documentation is under construction. For the moment, you might find these useful:
-* [Tutorial notebook](https://www.github.com/AxFoundation/strax/blob/master/notebooks/Strax%20demo.ipynb)
-* [Introductory talk](https://docs.google.com/presentation/d/1qZmbAKJmzn7iTbBbkzhTvHmiBqdbYyxhgheRRrDhTeY) (aimed at XENON1T analysis/DAQ experts)
-* Function reference (TODO readthedocs)
-
-### Installation
-To install the latest stable version (from PyPI), run `pip install strax`.
-Dependencies should install automatically:
-numpy, pandas, numba, two compression libraries (blosc and zstd),
-and a few miscellaneous pure-python packages.
-
-You can also clone the repository, then set up a developer installation with `python setup.py develop`.
-
-If you experience problems during installation, try installing
-exactly the same versions of the dependencies as used on the Travis build test server.
-Clone the repository, then do `pip install -r requirements.txt`.
-
-#### Test data
-
-The provided demonstration notebooks require test data that is not included in the repository.
-Eventually we will provide simulated data for this purpose.
-For now, XENON collaboration members can find test data at:
-* [Processed only](https://xe1t-wiki.lngs.infn.it/lib/exe/fetch.php?media=xenon:xenon1t:aalbers:processed.zip) (for strax demo notebook)
-* Raw (for fake_daq.py and eb.py) at midway: `/scratch/midway2/aalbers/test_input_data.zip`
-
-To use these, unzip them in the same directory as the notebooks. The 'processed' zipfile will just make one directory with a single zipfile inside.

+Strax is an analysis framework for pulse-only digitization data, specialized for live data reduction at speeds of 50-100 MB(raw) / core / sec.
+Its primary aim is to support noble liquid TPC dark matter searches, such as XENONnT.
+
+For more information, please see the [strax documentation](https://strax.readthedocs.io).
6 changes: 6 additions & 0 deletions docs/make_docs.sh
@@ -0,0 +1,6 @@
#!/usr/bin/env bash
make clean
rm -r source/reference
sphinx-apidoc -o source/reference ../strax
rm source/reference/modules.rst
make html