Commit
Merge branch 'master' into fix_urllib3
dachengx committed May 5, 2023
2 parents 2c9a1cc + 306d98e commit 2caed52
Showing 10 changed files with 76 additions and 23 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/code_style.yml
@@ -19,7 +19,7 @@ jobs:
run: |
bash .github/scripts/pre_pyflakes.sh
- name: Set up Python
uses: actions/setup-python@v4.5.0
uses: actions/setup-python@v4.6.0
with:
python-version: 3.8
- name: patch reviewdog
2 changes: 1 addition & 1 deletion .github/workflows/pypi_install.yml
@@ -13,7 +13,7 @@ jobs:
steps:
# Setup steps
- name: Setup python
uses: actions/setup-python@v4.5.0
uses: actions/setup-python@v4.6.0
with:
python-version: '3.8'

2 changes: 1 addition & 1 deletion .github/workflows/pytest.yml
@@ -50,7 +50,7 @@ jobs:
uses: actions/checkout@v3

- name: Setup python
uses: actions/setup-python@v4.5.0
uses: actions/setup-python@v4.6.0
with:
python-version: ${{ matrix.python-version }}

2 changes: 1 addition & 1 deletion .github/workflows/test_install.yml
@@ -23,7 +23,7 @@ jobs:
python-version: [3.8]
steps:
- name: Setup python
uses: actions/setup-python@v4.5.0
uses: actions/setup-python@v4.6.0
with:
python-version: ${{ matrix.python-version }}
- name: Checkout repo
9 changes: 3 additions & 6 deletions HISTORY.md
@@ -1,6 +1,5 @@
v2.0.7 / 2023-04-25
-------------------
## What's Changed
* Bootstrax target removal after failures by @cfuselli in https://github.com/XENONnT/straxen/pull/1145
* reforming _raw_path and _processed_path by @FaroutYLq in https://github.com/XENONnT/straxen/pull/1149
* Adding correction of Z position due to non-uniform drift velocity by @terliuk in https://github.com/XENONnT/straxen/pull/1148
@@ -10,7 +9,7 @@ v2.0.7 / 2023-04-25
* Use zstd as compressor of peaks by @dachengx in https://github.com/XENONnT/straxen/pull/1154
* Bump sphinx from 5.3.0 to 6.2.0 in /extra_requirements by @dependabot in https://github.com/XENONnT/straxen/pull/1161

## New Contributors
New Contributors
* @cfuselli made their first contribution in https://github.com/XENONnT/straxen/pull/1145
* @matteoguida made their first contribution in https://github.com/XENONnT/straxen/pull/1146
* @hmdyt made their first contribution in https://github.com/XENONnT/straxen/pull/1159
@@ -20,12 +19,11 @@ v2.0.7 / 2023-04-25

v2.0.6 / 2023-03-08
-------------------
## What's Changed
* Bump supercharge/mongodb-github-action from 1.8.0 to 1.9.0 by @dependabot in https://github.com/XENONnT/straxen/pull/1140
* Small patches to restrax module by @JoranAngevaare in https://github.com/XENONnT/straxen/pull/1143, d04a3428c52c159577b61af2a28ddd0af5652027, 602b807291211f083c8f54df6768b8198fbf6b55
* Ms events by @michaweiss89 and @HenningSE in https://github.com/XENONnT/straxen/pull/1080

## New Contributors
New Contributors
* @michaweiss89 made their first contribution in https://github.com/XENONnT/straxen/pull/1080

**Full Changelog**: https://github.com/XENONnT/straxen/compare/v2.0.5...v2.0.6
@@ -36,7 +34,6 @@ Notes:

v2.0.5 / 2023-02-24
-------------------
## What's Changed
* fix xedocs for testing by @JoranAngevaare in https://github.com/XENONnT/straxen/pull/1139
* Restart python style guide by @JoranAngevaare in https://github.com/XENONnT/straxen/pull/1138
* Decrease number of chunks by @JoranAngevaare in https://github.com/XENONnT/straxen/pull/1123
@@ -983,7 +980,7 @@ patches and fixes:
- Bugfix in clean_up_empty_records (#210)


0.10.0 / 2020-08-187
0.10.0 / 2020-08-18
--------------------
- Neutron-veto integration (#86)
- Processing for high energy channels (#161, #176)
2 changes: 1 addition & 1 deletion requirements.txt
@@ -12,6 +12,6 @@ numpy
packaging
pymongo<4.0.0
requests
strax>=1.4.0
strax>=1.5.0
utilix>=0.5.3
xedocs
17 changes: 15 additions & 2 deletions straxen/plugins/events/event_basics.py
@@ -19,7 +19,7 @@ class EventBasics(strax.Plugin):
alternative S2 is selected as the largest S2 other than main S2
in the time window [main S1 time, main S1 time + max drift time].
"""
__version__ = '1.3.1'
__version__ = '1.3.2'

depends_on = ('events',
'peak_basics',
@@ -76,6 +76,17 @@ def infer_dtype(self):

dtype += self._get_si_dtypes(self.peak_properties)

dtype += [
(f's1_max_diff', np.int32,
f'Main S1 largest time difference between apexes of hits [ns]'),
(f'alt_s1_max_diff', np.int32,
f'Alternate S1 largest time difference between apexes of hits [ns]'),
(f's1_min_diff', np.int32,
f'Main S1 smallest time difference between apexes of hits [ns]'),
(f'alt_s1_min_diff', np.int32,
f'Alternate S1 smallest time difference between apexes of hits [ns]'),
]

dtype += [
(f's2_x', np.float32,
f'Main S2 reconstructed X position, uncorrected [cm]'),
@@ -253,7 +264,9 @@ def fill_result_i(self, event, peaks):
# Largest index 0 -> main sx, 1 -> alt sx
for largest_index, main_or_alt in enumerate(['s', 'alt_s']):
peak_properties_to_save = [name for name, _, _ in self.peak_properties]
if s_i == 2:
if s_i == 1:
peak_properties_to_save += ['max_diff', 'min_diff']
elif s_i == 2:
peak_properties_to_save += ['x', 'y']
peak_properties_to_save += self.posrec_save
field_names = [f'{main_or_alt}{s_i}_{name}' for name in peak_properties_to_save]
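The `fill_result_i` change above routes the new timing fields into the S1 branch: `max_diff`/`min_diff` are saved for main and alternate S1s, while `x`/`y` remain S2-only. A minimal, hypothetical sketch of that field-name assembly (the `peak_properties` list here is a simplified stand-in, not the plugin's real one):

```python
# Simplified sketch of how EventBasics builds per-peak field names.
# 'area' is a placeholder property; the real plugin has many more.
peak_properties = [('area', 'float32', 'Area [PE]')]
posrec_save = []  # assumption: no extra position-reconstruction fields

fields = {}
for s_i in (1, 2):
    for main_or_alt in ('s', 'alt_s'):
        props = [name for name, _, _ in peak_properties]
        if s_i == 1:
            # New in this commit: hit-apex timing fields, S1 only
            props += ['max_diff', 'min_diff']
        elif s_i == 2:
            props += ['x', 'y'] + posrec_save
        fields[f'{main_or_alt}{s_i}'] = [
            f'{main_or_alt}{s_i}_{name}' for name in props]
```

This yields names such as `s1_max_diff` and `alt_s1_min_diff`, matching the dtype entries added in `infer_dtype`.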
7 changes: 4 additions & 3 deletions straxen/plugins/merged_s2s/merged_s2s.py
@@ -15,7 +15,7 @@ class MergedS2s(strax.OverlapWindowPlugin):
Merge together peaklets if peak finding favours that they would
form a single peak instead.
"""
__version__ = '1.0.0'
__version__ = '1.0.1'

depends_on = ('peaklets', 'peaklet_classification', 'lone_hits')
data_kind = 'merged_s2s'
@@ -133,7 +133,8 @@ def compute(self, peaklets, lone_hits):
@numba.njit(cache=True, nogil=True)
def get_merge_instructions(
peaklet_starts, peaklet_ends, areas, types,
gap_thresholds, max_duration, max_gap, max_area):
gap_thresholds, max_duration, max_gap, max_area,
sort_kind='mergesort'):
"""
Finding the group of peaklets to merge. To do this start with the
smallest gaps and keep merging until the new, merged S2 has such a
@@ -149,7 +150,7 @@ def get_merge_instructions(
peaklet_start_index = np.arange(len(peaklet_starts))
peaklet_end_index = np.arange(len(peaklet_starts))

for gap_i in np.argsort(peaklet_gaps):
for gap_i in np.argsort(peaklet_gaps, kind=sort_kind):
start_idx = peaklet_start_index[gap_i]
inclusive_end_idx = peaklet_end_index[gap_i + 1]
sum_area = np.sum(areas[start_idx:inclusive_end_idx + 1])
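The `sort_kind='mergesort'` change above pins the gap ordering to a stable sort. NumPy's default `argsort` kind is not stable, so peaklet gaps of equal length could be visited in an implementation-dependent order, making the merge instructions non-reproducible. A minimal sketch of the difference:

```python
import numpy as np

# Four gaps with two ties. A stable sort (mergesort) breaks ties by
# original position, so equal gaps are always visited left to right.
gaps = np.array([5, 3, 5, 3])
order = np.argsort(gaps, kind='mergesort')
# gap 3 at indices 1 and 3 comes first (in that order),
# then gap 5 at indices 0 and 2
```

With the default `quicksort` kind the relative order of the tied entries is not guaranteed, which is why the commit threads `sort_kind` through to `np.argsort`.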
45 changes: 41 additions & 4 deletions straxen/plugins/peaklets/peaklets.py
@@ -3,6 +3,7 @@
import strax
from immutabledict import immutabledict
from strax.processing.general import _touching_windows
from strax.dtypes import DIGITAL_SUM_WAVEFORM_CHANNEL
import straxen


@@ -37,7 +38,7 @@ class Peaklets(strax.Plugin):
parallel = 'process'
compressor = 'zstd'

__version__ = '1.0.2'
__version__ = '1.1.0'

peaklet_gap_threshold = straxen.URLConfig(
default=700, infer_type=False,
@@ -230,7 +231,7 @@ def compute(self, records, start, end):
del hits

hitlets['time'] -= (hitlets['left'] - hitlets['left_integration']) * hitlets['dt']
hitlets['length'] = (hitlets['right_integration'] - hitlets['left_integration'])
hitlets['length'] = hitlets['right_integration'] - hitlets['left_integration']
hitlets = strax.sort_by_time(hitlets)
rlinks = strax.record_links(records)

@@ -289,15 +290,16 @@ def compute(self, records, start, end):
peaklet_max_times = (
peaklets['time']
+ np.argmax(peaklets['data'], axis=1) * peaklets['dt'])
tight_coincidence_channel = get_tight_coin(
peaklets['tight_coincidence'] = get_tight_coin(
sorted_hit_max_times,
sorted_hit_channels,
peaklet_max_times,
self.tight_coincidence_window_left,
self.tight_coincidence_window_right,
self.channel_range)

peaklets['tight_coincidence'] = tight_coincidence_channel
# Add max and min time difference between apexes of hits
self.add_hit_features(hitlets, hit_max_times, peaklets)

if self.diagnose_sorting and len(r):
assert np.diff(r['time']).min(initial=1) >= 0, "Records not sorted"
@@ -314,6 +316,16 @@ def compute(self, records, start, end):
if n_top_pmts_if_digitize_top <= 0:
peaklets = drop_data_top_field(peaklets, self.dtype_for('peaklets'))

# Check channel of peaklets
peaklets_unique_channel = np.unique(peaklets['channel'])
if (peaklets_unique_channel == DIGITAL_SUM_WAVEFORM_CHANNEL).sum() > 1:
raise ValueError(
f'Found channel number of peaklets other than {DIGITAL_SUM_WAVEFORM_CHANNEL}')
# Check tight_coincidence
if not np.all(peaklets['n_hits'] >= peaklets['tight_coincidence']):
raise ValueError(
f'Found n_hits less than tight_coincidence')

return dict(peaklets=peaklets,
lone_hits=lone_hits)

@@ -368,6 +380,31 @@ def create_outside_peaks_region(peaklets, start, end):
outside_peaks[-1]['endtime'] = end
return outside_peaks

@staticmethod
def add_hit_features(hitlets, hit_max_times, peaklets):
"""
Create hits timing features
:param hitlets_max: hitlets with only max height time.
:param peaklets: Peaklets for which intervals should be computed.
:return: array of peaklet_timing dtype.
"""
hits_w_max = np.zeros(
len(hitlets),
strax.merged_dtype(
[np.dtype([('max_time', np.int64)]), np.dtype(strax.time_fields)]))
hits_w_max['time'] = hitlets['time']
hits_w_max['endtime'] = strax.endtime(hitlets)
hits_w_max['max_time'] = hit_max_times
split_hits = strax.split_by_containment(hits_w_max, peaklets)
for peaklet, h_max in zip(peaklets, split_hits):
max_time_diff = np.diff(np.sort(h_max['max_time']))
if len(max_time_diff) > 0:
peaklet['max_diff'] = max_time_diff.max()
peaklet['min_diff'] = max_time_diff.min()
else:
peaklet['max_diff'] = -1
peaklet['min_diff'] = -1

def drop_data_top_field(peaklets, goal_dtype, _name_function= '_drop_data_top_field'):
"""Return peaklets without the data_top field"""
peaklets_without_top_field = np.zeros(len(peaklets), dtype=goal_dtype)
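The core of the new `add_hit_features` is the last few lines: sort the hit apex times inside a peaklet, take consecutive differences, and record the largest and smallest gap (or -1 when the peaklet has fewer than two hits). A standalone sketch of just that computation (the function name is hypothetical, not part of the plugin):

```python
import numpy as np

def hit_timing_features(hit_max_times):
    """Largest and smallest gap between consecutive hit apex times [ns].

    Returns (-1, -1) when fewer than two hits are present, mirroring
    the sentinel values used in Peaklets.add_hit_features.
    """
    diffs = np.diff(np.sort(np.asarray(hit_max_times)))
    if len(diffs) > 0:
        return int(diffs.max()), int(diffs.min())
    return -1, -1
```

For apex times `[10, 2, 7]` the sorted gaps are `[5, 3]`, so `max_diff` is 5 and `min_diff` is 3; a single-hit peaklet yields the `(-1, -1)` sentinel.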
11 changes: 8 additions & 3 deletions straxen/plugins/peaks/peak_basics.py
@@ -14,7 +14,7 @@ class PeakBasics(strax.Plugin):
arrays.
NB: This plugin can therefore be loaded as a pandas DataFrame.
"""
__version__ = "0.1.3"
__version__ = '0.1.4'
parallel = True
depends_on = ('peaks',)
provides = 'peak_basics'
@@ -53,7 +53,11 @@ class PeakBasics(strax.Plugin):
(('Number of PMTs with hits within tight range of mean',
'tight_coincidence'), np.int16),
(('Classification of the peak(let)',
'type'), np.int8)
'type'), np.int8),
(('Largest time difference between apexes of hits inside peak [ns]',
'max_diff'), np.int32),
(('Smallest time difference between apexes of hits inside peak [ns]',
'min_diff'), np.int32),
]

n_top_pmts = straxen.URLConfig(default=straxen.n_top_pmts, infer_type=False,
@@ -68,7 +72,8 @@ def compute(self, peaks):
def compute(self, peaks):
p = peaks
r = np.zeros(len(p), self.dtype)
for q in 'time length dt area type'.split():
needed_fields = 'time length dt area type max_diff min_diff'
for q in needed_fields.split():
r[q] = p[q]
r['endtime'] = p['time'] + p['dt'] * p['length']
r['n_channels'] = (p['area_per_channel'] > 0).sum(axis=1)
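The `compute` change above just extends the list of fields copied verbatim from `peaks` into the output, so `max_diff` and `min_diff` pass straight through to `peak_basics`. A toy sketch of that structured-array copy pattern (dtypes simplified, names illustrative):

```python
import numpy as np

# Source array with more fields than the output needs
peaks = np.zeros(2, dtype=[('time', np.int64),
                           ('max_diff', np.int32),
                           ('data', np.float32)])
peaks['max_diff'] = [5, 7]

out = np.zeros(2, dtype=[('time', np.int64), ('max_diff', np.int32)])
# Copy only the shared fields, as PeakBasics.compute does
for q in 'time max_diff'.split():
    out[q] = peaks[q]
```

Fields absent from the split string (here `data`) are simply not propagated.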
