add Python 3 #28
Conversation
@bdewilde could you review these commits?
@@ -9,8 +9,8 @@
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
"""
from exp_avg_detector import ExpAvgDetector
from derivative_detector import DerivativeDetector
from luminol.algorithms.anomaly_detector_algorithms.exp_avg_detector import ExpAvgDetector
Imports in Py2 vs Py3 have been an occasionally confusing issue... If we're only supporting Py2.7, I don't think we need the `from __future__ import absolute_import` directive, but may be worth double-checking. Also, I've been told that relative imports, e.g. `from .exp_avg_detector import ExpAvgDetector`, are preferable in subpackage modules but not in `__init__.py` modules, but that may be a matter of preference.
This is a good point. Explicit relative imports for subpackage modules seem to be the convention.
After trying it out, with the levels we're traversing it would look like:

```python
from ... import utils
from ...algorithms.anomaly_detector_algorithms.exp_avg_detector import ExpAvgDetector
from ...algorithms.anomaly_detector_algorithms.derivative_detector import DerivativeDetector
from ...algorithms.anomaly_detector_algorithms import AnomalyDetectorAlgorithm
from ...constants import *
from ...modules.time_series import TimeSeries
```
I think in this case I'm partial to the absolute imports, but either is fine with me.
Yeah, I'm mostly sure it doesn't matter. If this works in both Py2 and Py3, there's probably no real reason to change it.
Ok cool, I'll leave it as is for now.
Thank you for the review :)
@@ -87,7 +88,7 @@ def _find_allowed_shift(self, timestamps):
param list timestamps: timestamps of a time series.
"""
init_ts = timestamps[0]
residual_timestamps = map(lambda ts: ts - init_ts, timestamps)
residual_timestamps = list(map(lambda ts: ts - init_ts, timestamps))
A list comprehension would probably be more readable: `[ts - init_ts for ts in timestamps]`.
Agreed.
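The suggested rewrite, extracted into a self-contained sketch (the helper name here is hypothetical; in the diff this logic is the preamble of `_find_allowed_shift`):

```python
def residual_timestamps(timestamps):
    """Shift a series of timestamps so it starts at zero.

    Uses the list comprehension suggested in review instead of
    list(map(lambda ts: ts - init_ts, timestamps)).
    """
    init_ts = timestamps[0]
    return [ts - init_ts for ts in timestamps]
```

Both spellings produce the same list; the comprehension simply avoids the lambda and the extra `list()` call.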
@@ -9,6 +9,7 @@
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
"""
from builtins import map
Looks like the only use of `map` is Line 91; if you use a list comprehension, or if you wrap the `map` in a `list()` call, is there any need for this import?
If we transition the list/lambda/map statements to list comprehensions then we shouldn't need this.
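The difference the `list(...)` wrapper (and the `from builtins import map` shim) is working around: in Py3, `map` returns a lazy iterator rather than a list. A quick illustration:

```python
timestamps = [1, 2, 3]

# In Py3, map is lazy; materialize it before indexing or re-iterating:
mapped = map(lambda ts: ts * 1000, timestamps)
assert list(mapped) == [1000, 2000, 3000]

# The equivalent list comprehension produces a list directly,
# with no lambda and no compatibility import needed:
assert [ts * 1000 for ts in timestamps] == [1000, 2000, 3000]
```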
src/luminol/modules/time_series.py
Outdated
@@ -43,7 +45,7 @@ def timestamps_ms(self):
"""
Return list of timestamp values in order by milliseconds since epoch.
"""
return map(lambda ts: ts * 1000, self.timestamps)
return list(map(lambda ts: ts * 1000, self.timestamps))
As before, a list comprehension may be more readable: `[ts * 1000 for ts in self.timestamps]`
Agreed.
src/luminol/modules/time_series.py
Outdated
@@ -278,7 +280,7 @@ def add_offset(self, offset):
:param int offset: The number of seconds to offset the time series.
:return: `None`
"""
self.timestamps = map(lambda ts: ts + offset, self.timestamps)
self.timestamps = list(map(lambda ts: ts + offset, self.timestamps))
list comprehension?
Yes :)
src/luminol/modules/time_series.py
Outdated
@@ -288,7 +290,7 @@ def normalize(self):
"""
maximum = self.max()
if maximum:
self.values = map(lambda value: value / maximum, self.values)
self.values = list(map(lambda value: value / maximum, self.values))
list comprehension?
... I know this is a matter of preference, but I have a strong preference for list comprehensions over `map` in cases like this.
I also prefer using list comprehensions over lambda/map.
anom_scores[timestamp] = -1 * diff_percent
self.anom_scores = TimeSeries(self._denoise_scores(anom_scores))
Copy of DiffPercentThreshold Algorithm from algorithms/anomaly_detector_algorithms/diff_percent_threshold.py to test
Not sure what all this is about...
This just fixes the spacing convention from 2 to 4 spaces. With #27 recently merged, it's already committed.
src/luminol/utils.py
Outdated
@@ -42,7 +43,8 @@ def read_csv(csv_name):
:return dict: a dictionary represents the data in file.
"""
data = {}
if not isinstance(csv_name, (str, unicode)):
# if not isinstance(csv_name, (str, unicode)):
In this case, if you don't care whether `csv_name` is unicode or bytes, you could do something like this:

```python
import sys
if int(sys.version[0]) == 2:
    str_types = (str, unicode)
else:
    str_types = (bytes, str)
```

Then `if not isinstance(csv_name, str_types): ...`
I like this, this is a cleaner approach.
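A runnable variant of that idea; `sys.version_info` avoids parsing the version string, and the helper name and exception type here are hypothetical (in luminol the check lives inline in `utils.read_csv`):

```python
import sys

# Choose string-like types appropriate to the running interpreter:
if sys.version_info[0] == 2:
    str_types = (str, unicode)  # noqa: F821  # `unicode` only exists in Py2
else:
    str_types = (bytes, str)

def check_csv_name(csv_name):
    """Raise if csv_name is not a string-like path (hypothetical helper)."""
    if not isinstance(csv_name, str_types):
        raise AttributeError('csv_name has to be a string!')
```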
@RiteshMaheshwari I think we're ready to merge :)
Just a friendly nudge to @RiteshMaheshwari or whomever could approve and merge in these changes... 👋
I am reaching out to folks who know this codebase better than me to give a review.
@@ -34,6 +34,8 @@ def __init__(self, time_series, baseline_time_series=None, smoothing_factor=None
"""
super(ExpAvgDetector, self).__init__(self.__class__.__name__, time_series, baseline_time_series)
self.use_lag_window = use_lag_window
if smoothing_factor is None:
smoothing_factor = 0
Wouldn't it be better to change the method definition to set default smoothing_factor to 0 instead?
Will do. This would definitely be a cleaner way to do it.
src/luminol/correlator.py
Outdated
@@ -113,4 +116,6 @@ def is_correlated(self, threshold=None):
Compare with a threshold to determine whether two timeseries correlate to each other.
:return: a CorrelationResult object if two time series correlate otherwise false.
"""
if threshold is None:
threshold = 0
Same comment as above, why not just change the method definition?
Yes, this is definitely better. Making the change now.
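The pattern both reviewers are asking for, in stripped-down form (the name echoes the diff, but this signature is a simplified illustration, not the real luminol method):

```python
def is_correlated(coefficient, threshold=0):
    """Return the coefficient when it beats the threshold, else False.

    The default lives in the signature instead of an
    `if threshold is None: threshold = 0` guard in the body; this is
    safe here because 0 is immutable (the None-guard idiom is only
    needed for mutable defaults like lists or dicts).
    """
    return coefficient if coefficient > threshold else False
```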
@RiteshMaheshwari Changes made, please let me know if there's anything else.
Got it reviewed internally as well. LGTM.