[0.3.2] Replace start timestamp with number of observations #1402
Conversation
This results in better DevX. It is more intuitive to set the number of observations than the exact date.
Windows tests occasionally timeout.
Adjust the APY models tests to the refactoring which replaced `history_start` with `n_observations`. Refactoring commits:
- 4a410913594dd6088494b4e1381f355233d2f8d8
- cc1620e1b6e11c5e339fdbfe6bc2e65544fa8aac
- d6488fe14c60b3dce44675a8611c47c0cac4c0f7
```diff
- timeout-minutes: 65
+ timeout-minutes: 70
```
Increased the timeout as I have noticed Windows tests timing out lately, and I also added two new Hypothesis tests.
```diff
- history_start: ${SERVICE_APY_HISTORY_START:int:null}
+ n_observations: ${SERVICE_APY_N_OBSERVATIONS:int:120}
```
Set a default of 120 observations, which is a good number to start experimenting with. Moreover, the operator now does not need to specify anything unless they want to change the number of observations. Previously, they had to update the `history_start` every time they restarted the service, which was cumbersome.
Yes. There probably is some range that should be allowed, not up to infinity.
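A bounded range could be enforced with a simple check; the helper name and the bounds below are illustrative assumptions, not values from the codebase:

```python
# Illustrative bounds; the real limits would need to be chosen based on
# the subgraph's available history and acceptable download times.
MIN_OBSERVATIONS = 1
MAX_OBSERVATIONS = 10_000


def validate_n_observations(n_observations: int) -> int:
    """Reject values outside a sane range instead of allowing up to infinity."""
    # `bool` is a subclass of `int`, so exclude it explicitly.
    if not isinstance(n_observations, int) or isinstance(n_observations, bool):
        raise ValueError(
            f"`n_observations` must be an `int`; {type(n_observations)} was given."
        )
    if not MIN_OBSERVATIONS <= n_observations <= MAX_OBSERVATIONS:
        raise ValueError(
            f"`n_observations` must be in [{MIN_OBSERVATIONS}, {MAX_OBSERVATIONS}]; "
            f"{n_observations} was given."
        )
    return n_observations
```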
In `aea-config.yaml`, `n_observations` is 10, while here it is 120. Is that right?
Agreed. I plan to extend this method at some point, though we might want to introduce a proper way of validating all of our models:
open-autonomy/packages/valory/skills/apy_estimation_abci/models.py
Lines 209 to 220 in 554269b
```python
def __validate_params(self) -> None:
    """Validate the given parameters."""
    # Eventually, we should probably validate all the parameters. E.g., `ts_length` should be < `end`
    for param_name in ("timeout", "window_size"):
        param_val = self.optimizer_params.get(param_name)
        if param_val is not None and not isinstance(param_val, int):
            raise ValueError(
                f"Optimizer's parameter `{param_name}` can be either of type `int` or `None`. "
                f"{type(param_val)} was given."
            )
        # if the value did not exist in the config, then we set it to the default (None) returned from `.get` method
        self.optimizer_params[param_name] = param_val
```
Have you heard of pydantic? Using it might produce some cool results.
> In `aea-config.yaml`, `n_observations` is 10, while here it is 120. Is that right?

Yes, we need the service to run with proper values and the CI to run fast.
```diff
- unit_amount_from_sec(self.params.end - self.params.start, self._unit)
+ unit_amount_from_sec(self.params.ts_length, self._unit)
```
We now have a property for this:
open-autonomy/packages/valory/skills/apy_estimation_abci/models.py
Lines 204 to 207 in 554269b
```python
@property
def ts_length(self) -> int:
    """The length of the timeseries in seconds."""
    return self.n_observations * self.interval
```
```diff
- self.context.logger.info(
-     "Estimates have been received:\n" f"{estimates.to_string()}"
- )
+ self.context.logger.info("Estimates have been received.")
```
Removed because it was a duplicate.
```diff
@@ -192,9 +194,21 @@ def __init__(self, *args: Any, **kwargs: Any) -> None:

         self.__validate_params()

+    @property
+    def start(self) -> Optional[int]:
```
`start` is now given by a property that utilizes the `ts_length`. The formula is: `start = end - n_observations * interval`. `end` and `interval` are given in UNIX time, and therefore `start` is also in UNIX time.
```diff
- max_attempts: int = 20,
+ max_attempts: int = 26,
```
Addresses #1396. 12 extra seconds should be enough for CI.
ok so that's why. It's an odd number to pick though
I think it's an even 😅 🤣
yeah not a prime number 👎
```python
# filter out end values that will result in a negative `start`
assume(end >= n_observations * interval)
```
Is this better or somehow different than using `.filter(lambda a, b, c: c >= a * b)`?
I think they both filter the values; I used `assume` as I found it easier to read.
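The equivalence can be illustrated without a Hypothesis dependency; this plain-Python sketch mimics both rejection styles over the same candidate triples and is purely illustrative, not the actual test code:

```python
from itertools import product

# Candidate (n_observations, interval, end) triples, as a stand-in for
# what Hypothesis strategies would generate.
candidates = list(product(range(5), repeat=3))

# `.filter(...)`-style: invalid triples are dropped at generation time.
filtered = [(n, i, e) for n, i, e in candidates if e >= n * i]

# `assume(...)`-style: invalid triples are rejected inside the test body.
assumed = []
for n, i, e in candidates:
    if not (e >= n * i):
        continue  # assume() discards this example and moves on
    assumed.append((n, i, e))
```

Both styles end up exercising the same set of examples; the practical difference is mostly readability and where Hypothesis accounts for the rejections.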
Codecov Report: Base 92.70% // Head 92.70% // No change to project coverage 👍

```
@@           Coverage Diff           @@
##             main    #1402   +/-   ##
=======================================
  Coverage   92.70%   92.70%
=======================================
  Files          55       55
  Lines        2138     2138
=======================================
  Hits         1982     1982
  Misses        156      156
=======================================
```

Flags with carried forward coverage won't be shown.
Looks good. Some e2e tests are failing.
The previous run failed because of #1168.
Proposed changes
Replace the start timestamp in the configuration with the number of observations we want for the timeseries.
We now set the number of data points we wish to download from the past (starting from the configured end, using the configured interval).
This is more intuitive than setting the exact start timestamp in UNIX, resulting in better DevX.
Fixes
n/a
Types of changes
What types of changes does your code introduce? (A breaking change is a fix or feature that would cause existing functionality and APIs to not work as expected.)
Put an `x` in the box that applies.

Checklist
Put an `x` in the boxes that apply.
… `main` branch (left side). Also you should start your branch off our `main`.

Further comments
n/a