make risk measure more memory efficient #1034
Conversation
This pull request was exported from Phabricator. Differential Revision: D33037848
Codecov Report
```diff
@@           Coverage Diff            @@
##              main     #1034   +/- ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files          113       113
  Lines         9049      9051    +2
=========================================
+ Hits          9049      9051    +2
```
saitcakmak left a comment
LGTM. The CVaR is pretty safe, and the VaR was tested offline to make sure it has the same behavior as the original implementation.
```python
# `sample_shape x batch_shape x (q * n_w) x m`
return torch.quantile(
    input=prepared_samples,
    q=self._q,
```
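(The diff excerpt above is cut off mid-call. For context, a hedged sketch of what the complete call plausibly looks like; everything after `q` is an assumption informed by the discussion below, not the verbatim diff:)

```python
# Sketch only; the arguments after `q` are assumptions, not the actual diff.
return torch.quantile(
    input=prepared_samples,
    q=self._q,
    dim=-1,  # reduce over the n_w perturbation samples
    keepdim=False,
    interpolation="lower",  # keeps the result an order statistic (see below)
)
```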
Why not just use alpha as the quantile here? Seems like that's what you'd want?
It produces different results due to the way quantile is implemented.
If you have n_w=20, alpha=0.5, you want the result to be sorted_samples[..., 9]. Using q=alpha with quantile will return sorted_samples[..., 10]. Technically, both are totally fine, since either one is a consistent estimator of VaR as n_w -> ∞. (I'd actually go for interpolation="linear" if we didn't care about the result being different.)
The reason we wanted the result to be consistent with the original implementation is that i) it is consistent with the way CVaR is implemented, and ii) it is consistent with the definition of VaR used in the papers we reference.
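To make the distinction concrete, here is a small illustrative snippet (not from the PR; the quantile level the implementation actually passes as `self._q` is not shown on this page) reproducing the n_w=20, alpha=0.5 example:

```python
import torch

# Illustrative only: the n_w=20, alpha=0.5 example from the discussion above.
n_w, alpha = 20, 0.5
samples = torch.randn(n_w)
sorted_samples, _ = samples.sort()

# The sort-based implementation picks a fixed order statistic:
target = sorted_samples[9]  # index ceil(alpha * n_w) - 1 = 9

# q=alpha with the default linear interpolation does not return an order
# statistic here: the quantile position is alpha * (n_w - 1) = 9.5, so the
# result interpolates between sorted_samples[9] and sorted_samples[10].
linear = torch.quantile(samples, q=alpha)

# interpolation="lower" floors the position (9.5 -> 9), recovering the order
# statistic for this particular (n_w, alpha). In general the quantile level
# must be chosen so the position lands exactly on ceil(alpha * n_w) - 1.
lower = torch.quantile(samples, q=alpha, interpolation="lower")
assert torch.equal(lower, target)
```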
Summary:
Pull Request resolved: meta-pytorch#1034

Using torch.quantile is way more efficient because it does not create large tensors for values and indices (which are the same size as the input). torch.topk also yields memory improvements.

Reviewed By: Balandat

Differential Revision: D33037848

fbshipit-source-id: 12d09681151bf4841c23e06e900f322a821a2296
Force-pushed 7b75ff1 to e3deefb
This pull request was exported from Phabricator. Differential Revision: D33037848
Summary:
Using torch.quantile is way more efficient because it does not create large tensors for values and indices (which are the same size as the input). torch.topk also yields memory improvements.
Differential Revision: D33037848
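As a rough sketch of the memory argument (hypothetical illustration, not the PR's benchmark; the shapes and the alpha used here are made up): torch.sort materializes a values tensor and an int64 indices tensor, each the same size as the input, whereas torch.quantile and torch.topk only return the requested quantile or the k selected samples.

```python
import math
import torch

# Hypothetical shapes: sample_shape x batch_shape x n_w perturbation samples.
samples = torch.randn(16, 32, 2000)
alpha, n_w = 0.5, 2000
idx = math.ceil(alpha * n_w) - 1  # = 999

# Sort-based VaR: sort returns BOTH a float values tensor and an int64 indices
# tensor, each the same size as the input, just to extract one order statistic.
values, _indices = samples.sort(dim=-1)
var_sorted = values[..., idx]

# quantile-based VaR: no input-sized intermediates are returned; with
# interpolation="lower" the result is still an order statistic. (q=alpha
# happens to hit index 999 here since floor(alpha * (n_w - 1)) = 999; the
# quantile level the PR actually uses is not shown on this page.)
var_quantile = torch.quantile(samples, q=alpha, dim=-1, interpolation="lower")
assert torch.equal(var_sorted, var_quantile)

# CVaR via torch.topk: returns only the k worst samples (and k indices),
# rather than a full sorted copy of the input.
k = math.ceil(alpha * n_w)
worst, _ = samples.topk(k, dim=-1, largest=False)
cvar = worst.mean(dim=-1)
```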