
Conversation

@sdaulton (Contributor)

Summary:
Using torch.quantile is much more memory-efficient because it does not create large tensors for the values and indices (which are the same size as the input).

torch.topk also yields memory improvements.

Differential Revision: D33037848
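
For illustration, here is a minimal, self-contained sketch of the memory argument, using made-up shapes and a made-up risk level; this is not the BoTorch implementation, and the exact quantile/index conventions are discussed in the review below.

import torch

# Hypothetical shapes and risk level, purely to illustrate the memory point.
samples = torch.rand(64, 128, 512)          # e.g. `sample_shape x batch_shape x n_w`
alpha = 0.1
k = max(int(alpha * samples.shape[-1]), 1)  # number of worst-case samples to keep

# A full sort materializes both values and indices, each the same size as `samples`.
sorted_values, sorted_indices = samples.sort(dim=-1)
cvar_from_sort = sorted_values[..., :k].mean(dim=-1)

# torch.topk only materializes the k smallest entries per slice.
cvar_from_topk = samples.topk(k=k, dim=-1, largest=False).values.mean(dim=-1)

# torch.quantile returns a single value per slice and does not hand back
# input-sized values/indices tensors.
var_from_quantile = torch.quantile(samples, q=alpha, dim=-1)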

facebook-github-bot added the CLA Signed and fb-exported labels on Dec 12, 2021
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D33037848

codecov bot commented Dec 12, 2021

Codecov Report

Merging #1034 (e3deefb) into main (fa1d6d2) will not change coverage.
The diff coverage is 100.00%.

Impacted file tree graph

@@            Coverage Diff            @@
##              main     #1034   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files          113       113           
  Lines         9049      9051    +2     
=========================================
+ Hits          9049      9051    +2     
Impacted Files                          Coverage Δ
botorch/acquisition/risk_measures.py    100.00% <100.00%> (ø)

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update fa1d6d2...e3deefb.

@saitcakmak (Contributor) left a comment:

LGTM. The CVaR change is pretty safe, and the VaR was tested offline to make sure it has the same behavior as the original implementation.

# `sample_shape x batch_shape x (q * n_w) x m`
return torch.quantile(
    input=prepared_samples,
    q=self._q,
Contributor:

Why not just use alpha as the quantile here? Seems like that's what you'd want?

Contributor:

It produces different results due to the way quantile is implemented.

If you have n_w=20, alpha=0.5, you want the result to be sorted_samples[..., 9]. Using q=alpha with quantile will return sorted_samples[..., 10]. Technically, both are totally fine, since the result is a consistent estimator of VaR as n_w -> \infty. (I'd actually go for interpolation="linear" if we didn't care about the result being different.)

The reason we wanted the result to be consistent with the original implementation is that i) it is consistent with the way CVaR is implemented, and ii) it is consistent with the definition of VaR used in the papers we reference.
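
To make the index arithmetic concrete, here is a small standalone sketch with hypothetical tensors; the names and the ceil(alpha * n_w) - 1 convention are illustrative, taken from the numbers in this comment rather than from the BoTorch source.

import math
import torch

n_w, alpha = 20, 0.5
samples = torch.randn(3, n_w)                 # made-up `batch_shape x n_w` tensor
sorted_samples = samples.sort(dim=-1).values

# Index-based VaR as described above: ceil(alpha * n_w) - 1 = 9 for these values.
alpha_idx = math.ceil(alpha * n_w) - 1
var_by_index = sorted_samples[..., alpha_idx]

# torch.quantile maps q to the fractional index q * (n_w - 1); q = 0.5 gives 9.5,
# so the result is not sorted_samples[..., 9]: the default "linear" interpolation
# mixes indices 9 and 10, and interpolation="higher" rounds up to index 10.
var_by_alpha = torch.quantile(samples, q=alpha, dim=-1)

print(var_by_index)
print(var_by_alpha)  # generally differs from var_by_index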

Summary:
Pull Request resolved: meta-pytorch#1034

Using torch.quantile is much more memory-efficient because it does not create large tensors for the values and indices (which are the same size as the input).

torch.topk also yields memory improvements.

Reviewed By: Balandat

Differential Revision: D33037848

fbshipit-source-id: 12d09681151bf4841c23e06e900f322a821a2296
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D33037848

