
[torch.arange] Small epsilon should be subtracted from end, not added to end #99853

Closed

amitani opened this issue Apr 24, 2023 · 1 comment

Labels: actionable · module: docs · triaged

Comments

amitani (Contributor) commented Apr 24, 2023

📚 The doc issue

In https://pytorch.org/docs/stable/generated/torch.arange.html,
"""
Note that non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise adding a small epsilon to end in such cases.
"""
However, this advice is inconsistent with the documented behavior that `end` is excluded: after adding an epsilon, the originally intended `end` may always be included. To keep the original `end` excluded, the adjusted `end` must instead be slightly smaller than the original.
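
A minimal sketch of the likely mechanism (assuming, as the same docs describe, that the output length is ⌈(end − start) / step⌉): 1/49 has no exact binary representation, so the stored step is slightly smaller than the true value and the division lands just above 49, while 1/16 is exactly representable and behaves as expected.

import math

# 1/49 rounds down in binary double precision, so (end - start) / step
# lands just above 49 and the ceiling yields 50 elements -- the last one
# equal to `end`.
print((1.0 - 0.0) / (1 / 49))             # 49.00000000000001
print(math.ceil((1.0 - 0.0) / (1 / 49)))  # 50 -> `end` is included

# 1/16 is exactly representable, so the count is exact and `end` is excluded.
print((1.0 - 0.0) / (1 / 16))             # 16.0
print(math.ceil((1.0 - 0.0) / (1 / 16)))  # 16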

Example script (results may be environment dependent)

import torch

print("Example of `end` is seemingly included from rounding error.")
print(f"{torch.arange(0, 1, 1/49)=}")
print("Error is not fixed by adding epsilon.")
print(f"{torch.arange(0, 1.00001, 1/49)=}")
print("Error is fixed by subtracting epsilon.")
print(f"{torch.arange(0, 0.99999, 1/49)=}")
print("Example of `end` excluded as expected.")
print(f"{torch.arange(0, 1, 1/16)=}")
print("After adding epsilon, now 1 is included, which is not consistent.")
print(f"{torch.arange(0, 1.00001, 1/16)=}")
print("Subtracting epsilon has no effect (expected).")
print(f"{torch.arange(0, 0.99999, 1/16)=}")

gives

Example where `end` is seemingly included due to rounding error.
torch.arange(0, 1, 1/49)=tensor([0.0000, 0.0204, 0.0408, 0.0612, 0.0816, 0.1020, 0.1224, 0.1429, 0.1633,
        0.1837, 0.2041, 0.2245, 0.2449, 0.2653, 0.2857, 0.3061, 0.3265, 0.3469,
        0.3673, 0.3878, 0.4082, 0.4286, 0.4490, 0.4694, 0.4898, 0.5102, 0.5306,
        0.5510, 0.5714, 0.5918, 0.6122, 0.6327, 0.6531, 0.6735, 0.6939, 0.7143,
        0.7347, 0.7551, 0.7755, 0.7959, 0.8163, 0.8367, 0.8571, 0.8776, 0.8980,
        0.9184, 0.9388, 0.9592, 0.9796, 1.0000])
Error is not fixed by adding epsilon.
torch.arange(0, 1.00001, 1/49)=tensor([0.0000, 0.0204, 0.0408, 0.0612, 0.0816, 0.1020, 0.1224, 0.1429, 0.1633,
        0.1837, 0.2041, 0.2245, 0.2449, 0.2653, 0.2857, 0.3061, 0.3265, 0.3469,
        0.3673, 0.3878, 0.4082, 0.4286, 0.4490, 0.4694, 0.4898, 0.5102, 0.5306,
        0.5510, 0.5714, 0.5918, 0.6122, 0.6327, 0.6531, 0.6735, 0.6939, 0.7143,
        0.7347, 0.7551, 0.7755, 0.7959, 0.8163, 0.8367, 0.8571, 0.8776, 0.8980,
        0.9184, 0.9388, 0.9592, 0.9796, 1.0000])
Error is fixed by subtracting epsilon.
torch.arange(0, 0.99999, 1/49)=tensor([0.0000, 0.0204, 0.0408, 0.0612, 0.0816, 0.1020, 0.1224, 0.1429, 0.1633,
        0.1837, 0.2041, 0.2245, 0.2449, 0.2653, 0.2857, 0.3061, 0.3265, 0.3469,
        0.3673, 0.3878, 0.4082, 0.4286, 0.4490, 0.4694, 0.4898, 0.5102, 0.5306,
        0.5510, 0.5714, 0.5918, 0.6122, 0.6327, 0.6531, 0.6735, 0.6939, 0.7143,
        0.7347, 0.7551, 0.7755, 0.7959, 0.8163, 0.8367, 0.8571, 0.8776, 0.8980,
        0.9184, 0.9388, 0.9592, 0.9796])
Example where `end` is excluded as expected.
torch.arange(0, 1, 1/16)=tensor([0.0000, 0.0625, 0.1250, 0.1875, 0.2500, 0.3125, 0.3750, 0.4375, 0.5000,
        0.5625, 0.6250, 0.6875, 0.7500, 0.8125, 0.8750, 0.9375])
After adding epsilon, 1 is now included, which is inconsistent.
torch.arange(0, 1.00001, 1/16)=tensor([0.0000, 0.0625, 0.1250, 0.1875, 0.2500, 0.3125, 0.3750, 0.4375, 0.5000,
        0.5625, 0.6250, 0.6875, 0.7500, 0.8125, 0.8750, 0.9375, 1.0000])
Subtracting epsilon has no effect (expected).
torch.arange(0, 0.99999, 1/16)=tensor([0.0000, 0.0625, 0.1250, 0.1875, 0.2500, 0.3125, 0.3750, 0.4375, 0.5000,
        0.5625, 0.6250, 0.6875, 0.7500, 0.8125, 0.8750, 0.9375])

Suggest a potential alternative/fix

"""
Note that non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise subtracting a small epsilon from end in such cases.
"""

cc @svekars @carljparker

@lezcano added the module: docs label Apr 24, 2023
lezcano (Collaborator) commented Apr 24, 2023

Fair enough. We'd accept a PR fixing this issue.

@mikaylagawarecki added the actionable and triaged labels Apr 24, 2023
amitani added a commit to amitani/pytorch that referenced this issue Apr 25, 2023
@amitani changed the title [torch.arrange] Small epsilon should be subtracted from end, not added to end → [torch.arange] Small epsilon should be subtracted from end, not added to end Apr 25, 2023