Closed as not planned
🐛 Describe the bug
I'm noticing the following. As I understand it, a Python float is 64-bit (IEEE-754 double), so converting to torch.float32 is lossy. What is the recommended way to preserve as much accuracy as possible in this conversion?
```shell
python -c "import torch; import time; offset=1726274430; torch.set_printoptions(precision=10); l1=[time.time()-offset, time.time()-offset]; print('list:', l1, type(l1[0])); t1=torch.tensor(l1); print('cpu:', t1, t1.dtype); t1=t1.to('cuda'); print('gpu:', t1, t1.dtype)"
```

```
list: [521468.6812365055, 521468.6812376976] <class 'float'>
cpu: tensor([521468.6875000000, 521468.6875000000]) torch.float32
gpu: tensor([521468.6875000000, 521468.6875000000], device='cuda:0') torch.float32
```
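One way to avoid the loss shown above is to request a 64-bit dtype explicitly when building the tensor, since `torch.tensor` defaults to `torch.float32` for Python floats. A minimal sketch (the `offset` value is taken from the repro above):

```python
import time
import torch

offset = 1726274430
l1 = [time.time() - offset, time.time() - offset]

# Default: Python float64 values are narrowed to float32 (~7 decimal digits).
t32 = torch.tensor(l1)

# Explicit dtype keeps the full 64-bit precision of the Python floats.
t64 = torch.tensor(l1, dtype=torch.float64)

# The float64 tensor round-trips back to the original Python floats exactly.
assert t64.tolist() == l1
print(t32.dtype, t64.dtype)
```

Alternatively, `torch.set_default_dtype(torch.float64)` changes the default for all subsequently created float tensors, at the cost of doubled memory and (on many GPUs) much slower arithmetic.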
Versions
N/A