fixed TensorType for TPU device execution, limited the unit test check for torch::kDouble on infinity entries #3116

Conversation
        
          
test/cpp/test_aten_xla_tensor.cpp (Outdated)

      AllClose(b, xla_b);

-     if (DebugUtil::ExperimentEnabled("nonzero")) {
+     if (DebugUtil::ExperimentEnabled("nonzero") &&
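Taken in isolation, the guard being tightened in the diff above says: run the extra check only when both the experiment flag and a device condition hold. A minimal self-contained sketch with mock stand-ins (these are not the real torch_xla DebugUtil or device APIs):

```cpp
#include <cassert>
#include <set>
#include <string>

// Mock stand-ins, for illustration only; the real torch_xla
// DebugUtil::ExperimentEnabled and device plumbing differ.
namespace mock {
enum class HwType { CPU, GPU, TPU };

bool ExperimentEnabled(const std::string& name) {
  static const std::set<std::string> enabled = {"nonzero"};
  return enabled.count(name) > 0;
}
}  // namespace mock

// Before the change, the guarded body ran whenever the "nonzero"
// experiment was enabled; after it, the device type is consulted too
// (the TPU comparison here mirrors the condition quoted later in the
// review, not code visible in this diff).
bool ShouldRunExtraCheck(mock::HwType hw) {
  return mock::ExperimentEnabled("nonzero") && hw == mock::HwType::TPU;
}
```

With the extra conjunct, enabling the experiment alone no longer triggers the check on every device type.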
Please rebase
done.
The diff is still here; can you rebase again? The bridge::AtenDeviceToXlaDevice(device).hw_type == DeviceType::TPU check here has been removed from master. You might need to pull first.
For some odd reason, the rebase and merge operations do not work as intended: they report that everything is up to date and show the latest master commit present in this branch. Clearly something is wrong, given this delta is still present. I ran git checkout --patch on this file to resolve the problem.
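The recovery described above can be sketched as follows, using the non-interactive form (git checkout <ref> -- <file>) of the git checkout --patch step, in a throwaway repo:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com   # throwaway identity for the demo
git config user.name dev
echo "good" > file.cpp
git add file.cpp
git commit -qm "base"
echo "stale delta" > file.cpp           # simulate the unwanted local diff
git checkout HEAD -- file.cpp           # restore the file from the target ref
result=$(cat file.cpp)                  # file content is back to "good"
```

The --patch variant used in the PR does the same thing hunk by hunk, interactively.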
        
          
torch_xla/csrc/aten_xla_type.cpp (Outdated)

-     auto element_type = TensorTypeToRawXlaType(self.scalar_type());
+     XLATensor input_tensor = bridge::GetXlaTensor(self);
+     const Device& device = input_tensor.GetDevice();
+     auto element_type = GetDevicePrimitiveType(
You can use MakeXlaPrimitiveType, which takes scalar_type directly.
Good shortcut. Done.
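The idea behind this change (and the MakeXlaPrimitiveType shortcut) is that the XLA element type must be chosen per device, since TPUs have no 64-bit floating-point support. A self-contained sketch with mock enums, not the actual torch_xla signatures:

```cpp
#include <cassert>

// Mock types for illustration; the real code uses at::ScalarType,
// xla::PrimitiveType, and torch_xla's Device.
enum class ScalarType { kFloat, kDouble, kLong };
enum class PrimitiveType { F32, F64, S64 };
enum class HwType { CPU, GPU, TPU };

// Device-aware mapping in the spirit of MakeXlaPrimitiveType: on TPU,
// kDouble is downcast to F32 because the hardware lacks F64.
PrimitiveType MakeXlaPrimitiveTypeSketch(ScalarType t, HwType hw) {
  switch (t) {
    case ScalarType::kDouble:
      return hw == HwType::TPU ? PrimitiveType::F32 : PrimitiveType::F64;
    case ScalarType::kLong:
      return PrimitiveType::S64;
    default:
      return PrimitiveType::F32;
  }
}
```

This may also be why the PR title limits the torch::kDouble unit-test checks on infinity entries: once doubles execute as F32 on TPU, values beyond float range overflow to infinity, so exact comparisons cannot hold everywhere.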
Mostly LGTM, please pull and rebase.
        
          
…a_tensor.cpp. improved variable use in torch_xla/csrc/aten_xla_type.cpp
Did you make sure the tests pass on TPU?
Confirming this code passes on TPU tests. CC @JackCaoG
Issue #3115