
Change debugger info of pytorch tensor from value to shape #1191

Closed
shenmishajing opened this issue Jan 24, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@shenmishajing

Right now the debugger displays the value string of a PyTorch tensor, which is mostly meaningless. How about changing this to the tensor's shape? That would be more helpful.

@karthiknadig karthiknadig changed the title from "Chaneg debuger info of pytorch tensor from value to shape" to "Change debugger info of pytorch tensor from value to shape" Jan 24, 2023
@karthiknadig karthiknadig transferred this issue from microsoft/vscode-python Jan 24, 2023
@int19h int19h added the enhancement New feature or request label Jan 24, 2023
@int19h
Contributor

int19h commented Jan 24, 2023

We normally use repr() on the assumption that whatever the type author made it produce is the preferred output (since it is also what the REPL uses), so ideally this should be fixed on that end, so that the whole tooling ecosystem benefits.

However, if that is not feasible, it can be special-cased. Can you tell more about the scenario, and why one representation is superior to the other here?
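As a minimal sketch of that point (illustration only, not debugpy internals; `FakeTensor` is a made-up stand-in): the REPL and the debugger variable pane both display whatever `__repr__` returns, so a type that includes shape information there is immediately more useful in every tool.

```python
# Illustration only: a toy type whose __repr__ leads with the shape.
# Any tool that calls repr() (REPL, debugger variable pane, logging)
# picks this up with no special-casing needed.
class FakeTensor:
    def __init__(self, shape, values):
        self.shape = shape
        self.values = values

    def __repr__(self):
        return f"FakeTensor(shape={list(self.shape)}, values={self.values!r})"

t = FakeTensor((2, 3), [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(repr(t))  # FakeTensor(shape=[2, 3], values=[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```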

@shenmishajing
Author

shenmishajing commented Jan 25, 2023

I found some issues in the pytorch repo about this, like pytorch/pytorch#63855 and pytorch/pytorch#35071. It seems many people have already asked for this feature, but pytorch has not implemented it; those two issues have sat idle for two or three years. I left some comments on them, but I'm pessimistic about it. Therefore, let's make this a special case.

First, when we are debugging PyTorch code, the errors we hit are almost always caused by a wrong shape. For example, a fc layer (technically a matrix multiply) will raise an error for A (a 3*4 matrix) @ B (a 5*4 matrix). If vscode displayed the shape info of those two tensors, we could fix it with A @ B.T immediately. But if vscode displays something like tensor([[[ -38.3458, 214.4476, 767.8055, ... and tensor([[[ 29.1078, 215.7885, 676.3702, ..., that is no help, and we cannot fix it until we print their shapes.
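For concreteness, a minimal reproduction of that situation (a rough sketch assuming a standard `torch` install, with the shapes from the example above):

```python
import torch

A = torch.randn(3, 4)  # 3x4 matrix
B = torch.randn(5, 4)  # 5x4 matrix

try:
    A @ B  # raises RuntimeError: inner dimensions (4 and 5) do not match
except RuntimeError as err:
    print(err)

C = A @ B.T  # works: (3x4) @ (4x5) -> (3x5)
print(C.shape)  # torch.Size([3, 5])
```

Seeing the shapes in the variables pane makes the `.T` fix obvious without any extra print calls.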

Second, most of the time we initialize the network randomly or load a pre-trained checkpoint, so the values of the network's weights are effectively random or meaningless to a human. In fact, not just the weights: almost all tensors, except some input tensors, are random or meaningless. And when we do want to check whether a tensor's values are correct, we check the loss or metrics indirectly, or visualize them with a heat map, rather than inspecting the values directly.

Right now, vscode displays the tensor's value when debugging. It looks like: tensor([[[ -38.3458, 214.4476, 767.8055, ....
[Screenshot 2023-01-25 10:06:56: the debugger variables pane showing the truncated tensor value]
It would be better if we added the shape info before the value, so it looks like: tensor(shape=torch.Size([32, 80, 108]), [[[ -38.3458, .... If that is hard to implement, it would be fine to just add the shape info before the tensor, like: shape=[32, 80, 108], tensor([[[ -38.3458, ... or simply [32, 80, 108], tensor([[[ -38.3458, .... This should be easy to implement, since it only requires concatenating the shape string with the original repr string.
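A rough sketch of that last variant, just to show how little is involved (`shape_first_repr` is a hypothetical helper for illustration, not an actual debugpy hook):

```python
import torch

def shape_first_repr(t: torch.Tensor, max_len: int = 60) -> str:
    """Hypothetical helper: prefix the shape, then the (truncated) original repr."""
    value = repr(t).replace("\n", " ")
    if len(value) > max_len:
        value = value[:max_len] + " ..."
    return f"shape={list(t.shape)}, {value}"

t = torch.randn(32, 80, 108)
print(shape_first_repr(t))
# e.g. shape=[32, 80, 108], tensor([[[ -0.3846,  0.2144,  0.7678, ...
```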

@int19h int19h added the enhancement New feature or request label and removed the waiting for response label Jan 25, 2023
@microsoft microsoft locked and limited conversation to collaborators Feb 1, 2023
@int19h int19h converted this issue into discussion #1199 Feb 1, 2023

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
