
[Bugfix][MXNet] Fix edge order and builtin max bug in mx #247

Merged
merged 3 commits into master from fix-mx-spmv on Dec 5, 2018

Conversation

jermainewang
Member

@jermainewang jermainewang commented Dec 5, 2018

Changes:

  • Fix a bug in src_mul_edge: the csr_matrix edge order does not match the COO order (see the sketch after this list).
  • Fix a bug in the builtin max due to different broadcasting behaviors between MXNet and PyTorch.
  • Fix a bug in utils due to different behaviors for rank-0 tensors between MXNet and PyTorch.
  • Fix a bug in creating a graph from a scipy sparse matrix: the node list is not correctly provided.
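
As a minimal sketch of the first point (illustrative, not code from this PR): scipy's tocsr() sorts entries by row, so per-edge data stored in insertion (COO) order comes out permuted after the conversion.

import numpy as np
import scipy.sparse as sp

# Edges in insertion order: (1, 2), (0, 1), (0, 2); data holds the edge ids.
rows = np.array([1, 0, 0])
cols = np.array([2, 1, 2])
eids = np.array([0, 1, 2])

coo = sp.coo_matrix((eids, (rows, cols)), shape=(3, 3))
csr = coo.tocsr()
print(coo.data)  # [0 1 2] -- insertion (COO) order
print(csr.data)  # [1 2 0] -- sorted by row; the edge ids are permuted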

@jermainewang
Member Author

Another bug in the immutable graph index, which may need your effort to fix @eric-haibin-lin @zheng-da:

import os
os.environ['DGLBACKEND'] = 'mxnet'
import dgl

# Read-only graph: backed by the immutable graph index.
elist = [(1, 2), (0, 1), (0, 2)]
g = dgl.DGLGraph(elist, readonly=True)
print(g.edges(form='all'))
print(g.edge_id(0, 1))

# Mutable graph built from the same edge list.
elist = [(1, 2), (0, 1), (0, 2)]
g = dgl.DGLGraph(elist, readonly=False)
print(g.edges(form='all'))
print(g.edge_id(0, 1))

You can try the example above. The two results differ; the second one is the correct output (and matches the PyTorch backend).

python/dgl/backend/mxnet/tensor.py
@@ -73,7 +83,7 @@ def mean(input, dim):
     return nd.mean(input, axis=dim)
 
 def max(input, dim):
-    return nd.max(input, axis=dim).asnumpy()[0]
+    return nd.max(input, axis=dim)
Collaborator

I thought this needs to return a Python scalar?

Member Author

Nope. This should be the normal max that reduces along the given dim, similar to sum.
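
For illustration (a minimal sketch, not code from this PR), the intended semantics: max reduces along the given axis and returns an NDArray, mirroring sum.

from mxnet import nd

x = nd.array([[1., 3.],
              [2., 0.]])
print(nd.max(x, axis=0))  # [2. 3.] -- column-wise max, still an NDArray
print(nd.sum(x, axis=0))  # [3. 3.] -- same reduction pattern as sum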

python/dgl/utils.py
@@ -237,8 +237,8 @@ def build_relabel_map(x, sorted=False):
         unique_x, _ = F.sort_1d(F.unique(x))
     else:
         unique_x = x
-    map_len = int(F.max(unique_x, dim=0)) + 1
+    map_len = int(F.asnumpy(F.max(unique_x, dim=0))) + 1
     old_to_new = F.zeros(map_len, dtype=F.int64, ctx=F.cpu())
Collaborator

Why do you prefer to use asnumpy here? What happens with PyTorch?

Member Author

In torch, int(tensor([0])) directly becomes 0. This is not valid for MXNet. Technically, int() should also be a backend function call; changing it to numpy makes it more consistent between the two backends.
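
A minimal sketch of the difference (assumed MXNet/PyTorch behavior at the time of this PR; not code from the thread):

import torch
from mxnet import nd

print(int(torch.tensor([0])))  # 0 -- a single-element torch tensor casts directly

x = nd.array([0])
# int(x) failed on MXNet versions without __int__ support; converting to
# numpy first works on both backends.
print(int(x.asnumpy()[0]))  # 0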

Collaborator

So we should have MXNet NDArray support int()? I think this is easy to do. I'll create an issue in the mxnet repo for this.

@zheng-da zheng-da merged commit eafcb7e into master Dec 5, 2018
@zheng-da
Collaborator

zheng-da commented Dec 5, 2018

@jermainewang the bug you reported has been fixed.
https://github.com/jermainewang/dgl/pull/251/files

@jermainewang jermainewang deleted the fix-mx-spmv branch December 5, 2018 17:14