[Roadmap] v0.2 release checklist #302

Open
jermainewang opened this Issue Dec 12, 2018 · 28 comments

7 participants

jermainewang commented Dec 12, 2018

Thanks everyone for the hard work. We really did a lot for a smooth beta release. With the repo now open and more community help incoming, it is a good time to figure out the roadmap to the v0.2 release. Here is a draft proposal; feel free to reply, comment, and discuss. Note that the list is long, but we can figure out the priorities later. We'd like to hear your opinions and push DGL to the next stage.

Model examples

Core system improvement

Tutorial/Blog

Project improvement

Deferred goals

(will not be included in this release unless someone takes over)

  • PinSage (@BarclayII )
  • More datasets:
    • Social networks: Reddit
    • Recommendation: Amazon product
    • Knowledge graphs: YAGO, Freebase
    • Web page graphs: CommonCrawl
  • TensorFlow backend
  • Kernel support for max/min reducers.
  • Kernel support for vector-shaped edge features in src_mul_edge.
  • Kernel support for sparse src_mul_dst.
  • Improve scheduling:
    • Friendlier error messages and easier debugging.
    • Other scheduling strategies: degree padding.
    • Optimize schedulers for pull.
    • Cache the scheduling results for static graphs to improve performance.
  • PyTorch: improve SPMM using coalesced indices.
  • MXNet: support the COO format.
  • MXNet: speed up the conversion from COO to CSR, and from numpy to CSR.
  • MXNet: support Gluon hybridization and optimize the computation graph to speed up.
  • Distributed training:
    • Simple RPC component (@aksnzhy )
    • Distributed sampling (@aksnzhy )
    • Simple KVStore for node embeddings
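For reference, the kernel items above (max/min reducers, vector-shaped edge features in src_mul_edge) reduce to semantics like the following naive numpy sketch. The function name and signature are illustrative, not DGL's API:

```python
import numpy as np

def src_mul_edge_max(src, dst, src_feat, edge_feat, num_nodes):
    """Reference semantics: src_mul_edge message with a max reducer.

    src, dst  : (E,) edge endpoint arrays
    src_feat  : (N, D) node features
    edge_feat : (E, D) or (E, 1) edge features (vector-shaped allowed)
    Returns (N, D): for each node v, the elementwise max over its
    in-edges e=(u, v) of src_feat[u] * edge_feat[e].
    """
    out = np.full((num_nodes, src_feat.shape[1]), -np.inf)
    msgs = src_feat[src] * edge_feat      # broadcasting covers (E, 1)
    for e, v in enumerate(dst):           # naive loop; a kernel fuses this
        out[v] = np.maximum(out[v], msgs[e])
    return out

# Toy graph with edges 0->2 and 1->2
src = np.array([0, 1]); dst = np.array([2, 2])
x = np.array([[1.0, 2.0], [3.0, 1.0], [0.0, 0.0]])
w = np.array([[2.0], [1.0]])              # scalar edge weights, broadcast
res = src_mul_edge_max(src, dst, x, w, 3)
print(res[2])  # → [3. 4.]
```

A fused kernel would replace the Python loop with a segment reduction over `dst`, which is where the performance win comes from.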
BarclayII (Collaborator) commented Dec 12, 2018

My two cents (M2C).

Models:

Core system improvement:

  • Multigraph support: intelligently and efficiently check for duplicate edges on simple graphs, or whether the graph is a multigraph. Right now the burden is entirely on the users.
  • Fused operators: sparse softmax (nicer for GAT/transformer/etc.)
  • Group-applying on outbound or inbound edges of the same node (nicer for Capsule). We may need builtin functions such as group-softmax for this case.
  • Specialization support: add complete graphs.
  • Think of possible specialization support for "combination of regular graph components", such as the graph from transformer networks (a complete graph and a half-complete graph combined with a bipartite graph). We don't have to work on this in 0.2 if it sounds too complicated.
  • [EDIT] Node/edge removal support.
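The fused sparse softmax proposed above amounts to a softmax over each node's incoming edge scores. A minimal numpy sketch of those semantics (the function name is hypothetical, not DGL's API):

```python
import numpy as np

def edge_softmax(dst, scores, num_nodes):
    """Softmax over edge scores, grouped by destination node.

    dst    : (E,) destination node of each edge
    scores : (E,) unnormalized attention logits (e.g. from GAT)
    Returns (E,) weights that sum to 1 over each node's in-edges.
    """
    out = np.empty_like(scores)
    for v in range(num_nodes):            # a fused kernel avoids this loop
        mask = dst == v
        if mask.any():
            s = scores[mask]
            e = np.exp(s - s.max())       # subtract max for stability
            out[mask] = e / e.sum()
    return out

dst = np.array([2, 2, 1])
w = edge_softmax(dst, np.array([1.0, 1.0, 5.0]), 3)
# node 2's two in-edges get 0.5 each; node 1's single edge gets 1.0
```

This is exactly the normalization step in GAT and transformer attention, which is why fusing it pays off in those models.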

Project Improvement:

  • Linting using flake8
  • Type-checking using mypy (it supports comment-style type annotations to work with Python 2)

Others:

  • See how far symbolic computation graphs can work in general (related to Tensorflow and MXNet)
zheng-da (Collaborator) commented Dec 13, 2018

I think we should support an operator that does dot(X1, X2.T) * adj. This is more general than spmv. When we generalize "multiply" and "addition", it'll be more general than generalized spmv. I think it's useful for transformer.
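The proposed dot(X1, X2.T) * adj only needs the dot products at positions where adj is nonzero (an SDDMM-style operation), so the dense N x N product never has to be materialized. A sketch under that reading, with illustrative names:

```python
import numpy as np

def masked_dot(src, dst, X1, X2):
    """Compute (X1 @ X2.T)[u, v] only for the edges (u, v).

    Equivalent to dot(X1, X2.T) * adj without building the dense
    product; the per-edge multiply/add here is the pair that could be
    generalized to other operators, as suggested above.
    """
    return np.einsum("ed,ed->e", X1[src], X2[dst])  # per-edge dot products

X1 = np.array([[1.0, 0.0], [0.0, 2.0]])
X2 = np.array([[3.0, 1.0], [1.0, 1.0]])
src = np.array([0, 1]); dst = np.array([1, 0])
res = masked_dot(src, dst, X1, X2)
print(res)  # → [1. 2.] : dots for edges (0, 1) and (1, 0)
```

With dot products replaced by elementwise products this reduces to src_mul_dst, which is the connection drawn in the replies below.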

zheng-da (Collaborator) commented Dec 13, 2018

BTW, any action item for accelerating GAT?

jermainewang (Member, Author) commented Dec 13, 2018

I think we should support an operator that does dot(X1, X2.T) * adj. This is more general than spmv. When we generalize "multiply" and "addition", it'll be more general than generalized spmv. I think it's useful for transformer.

@zheng-da Is it similar to "sparse src_mul_dst" ?

zheng-da (Collaborator) commented Dec 13, 2018

I see what you mean by src_mul_dst. I think so. We can use this form of operations to accelerate other models such as GAT (actually, any models that use both source vertices and destination vertices in the edge computation).

How are we going to implement these operators, in DGL or in the backend? If we implement them in DGL, how do we support async computation in MXNet?

BarclayII (Collaborator) commented Dec 13, 2018

BTW, any action item for accelerating GAT?

That would be the sparse softmax I proposed?

How are we going to implement these operators? in DGL or in the backend? If we implement it in DGL, how to support async computation in MXNet?

Seems that PyTorch operators can be implemented externally (https://github.com/rusty1s/pytorch_scatter) so putting that in DGL repo should be fine.

I don't know if/how external operators can hook into MXNet; should we compile MXNet from source? Also, I guess MXNet can implement these operators in their own repo regardless, since having these sparse operators should always be beneficial?

jermainewang (Member, Author) commented Dec 13, 2018

In terms of implementation, it's better to be in DGL so it can be used with every framework. In general, we should follow each framework's guidance on implementing custom operators (such as this guide in PyTorch). We should avoid dependencies on the framework's C++ libraries. This leaves us a few choices, including:
(1) Use a Python extension, such as https://mxnet.incubator.apache.org/tutorials/gluon/customop.html .
(2) Use a dynamic library, such as https://pytorch.org/docs/stable/cpp_extension.html . We don't know about MXNet's solution yet, but we should investigate.

In terms of async, is MXNet's CustomOp async or not?

VoVAllen (Collaborator) commented Dec 13, 2018

Is there any plan for a group_apply_edges API? I think this would be useful, since we cannot do out-edge reduction at the current stage.

zheng-da (Collaborator) commented Dec 13, 2018

Previously, we discussed caching the results from the schedulers to avoid the expensive scheduling. I just realized that there is a lot of data copying from CPU to GPU during computation, even though we have copied all the data in Frame to GPU. The copies occur on Index (I suppose an Index is always created on the CPU first). Caching the scheduling results would also avoid this data copy from CPU to GPU.
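The caching idea could be as simple as keying the schedule (and any device-resident index arrays) on the graph, so repeated calls on a static graph skip both rescheduling and the CPU-to-GPU index copies. A minimal sketch; all names are illustrative:

```python
class ScheduleCache:
    """Memoize per-graph scheduling results for static graphs.

    Keyed on (graph id, message phase). A real implementation would
    also invalidate entries on graph mutation and keep the cached
    index tensors on the GPU to avoid repeated host-to-device copies.
    """
    def __init__(self):
        self._cache = {}

    def get(self, graph_key, phase, build):
        key = (graph_key, phase)
        if key not in self._cache:
            self._cache[key] = build()   # expensive scheduling runs once
        return self._cache[key]

calls = []
cache = ScheduleCache()
plan1 = cache.get("g1", "send_and_recv", lambda: calls.append(1) or "plan")
plan2 = cache.get("g1", "send_and_recv", lambda: calls.append(1) or "plan")
# build ran once; the second call is a cache hit
```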

jermainewang (Member, Author) commented Dec 13, 2018

@zheng-da , agree. This should be put on the roadmap.

Is there any plan for group_apply_edges API? I think this would be useful since we cannot do out-edges reduction at current stage.

This is somewhat related to the sparse softmax proposed by @BarclayII. In my mind, there are two levels. The lower level is group_apply_edges, which can operate on both out-edges and in-edges. Built atop it is the "sparse edge softmax" module that is widely used in many models. Agreed, this should be put on our roadmap.

BarclayII (Collaborator) commented Dec 13, 2018

I assume we also need a "sparse softmax" kernel (similar to TF's)? What I was thinking is to have group_apply accept a node UDF with incoming/outgoing edges (similar to the ones for reduce functions). sparse_softmax could be one such built-in UDF.
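The group_apply-with-node-UDF idea might look like the following reference semantics, here grouped by source node to cover the out-edge case VoVAllen asked about. All names are hypothetical; sparse_softmax would be one built-in UDF alongside user-supplied ones:

```python
import numpy as np

def group_apply_out_edges(src, edge_feats, num_nodes, udf):
    """Apply udf to each node's bundle of out-edge features.

    udf maps a (k, D) array holding one node's out-edge features to
    an array of the same shape (softmax, normalization, etc.).
    """
    out = np.empty_like(edge_feats)
    for u in range(num_nodes):               # per-group; a kernel would batch
        idx = np.nonzero(src == u)[0]
        if idx.size:
            out[idx] = udf(edge_feats[idx])
    return out

src = np.array([0, 0, 1])
feats = np.array([[1.0], [3.0], [5.0]])
normalize = lambda x: x / x.sum(axis=0, keepdims=True)  # one possible UDF
w = group_apply_out_edges(src, feats, 2, normalize)
# node 0's two out-edges become 0.25 and 0.75; node 1's single edge 1.0
```

Swapping `src` for `dst` gives the in-edge variant, so one primitive covers both directions.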

zheng-da (Collaborator) commented Dec 14, 2018

We should add more MXNet tutorials on the website.

zheng-da (Collaborator) commented Dec 14, 2018

In terms of implementing the new operators, CustomOp in MXNet might not be a good way; it's usually very slow. For performance, it's still best to implement them directly in the backend frameworks. At least, we can do that in MXNet. Not sure about PyTorch.

jermainewang (Member, Author) commented Dec 14, 2018

Do you know why it is slow? It might be a good chance to improve that part. Also, we need to benchmark PyTorch's custom op to see how much overhead it has. We should try our best to have them in DGL; otherwise, it will be really difficult to maintain them in every framework.

zheng-da (Collaborator) commented Dec 14, 2018

It calls Python code from C code. Because the operator is implemented in Python, its expressiveness is limited. Implementing sparse softmax efficiently in Python is hard.

eric-haibin-lin (Member) commented Dec 15, 2018

For sparse softmax I created a feature request at MXNet repo: apache/incubator-mxnet#12729

VoVAllen (Collaborator) commented Dec 16, 2018

Minor suggestion for project improvement:
Switch from nose to pytest for unit testing, mainly for two reasons:

  • pytest has test coverage reporting, which is useful for avoiding bugs, and it's fully compatible with nose.
  • nose has been deprecated since 2016. It has a successor, nose2, but more people seem to choose pytest.
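On the migration cost: plain assert-based test functions run unchanged under both nose and pytest, so the switch is mostly a runner swap. A sketch (the file and function names are illustrative):

```python
# test_graph_basics.py -- runs under both nose and pytest:
# plain functions with bare asserts need no framework imports.

def build_edge_list(pairs):
    """Tiny helper under test: split (u, v) pairs into src/dst lists."""
    src = [u for u, _ in pairs]
    dst = [v for _, v in pairs]
    return src, dst

def test_build_edge_list():
    src, dst = build_edge_list([(0, 1), (1, 2)])
    assert src == [0, 1]
    assert dst == [1, 2]
```

With the pytest-cov plugin installed, something like `pytest test_graph_basics.py --cov=dgl` would then produce the coverage report mentioned above.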
AIwem commented Dec 21, 2018

graph_nets?

jermainewang (Member, Author) commented Dec 21, 2018

graph_nets?

@AIwem, could you elaborate?

AIwem commented Dec 22, 2018

@jermainewang Have you looked at the ideas in the graph_nets model? Some of their solutions seem to be good!

jermainewang (Member, Author) commented Dec 22, 2018

We did some investigation of graph_nets and found that DGL can cover all the models in graph_nets. Maybe we missed something; could you point it out?

HuangZhanPeng commented Dec 29, 2018

Can node2vec with side information be trained on DGL? node2vec uses random walks to generate its sequences.

In the future, will GraphRNN be added to DGL? It performs well on large datasets.

jermainewang (Member, Author) commented Dec 31, 2018

Hi @HuangZhanPeng, thank you for the suggestion. It would be great if you could help contribute node2vec and GraphRNN to DGL. From my understanding, the random walk can be done in networkx first and then used in DGL. GraphRNN is similar to the DGMG model (see our tutorials here) in that it is a generative model trained on a sequence of nodes/edges. I guess there will be many shared building blocks between the two.
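The random-walk preprocessing step mentioned here can be as simple as the following pure-Python sketch over an adjacency dict (a networkx graph's adjacency would plug in the same way; the function name is illustrative):

```python
import random

def random_walk(adj, start, length, rng=random):
    """Uniform random walk of up to `length` steps over an adjacency dict.

    adj maps each node to a list of neighbors; the walk stops early at
    a node with no out-neighbors, as node2vec-style samplers do.
    """
    walk = [start]
    for _ in range(length):
        nbrs = adj[walk[-1]]
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

# A tiny chain graph: every node has at most one neighbor,
# so this particular walk is deterministic.
adj = {0: [1], 1: [2], 2: []}
print(random_walk(adj, 0, 5))  # → [0, 1, 2] (stops at the sink node)
```

The resulting node sequences would then feed a skip-gram-style trainer, which is the part that could live in the chosen backend framework.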

HuangZhanPeng commented Jan 1, 2019

@jermainewang Thank you for your response. In my actual work, node2vec's random walk on networkx is not feasible with large-scale data. If there is time, I really want to try implementing GraphRNN in DGL.

jermainewang (Member, Author) commented Jan 1, 2019

@HuangZhanPeng There is always time :). Please go ahead. If you encounter any problems during the implementation, feel free to raise questions on https://discuss.dgl.ai. The team is very responsive. As for the random walk, @BarclayII is surveying common random walk algorithms, and we might include APIs for them in our next release.

jermainewang changed the title from [Roadmap] v0.2 release to [Roadmap] v0.2 release checklist on Feb 18, 2019

jermainewang (Member, Author) commented Feb 18, 2019

Just updated the roadmap with a checklist. Our tentative date for this release is this month (02/28).

For all committers @zheng-da @szha @BarclayII @VoVAllen @ylfdq1118 @yzh119 @GaiYu0 @mufeili @aksnzhy @zzhang-cn @ZiyueHuang , please vote +1 if you agree with this plan.

BarclayII (Collaborator) commented Feb 19, 2019

I would rather reply with an emoji; a +1 reply would pollute the thread.

szha pinned this issue Feb 19, 2019

jermainewang (Member, Author) commented Feb 20, 2019

The release plan passed the vote.
