Problem description
When I run the GraphSAGE dist_train.py (Cora data), the worker memory usage keeps increasing.
When I train the model with our own data, which is a larger graph, the memory usage grows even faster.
I suspect there may be a memory leak — perhaps some objects from previous iterations are not being freed? Any advice or suggestions would be greatly appreciated.
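To narrow this down, a small diagnostic helper can show whether Python-level allocations grow steadily across training steps. The sketch below uses the standard-library `tracemalloc` module; `find_growth` and the leaky `train_step` are hypothetical names for illustration, not part of graph-learn. (A common cause of this symptom in TF1-style scripts is building new graph ops inside the training loop rather than before it.)

```python
import tracemalloc


def find_growth(train_step, warmup=5, iters=20, top=5):
    """Run train_step repeatedly and report allocation sites that grew.

    warmup iterations are run first so one-time caches don't show up
    as leaks; only growth between the two snapshots is reported.
    """
    for _ in range(warmup):
        train_step()
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    for _ in range(iters):
        train_step()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    # Differences sorted by allocation growth, largest first.
    return after.compare_to(before, "lineno")[:top]


# Example: a deliberately leaky step that appends to a global list,
# standing in for per-iteration state that is never released.
history = []
stats = find_growth(lambda: history.append(bytearray(1024)))
for stat in stats:
    print(stat)
```

If the same allocation site dominates the diff and keeps growing as `iters` increases, that line is a good place to look. Note that `tracemalloc` only sees Python allocations; memory held inside TensorFlow's C++ runtime (e.g. an ever-growing graph) would instead show up in process RSS but not here.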
Environment information for cora data
docker image: registry.cn-zhangjiakou.aliyuncs.com/pai-image/graph-learn:v0.1-cpu
code path: /workspace/graph-learn/examples/tf/graphsage (in docker container)
config: 2 PS, 2 workers / batch size: 32 / epochs: 40000000