In the current implementation, the graph engine's client and server are co-located with the TF worker and TF parameter server.
I want to use one TF worker for training while multiple workers sample data simultaneously (for GPU training), but the current architecture imposes some restrictions on this. Are there any plans to decouple the TF parameter server from the distributed graph engine, to make the architecture more flexible?
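For illustration, here is a minimal sketch of the decoupled layout being requested, expressed as a TF 1.x cluster spec: the graph engine servers form their own job that scales independently of the single GPU training worker and the parameter server. The `graph_server` job name and all host addresses are hypothetical; graph-learn does not currently support launching its servers this way, which is exactly what this issue asks for.

```python
# Hypothetical decoupled deployment: one GPU worker trains while several
# graph-engine servers handle sampling, instead of co-locating the graph
# engine with every TF worker / parameter server.
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    # A single worker dedicated to GPU training.
    "worker": ["gpu-host:2222"],
    # TF parameter server(s), with no graph engine co-located.
    "ps": ["ps-host:2223"],
    # Graph-engine servers scaled out independently for sampling throughput.
    # "graph_server" is an illustrative job name, not a graph-learn role.
    "graph_server": ["sampler-0:3333", "sampler-1:3333", "sampler-2:3333"],
})

# Each process starts only the TF role it is assigned; the graph_server
# processes would be launched by the graph engine itself, not by TF.
server = tf.distribute.Server(cluster, job_name="worker", task_index=0)
```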