
Add distributed lookup table design #9075

Conversation

jacquesqiao
Member

No description provided.

@jacquesqiao jacquesqiao changed the title Add distributed lookup table design [wip]Add distributed lookup table design Mar 14, 2018
@wangkuiyi
Collaborator

Thanks to @jacquesqiao for this design. After talking with Xu laoshi, I put our comments in jacquesqiao#4. Please review it. Thanks!

Update distributed lookup table design doc
@jacquesqiao jacquesqiao changed the title [wip]Add distributed lookup table design Add distributed lookup table design Mar 15, 2018
@typhoonzero
Contributor

Related: #9068

@typhoonzero typhoonzero left a comment

Basically LGTM, just some questions.


<!--
Note: please update the following URL when update this digraph.
<img src='https://g.gravizo.com/svg?
Contributor

One awesome tool for pasting figures!


- Pro: parameter servers do not retrieve W(x) from the storage
service, thus saves half network communication.
- Con: the storage service needs to be able to run the optimization
Contributor

Does this mean we actually have two kinds of servers during training:

  1. the original parameter server
  2. a storage server that can run optimization

Member Author

Yes. The storage server that can run optimization will only handle the large-scale embedding table; other parameters will still use the Fluid optimization operators.

Collaborator

Yes, if we go this way. But let us go the other way first.

### Storage Service Doesn't Optimize

In this design, we use highly-optimized distributed storage, e.g.,
memcached, as the storage service, and we run the optimization
@jacquesqiao jacquesqiao Mar 15, 2018

@wangkuiyi After discussing with @ldmod LiDong, we think that a standalone distributed KV service like memcached may not be a good solution. It is better for a parameter and its corresponding optimization to happen in the same place. Large-scale model training needs asynchronous updates, and in that setting every optimizer must update the latest parameter, even when the gradient was computed from an old one. With a standalone KV service, we would need to lock a parameter while some optimizer is updating it. If the parameter and the optimization op live in the same place, the solution becomes very simple, for example, using one thread to do the optimization.
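The single-writer idea described here can be sketched as follows. This is an illustrative Python sketch, not Fluid code; the class and method names are hypothetical. One thread owns all updates to a shard, so asynchronous trainers can push gradients without ever locking a parameter row:

```python
import queue
import threading

class ShardOptimizer:
    """One thread owns all updates to this shard's rows, so no per-row
    lock is needed even under asynchronous training (hypothetical sketch)."""

    def __init__(self, lr=0.1):
        self.rows = {}              # row id -> parameter value
        self.grads = queue.Queue()  # gradients pushed by trainers
        self.lr = lr
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            item = self.grads.get()
            if item is None:        # shutdown signal
                break
            row_id, grad = item
            # SGD applied to the *latest* parameter value, even if the
            # gradient was computed against an older snapshot.
            self.rows[row_id] = self.rows.get(row_id, 0.0) - self.lr * grad
            self.grads.task_done()

    def push_grad(self, row_id, grad):
        """Called concurrently by trainer RPC handlers; no lock needed."""
        self.grads.put((row_id, grad))

    def join(self):
        """Wait until all queued gradients have been applied."""
        self.grads.join()
```

Because only `_run` ever writes to `self.rows`, the update is serialized per shard without any explicit locking; a standalone KV service would need a lock (or CAS) around each read-modify-write instead.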

Member Author

But still, we have two options: one is to use the current optimization operators, with the parameter maintained by the pserver; the other is to use a distributed storage service that can do optimization.

Contributor

Discussed with @Yancey1989 yesterday; we agree with this idea. Mixing the two servers (the original pserver and the embedding-table pserver) is a good way to embed an external parameter server:

  1. Add design doc for lookup remote table in Fluid #9068 describes how to implement this in open source.
  2. Embed the Abacus parameter server for advanced industrial model training.

We can carry on these two methods at the same time, so that both open-source users and industry users have their choice.

Contributor

With a distributed storage service, the pserver would also have to communicate with the storage, which doubles the traffic compared to using an embedding-table pserver. I agree with @typhoonzero that we can carry on these two methods at the same time.

Collaborator

Let us have a baseline solution as early as possible. @jacquesqiao @ldmod.

Member Author

@wangkuiyi I agree. We can implement a baseline solution without Memcached.

@reyoung
Collaborator

reyoung commented Mar 15, 2018

How to implement sparse regularization?


## Conclusion

Let us do the "storage service does not optimize" solution first, as a
@helinwang helinwang Mar 15, 2018

Sorry I don't understand why we need to use memcached given that our current parameter server implementation should be faster in key lookup.

memcached is a general purpose in-memory key value store, from its website description:

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

We only need a special-case in-memory key-value store (few keys, large values). Currently our parameter server (the recv operator in Fluid) does exactly this. In terms of key lookup speed, our parameter server should be faster than, or at least as fast as, memcached.
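The "few keys, large values" case can be served by a plain in-process hash map; the following is a minimal illustrative sketch (the class and method names are hypothetical, not the actual recv-op code):

```python
class DenseParamStore:
    """In-memory store for a parameter server holding few keys with large
    values, e.g. whole parameter blocks (illustrative sketch only)."""

    def __init__(self):
        self.blocks = {}  # parameter name -> large block of floats

    def put(self, name, block):
        self.blocks[name] = block

    def get(self, name):
        # O(1) key lookup; with few keys, the time is dominated by moving
        # the large value over the network, not by the hash lookup itself.
        return self.blocks[name]
```

With only a handful of keys, a general-purpose cache like memcached adds serialization and an extra network hop without making this lookup any faster.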

The design doc mentioned:

Let us do the "storage service does not optimize" solution first

Our parameter server already does this (and does optimization as well). Why do we have to make another implementation with memcached; is it for performance reasons?

Member Author

@helinwang I am not so clear about the current table lookup implementation. For a large-scale lookup table, the keys may be sparse IDs scattered over a very large range, so we need a key-value lookup module.
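For instance (an illustrative sketch, not the actual Fluid implementation), feature IDs hashed into a 64-bit space cannot index a dense array, so the table has to be a hash map with lazily created rows:

```python
import random

class SparseEmbeddingTable:
    """Hash-map backed embedding table for IDs scattered over a huge range
    (e.g. 64-bit feature hashes); rows are created lazily on first lookup.
    Names here are illustrative, not Fluid's actual API."""

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.rng = random.Random(seed)
        self.table = {}  # feature id -> embedding row (list of floats)

    def lookup(self, ids):
        rows = []
        for i in ids:
            if i not in self.table:
                # Lazy init: a dense array over the id range would be
                # infeasible when ids can span 2**64 distinct values.
                self.table[i] = [self.rng.uniform(-0.1, 0.1)
                                 for _ in range(self.dim)]
            rows.append(self.table[i])
        return rows
```

Only the rows actually referenced by training data ever get materialized, which is what makes the "discrete keys in a very big range" case tractable.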

@helinwang helinwang dismissed their stale review March 16, 2018 18:21

Will discuss offline with @jacquesqiao

@jacquesqiao jacquesqiao merged commit d7d0c1e into PaddlePaddle:develop Mar 19, 2018
distributed lookup table automation moved this from To do to Done Mar 19, 2018
@jacquesqiao
Member Author

Will issue a detailed design in another PR.

@jacquesqiao jacquesqiao mentioned this pull request Mar 19, 2018