
Optimizer Design #4656

Merged: 5 commits merged into PaddlePaddle:develop on Oct 11, 2017

Conversation

@jacquesqiao (Member) commented on Oct 10, 2017:

issue: #4679

@jacquesqiao changed the title from "Optimizer on block" to "Optimizer Design" on Oct 10, 2017
```python
op related.
    """
    ...
    return update_op
```
Contributor:

When a user wants to update twice, the second update_op needs to trace the first update_op and all of the related update and backward ops. Maybe we need to write some guides to point this out.

Member Author (jacquesqiao):

OK, after discussing with @dzhwinter, this can be done in the current design, but it is not the most important thing to consider right now.

Member Author (jacquesqiao):

I have already made the backward interface public; users can call it directly multiple times to create more gradient operators in the graph.
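
As a rough usage sketch of that workflow (illustrative only; `SGDOptimizer`, `cost1`, and `cost2` are hypothetical names, and the method names follow the `create_backward_pass` / `create_optimization_pass` naming proposed later in this review):

```python
# Hypothetical sketch: calling the public backward interface twice appends
# two independent sets of gradient operators to the same block.
optimizer = SGDOptimizer(learning_rate=0.01)

params_grads_1 = optimizer.create_backward_pass(cost1)  # first set of gradient ops
params_grads_2 = optimizer.create_backward_pass(cost2)  # second set of gradient ops

# Each list of (parameter, gradient) pairs can then be turned into its own
# update operators.
update_ops_1 = optimizer.create_optimization_pass(params_grads_1)
update_ops_2 = optimizer.create_optimization_pass(params_grads_2)
```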

@@ -0,0 +1,85 @@
## Optimizer Design
In deeplearning system, `Optimizer` is used to optimize(minimize) loss thow updating a list of parameters.
Contributor:

thow is a typo?

Member Author (jacquesqiao):

fixed


### A typical training process:

1. run forward to calculate activation using data and parameter.
Collaborator:

I do not think this typical training process fits our current design.

Currently, we put every operator into one ProgramDesc; there are no three explicit running stages.

Member Author (jacquesqiao):

This is a general, abstract training process; no matter how complex the training process is, it is composed of these stages.

In our design, we also have functions like backward and optimize that put the related operators into the ProgramDesc. Here we just put these interfaces into Optimizer as a high-level API.
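
To illustrate this layering, here is a small, self-contained toy model of the idea (names such as `ToyBlock`, `ToyOptimizer`, and the string operator names are purely illustrative, not the real PaddlePaddle API):

```python
# Toy model: forward, gradient and update operators all accumulate in one
# block, and the Optimizer is just a thin wrapper over the backward and
# optimize helpers.
class ToyBlock(object):
    def __init__(self):
        self.ops = []

class ToyOptimizer(object):
    def __init__(self, block):
        self.block = block

    def create_backward_pass(self, loss):
        # Append gradient operators and return (parameter, gradient) pairs.
        self.block.ops.append("mse_grad_op")
        return [("w1", "w1@GRAD")]

    def create_optimization_pass(self, params_grads):
        # Append one update operator per parameter.
        update_ops = ["sgd_op(%s, %s)" % pg for pg in params_grads]
        self.block.ops.extend(update_ops)
        return update_ops

    def minimize(self, loss):
        return self.create_optimization_pass(self.create_backward_pass(loss))

block = ToyBlock()
block.ops.extend(["fc_op", "mse_op"])     # forward operators
ToyOptimizer(block).minimize(loss="cost")
print(block.ops)  # forward, gradient and update operators live in one block
```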


```python
class Optimizer(object):
    def _backward(loss):
```
Collaborator:

backward and update should be public.

Member Author (jacquesqiao):

done

3. User use the optimizer to `minimize` a certain `cost` thow updating parameters in parameter_list.

```python
opt = optimizer.minimize(cost, parameter_list=[w1, ...])
```
Collaborator:

opt should be a list.

Member Author (jacquesqiao):

done
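
For reference, a one-line sketch of the agreed behavior (names are illustrative): `minimize` returns a list of optimization operators, one per updated parameter, rather than a single op.

```python
# Illustrative only: the return value is a list of update operators.
opt_ops = optimizer.minimize(cost, parameter_list=[w1, w2])
assert isinstance(opt_ops, list)
```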

@@ -0,0 +1,85 @@
## Optimizer Design
In deeplearning system, `Optimizer` is used to optimize(minimize) loss thow updating a list of parameters.
Collaborator:

This design doc doesn't explain the challenge.

It looks to me that the challenge is:

The Problem

A PaddlePaddle program, or a block, is a sequence of operators operating on variables. A training program needs to do three kinds of work:

  1. the forward pass, which computes intermediate results and the cost(s),
  2. the backward pass, which derives gradients from the intermediate results and costs, and
  3. the optimization pass, which updates model parameters.

These passes rely on three kinds of operators:

  1. forward operators,
  2. gradient operators, and
  3. optimization operators.

It's true that users should be able to create all these operators manually by calling some low-level API, but it would be much more convenient if they could describe only the forward pass and let PaddlePaddle create the backward and optimization operators automatically.

In this design, we propose a high-level API that automatically derives the optimization pass and operators from the forward pass.
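
A small usage sketch of that convenience (illustrative only; `layer.data`, `layer.fc`, and `SGDOptimizer` are hypothetical names in the spirit of the snippets in this doc):

```python
# The user describes only the forward pass ...
x = layer.data("x")
label = layer.data("label")
hidden = layer.fc(x, size=128)
cost = layer.mse(hidden, label)

# ... and the high-level API derives the gradient and optimization
# operators automatically.
optimizer = SGDOptimizer(learning_rate=0.01)
update_ops = optimizer.minimize(cost)
```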

Member Author (jacquesqiao):

done

## Optimizer Design
In deeplearning system, `Optimizer` is used to optimize(minimize) loss thow updating a list of parameters.

### A typical training process:
Collaborator:

If the above proposed section ## The Problem is accepted, this paragraph of three bullets can be removed.

Member Author (jacquesqiao):

done


1. User write code to describe the network:

```python
    ...
```
Collaborator:

This Python program needs to be properly indented -- to the right of 1. in the above line.

Member Author (jacquesqiao):

done

```python
cost = layer.mse(hidden, labels)
```

the code above will generate forward operators in [block](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md).
Collaborator:

the => The
the code above => The above code snippet
will generate => creates

Member Author (jacquesqiao):

done

the code above will generate forward operators in [block](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md).


2. User create a Optimizer and set parameter list that it need to update.
Collaborator:

Either The user creates or Users create

Member Author (jacquesqiao):

done


2. User create a Optimizer and set parameter list that it need to update.

```python
    ...
```
Collaborator:

Correct code snippet indentation in the Markdown doc.

Member Author (jacquesqiao):

done


### What does optimizer do:

In PaddlePaddle, we use block of operators to describe computation. From the Python Interface we described above, we can see that `Optimizer` should add some operators to the computation block:
Collaborator:

block of operators => blocks of operators
we use => PaddlePaddle uses

Member Author (jacquesqiao):

done, removed


```python
class Optimizer(object):
    def _backward(loss):
```
Collaborator:

_backward => create_backward_pass

Member Author (jacquesqiao):

done

```python
        ...
        return variables

    def _update(var_list):
```
Collaborator:

_update => create_optimization_pass

Member Author (jacquesqiao):

done


1. User write code to describe the network:

```python
    ...
```
Contributor:

The pseudo code here is not formatted well.


```python
class Optimizer(object):
    def create_backward_pass(loss, parameter_list=None):
```

Reviewer:

Parameter and variable look interchangeable in the Python API. I am not sure whether they refer to the same concept.

```python
        This method simply combines calls `create_backward_pass()` and
        `create_optimization_pass()`.
        """
        vars_grads = create_backward_pass(loss)
```

Reviewer:

typo create_backward_pass(loss) => create_backward_pass(loss, parameter_list)

Member Author (jacquesqiao):

done
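
For clarity, a sketch of the method after this fix (the surrounding class and the bodies of the two passes are elided, following the pseudo-code style of the design doc):

```python
def minimize(loss, parameter_list=None):
    """
    Simply combines calls to `create_backward_pass()` and
    `create_optimization_pass()`. parameter_list is forwarded to the
    backward pass so that only the listed parameters receive gradient
    and update operators.
    """
    vars_grads = create_backward_pass(loss, parameter_list)
    return create_optimization_pass(vars_grads)
```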

@wangkuiyi merged commit 696874a into PaddlePaddle:develop on Oct 11, 2017
Member Author (jacquesqiao):

The reasons for using a uniform interface for Optimizer:

  1. when using parameter sharing with different optimizers.
  2. one network, different optimizers.

A sketch of the second case is shown below.
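
A hypothetical sketch of the second case (optimizer class names and parameters are illustrative):

```python
# Because every optimizer exposes the same minimize / create_backward_pass /
# create_optimization_pass interface, different parameter groups of one
# network can be updated by different optimizers.
sgd = SGDOptimizer(learning_rate=0.01)
adam = AdamOptimizer(learning_rate=0.001)

update_ops = sgd.minimize(cost, parameter_list=[w_embedding])
update_ops += adam.minimize(cost, parameter_list=[w_fc, b_fc])
```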

@jacquesqiao mentioned this pull request on Oct 21, 2017