add rnn op interfaces #2775

Closed · wants to merge 64 commits

Changes from 4 commits

Commits (64)
c418dac
add rnn op interfaces
Superjomn Jul 7, 2017
6042795
add Run
Superjomn Jul 7, 2017
13d8ca9
rename state -> memory
Superjomn Jul 7, 2017
a645ae6
change state -> memory
Superjomn Jul 7, 2017
8640f96
make compilable
Superjomn Jul 8, 2017
d4cde51
add .cc
Superjomn Jul 8, 2017
6e99289
init test
Superjomn Jul 8, 2017
63b5841
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
qingqing01 Jul 8, 2017
08f69f6
Merge branch 'develop' of github.com:PaddlePaddle/Paddle into rnnimpl
Superjomn Jul 10, 2017
007ca1e
add op fake implementation
Superjomn Jul 10, 2017
2538b2f
add CreateStepNet and CreateScopes implementation.
qingqing01 Jul 10, 2017
5eb87f0
add TODO list
luotao1 Jul 10, 2017
4dcb02e
Merge branch 'rnnimpl' of github.com:Superjom/Paddle into rnnimpl
Superjomn Jul 10, 2017
ca53f3a
init memory attributes.
qingqing01 Jul 10, 2017
671cc26
Merge branch 'rnnimpl' of https://github.com/Superjom/Paddle into fea…
qingqing01 Jul 10, 2017
1e48cc8
add LinkMemories
Superjomn Jul 10, 2017
e0cbcd0
Merge branch 'rnnimpl' of github.com:Superjom/Paddle into rnnimpl
Superjomn Jul 10, 2017
f7916a6
add PlainNet fake implementation
Superjomn Jul 10, 2017
089c448
Use std::shared_ptr<Scope> in the OpRunContext.
qingqing01 Jul 10, 2017
bffd11e
add test
Superjomn Jul 10, 2017
c7947de
disable mutable_data
Superjomn Jul 10, 2017
94766b6
Merge branch 'rnnimpl' of github.com:Superjom/Paddle into rnnimpl
Superjomn Jul 10, 2017
6dca711
finist segmentInput function
luotao1 Jul 10, 2017
eabf1bf
Merge branch 'develop' of github.com:PaddlePaddle/Paddle into rnnimpl
Superjomn Jul 11, 2017
d210b0b
enable mutable_data with a trick
Superjomn Jul 11, 2017
6674fee
RNNOp test.
qingqing01 Jul 11, 2017
778ebb4
enable LinkMemories with mutable_data
Superjomn Jul 11, 2017
c60ed35
update
qingqing01 Jul 11, 2017
8642b27
update SegmentInput function with comments
luotao1 Jul 11, 2017
b0938ed
Merge branch 'develop' of github.com:PaddlePaddle/Paddle into rnnimpl
Superjomn Jul 11, 2017
3921fbb
Merge branch 'rnnimpl' of github.com:Superjom/Paddle into rnnimpl
Superjomn Jul 11, 2017
244fe51
create rnn op and step net in unit test.
qingqing01 Jul 11, 2017
020c189
Merge branch 'rnnimpl' of https://github.com/Superjom/Paddle into rnn…
luotao1 Jul 11, 2017
8e70b37
finish ConcatOutput function
luotao1 Jul 11, 2017
4150fa7
Merge branch 'rnnimpl' of github.com:Superjom/Paddle into rnnimpl
Superjomn Jul 12, 2017
1584414
Merge branch 'develop' of github.com:PaddlePaddle/Paddle into rnnimpl
Superjomn Jul 12, 2017
ce802c0
reformat inputs and attributes
Superjomn Jul 12, 2017
a883b4c
Refine unit test.
qingqing01 Jul 12, 2017
b98cae4
Merge branch 'rnnimpl' of https://github.com/Superjom/Paddle into fea…
qingqing01 Jul 12, 2017
a81be58
Refine unit test.
qingqing01 Jul 12, 2017
acde9b7
modify inlinks.
qingqing01 Jul 12, 2017
638384e
update from develop branch.
qingqing01 Jul 12, 2017
82464f5
add OpDesc to Net
Superjomn Jul 12, 2017
bbcc149
Merge branch 'netimpl' into rnnimpl
Superjomn Jul 12, 2017
c92ce74
Merge branch 'develop' into rnnimpl
luotao1 Jul 12, 2017
5c5d890
fix bug and update unit test.
qingqing01 Jul 12, 2017
522445b
resolve conflict.
qingqing01 Jul 12, 2017
01f20be
Merge branch 'rnnimpl' of github.com:Superjom/Paddle into rnnimpl
Superjomn Jul 12, 2017
08003de
Merge branch 'rnnimpl' of github.com:Superjom/Paddle into rnnimpl
Superjomn Jul 12, 2017
a6483e8
move step scopes from inputs to outputs
Superjomn Jul 12, 2017
7b1d123
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
qingqing01 Jul 12, 2017
bcd03bf
fix merge conflict, update SegmentInput function
luotao1 Jul 13, 2017
de319bb
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
qingqing01 Jul 14, 2017
0a4a502
Merge branch 'develop' into rnnimpl
luotao1 Jul 14, 2017
e64b5d3
add RecurrentOpProtoAndCheckerMaker.
qingqing01 Jul 14, 2017
e700bf6
Merge branch 'rnnimpl' of https://github.com/Superjom/Paddle into fea…
qingqing01 Jul 14, 2017
f525390
clean the codes
luotao1 Jul 14, 2017
3a27b02
Abstract GetStepScopes and GetMaxSeqLen function
luotao1 Jul 14, 2017
aede869
refine LinkMemories
luotao1 Jul 14, 2017
45682d2
Refine code and add some comments.
qingqing01 Jul 15, 2017
497c7ff
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
qingqing01 Jul 15, 2017
fc5acee
add backward core
Superjomn Jul 15, 2017
14dd843
update for develop branch.
qingqing01 Jul 15, 2017
3c15641
Merge branch 'rnnimpl' of https://github.com/Superjom/Paddle into fea…
qingqing01 Jul 15, 2017
paddle/framework/recurrent_network_op.h (141 additions, 0 deletions)
@@ -0,0 +1,141 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once

#include "paddle/framework/enforce.h"
#include "paddle/framework/scope.h"
#include "paddle/framework/variable.h"

namespace paddle {
namespace framework {

// Fake interfaces that have not been implemented by other modules yet.
struct OpRunContext {
Scope* scope;
};

// TODO: replace this with Net's proto.
struct NetDesc {
std::string name;
};

class OperatorBase {
public:
virtual ~OperatorBase() {}
Collaborator:

It looks to me that the constructor needs a paddle::framework::proto::OperatorDesc parameter, so that it would be possible to call InferShape, which saves the sizes of inputs/outputs into the desc. Only then would we have all the information necessary for calling OperatorBase::Run:

class OperatorBase {
 public:
  OperatorBase(const proto::OperatorDesc& desc) : desc_(desc) {}
  virtual void Run(OpRunContext* context) const = 0;

 protected:
  virtual void InferShape(const Scope* scope) const = 0; // needs to read from and write to desc_

  proto::OperatorDesc desc_;
};

So the information in proto::OperatorDesc propagates along the path:

Operator's constructor 
  ① ↓
OperatorBase::desc_  → Operator's Run
  ②↓ ↑③            ④
Operator's InferShape

@Superjom @reyoung @jacquesqiao

Member:

In the new design of Operator, the OpDesc will be stored in the Op, and InferShape can get the information from the scope, but it seems that it does not need to store the shape into the desc.

Collaborator (@wangkuiyi, Jul 7, 2017):

@jacquesqiao You are right.

The first clue about input/output sizes is in the training data instances, and we get an instance only when we do training, i.e., when we call the operator's Run.

Should we just remove InferShape and let each operator define its own shape-inference methods, i.e., one method per output, so as to shorten the code in its Run method, like this:

template <typename Context> class MyOperator;

template <>
class MyOperator<GPUContext> : public OperatorBase {
 public:
  MyOperator(const proto::OperatorDesc& desc) : OperatorBase(desc) {}
  
  virtual void Run(OpRunContext* ctx) const {
    cudnnGemm(
      ctx->cudnnHandle,
      Output(0, ctx)->GetMutable<Tensor>(Output0Size(ctx))->mutable_data(),
      Input(0, ctx)->Get<Tensor>()->data(),
      Input(1, ctx)->Get<Tensor>()->data(),
    );
  }

 private:
  DDim Output0Size(OpRunContext* ctx) const { ...}
  DDim Output1Size(OpRunContext* ctx) const { ...}
};

virtual void Run(OpRunContext* context) const = 0;
virtual void InferShape(const Scope* scope) const = 0;
Collaborator:

what does InferShape do?

Collaborator:

I think the purpose of InferShape is to infer the sizes of the inputs/outputs from those whose sizes we already know.

Member:

InferShape will set the output variable dim according to the input variable dim.

Contributor Author (@Superjomn, Jul 8, 2017):

RNNOp.InferShape will just call its step net's InferShape, and will

  • check the input variables'/tensors' shapes and raise an error if any are wrong
  • update all output variables'/tensors' shapes according to this mini-batch of input

It is offered as a public method because we want to keep checking shapes dynamically while the user is adding operators (a minimal sketch of the idea follows below).
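To make that concrete, here is a small, self-contained sketch using toy stand-in types (ToyScope, ToyStepNet, ToyRecurrentOp are illustrative names, not classes from this PR): the recurrent op's InferShape simply calls the step net's InferShape once per step scope, which checks the input dims and sets the output dims.

#include <cassert>
#include <map>
#include <string>
#include <vector>

// Toy stand-ins for illustration only; not the real paddle::framework types.
using Dims = std::vector<int>;
struct ToyScope { std::map<std::string, Dims> var_dims; };

// A toy step net: one fc-like op whose output dims depend on its input dims.
struct ToyStepNet {
  void InferShape(ToyScope* step_scope) const {
    const Dims& x = step_scope->var_dims.at("x");  // e.g. {batch, in_size}
    assert(x.size() == 2);                         // raise an error if wrong
    step_scope->var_dims["h"] = {x[0], 16};        // set the output's dims
  }
};

// A toy recurrent op: delegates shape inference to the step net, per step.
struct ToyRecurrentOp {
  ToyStepNet step_net;
  void InferShape(std::vector<ToyScope>* step_scopes) const {
    for (ToyScope& s : *step_scopes) step_net.InferShape(&s);
  }
};

int main() {
  std::vector<ToyScope> steps(3);
  for (ToyScope& s : steps) s.var_dims["x"] = {4, 8};  // this mini-batch's input
  ToyRecurrentOp rnn;
  rnn.InferShape(&steps);
  assert(steps[0].var_dims["h"] == (Dims{4, 16}));
  return 0;
}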


protected:
std::vector<std::string> inputs_;
std::vector<std::string> outputs_;
Contributor Author (@Superjomn, Jul 7, 2017):

add attributes

};

class RecurrentGroupForwardOp {
Collaborator:

RecurrentGroupForwardOp => RecurrentOp

Contributor Author:

good, short enough.

the backward op's name?

  • RecurrentBackwardOp
  • or RecurrentGradientOp ?

public:
RecurrentGroupForwardOp(NetDesc& net_desc)
: name_(net_desc.name),
net_name_(net_desc.name + "__net__"),
step_scopes_name_(net_desc.name + "__step_scopes_") {}

virtual void InferShape(const Scope* scope) = 0;
/*
* Forward run the RNN.
*
* NOTE the context's scope is not given until `Run` is called, so the step scopes'
* father should be set/updated in this method.
*/
virtual void Run(OpRunContext* context) const {
Collaborator:

should be in .cpp

Contributor Author:

Yes, will move to .cpp later.

We are working on a simple implementation to verify the whole process and will give a version soon.

auto scope = context->scope;

Variable* net = scope->GetVariable(net_name_);
if (net == nullptr) {
BuildStepNet(scope);
net = scope->GetVariable(net_name_);
}
PADDLE_ENFORCE(net);

// expand lazily.
CreateScopes(scope);
ScatterInLinks(scope);
PrepareMemories(scope);
Variable* step_scopes = scope->GetVariable(step_scopes_name_);
PADDLE_ENFORCE(step_scopes);

// forward
for (Scope* step_scope : step_scopes->GetMutable<std::vector<Scope*>>()) {
net->Run(step_scope);
}

// prepare outputs
GatherOutLinks(scope);
}

protected:
/*
* Prepare inputs for each stepnet.
*/
void ScatterInLinks(Scope* scope);
Collaborator:

ScatterInLinks => SegmentInputs. Let us use accurate English wording.

Contributor Author:

done
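For illustration, a toy, self-contained sketch of what SegmentInputs is expected to do, written over plain std::vector rather than this PR's Tensor/Scope machinery: a sequence laid out as [seq_len, batch, dim] is sliced along the time dimension so that each step gets its own [batch, dim] piece; GatherOutLinks below would be the inverse.

#include <cassert>
#include <vector>

using Step = std::vector<float>;  // one step's data: batch * dim values

// Split a flat [seq_len, batch, dim] sequence into one slice per step.
std::vector<Step> SegmentInputs(const std::vector<float>& seq,
                                int seq_len, int batch, int dim) {
  const int step_size = batch * dim;
  assert(static_cast<int>(seq.size()) == seq_len * step_size);
  std::vector<Step> per_step(seq_len);
  for (int t = 0; t < seq_len; ++t) {
    per_step[t].assign(seq.begin() + t * step_size,
                       seq.begin() + (t + 1) * step_size);
  }
  return per_step;
}

int main() {
  // seq_len = 2, batch = 1, dim = 3: two steps, [1,2,3] and [4,5,6]
  std::vector<float> seq = {1, 2, 3, 4, 5, 6};
  auto steps = SegmentInputs(seq, 2, 1, 3);
  assert(steps.size() == 2 && steps[1][0] == 4.f);
  return 0;
}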


/*
* Process the outputs of the step nets and merge them into variables.
*/
void GatherOutLinks(Scope* scope);
Collaborator:

GatherOutLinks => ConcatenateOutputs


/*
* Build a `Net` which is shared across all steps.
*/
void BuildStepNet(Scope* scope);
Collaborator:

BuildStepNet => CreateStepNet


/*
* Create a scope for each step, the context's scope is shared across all
* the step scopes as the father scope. The step scopes will be stored in
* the father scope as a variable.
*/
void CreateScopes(Scope* scope);
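As a toy illustration of the scope layout the comment above describes (ToyScope is a stand-in, not the real Scope class): one child scope is created per step, each with the context's scope as its father, and the whole vector of children is kept by that father scope.

#include <memory>
#include <vector>

// Toy stand-in for illustration only; not the real paddle::framework::Scope.
struct ToyScope {
  ToyScope* father = nullptr;
  std::vector<std::unique_ptr<ToyScope>> step_scopes;  // kept by the father

  void CreateScopes(int max_seq_len) {
    while (static_cast<int>(step_scopes.size()) < max_seq_len) {  // expand lazily
      step_scopes.emplace_back(new ToyScope);
      step_scopes.back()->father = this;  // the context's scope is the father
    }
  }
};

int main() {
  ToyScope context;
  context.CreateScopes(10);  // e.g. the max sequence length of this mini-batch
  return context.step_scopes.size() == 10 ? 0 : 1;
}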

/*
* Prepare steps' states and relations.
*/
void PrepareMemories(Scope* scope);

protected:
/*
* these are defined in OperatorBase
*
* std::vector<std::string> inputs_;
* std::vector<std::string> outputs_;
*/

// Memory of an RNN (same as the role of `Memory` in PaddlePaddle)
struct MemoryAttr {
// name of current state variable
std::string var;
// name of previous step's state variable
std::string pre_var;
// name of the variable used to init a state, which is stored in the context's
// scope.
std::string boot_var;
};

std::vector<MemoryAttr> memories_;
std::string name_;
Contributor:

Remove this name.

Contributor:

Done
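For context on the MemoryAttr fields above, a toy, self-contained sketch of how var, pre_var, and boot_var are meant to relate across step scopes (this is not the PR's LinkMemories implementation; it copies values where the real op would share the underlying tensors):

#include <cassert>
#include <map>
#include <string>
#include <vector>

struct MemoryAttr {
  std::string var;       // current step's state variable
  std::string pre_var;   // previous step's state variable
  std::string boot_var;  // initial state, stored in the father scope
};

using ToyScope = std::map<std::string, std::vector<float>>;  // name -> values

// At step t, pre_var comes from step t-1's var; at step 0, from boot_var.
void LinkMemory(const ToyScope& father, std::vector<ToyScope>& steps,
                size_t t, const MemoryAttr& mem) {
  steps[t][mem.pre_var] =
      (t == 0) ? father.at(mem.boot_var) : steps[t - 1].at(mem.var);
}

int main() {
  MemoryAttr mem{"h", "h@pre", "h@boot"};
  ToyScope father{{"h@boot", {0.f}}};
  std::vector<ToyScope> steps(3);
  for (size_t t = 0; t < steps.size(); ++t) {
    LinkMemory(father, steps, t, mem);                      // wire up the state
    steps[t][mem.var] = {steps[t][mem.pre_var][0] + 1.f};   // pretend the step net ran
  }
  assert(steps[2].at(mem.var)[0] == 3.f);
  return 0;
}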


const std::string net_name_;
const std::string step_scopes_name_;
};

class RecurrentGroupBackwardOp;
} // namespace framework
} // namespace paddle