
Row convolution operation. #2373

Merged: 10 commits into PaddlePaddle:develop from row_conv, Jun 12, 2017

Conversation

@qingqing01 (Contributor) commented Jun 4, 2017

Fix #2228

Add a row convolution function.

  • CPU implementation.
  • GPU implementation.
  • Pass the test_LayerGrad unit test.
  • Pass the function comparison unit test.
  • Add Python API.
  • Add code annotations to both the C++ and Python interfaces.
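For reference, the DS2-style row (lookahead) convolution computes, for each time step and each channel, a weighted sum over the current and the following context - 1 steps of the same channel, truncated at the sequence end. Below is a minimal standalone CPU sketch of the forward pass; the function name and layout are hypothetical, not the PR's actual code:

#include <cstddef>

// Minimal sketch of row convolution for one sequence, assuming row-major
// in[steps][width], filter[context][width], out[steps][width].
// Each output element mixes the current and the next (context - 1) rows
// of its own column, truncated at the end of the sequence.
void rowConvForward(const float* in, const float* filter, float* out,
                    size_t steps, size_t width, size_t context) {
  for (size_t r = 0; r < steps; ++r) {
    for (size_t c = 0; c < width; ++c) {
      float sum = 0.0f;
      for (size_t t = 0; t < context && r + t < steps; ++t) {
        sum += filter[t * width + c] * in[(r + t) * width + c];
      }
      out[r * width + c] = sum;
    }
  }
}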

@luotao1 (Contributor) left a comment

Please add the row_conv documentation to layers.rst. Also, could you add the layer.multiplex documentation at the same time? (#2308)

@qingqing01 requested a review from pkuyym, June 5, 2017 07:59
@qingqing01 (Contributor, Author):

@luotao1 Added in 6e8c566. Thanks!

@hedaoyuan (Contributor) left a comment

Also, when implementing the compute kernels, please use templates as much as possible instead of the hard-coded real type; that will make it much easier to support float16 or other element types later.
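A rough shape of this suggestion, with hypothetical names (a per-column helper rather than the PR's actual kernel):

#include <cstddef>

// Templated per-column helper: `in` points at in[r * width + c] and `w` at
// the filter's column c; `stride` is the row width, `remaining` the number
// of rows left in the sequence. Writing it over T instead of `real` means
// float16 support later is just one more instantiation.
template <typename T>
T rowConvStep(const T* in, const T* w, size_t context, size_t remaining,
              size_t stride) {
  T sum = T(0);
  for (size_t t = 0; t < context && t < remaining; ++t) {
    sum += w[t * stride] * in[t * stride];
  }
  return sum;
}

// Explicit instantiations for the precisions in use today; a float16 type
// would slot in alongside these.
template float rowConvStep<float>(const float*, const float*, size_t, size_t, size_t);
template double rowConvStep<double>(const double*, const double*, size_t, size_t, size_t);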

CHECK_EQ(in.shape().ndims(), 2UL);
CHECK_EQ(out.shape().ndims(), 2UL);
CHECK_EQ(in.shape()[1], out.shape()[1]);
CHECK_EQ(in.shape()[0], out.shape()[0]);
Contributor:

The three lines 147-149 can be replaced with CHECK(in.shape() == out.shape());

Contributor (Author):

Done.

CHECK_EQ(in.shape().ndims(), 2UL);
CHECK_EQ(outGrad.shape().ndims(), 2UL);
CHECK_EQ(in.shape()[1], outGrad.shape()[1]);
CHECK_EQ(in.shape()[0], outGrad.shape()[0]);
Contributor:

CHECK(in.shape() == outGrad.shape());
CHECK(in.shape() == inGrad.shape());

Contributor (Author):

Done.

MatrixPtr wGrad = weight_->getWGrad();
size_t h = getInputValue(0)->getHeight();
size_t w = getInputValue(0)->getWidth();
outputs.addArg(
Contributor:

It would be better to split the inputGrad and weightGrad computations into two Functions; that avoids having to create an empty argument:

if (inGrad) {
  backwardInput(...);
}
if (wGrad) {
  backwardWeight(...);
}

Contributor (Author):

Added a TODO; will split it into two Functions in a follow-up.

resetOutput(height, width);

const auto startPos = getInput(0).sequenceStartPositions->getVector(useGpu_);
wDims_ = TensorShape({contexLength_, width});
Contributor:

wDims_ is the shape of weight_, right? Is contexLength_ != weight_.height_ here?

Contributor (Author):

Here contexLength_ == weight_.height_.

Contributor:

OK. Then building wDims from the weight attributes would be more readable: TensorShape({weight.height_, weight.width_})

Contributor (Author):

Done.


__shared__ real sw[BLOCK_H][BLOCK_W];

for (int i = tidy; i < context; i += blky) {
Contributor:

If context > 32, could adding an outer for loop here eliminate the need for KeRowConv2?
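A conceptual sketch of the suggested outer loop, in plain C++ standing in for the CUDA kernel (names hypothetical):

#include <algorithm>
#include <cstddef>

// Tile the context dimension in chunks of BLOCK_H so a single kernel covers
// any context length, instead of dispatching to a second kernel (KeRowConv2).
constexpr size_t BLOCK_H = 32;

float rowConvTiledStep(const float* in, const float* w, size_t context,
                       size_t remaining, size_t stride) {
  float sum = 0.0f;
  for (size_t base = 0; base < context; base += BLOCK_H) {
    size_t tile = std::min(BLOCK_H, context - base);
    // In the CUDA version, filter rows [base, base + tile) would be staged
    // into shared memory here before the accumulation below.
    for (size_t t = base; t < base + tile && t < remaining; ++t) {
      sum += w[t * stride] * in[t * stride];
    }
  }
  return sum;
}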

Contributor (Author):

In the paper the context is 19, and cases where it exceeds 32 are probably rare, so I split this into two kernels; that keeps each kernel's reads and writes relatively simple.

}

template <>
void RowConvGrad<DEVICE_TYPE_CPU>(const CpuMatrix& outG,
Contributor:

Computing filterG and inG in a single kernel doesn't improve performance here; it would be better to implement them as two separate kernel functions.

Contributor (Author):

Added a TODO; will change this in a follow-up PR.

// check
CHECK_EQ(2UL, inputs.size());
CHECK_EQ(1UL, outputs.size());
CHECK_EQ(outputs[0].getArgType(), ADD_TO);
Contributor:

The forward pass could also implement ASSIGN_TO, which would speed up inference.
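For illustration, the distinction being suggested, in plain C++ rather than the actual Paddle Function API:

// ADD_TO accumulates into an output buffer that already holds valid data;
// ASSIGN_TO overwrites it, so inference can skip pre-zeroing the buffer.
enum class ArgType { ASSIGN_TO, ADD_TO };

void storeResult(float* out, float value, ArgType type) {
  if (type == ArgType::ADD_TO) {
    *out += value;  // read-modify-write: depends on the old contents
  } else {
    *out = value;   // plain write: no dependency on the old contents
  }
}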

Contributor (Author):

Added two TODOs; will change this in a follow-up PR.

@xinghai-sun (Contributor) commented Jun 11, 2017

@qingqing01 There may be an issue here: in DS2, the row convolution needs to be applied after the RNNs, so the output type of its input layer is sequence, not dense vector.

Let's discuss whether it is necessary to develop a 2-D conv for the sequence type (the existing sequence conv is 1-D and has no stride); otherwise repeated conversions between sequence and dense_vector would be needed, which could hurt performance.

Also, is the lookahead conv kernel not supported in this implementation?

@qingqing01 (Contributor, Author) commented Jun 12, 2017

@xinghai-sun

> There may be an issue here: in DS2, the row convolution needs to be applied after the RNNs, so the output type of its input layer is sequence, not dense vector.

The row convolution implemented here takes sequence input and produces sequence output; see lines 52 and 54 of paddle/gserver/layers/RowConvLayer.cpp. In the network this layer is likewise used in the RNN part after the BlockExpand layer, so its input carries sequence information.

> Let's discuss whether it is necessary to develop a 2-D conv for the sequence type (the existing sequence conv is 1-D and has no stride); otherwise repeated conversions between sequence and dense_vector would be needed, which could hurt performance.

Since the row conv input already supports the sequence type, this shouldn't be necessary.

> Also, is the lookahead conv kernel not supported in this implementation?

I don't quite understand what this refers to.

@qingqing01 (Contributor, Author) commented Jun 12, 2017

> Also, is the lookahead conv kernel not supported in this implementation?

You mean the filter? A learnable filter (weight) is supported here: https://github.com/PaddlePaddle/Paddle/pull/2373/files#diff-7017557ab2e2d717f6816645d86eee5cR30. Users can configure the size of the conv kernel; the configured context_len equals lookahead_step + 1. Comments and a note about this are also included here: https://github.com/PaddlePaddle/Paddle/pull/2373/files#diff-5118293a2b796585b95af1e799956915R5607.

@xinghai-sun (Contributor):

Got it. Thanks @qingqing01.

@qingqing01 merged commit 1b8d2e6 into PaddlePaddle:develop, Jun 12, 2017
@qingqing01 deleted the row_conv branch, July 7, 2017 13:34