This repository has been archived by the owner on Oct 15, 2019. It is now read-only.

Build for CPU only #2

Merged
merged 9 commits into from Dec 23, 2014

Conversation

xianyi
Contributor

@xianyi xianyi commented Dec 19, 2014

Now Minerva can be built for CPU only.

I added basic CPU implementations of SigmoidForward, ReluForward, and TanhForward.
Among the apps, minst_mlp works fine.

To-do:

  • other CPU implementations, including conv and the backward passes
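As a sketch of what such a basic CPU elementwise forward looks like (hypothetical function name and flat-vector interface for illustration; the PR's actual code operates on Minerva's DataList inputs/outputs, as the review snippet below shows):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical basic CPU sigmoid forward: out[i] = 1 / (1 + exp(-in[i])).
// A plain elementwise loop; no BLAS or CUDA needed.
void SigmoidForwardCpu(const std::vector<float>& in, std::vector<float>* out) {
  assert(out != nullptr);
  assert(in.size() == out->size() && "sigmoid forward size mismatch");
  for (std::size_t i = 0; i < in.size(); ++i) {
    (*out)[i] = 1.0f / (1.0f + std::exp(-in[i]));
  }
}
```

ReluForward and TanhForward follow the same pattern, with `std::max(0.0f, x)` and `std::tanh(x)` in place of the sigmoid expression.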

CHECK_EQ(inputs.size(), 1) << "sigmoid forward #inputs wrong";
CHECK_EQ(outputs.size(), 1) << "sigmoid forward #outputs wrong";

float * input_data = inputs[0].data();
Member

Better be float* data.

@hotpxl
Member

hotpxl commented Dec 21, 2014

Thanks for contributing the feature for building on CPU only. We currently don't have support for convolution operators on CPU, but that, of course, is something we want in the future. Although performance on CPU-only machines might be poor, we can still offer our easy-to-use interface.

As to contributing C++ code, please comply with the Google C++ Style Guide for easy maintenance and readability, especially the formatting part. I have put some comments on the files for your reference. I fully acknowledge the pain of following someone else's rules, but we will all benefit from a uniform coding convention in the end.

For the unit tests, I think it's better not to put macros inside them to cater to CPU or GPU instances. It would be better to create separate unit tests for CPU and GPU operators, and use a macro to stop the GPU tests from compiling if the machine does not have CUDA.

Thanks for your effort!

@@ -9,7 +9,9 @@ TEST(LeftConst, LeftConst) {
int m = 8;
int k = 100;
NArray a = NArray::Randn({m, k}, 0.0, 1.0);
#ifdef HAS_CUDA
Member

Could we do something like in unittest_elewise.cpp to create separate tests, and disable GPU ones using the macro HAS_CUDA?
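A minimal sketch of that suggestion, with plain functions standing in for the gtest cases (names are hypothetical; the real tests would contain NArray checks):

```cpp
#include <cassert>

// Suggested layout: one test per backend, with the GPU test excluded from
// compilation entirely when the machine has no CUDA, instead of #ifdef
// branches inside a shared test body.

// CPU test: always compiled and run.
bool TestLeftConstCpu() {
  // ... CPU NArray checks would go here ...
  return 1 + 1 == 2;  // placeholder check
}

#ifdef HAS_CUDA
// GPU test: compiled only when the build system defines HAS_CUDA.
bool TestLeftConstGpu() {
  // ... GPU NArray checks would go here ...
  return true;
}
#endif

void RunAllTests() {
  assert(TestLeftConstCpu());
#ifdef HAS_CUDA
  assert(TestLeftConstGpu());
#endif
}
```

This keeps each test body free of backend conditionals; the only `#ifdef` is around whole test cases.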

@xianyi
Contributor Author

xianyi commented Dec 21, 2014

@hotpxl,

Thank you for the suggestions. I will fix the coding style and unit test issues.

I plan to support convolution operators on CPU using CPU BLAS functions.
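One common way to map convolution onto BLAS (a sketch under that assumption, not Minerva's actual code) is im2col: unroll each k x k input patch into a column, so the convolution becomes one GEMM between the flattened filters and the patch matrix, which a CPU BLAS such as OpenBLAS can then execute:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical im2col for a single-channel h x w image with a k x k kernel,
// stride 1, no padding. Output has k*k rows and out_h*out_w columns; each
// column holds one unrolled patch, so conv = GEMM(filters, cols).
std::vector<float> Im2Col(const std::vector<float>& img, int h, int w, int k) {
  int out_h = h - k + 1;
  int out_w = w - k + 1;
  std::vector<float> cols(static_cast<std::size_t>(k) * k * out_h * out_w);
  for (int ky = 0; ky < k; ++ky) {
    for (int kx = 0; kx < k; ++kx) {
      for (int y = 0; y < out_h; ++y) {
        for (int x = 0; x < out_w; ++x) {
          // Row index is the position inside the kernel window;
          // column index is the output pixel the patch belongs to.
          cols[((ky * k + kx) * out_h + y) * out_w + x] =
              img[(y + ky) * w + (x + kx)];
        }
      }
    }
  }
  return cols;
}
```

The trade-off is extra memory for the patch matrix in exchange for doing all the arithmetic in one highly optimized GEMM call.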

@xianyi
Contributor Author

xianyi commented Dec 21, 2014

@hotpxl ,
Please review the patch; I have fixed the issues.

@hjk41
Member

hjk41 commented Dec 22, 2014

Should we have an "OpenBLAS" implementation instead of reusing the "basic" one? I think "basic" should be used only for validation and should not depend on any other library. What do you think?

HONG Chuntao
System Research Group
Microsoft Research Asia

@xianyi
Contributor Author

xianyi commented Dec 22, 2014

If you don't want basic to depend on any other library, including OpenBLAS, I can
just write a three-loop gemm implementation.
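A three-loop gemm in that spirit might look like the following (a hypothetical sketch, assuming row-major storage and C = A * B; correct but far slower than OpenBLAS):

```cpp
#include <vector>

// Naive GEMM sketch: C = A * B, with A (m x k), B (k x n), C (m x n),
// all row-major. No blocking, vectorization, or transposes; suitable only
// as a dependency-free "basic" reference for validation.
void NaiveGemm(const float* a, const float* b, float* c, int m, int k, int n) {
  for (int i = 0; i < m; ++i) {
    for (int j = 0; j < n; ++j) {
      float sum = 0.0f;
      for (int p = 0; p < k; ++p) {
        sum += a[i * k + p] * b[p * n + j];
      }
      c[i * n + j] = sum;
    }
  }
}
```

Such a reference kernel makes it easy to validate an OpenBLAS-backed implementation by comparing outputs on random inputs.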


@hjk41
Member

hjk41 commented Dec 23, 2014

Let's use hand-written functions for "basic" and create an "OpenBLAS" implementation.

@hotpxl hotpxl merged commit 8301a7e into dmlc:master Dec 23, 2014
@hotpxl hotpxl self-assigned this Dec 23, 2014
@hotpxl
Member

hotpxl commented Dec 23, 2014

Thanks!

3 participants