ConcatTable Identity and CAddTable #55

Merged: 2 commits merged into intel-analytics:master on Nov 3, 2016

Conversation

qiuxin2012 (Contributor):

Equivalent of the eltwise layer (sum) in Caffe.
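For context, a minimal usage sketch of the new layer as a Caffe eltwise-SUM counterpart. The package paths and imports follow the current BigDL layout and are an assumption; the PR-era packages may have differed:

```scala
import com.intel.analytics.bigdl.nn.CAddTable
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.utils.T

// Pack two tensors into a Table and sum them element-wise,
// mirroring Caffe's Eltwise layer with operation = SUM.
val a = Tensor[Float](2, 3).fill(1.0f)
val b = Tensor[Float](2, 3).fill(2.0f)

val layer = CAddTable[Float]()      // inplace = false by default
val sum = layer.forward(T(a, b))    // every element equals 3.0
```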

```scala
output = if (inplace) {
  input.get[Tensor[T]](1).get
} else {
  val input1 = input.get[Tensor[T]](1).get
  // ... (hunk continues in the diff)
```
Reviewer (Contributor): You can use Table.apply here. Same in other places.
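A minimal sketch of the suggested change, assuming Table exposes both get[T](key): Option[T] and apply[T](key): T, as in the current codebase:

```scala
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.utils.{T, Table}

val input: Table = T(Tensor[Float](2, 2).rand())

// As written in the diff: get returns an Option that must be unwrapped.
val viaGet: Tensor[Float] = input.get[Tensor[Float]](1).get

// As suggested: Table.apply fetches the element directly.
val viaApply: Tensor[Float] = input[Tensor[Float]](1)
```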

```scala
  i += 1
}

while (i <= gradInput.length()) {
  // ... (hunk continues in the diff)
```
Reviewer (Contributor): What's this line for? Shouldn't gradInput.length == input.length???
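A plausible reading, not confirmed in the thread: gradInput is a reused Table, so a previous call with a longer input can leave stale entries behind, and this loop clears them. A hypothetical sketch of that pattern (Table.remove dropping the entry at a given index is assumed):

```scala
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.utils.Table

// Hypothetical helper: shrink a reused gradInput table so it holds no
// more entries than the current input table.
def trimToInputSize(gradInput: Table, input: Table): Unit = {
  while (gradInput.length() > input.length()) {
    gradInput.remove[Tensor[Float]](gradInput.length())
  }
}
```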

```scala
class CAddTable[@specialized(Float, Double) T: ClassTag](val inplace: Boolean = false)(
  implicit ev: TensorNumeric[T]) extends Module[Table, Tensor[T], T] {

  gradInput = T()
```
Reviewer (Contributor): make it null?
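A sketch of what the comment seems to propose: start from null and allocate the table lazily, so nothing is built for modules that never run backward. The helper name below is invented for illustration:

```scala
import com.intel.analytics.bigdl.utils.{T, Table}

// Illustrative only: lazy allocation instead of eagerly constructing T().
var gradInput: Table = null

def ensureGradInput(): Table = {
  if (gradInput == null) gradInput = T()
  gradInput
}
```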

```scala
class ConcatTable[T : ClassTag](implicit ev: TensorNumeric[T])
  extends Container[Activities, Activities, T] {

  output = T()
```
Reviewer (Contributor): null??

```scala
var i = 0
while (i < modules.length) {
  val currentOutput = modules(i).updateOutput(input)
  if (!output.toTable().contains(i + 1)) {
    // ... (hunk continues in the diff)
```
Reviewer (Contributor): put output.toTable into a variable?
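A minimal sketch of the hoisting being suggested, so output.toTable() is evaluated once instead of on every iteration. The insert call and the surrounding names are assumptions drawn from the diff context:

```scala
// Hypothetical refactor: bind output.toTable() to a local once.
val outputTable = output.toTable()
var i = 0
while (i < modules.length) {
  val currentOutput = modules(i).updateOutput(input)
  if (!outputTable.contains(i + 1)) {
    outputTable.insert(i + 1, currentOutput) // assumes Table.insert(index, elem)
  }
  i += 1
}
```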

```scala
} else {
  var i = 1
  while (i <= out.toTable().length()) {
    addTable(out.toTable().get[Activities](i).get, in.toTable().get[Activities](i).get)
```
Reviewer (Contributor): apply (the same Table.apply suggestion as above)

```scala
if (in.isInstanceOf[Tensor[T]]) {
  in.toTensor[T]().clone()
} else {
  val out = T()
  // ... (hunk continues in the diff)
```
Reviewer (Contributor): use table clone method
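A sketch of the suggestion, assuming Table has a clone() that copies its entries, which lets the hand-rolled table copy collapse into one branch. The types are those that appear in the diff; the helper name is invented:

```scala
// Hypothetical sketch: defer to each type's own clone method.
def cloneActivities(in: Activities): Activities = {
  if (in.isInstanceOf[Tensor[_]]) {
    in.toTensor[Float]().clone()
  } else {
    in.toTable().clone() // assumes Table.clone() copies the entries
  }
}
```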

```scala
  }
}

def backward(method: String, input: Activities, gradOutput: Activities,
```
Reviewer (Contributor): Can we avoid overriding this???

Reply (Contributor): It's not an override, but it is more confusing.

```scala
    scale : Double = 1.0) : Activities = {

  val isTable = input.isInstanceOf[Table]
  val wasTable = gradInput.isInstanceOf[Table]
```
Reviewer (Contributor): rename the variables to isInputTable/isGradInputTable. The name is silly!

```scala
}
var i = 0
while (i < modules.length) {
  method match {
    // ... (hunk continues in the diff)
```
Reviewer (Contributor): don't do this in Java/Scala!
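The objection is to dispatching on a raw String inside a hot loop. A hedged sketch of one idiomatic alternative: a sealed trait makes the selector typo-proof and exhaustiveness-checked. All names here are invented for illustration, not the PR's eventual fix:

```scala
sealed trait BackwardMode
case object UpdateGradInput extends BackwardMode
case object AccGradParameters extends BackwardMode

// Dispatch on an ADT instead of a String: the compiler warns if a
// case is missing, and a misspelled mode cannot compile.
def runBackward(mode: BackwardMode): Unit = mode match {
  case UpdateGradInput   => println("compute gradInput")
  case AccGradParameters => println("accumulate parameter gradients")
}

runBackward(UpdateGradInput)
```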

yiheng merged commit c1c5f55 into intel-analytics:master on Nov 3, 2016
wzhongyuan pushed a commit to wzhongyuan/BigDL that referenced this pull request on Jan 16, 2019:
* feat: mkl-dnn initialize

* fix: structure of building

* fix: public final static

* fix: delete the dependencies of environments

* fix: skip tests

* add update dnn wrappers

* fix: dynamic load iomp5

* feat linear supports and some fix

* add more wrapper

* add lrn api

* fix: add bn and softmax

* fix: some fixes

* fix: mkl-dnn build

* feat: add get format api

* fix: add getSize

* feat: aligned memory

* add conv fuse relu api

* fix: add aligned storage

* add concat api

* fix: mkl envs for lib mkldnn

* fix: add mkl add method with 2 ptrs

* fix: update to Release

* fix: batch norm infer mode

* fix: update 0.5.0 -> 0.6.0

* add free (intel-analytics#5)

* feat: affinity for java thread

* fix: update core branch

* fix: delete the memset constant value for debug, and add affinity

* feat: add mkl-dnn fusion

* fix: memory format enum consistent with dnn

* feat: add auto format

* refactor: delete the MemoryFormat in MklDnn

* Memory should load MKLDnn (intel-analytics#6)

* refactor: move enums to seprate classes (intel-analytics#7)

* feat: add GetShape and GetFormat api

* fix: delete printf

* fix a bug

* add sum

* refactor: change name

* refactor: change submodule infos

* fix: set block time by default. A property to control to disable it
Le-Zheng pushed a commit to Le-Zheng/BigDL that referenced this pull request on Oct 20, 2021.