
THNN functional module conversions batch 1 #547

Merged: 11 commits merged into torch:master on Jan 3, 2016

Conversation

andreaskoepf
Contributor

  • ClassNLLCriterion
  • DistKLDivCriterion
  • HardShrink
  • HardTanh
  • L1Cost
  • final cross-check test with old implementation

@hughperkins
Contributor

For some reason ad1efee has broken clnn. It would be less of a mystery if it also seemed to break nn or cunn at the same time :-) It seems like nn/Abs.lua expects input to contain input.THNN now, though how THNN gets into input is a mystery to me, for now.

Edit, diagnosis bit by bit:

  • seems like nn now contains THNN.lua, which probably adds THNN to all tensors somehow
  • running th and creating a torch.Tensor => no .THNN. But then doing require "THNN" causes the original tensor, from before the import, to now have a THNN property (see the sketch just below)
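
A minimal sketch of that observation in the th REPL (assuming, as described later in this thread, that loading THNN attaches a table to each tensor type's metatable; module and field names follow that description):

local t = torch.Tensor(3)
print(t.THNN)   -- nil: nothing named THNN has been loaded yet
require 'nn'    -- nn pulls in THNN.lua during its init
print(t.THNN)   -- now a table: the field lives on the shared metatable for
                -- torch.DoubleTensor, so even tensors created before the
                -- require pick it up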


@soumith
Member

soumith commented Jan 2, 2016

Oh right. Totally forgot that this would break clnn, Hugh! Sorry about that.

@hughperkins
Contributor

:-D


function ClassNLLCriterion:__init(weights, sizeAverage)
parent.__init(self)
if sizeAverage ~= nil then
self.sizeAverage = sizeAverage
hughperkins
Contributor

some of this is 3-indented, and some is 4-indented :-P

I really hate 3-indentation by the way. 2-indentation is quite nice. 4-indentation is quite standard. 3-indentation is one of the things that makes me run away and write python wrappers instead :-P

@andreaskoepf
Contributor Author

@hughperkins Sorry for breaking clnn. I actually wanted to ask @soumith about other backends two days ago, but after I remembered that cudnn uses completely different modules I felt it might be a stupid question to ask. All backends that keep the existing modules but use a different tensor type are affected by the THNN change.

We are using the same 'metatable' trick to pass in the tensor-type-specific C backend functions that was previously done from C: we register, under the name 'THNN', a table containing the new FFI C functions in the metatables of the specific tensor types (e.g. the metatables of DoubleTensor, FloatTensor, CudaTensor).
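
Roughly, that mechanism looks like the following sketch (illustrative names only, not the exact THNN.lua code):

require 'torch'

-- illustrative tables of ffi-wrapped C functions, one per tensor type
local kernels = {
   ['torch.DoubleTensor'] = { Abs_updateOutput = function(state, input, output) end },
   ['torch.FloatTensor']  = { Abs_updateOutput = function(state, input, output) end },
}

-- register each table under the key 'THNN' in that tensor type's metatable,
-- so every tensor of that type exposes it through normal field lookup
for typename, backend in pairs(kernels) do
   rawset(torch.getmetatable(typename), 'THNN', backend)
end

-- a module then dispatches on its input's tensor type, e.g.
--   input.THNN.Abs_updateOutput(input:cdata(), output:cdata())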

Great to see that you have already begun working on mirroring the THNN functions in CL. BTW, you can reach the THNN team via Gitter here.

BTW, sorry for the indentation errors, I will fix these. I use 2-space indentation for my normal Lua and C code, and I am using an editor which AFAIK does not allow setting project-specific indentation rules. I was just too lazy to change the setting and did the indentation by hand, which is obviously not 100% ;-). Regarding 3-space indentation, I like the rumour that Mike Pall once said he would refuse to take anyone seriously who used three spaces.

@andreaskoepf
Contributor Author

Compared the converted modules against the non-THNN versions via xtest.lua. nn2 is based on the last commit before THNN was added.
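
The kind of cross-check being described looks roughly like this (a hedged sketch, not the actual xtest.lua; 'nn2' is a hypothetical package name for the pre-THNN checkout, renamed so its classes don't clash with nn's):

require 'nn'
local nn2 = require 'nn2'   -- hypothetical: nn checked out at the pre-THNN commit

local tester = torch.Tester()
local tests = {}

function tests.HardTanh_crossCheck()
   local input = torch.randn(4, 7)
   local gradOutput = torch.randn(4, 7)

   local newMod, oldMod = nn.HardTanh(), nn2.HardTanh()

   -- forward and backward results of the THNN-based module should match
   -- the old implementation to within numerical precision
   tester:assertTensorEq(newMod:forward(input), oldMod:forward(input), 1e-12,
      'forward outputs differ')
   tester:assertTensorEq(newMod:backward(input, gradOutput),
      oldMod:backward(input, gradOutput), 1e-12,
      'gradInputs differ')
end

tester:add(tests)
tester:run()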

soumith added a commit that referenced this pull request on Jan 3, 2016: "THNN functional module conversions batch 1"
soumith merged commit 31f71b9 into torch:master on Jan 3, 2016
@soumith
Member

soumith commented Jan 3, 2016

Thanks Andreas. This is coming along really well.

@hughperkins
Contributor

Hmmm, seems by some miracle merging this specific pull didn't break the clnn build https://travis-ci.org/hughperkins/clnn/builds/99973441 . I'm not quite sure why that is ...

Edit: tests pass too :-)

Running 63 tests
|______________________________________________________________  ==> Abs_backwardUsing NVIDIA Corporation , OpenCL platform: NVIDIA CUDA
Using OpenCL device: GeForce 940M
...
Completed 96 asserts in 63 tests with 0 errors

Edit2: I guess that there are three possible situations:

  • clnn doesn't provide a class => OK (eg: AbsCriterion)
  • clnn has own implementation => OK (eg: ClassNLLCriterion)
  • clnn uses nn implementation => FAIL (example: Abs)

Edit3: hmmm, no, that's not quite a correct summary, since clnn provides an implementation of Abs. Hmmm...

Edit4: maybe it's something like, if nn calls [SomeTensor].nn.SomeClass_updateOutput, and clnn relies on that, then migrating nn will break clnn?

Edit5: seems like edit4 is plausibly correct, and it's possible to fix any breaks in clnn without needing to implement everything in C/C++ for now, e.g. using an approach similar to https://github.com/hughperkins/clnn/blob/master/ClassNLLCriterion.lua#L5-L18 to monkey-patch things in. We shall see :-)

nn.ClassNLLCriterion.baseUpdateOutput = nn.ClassNLLCriterion.updateOutput
nn.ClassNLLCriterion.baseUpdateGradInput = nn.ClassNLLCriterion.updateGradInput

function nn.ClassNLLCriterion:updateOutput(input, target)
   if torch.type(input) ~= 'torch.ClTensor' then
      return self:baseUpdateOutput(input, target)
   end
   -- clnn implementation here ...
   -- ...
end
