@lantiga (Contributor) commented Apr 29, 2017

This PR demonstrates a possible way to expose custom attributes from C++ functions.
Motivation: right now functions implemented in C++ do not expose function-specific attributes or methods (e.g. stride for ConvForward). This makes it impossible to get such attributes while traversing a computation graph.

Currently, only convolution attributes are exposed, to illustrate the approach. If the proposal gets a thumbs-up, I'll proceed to expose attributes for the rest of the C++ functions.

As agreed with @apaszke on Slack #autograd-internals, this PR points to the autograd branch.

@albanD (Collaborator) left a comment
I'm not sure how big this functions folder is going to grow. We may want to move the attributes code into a separate sub-folder, so that this one contains just the actual function implementations?

```diff
  type.tp_call = THPCppFunction_call;
- type.tp_methods = THPCppFunction_methods;
  type.tp_getset = THPCppFunction_properties;
+ type.tp_methods = function_methods ? function_methods : attributes::default_methods;
```

```cpp
namespace attributes {

template<class T>
PyObject* conv_stride(THPCppFunction* self, PyObject* hook)
```

```diff
 {
   auto& var = *self->cdata;
-  return PyInt_FromLong(var.output_nr);
+  return PyLong_FromLong(var.output_nr);
```



```cpp
namespace attributes {

PyObject* next_functions(THPCppFunction* self, PyObject* hook)
```

@lantiga (Contributor, Author) commented Apr 30, 2017

OK. I have implemented getTupleAttr and getValueAttr to avoid code repetition when exposing attributes. As a result of the code shrink, I've also removed the _attr.* files and, for the same reason, got rid of the attributes namespace.

The one thing I still have to do is inherit the default attributes.

@lantiga (Contributor, Author) commented Apr 30, 2017

I played with a few C++ possibilities for concatenating default_properties with the ones passed to _initFunctionPyTypeObject, but the macro solution felt cleaner. I'd be happy to see C++ alternatives, though.

At this point, all changes that were suggested are in.

@lantiga (Contributor, Author) commented May 2, 2017

The new autograd has been merged, great!
Should I proceed to rebase against master and send a new PR? /cc @soumith @apaszke

@soumith (Member) commented May 2, 2017

Yes please: rebase against master and send a PR there.

@soumith soumith closed this May 2, 2017
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
* Gradient clipping routine with fused kernels

Identical API to PyTorch. Falls back to the PyTorch implementation when not computing the L2 norm.

* Add unit test for gradient clipping

* Add fp16 case to gradient clipping unit test

* Tweaks to grad clipping unit test

Review suggestions from @crcrpar

* Debug gradient clipping tests

When checking that incorrect results produce assertion errors, make sure to generate a discrepancy outside the range of numerical error.