Conversation

zdevito (Contributor) commented May 16, 2018

This removes functionality related to @compile that is no longer used:

  • Removes interpreter_autograd_function
  • Removes the handles/handle builder in the interpreter
  • Removes the creates_handles/tracing_autograd_python_function flags
  • Removes var_flags in PythonOp and CppOp

Known to still remain, but now made dead by this change:

  • stage support in the interpreter

zdevito (Contributor, Author) commented May 18, 2018

@apaszke when you get a chance, can you take a look?

apaszke (Contributor) commented May 18, 2018

Sorry, I’ve been working on the loop unrolling PR. I’ll try to take a look over the weekend.

apaszke (Contributor) left a comment


Looks great! I'm a bit concerned about THPVariable_Wrap in createPythonOp, so please take a look at that, but it looks OK apart from this.

@@ -444,8 +377,7 @@ bool hasHandleOutput(Node * n) {
 #ifndef NO_PYTHON
 Operation createPythonOperation(PythonOp* op) {
   py::function func = py::reinterpret_borrow<py::function>(py::handle(op->pyobj.get()));
-  bool tracing_autograd_python_function = op->tracing_autograd_python_function;
-  bool has_handle = hasHandleOutput(op);
+  JIT_ASSERT(!hasHandleOutput(op));
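
For readers skimming the hunk above: the change replaces the computation of flags that can no longer be true with a single invariant check. A minimal standalone sketch of that pattern, with illustrative names and a plain assert standing in for JIT_ASSERT (this is not PyTorch code):

#include <cassert>

// Illustrative stand-ins for the JIT types involved.
struct Node { bool produces_handle = false; };

static bool hasHandleOutput(const Node* n) { return n->produces_handle; }

void createOperation(const Node* op) {
  // Before the cleanup, a has_handle flag was computed here and branched on.
  // After it, handles no longer exist, so the invariant is simply asserted.
  assert(!hasHandleOutput(op));
  // ... build the operation without any handle bookkeeping ...
}

int main() {
  Node n;               // produces_handle stays false, matching the invariant
  createOperation(&n);
  return 0;
}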

This comment was marked as off-topic.

@@ -54,7 +54,7 @@ void initPythonTracerBindings(PyObject* module_) {
   });

   m.def("_tracer_enter", [](variable_list trace_inputs, std::size_t num_backwards) {
-    return tracer::enter(std::move(trace_inputs), num_backwards + 1, true);
+    return tracer::enter(std::move(trace_inputs), num_backwards + 1);
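
The binding change above is the same cleanup seen at the Python boundary: the m.def lambda keeps forwarding to the C++ entry point, minus the boolean flag that the rest of this PR removed. A minimal standalone sketch of that m.def forwarding pattern, with a hypothetical module name and a stub standing in for tracer::enter (not the real tracer API):

#include <cstddef>
#include <pybind11/pybind11.h>

namespace py = pybind11;

// Hypothetical stand-in for the C++ entry point; it only mirrors the
// argument shape, not the real tracer semantics.
static std::size_t enter(std::size_t num_inputs, std::size_t num_stages) {
  return num_inputs + num_stages;
}

PYBIND11_MODULE(example_tracer, m) {
  m.def("_tracer_enter", [](std::size_t num_inputs, std::size_t num_backwards) {
    // The wrapper forwards only the arguments that are still meaningful;
    // the old trailing boolean flag is gone.
    return enter(num_inputs, num_backwards + 1);
  });
}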

This comment was marked as off-topic.

} else if (arg_type == 't') {
  auto var = peek(stack, next_tensor, num_inputs);
  py_inputs[i] =
      py::reinterpret_steal<py::object>(THPVariable_Wrap(var));
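
This is the spot the review comment above refers to. As background (my reading of standard CPython/pybind11 ownership rules, not of the hidden comments): THPVariable_Wrap hands back a new reference to the variable's Python wrapper, and py::reinterpret_steal adopts that reference without an extra incref, so the resulting py::object owns exactly one reference and releases it in its destructor. A minimal standalone sketch of that rule, using PyLong_FromLong as a stand-in producer of a new reference (not PyTorch code):

#include <pybind11/embed.h>

namespace py = pybind11;

int main() {
  py::scoped_interpreter guard{};  // start an embedded Python interpreter

  // PyLong_FromLong returns a new reference, playing the role that
  // THPVariable_Wrap plays in the snippet above.
  PyObject* raw = PyLong_FromLong(42);

  // reinterpret_steal adopts the reference: no extra Py_INCREF, and the
  // py::object's destructor performs the single matching Py_DECREF.
  py::object value = py::reinterpret_steal<py::object>(raw);

  // For a borrowed reference one would use py::reinterpret_borrow instead,
  // which does its own Py_INCREF.
  return py::cast<int>(value) == 42 ? 0 : 1;
}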

This comment was marked as off-topic.

zdevito merged commit 286cd04 into pytorch:master on May 21, 2018
petrex pushed a commit to petrex/pytorch that referenced this pull request May 23, 2018
…e2_core_hip

* 'caffe2_core_hip' of github.com:petrex/pytorch: (40 commits)
  [auto] Update onnx to 52f7528 - add more shape inference tests (onnx/onnx#971) onnx/onnx@52f7528
  JIT cleanup (pytorch#7631)
  fix to build sleef when using cmake 3.11.1 (pytorch#7679)
  Fix typo in document (pytorch#7725)
  [auto] Update onnx to 6f4b1b1 - Tests for Gemm operator (onnx/onnx#885) onnx/onnx@6f4b1b1
  [auto] Update onnx to c6c6aad - Enhance the 1-element broadcast case (onnx/onnx#902) onnx/onnx@c6c6aad
  serialization for torch.device (pytorch#7713)
  Fix compile flags for MSVC (pytorch#7703)
  Fix exporting Sum to onnx (pytorch#7685)
  Renanme ZFNet to ZFNet512 (pytorch#7723)
  Implement __reduce__ for torch.dtype (pytorch#7699)
  Remove unnecessary include in vec256_float.h (pytorch#7711)
  Update from facebook (pytorch#7696)
  fix for cuda 9.2 builds (pytorch#7709)
  make BatchSampler subclass of Sampler, and expose (pytorch#7707)
  Dont emit warning for ABI incompatibility when PyTorch was built from source (pytorch#7681)
  remove index from python bindings (fixes: pytorch#7639) (pytorch#7690)
  Update _torch_docs.py (pytorch#7700)
  Fix the wrong usage of environment variables detection in cmake
  Changes from D7881937 and D7963936 plus an edit (pytorch#7605)
  ...
weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request Jun 11, 2018
Cleans up dead code in the JIT:

* Remove interpreter_autograd_function
* Remove Handles
* Remove HandleBuilder
* Remove creates_handles, and tracing_autograd_python_function flags
* Remove unused var_args
* Fix submodules