Bug with 2 convolution layers #320

Closed
mkskeller opened this issue Oct 8, 2021 · 1 comment
@mkskeller

When modifying the mpc_autograd_cnn example to have two convolutional layers as in the attached patch, I get the following error:

Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "~/CrypTen/examples/multiprocess_launcher.py", line 64, in _run_process
    run_process_fn(fn_args)
  File "~/CrypTen/examples/mpc_autograd_cnn/launcher.py", line 92, in _run_experiment
    run_mpc_autograd_cnn(
  File "~/CrypTen/examples/mpc_autograd_cnn/mpc_autograd_cnn.py", line 63, in run_mpc_autograd_cnn
    model = crypten.nn.from_pytorch(model_plaintext, dummy_input)
  File "~/CrypTen/crypten/nn/onnx_converter.py", line 45, in from_pytorch
    f = _from_pytorch_to_bytes(pytorch_model, dummy_input)
  File "~/CrypTen/crypten/nn/onnx_converter.py", line 106, in _from_pytorch_to_bytes
    _export_pytorch_model(f, pytorch_model, dummy_input)
  File "~/CrypTen/crypten/nn/onnx_converter.py", line 131, in _export_pytorch_model
    torch.onnx.export(pytorch_model, dummy_input, f, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/__init__.py", line 271, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 694, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 457, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 420, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 380, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 1139, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 125, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 116, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "~/CrypTen/examples/mpc_autograd_cnn/mpc_autograd_cnn.py", line 177, in forward
    out = self.conv2(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [16, 16, 3, 3], expected input[1, 1, 28, 28] to have 16 channels, but got 1 channels instead

Is that a bug in CrypTen?

2conv.txt

@mkskeller
Author

There was a mistake in the code.
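
For reference, the traceback points at the likely mistake: `out = self.conv2(x)` feeds the raw 1-channel input to the second convolution, whose weights of size [16, 16, 3, 3] expect the 16-channel output of conv1. A minimal sketch of a corrected two-convolution model is below; the layer names and channel counts follow the traceback, while the padding, activation, and classifier head are assumptions rather than the example's actual code.

    import torch
    import torch.nn as nn

    class TwoConvNet(nn.Module):
        # Channel counts and layer names follow the traceback:
        # conv1 maps 1 -> 16 channels, conv2 maps 16 -> 16 channels (3x3 kernels).
        # The padding, activation, and classifier head are assumptions.
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(16, 16, kernel_size=3, padding=1)
            self.fc = nn.Linear(16 * 28 * 28, 10)

        def forward(self, x):
            out = torch.relu(self.conv1(x))
            # The failing line was `out = self.conv2(x)`: the second layer was fed
            # the original 1-channel input instead of conv1's 16-channel output.
            out = torch.relu(self.conv2(out))
            out = out.view(out.size(0), -1)
            return self.fc(out)

    # With the forward pass fixed, the ONNX export inside crypten.nn.from_pytorch
    # traces cleanly on a 1-channel 28x28 dummy input:
    # dummy_input = torch.empty(1, 1, 28, 28)
    # encrypted_model = crypten.nn.from_pytorch(TwoConvNet(), dummy_input)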
