Question:
Cannot properly convert a PyTorch model to .mlmodel when a dynamic slice/resize/narrow is involved. How should this be done?
Relevance:
In the attached test case's forward() pass, we not only have the input 'x'; we also need to reshape some extra data to the same shape so that 'x' and 'extra_data' can be combined. This works in PyTorch, but we need one good, working solution that also survives CoreML conversion.
You might find this logic in Transformers, where the PositionalEncoding is added.
Options #1, #2, and #3 all run in PyTorch with coremltools v4 beta 1, BUT all produce JIT tracing warnings and/or conversion errors.
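For readers without the attachment, the pattern under discussion can be sketched roughly like this (a minimal reconstruction from the log excerpts below; the class name, buffer sizes, and the Linear layer are assumptions, not the exact attached test case):

```python
import torch
import torch.nn as nn

class TestModel(nn.Module):
    """Minimal reconstruction; names and shapes are assumptions."""
    def __init__(self, max_len=28, feat=28):
        super().__init__()
        # Pre-computed table, analogous to a positional-encoding buffer
        self.extra_data = torch.randn(max_len, feat)
        self.fc1 = nn.Linear(feat, 10)

    def forward(self, x):
        mySeqLen = x.shape[0]  # dynamic: depends on the input
        # option #1: Python slicing (traces, with a TracerWarning)
        my_data = self.extra_data[:mySeqLen, :]
        # option #2: torch.narrow (fails to convert in 4.0b1, see bug #764)
        # my_data = torch.narrow(self.extra_data.clone(), 0, 0, mySeqLen)
        return self.fc1(x + my_data)

model = TestModel()
out = model(torch.randn(5, 28))  # sequence length 5
```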
Questions:
What is the preferred PyTorch syntax for a model that will later be converted to .mlmodel?
In other words, which PyTorch syntax is guaranteed to work both in PyTorch and in the .mlmodel after CoreML conversion?
Or: which PyTorch syntax is JIT-traceable and works both in PyTorch and in the .mlmodel after CoreML conversion?
Bugs (filed separately):
Bug #764: narrow() is missing in the coremltools v4 conversion: "RuntimeError: PyTorch convert function for op narrow not implemented" (option #2).
Bug #765: the model can be JIT traced but cannot be scripted. With useScriptingFlag = True, conversion fails at "self.name = self.outputs[0]" with "IndexError: list index out of range".
Setup:
Torch version: 1.5.0
CoreML tools version: 4.0b1
Python: 3.7.6
macOS: Catalina (latest)
Log with option #1 (just activate the line with 'option #1') -------------------------------------------
testSliceConvertNew2.py:24: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
my_data = self.extra_data[: mySeqLen, :] # Ok in conversion, constant dimension, seqlen 28
Log with option #2 (just activate the line) -------------------------------------------
testSliceConvertNew2.py:30: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
my_data = torch.narrow(self.extra_data.clone(),0,0,mySeqLen)
4) Convert model
Converting Frontend ==> MIL Ops: 42%|██████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 5/12 [00:00<00:00, 4409.49 ops/s]
Traceback (most recent call last):
File "testSliceConvertNew2.py", line 61, in <module>
inputs=[ ct.TensorType(name="input1", shape=dummy_input.shape) ],
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/_converters_entry.py", line 299, in convert
**kwargs
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/converter.py", line 120, in _convert
prog = frontend_converter(model, **kwargs)
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/converter.py", line 62, in call
return load(*args, **kwargs)
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 84, in load
raise e
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 76, in load
prog = converter.convert()
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 302, in convert
convert_nodes(self.context, self.graph)
File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 52, in convert_nodes
"PyTorch convert function for op {} not implemented".format(node.kind)
RuntimeError: PyTorch convert function for op narrow not implemented
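Regarding bug #764: on the PyTorch side, torch.narrow is equivalent to a basic slice along the given dimension, so option #2 can be rewritten in the option #1 style to avoid the unimplemented narrow op entirely (whether the resulting slice then converts without freezing the length is a separate question, as the option #1 warning shows):

```python
import torch

t = torch.arange(12.0).reshape(4, 3)
# narrow(input, dim, start, length) == input[start:start+length] along dim
narrowed = torch.narrow(t, 0, 0, 2)
sliced = t[0:2]
assert torch.equal(narrowed, sliced)
```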
Log with option #3 (just activate the line) -------------------------------------------
testSliceConvertNew2.py:54: TracerWarning: resize_ can't be represented in the JIT at the moment, so we won't connect any uses of this value with its current trace. If you happen to use it again, it will show up as a constant in the graph.
my_data = self.extra_data.resize_( (x_.shape[0] , x_.shape[1] ,x_.shape[2] ) )
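For option #3: resize_ is in-place, which is exactly why the tracer cannot represent it. When the element counts line up, an out-of-place reshape expresses the same intent and is representable in the trace, though the shape values may still be recorded as constants (the shapes here are illustrative assumptions):

```python
import torch

extra = torch.randn(28, 28)   # 784 elements
x = torch.randn(2, 28, 14)    # also 784 elements
# out-of-place alternative to extra.resize_(x.shape)
my_data = extra.reshape(x.shape)
y = x + my_data
```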
-------------------------- Scripting error
Torch version : 1.5.0
CoreML tools version : 4.0b1
Make model
TestModel(
(fc1): Linear(in_features=28, out_features=10, bias=True)
)
Forward on model
Trace model
testSliceConvertNew2.py:87: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
my_data = self.extra_data[: mySeqLen, :] # option #1
~/Library/Python/3.7/lib/python/site-packages/torch/jit/__init__.py:1256: UserWarning: `optimize` is deprecated and has no effect. Use `with torch.jit.optimized_execution()` instead
warnings.warn("`optimize` is deprecated and has no effect. Use `with torch.jit.optimized_execution()` instead")
Convert model
Traceback (most recent call last):
File "testSliceConvertNew2.py", line 129, in <module>
inputs= [ ct.TensorType(name="input1", shape=dummy_input.shape) ]
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/_converters_entry.py", line 299, in convert
**kwargs
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/converter.py", line 120, in _convert
prog = frontend_converter(model, **kwargs)
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/converter.py", line 62, in call
return load(*args, **kwargs)
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 73, in load
converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols)
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 142, in __init__
raw_graph, params_dict, self.inputs, cut_at_symbols
File "/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/internal_graph.py", line 176, in __init__
self.nodes.append(InternalTorchIRNode(raw_node))
File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/internal_graph.py", line 97, in __init__
self.name = self.outputs[0]
IndexError: list index out of range
Should we use scripting instead? But that does not work either:
https://coremltools.readme.io/docs/model-scripting
Reproducible:
yes
Testcase:
yes, attached
testSliceConvertNew2.txt
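On the scripting question: torch.jit.script does keep the slice length dynamic on the PyTorch side; it is only the coremltools 4.0b1 loader that then fails (bug #765). A minimal sketch, not the attached test case:

```python
import torch
import torch.nn as nn

class TestModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.extra_data = torch.zeros(28, 28)
        self.fc1 = nn.Linear(28, 10)

    def forward(self, x):
        # scripting keeps x.shape[0] symbolic, unlike tracing
        return self.fc1(x + self.extra_data[: x.shape[0], :])

scripted = torch.jit.script(TestModel())
a = scripted(torch.randn(5, 28))
b = scripted(torch.randn(7, 28))  # a different length also works
```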