🐞Describing the bug
CoreML ignores the alpha parameter in torch.add/torch.sub and gives incorrect output.
To Reproduce
```python
import numpy as np
import torch

import coremltools as ct


class Model(torch.nn.Module):
    def forward(self, x, y):
        return torch.sub(x, y, alpha=5)


model = Model()
inputs = (
    5 * torch.ones(10),
    2 * torch.ones(10),
)
ep = torch.export.export(model.eval(), inputs)
ep = ep.run_decompositions({})
eager_outputs = model(*inputs)

mlmodel = ct.convert(ep)
coreml_inputs = mlmodel.get_spec().description.input
coreml_outputs = mlmodel.get_spec().description.output
predict_inputs = {
    str(ct_in.name): pt_in.detach().cpu().numpy().astype(np.int32)
    for ct_in, pt_in in zip(coreml_inputs, inputs)
}
out = mlmodel.predict(predict_inputs)
print("Eager", eager_outputs)
print("CoremL", out)
```
The outputs are:

```
Eager tensor([-5., -5., -5., -5., -5., -5., -5., -5., -5., -5.])
CoremL {'sub': array([3., 3., 3., 3., 3., 3., 3., 3., 3., 3.], dtype=float32)}
```
The CoreML output is `x - y` = 3, while eager PyTorch correctly computes `x - 5 * y` = -5: CoreML is ignoring the alpha parameter.
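As a temporary workaround (a sketch, not a fix for the converter itself), the `alpha` keyword can be avoided by writing the scaling out explicitly, since `torch.sub(x, y, alpha=a)` is mathematically `x - a * y`:

```python
import torch


class ModelNoAlpha(torch.nn.Module):
    """Same math as torch.sub(x, y, alpha=5), but without the alpha
    keyword that CoreML drops during conversion."""

    def forward(self, x, y):
        return x - 5 * y


# Sanity check against the alpha form in eager PyTorch.
x = 5 * torch.ones(10)
y = 2 * torch.ones(10)
assert torch.equal(ModelNoAlpha()(x, y), torch.sub(x, y, alpha=5))
```

Exporting and converting this variant should produce the expected -5 values, since plain `sub`/`mul` ops are translated correctly.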
System environment (please complete the following information):
- coremltools version: 8.3
- OS (e.g. MacOS version or Linux type): macOS 15