[Bug] Relax ONNX Sinh/Cosh overflow to inf while ONNX Runtime returns finite values #19559

@ALinrunrun

Description

Expected behavior

TVM Relax should execute ONNX Sinh and Cosh consistently with ONNX Runtime for large but still representable float32 inputs.

For inputs around x = 89, ONNX Runtime returns finite float32 values close to the float32 maximum (about 3.4e38).
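
A quick NumPy check (not part of the original report, included for reference) confirms that these values are representable in float32: evaluating sinh/cosh in float64 and casting down stays below the float32 maximum.

import numpy as np

x = np.array([88.85, 89.0, 89.2, -88.95], dtype=np.float32)

# Evaluate in float64, then cast down to float32: all results stay finite.
ref_sinh = np.sinh(x.astype(np.float64)).astype(np.float32)
ref_cosh = np.cosh(x.astype(np.float64)).astype(np.float32)

print(ref_sinh)                   # finite, magnitudes ~1.9e38 to 2.7e38, matching the ORT output below
print(ref_cosh)                   # finite, same magnitudes
print(np.finfo(np.float32).max)   # 3.4028235e+38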

Actual behavior

TVM Relax returns inf / -inf while ONNX Runtime still returns finite values:

Sinh  input: [ 88.85  89.    89.2  -88.95]
  ORT: [ 1.9321198e+38  2.2448064e+38  2.7418043e+38 -2.1353193e+38]
  TVM: [ inf  inf  inf -inf]

Cosh  input: [ 88.85  89.    89.2  -88.95]
  ORT: [1.9321198e+38 2.2448064e+38 2.7418043e+38 2.1353193e+38]
  TVM: [inf inf inf inf]

The discrepancy appears when importing ONNX Sinh and Cosh models through the Relax ONNX frontend and compiling them for the llvm target.
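
A plausible cause (an assumption, not verified against the actual Relax lowering) is that Sinh/Cosh are evaluated with the textbook formulas (exp(x) - exp(-x)) / 2 and (exp(x) + exp(-x)) / 2 directly in float32: exp(89) is roughly 4.5e38, which already overflows float32 even though sinh(89) and cosh(89) themselves do not. The NumPy sketch below only demonstrates that overflow pattern; it does not claim this is the exact lowering TVM uses.

import numpy as np

x = np.array([88.85, 89.0, 89.2, -88.95], dtype=np.float32)

# Naive float32 formulas: the intermediate exp(|x|) overflows to inf.
with np.errstate(over="ignore"):
    naive_sinh = (np.exp(x) - np.exp(-x)) / np.float32(2)
    naive_cosh = (np.exp(x) + np.exp(-x)) / np.float32(2)

# Rescaled variant: exp(x - ln 2) == exp(x) / 2, so there is no intermediate overflow here.
ln2 = np.float32(np.log(2.0))
safe_sinh = np.exp(x - ln2) - np.exp(-x - ln2)
safe_cosh = np.exp(x - ln2) + np.exp(-x - ln2)

print("naive sinh:", naive_sinh)  # [ inf  inf  inf -inf] -- same pattern as the TVM output
print("safe  sinh:", safe_sinh)   # finite, close to the ORT output
print("naive cosh:", naive_cosh)  # [inf inf inf inf]
print("safe  cosh:", safe_cosh)   # finite, close to the ORT output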

Environment

TVM: 0.14 (Relax ONNX frontend)
ONNX Runtime: 1.23
Python: 3.11
Target: llvm
OS: Linux

Steps to reproduce

import warnings

warnings.filterwarnings("ignore")

import numpy as np
import onnxruntime as ort
import tvm
from onnx import TensorProto, helper
from tvm import relax
from tvm.relax.frontend.onnx import from_onnx


def build_model(op):
    # Single-node ONNX graph: y = op(x) for a float32 tensor of shape [4].
    node = helper.make_node(op, ["x"], ["y"])

    graph = helper.make_graph(
        [node],
        "g",
        [helper.make_tensor_value_info("x", TensorProto.FLOAT, [4])],
        [helper.make_tensor_value_info("y", TensorProto.FLOAT, [4])],
    )

    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 20)])
    model.ir_version = 9
    return model


x = np.array([88.85, 89.0, 89.2, -88.95], dtype=np.float32)

for op in ("Sinh", "Cosh"):
    model = build_model(op)

    # Reference result from ONNX Runtime (CPU execution provider).
    sess = ort.InferenceSession(
        model.SerializeToString(),
        providers=["CPUExecutionProvider"],
    )

    ort_out = sess.run(None, {"x": x})[0]

    # Import the ONNX model through the Relax frontend and compile it for the LLVM target.
    mod = from_onnx(model)

    with tvm.transform.PassContext(opt_level=3):
        ex = tvm.compile(mod, target=tvm.target.Target("llvm"))

    vm = relax.VirtualMachine(ex, tvm.cpu())

    # Run the compiled module on the same input through the Relax VM.
    out = vm["main"](tvm.runtime.tensor(x, tvm.cpu()))
    tvm_out = (out[0] if isinstance(out, (list, tuple)) else out).numpy()

    print(f"{op} input:", x)
    print("  ORT:", ort_out)
    print("  TVM:", tvm_out)

Triage

  • needs-triage
