
[ci] Flaky tutorial run: Could not find any valid schedule for task #13284

@driazati

Description


Seen here and elsewhere: https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/4628/pipeline

[2022-11-02T15:27:34.310Z]   File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
[2022-11-02T15:27:34.310Z]   File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
[2022-11-02T15:27:34.310Z]     raise InstantiationError("Skipped because of invalid gpu kernel")
[2022-11-02T15:27:34.310Z] tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
[2022-11-02T15:27:34.310Z] 
[2022-11-02T15:27:34.310Z] Traceback (most recent call last):
[2022-11-02T15:27:34.310Z]   24: TVMFuncCall
[2022-11-02T15:27:34.310Z]         at ../src/runtime/c_runtime_api.cc:477
[2022-11-02T15:27:34.310Z]   23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1217
[2022-11-02T15:27:34.310Z]   22: Call
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1213
[2022-11-02T15:27:34.310Z]   21: operator()
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1731
[2022-11-02T15:27:34.310Z]   20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1671
[2022-11-02T15:27:34.310Z]   19: run<>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   18: run<tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1646
[2022-11-02T15:27:34.310Z]   13: operator()
[2022-11-02T15:27:34.310Z]         at ../src/driver/driver_api.cc:391
[2022-11-02T15:27:34.310Z]   12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
[2022-11-02T15:27:34.310Z]         at ../src/driver/driver_api.cc:377
[2022-11-02T15:27:34.310Z]   11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
[2022-11-02T15:27:34.310Z]         at ../src/driver/driver_api.cc:272
[2022-11-02T15:27:34.310Z]   10: tvm::transform::Pass::operator()(tvm::IRModule) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:258
[2022-11-02T15:27:34.310Z]   9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:274
[2022-11-02T15:27:34.310Z]   8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:453
[2022-11-02T15:27:34.310Z]   7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:274
[2022-11-02T15:27:34.310Z]   6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/tir/ir/transform.cc:100
[2022-11-02T15:27:34.310Z]   5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1750
[2022-11-02T15:27:34.310Z]   4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1694
[2022-11-02T15:27:34.310Z]   3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1618
[2022-11-02T15:27:34.310Z]   2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1217
[2022-11-02T15:27:34.310Z]   1: Call
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1213
[2022-11-02T15:27:34.310Z]   0: operator()
[2022-11-02T15:27:34.310Z]         at ../src/runtime/c_runtime_api.cc:534
[2022-11-02T15:27:34.310Z]   File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
[2022-11-02T15:27:34.310Z]   File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
[2022-11-02T15:27:34.310Z]     raise InstantiationError("Skipped because of invalid gpu kernel")
[2022-11-02T15:27:34.310Z] tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel	[('tile_f', [-1, 16, 32, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 2, 64]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1324444
[2022-11-02T15:27:34.310Z] WARNING:root:Could not find any valid schedule for task Task(func_name=tutorial/conv2d_no_batching, args=(1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1)), kwargs={}, workload=('tutorial/conv2d_no_batching', 1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1))). A file containing the errors has been written to /tmp/tvm_tuning_errors_0x9nxpws.log.
[2022-11-02T15:27:34.310Z] DEBUG:autotvm:Finish loading 20 records
[2022-11-02T15:27:34.310Z] WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32, workload=('tutorial/conv2d_no_batching', 1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1)). A fallback configuration is used, which may bring great performance regression.
[2022-11-02T15:27:34.310Z] DEBUG:autotvm:Finish loading 20 records
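For context on why the run is flaky rather than deterministically broken: autotvm intentionally rejects sampled configurations whose launch parameters would produce an invalid GPU kernel (the `verify_pass` check in `measure_methods.py` raises `InstantiationError` for them and tuning moves on). The failure above happens when every config the tuner happens to sample for the task is rejected, so no valid schedule survives. A minimal pure-Python sketch of that skip-and-continue behavior (simplified; `validate_config` and `tune` are hypothetical illustration names, not TVM APIs, and the thread-count check is a stand-in for the real verification pass):

```python
# Illustrative sketch of autotvm's invalid-kernel filtering; not TVM code.
# validate_config/tune are hypothetical names for illustration only.

class InstantiationError(Exception):
    """Raised when a sampled config cannot produce a valid GPU kernel."""

# Limits reported in the log for this target (sm_75):
MAX_NUM_THREADS = 1024   # -max_num_threads=1024
THREAD_WARP_SIZE = 32    # -thread_warp_size=32

def validate_config(tile_f, tile_y, tile_x):
    # Simplified stand-in for verify_pass: treat the product of the
    # thread-bound tile factors as the block size and reject oversized ones.
    threads = tile_f * tile_y * tile_x
    if threads > MAX_NUM_THREADS:
        raise InstantiationError("Skipped because of invalid gpu kernel")
    return threads

def tune(configs):
    """Return the valid configs, skipping invalid ones like autotvm does."""
    valid = []
    for cfg in configs:
        try:
            validate_config(*cfg)
            valid.append(cfg)
        except InstantiationError:
            continue  # autotvm records the error and keeps sampling
    if not valid:
        # The situation in the log: every sampled config was invalid.
        print("WARNING: Could not find any valid schedule for task")
    return valid
```

Under this view, raising the number of trials (or seeding the tuner) reduces the chance that all samples land in the invalid region, which is one way to deflake the tutorial.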

cc @Mousius @areusch @gigiblender @leandron


Labels: type: doc, type: ci (Relates to TVM CI infrastructure)
