Open
Labels
needs-triage (PRs or issues that need to be investigated by maintainers to find the right assignees to address it), type: bug
Description
Actual behavior
Tuning topi.nn.pool_grad with MetaSchedule aborts: SampleInitPopulation raises a ScheduleError inside parallel_for_dynamic, and the underlying error message is not rendered.
2025-05-09 18:24:12 [INFO] Logging directory: ./tune_tmp/logs
2025-05-09 18:24:15 [INFO] LocalBuilder: max_workers = 32
2025-05-09 18:24:16 [INFO] LocalRunner: max_workers = 1
2025-05-09 18:24:16 [INFO] [task_scheduler.cc:159] Initializing Task #0: "main"
2025-05-09 18:24:18 [INFO] [task_scheduler.cc:320]
ID | Name | FLOP | Weight | Speed (GFLOPS) | Latency (us) | Weighted Latency (us) | Trials | Done
-----------------------------------------------------------------------------------------------------
0 | main | 434176 | 1 | N/A | N/A | N/A | 0 |
-----------------------------------------------------------------------------------------------------
Total trials: 0
Total latency (us): 0
2025-05-09 18:24:18 [INFO] [task_scheduler.cc:180] TaskScheduler picks Task #0: "main"
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/TirFuzz/bugs/05-03_20-50/topi.nn.pool_grad_0.py", line 11, in <module>
database = ms.tir_integration.tune_tir(mod=sch.mod, target='llvm --num-cores=16', work_dir='./tune_tmp', max_trials_global=1, num_trials_per_iter=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/envs/tvm/python/tvm/meta_schedule/tir_integration.py", line 146, in tune_tir
return tune_tasks(
^^^^^^^^^^^
File "/data/qshenaf/envs/tvm/python/tvm/meta_schedule/tune.py", line 122, in tune_tasks
task_scheduler.tune(
File "/data/qshenaf/envs/tvm/python/tvm/meta_schedule/task_scheduler/task_scheduler.py", line 132, in tune
_ffi_api.TaskSchedulerTune( # type: ignore # pylint: disable=no-member
File "tvm/_ffi/_cython/./packed_func.pxi", line 339, in tvm._ffi._cy3.core.PackedFuncBase.__call__
File "tvm/_ffi/_cython/./packed_func.pxi", line 284, in tvm._ffi._cy3.core.FuncCall
File "tvm/_ffi/_cython/./base.pxi", line 185, in tvm._ffi._cy3.core.CHECK_CALL
File "/data/qshenaf/envs/tvm/python/tvm/_ffi/base.py", line 468, in raise_last_ffi_error
raise py_err
File "/data/qshenaf/envs/tvm/src/meta_schedule/task_scheduler/gradient_based.cc", line 54, in tvm::meta_schedule::GradientBasedNode::Tune(tvm::runtime::Array<tvm::meta_schedule::TuneContext, void>, tvm::runtime::Array<tvm::FloatImm, void>, int, int, int, tvm::meta_schedule::Builder, tvm::meta_schedule::Runner, tvm::runtime::Array<tvm::meta_schedule::MeasureCallback, void>, tvm::runtime::Optional<tvm::meta_schedule::Database>, tvm::runtime::Optional<tvm::meta_schedule::CostModel>)
TaskSchedulerNode::Tune(tasks, task_weights, max_trials_global, max_trials_per_task,
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/envs/tvm/src/meta_schedule/task_scheduler/task_scheduler.cc", line 190, in tvm::meta_schedule::TaskSchedulerNode::Tune(tvm::runtime::Array<tvm::meta_schedule::TuneContext, void>, tvm::runtime::Array<tvm::FloatImm, void>, int, int, int, tvm::meta_schedule::Builder, tvm::meta_schedule::Runner, tvm::runtime::Array<tvm::meta_schedule::MeasureCallback, void>, tvm::runtime::Optional<tvm::meta_schedule::Database>, tvm::runtime::Optional<tvm::meta_schedule::CostModel>)
task->ctx->search_strategy.value()->GenerateMeasureCandidates()) {
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/envs/tvm/src/meta_schedule/search_strategy/evolutionary_search.cc", line 447, in tvm::meta_schedule::EvolutionarySearchNode::GenerateMeasureCandidates()
return this->state_->GenerateMeasureCandidates();
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/envs/tvm/src/meta_schedule/search_strategy/evolutionary_search.cc", line 717, in tvm::meta_schedule::EvolutionarySearchNode::State::GenerateMeasureCandidates()
std::vector<Schedule> unmeasured = SampleInitPopulation(pop - measured.size());
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/envs/tvm/src/meta_schedule/search_strategy/evolutionary_search.cc", line 524, in tvm::meta_schedule::EvolutionarySearchNode::State::SampleInitPopulation(int)
support::parallel_for_dynamic(0, num, self->ctx_->num_threads, f_proc_unmeasured);
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/envs/tvm/src/support/parallel_for.cc", line 128, in tvm::support::parallel_for_dynamic(int, int, int, std::function<void (int, int)> const&)
LOG(FATAL) << "RuntimeError: parallel_for_dynamic error with " << e.what();
^^^^^^^^^^^^^^^^^^^^^^^^^^^
tvm._ffi.base.TVMError: Traceback (most recent call last):
5: tvm::meta_schedule::GradientBasedNode::Tune(tvm::runtime::Array<tvm::meta_schedule::TuneContext, void>, tvm::runtime::Array<tvm::FloatImm, void>, int, int, int, tvm::meta_schedule::Builder, tvm::meta_schedule::Runner, tvm::runtime::Array<tvm::meta_schedule::MeasureCallback, void>, tvm::runtime::Optional<tvm::meta_schedule::Database>, tvm::runtime::Optional<tvm::meta_schedule::CostModel>)
at /data/qshenaf/envs/tvm/src/meta_schedule/task_scheduler/gradient_based.cc:54
4: tvm::meta_schedule::TaskSchedulerNode::Tune(tvm::runtime::Array<tvm::meta_schedule::TuneContext, void>, tvm::runtime::Array<tvm::FloatImm, void>, int, int, int, tvm::meta_schedule::Builder, tvm::meta_schedule::Runner, tvm::runtime::Array<tvm::meta_schedule::MeasureCallback, void>, tvm::runtime::Optional<tvm::meta_schedule::Database>, tvm::runtime::Optional<tvm::meta_schedule::CostModel>)
at /data/qshenaf/envs/tvm/src/meta_schedule/task_scheduler/task_scheduler.cc:190
3: tvm::meta_schedule::EvolutionarySearchNode::GenerateMeasureCandidates()
at /data/qshenaf/envs/tvm/src/meta_schedule/search_strategy/evolutionary_search.cc:447
2: tvm::meta_schedule::EvolutionarySearchNode::State::GenerateMeasureCandidates()
at /data/qshenaf/envs/tvm/src/meta_schedule/search_strategy/evolutionary_search.cc:717
1: tvm::meta_schedule::EvolutionarySearchNode::State::SampleInitPopulation(int)
at /data/qshenaf/envs/tvm/src/meta_schedule/search_strategy/evolutionary_search.cc:524
0: tvm::support::parallel_for_dynamic(int, int, int, std::function<void (int, int)> const&)
at /data/qshenaf/envs/tvm/src/support/parallel_for.cc:128
File "/data/qshenaf/envs/tvm/src/support/parallel_for.cc", line 128
RuntimeError: parallel_for_dynamic error with ScheduleError: (not rendered)
Environment
tvm-0.21.dev0
Steps to reproduce
import tvm
from tvm import te, topi, tir
from tvm import meta_schedule as ms
grads = te.placeholder((1, 16, 32, 32), dtype='float32', name='grads')
data = te.placeholder((1, 16, 32, 32), dtype='float32', name='data')
op_config = {
    'grads': grads, 'data': data,
    'kernel': [3, 3], 'stride': [2, 2], 'padding': [1, 1, 1, 1],
    'pool_type': 'max', 'ceil_mode': False, 'count_include_pad': False,
    'layout': 'NCHW',
}
op_output = topi.nn.pool_grad(**op_config)
sch = tir.Schedule(te.create_prim_func([grads, data, op_output]).with_attr('target', tvm.target.Target('llvm')))
database = ms.tir_integration.tune_tir(mod=sch.mod, target='llvm --num-cores=16', work_dir='./tune_tmp', max_trials_global=1, num_trials_per_iter=1)
sch = ms.tir_integration.compile_tir(database, sch.mod, 'llvm --num-cores=16')
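For reference, the pooling geometry in the repro can be checked with standard output-size arithmetic. The helper below is illustrative only (not part of TVM's API); it shows the forward pool extent that pool_grad has to invert for the config above:

```python
import math

def pooled_size(in_size, kernel, stride, pad_before, pad_after, ceil_mode=False):
    """Output extent of a pooling window along one spatial axis
    (standard floor/ceil pooling arithmetic)."""
    span = in_size + pad_before + pad_after - kernel
    steps = math.ceil(span / stride) if ceil_mode else span // stride
    return steps + 1

# With kernel=3, stride=2, padding=[1, 1, 1, 1], ceil_mode=False as in the
# repro: 32 -> 16 along both H and W, so the forward pool output is
# (1, 16, 16, 16) for the (1, 16, 32, 32) NCHW input.
print(pooled_size(32, kernel=3, stride=2, pad_before=1, pad_after=1))  # -> 16
```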
Triage
- needs-triage
- tune:meta_schedule
cc @ibsidorenko