This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[OpPerf] Fix axis_shape and function mismatch for LTS (#17894)
ChaiBapchya committed Apr 5, 2020
1 parent b6edefb commit 2fff11d
Showing 3 changed files with 7 additions and 6 deletions.
4 changes: 3 additions & 1 deletion benchmark/opperf/README.md
@@ -76,6 +76,8 @@ python incubator-mxnet/benchmark/opperf/opperf.py --output-format json --output-
4. **profiler** : `native` or `python`. By default, 'native'. You can override and set the global profiler for all operator benchmarks. Example: --profiler 'python'.
The native profiler uses MXNet's built-in C++ profiler; the Python profiler uses the Python `time` package. Generally, the native profiler is used by developers and the Python profiler by users.

+ 5. **int64-tensor** : `on` or `off`. By default, 'off'. You can override and enable int64 (large) tensor support for all operator benchmarks. Example: --int64-tensor ON

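The two flags above could be parsed along the lines of the following argparse sketch. This is a hypothetical illustration of the on/off-string convention, not the actual `opperf.py` argument handling, whose details may differ:

```python
import argparse

def build_parser():
    # Sketch of CLI parsing for the flags above (hypothetical; the real
    # opperf.py argument handling may differ).
    parser = argparse.ArgumentParser(description="Operator benchmark runner (sketch)")
    parser.add_argument('--profiler', type=str, default='native',
                        choices=['native', 'python'],
                        help="'native' = MXNet built-in C++ profiler, "
                             "'python' = Python time-based measurement")
    parser.add_argument('--int64-tensor', type=str, default='off',
                        help="Set to 'on'/'ON' to benchmark with int64 (large) tensors")
    return parser

# The string flag is lowered to a boolean before use in this sketch,
# so both '--int64-tensor ON' and '--int64-tensor on' enable it.
args = build_parser().parse_args(['--int64-tensor', 'ON'])
int64_tensor = args.int64_tensor.lower() == 'on'
print(int64_tensor)  # True
```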
## Usecase 2 - Run benchmarks for all the operators in a specific category

For example, to run benchmarks for all NDArray broadcast binary operators (e.g. broadcast_add, broadcast_mod, broadcast_pow), you just run the following Python script.
@@ -199,7 +201,7 @@ By default, the MXNet profiler is used as the profiler engine.

All contributions are welcome. Below is the list of desired features:

- 1. Cover all MXNet operators.
+ 1. ~~Cover all MXNet operators~~.
2. Enhance MXNet profiler with additional APIs to programmatically fetch and process profiler data.
3. Integration with a CI/CD system to run operator benchmarks for PR builds and nightly builds.
4. Dashboards and other ways of presenting results, to support analysis and planning tasks such as operator performance improvements.
@@ -97,7 +97,7 @@ def run_rearrange_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='
mx_rearrange_ops = get_all_rearrange_operators()

# Run benchmarks
- mx_rearrange_op_results = run_op_benchmarks(mx_rearrange_ops, dtype, ctx, profiler, warmup, runs)
+ mx_rearrange_op_results = run_op_benchmarks(mx_rearrange_ops, dtype, ctx, profiler, int64_tensor, warmup, runs)
return mx_rearrange_op_results


@@ -129,7 +129,7 @@ def run_shape_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='nati
mx_shape_ops = get_all_shape_operators()

# Run benchmarks
- mx_shape_op_results = run_op_benchmarks(mx_shape_ops, dtype, ctx, profiler, warmup, runs)
+ mx_shape_op_results = run_op_benchmarks(mx_shape_ops, dtype, ctx, profiler, int64_tensor, warmup, runs)
return mx_shape_op_results


@@ -161,7 +161,7 @@ def run_expanding_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='
mx_expanding_ops = get_all_expanding_operators()

# Run benchmarks
- mx_expanding_op_results = run_op_benchmarks(mx_expanding_ops, dtype, ctx, profiler, warmup, runs)
+ mx_expanding_op_results = run_op_benchmarks(mx_expanding_ops, dtype, ctx, profiler, int64_tensor, warmup, runs)
return mx_expanding_op_results


@@ -193,7 +193,7 @@ def run_rounding_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='n
mx_rounding_ops = get_all_rounding_operators()

# Run benchmarks
- mx_rounding_op_results = run_op_benchmarks(mx_rounding_ops, dtype, ctx, profiler, warmup, runs)
+ mx_rounding_op_results = run_op_benchmarks(mx_rounding_ops, dtype, ctx, profiler, int64_tensor, warmup, runs)
return mx_rounding_op_results


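Every hunk in this file follows the same pattern: a category wrapper fetches its operator list and now forwards `int64_tensor` to the shared `run_op_benchmarks` driver. A minimal sketch of that pass-through pattern, using a hypothetical stand-in driver and a placeholder operator list rather than the real MXNet helpers:

```python
def run_op_benchmarks(ops, dtype, ctx, profiler, int64_tensor, warmup, runs):
    # Stand-in for the shared benchmark driver (the real MXNet implementation
    # times each operator; here we only record the forwarded configuration).
    return {op: {'dtype': dtype, 'ctx': ctx, 'profiler': profiler,
                 'int64_tensor': int64_tensor, 'warmup': warmup, 'runs': runs}
            for op in ops}

def run_shape_operators_benchmarks(ctx='cpu', dtype='float32', profiler='native',
                                   int64_tensor='off', warmup=25, runs=100):
    # Category wrapper matching the pattern in the diff: fetch the operator
    # list, then forward every setting, including int64_tensor, to the driver.
    mx_shape_ops = ['shape_array', 'size_array']  # placeholder operator list
    return run_op_benchmarks(mx_shape_ops, dtype, ctx, profiler,
                             int64_tensor, warmup, runs)

results = run_shape_operators_benchmarks(int64_tensor='on')
```

The commit's bug was precisely a missing positional argument in these calls: without `int64_tensor` in the forwarded list, the driver received `warmup` in its place.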
1 change: 0 additions & 1 deletion benchmark/opperf/rules/default_params.py
@@ -603,7 +603,6 @@
"p": DEFAULT_P,
"k_nd": DEFAULT_K_ND_LARGE_TENSOR,
"p_nd": DEFAULT_P_ND_LARGE_TENSOR,
- "axis_shape": DEFAULT_AXIS_SHAPE,
"axis": DEFAULT_AXIS,
"weight" : DEFAULT_WEIGHT_LARGE_TENSOR,
"weight32" : DEFAULT_WEIGHT_LARGE_TENSOR,
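The deleted `"axis_shape"` entry is the "function mismatch" from the commit title: a default-parameter key with no matching operator argument. A mismatch like this can be caught mechanically by comparing a defaults dict against a function signature. The checker below is an illustration of the idea, not code from the repository, and `broadcast_axis`'s signature here is a hypothetical stand-in:

```python
import inspect

def unknown_defaults(func, defaults):
    # Report default-param keys that do not match any argument of func,
    # i.e. the kind of dangling entry this commit removes.
    params = set(inspect.signature(func).parameters)
    return sorted(k for k in defaults if k not in params)

def broadcast_axis(data=None, axis=None, size=None):  # hypothetical op signature
    pass

print(unknown_defaults(broadcast_axis, {'axis': 0, 'axis_shape': (1, 1024)}))
# ['axis_shape']
```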
