
How much runtime overhead do advices induce? #2

Open
danidiaz opened this issue Jan 24, 2021 · 1 comment

Comments

@danidiaz
Owner

Putting the arguments into the NP and spreading them again isn't likely to be cheap. I should benchmark.

The internals of Advice are hidden. Perhaps, if optimization were required, Advice could become a sum type:

  • One branch for "do-nothing" advices.
  • One branch for advices which don't care about the arguments, and only tweak executions.
  • One branch for advices which do care about the arguments, but don't tweak them, only analyze them somehow. I guess this would avoid having to spread them again.
  • One branch for advices with the "full behaviour".

Specialized constructor functions (beyond makeArgsAdvice and makeExecutionAdvice) should also be exported from the module.

advise and mappend could pattern-match on the new constructors and adopt more efficient implementations when possible.
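A minimal, self-contained sketch of how such a sum type could look. It models advices over plain `args -> IO r` actions instead of the library's real `DepT`/`NP` machinery, so `Advice`, `advise`, `toFull`, and the constructor names below are all illustrative, not the actual API:

```haskell
-- Hypothetical sketch: advices modelled over plain `args -> IO r` actions,
-- not over the library's real DepT/NP representation.
data Advice args r
  = NoOpAdvice                                   -- the "do-nothing" advice
  | ExecutionAdvice (IO r -> IO r)               -- only tweaks the execution
  | ArgsObserver (args -> IO ())                 -- inspects args, never respreads them
  | FullAdvice (args -> IO (args, IO r -> IO r)) -- the "full behaviour"

-- advise can pick a cheaper implementation per constructor.
advise :: Advice args r -> (args -> IO r) -> (args -> IO r)
advise NoOpAdvice          f = f                 -- zero overhead
advise (ExecutionAdvice t) f = t . f             -- no argument traffic at all
advise (ArgsObserver obs)  f = \args -> obs args *> f args
advise (FullAdvice g)      f = \args -> do
  (args', tweak) <- g args
  tweak (f args')

-- Normalize any advice to the "full" shape, for the general composition case.
toFull :: Advice args r -> args -> IO (args, IO r -> IO r)
toFull NoOpAdvice          args = pure (args, id)
toFull (ExecutionAdvice t) args = pure (args, t)
toFull (ArgsObserver obs)  args = obs args *> pure (args, id)
toFull (FullAdvice g)      args = g args

-- (<>) can likewise collapse the cheap combinations.
instance Semigroup (Advice args r) where
  NoOpAdvice <> a = a
  a <> NoOpAdvice = a
  ExecutionAdvice t1 <> ExecutionAdvice t2 = ExecutionAdvice (t1 . t2)
  a1 <> a2 = FullAdvice $ \args -> do
    (args1, t1) <- toFull a1 args
    (args2, t2) <- toFull a2 args1
    pure (args2, t1 . t2)

instance Monoid (Advice args r) where
  mempty = NoOpAdvice
```

The point of the `Semigroup` clauses is that composing with `NoOpAdvice` is free and composing two execution-only advices never touches the arguments; only the general case pays for the full normalize-and-recompose path.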


danidiaz commented Jan 29, 2021

I've added a very simple benchmark in dc58403.

The benchmark uses the "identity advice" on a function with 4 parameters.
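A benchmark of this shape can be sketched with criterion. Everything below (`fourArgs`, the synthetic identity-advice wrapper) is made up for illustration; the real benchmark is the one added in dc58403:

```haskell
import Criterion.Main (bench, bgroup, defaultMain, whnfIO)

-- A function with 4 parameters, as in the benchmark description.
fourArgs :: Int -> Int -> Int -> Int -> IO Int
fourArgs a b c d = pure (a + b + c + d)

-- The "identity advice" modelled as a wrapper that changes nothing.
identityAdvice :: (Int -> Int -> Int -> Int -> IO Int)
               -> (Int -> Int -> Int -> Int -> IO Int)
identityAdvice f = f

-- In a real benchmark executable this would be `main`.
runBenchmarks :: IO ()
runBenchmarks = defaultMain
  [ bgroup "adviceOverhead"
      [ bench "not instrumented"       (whnfIO (fourArgs 1 2 3 4))
      , bench "instrumented id advice" (whnfIO (identityAdvice fourArgs 1 2 3 4))
      ]
  ]
```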

Results with -O0:

benchmarking adviceOverhead/not instrumented
time                 22.46 ms   (20.71 ms .. 24.53 ms)
                     0.971 R²   (0.945 R² .. 0.990 R²)
mean                 20.64 ms   (20.02 ms .. 21.69 ms)
std dev              1.846 ms   (1.328 ms .. 2.644 ms)
variance introduced by outliers: 41% (moderately inflated)

benchmarking adviceOverhead/instrumented id advice
time                 109.5 ms   (104.9 ms .. 116.4 ms)
                     0.997 R²   (0.993 R² .. 1.000 R²)
mean                 104.2 ms   (102.3 ms .. 106.8 ms)
std dev              3.475 ms   (2.020 ms .. 4.776 ms)

benchmarking adviceOverhead/instrumented locally defined id advice
time                 110.5 ms   (109.5 ms .. 111.4 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 112.3 ms   (111.6 ms .. 113.3 ms)
std dev              1.258 ms   (835.0 μs .. 1.779 ms)
variance introduced by outliers: 11% (moderately inflated)

Results with -O2:

benchmarking adviceOverhead/not instrumented
time                 740.5 μs   (738.7 μs .. 742.2 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 738.6 μs   (737.0 μs .. 740.2 μs)
std dev              5.767 μs   (4.905 μs .. 6.780 μs)

benchmarking adviceOverhead/instrumented id advice
time                 741.6 μs   (739.1 μs .. 744.8 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 742.7 μs   (741.7 μs .. 744.0 μs)
std dev              3.675 μs   (2.682 μs .. 5.001 μs)

benchmarking adviceOverhead/instrumented locally defined id advice
time                 740.2 μs   (738.3 μs .. 742.2 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 741.3 μs   (740.0 μs .. 742.6 μs)
std dev              4.446 μs   (3.607 μs .. 5.692 μs)

It seems that, with -O2, GHC is able to optimize away the overhead. Then again, it's only the identity Advice.
