Fix incremental compilation of runtime/test #8975

Merged
merged 41 commits on Feb 13, 2024
Changes from 38 commits
Commits (41)
3fce708
Move all runtime/test to runtime-tests
Akirathan Feb 5, 2024
77861af
Add a new runtime-tests project
Akirathan Feb 5, 2024
a3a2b2d
Add runtime-tests to the enso aggregate
Akirathan Feb 5, 2024
5c97fcc
Add some GraalVM dependencies to runtime-tests
Akirathan Feb 5, 2024
7673b1f
Remove warning about non-existing directories for module-path
Akirathan Feb 5, 2024
51895ef
Move all runtime/bench to runtime-benchmarks
Akirathan Feb 5, 2024
a94096b
Create new runtime-benchmarks project
Akirathan Feb 5, 2024
69b771e
Remove most of Scala code from runtime-benchmarks
Akirathan Feb 6, 2024
4a5f3da
Add necessary dependencies to runtime-benchmarks project
Akirathan Feb 6, 2024
e27ac91
Fix bench runtime problems
Akirathan Feb 6, 2024
716fa4c
Add withDebug command to runtime-benchmarks
Akirathan Feb 6, 2024
664cf7f
Make sure Rust build script invokes runtime-benchmarks
Akirathan Feb 6, 2024
81c6f7d
fmt
Akirathan Feb 6, 2024
d77bef5
Import Nothing as type, not as module
Akirathan Feb 6, 2024
0f7a244
Add MetaTest that Nothing should be null
Akirathan Feb 6, 2024
48e590a
Remove unnecessary unmanagedClasspath set in runtime-benchmarks
Akirathan Feb 6, 2024
d460add
Inroduce benchmarks-common project
Akirathan Feb 6, 2024
f74a0b3
Remove Scala RegressionTest
Akirathan Feb 6, 2024
b109ead
Merge BenchmarksRunner from bench-processor
Akirathan Feb 7, 2024
0eec816
std-benchmarks use BenchRunner from benchmarks-common
Akirathan Feb 7, 2024
8e05805
runtime-benchmarks has only Compile configuration
Akirathan Feb 7, 2024
a73a492
Update the benchmarks.md docs
Akirathan Feb 7, 2024
be3ec07
Update the sbt command to compile runtime-benchmarks in the Rust buil…
Akirathan Feb 7, 2024
d6e7892
Include runtime-benchmarks and benchmarks-common in the enso aggregate
Akirathan Feb 7, 2024
92ee78c
fmt
Akirathan Feb 7, 2024
8ceb130
Make sure Truffle dsl is run in runtime-tests
Akirathan Feb 7, 2024
df69ccd
Enable JVM assertions in runtime-tests
Akirathan Feb 8, 2024
16fba0d
Rename runtime-tests to runtime-integration-tests
Akirathan Feb 8, 2024
5d57cc2
Add JVM assertions are enabled test
Akirathan Feb 8, 2024
63b1c7b
Merge branch 'develop' into wip/akirathan/8968-fix-runtime-test-build
Akirathan Feb 9, 2024
c36b8fa
Remove Bench configuration from std-benchmarks
Akirathan Feb 9, 2024
de7fd6a
std-benchmarks depends on the runtime-fat-jar assembly
Akirathan Feb 9, 2024
6ed5c85
fmt
Akirathan Feb 9, 2024
f25de91
Fix --add-exports for org.slf4j.nop provider
Akirathan Feb 12, 2024
919606d
runtime-benchmarks/javaOptions does not take value from std-benchmarks.
Akirathan Feb 12, 2024
68a6bf7
fmt
Akirathan Feb 12, 2024
d1618f6
Merge branch 'develop' into wip/akirathan/8968-fix-runtime-test-build
Akirathan Feb 12, 2024
95665cf
runtime-benchmarks/bench depends on buildEngineDistribution
Akirathan Feb 12, 2024
2377540
Update path.yaml in the build script
Akirathan Feb 12, 2024
f873840
Fix bench-report.xml patch in the build script
Akirathan Feb 12, 2024
00fb45e
One more fix bench-report.xml patch in the build script
Akirathan Feb 12, 2024
538 changes: 292 additions & 246 deletions build.sbt

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion build/build/src/engine.rs
@@ -105,7 +105,7 @@ impl Benchmarks {
pub fn sbt_task(self) -> Option<&'static str> {
match self {
Benchmarks::All => Some("bench"),
-Benchmarks::Runtime => Some("runtime/bench"),
+Benchmarks::Runtime => Some("runtime-benchmarks/bench"),
Benchmarks::Enso => None,
Benchmarks::EnsoJMH => Some("std-benchmarks/bench"),
}
2 changes: 1 addition & 1 deletion build/build/src/engine/context.rs
@@ -440,7 +440,7 @@ impl RunContext {
} else {
if self.config.build_benchmarks {
// Check Runtime Benchmark Compilation
-sbt.call_arg("runtime/Benchmark/compile").await?;
+sbt.call_arg("runtime-benchmarks/compile").await?;

// Check Language Server Benchmark Compilation
sbt.call_arg("language-server/Benchmark/compile").await?;
48 changes: 30 additions & 18 deletions docs/infrastructure/benchmarks.md
@@ -13,26 +13,38 @@ To track the performance of the engine, we use
benchmarks:

- [micro benchmarks](#engine-jmh-microbenchmarks) located directly in the
-`runtime` SBT project. These benchmarks are written in Java, and are used to
-measure the performance of specific parts of the engine.
+`runtime-benchmarks` SBT project. These benchmarks are written in Java, and
+are used to measure the performance of specific parts of the engine.
- [standard library benchmarks](#standard-library-benchmarks) located in the
-`test/Benchmarks` Enso project. These benchmarks are entirelly written in
-Enso, along with the harness code.
+`test/Benchmarks` Enso project. These benchmarks are entirely written in Enso,
+along with the harness code.

## Engine JMH microbenchmarks

These benchmarks are written in Java and are used to measure the performance of
-specific parts of the engine. The sources are located in the `runtime` SBT
-project, under `src/bench` source directory.
+specific parts of the engine. The sources are located in the
+`runtime-benchmarks` SBT project, under `engine/runtime-benchmarks` directory.

### Running the benchmarks

-To run the benchmarks, use `bench` or `benchOnly` command - `bench` runs all the
-benchmarks and `benchOnly` runs only one benchmark specified with the fully
-qualified name. The parameters for these benchmarks are hard-coded inside the
-JMH annotations in the source files. In order to change, e.g., the number of
-measurement iterations, you need to modify the parameter to the `@Measurement`
-annotation.
+To run the benchmarks, use `bench` or `benchOnly` command in the
+`runtime-benchmarks` project - `bench` runs all the benchmarks and `benchOnly`
+runs only one benchmark specified with the fully qualified name. The
+aforementioned commands are mere shortcuts to the
+[standard JMH launcher](https://github.com/openjdk/jmh/blob/master/jmh-core/src/main/java/org/openjdk/jmh/Main.java).
+To get the full power of the JMH launcher, invoke simply `run` with cmdline
+options passed to the launcher. For the full options summary, see the
+[JMH source code](https://github.com/openjdk/jmh/blob/master/jmh-core/src/main/java/org/openjdk/jmh/runner/options/CommandLineOptions.java),
+or invoke `run -h`.
+
+You can change the parameters to the benchmarks either by modifying the
+annotations directly in the source code, or by passing the parameters to the JMH
+runner. For example, to run the benchmarks with 3 warmup iterations and 2
+measurement iterations, use:
+
+```
+sbt:runtime-benchmarks> run -w 3 -i 2 <bench-name>
+```
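As an illustrative aside (assuming the sbt shell has already been switched to the `runtime-benchmarks` project), a single benchmark from the `AtomBenchmarks` suite added in this PR could be selected either via `benchOnly` with its fully qualified name, or via the JMH launcher through `run`:

```
sbt:runtime-benchmarks> benchOnly org.enso.interpreter.bench.benchmarks.semantic.AtomBenchmarks.benchSumList
sbt:runtime-benchmarks> run -w 3 -i 2 org.enso.interpreter.bench.benchmarks.semantic.AtomBenchmarks
```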

### Debugging the benchmarks

@@ -94,12 +106,12 @@ the full options summary, either see the
or run the launcher with `-h` option.

The `std-benchmarks` SBT project supports `bench` and `benchOnly` commands, that
-work the same as in the `runtime` project, with the exception that the benchmark
-name does not have to be specified as a fully qualified name, but as a regular
-expression. To access the full flexibility of the JMH launcher, run it via
-`Bench/run` - for example, to see the help message: `Bench/run -h`. For example,
-you can run all the benchmarks that have "New_Vector" in their name with just 3
-seconds for warmup iterations and 2 measurement iterations with
+work the same as in the `runtime-benchmarks` project, with the exception that
+the benchmark name does not have to be specified as a fully qualified name, but
+as a regular expression. To access the full flexibility of the JMH launcher, run
+it via `Bench/run` - for example, to see the help message: `Bench/run -h`. For
+example, you can run all the benchmarks that have "New_Vector" in their name
+with just 3 seconds for warmup iterations and 2 measurement iterations with
`Bench/run -w 3 -i 2 New_Vector`.
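As a concrete illustration of the above (assuming the sbt shell has been switched to the `std-benchmarks` project), printing the launcher help and then running the matching benchmarks could look like:

```
sbt:std-benchmarks> Bench/run -h
sbt:std-benchmarks> Bench/run -w 3 -i 2 New_Vector
```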

Whenever you add or delete any benchmarks from `test/Benchmarks` project, the
@@ -0,0 +1,10 @@
package org.enso.interpreter.bench.benchmarks;

import org.enso.interpreter.bench.BenchmarksRunner;
import org.openjdk.jmh.runner.RunnerException;

public class RuntimeBenchmarksRunner {
  public static void main(String[] args) throws RunnerException {
    BenchmarksRunner.run(args);
  }
}
@@ -0,0 +1,285 @@
package org.enso.interpreter.bench.benchmarks.semantic;

import java.nio.file.Paths;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import org.enso.interpreter.bench.Utils;
import org.enso.polyglot.RuntimeOptions;
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;
import org.graalvm.polyglot.io.IOAccess;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.BenchmarkParams;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
public class AtomBenchmarks {

private static final Long MILLION = 1_000_000L;
private static final String MILLION_ELEMENT_LIST =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

main =
generator fn acc i end = if i == end then acc else @Tail_Call generator fn (fn acc i) i+1 end
res = generator (acc -> x -> List.Cons x acc) List.Nil 1 $million
res
"""
.replace("$million", MILLION.toString());

private static final String GENERATE_LIST_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

main = length ->
generator = acc -> i -> if i == 0 then acc else @Tail_Call generator (List.Cons i acc) (i - 1)

res = generator List.Nil length
res
""";
private static final String GENERATE_LIST_QUALIFIED_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

main = length ->
generator = acc -> i -> if i == 0 then acc else @Tail_Call generator (List.Cons i acc) (i - 1)

res = generator List.Nil length
res
""";
private static final String REVERSE_LIST_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

main = list ->
reverser = acc -> list -> case list of
List.Cons h t -> @Tail_Call reverser (List.Cons h acc) t
List.Nil -> acc

res = reverser List.Nil list
res
""";
private static final String REVERSE_LIST_METHODS_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

List.rev self acc = case self of
List.Cons h t -> @Tail_Call t.rev (List.Cons h acc)
_ -> acc

main = list ->
res = list.rev List.Nil
res
""";
private static final String SUM_LIST_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

main = list ->
summator = acc -> list -> case list of
List.Cons h t -> @Tail_Call summator acc+h t
List.Nil -> acc

res = summator 0 list
res
""";
private static final String SUM_LIST_LEFT_FOLD_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

main = list ->
fold = f -> acc -> list -> case list of
List.Cons h t -> @Tail_Call fold f (f acc h) t
_ -> acc

res = fold (x -> y -> x + y) 0 list
res
""";
private static final String SUM_LIST_FALLBACK_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

main = list ->
summator = acc -> list -> case list of
List.Cons h t -> @Tail_Call summator acc+h t
_ -> acc

res = summator 0 list
res
""";
private static final String SUM_LIST_METHODS_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

List.sum self acc = case self of
List.Cons h t -> @Tail_Call t.sum h+acc
_ -> acc

main = list ->
res = list.sum 0
res
""";
private static final String MAP_REVERSE_LIST_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

List.mapReverse self f acc = case self of
List.Cons h t -> @Tail_Call t.mapReverse f (List.Cons (f h) acc)
_ -> acc

main = list ->
res = list.mapReverse (x -> x + 1) List.Nil
res
""";
private static final String MAP_REVERSE_LIST_CURRY_CODE =
"""
import Standard.Base.Data.List.List
import Standard.Base.Data.Numbers

List.mapReverse self f acc = case self of
List.Cons h t -> @Tail_Call t.mapReverse f (List.Cons (f h) acc)
_ -> acc

main = list ->
adder = x -> y -> x + y
res = list.mapReverse (adder 1) List.Nil
res
""";

private Context context;
private Value millionElementsList;
private Value generateList;
private Value generateListQualified;
private Value reverseList;
private Value reverseListMethods;
private Value sumList;
private Value sumListLeftFold;
private Value sumListFallback;
private Value sumListMethods;
private Value mapReverseList;
private Value mapReverseListCurry;

@Setup
public void initializeBenchmarks(BenchmarkParams params) {
this.context =
Context.newBuilder()
.allowExperimentalOptions(true)
.option(RuntimeOptions.LOG_LEVEL, Level.WARNING.getName())
.logHandler(System.err)
.allowIO(IOAccess.ALL)
.allowAllAccess(true)
.option(
RuntimeOptions.LANGUAGE_HOME_OVERRIDE,
Paths.get("../../distribution/component").toFile().getAbsolutePath())
.build();

var millionElemListMethod = Utils.getMainMethod(context, MILLION_ELEMENT_LIST);
this.millionElementsList = millionElemListMethod.execute();
this.generateList = Utils.getMainMethod(context, GENERATE_LIST_CODE);
this.generateListQualified = Utils.getMainMethod(context, GENERATE_LIST_QUALIFIED_CODE);
this.reverseList = Utils.getMainMethod(context, REVERSE_LIST_CODE);
this.reverseListMethods = Utils.getMainMethod(context, REVERSE_LIST_METHODS_CODE);
this.sumList = Utils.getMainMethod(context, SUM_LIST_CODE);
this.sumListLeftFold = Utils.getMainMethod(context, SUM_LIST_LEFT_FOLD_CODE);
this.sumListFallback = Utils.getMainMethod(context, SUM_LIST_FALLBACK_CODE);
this.sumListMethods = Utils.getMainMethod(context, SUM_LIST_METHODS_CODE);
this.mapReverseList = Utils.getMainMethod(context, MAP_REVERSE_LIST_CODE);
this.mapReverseListCurry = Utils.getMainMethod(context, MAP_REVERSE_LIST_CURRY_CODE);
}

@Benchmark
public void benchGenerateList(Blackhole bh) {
var res = generateList.execute(MILLION);
bh.consume(res);
}

@Benchmark
public void benchGenerateListQualified(Blackhole bh) {
var res = generateListQualified.execute(MILLION);
bh.consume(res);
}

@Benchmark
public void benchReverseList(Blackhole bh) {
var reversedList = reverseList.execute(millionElementsList);
bh.consume(reversedList);
}

@Benchmark
public void benchReverseListMethods(Blackhole bh) {
var reversedList = reverseListMethods.execute(millionElementsList);
bh.consume(reversedList);
}

@Benchmark
public void benchSumList(Blackhole bh) {
var res = sumList.execute(millionElementsList);
if (!res.fitsInLong()) {
throw new AssertionError("Should return a number");
}
bh.consume(res);
}

@Benchmark
public void sumListLeftFold(Blackhole bh) {
var res = sumListLeftFold.execute(millionElementsList);
if (!res.fitsInLong()) {
throw new AssertionError("Should return a number");
}
bh.consume(res);
}

@Benchmark
public void benchSumListFallback(Blackhole bh) {
var res = sumListFallback.execute(millionElementsList);
if (!res.fitsInLong()) {
throw new AssertionError("Should return a number");
}
bh.consume(res);
}

@Benchmark
public void benchSumListMethods(Blackhole bh) {
var res = sumListMethods.execute(millionElementsList);
if (!res.fitsInLong()) {
throw new AssertionError("Should return a number");
}
bh.consume(res);
}

@Benchmark
public void benchMapReverseList(Blackhole bh) {
var res = mapReverseList.execute(millionElementsList);
bh.consume(res);
}

@Benchmark
public void benchMapReverseCurryList(Blackhole bh) {
var res = mapReverseListCurry.execute(millionElementsList);
bh.consume(res);
}
}