Want to be part of this project but don’t know what you can do to help? Have a look at the issues labeled “low-hanging fruit”!
You need the following:
- Git
- JDK 8
- SBT
Also be sure to familiarize yourself with the design.
- Make sure you have signed the Scala CLA.
- You should perform your work in its own Git branch.
- Then open a pull request on GitHub, with `master` as the base branch.
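The branch-and-PR workflow above can be sketched as follows (the branch name `topic/my-fix` is just an example):

```shell
# Create a feature branch for your work (branch name is illustrative)
git checkout -b topic/my-fix

# ... make your changes, then commit them ...
git add .
git commit -m "Describe your change"

# Push the branch to your fork, then open a pull request
# on GitHub with master as the base branch
git push origin topic/my-fix
```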
Have a look at the Waffle.io board to quickly know which issues are ready and which are already in progress.
- `collectionsJVM` / `collectionsJS` (in the `collections/` directory): contains the implementation of the collections;
- `junit` (in the `test/junit/` directory): unit tests;
- `scalacheck` (in the `test/scalacheck/` directory): properties;
- `timeBenchmark` (in the `benchmarks/time/` directory): benchmarks measuring execution time;
- `memoryBenchmark` (in the `benchmarks/memory/` directory): benchmarks measuring the memory footprint of the collections;
- `collections-contribJS` / `collections-contribJVM` (in the `collections-contrib/` directory): implementation of decorators or additional features;
- `collection-strawman` (in the root directory): root project.
- Also, the `scalafix/` directory contains an independent project with the implementation of the migration tool.
- Compile the collections and run the tests:

  ```
  > ;compile; test; junit/test; scalacheck/test
  ```
- Run the memory benchmark:

  ```
  > memoryBenchmark/charts
  ```
- Run the execution time benchmark:

  ```
  > timeBenchmark/charts
  ```
- Charts are produced as `.png` files in the `benchmarks/time/target/` directory. Each `@Benchmark` method produces a `.png` chart with the same name (e.g. the `foreach` benchmark produces a `foreach.png` chart). In each chart, we aggregate results from all the benchmark classes that have a benchmark with the same name (e.g. the `foreach.png` chart aggregates information from `ListBenchmark`’s `foreach` method, `LazyListBenchmark`’s `foreach` method, etc.).
- Running the whole benchmark suite takes time (several hours) and produces charts containing series for each collection type. You can restrict the benchmarks run by JMH to get more readable results. For instance, to run only benchmarks whose name contains `Array`:

  ```
  > timeBenchmark/charts Array
  ```
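To make the naming convention concrete, here is a hedged sketch of two JMH benchmark classes (hypothetical class and field names; the real classes under `benchmarks/time/` are more elaborate) whose same-named `foreach` methods would be aggregated into a single `foreach.png` chart:

```scala
import org.openjdk.jmh.annotations.{Benchmark, Scope, State}
import org.openjdk.jmh.infra.Blackhole

// Hypothetical sketch: two benchmark classes defining a method with
// the same name each contribute one series to the same chart.
@State(Scope.Benchmark)
class ListBenchmarkSketch {
  val xs: List[Int] = List.range(0, 1000)

  @Benchmark
  def foreach(bh: Blackhole): Unit = xs.foreach(x => bh.consume(x))
}

@State(Scope.Benchmark)
class VectorBenchmarkSketch {
  val xs: Vector[Int] = Vector.range(0, 1000)

  @Benchmark
  def foreach(bh: Blackhole): Unit = xs.foreach(x => bh.consume(x))
}
```

The `Blackhole` parameter is the standard JMH idiom to keep the JIT from eliminating the loop body as dead code.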
Several levels of contribution are possible!
Create an issue tagged with the `migration` label.
Use `diff`s to describe differences between the standard collections and the strawman:

```diff
- xs.toIterator
+ xs.iterator()
```
Even better, instead of providing a diff, you can directly add it as a test case!
- Fork this repository and create a separate branch;
- Add a file in the `scalafix/input/src/main/scala/fix/` directory with code that uses the standard collections:

  ```scala
  class toIteratorVsIterator(xs: Iterable[Int]) {
    xs.toIterator
  }
  ```
- Add a corresponding file in the `scalafix/output/src/main/scala/fix/` directory with the same code but using the strawman:

  ```scala
  import strawman.collection.Iterable

  class toIteratorVsIterator(xs: Iterable[Int]) {
    xs.iterator()
  }
  ```
- Check that your code example compiles:
  - locally publish the strawman by running `sbt publishLocal` from the project root directory,
  - run sbt from the `scalafix/` directory and then run the following tasks: `; input/compile ; output/compile`;
- Commit your changes, push your branch to your fork and create a pull request.
Then maybe someone will take over and implement your use case… or maybe you will (see next section)!
Even better, complete the migration tool implementation to support the missing case!
After you have added the missing case (see previous section), run the following sbt task (with sbt started from the `scalafix/` directory) to run the migration tool on the input files and check whether the result matches the expected output files:

```
> tests/test
```
Fix the implementation of the rule (in the `rules/src/main/scala/fix/Collectionstrawman_v0.scala` file) until the tests are green. You can find more help about the scalafix API in its documentation.
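To make the rule’s intent concrete, here is a dependency-free sketch (this is *not* the scalafix API — the real rule rewrites the syntax tree using semantic information, not raw text) of the effect the `toIterator` rewrite has on a source file:

```scala
// Hypothetical simplification: the actual rule uses scalafix, but on this
// pattern its observable effect is equivalent to the textual rewrite below.
object MigrationSketch {
  def rewriteToIterator(source: String): String =
    source.replace(".toIterator", ".iterator()")

  def main(args: Array[String]): Unit = {
    val before = "class C(xs: Iterable[Int]) { xs.toIterator }"
    val after  = rewriteToIterator(before)
    assert(after == "class C(xs: Iterable[Int]) { xs.iterator() }")
    println(after)
  }
}
```

The `tests/test` task performs the semantic equivalent of this check for every file pair under `scalafix/input/` and `scalafix/output/`.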