Instrumenting coverage tool for .net (framework 2.0+ and core) and Mono, reimplemented and extended almost beyond recognition from dot-net-coverage, plus a set of related utilities for processing the results from this and from other programs producing similar output formats.
Never mind the fluff -- how do I get started?
Start with the Quick Start guide
What's in the box?
For Mono, .net framework and .net core, except as noted:

- AltCover, a command-line tool for recording code coverage (including dotnet and global tool versions)
- MSBuild tasks to drive the tool, including `dotnet test` integration
- An API for the above functionality, with Fake and Cake integration
- A PowerShell module (not Mono) containing a cmdlet that drives the tool, and other cmdlets for manipulating coverage reports
- A coverage visualizer tool
  - For .net framework and Mono (for .net framework, needs GTK# v2.12.xx installed separately -- see https://www.mono-project.com/download/stable/#download-win )
  - For .net core (needs GTK+3 installed separately -- for Windows, see e.g. https://github.com/GtkSharp/GtkSharp/wiki/Installing-Gtk-on-Windows)
- General purpose install -- excludes the C# API and FAKE integration
- API install -- excludes the .net Framework/mono/GTK#2 Visualizer
- dotnet CLI tool install -- excludes the visualizer in all forms
- dotnet global tool install -- excludes the visualizer in all forms
- Visualizer dotnet global tool -- just the .net core/GTK#3 Visualizer as a global tool
- FAKE build task utilities -- just AltCover related helper types for FAKE scripts (v5.9.3 or later), only in this package
As the name suggests, it's an alternative coverage approach. Rather than working by hooking the .net profiling API at run-time, it works by weaving the same sort of extra IL into the assemblies of interest ahead of execution. This means that it should work pretty much everywhere, whatever your platform, so long as the executing process has write access to the results file. You can even mix-and-match between platforms used to instrument and those under test.
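To make the ahead-of-execution flow concrete, here is a hedged sketch of the classic two-step command-line usage (directory names are hypothetical, and option spellings should be confirmed against `altcover --help` for the version in use):

```shell
# Step 1: weave the recording IL into copies of the assemblies of interest
altcover --inputDirectory=_Binaries/MyTests --outputDirectory=__Instrumented --xmlReport=coverage.xml

# Step 2: run the instrumented code in "runner" mode; visit counts are
# written back into coverage.xml as the process under test executes
altcover Runner --recorderDirectory=__Instrumented --executable=dotnet -- test --no-build
```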
In particular, while instrumenting .net core assemblies "just works" with this approach, it also supports Mono, as long as suitable `.mdb` (or `.pdb`, in recent versions) symbols are available. One major limitation here is that the `.mdb` format only stores the start location in the source of any code sequence point, and not the end; consequently any nicely coloured reports that take that information into account may render a bit strangely.
Why altcover? -- the back-story of why it was ever a thing
Back in 2010, the new .net version finally removed the deprecated profiling APIs that the free NCover 1.5.x series relied upon. The first version of AltCover was written to both fill a gap in functionality, and to give me an excuse for a ground-up F# project to work on. As such, it saw real production use for about a year and a half, until OpenCover reached a point where it could be used for .net4/x64 work (and I could find time to adapt everything downstream that consumed NCover format input).
Fast forward to autumn 2017, and I get the chance to dust the project off, with the intention of saying that it worked on Mono, too -- and realise that it's déjà vu all over again, because .net core didn't yet have profiler-based coverage tools either, and the same approach would work there as well.
On old-fashioned .net framework, the `ProcessExit` event handling window of ~2s is sufficient for processing significant bodies of code under test (several tens of kloc, as observed in production back in the '10-'11 timeframe); under `dotnet test` the window seems to be rather tighter (about 100ms, experimentally -- about enough for 1kloc). Therefore, the preferred way to perform coverage gathering for .net core, except for the smallest programs, is to run with AltCover in the "runner" mode. By their nature, unit tests invoking significant frameworks are not small programs, even if the system under test is itself small.
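With the `dotnet test` integration, runner-mode collection is enabled through an MSBuild property; sketched below (the property name follows the MSBuild-task documentation, so check it against the installed version):

```shell
# Instrument the test assemblies, run them, and collect coverage in one step;
# the recorder flushes results synchronously, avoiding the narrow ProcessExit window
dotnet test /p:AltCover=true
```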
Under Mono on non-Windows platforms, the default `--debug:pdbonly` setting generates no symbols for F# projects -- and without symbols, such assemblies cannot be instrumented. Unlike with C# projects, where the substitution appears to be automatic, using the necessary `--debug:portable` option involves explicitly hand-editing the old-school `.fsproj` file to have `<DebugType>portable</DebugType>`.
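For illustration, the hand edit amounts to setting the debug type in a property group of the project file (a minimal fragment; the rest of the project file is elided):

```xml
<PropertyGroup>
  <!-- portable symbols are required to instrument F# assemblies under Mono -->
  <DebugType>portable</DebugType>
</PropertyGroup>
```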
See the Wiki page for details
See the current project for details
It is assumed that the following are available:

You will need Visual Studio 2017 (Community Edition) v15.8.latest with F# language support (or just the associated build tools and your editor of choice). The NUnit3 Test Runner will simplify the basic in-IDE development cycle. Note that some of the unit tests expect that the separate build of test assemblies under Mono, full .net framework and .net core has taken place; there will be up to 20 failures when running the unit tests in Visual Studio from clean when those expected assemblies are not found.
For the .net 2.0 support, if you don't already have the .net 2.0 version of FSharp.Core.dll (usually under `Reference Assemblies\Microsoft\FSharp.NETFramework\v2.0`), then you will need to install it -- the Visual F# Tools 4.0 RTM FSharp_Bundle.exe is the most convenient source.
For GTK# support, the GTK# latest 2.12 install is expected -- try https://www.mono-project.com/download/stable/#download-win
It is assumed that `mono` (version 5.14.x) and `dotnet` are already on the `PATH`, and everything is built from the command line, with your favourite editor used for coding.
Start by setting up `dotnet fake` with `dotnet restore dotnet-fake.fsproj`, then `dotnet fake run ./Build/setup.fsx` does the rest of the set-up.

`dotnet fake run ./Build/build.fsx` performs a full build/test/package process; use `dotnet fake run ./Build/build.fsx --target <targetname>` to run to a specific target.
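Taken together, the bootstrap and a full build run look like this from the repository root:

```shell
dotnet restore dotnet-fake.fsproj        # set up the fake CLI tool
dotnet fake run ./Build/setup.fsx        # complete the one-time set-up
dotnet fake run ./Build/build.fsx        # full build/test/package process
```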
If the build fails
If there's a passing build on the CI servers for this commit, then it's likely to be one of the intermittent build failures that can arise from the tooling used. The standard remedy is to try again.
The tests in the `Tests.fs` file are ordered in the same dependency order as the code within the AltCover project (the later `Runner` tests aside). While working on any given layer, it would make sense to comment out all the tests for later files, so as to show what is and isn't being covered by explicit testing, rather than merely being cascaded through.