BoneJ Ops

Richard Domander edited this page Sep 27, 2018 · 7 revisions

One of the most important goals of the project is to make the algorithms in BoneJ reusable. Rewriting them as ImageJ ops is the best way to achieve this - the mantra of the framework is "write once, run anywhere". This page first discusses what kind of code makes for a good op, then goes into more detail on special ops and op matching. Finally, it explains how to first add an op to BoneJ and then move it to imagej-ops.

Op Design

An op should be a generic algorithm that is usable for all kinds of images, independent of their subject or domain. Ideally an op should work for all n-dimensional images, but in practice the implementation can become too complicated to achieve this. Also, some algorithms are only defined for a certain dimensionality. However, you can write several ops that implement the same method for different kinds of images. The matching system (see below) will automatically select the one that matches the image, so in "user-land" the code looks more or less the same. For an example of an n-dimensional op, see BoxCount; for a 3D op, see EulerCharacteristic26NFloating.

A good candidate for an op is an algorithm for a problem that has many alternative solutions, because any op can be overridden by another implementation. Your op may implement an algorithm for a specific type of image, such as Img&lt;BitType&gt;, or a certain heuristic for a problem, such as Dijkstra's algorithm for finding the shortest path. If you implement the Contingent interface, you'll have more fine-grained control over when your op can run.

Ops are low-level operations, and shouldn't deal with higher-level concerns. For example, an op may check that its input image has a certain number of dimensions, but it shouldn't worry about whether those dimensions are spatial or not. Furthermore, ops often handle such abstract types, e.g. RandomAccessibleInterval, that they don't have the necessary metadata available. Thus, if necessary, the types of the image dimensions (available e.g. in the ImgPlus class) should be checked higher up, before the op is called. In fact, to be reusable, ops should have input types that are as generic as possible and output types that are as specific as possible.
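The "generic inputs, specific outputs" guideline can be sketched with plain Java generics. The class and method below are hypothetical illustrations, not imagej-ops API:

```java
import java.util.List;

// Hypothetical example of "generic in, specific out": the parameter accepts
// a List of any Number subtype, while the return type is a concrete double.
public class GenericInSpecificOut {
    static double sum(final List<? extends Number> values) {
        double total = 0;
        for (final Number n : values) total += n.doubleValue();
        return total;
    }
}
```

The wider the input type, the more callers can reuse the method; the narrower the output type, the less casting those callers have to do with the result.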

Special Ops

There are several special types of op that allow more flexible usage. First you decide whether you need a NullaryOp, UnaryOp or BinaryOp. These terms refer to the number of "primary" inputs: 0, 1 or 2. The op may have more inputs, but they don't work the same way with the matching system (see below). Then you choose between FunctionOp, ComputerOp, InplaceOp and HybridOp. A FunctionOp creates a new output, whereas a ComputerOp fills the answer into a pre-existing output, for example to avoid allocating memory for a large image buffer. An InplaceOp changes, or mutates, the input, and a HybridOp is any combination of the three types. For example, a UnaryHybridCF is a special op that takes one input, and can either create a new output or use a pre-allocated one.
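The calling conventions of the three basic flavours can be sketched with stdlib-only Java. These simplified interfaces are hypothetical stand-ins - the real imagej-ops interfaces (UnaryFunctionOp, UnaryComputerOp, UnaryInplaceOp, and the hybrids) also carry matching metadata:

```java
// Simplified, hypothetical analogues of the special-op flavours,
// shown by doubling every element of a double[] in each style.
public class SpecialOpFlavours {

    // Function: allocates and returns a brand-new output.
    interface UnaryFunction<I, O> { O calculate(I input); }

    // Computer: writes the result into a pre-allocated output buffer.
    interface UnaryComputer<I, O> { void compute(I input, O output); }

    // Inplace: mutates the input argument itself.
    interface UnaryInplace<A> { void mutate(A arg); }

    static final UnaryFunction<double[], double[]> doubleFunction = in -> {
        final double[] out = new double[in.length]; // new output allocated here
        for (int i = 0; i < in.length; i++) out[i] = in[i] * 2;
        return out;
    };

    static final UnaryComputer<double[], double[]> doubleComputer = (in, out) -> {
        for (int i = 0; i < in.length; i++) out[i] = in[i] * 2;
    };

    static final UnaryInplace<double[]> doubleInplace = in -> {
        for (int i = 0; i < in.length; i++) in[i] *= 2;
    };
}
```

A hybrid would simply implement more than one of these calling conventions, letting the caller choose whether to pass a pre-allocated output.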

Op matching

The Ops framework is centred around the concept of matching. You can ask the system to find you a specific implementation of a certain type of op that matches your inputs. For example, you can write

imageJ.op().math().add(1, 5);
imageJ.op().math().add(1.0, 2.0);

where the first call finds you an op that implements add for two integers, and the second one for two doubles. This matching mechanism allows you to write even more abstract code than interfaces do, since you don't even have to know the specific signature of the implementation you need.
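A compile-time analogue of this selection is plain Java method overloading, where the compiler picks the signature that fits the arguments; the op matcher does something similar, but at runtime. The class below is a hypothetical illustration, not imagej-ops code:

```java
// Hypothetical overloading analogue of op matching: each overload plays the
// role of one op implementation, selected purely by the argument types.
public class AddOverloads {
    static int add(final int a, final int b) { return a + b; }          // picked for add(1, 5)
    static double add(final double a, final double b) { return a + b; } // picked for add(1.0, 2.0)
}
```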

The example above is possible because there are op methods for the Add-type ops in the Math namespace. In imagej-ops each op must be of a certain type, be in a namespace, and have an op method. This allows you to call ops with the opService.nameSpace().opType() syntax as well as with the more generic opService.run(MyOp.class) call.

The idea behind matching is that the system finds you an op of a certain type that matches your inputs. You can let it decide which implementation to use, or ask it to find a specific one. The matching system chooses the implementation based on the op type and class, the inputs, and other rules, such as the Contingent.conforms() method.

Special ops allow pre-matching, which can improve performance. This is especially handy if you want to call the same op repeatedly. You pre-match an op by calling one of the convenience methods in the Functions, Computers, Inplaces or Hybrids classes. For example, to match EulerCharacteristic26NFloating, you call

Hybrids.unaryCF(opService, EulerCharacteristic26NFloating.class, DoubleType.class, interval);

and store the result in a variable of type UnaryHybridCF&lt;RandomAccessibleInterval&lt;BitType&gt;, DoubleType&gt;. In this case the interval argument has to be a concrete instance of a class that implements RandomAccessibleInterval. It cannot be RandomAccessibleInterval.class, because the op implements Contingent, and this particular implementation would throw a NullPointerException if the first input weren't concrete.
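The performance intuition behind pre-matching can be shown with a stdlib-only sketch: resolve the operation once (here a map lookup stands in for the op matcher) and reuse the cached instance in a loop, instead of paying the lookup cost on every call. All names here are hypothetical:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Hypothetical sketch of pre-matching: the Map lookup plays the role of the
// (comparatively expensive) op matcher; it runs once, outside the loop.
public class PreMatched {
    private static final Map<String, ToDoubleFunction<double[]>> OPS =
        Map.of("sum", in -> Arrays.stream(in).sum());

    static double[] sumAll(final double[][] images) {
        // "Match" the op once...
        final ToDoubleFunction<double[]> sum = OPS.get("sum");
        final double[] results = new double[images.length];
        // ...then call the cached instance repeatedly.
        for (int i = 0; i < images.length; i++) results[i] = sum.applyAsDouble(images[i]);
        return results;
    }
}
```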

With special ops you can also "lock" certain inputs. For example, after matching Rotate3d you can call it either via calculate(Vector3d) or calculate(Vector3d, Quaterniondc). In the former case the op uses the same rotation quaternion it was matched with; only the vector being operated on changes.
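Locking an input is essentially partial application: the value supplied at matching time is captured and reused on every subsequent call. A stdlib-only sketch, with hypothetical names:

```java
import java.util.function.BinaryOperator;
import java.util.function.UnaryOperator;

// Hypothetical sketch of input locking: a binary operation plus a fixed
// second argument yields a unary call, much like calling Rotate3d with
// calculate(Vector3d) after matching it with a quaternion.
public class LockedInput {
    static UnaryOperator<Double> lockSecond(final BinaryOperator<Double> op, final double locked) {
        return a -> op.apply(a, locked); // the locked value is captured once
    }
}
```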

BoneJ Ops

The first step in implementing a new op is adding it to the bonej-ops artefact. It's a good idea to keep the code in the same repository at first to give its design time to mature. Changing an implementation that already lives in an external repository is more difficult, and certainly slower. That's why you should take some time to check that an op fits the needs of the wrapper plug-ins. The downside is that first writing an op into bonej-ops and then submitting it to imagej-ops requires a bit of extra effort compared to coding it just once.

When reimplementing the Fractal dimension plug-in, I first created a single op, which took in a binary image and returned a collection of points that a linear curve could be fitted to. However, in the process I realized that the method has an independent pre-step: first it hollows the objects in the input image. Thus, I created a separate op called Outline, which works on any n-dimensional image. Fractal dimension first calls the Outline op, and then feeds the result to the BoxCount op, which calculates the collection of points mentioned earlier. This refined design shows the value of first keeping the implementation within the BoneJ project.

Contributing to imagej-ops

When an op is ready to migrate from BoneJ to imagej-ops, you first have to decide which namespace suits it best. After you've forked imagej-ops, the implementation starts by adding the op to the selected namespace in templates/net/imagej/ops/Ops.list. Then you build the project with mvn clean package, which generates the necessary interfaces automatically. Let's say you added myOp to the morphology namespace in Ops.list. Then your class should have the annotation @Plugin(type = Ops.Morphology.MyOp.class). You also need to add op methods to the MorphologyNamespace class. If myOp is a UnaryFunctionOp that takes in a RandomAccessibleInterval&lt;BitType&gt; and returns the same type, then it's enough to add

@OpMethod(op = net.imagej.ops.morphology.MyOp.class)
public RandomAccessibleInterval<BitType> myOp(final RandomAccessibleInterval<BitType> in) {
    return (RandomAccessibleInterval<BitType>) ops().run(net.imagej.ops.Ops.Morphology.MyOp.class, in);
}

The op method makes it possible to call imageJ.ops().morphology().myOp(interval). Finally, in the unit tests, you extend the AbstractOpTest class. For examples, see my past pull requests to imagej-ops.

If there isn't a suitable namespace for your new op, you can add a new one. The only difference in the PR is that you also add the new namespace to the Ops.list file. However, I'd say adding a new namespace is more likely to spark discussion about the PR before it's merged. NB: at the time of writing, imagej-ops is in the process of splitting into several different modules. For example, some ops will move to scijava-ops, which will host algorithms that aren't limited to just image processing. As far as I know, adding code to the new modules shouldn't be too different from contributing to imagej-ops.