
Feature serialization and incremental updates - take 2. #106

Merged
merged 38 commits into master from serialize-feature-model2 on Sep 18, 2019

Conversation

@tinevez (Member) commented Jul 10, 2019

Dear Tobias,

Here is my latest attempt at getting the feature serialization right. For several of the projects that use Mastodon and that I could witness or pilot, it turned out to be an important feature of the scientific workflows we want to address.

I have tried to put this PR in a shape that can be recycled for the documentation or the Materials and Methods of the future paper. Still, in its current shape it is addressed to you.

Feature serialization and incremental updates.

Mastodon has the central ambition to harness the possibly very large data generated when analyzing large images. The Mastodon Feature framework offers numerical (and non-numerical) data defined for the objects of the Mastodon-app model, which can be defined and created by third-party developers. It should abide by Mastodon's ambition and facilitate harnessing large data too.

Feature serialization.

Some features can take very long to compute on large models, e.g. the spot gaussian-filtered intensity. Roughly speaking, a batch of 1000 spots takes about a second to compute. The time invested in computing these feature values should not be lost when the model is reloaded in a later session, so the Mastodon-app must offer serialization of feature values along with the model.

I remember that in my first attempt you did not like that the feature computer or the feature classes themselves had information and/or methods related to serialization. To design the current solution I simply adapted the design we already have for feature computers.

Feature serializers.

The central interface for classes that can de/serialize a feature is org.mastodon.feature.io.FeatureSerializer. It is typed with the feature type and the feature target type (vertex or edge):

public interface FeatureSerializer< F extends Feature< O >, O > extends SciJavaPlugin

It is a SciJavaPlugin because we want feature serializers to be discoverable by a specialized service, just like feature computers. The interface defines 3 methods:

public FeatureSpec< F, O > getFeatureSpec();

because we want to know the spec of the feature we can de/serialize. There are also two de/serialization methods based on object streams:

public void serialize( F feature, ObjectToFileIdMap< O > idmap, ObjectOutputStream oos );

public F deserialize( final FileIdToObjectMap< O > idmap, final RefCollection< O > pool, ObjectInputStream ois );

Note that the deserialize method returns the exact feature type, and not a super-class. We want serializers to produce an exact feature of the right class, so we will need one feature serializer for every feature we define. This means we cannot write generic serializers for generic features.

Here is what a serializer looks like for a simple feature like SpotNLinksFeature:

@Plugin( type = SpotNLinksFeatureSerializer.class )
public class SpotNLinksFeatureSerializer implements FeatureSerializer< SpotNLinksFeature, Spot >
{

	@Override
	public Spec getFeatureSpec()
	{
		return SpotNLinksFeature.SPEC;
	}

	@Override
	public void serialize( final SpotNLinksFeature feature, final ObjectToFileIdMap< Spot > idmap, final ObjectOutputStream oos ) throws IOException
	{
		final IntPropertyMapSerializer< Spot > propertyMapSerializer = new IntPropertyMapSerializer<>( feature.map );
		propertyMapSerializer.writePropertyMap( idmap, oos );
	}

	@Override
	public SpotNLinksFeature deserialize( final FileIdToObjectMap< Spot > idmap, final RefCollection< Spot > pool, final ObjectInputStream ois ) throws IOException, ClassNotFoundException
	{
		final IntPropertyMap< Spot > map = new IntPropertyMap<>( pool, -1 );
		final IntPropertyMapSerializer< Spot > propertyMapSerializer = new IntPropertyMapSerializer<>( map );
		propertyMapSerializer.readPropertyMap( idmap, ois );
		return new SpotNLinksFeature( map );
	}
}

Nothing special; we reuse the property map serializers you made for the model serialization. Note however that the serializer must have access to said property map for serialization (hence the default visibility of the final field map) and to a constructor that accepts a property map for deserialization (also default visibility). Note the @Plugin annotation; this is how the feature serializer service will pick it up.

This closely resembles the FeatureComputer framework. The cool thing is that a feature does not have to know it is serializable, and I could make most of the Mastodon-app features serializable without modifying the feature classes. Now a feature with a computer and a serializer looks like this in Eclipse:

[Screenshot: the SpotNLinksFeature class shown in Eclipse together with its feature computer and feature serializer.]

The feature serializer service.

Again, we emulate what we have for feature computation, but simpler. There is a FeatureSerializationService interface that has a default implementation in DefaultFeatureSerializationService. Both of them are generic in terms of target object class.

The interface defines a single method

public FeatureSerializer< ?, ? > getFeatureSerializerFor( final FeatureSpec< ?, ? > spec );

that returns a feature serializer for a given feature specification. That's it.
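For illustration, here is a minimal usage sketch. It assumes a SciJava Context is at hand (one is created here just for the example); the only service method used is the one quoted above:

// Minimal usage sketch: look up the serializer registered for a feature spec.
final Context context = new Context();
final FeatureSerializationService fss = context.getService( FeatureSerializationService.class );
final FeatureSerializer< ?, ? > serializer = fss.getFeatureSerializerFor( SpotNLinksFeature.SPEC );
if ( null == serializer )
	System.out.println( "No serializer discovered for the Spot N links feature." );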

Feature serialization in Mastodon-app.

Now, let's orchestrate serialization and deserialization of features for the Mastodon-app, along with the Model serialization.

Serialization.

I modified the ProjectWriter interface in the MamutProject class so that it now has a new method:

OutputStream getFeatureOutputStream( String featureKey ) throws IOException;

This method should return a new output stream for the feature with the specified key. In practice, both the folder and the zip versions of the writer create a features folder, and store feature data in files named after the feature key with a .raw extension. For instance, after serialization you will find the following in a .mastodon zip file or in a project folder:

$ ls -1
	features/
	model.raw
	project.xml
	tags.raw
$ ls -1 features/
	Link displacement.raw
	Link velocity.raw
	Spot N links.raw
	Spot gaussian-filtered intensity.raw
	Spot track ID.raw
	Track N spots.raw

The actual serialization logic happens in the class MamutRawFeatureModelIO and requires

  • the SciJava Context (to get the feature serialization service),
  • the FeatureModel,
  • the GraphToFileIdMap< Spot, Link > idmap generated when serializing the model,
  • and the ProjectWriter (used to generate the output streams for each feature).

Because of these arguments, this method is called in the ProjectManager class, in the saveProject(File projectRoot) method.

Also, because we need the GraphToFileIdMap< Spot, Link >, I changed the model.saveRaw( writer ) method to return it instead of void. I could not find a way to call the feature serialization logic directly in this method, mainly because we need the SciJava Context, which is not a field of the Model class. Another possibility would be to pass the FeatureSerializationService to the saveRaw() method.
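To make the orchestration concrete, here is a rough sketch of the save path, assuming a static helper in MamutRawFeatureModelIO that takes the four arguments listed above (the exact method names used here are illustrative, not verbatim):

// Rough sketch of the save path in ProjectManager (names are illustrative).
try ( final MamutProject.ProjectWriter writer = project.openForWriting() )
{
	// Serializing the model now returns the id map needed by the feature serializers.
	final GraphToFileIdMap< Spot, Link > idmap = model.saveRaw( writer );
	// Serialize every feature declared in the feature model, one output stream per feature.
	MamutRawFeatureModelIO.serialize( context, model.getFeatureModel(), idmap, writer );
}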

Deserialization.

The deserialization happens in a similar way, but with extra logic.

First we need to know which features to deserialize. So the ProjectReader interface has a new method

Collection< String > getFeatureKeys();

that returns the keys of the features saved in the project. From each of these keys, the ProjectReader can generate an input stream with the method

InputStream getFeatureInputStream( String featureKey ) throws IOException;

​ The deserialization logic also happens in the MamutRawFeatureModelIO class and is also called from the ProjectManager class. Here is how:

  1. First we get a FeatureSerializationService.
  2. From the ProjectReader, we get the list of feature keys to deserialize.
  3. Then we get a FeatureSpecsService to retrieve feature specs from feature keys.
  4. For each of the feature keys in the project, we try to get a feature spec.
  5. If we can, we try to get a FeatureSerializer for the spec.
  6. If we can, we get the target class of the feature. Depending on whether it is a Spot feature or an Edge feature, we pass the correct FileIdToObjectMap< O > instance to the serializer.
  7. The serializer returns a deserialized instance of the right class, which we declare in the FeatureModel.

That's it.
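Put together, the loading side could look roughly like the sketch below; the FeatureSpecsService lookup method and the way the Spot or Link id map is selected are assumptions, so those parts are kept as comments:

// Illustrative sketch of the deserialization loop (assumed method names).
final FeatureSerializationService fss = context.getService( FeatureSerializationService.class );
final FeatureSpecsService specsService = context.getService( FeatureSpecsService.class );
for ( final String key : reader.getFeatureKeys() )
{
	final FeatureSpec< ?, ? > spec = specsService.getSpec( key ); // assumed lookup method.
	if ( null == spec )
		continue; // unknown feature: skip it.
	final FeatureSerializer< ?, ? > serializer = fss.getFeatureSerializerFor( spec );
	if ( null == serializer )
		continue; // no serializer found: skip it.
	try ( final ObjectInputStream ois = new ObjectInputStream( reader.getFeatureInputStream( key ) ) )
	{
		// Depending on the target class of the spec (Spot or Link), pass the
		// matching FileIdToObjectMap and RefCollection to the serializer, then:
		// featureModel.declareFeature( serializer.deserialize( idmap, pool, ois ) );
	}
}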

Example.

The file org.mastodon.mamut.feature.SerializeFeatureExample in src/test/java gives an example of serialization / deserialization of features.

Incremental updates during feature computation.

Serializing features is a good first step to conveniently harness large data. Thanks to serialization, the time spent on feature computation is not lost when we save and reload the data. However, it is not enough on its own.

Mastodon is not limited to being a viewer of large data; it allows editing the model at any scale. You can run the detection and linking algorithms and create a large amount of data, but you can also edit single spots, for instance changing the position of one spot within a model made of several million of them.

The possibility to edit single vertices or edges - or point-wise editing - in Mastodon creates a challenge for feature computation. Contrary to TrackMate, the Mastodon-app does not keep the features in sync with point-wise editing. Feature computation must be triggered manually by the user. This is a choice we made based on our experience with TrackMate, which becomes much less responsive when editing large models. Therefore, as soon as we make an edit, the feature values become invalid until the user recomputes them. This is fine, but if the model is very large, we then need to spend a large amount of time on feature computation, while possibly only a small number of objects have changed. The feature incremental update mechanism aims at solving this problem.

To rely on incremental updates, a feature computer has to declare a dependency on a special feature that returns the collection of vertices or edges that changed since the last time the feature was computed. This way, it can process only these objects instead of the full model.

I give some details of the incremental update mechanism below, in reverse order of how it works:

  1. First, how it is used in feature computers that want to rely on incremental updates.
  2. A word of caution about using incremental feature update with features that can be serialized.

These first two paragraphs are enough for developers that want to implement their own feature computer based on incremental updates. The following two give information about how the incremental update mechanism works in the Mastodon-app.

  3. How the update stack objects are built when the user edits the model and triggers feature computation.

  4. How the Mastodon-app wires listeners for model changes to the update stacks so that they are built properly.

Using incremental updates in a feature computer.

Roughly speaking, a feature computer that uses incremental updates looks like the following. It has a dependency on a special input, declared with the SciJava @Parameter annotation:

@Parameter
private SpotUpdateStack stack;

This update stack contains the collection of spots that changed since the last feature computation. A similar class exists for links. To get the changes for our feature, we need to call:

Update< Spot > changes = stack.changesFor( FEATURE_SPEC );

where FEATURE_SPEC is the feature specification object. If the value of changes is null, the feature values must be recomputed for the full model. Otherwise, the changes can be retrieved and used for computation. The Update class has two main public methods:

public RefSet< O > get();

public RefSet< O > getNeighbors();

The get() method returns the collection of objects that were modified (created, moved or edited). The getNeighbors() method returns the collection of the neighbors of the objects that were modified. For instance, if you move a spot:

  • it will be added to the collection returned by get() of the spot update;
  • all its edges will be added to the collection returned by getNeighbors() of the link update.

It is up to the feature computer to decide how to use these collections to recompute values. For instance, a generic feature computer that uses the incremental computation mechanism for spots would work like this:

@Override
public void run()
{
  final Update< Spot > changes = stack.changesFor( SPEC );
  final Iterable< Spot > vertices = ( null == changes )
    ? model.getGraph().vertices()
    : changes.get();

  // Compute values.
  for ( final Spot s : vertices )
  {
    double val = ...
    output.map.set( s, val );
  }
}

The SpotGaussFilteredIntensityFeatureComputer is an example of such a feature computer (its logic is a bit more complicated because it sorts the spots per frame before computation).

Incremental update and feature serialization.

Note that feature computers that use incremental feature update must always operate on the same feature instance. Otherwise, the feature exposed in the feature model would only contain values for the last incremental update.

So special care must be taken when using incremental update on features that can be serialized. Feature deserialization produces a new instance of the feature, and the feature computer #createOutput() method can produce another one, resulting in a conflict. For this reason, it is wise for the feature computer #createOutput() method to check whether an instance of the desired feature already exists in the feature model.

For instance in the SpotGaussFilteredIntensityFeatureComputer we have:

@Override
public void createOutput()
{
	if ( null == output )
	{
		// Try to get it from the FeatureModel, if we deserialized a model.
		final Feature< ? > feature = model.getFeatureModel().getFeature( SpotGaussFilteredIntensityFeature.SPEC );
		if ( null != feature )
		{
			output = ( SpotGaussFilteredIntensityFeature ) feature;
			return;
		}
		// Otherwise create a new one.
		// ...
		output = new SpotGaussFilteredIntensityFeature( means, stds );
	}
}

The same caution must be applied to features that are not computed (with a FeatureComputer) but are updated elsewhere by other processes. For instance, the DetectionQualityFeature (in mastodon-tracking) keeps track of the quality value of the spots created by a SpotDetectorOp. It is serializable and therefore has a static method that works similarly:

public static final DetectionQualityFeature getOrRegister( final FeatureModel featureModel, final RefPool< Spot > pool )
{
  final DetectionQualityFeature feature = new DetectionQualityFeature( pool );
  final DetectionQualityFeature retrieved = ( DetectionQualityFeature ) featureModel.getFeature( feature.getSpec() );
  if ( null == retrieved )
  {
    featureModel.declareFeature( feature );
    return feature;
  }
  return retrieved;
}

Building an incremental update stack.

In this paragraph we explain how the SpotUpdateStack and LinkUpdateStack are built as feature computation happens. We assume that they are wired to listeners that update them with model changes, and we will describe how it is done in the next paragraph.

A difficulty in returning the right changes to a feature computer is that, in the Mastodon-app, the user is free to select which features they want to be computed. So the feature model can be up to date for some features and not for others. So when we call

Update< Spot > changes = stack.changesFor( FEATURE_SPEC );

in the feature computer, the update stack must return the collection of spots that were changed or added in the model since the last time the feature with spec FEATURE_SPEC was calculated. For instance, consider a model made of only two spots s1 and s2, for which two features named A and B can be computed, both using the incremental update mechanism:

[Figure: timeline from t0 to t8 showing how the update stack is built while features A and B are computed for spots s1 and s2.]

The update stack is the core component of the incremental update mechanism. It is made of a stack of update items. Each update item works like a map entry: the key is a collection of feature specs, and the value is a collection of graph objects (vertices for the SpotUpdateStack) that were modified or added since those features were calculated. In reality it is more a pair than a map, but I will use this vocabulary here.
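Conceptually, one update item can be pictured like this (a sketch of the idea only, not the actual class layout in the code):

// Conceptual sketch of one update item (not the actual class layout).
class UpdateItem< O >
{
	// Key: the feature specs that were brought up-to-date when this item was pushed.
	final Set< FeatureSpec< ?, ? > > computedSpecs = new HashSet<>();

	// Value: what changed in the model since this item was pushed.
	RefSet< O > modified;  // objects created, moved or edited.
	RefSet< O > neighbors; // neighbors of the modified objects.
}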

In the example described in the drawing above, at t0 the features have not been computed yet. The update stack is initialized with a single item with an empty key and an empty value.

At t1, the user triggers a computation of both features A and B. Because none of the feature specs for A and B can be found in the update stack, the #changesFor( FEATURE_SPEC ) method returns null, which triggers a computation of the feature values for all the spots of the model.

At t2, once the computation is finished, a new update item is pushed onto the stack. It is initialized with an empty collection as value, and the specs of the features that were calculated (A and B) are stored as key.

At t3 the user moves the spot s1. Because listeners are wired to the update stack, s1 is added to the value collection of the top item in the stack, that is, the one with the A and B specs as key that was created after the A and B computation. Both features A and B are marked as not up-to-date.

At t4 the user triggers the computation of feature A only. Since the feature computer for A uses incremental feature update, it queries the changes for its feature. A call for #changesFor( A ) does the following:

  • The method iterates through the update stack items, top to bottom.
  • The first update item is inspected for its key. The key is a collection of feature specs.
  • Since the first key contains the feature specs for A, iteration stops, and the method returns the value of this item.
  • This value is a collection simply made of the spot s1, so the feature computer recomputes the feature value only for s1.

At t5, after computation, a new update item is added to the update stack. Since we computed only A, its key contains only A specs. As before, the item is initialized with an empty collection as value. A is marked up-to-date.

At t6 the user moves the spot s2. As before, it is added to the value collection of the top item in the stack. This time, it is the one with A specs as key.

Now at t7 the user wants to compute feature B, which is not up-to-date since t2. Again, the feature computer for B uses incremental feature computation. The call to #changesFor( B ) does the following:

  • The method iterates through the update stack items, top to bottom.
  • The first update item does not contain the B specs, so we move down to the next item.
  • The second item contains the B specs. We stop there.
  • Now that we need to return the collection of spots to update, we iterate back to the first item, bottom to top:
  • First we take the collection of spots for the second item: s1.
  • Then we move up to the first update item in the stack, which contains s2, and concatenate with it. We end up with a collection made of { s1, s2 }.
  • We recompute B only for s1 and s2 which makes it up-to-date.

After this computation (t8) a new update item is pushed to the update stack, with B specs as key.

And it goes on like this. If, after the steps exemplified here, the user were to recompute all features, the changes for B would be empty, and the changes for A would be built by iterating down to the second item in the stack, which contains only s2.

The stack itself has a limited capacity. It stores 10 update items; after that, the oldest items are discarded. This results in triggering a full computation for 'forgotten' feature updates.
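Putting the walkthrough above into code, a minimal sketch of the lookup could read as follows; the field names, the indexed access to the stack and the Update methods beyond get() / getNeighbors() are assumptions:

// Illustrative sketch of #changesFor( spec ); names are assumptions.
public Update< O > changesFor( final FeatureSpec< ?, ? > spec )
{
	// Find the most recent item whose key contains the spec, iterating top to bottom.
	int found = -1;
	for ( int i = 0; i < stack.size(); i++ )
	{
		if ( stack.get( i ).getComputedSpecs().contains( spec ) )
		{
			found = i;
			break;
		}
	}
	// Spec never computed, or forgotten after 10 items: caller must recompute everything.
	if ( found < 0 )
		return null;

	// Concatenate the changes of that item and of every item above it.
	final Update< O > changes = new Update<>();
	for ( int i = found; i >= 0; i-- )
		changes.concatenate( stack.get( i ).getChanges() );
	return changes;
}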

Registering the incremental update in feature computation.

The following describes how we provide the changes to the feature computers, specifically for the Mastodon-app (Spot and Link).

When the MamutFeatureComputerService receives the Model instance to operate on, first the update stacks are created:

FeatureModel featureModel = model.getFeatureModel();
SpotUpdateStack spotUpdates = SpotUpdateStack.getOrCreate( featureModel, graph.vertices() );
LinkUpdateStack linkUpdates = LinkUpdateStack.getOrCreate( featureModel, graph.edges() );

They are created via static methods getOrCreate(), and we will explain why later.

We then add several listeners to the graph:

ModelGraph graph = model.getGraph();
graph.addGraphListener( GraphFeatureUpdateListeners.graphListener( spotUpdates, linkUpdates, graph.vertexRef() ) );
// Listen to changes in spot properties.
SpotPool spotPool = ( SpotPool ) graph.vertices().getRefPool();
PropertyChangeListener< Spot > vertexPropertyListener = GraphFeatureUpdateListeners.vertexPropertyListener( spotUpdates, linkUpdates );
spotPool.covarianceProperty().addPropertyChangeListener( vertexPropertyListener );
spotPool.positionProperty().addPropertyChangeListener( vertexPropertyListener );

These listeners are defined in the GraphFeatureUpdateListeners class and feed changes of the graph to the two update stacks. For instance, the one that listens to vertex properties is as follows:

private static final class MyVertexPropertyChangeListener< V extends Vertex< E >, E extends Edge< V > > implements PropertyChangeListener< V >
{

    private final UpdateStack< V > vertexUpdates;

    private final UpdateStack< E > edgeUpdates;

    public MyVertexPropertyChangeListener( final UpdateStack< V > vertexUpdates, final UpdateStack< E > edgeUpdates )
    {
        this.vertexUpdates = vertexUpdates;
        this.edgeUpdates = edgeUpdates;
    }

    @Override
    public void propertyChanged( final V v )
    {
        vertexUpdates.addModified( v );
        for ( final E e : v.edges() )
            edgeUpdates.addNeighbor( e );
    }
}

This ensures that the two update stacks will receive the changes.

Also, the UpdateStack class, which is the superclass of the spot and link update stacks, implements Feature< O >:

public abstract class UpdateStack< O > implements Feature< O >

This will be important for serialization, as we will see below. It also has the advantage that we do not have to do anything special to provide it to feature computers: since it is a feature, it will be provided to feature computers that declare it as a dependency, like any other feature.


Serialization of the incremental update state.

The incremental update mechanism brings new programming challenges when combined with feature serialization.

Indeed, feature computation is triggered by the user on demand, so the feature values might not be up to date with the model when we serialize them. In such cases, the features have to be recomputed for the whole model after reloading. This voids the advantage brought by incremental feature computation: we lose the benefit of recomputing features only for the spots and links that were modified, and the time spent on computing is lost after saving the model.

The solution is evident: we have to serialize the update stacks along with the model and the feature values. This is what is currently done when the model is saved. We give here the details of how this happens.

Update stacks objects are features.

SpotUpdateStack and LinkUpdateStack both inherit from UpdateStack< O >, which implements Feature< O >. So these objects are features. However, they have 0 feature projections, cannot be used in a feature color mode, and are not displayed in the data table. They are features because this makes it very convenient to:

  • store them somewhere that makes sense in the application. They are declared in the FeatureModel and can be retrieved from it by consumers that need to play with the FeatureModel anyway.

  • provide them as an input to feature computers, using @Parameter private SpotUpdateStack stack; as seen above. The feature computation service will handle them with no added logic.

  • serialize them along with the other features. This is what we will see now.

We could give them meaningful projections. For instance - and that would be helpful for debugging - we could have one projection per update item in the stack that returns an int indicating whether a spot or link is marked as changed, as a neighbor of a change, or as unchanged. But for now, they are stowaway features of the feature model.

SpotUpdateStack and LinkUpdateStack both have matching serializers that inherit from UpdateStackSerializer< F, O >.

Serialization of update stack objects.

Because they are features, we just need to provide an implementation of FeatureSerializer for them, that will be handled automatically by the feature serialization service described in the first section of this document.

There exists for convenience an abstract class UpdateStackSerializer:

public abstract class UpdateStackSerializer< F extends UpdateStack< O >, O > implements FeatureSerializer< F, O >

It handles the serialization of any subclass. The UpdateStack has a single field to serialize: the stack of update items itself. So we serialize, in order (sketched after the list below):

  • the number of update items in the stack;
  • and for each item:
    • the collection of FeatureSpec used as key for the update item;
    • and the changes themselves, made of two RefSets, one for the objects that were directly changed and one for the neighbors of directly changed objects.
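As a sketch only, the write order could look like this; the item accessors and the two write helpers are assumptions, and the feature specs are written field by field as explained further below:

// Illustrative write order for one update stack (helper names are assumptions).
oos.writeInt( stack.size() );
for ( final UpdateState< O > item : stack )
{
	// 1. The feature specs that form the key of this item (written field by field).
	oos.writeInt( item.getComputedSpecs().size() );
	for ( final FeatureSpec< ?, ? > spec : item.getComputedSpecs() )
		writeFeatureSpec( spec, oos ); // assumed helper.
	// 2. The changes: the directly modified objects, then their neighbors.
	writeRefSet( item.getChanges().get(), idmap, oos ); // assumed helper.
	writeRefSet( item.getChanges().getNeighbors(), idmap, oos );
}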

A project saved with update stacks looks like this on disk:

$ ls -1 features/
    Link displacement.raw
    Link velocity.raw
    Spot N links.raw
    Spot gaussian-filtered intensity.raw
    Spot track ID.raw
    Track N spots.raw
    Update stack Link.raw
    Update stack Spot.raw

Deserialization of update stack objects.

Deserialization happens in reverse. But because concrete serializers have to return a new instance of the right UpdateStack implementation, the UpdateStackSerializer is abstract and instead offers a method:

protected SizedDeque< UpdateState< O > > deserializeStack(...)

which returns a new stack of update items that can be used to instantiate the right class.
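For instance, a concrete serializer for the spot update stack could deserialize along these lines; the deserializeStack arguments and the SpotUpdateStack constructor shown here are assumptions:

// Hypothetical concrete deserializer built on the abstract helper.
@Override
public SpotUpdateStack deserialize( final FileIdToObjectMap< Spot > idmap, final RefCollection< Spot > pool, final ObjectInputStream ois ) throws IOException, ClassNotFoundException
{
	// Read the stack of update items, then wrap it in the right concrete class.
	final SizedDeque< UpdateState< Spot > > stack = deserializeStack( idmap, pool, ois ); // assumed arguments.
	return new SpotUpdateStack( pool, stack ); // assumed constructor.
}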

In the update items we use FeatureSpecs as keys. Notice that we do not serialize or deserialize the true class of each FeatureSpec, but a generic one made of the right fields (key, info, multiplicity, …). Since the incremental update mechanism only uses the #equals() method of FeatureSpec, which is based on some of these fields, this is OK. But because this is only true for incremental updates, the FeatureSpec serialization only has package visibility.

JUnit tests.

In org.mastodon.feature.update of src/test/java there are two JUnit tests that serialize a model with pending changes for incremental feature computation, and test for proper computation after reloading the model.

tinevez added 30 commits May 14, 2019
Modeled upon the feature computation service.
SERIALIZATION:
- Features are serialized in 1 .raw file per feature.
- The file name is the KEY of the feature + '.raw'.
- These files are stored in a subfolder 'features' of the main
project folder (zipped or not).

DESERIALIZATION:
- The content of the 'features' folder is listed.
- The feature key is extracted from the file name.
- The feature spec is retrieved from the key, thanks to the
FeatureSpecService.
- A serializer is retrieved for the feature key.
- The feature is deserialized and declared in the feature model.
- Split the graph update stack into two instances, one for vertices,
one for edges (using a type of a common class).
- Make them serializable as a feature with no projections.
Now that the incremental update stacks are stored as features in the
feature model, we don't need to do anything special in the
computer service to provide them to the feature computers that want to
depend on them.
We do not want to display the feature computers that have been
annotated as visible = false in the @Plugin annotation. We do this
so that the UpdateStackFeature does not appear there.
So that it:
- is de/serializable.
- exploits the new incremental update mechanisms.
This is mainly to ensure that the update stack features won't be visible
in the feature color mode UI.
How to deserialize feature values without loading the model?
You can but only if you know exactly how the feature was serialized.
This is aimed at features that contain an undetermined number
of scalar projections, whose number is determined at runtime.
- TrackMate feature values are imported into two Mastodon features
(one for links, one for spots), that have the SINGLE multiplicity,
but many scalar projections.

- They are de/serializable.
Says it cannot resolve 'Spot' otherwise.
Use an actual TrackMate XML file, generated from the TrackMate
test image. We test for:
- Completion of the import.
- Getting the expected number of spots.
- Getting the expected number of links.
- Getting the expected number of tracks.
- Presence of the specs for the TrackMateImportedSpotFeatures.
- Presence of the specs for the TrackMateImportedLinkFeatures.
- Presence of all expected spot projections, with proper dimension
and whether they can be casted as integer value projections.
- The same for link projections.
- Some spot feature values.
- Some link feature values.
Add MaMuT v0.28.3 as a test scope dependency.
Fix duplicate classes compile error.
Conflict between versions of JGraphT imported from bdv and MaMuT.
Replace the .h5 image test data by a dummy one, much smaller.
@tpietzsch (Member) commented Jul 18, 2019

@tinevez There is a problem with exporting/importing/exporting/importing/... to TrackMate/MaMuT
See 4cbc754 for demonstration.

Feature names get more and more inflated, e.g.,
Track N spots -->
Track_N_spots -->
TrackMate_Spot_features_Track_N_spots -->
TrackMate_Spot_features_TrackMate_Spot_features_Track_N_spots -->
...

The solution should probably be re-exporting the TrackMateImportedFeatures with their imported names.

Now there will be a name collision between TrackMateImportedFeatures:Track_N_Spots and the export name for the Mastodon Track N spots feature. I would resolve that by letting the Mastodon feature take precedence, i.e., TrackMateImportedFeatures:Track_N_Spots is re-exported only if Track N spots is not in the FeatureModel.

@tinevez (Member, Author) commented Jul 18, 2019

Ok I will work on this.

@tpietzsch (Member) commented Jul 18, 2019

The failed tests on Travis are probably all due to AWT being used.
I modified one of the "erroring" tests to explicitly catch all exceptions and print to stdout.
ec34e21

Travis says:

Running org.mastodon.feature.update.UpdateStackSerializationSeriesTest
WARNING: Creating property map for a collection/pool that does not manage PropertyMaps!
WARNING: Creating property map for a collection/pool that does not manage PropertyMaps!
WARNING: Creating property map for a collection/pool that does not manage PropertyMaps!
WARNING: Creating property map for a collection/pool that does not manage PropertyMaps!
Jul 18, 2019 11:21:37 AM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
TrackScheme style file /home/travis/.mastodon/trackschemestyles.yaml not found. Using builtin styles.
Bdv style file /home/travis/.mastodon/rendersettings.yaml not found. Using builtin styles.
ColorMap file /home/travis/.mastodon/colormaps.yaml not found. Using builtin colormaps.
Feature color mode file /home/travis/.mastodon/colormodes.yaml not found. Using builtin styles.
Keymap list file /home/travis/.mastodon/keymaps//keymaps.yaml not found. Using builtin styles.
java.awt.HeadlessException: 
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
	at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:204)
	at java.awt.Window.<init>(Window.java:536)
	at java.awt.Frame.<init>(Frame.java:420)
	at java.awt.Frame.<init>(Frame.java:385)
	at javax.swing.SwingUtilities$SharedOwnerFrame.<init>(SwingUtilities.java:1763)
	at javax.swing.SwingUtilities.getSharedOwnerFrame(SwingUtilities.java:1838)
	at javax.swing.JDialog.<init>(JDialog.java:272)
	at javax.swing.JDialog.<init>(JDialog.java:206)
	at org.mastodon.revised.mamut.TgmmImportDialog.<init>(TgmmImportDialog.java:62)
	at org.mastodon.revised.mamut.ProjectManager.<init>(ProjectManager.java:112)
	at org.mastodon.revised.mamut.WindowManager.<init>(WindowManager.java:163)
	at org.mastodon.feature.update.UpdateStackSerializationSeriesTest.createProjectWithPendingChanges(UpdateStackSerializationSeriesTest.java:96)
	at org.mastodon.feature.update.UpdateStackSerializationSeriesTest.test(UpdateStackSerializationSeriesTest.java:75)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
	at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:115)
	at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:102)
	at org.apache.maven.surefire.Surefire.run(Surefire.java:180)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:350)
	at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1021)
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.043 sec <<< FAILURE!

A simple workaround is disabling these tests on Travis, something like this
https://github.com/bigdataviewer/bigdataviewer-vistools/blob/d6479aef7e4994e94b68d9c3aee6f160a188d6e4/src/test/java/bdv/util/BdvHandlePanelGarbageCollectionTest.java#L22

A better fix would be to fix ProjectManager etc to be able to run headless.
I'm not sure how difficult that is. Maybe for now, we just create an issue for it?

@tinevez (Member, Author) commented Jul 18, 2019

I would be in favor of not disabling the tests. If we do, we fix the Travis error but nothing good comes out of it.

@tpietzsch (Member) commented Jul 18, 2019

True, but in principle that can be done after merging this PR.
(The problem is not in anything touched by this PR.)

@tinevez (Member, Author) commented Aug 2, 2019

Alright @tpietzsch this should be fixed now. Sorry it took me so long.

tpietzsch and others added 5 commits Jul 17, 2019
…ures.

When we exported and re-imported a model, we got this:
Feature names get more and more inflated, e.g.,
Track N spots -->
Track_N_spots -->
TrackMate_Spot_features_Track_N_spots -->
TrackMate_Spot_features_TrackMate_Spot_features_Track_N_spots -->
...

This commit fixes this problem by cropping the `TrackMate_Spot_features_`
prefix on export.

Also in case of name clash with features that were computed in Mastodon,
we only export the latter.

Noticed by @tpietzsch
@tpietzsch tpietzsch force-pushed the serialize-feature-model2 branch from d4172f5 to fd7a2a1 Sep 17, 2019
@tpietzsch tpietzsch force-pushed the serialize-feature-model2 branch from 1ccc1f7 to 344ba42 Sep 18, 2019
@tpietzsch (Member) commented Sep 18, 2019

Thanks! I fixed the tests to run headless.

@tpietzsch tpietzsch merged commit 052c7ae into master Sep 18, 2019
1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)
@tpietzsch tpietzsch deleted the serialize-feature-model2 branch Sep 18, 2019