Git Compatible Mastodon Storage Format #21

Open · 5 of 14 tasks
maarzt opened this issue Apr 18, 2023 · 3 comments
Labels: enhancement (New feature or request)

maarzt commented Apr 18, 2023

This is part of: #12

Mastodon currently provides two storage formats: a folder or a *.mastodon file. Both formats have potential problems when used inside a git repository:

  1. When text files are used: a git merge might introduce inconsistencies in the text files when merging two changes to the same dataset. Semantically conflicting changes, such as one user removing a spot while another adds an edge to that spot, might not appear as conflicting changes to git.
  2. When binary files are used: committing many different versions of a large binary file might blow up the repository size, and size limitations in git or GitHub might ruin the attempts to version Mastodon files.

Possible solutions:

  • Text based
  • Or binary files divided into blocks (preferred)
    • Make sure to always change one small file whenever there is a change to the Mastodon project. This will cause a git merge conflict and prevent git from destroying the dataset during merge attempts, i.e. no automatic merge will be done by git (see the sketch below).
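
A minimal sketch of this marker-file idea, assuming a hypothetical version.txt inside the project folder (the file name and layout are placeholders, not the actual Masgitoff layout): every save overwrites the marker with a fresh UUID, so two diverging saves always disagree on this file and git reports a conflict instead of auto-merging.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class SaveMarker {

    /**
     * Overwrite a small marker file with a fresh UUID on every save.
     * Two branches that both saved the project will always have different
     * contents in this file, so "git merge" is guaranteed to stop with a
     * conflict instead of silently combining the binary data files.
     */
    public static void touchVersionFile(Path projectDir) throws IOException {
        Path versionFile = projectDir.resolve("version.txt"); // hypothetical file name
        Files.writeString(versionFile, UUID.randomUUID().toString());
    }
}
```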

Binary files divided into blocks

Introduce a stable key. Mastodon currently rewrites the spot ids when saving a project, and non-constant spot ids are problematic: a small change in the ModelGraph can easily change a large number of spot ids. This is a problem for efficient storage (delta compression) of multiple versions with git. It is therefore necessary to have a key value that normally doesn't change.

The data in a Mastodon project could be expressed in two tables:

spot table:
------------
1. spot-key (unique and constant)
2. timepoint
3. x
4. y
5. z
6. label
7. covariance matrix
?? is there more?

link table
-----------
1. link-key (unique and constant)
2. source spot-key
3. target spot-key
4. outgoing edge index
5. incoming edge index

Additionally, tables for properties should be created. These cover tag sets and features.

property tables:
------------
1. spot-key or link-key (spot vs link can currently be derived from the filename)
2. value (e.g. feature value, tag value as boolean)

These tables could be stored in a simple chunked binary format (the easiest solution, using DataOutputStream, which would be Java-specific) or as ZARR tables (which could potentially also be read from Python scripts). A table should be sorted by its key and chunked: for example, keys 0-999 are written into the first file, keys 1000-1999 into the second file, etc.
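
To make the chunking concrete, here is a minimal sketch of the DataOutputStream variant; the record fields, chunk size and file naming are assumptions for illustration, not a spec:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class SpotTableWriter {

    private static final int CHUNK_SIZE = 1000; // keys per chunk file

    /** Minimal spot record; the real spot has more fields (label, covariance matrix). */
    public record SpotRow(long key, int timepoint, double x, double y, double z) {}

    /**
     * Write spots, already sorted by key, into chunk files: keys 0-999 go
     * into spots_0.raw, keys 1000-1999 into spots_1.raw, and so on.
     */
    public static void write(Path tableDir, List<SpotRow> sortedSpots) throws IOException {
        Files.createDirectories(tableDir);
        DataOutputStream out = null;
        long currentChunk = -1;
        for (SpotRow spot : sortedSpots) {
            long chunk = spot.key() / CHUNK_SIZE;
            if (chunk != currentChunk) { // key crossed a chunk boundary: start a new file
                if (out != null)
                    out.close();
                Path file = tableDir.resolve("spots_" + chunk + ".raw"); // hypothetical naming
                out = new DataOutputStream(new BufferedOutputStream(Files.newOutputStream(file)));
                currentChunk = chunk;
            }
            out.writeLong(spot.key());
            out.writeInt(spot.timepoint());
            out.writeDouble(spot.x());
            out.writeDouble(spot.y());
            out.writeDouble(spot.z());
        }
        if (out != null)
            out.close();
    }
}
```

With this layout, a small edit rewrites only the one or two chunk files containing the affected keys, which is exactly the kind of change that git stores efficiently.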

Using UUID as key value

  • advantage:
    • avoiding clashes.
  • disadvantages:
    • unevenly distributed
    • not dense, which wastes memory
    • The data would need to be stored in a kind of chunked hash table, which is pretty complicated. Unsolved question: which UUID entry would be stored in which chunk?

A possible alternative to a UUID: use a simple counter (integer or long). Each new spot receives an id from this counter. When spots are deleted, their ids will never be assigned again (see the allocator sketch after the following list).

  • advantages:
    • less memory required
    • quicker iteration
    • may potentially be used as indices into arrays (which then contain "holes")
  • disadvantage:
    • clashes may occur when merging edits from two users; the merge tool would need to resolve this and assign new, unused ids

If spots are deleted, the resulting "holes" are not filled. Deleted spots subsequently need to be removed from the link tables, tag tables and feature tables.
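
A minimal sketch of such a counter-based key allocator (the class and method names are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of the counter-based key scheme: ids are handed out strictly
 * increasing and never reused, even after deletions, so a key identifies
 * the same spot across all saved versions of the project.
 */
public class SpotKeyAllocator {

    private final AtomicLong nextKey;

    /** Restore the allocator from the highest key ever used (persisted with the project). */
    public SpotKeyAllocator(long highestUsedKey) {
        this.nextKey = new AtomicLong(highestUsedKey + 1);
    }

    public long newKey() {
        return nextKey.getAndIncrement();
    }
}
```

When merging edits from two users, the merge tool could restore the allocator from the maximum key seen in either version before assigning fresh ids to clashing spots.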

Text format using UUIDs (not preferred)

Each spot and link gets a UUID. The spot table and link table are stored in text files with rows:

spot table
--------
spot_UUID,timepoint,x,y,z,label,covariance matrix

link table
--------
link_UUID, source_spot_UUID, target_spot_UUID, outgoing_edge_index, incoming_edge_index

The text format has these advantages:

  • git diff would do a proper job out of the box
  • git blame would work as expected

and this disadvantage:

  • git merge would cause false positive conflicts and, more critically, false negative conflicts (real conflicts that git merges silently).

It's probably still necessary to divide the text file into chunks in order to save memory when using git.

TODOs:

  • Decide if ZARR can be used to store the tables. Check if strings can be stored in ZARR tables.
  • Decide if a UUID or a simple counter should be used as the key.
  • Write a test class that pressure-tests the file format while it is used together with git. Simulate the iterative generation of a dataset and the commits along the way, and measure the size of the git history. My idea is to open one of Mette's datasets and copy the spots and links in batches to a new dataset. Along the way, delete some parts of the dataset and add them back later. (A rough sketch of such a test follows after this list.)
    • Write a test that grows a graph, stores intermediate stages in the Masgitoff format and makes several commits. Finally, measure the size of the git history. Compare the test to a run with the standard Mastodon format.
      • Result: the standard format explodes the git history, the Masgitoff format works well.
    • Make the test more realistic by not only growing the graph by adding spots, but also removing some spots from time to time. -> still works well.
    • Make the test more realistic by not only writing to Masgitoff but also closing the Mastodon instance, reopening it, and then growing further.
    • Pressure test with renaming spots.
    • Pressure test with changing tags.
  • Write code to launch and open Mastodon from a Masgitoff dataset.
  • Make notes on the design decisions made.
  • Integrate the Masgitoff file format into the Mastodon collaboration plugin.
  • Currently label set ids are rewritten on every save operation; make sure that's not a problem.
  • What about tag ids? How do they behave if tags are renamed, added or deleted? Do we need to consider that?
  • Is the edge order correctly recovered when opening a Mastodon dataset?
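
A rough sketch of how the pressure test mentioned in the list above could be driven: growGraph and saveAsMasgitoff are hypothetical placeholders for the real Mastodon/Masgitoff calls, while add, commit, gc and count-objects are standard git CLI commands run through ProcessBuilder.

```java
import java.io.IOException;
import java.nio.file.Path;

public class GitHistorySizeTest {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path repo = Path.of("/tmp/masgitoff-test"); // scratch repository
        git(repo, "init");
        for (int i = 0; i < 100; i++) {
            // Hypothetical placeholders: add a batch of spots and links to the
            // dataset and write the project files into the repository. The
            // commits below only succeed once these actually change files.
            // growGraph(i);
            // saveAsMasgitoff(repo);
            git(repo, "add", "--all");
            git(repo, "commit", "-m", "batch " + i);
        }
        git(repo, "gc"); // repack, so that the measured size is comparable
        git(repo, "count-objects", "-vH"); // prints the size of the git history
    }

    /** Run a git command in the given repository and echo its output. */
    private static void git(Path repo, String... args) throws IOException, InterruptedException {
        String[] command = new String[args.length + 1];
        command[0] = "git";
        System.arraycopy(args, 0, command, 1, args.length);
        Process process = new ProcessBuilder(command).directory(repo.toFile()).inheritIO().start();
        if (process.waitFor() != 0)
            throw new IOException("git " + String.join(" ", args) + " failed");
    }
}
```
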
stefanhahmann commented Jan 3, 2024

The usage of the ZARR file format has been checked by @maarzt. There are too few existing Java libraries for ZARR to use it for the purpose of this issue.

@stefanhahmann

A simple counter seems to be the better solution for the implementation; the reasons are mentioned above.

@maarzt maarzt transferred this issue from mastodon-sc/mastodon-tomancak Mar 12, 2024
@maarzt maarzt transferred this issue from mastodon-sc/mastodon Jul 11, 2024
maarzt commented Jul 19, 2024

Compare file reading performance: UUID vs int32

The results were very counterintuitive. Reading a 128-bit UUID from a file plus a lookup in a HashMap<UUID, Object> is almost as performant as reading a 32-bit int plus a lookup in an ArrayList<Object>. So the result of my experiments: don't hesitate to use UUIDs, the read performance can be the same as with 32-bit ints. But it's very important to use a BufferedInputStream and BufferedOutputStream.
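
For reference, a minimal sketch of the measured read pattern, assuming a flat file of 128-bit UUIDs written as two longs each (the file layout and names are illustrative). Without the BufferedInputStream, every readLong() would hit the file system; with it, the UUID variant performs close to the int32 variant:

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.UUID;

public class UuidTableReader {

    /**
     * Read a flat file of 128-bit UUIDs (two longs each) and look each one
     * up in a map. The BufferedInputStream is essential: it turns millions
     * of tiny readLong() calls into a few large file reads.
     */
    public static void readAndLookup(Path file, Map<UUID, Object> index) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(Files.newInputStream(file)))) {
            long count = Files.size(file) / 16; // 16 bytes per UUID
            for (long i = 0; i < count; i++) {
                UUID key = new UUID(in.readLong(), in.readLong());
                Object value = index.get(key);
                // ... use the value ...
            }
        }
    }
}
```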
