Union of tree sequences #623

Merged 2 commits into tskit-dev:master on Jul 17, 2020

Conversation

@mufernando (Member) commented May 18, 2020

If we have two tree sequences which share part of their past histories, we might want to graft them together (also see #381). One use case: a population splits into two, and you want to simulate the two branches independently as a way of parallelising the simulations.

Here, @petrelharp and I implemented a method to graft two table collections, a base and its sister (details of the dev process can be found in this repo). The main logic is as follows (a rough Python sketch is included after the list):

  • First, identify which nodes are shared between tables, and encode them in a node_map;
  • If the tree sequences were run for a different number of generations (e.g., in a forward simulator), it is possible the times of the nodes in the node_map don't match. For that reason, we also implemented a method to add_time to the node times and migration times of a TableCollection;
  • Now, we can copy the base into a new table collection (new) and start adding the non-shared bits of the sister to it;
  • All the nodes of the sister table not in the node_map are added to new -- these are the target nodes;
    • These nodes are assigned new individuals;
    • The populations of all target nodes are considered new, and are added to the population table of new;
  • Edges connecting target nodes are added to new;
  • Mutations falling in any of the target nodes are also added;
    • On a first pass, all these mutations are assigned new sites, which are later deduplicated;
    • We do not assign parent mutations at this point, but recompute them later;
  • Migrations within base are kept, and migrations within sister are added, with nodes and populations being remapped;
  • A new row is added to the provenance table -- the way it's currently done is NOT final;
  • Finally, we can sort, deduplicate sites, and compute the parent mutations.
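A rough Python sketch of this pipeline, written as a standalone helper rather than the final API; individuals, populations, migrations, and provenance are omitted for brevity, and edges are copied when their child is a non-shared node (as clarified in the next comment):

    import tskit

    def graft_sketch(tables, other, node_map):
        # node_map: node ID in `other` -> node ID in `tables` (the shared nodes).
        new_nodes = dict(node_map)
        for j, node in enumerate(other.nodes):
            if j in node_map:
                continue  # shared node: already present in `tables`
            # Target node: copy it across and record its new ID.
            new_nodes[j] = tables.nodes.add_row(
                flags=node.flags, time=node.time, metadata=node.metadata)
        for edge in other.edges:
            if edge.child not in node_map:
                tables.edges.add_row(
                    left=edge.left, right=edge.right,
                    parent=new_nodes[edge.parent], child=new_nodes[edge.child])
        for mut in other.mutations:
            if mut.node not in node_map:
                # Give each mutation its own new site for now; duplicates are
                # merged by deduplicate_sites() below.
                site = other.sites[mut.site]
                new_site = tables.sites.add_row(
                    position=site.position, ancestral_state=site.ancestral_state)
                tables.mutations.add_row(
                    site=new_site, node=new_nodes[mut.node],
                    derived_state=mut.derived_state, parent=tskit.NULL)
        tables.sort()
        tables.deduplicate_sites()
        tables.build_index()
        tables.compute_mutation_parents()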

We implemented some tests, which rely on simplification. The idea is that if you simulate a history of one population splitting into two, simplify independently onto the nodes of populations 0 and 1, and then graft the results back together, you should get an equivalent TreeSequence.

The implementation is not finalised; this is a slow -- but likely correct -- solution to the problem.

We would like any input on implementation/interface, but most importantly there are two points of improvement:

  • We cannot currently test how the MigrationTable is grafted, because our tests rely on simplification and MigrationTables are not dealt with by simplify (see here)
  • We need help figuring out how to deal with the ProvenanceTable of the grafted tree sequence.

@petrelharp (Contributor)

ps. A bit of clarification on how we're specifying where to graft. The signature of the proposed TableCollection method is

     def graft(self, other, node_map, check_overlap=False):

Here node_map is a dictionary whose keys are node IDs in other, and whose values are node IDs in self; these represent the shared nodes. Concretely, we add to self:

  1. Any non-shared nodes in other, as well as associated individuals, populations, and mutations.
  2. Any edges in other whose child is a non-shared node.

This means that:

  • We're grafting structure that comes more recently - if, say, other has an edge that says a non-shared node is parent to a shared node, that edge won't be copied; and if other has a bunch of long-ago history that self doesn't, that history won't be copied either.
  • We can do a sanity check that all the relationships between shared nodes are identical, but this is a bit expensive, so it's optional (check_overlap).
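For example (the node IDs here are made up): if nodes 10 and 11 in other are the same as nodes 3 and 4 in self, the call would be

    node_map = {10: 3, 11: 4}  # node ID in other -> node ID in self
    tables.graft(other_tables, node_map, check_overlap=True)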

Here are two use cases, with different sorts of overlap:

(A) Simulate population (a) forwards in time for a while; then save the state, and start two independent simulations, each one starting a new population (call these (b) and (c)) with offspring of the final generation of the first simulation. The two resulting sets of tables should agree for everything in population (a), so we can graft together on all nodes in population (a). Their overlap should agree, as long as the two subsequent simulations don't do something funny.

(B) Simulate population (a) for a while, recording everyone alive at time T along the way. Say there were N(T) individuals alive at that time. Then, simulate another population, (b), starting from N(T) individuals but no prior history, and you'd like to say that the first generation of population (b) is the same as that time slice in (a). So, the shared nodes are only those alive at time T; not all of prior history. We can't check overlap, since, for instance, some of those N(T) individuals might be parent-child pairs, but those relationships won't be there in (b). Since the "same" individuals are alive in both, we could also worry that they would e.g. accumulate mutations in one that aren't present in the other. Luckily, however, this won't happen in SLiM, since mutations occur at birth only (unless you add them manually); and it won't happen in msprime, since we'd do this by a census event.
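One simple way the shared nodes in case (B) might be picked out, assuming the recorded generation's nodes sit at exactly time T in (a), form the oldest generation in (b), and were created in the same order in both (ts_a, ts_b, and T are placeholders):

    import numpy as np

    nodes_a = ts_a.tables.nodes
    nodes_b = ts_b.tables.nodes
    shared_a = np.where(nodes_a.time == T)[0]                    # alive at the save point in (a)
    shared_b = np.where(nodes_b.time == nodes_b.time.max())[0]   # founders of (b)
    node_map = dict(zip(shared_b, shared_a))  # other (b) node ID -> self (a) node ID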

Note that we could also do recapitation by grafting an msprime simulation onto the top of a SLiM simulation.

As I think through these use cases more, I'm thinking we don't need check_overlap. Most errors would be caught by the assertion that all time shifts are the same anyhow.

@jeromekelleher (Member)

This looks interesting! I'm not quite following what this is doing yet, but I'll think about it some more. Two high-level questions:

  1. It seems inconsistent to return a new table collection of self + other, when other methods operate in-place on tables. Can we do that here?
  2. The name doesn't really work for me. How about merge? As in, we merge the new stuff in other into this set of tables? This seems to fit the purpose: two simulations fork off from a common base and then we merge them back together later?

@jeromekelleher (Member)

This would also seem a bit easier to do/better defined if we expressed it as TableCollection.merge(self, others), where self is the state of the tree sequence before others branched off, and others is a list of table collections that have gone off in parallel. So, we're merging the information from a list of n parallel, independent simulations which all diverged from the current tree sequence state, back into a single table collection.
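The calling pattern would then look something like this (merge() is only proposed here; run_simulation(), k, and the seeds are placeholders):

    base = base_ts.dump_tables()
    k = 4  # number of parallel forks
    forks = [run_simulation(base.tree_sequence(), seed=s) for s in range(k)]
    base.merge([ts.dump_tables() for ts in forks])  # proposed; would modify `base` in place
    merged_ts = base.tree_sequence()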

@petrelharp (Contributor)

other methods operate in-place on tables. Can we do that here?

good idea, yes

How about merge?

Could do, although for me merge sounds like you'll get all the structure in other, which isn't necessarily true.

This would also seem a bit easier to do/better defined if we expressed it as TableCollection.merge(self, others), where self is the state of the tree sequence before others branched off, and others is a list of table collections that have gone off in parallel.

Oh, so the idea is that the top part of all the table collections in others are all the same, and equal to self? That would be easier to understand and easier to implement, and we could use that to do what we want in SLiM. But, I don't think this is so hard, and it seems useful for other things - for instance, say you want to add a few offspring to a particular node in a tree sequence? I suppose that's easy enough to do directly, though. What do you think @mufernando?

@mufernando (Member, Author)

I agree with Peter about the name. A graft operation is usually defined as: break tree A wherever and put the resulting subtree somewhere along a branch in tree B. This is basically what we are doing here: breaking other at the non-shared part of the tree and adding it onto self. The only difference is that we are doing this in a principled way -- because we rely on the two trees having some shared history.

This would also seem a bit easier to do/better defined if we expressed it as TableCollection.merge(self, others), where self is the state of the tree sequence before others branched off, and others is a list of table collections that have gone off in parallel.

I don't know enough about tree sequences to see how this would be easier to implement. But I feel like this would be more redundant. I agree it would be easier to understand, though. Basically we are replacing the node_map with the top part of the tree.

@jeromekelleher (Member)

I don't like the fact that consistency checking is something we do optionally in the current formulation and that we have to resort to using simplify() in order to do it. It seems like using a sledgehammer to crack a nut. I think we'd end up spending quite a bit of time and effort trying to figure out what all the corner cases are under the current semantics and then trying to write test cases to exercise them. Simply stated semantics usually translate into simpler code once you've closed all the loopholes. Particularly since we'll (probably?) want to do this in C, for SLiM.

So, I'd advocate for grafting a list of other table collections onto self, which all must have self as the prefix.

@petrelharp (Contributor)

I don't like that either. I guess the other use case I'm thinking of - sticking two tree sequences together that don't overlap - could be a different method, say, append() or something (and we don't need to write it until someone needs it). It's just tempting to do both at the same time.

So, the proposed interface simplifies things because the user doesn't have to pass in this node mapping? That's good; I like that. It does mean they need the tree-sequence-up-to-the-split point handy, but that seems OK (if we had a table collection method to cut out a time chunk that'd be easy to get).

@jeromekelleher (Member)

Another thought. What if we made the others list consist of only the stuff that happened after the split? I can imagine that if we were doing this for a real simulation and a large number of threads, we'd want to minimise the number of copies of the "base". This might be quite large, and so keeping k copies of it in memory is wasteful. I'm thinking about the SLiM case and I'm sure Ben would ask why we're wasting so much memory!

The semantics are even simpler then, because there is no overlap between self and other. We just merge others together, and glue them onto the top of self.

@petrelharp (Contributor)

@mufernando and I have done a bunch more thinking about this. Observations:

  1. SLiM does not actually give us table collections where all the shared nodes are at the top of the node table.
  2. Even if it did, we'd still have to subset out sections of other tables to compare the shared bits; we can't just "compare the tops of the tables" as I was informally thinking.
  3. Requiring the tables have no overlap is a nice idea, but the tool we'd need to do that would be one that deletes everything in a tree sequence that is genealogically above a set of nodes - you can't just take a simple time slice, because of overlapping generations.

However, it should be true that if you "subset" the two tables you want to graft by their shared nodes (see below), and possibly reorder them, then the resulting table collections should be equal, up to provenance.

So, I'm back to our original proposal: tables.graft(other, node_map), and that we do a consistency check with a subset-and-reorder operation.

The proposed subset-and-reorder method is tables.subset_nodes(nodes), which modifies the tables in place to retain exactly:

  1. nodes that are listed in nodes and in the order listed
  2. mutations whose node entry is in nodes
  3. sites with remaining mutations
  4. edges whose parent and child entries are in nodes
  5. individuals referred to by the nodes in nodes, and in the order encountered in nodes
  6. populations referred to by the nodes in nodes, and in the order encountered in nodes
  7. migrations whose nodes are in nodes

Note that this would also provide the reordering operation that Yan wanted recently.

@jeromekelleher (Member)

The proposed subset-and-reorder method is: tables.subset_nodes(nodes) modifies the tables to retain exactly those:

This is simplify with a few options set, isn't it? But the semantics are slightly different? That would seem like a lot of work...

Requiring the tables have no overlap is a nice idea, but the tool we'd need to do that would be one that deletes everything in a tree sequence that is genealogically above a set of nodes - you can't just take a simple time slice, because of overlapping generations.

That sounds like a useful general tool to me, and wouldn't be too hard to do? I'm worried that we're writing something that's solving quite a specific problem here, one that is bound very tightly to how SLiM does things, but giving it a general name in graft. If what we're doing would only ever really be applied to SLiM tree sequences, maybe it's better off in pyslim? Then at least you can make strong assumptions about the input.

@mufernando (Member, Author)

I don't see this as being SLiM-specific, but definitely agree it is more useful for forward simulations. A graft operation only makes sense going forward-in-time. If you want to do something similar but back in time you would use msprime's from_ts (as you do with recapitation). Even the alternative implementation with self being the top of the tree assumes there is a top and others happened after it (forward-in-time), no?

@petrelharp (Contributor) commented May 21, 2020

It's quite a bit simpler than simplify; it is only re-indexing, mostly. Here's the start of a draft, to give you the idea:

# _subset_array is a helper (not shown) that subsets a ragged column and its offsets.
import numpy as np

def subset(tables, nodes):
    new = tables.copy()
    n = tables.nodes
    # Keep only the requested nodes, in the order given.
    new.nodes.set_columns(
        flags=n.flags[nodes],
        population=n.population[nodes],  # TODO: reindex
        individual=n.individual[nodes],  # TODO: reindex
        time=n.time[nodes],
        **_subset_array(n.metadata, n.metadata_offset, nodes))
    # Map old node IDs to their new positions.
    node_map = np.arange(tables.nodes.num_rows)
    node_map[nodes] = np.arange(new.nodes.num_rows)
    keep_indivs = np.unique(new.nodes.individual)  # TODO: put in order
    keep_indivs = keep_indivs[keep_indivs >= 0]    # drop NULL (-1)
    i = tables.individuals
    new.individuals.set_columns(
        flags=i.flags[keep_indivs],
        **_subset_array(i.location, i.location_offset, keep_indivs),
        **_subset_array(i.metadata, i.metadata_offset, keep_indivs))
    e = tables.edges
    # Keep only edges whose parent and child are both retained.
    keep_edges = np.logical_and(np.isin(e.parent, nodes), np.isin(e.child, nodes))
    new.edges.set_columns(
        left=e.left[keep_edges],
        right=e.right[keep_edges],
        parent=node_map[e.parent[keep_edges]],
        child=node_map[e.child[keep_edges]],
        **_subset_array(e.metadata, e.metadata_offset, keep_edges))
    # etcetera: sites, mutations, populations, migrations
    return new
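Usage would look something like this (node IDs are made up, and this assumes the finished function returns the new table collection as above):

    import numpy as np

    keep = np.array([4, 0, 7])         # nodes to retain, in this order
    new_tables = subset(tables, keep)  # node 4 -> 0, node 0 -> 1, node 7 -> 2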

@petrelharp (Contributor)

As for being SLiM-specific: it's true, we don't have other current use cases for this more restrictive version that requires checking overlap. But, what about, for instance, when someone working with deCODE's genealogy figures out that ancestor A in pedigree X is the same as ancestor B in pedigree Y, and thus wants to merge pedigree X and pedigree Y? (hm, I said "merge"...)

@jeromekelleher (Member)

It all sounds plausible @petrelharp - I guess I just don't have the time to think about this properly right now as there's a lot of stuff queuing up for release. Is this something you and @mufernando would like to see in 0.3.0?

@petrelharp (Contributor)

Well, on the one hand we can be flexible, but on the other, @mufernando is needing it basically now, so we're going to be developing it now, and so it'd make most sense to get it in while it's fresh. 0.3.0 or not doesn't matter, though. We could get input from others, perhaps?

@petrelharp (Contributor)

Update: @mufernando and I just spent quite a while talking through the various options. Here's what we think is the right way forward. We would like to move forward, if you're comfortable, @jeromekelleher - maybe @gtsambos and/or @hyanwong could give us a sanity check? I'll try to make this self-contained, so you don't need to wade through the stuff above.

Goal: implement a method, provisionally called graft, to glue together two tree sequences.

Use cases:

  1. Simulate different branches of a species tree independently, and then put them all back together again at the end.

  2. (speculative) Parallelize a simulator so that it runs different populations on different processors, and passes migrants back and forth between them; each process writes out its own tree sequence, and these must be glued together along the migrants afterwards.

API options: Input should be two sets of tables, tables1 and tables2, and some way of saying how to glue them together. This has to be either (a) a way of saying that these nodes in tables1 are the 'same' as those nodes in tables2; or else (b) some extra edges that connect nodes in tables1 to nodes in tables2 and possibly vice-versa. In case (a), there is overlap between the sets of tables, so as a sanity check we should make sure that this overlap agrees (i.e., that it actually makes sense to say that the two sets of nodes are the same). In case (b), there is no overlap between them, so no such check is necessary, but we do need to specify two sets of edges (from 2->1 and from 1->2).

We think that option (a) is the simpler one, because the tool we need to check for overlap is straightforward ("subset"; see below), and constructing the edge tables needed for (b) is more error-prone. And, for (b) you need to pass in two things, not just one.

So, the proposed method of a table collection is:

     def graft(self, other, node_map):

where

  • node_map is a dictionary with keys that are node IDs of other and values that are node IDs of self; call these the "shared nodes"
  • This will modify the table collection (self) in place, so that all non-shared nodes in other are added to self, along with all mutations, individuals, and populations referred to by those non-shared nodes, and all edges whose parent and/or child is a non-shared node.
  • This requires, as a sanity check, that the overlapping bits are the same: i.e., if we subset both self and other by the shared nodes, we get the same sets of tables.

To do the sanity check we need a "reorder and subset" method of a table collection, which is described here, and is something that Yan has recently wanted (the reordering bit anyhow). We'll describe this in a different PR.
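A sketch of that sanity check, assuming a subset method like the one drafted above that works in place (self_tables and other_tables stand for the two table collections, and provenance differences would have to be ignored in practice):

    import numpy as np

    shared_other = np.array(list(node_map.keys()))               # node IDs in other
    shared_self = np.array([node_map[n] for n in shared_other])  # matching IDs in self

    check_self = self_tables.copy()
    check_other = other_tables.copy()
    check_self.subset(shared_self)    # proposed method; reorders nodes to match
    check_other.subset(shared_other)
    assert check_self == check_other  # the overlapping histories must agree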

Questions:

  • is this an easy-to-understand operation?
  • does it seem general enough that it should be in tskit, as opposed to pyslim?
  • are there any other use cases for gluing together two tree sequences? would this work for those use cases?
  • what's a good name? graft()? merge()? glue()?

ps: changes from previous proposal:

  • no longer automatically add new populations
  • no longer assume that other comes after self

pps: @jeromekelleher previously proposed doing this by assuming no overlap between the two sets of tables; this would be pretty easy using the "subset" method, but it would require passing in edge tables, as in (b) above, which is why we went with this proposal instead.

@jeromekelleher (Member) left a review comment

This looks good. The operation looks simple enough and makes sense (I think - once a few things have been clarified). There's a lot of work to be done in testing it though, as I'm sure there's a bunch of icky corner cases we've not considered that won't come out of the woodwork until we've tested it on a bunch of different simulated topologies.

(Inline review comments on python/tskit/tables.py and python/tests/test_tables.py; all outdated and resolved.)
@hyanwong (Member)

  1. (speculative) Parallelize a simulator so that it runs different populations on different processors, and passes migrants back and forth between them; each process writes out its own tree sequence and the two must be glued together along the migrants, afterwards.

I like this speculative idea. One possibility is that, if a migrant in population (a) comes from a different population (b), then the ts in (a) wouldn't actually need to track the migrant back in time until the MRCA: it would be tracked within population (b), and there would be no need to duplicate the history within (a) too. That would mean "cutting off" the migrant once it entered the other population, and once all its relevant genome was covered by parent edges in the other population. The migrant would appear to lead to a separate root in (a), and it would only be after the merge that a singly rooted tree would emerge. However, there's a fair bit of extra work required for this, such as checking that the span covered by the edges leading to this migrant is also covered by edges leading out of the migrant, etc.

We think that option (a) is the simpler one,

Agree.

  • This requires, as a sanity check, that the overlapping bits are the same: i.e., if we subset both self and other by the shared nodes, we get the same sets of tables.

For comparison I guess we need to be careful about sorting order? Does this not, for instance, require that TableCollection.sort() also sorts mutations & parents in the same way (i.e. we need to solve #27 + #651)?

Questions:

  • is this an easy-to-understand operation?

Yes, I think so. A graphical representation would be nice. It would be trivial to also make this a non-in-place method of a tree sequence, as well as an in-place method on a TableCollection, just like we do for simplify, trim, etc.

  • does it seem general enough that it should be in tskit, as opposed to pyslim?

Definitely tskit, IMO.

  • are there any other use cases for gluing together two tree sequences? would this work for those use cases?

Perhaps my comment above covers one of those. What about tree sequences for separate chromosomes in the same (haploid?) population? Might we want to combine those somehow (this seems a little different, though)?

  • what's a good name? graft()? merge()? glue()?

I vote for merge(). graft() implies an asymmetry which I think isn't there. glue() implies that you might still be able to see the join afterwards :)

ps: changes from previous proposal:

  • no longer automatically add new populations

Down the line, it presumably would be useful to have a separate function which adds and re-indexes populations on the basis of the populations in another TS, to allow a merge to occur when a new population appears in one of the tree sequences.

Presumably we would allow the node map to contain sample nodes too, so that the two tree sequences need not be describing different sample populations?

@hyanwong (Member)

By the way, is recapitation just a limited version of the functionality proposed here? I.e. instead of recapitating an existing TS, you could set up a new msprime simulation with the individuals from the start of the SLiM simulation, simulate in msprime, then graft the two together? It might be elegant to rephrase recapitation in these terms.
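A rough sketch of that idea, assuming the proposed graft() lands; the msprime call uses the 0.7.x simulate(from_ts=...) interface, and the Ne and recombination rate are placeholders:

    import msprime

    # Continue the SLiM history backwards in time from its roots.
    recap_ts = msprime.simulate(
        from_ts=slim_ts, Ne=1000, recombination_rate=1e-8, random_seed=1)

    # from_ts preserves the input node IDs, so the shared nodes map to themselves.
    node_map = {j: j for j in range(slim_ts.num_nodes)}

    tables = slim_ts.dump_tables()
    tables.graft(recap_ts.dump_tables(), node_map)  # proposed method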

@petrelharp (Contributor)

For comparison I guess we need to be careful about sorting order? Does this not, for instance, require that TableCollection.sort() also sorts mutations & parents in the same way (i.e. we need to solve #27 + #651)?

Yes, good spotting. Strictly speaking this will be the case, but in practice this won't be a problem, although it's extra incentive to get a total order sort implemented.

@gtsambos (Member) commented Jun 4, 2020

Hi all, sorry for the radio silence, it's been very busy down here for me 😫 just chiming in to say that I plan to get to grips with this sometime over the next day or two!

@petrelharp (Contributor)

Another possible name for this is union().

@hyanwong (Member) commented Jun 9, 2020

Another possible name for this is union().

You previously said "for me merge sounds like you'll get all the structure in other, which isn't necessarily true". Union sounds even more like this, in a nod to set theory. Is this more like an intersection or a database-like LEFT JOIN? I've lost track of whether it is asymmetrical.

@mufernando (Member, Author)

Another possible name for this is union().

You previously said "for me merge sounds like you'll get all the structure in other, which isn't necessarily true". Union sounds even more like this, in a nod to set theory. Is this more like an intersection or a database-like LEFT JOIN? I've lost track of whether it is asymmetrical.

Aha, interesting question! I don't think this counts as a join operation in that sense, because we are expanding the rows -- not the columns (see here).

So union captures the idea that rows are being added together, and is more precise than merge because it also implies that the shared bits are not duplicated.

@petrelharp (Contributor)

You previously said "for me merge sounds like you'll get all the structure in other, which isn't necessarily true.".

Initially it was more asymmetrical. We've put this aside while implementing subset, which will make this easier, but here, I think, is the current plan: sanity check whether subset(self, shared nodes) equals subset(other, shared nodes); and then graft everything that's in other but not in subset(other, shared nodes) together with self. So, I think this really is a union, because we're sticking on everything that's not in the intersection.

Ah, but you can't use it for #675 because the mutations would be associated with already-existing nodes. (I mean, maybe we could get it in there, but it would get in the way of this simple interpretation.)

@petrelharp (Contributor)

Hm, maybe this one is union_nodewise and you're proposing union_sitewise? Or something?

@jeromekelleher removed the AUTOMERGE-REQUESTED (Ask Mergify to merge this PR) label on Jul 16, 2020
@mufernando (Member, Author)

how did this happen? the tests had passed before and are not failing locally. I'll look into it.

@petrelharp (Contributor)

Gee, I thought it was passing, too! Well, it fails locally now, but it didn't just a bit ago, so something must have happened in the last few commits?

@mufernando (Member, Author)

I think the bot that rebased against upstream/master changed something...

@mufernando (Member, Author)

well, now I tested with both gcc and clang and it worked fine. I'm pretty sure it was something about the AdminBot-tskit that effed it up.

@mufernando (Member, Author)

wtf is going on here... is it possible to rerun all the tests on this same commit?

@petrelharp (Contributor)

Well, for me 47b95a3 failed locally, but the current head (10b9eff) passes. It looks like all the tests pass, so I'm going to request mergify merges this (unless you know of something wrong?).

@mufernando (Member, Author)

when I rebase bad stuff happens... but the rebase worked fine and there were no conflicts. so not sure what is going on.

@andrewkern (Member)

what sort of bad stuff?

@mufernando (Member, Author) commented Jul 16, 2020

the tests start failing. see 10b9eff and b030523 for instance.

@petrelharp removed the AUTOMERGE-REQUESTED (Ask Mergify to merge this PR) label on Jul 17, 2020
@petrelharp (Contributor) commented Jul 17, 2020

Ah-ha: the problem is due to behaviour that actually changed upstream. This fixes it:

     ret = tsk_site_table_add_row(&tables.sites, 0.4, "A", 1, NULL, 0);
     CU_ASSERT_FATAL(ret >= 0);
     ret = tsk_mutation_table_add_row(
-        &tables.mutations, 0, 0, TSK_NULL, NAN, NULL, 0, NULL, 0);
+        &tables.mutations, 0, 0, TSK_NULL, TSK_UNKNOWN_TIME, NULL, 0, NULL, 0);
     CU_ASSERT_FATAL(ret >= 0);
-    ret = tsk_mutation_table_add_row(&tables.mutations, 0, 0, 0, NAN, NULL, 0, NULL, 0);
+    ret = tsk_mutation_table_add_row(&tables.mutations, 0, 0, 0, TSK_UNKNOWN_TIME, NULL, 0, NULL, 0);
     CU_ASSERT_FATAL(ret >= 0);
     ret = tsk_mutation_table_add_row(
-        &tables.mutations, 1, 1, TSK_NULL, NAN, NULL, 0, NULL, 0);
+        &tables.mutations, 1, 1, TSK_NULL, TSK_UNKNOWN_TIME, NULL, 0, NULL, 0);
     CU_ASSERT_FATAL(ret >= 0);
     ret = tsk_table_collection_build_index(&tables, 0);
     CU_ASSERT_EQUAL_FATAL(ret, 0);

@petrelharp (Contributor)

Looks like you should do the rebase yourself and check everything still works.

@mufernando force-pushed the graft_impl branch 2 times, most recently from 5c12b7d to 985644e, on Jul 17, 2020 at 01:56
@mufernando (Member, Author)

something about rebasing against upstream/master is royally fucked up. now I broke the tests for table_collection_check_integrity. how do I get out of this rebasing hell?

@mufernando (Member, Author)

unless someone has a better idea, tomorrow morning I'll reset against upstream/master and manually add all my changes.

@petrelharp (Contributor)

You just changed a NAN that was actually supposed to be a NAN in the integrity tests - I did that, too!

@petrelharp added the AUTOMERGE-REQUESTED (Ask Mergify to merge this PR) label on Jul 17, 2020
@mergify bot merged commit d3a5a9f into tskit-dev:master on Jul 17, 2020
@mergify bot removed the AUTOMERGE-REQUESTED (Ask Mergify to merge this PR) label on Jul 17, 2020