
Moments Sketch custom aggregator #6581

Merged: 6 commits, Feb 13, 2019

Conversation

edgan8 (Contributor) commented Nov 6, 2018

Initial pull request for a Druid aggregation extension that supports the Moments Sketch. The Moments Sketch is a compact, efficiently mergeable approximate quantile sketch. This extension wraps the library available here: https://github.com/stanford-futuredata/momentsketch . The post aggregator can be used to extract quantile estimates from the aggregator.

The aggregator is parameterized by k, the size of the sketch, and a boolean parameter "compress" which will compress the range of input values, improving accuracy for very long-tailed distributions, but slightly reducing accuracy for values more uniformly distributed across their range.
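The "compress" option referred to above maps values through arcsinh, i.e. log(x + sqrt(1 + x^2)), as noted later in this thread. The sketch below illustrates the idea only; the class and method names are not the extension's actual API.

```java
// Illustration of the "compress" transform: arcsinh squeezes long-tailed
// values into a compact range while remaining defined for zero and negative
// inputs (unlike a plain log transform). Names here are hypothetical.
public class ArcsinhCompression
{
  // arcsinh(x) = log(x + sqrt(1 + x^2))
  public static double compress(double x)
  {
    return Math.log(x + Math.sqrt(1 + x * x));
  }

  // Inverse transform, applied when extracting quantile estimates
  public static double decompress(double y)
  {
    return Math.sinh(y);
  }

  public static void main(String[] args)
  {
    // A value spanning many orders of magnitude compresses to a small range
    System.out.println(compress(1e9));              // roughly 21.4
    System.out.println(decompress(compress(12.5))); // recovers ~12.5
  }
}
```

This is why the option helps long-tailed distributions: extreme values are pulled toward the bulk of the data, at a small accuracy cost for already-uniform inputs.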

MomentSketchAggregatorTest shows how the custom aggregation can be constructed during either ingest or query time.

edgan8 (Contributor Author) commented Nov 6, 2018

@gianm @fjy I am not familiar with the best practices so let me know if you need more information. Thanks!

@fjy fjy added this to the 0.13.1 milestone Nov 7, 2018
fjy (Contributor) commented Nov 7, 2018

@edgan8 we'll review soon! Thanks for the contrib!

@jon-wei jon-wei self-assigned this Nov 7, 2018
private MomentSketchWrapper momentsSketch;

public MomentSketchBuildAggregator(
final ColumnValueSelector<Double> valueSelector,
Member

Should use BaseDoubleColumnValueSelector instead

private final boolean compress;
private final byte cacheTypeId;

private static final byte MOMENTS_SKETCH_CACHE_ID = 0x51;
Member

This constant should belong to AggregatorUtil

@Override
public int getMaxIntermediateSize()
{
return (k + 2) * 8 + 2 * 4 + 8;
Member

Should use constants like Integer.BYTES and also explain each addend in comments
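The suggested rewrite might look like the sketch below. The arithmetic matches the original expression, but the meaning attached to each addend in the comments is an assumption about the sketch layout, not taken from the PR.

```java
// Illustrative rewrite of the size computation using named byte-size
// constants, per the review comment. The per-addend comments are assumed,
// not confirmed by the code under review.
public class MomentSketchSizing
{
  public static int maxIntermediateSize(int k)
  {
    return (k + 2) * Double.BYTES  // assumed: k power sums plus two extra doubles
        + 2 * Integer.BYTES        // assumed: two int header fields
        + Double.BYTES;            // assumed: one more double of metadata
  }

  public static void main(String[] args)
  {
    // Same value as the original (k + 2) * 8 + 2 * 4 + 8 for k = 13
    System.out.println(maxIntermediateSize(13));
  }
}
```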

@Override
public Object combine(Object lhs, Object rhs)
{
MomentSketchWrapper union = (MomentSketchWrapper) lhs;
Member

Both lhs and rhs could be null
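A null-safe combine along the lines of this comment could be shaped as below. The `Sketch` type is a minimal stand-in for MomentSketchWrapper, whose real API is not shown in this thread.

```java
// Sketch of a null-safe combine(): either side may be null (e.g. when a
// partition produced no values). The nested Sketch class is a stand-in
// for MomentSketchWrapper, used only to make the example runnable.
public class NullSafeCombine
{
  static class Sketch
  {
    long count;
    Sketch(long count) { this.count = count; }
    void merge(Sketch other) { count += other.count; }
  }

  public static Object combine(Object lhs, Object rhs)
  {
    if (lhs == null) {
      return rhs;
    }
    if (rhs == null) {
      return lhs;
    }
    Sketch union = (Sketch) lhs;
    union.merge((Sketch) rhs);
    return union;
  }

  public static void main(String[] args)
  {
    Sketch a = new Sketch(3);
    Sketch b = new Sketch(4);
    System.out.println(combine(null, b) == b);          // true
    System.out.println(((Sketch) combine(a, b)).count); // 7
  }
}
```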

final byte cacheTypeId
)
{
if (name == null) {
Member

Objects.requireNonNull is shorter

private final boolean compress;

public MomentSketchBuildBufferAggregator(
final ColumnValueSelector<Double> valueSelector,
Member

Should use BaseDoubleColumnValueSelector instead

}

@Override
public synchronized void aggregate(final ByteBuffer buffer, final int position)
Member

Should use finer, read-write concurrency; see HllSketchBuildBufferAggregator for example

Contributor

I was told here #6381 that synchronization is not needed in buffer aggregators.

Contributor Author

@leventov do you have suggestions on how to proceed here?

Contributor

the buffer aggregator doesn't need synchronization, but the non-buffer aggregator does since those are used in realtime ingestion tasks which can be queried as they're ingesting

Member

Some buffer aggregators do need synchronization for OffheapIncrementalIndex. See #3956.

Contributor Author

OK I will keep both the buffer and non-buffer aggregators synchronized then.
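The conclusion above (keep the on-heap aggregator synchronized because realtime tasks can be queried while ingesting) amounts to the pattern sketched below, with a plain running sum standing in for the moments sketch.

```java
// Sketch of the synchronization decision reached in this thread:
// aggregate() and get() are both synchronized so a query thread never
// observes a partially applied update from the ingestion thread.
// A simple sum stands in for the actual sketch state.
public class SynchronizedAggregator
{
  private double sum;

  public synchronized void aggregate(double value)
  {
    sum += value;
  }

  public synchronized Object get()
  {
    return sum;
  }

  public static void main(String[] args) throws InterruptedException
  {
    SynchronizedAggregator agg = new SynchronizedAggregator();
    Thread[] writers = new Thread[4];
    for (int i = 0; i < writers.length; i++) {
      writers[i] = new Thread(() -> {
        for (int j = 0; j < 1000; j++) {
          agg.aggregate(1.0);
        }
      });
      writers[i].start();
    }
    for (Thread t : writers) {
      t.join();
    }
    System.out.println(agg.get()); // 4000.0 — no updates lost under contention
  }
}
```

The buffer aggregator, as discussed above, can usually skip this locking; whether it needs it depends on whether it is used with OffheapIncrementalIndex.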

}

@Override
public synchronized void close()
Member

Doesn't need to be synchronized

public class MomentSketchMergeAggregatorFactory extends MomentSketchAggregatorFactory
{
public static final String TYPE_NAME = "momentSketchMerge";
private static final byte MOMENTS_SKETCH_MERGE_CACHE_ID = 0x52;
Member

This constant should belong to AggregatorUtil

public byte[] getCacheKey()
{
final CacheKeyBuilder builder = new CacheKeyBuilder(
AggregatorUtil.QUANTILES_DOUBLES_SKETCH_TO_QUANTILES_CACHE_TYPE_ID).appendCacheable(field);
Member

Not sure that this constant should be used

Member

According to the Druid style, ).appendCacheable(field); should be on a separate line

leerho (Contributor) commented Nov 8, 2018

We just became aware of the underlying paper for this submission a few days ago and are still in the process of reviewing it.

It is not up to me whether this code should be merged into Druid as a custom aggregator. However, the code has almost no Javadocs, and the paper may make it difficult for many users to fully understand the trade-offs, advantages, and disadvantages of how and when to use this kind of quantiles sketch as compared to the DataSketches (DS) quantiles sketches already available in Druid.

Quoting from the paper's Abstract:

Empirical evaluation shows that the moments sketch can achieve less than 1 percent quantile error with 15× less overhead than comparable summaries, improving end query time in the MacroBase engine by up to 7× and the Druid engine by up to 60×.

This is a very exciting claim, but understanding some of the assumptions behind this claim requires a bit of a deep-dive into the paper and unraveling what this sketch would be great at doing, and where it may not be so great.

The Moments-Quantiles sketch (M-Sketch) has been optimized for merge-time performance, and on this metric the M-Sketch really shines: its merge speed can be an order of magnitude faster than the DS-Sketch's. Meanwhile, obtaining an M-Sketch quantile estimate can take milliseconds, while the DS-Sketch is in the microsecond range.

The paper defines the primary metric for sketch performance as total query time, where merges are numerous and get-quantile estimates are rare, and perhaps for many Druid queries this trade-off makes sense.

The other major tradeoff is how sketch accuracy is defined and measured.

  1. The paper is very clear: "The M-Sketch accuracy is dataset dependent". By comparison, the accuracy of the DS-Sketch is data independent.

This difference has several practical real-world implications.

The DS-Sketch doesn't care how ugly your input data stream is. It can have negative values, zeros, gaps, and multiple spikes or blocks in its distribution of values; the values can range over many orders of magnitude; and the error guarantees of the DS-Sketch will still hold.

However, the authors of the paper make it clear in several places that M-Sketch error guarantees only apply to relatively well-behaved and smooth value distributions (what the paper calls "non-pathological"). Unfortunately, the term "non-pathological" is not well defined, and the user has no way of knowing whether any given input data stream is appropriately "non-pathological" without performing extensive brute-force quantile analysis of the stream and comparing it with the M-Sketch results.

Another subtle difference between the two types of sketches is how error performance is measured and quoted.

(This part is greatly simplified.)

The M-sketch paper effectively defines a total area difference between two distribution curves; one being the real underlying distribution, and the other being the curve effectively modeled by the moments computed by the sketch. Then the paper defines the maximum error as effectively the maximum average error (the integral) of all queries along all points of the distribution. The DS-Sketch defines the maximum error as the maximum difference between the two curves at any point along the full range of the distribution.

This means that the actual error from the M-Sketch could be huge (many times the quoted maximum error) for parts of the distribution, and very small for other parts of the curve so that, on average, if you perform queries over the full range of values of the distribution, the average error would be pretty good. But the user would have no clue where along the distribution the error is very low, or outrageously high.

In contrast, the DS-Sketch error guarantee is for any single query and for all queries.

The other consequence of the M-Sketch's error dependence on the data is that the M-Sketch cannot give the user any before-the-fact guidance on what the error will be after the data has been sketched. The DS-Sketch does provide this guidance.

So as long as you know a great deal about the underlying distributions of your data, or you don't care too much about error, and you are only concerned about total-query-time, go ahead and use the M-Sketch.

If you don't know anything about the underlying distributions of your data and you do care about error, and slower total-query-time is an affordable trade-off, then I would advise you use the DataSketches/quantiles sketch.

Cheers,

Lee.

edgan8 (Contributor Author) commented Nov 8, 2018

@leerho thanks for posting the great summary! Author of the paper here, I wanted to confirm that I fully agree with your analysis of when the M-sketch would or would not be appropriate. The actual error is data dependent. In many situations I would prefer to use the datasketches library as well which I found to be fairly reliable in my experiments. One other point to keep in mind is that the space usage of the M-sketch is also extremely low (usually < 150 bytes) so I often think of it as serving as a supplement to basic statistics and tiny histograms rather than a full-fledged sketch.

@gianm gianm requested a review from jon-wei November 13, 2018 22:14
jon-wei (Contributor) left a comment

Finished initial review, can you add a doc page under docs/content/development/extensions-contrib as well?

jon-wei (Contributor) left a comment

What do you think about adding a min/max post-agg for moments sketch, similar to http://druid.io/docs/latest/development/extensions-core/approximate-histograms.html#min-post-aggregator?

edgan8 (Contributor Author) commented Nov 25, 2018

Thanks for the review @jon-wei, @leventov, I'll make the changes and add the min/max post-aggregators.

jon-wei (Contributor) commented Jan 2, 2019

@edgan8 Is this ready for review again?

edgan8 (Contributor Author) commented Jan 3, 2019

@jon-wei not yet, I need to write the documentation and fix some more issues. Will ping this thread in the next day or two!

edgan8 (Contributor Author) commented Jan 8, 2019

@jon-wei , @leventov this is ready for review again. Thanks for your help!

@@ -103,6 +107,8 @@ public MomentSolver getSolver()
public double[] getQuantiles(double[] fractions)
{
MomentSolver ms = new MomentSolver(data);
// Constants here are chosen to yield maximum precision while keeping solve times ~1ms on 2Ghz cpu
// Grid size can be increased if longer solve times are acceptable
Contributor

When multiple quantiles are requested, the estimation could be more efficient if MomentSolver had a getQuantiles method that accepts an array: the cdf could be reused, and you would only need one pass of the latter for loop.

https://github.com/stanford-futuredata/momentsketch/blob/master/momentsolver/src/main/java/com/github/stanfordfuturedata/momentsketch/MomentSolver.java#L88
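The suggestion boils down to computing the CDF grid once and answering every requested fraction from it. The sketch below shows that shape only; it is a stand-in, not the real MomentSolver (whose internals live in the momentsketch repo linked above).

```java
// Sketch of the batched-quantile idea: instead of re-solving the CDF per
// quantile, build it once over the value grid and scan it for each
// requested fraction. Stand-in code, not the actual MomentSolver.
public class BatchedQuantiles
{
  // cdf[i] is the estimated cumulative probability at gridValues[i];
  // both arrays are assumed non-decreasing.
  public static double[] getQuantiles(double[] gridValues, double[] cdf, double[] fractions)
  {
    double[] quantiles = new double[fractions.length];
    for (int i = 0; i < fractions.length; i++) {
      int j = 0;
      // first grid cell whose cumulative mass reaches the requested fraction
      while (j < cdf.length - 1 && cdf[j] < fractions[i]) {
        j++;
      }
      quantiles[i] = gridValues[j];
    }
    return quantiles;
  }

  public static void main(String[] args)
  {
    double[] grid = {1.0, 2.0, 3.0, 4.0};
    double[] cdf = {0.25, 0.5, 0.75, 1.0};
    double[] qs = getQuantiles(grid, cdf, new double[]{0.5, 0.99});
    System.out.println(qs[0] + " " + qs[1]); // 2.0 4.0
  }
}
```

The expensive part (solving for the CDF) is paid once, so each extra quantile only costs a scan of the grid.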


jon-wei (Contributor) commented Jan 14, 2019

Hi @edgan8, can you take another look? I've left some comments.

public Object extractValue(final InputRow inputRow, final String metricName)
{
Object rawValue = inputRow.getRaw(metricName);
if (rawValue instanceof MomentSketchWrapper) {
Member

This explicitness doesn't add a lot of value; just cast and let it throw ClassCastException. Unlike the current code, the exception will also report the offending class.


public class MomentSketchComplexMetricSerde extends ComplexMetricSerde
{
private static final MomentSketchObjectStrategy strategy = new MomentSketchObjectStrategy();
Member

Static field name should be all uppercase

MomentSketchMaxPostAggregator.TYPE_NAME
)
).addSerializer(
MomentSketchWrapper.class,
Member

Unnecessary breakdown.

public List<? extends Module> getJacksonModules()
{
return ImmutableList.of(
new SimpleModule(getClass().getSimpleName()
Member

Current formatting is not aligned with the Druid style. It could be

new SimpleModule(
    getClass().getSimpleName()
).registerSubtypes(
    ...

Or

new SimpleModule(getClass().getSimpleName())
    .registerSubtypes(
        ...

public void configure(Binder binder)
{
String typeName = MomentSketchAggregatorFactory.TYPE_NAME;
if (ComplexMetrics.getSerdeForType(typeName) == null) {
Member

Could you please replace this boilerplate pattern with a single method registerSerde(String, Supplier<ComplexMetricSerde>) throughout the code?

Contributor Author

I'll create the method and update my module to use it, but I don't feel comfortable updating other modules in this PR. Maybe this could be migrated in a future PR?
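The requested helper might look roughly like the sketch below. ComplexMetricSerde and the registry map are stand-ins for the Druid classes referenced in the snippet above; only the shape of the helper is the point.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of the suggested registerSerde(String, Supplier<ComplexMetricSerde>)
// helper. The nested interface and the SERDES map stand in for Druid's
// ComplexMetricSerde and ComplexMetrics; the helper registers a serde only
// if none is registered yet, constructing it lazily via the supplier.
public class SerdeRegistry
{
  interface ComplexMetricSerde {}

  static final Map<String, ComplexMetricSerde> SERDES = new HashMap<>();

  public static void registerSerde(String typeName, Supplier<ComplexMetricSerde> serdeSupplier)
  {
    // construct and register only when the type is not yet registered
    SERDES.computeIfAbsent(typeName, name -> serdeSupplier.get());
  }

  public static void main(String[] args)
  {
    registerSerde("momentSketch", () -> new ComplexMetricSerde() {});
    ComplexMetricSerde first = SERDES.get("momentSketch");
    registerSerde("momentSketch", () -> new ComplexMetricSerde() {});
    System.out.println(SERDES.get("momentSketch") == first); // true: second call is a no-op
  }
}
```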

public MomentSketchWrapper fromByteBuffer(ByteBuffer buffer, int numBytes)
{
if (numBytes == 0) {
return EMPTY_SKETCH;
Member

Does EMPTY_SKETCH.toByteArray() result in an empty array? Currently there is a discrepancy between fromByteBuffer() and toBytes() that looks suspicious when just reading the code.

edgan8 (Contributor Author) commented Jan 15, 2019

Good point, I'll make empty_bytes consistently correspond to a null sketch


import java.nio.ByteBuffer;

public class MomentSketchWrapper
Member

This class contains mathematical bits that are not obvious. Its javadoc comment should refer to some document (a paper at least, but preferably something more approachable) after reading which a reader of this class could understand what "moment solving" is, the ArcSinh compression concept, what log(x + sqrt(1 + x^2)) means, etc.

Ideally it's just explained inline.

P. S. I see those things are explained to some extent in MomentSketchAggregatorFactory. Please link to this class in the javadoc comment too.

}

@Override
public Object combine(Object lhs, Object rhs)
Member

Parameters should be annotated @Nullable.

return serializedSketch;
}
throw new ISE(
"Object is not of a type that can be deserialized to a Moments Sketch"
Member

Add a space at the end of the string


edgan8 (Contributor Author) commented Jan 15, 2019

Thank you for the comments, I will update this later in the week!

edgan8 (Contributor Author) commented Jan 16, 2019

@jon-wei , @leventov , thank you for the review, I've updated to address the comments.

leventov (Member) commented Jan 16, 2019

@edgan8 as a side note, please don't "mark conversations as resolved". I wish this GitHub feature could be turned off. As a reviewer I have to revisit each conversation and verify myself that it's resolved anyway; marking it as resolved just adds clicking work for me.

edgan8 (Contributor Author) commented Jan 16, 2019

@leventov Ah I did not realize it affected your workflow like that. Sure thing, I will be careful with that in the future!

jon-wei (Contributor) commented Jan 17, 2019

@edgan8 There are a couple of checkstyle errors

jon-wei (Contributor) commented Jan 22, 2019

@edgan8 Can you fix the checkstyle errors and conflict when you get a chance?

edgan8 (Contributor Author) commented Jan 24, 2019

@jon-wei just pushed


jon-wei (Contributor) left a comment

Had a few minor comments, LGTM otherwise

|name|A String for the output (result) name of the calculation.|yes|
|fieldName|A String for the name of the input field (can contain sketches or raw numeric values).|yes|
|k|Parameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Usable range is generally [3,15] |no, defaults to 13|
|compress|Flag for whether the aggregator compresses numeric values using arcsinh. Can improve robustness to skewed and long-tailed distributions, but reduces accuracy slightly on more uniform distributions.|| no, defaults to true
Contributor

distributions.|| no, defaults to true has a formatting error
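For reference, the parameters documented in the table above would appear in an aggregator spec shaped roughly like the following; the type string and the field/output names are illustrative, not confirmed by this thread.

```json
{
  "type": "momentSketch",
  "name": "sketch",
  "fieldName": "value",
  "k": 13,
  "compress": true
}
```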

{
final CacheKeyBuilder builder = new CacheKeyBuilder(
PostAggregatorIds.MOMENTS_SKETCH_TO_QUANTILES_CACHE_TYPE_ID
).appendCacheable(
Contributor

nit: I think it'd be better to keep each builder call on the same line, e.g. .appendCacheable(field)

public MomentSketchAggregatorFactory(
@JsonProperty("name") final String name,
@JsonProperty("fieldName") final String fieldName,
@JsonProperty("k") final Integer k,
Contributor

this should have @Nullable as well

jon-wei (Contributor) commented Jan 29, 2019

@edgan8 There's a new conflict from #6397, probably in the aggregator cache ID definitions

@leventov Can you take another look when you get a chance?

@jon-wei jon-wei removed this from the 0.14.0 milestone Feb 5, 2019
leerho (Contributor) commented Feb 5, 2019

I just completed a comparative analysis of the Druid Approximate Histogram, the Moments Sketch, and the DS-Quantiles sketch against some actual time-spent data we collected from one of our servers.
You might want to take a look.

This data has a well-behaved and smooth distribution so any reasonable quantile or histogram tool should be able to handle it without any issues.

edgan8 (Contributor Author) commented Feb 5, 2019

@leerho thank you for the careful analysis. I believe the larger than expected errors are due to the huge spike of zero values. I can update the moments sketch to handle zero values separately, since they are so common, and see how that affects accuracy, though that should not affect any of the Druid integration code.

leerho (Contributor) commented Feb 6, 2019

@edgan8 Thank you for your reply.

...larger than expected errors...

  1. What is the "expected error"? The fact is that the Moments Sketch is an empirical algorithm and you cannot state a priori what the expected error will be on any subsequent query.

  2. Providing a patch to fix the zeros spike is but a band-aid for this particular data set. It does not provide any guarantees for better error on other data sets. As I stated in the study, "real data can be quite ugly", and it is not hard to find real data sets with lots of strange bumps, spikes and values that do not fit the theoretically smooth and well-behaved distributions that the Moments Sketch relies on.

  3. If a user does not bother to perform an exact analysis or compare the Moment Sketch results with a sketch that provides, a priori, useful (e.g. L_inf) error bounds, he/she would never know that there was a severe error problem! This is my biggest concern. Even the Druid engineers do not know what kinds of data that their customers will be asking Druid to analyze.

  4. There is no argument that the Moments Sketch is fast and small in size, but what good is that if it can produce large errors as a result? Worse, the end user has no way to know, after the fact, what the error actually is!

  5. Hopefully, we can learn from Druid's experience with the Approximate Histogram and the Druid HLL sketch. Because Druid incorporated those two algorithms into its core early on, users assumed that they must be good and that they had been sufficiently tested and studied. After all, the Druid engineers are a bunch of really smart people!

Unfortunately, Druid end users did not bother to find and read the AH paper where the authors admit that the AH algorithm has serious limitations. And they likely didn't have the necessary skills to do a deep dive into the Druid HLL sketch algorithm to uncover its problems. Now, unfortunately, both groups of users are stuck with lots of historical data of dubious quality with no means of recovery.

AlexanderSaydakov (Contributor) commented Feb 6, 2019

Currently the Moments Sketch can estimate the quantile value for a given rank.
What about the inverse query: estimate the rank of a given value? What about the probability mass function (histogram)?

edgan8 (Contributor Author) commented Feb 6, 2019

@leerho I completely agree; it seems the concern is about the recommendations we provide to users. Due to concerns about robustness, I don't think the moments sketch should be a recommended first choice in standard environments either. For users with extreme demands, my understanding was that putting an extension package in contrib was a convenient location to allow them to experiment, and I can work on updating the documentation to make that clearer. If the Druid maintainers have different plans for contrib, then I can move this back to an external repository; I don't have a stake.

@jon-wei please let me know your thoughts on the best place to put this package moving forward, I am also happy keeping it in my own repository if that is more convenient.

@AlexanderSaydakov those features are not difficult to add if people find them important.

jon-wei (Contributor) commented Feb 6, 2019

@jon-wei please let me know your thoughts on the best place to put this package moving forward, I am also happy keeping it in my own repository if that is more convenient.

I think extensions-contrib is a good place for this aggregator, we can make the characteristics and limitations clear in the docs, and have it as something that users can experiment with while recommending the DS-Sketch for general use.

leerho (Contributor) commented Feb 6, 2019

@jon-wei I hope that the studies I reference in this thread, along with my concerns and @edgan8's agreement, provide sufficient information for someone to generate relevant documentation. Since I don't really know how (style, place, format, etc.) the Druid team wants to document these algorithms, someone on the Druid team should do that. Once it's generated I would be happy to review and comment, but I really need someone to take ownership of the documentation of these algorithms for Druid.

jon-wei (Contributor) commented Feb 7, 2019

@leerho @edgan8 I'll take care of writing the Druid doc pages based on the information you've provided, thanks a lot for the help!

AlexanderSaydakov (Contributor)

@jon-wei you may want to update the docs for the Approximate Histogram and Druid HLL as well.

jon-wei (Contributor) commented Feb 8, 2019

@AlexanderSaydakov that sounds good, the 0.14.0 docs will try to move users away from ApproximateHistogram and the old Druid HLL

jon-wei (Contributor) commented Feb 12, 2019

@leventov did you have more comments on this?

leventov (Member)

I'm OK with this PR.

@jon-wei jon-wei merged commit 90c1a54 into apache:master Feb 13, 2019
jon-wei (Contributor) commented Feb 13, 2019

@edgan8 thanks for the contrib!

glasser (Contributor) commented Feb 13, 2019

This seems to have broken CI on master.
See https://travis-ci.org/apache/incubator-druid/builds/492938510 from master (and my PRs on top of it are failing too).

jon-wei (Contributor) commented Feb 13, 2019

@glasser I'm making a PR to fix the build issues

edgan8 (Contributor Author) commented Feb 14, 2019

Thanks for your help @jon-wei, @leventov, and @leerho !

edgan8 (Contributor Author) commented Apr 10, 2019

@jon-wei do you have an update on when / how this will be included in a future build? Just curious if I should follow-up on anything.

gianm (Contributor) commented Apr 11, 2019

@edgan8 This'll be available as a contrib extension starting in 0.15.0. No need to do anything on your end!
