
Add mempool statistics collector #8501

Closed
wants to merge 2 commits from jonasschnelli:2016/08/stats_rpc

Conversation

@jonasschnelli
Member

@jonasschnelli jonasschnelli commented Aug 12, 2016

This PR adds a statistics collector class which aims to collect various types of statistics up to a configurable maximum memory target. At the moment, only mempool statistics are collected.

Motivation

Adding more statistics and visualizations to the GUI would make it more useful. To do so, we need stats that are collected even while the visualization is not visible (example: the GUI network graph only has data for the time it was visible, which makes it of limited use).

How it works

This PR adds a simple stats manager that polls stats via a recurring CScheduler task.

The samples are not guaranteed to be equally spaced in time; each sample stores the time delta to the previous one (uint16_t).

API

  • -statsenable (default: disabled)
  • -statsmaxmemorytarget: maximum memory target to use for statistics (default: 10MB)
  • RPC: getmempoolstats
    Example output:

[
  {
    "percision_interval": 2,
    "time_from": 1494252401,
    "samples": [
      [ 11, 1, 1008, 0 ],
      ....
    ]
  },
  ...
]

Features

-> CScheduler-driven sample collecting (current interval 2000ms)
-> Relevant mempool data (size, memory usage, min fee) gets written to an atomic cache, so no additional locking is required (see the sketch below)
-> Multiple precision levels (currently three: 2s, 60s, 1800s)
-> Memory target that determines how many samples fit within the given limit
-> Samples only carry a 2-byte time delta to the previous sample, which saves some memory
-> Flexible design; adding more data points (bandwidth, UTXO stats, etc.) should be simple.
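
For illustration, a minimal sketch of the sample layout, the atomic cache and the trimming against the memory target could look roughly like this (MempoolSample, MempoolCache, collectSample and the member names are illustrative and not taken from this PR; in the PR the collection callback is registered via CScheduler):

#include <atomic>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

// One collected data point; only a small time delta to the previous
// sample is stored instead of a full timestamp.
struct MempoolSample {
    uint16_t timeDelta;   // seconds since the previous sample
    uint32_t txCount;     // transactions in the mempool
    uint64_t dynMemUsage; // dynamic memory usage in bytes
    int64_t minFeePerK;   // mempool min fee per kB
};

// Atomic cache written from the mempool code paths (no extra locking there);
// the recurring scheduler task reads it at every collection interval.
struct MempoolCache {
    std::atomic<uint32_t> txCount{0};
    std::atomic<uint64_t> dynMemUsage{0};
    std::atomic<int64_t> minFeePerK{0};
};

class MempoolStatsSketch {
public:
    // Called from the recurring scheduler task (every 2000ms in this PR).
    void collectSample(const MempoolCache& cache, uint64_t nowSeconds) {
        std::lock_guard<std::mutex> lock(m_cs);
        if (m_samples.empty()) m_startTime = nowSeconds;
        const uint64_t lastTime = m_startTime + m_deltaSum;
        const uint16_t delta = static_cast<uint16_t>(nowSeconds - lastTime);
        m_samples.push_back({delta, cache.txCount.load(),
                             cache.dynMemUsage.load(), cache.minFeePerK.load()});
        m_deltaSum += delta;
        // Trim the oldest samples once the configured memory target is exceeded.
        while (!m_samples.empty() &&
               m_samples.size() * sizeof(MempoolSample) > m_maxMemoryTarget) {
            m_startTime += m_samples.front().timeDelta;
            m_deltaSum -= m_samples.front().timeDelta;
            m_samples.erase(m_samples.begin());
        }
    }

private:
    std::mutex m_cs;
    std::vector<MempoolSample> m_samples;
    uint64_t m_startTime{0};
    uint64_t m_deltaSum{0};                      // sum of all stored time deltas
    size_t m_maxMemoryTarget{10 * 1024 * 1024};  // -statsmaxmemorytarget default
};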

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch 2 times, most recently Aug 12, 2016
@isle2983
isle2983 reviewed Aug 13, 2016
src/stats/stats.cpp Outdated

// return all available samples
fromTime = mempoolStats.startTime + mempoolStats.vSamples.front().timeDelta;
;


@isle2983

isle2983 Aug 13, 2016
Contributor

is this semicolon unintentional?

@isle2983
isle2983 reviewed Aug 14, 2016
src/stats/stats.h Outdated
#include <boost/signals2/signal.hpp>

struct CStatsMempoolSample {
uint32_t timeDelta; //use 32bit time delta to save memmory


@isle2983

isle2983 Aug 14, 2016
Contributor

s/memmory/memory/

@isle2983
isle2983 reviewed Aug 14, 2016
src/stats/stats.h Outdated
/* set the target for the maximum memory consuption (in bytes) */
void setMaxMemoryUsageTarget(size_t maxMem);

/* get the statictis module help strings */


@isle2983

isle2983 Aug 14, 2016
Contributor

s/statictis/statistics/

@isle2983
isle2983 reviewed Aug 14, 2016
src/stats/stats.h Outdated
/* get all mempool samples (non interpolated) */
mempoolSamples_t mempoolGetValuesInRange(uint64_t& fromTime, uint64_t& toTime);

/* set the target for the maximum memory consuption (in bytes) */


@isle2983

isle2983 Aug 14, 2016
Contributor

s/consuption/consumption/

@isle2983
isle2983 reviewed Aug 14, 2016
src/stats/stats.cpp Outdated
if (mempoolStats.vSamples.size() && mempoolStats.vSamples.back().timeDelta + SAMPLE_MIN_DELTA_IN_SEC >= now - mempoolStats.startTime)
return;

// calculate the current time detla and add a sample


@isle2983

isle2983 Aug 14, 2016
Contributor

s/detla/delta/

@isle2983
isle2983 reviewed Aug 14, 2016
src/stats/stats.cpp Outdated
const bool CStats::DEFAULT_STATISTICS_ENABLED = false;
std::atomic<bool> CStats::statsEnabled(false); //disable stats by default

CStats defaultStats;


@isle2983

isle2983 Aug 14, 2016
Contributor

This 'defaultStats' instance is not used.

@isle2983
Contributor

@isle2983 isle2983 commented Aug 15, 2016

Hi Jonas,

I understand the concern in your description about the addMempoolSample() stat
bookkeeping being designed to be as lightweight as possible in the critical execution
path. However, I have a few (perhaps under-informed, neophyte) questions which
would help me understand the design considerations better:

  1. The comment in rpc_stats.cpp hints that the overhead of the JSON string
    generation is best optimized by this 'flat' encoding, as opposed to some
    encoding like:

{ "fieldNames" : ["delta_in_secs", "tx_count", "dynamic_mem_usage", "min_fee_per_k"], "samples" : [[val1, val2, val3, val4], [val1, val2, val3, val4], ] }

Is the 'flat' encoding strictly needed? Or is there some other concern with
outputting a slightly more convenient format than 'flat'?

  2. It appears possible to set the maximum memory target very high such that
    many, many samples are collected, and the overhead of the computation in
    mempoolGetValuesInRange() inside the lock might become onerous (assuming I am
    correctly understanding how the lock works and the implications of holding it
    too long). Have you considered taking a copy of 'mempoolStats' in a way that
    lets you release the lock earlier, and doing the dataset computation outside the
    lock? (Is that even possible under the current execution model?)

Cheers,

Isle

@jonasschnelli
Member Author

@jonasschnelli jonasschnelli commented Aug 15, 2016

@isle2983 Welcome to GitHub.
Thanks for your feedback and your nitpicks. I really appreciate it and I'll process them during the next few hours.

For your questions/inputs:

  1. My idea with the flat JSON output was to bypass the JSON encoding/decoding. [val1, val2, val3, val4], [val1, val2, val3, val4] should also work. I just thought a single string would result in faster encoding and decoding performance. But your approach seems to be the better choice, although I'm not sure if we want to use UniValue for the encoding or just append strings... maybe we should start with the former and use a more optimized encoding if the JSON overhead turns out to be a problem.

  2. Yes, that's a good point. Copying the samples vector could result in a memory peak when a large -statsmaxmemorytarget is used.

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch 4 times, most recently Aug 15, 2016
@MarcoFalke
MarcoFalke reviewed Aug 19, 2016
src/stats/rpc_stats.cpp Outdated
flatData += std::to_string(sample.txCount) + ",";
flatData += std::to_string(sample.dynMemUsage) + ",";
flatData += std::to_string(sample.minFeePerK) + ",";
}


@MarcoFalke

MarcoFalke Aug 19, 2016
Member

Would it make sense to add a line break after each sample's values?


@MarcoFalke

MarcoFalke Aug 19, 2016
Member

Or just add the names of the columns as another entry in the dict.

Otherwise I fail to see how this rpc call is useful.

@isle2983
Contributor

@isle2983 isle2983 commented Aug 20, 2016

I have been playing around making my own changes off these commits (isle2983:getmempoolstats), mostly just to get some hands-on time with the code and try to get my C++ up to par.

But anyway, I made the rpc output of the samples full JSON:

{
  "enabled": true,
  "maxmemorytarget": 10485760,
  "currentusage": 1734416,
  "time_from": 1471573271,
  "time_to": 1471657376,
  "sampleCount": 27131,
  "sampleFieldNames": [
    "timeDelta", 
    "txCount", 
    "dynMemUsage", 
    "minFeePerK"
  ],
  "samples": [
    [
      0, 
      1, 
      1728, 
      0
    ], 
    [
      4, 
      11, 
      15232, 
      0
    ], 
    ...
    (snip)
    ]
}

The JSON 'pretty' print through bitcoin-cli is definitely unwieldy. However, the computational overhead of doing the wrangling doesn't seem so bad.

The 1.7 MB of stat data is from collecting just overnight. With that data, I can pull it off the node, parse and convert the JSON into CSV with a Python script, and plot it in gnuplot in under a second.

$ time myJunk/plotTxCount.sh 

real    0m0.966s
user    0m0.460s
sys     0m0.128s

Not sure how this compares with the Qt GUI stats branch that is in flight, but it doesn't seem too bad on the face of it.

Also, if getting this info from the node to the UI quickly is a concern, perhaps a denser, binary-like format is worth considering, i.e.:

{"stats_blob":"8b292cf....."}

One could imagine it being more efficient than even the 'flat' format, depending on the sophistication.

@jonasschnelli
Member Author

@jonasschnelli jonasschnelli commented Aug 22, 2016

Thanks @isle2983 for the testing, benchmarks and improvements.
I have switched to the proposed array format for the samples (rather than the flat structure). A more performant binary format (inside the JSON) would be a hack. More performance would probably be possible over ZMQ... but it's currently a push-only channel.

I also thought again about copying the samples vector before filtering it. I came to the conclusion that it's not worth generating a memory peak (by copying the whole samples vector) in order to allow a faster release of the LOCK. The filtering should be very fast because it only compares some uint32s and constructs a new vector from from-/to-iterators (which should also perform fast).
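
To illustrate, that in-lock filtering can be as little as a walk over the time deltas plus a single from-/to-iterator vector construction, roughly like this (an illustrative sketch, not the PR's actual code; Sample stands in for CStatsMempoolSample):

#include <cstdint>
#include <vector>

struct Sample { uint16_t timeDelta; uint32_t txCount; uint64_t dynMemUsage; int64_t minFeePerK; };

// Walk the deltas to find the first and last sample inside [fromTime, toTime],
// then build the result directly from the iterator range.
std::vector<Sample> FilterRange(const std::vector<Sample>& samples, uint64_t startTime,
                                uint64_t fromTime, uint64_t toTime)
{
    auto first = samples.end();
    auto last = samples.end();
    uint64_t t = startTime;
    for (auto it = samples.begin(); it != samples.end(); ++it) {
        t += it->timeDelta;
        if (first == samples.end() && t >= fromTime) first = it;
        if (t <= toTime) last = it;
    }
    if (first == samples.end() || last == samples.end() || last < first)
        return {};
    return std::vector<Sample>(first, last + 1); // from-/to-iterator construction
}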

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch Sep 9, 2016
@jonasschnelli
Member Author

@jonasschnelli jonasschnelli commented Sep 9, 2016

Needed rebase.

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch Oct 20, 2016
@jonasschnelli
Member Author

@jonasschnelli jonasschnelli commented Oct 20, 2016

Rebased.

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch Oct 20, 2016
@MarcoFalke MarcoFalke added this to the 0.14.0 milestone Nov 10, 2016
@MarcoFalke
Member

@MarcoFalke MarcoFalke commented Nov 10, 2016

Assigning "would be nice to have" for 0.14 per meeting today.

@morcos
Member

@morcos morcos commented Dec 5, 2016

Just saw @gmaxwell's comment on #8550 (which I completely agree with) and it reminded me to look at that PR and this one. Sorry for not getting involved sooner, but I really like the idea. I can think of many, many stats (dozens and dozens) that we might want to collect, both to potentially show to users in whiz-bangy GUIs and that would also be useful for developers and businesses trying to understand unusual behavior on the network.

If we envision that there might be 1 KB of different stats data, then maybe rather than just saving sample data points and trimming them when they hit the memory usage target, we should be smart about saving them along different time frames. For instance, we could have second, minute, hour and day sampling intervals; we could save 1000 points or more on each and still have quite reasonable memory usage, but they could be auto-trimmed... so if you wanted to look at data from hours ago, you couldn't look at it on the second time frame...

@jonasschnelli
Member Author

@jonasschnelli jonasschnelli commented Dec 5, 2016

@morcos: thanks for the comment. Yes, I completely agree. I think this is a first step, and the current design allows for features like the ones you mentioned.
I once started with interpolating values instead of just trimming the back: in theory you could reduce the "density" of the samples and interpolate the in-between values (up to the point where that still makes sense).
But yes, adding more stats will probably require individual limits and trim behaviours.

@morcos
Member

@morcos morcos commented Dec 5, 2016

@jonasschnelli Well, I guess what I was thinking was that one general framework might fit all stats. You log a stat with whatever frequency you want, and it's stored in up to 4 different histories (by second, minute, hour, day), each of which is trimmed to some limit (say 1000 or 2000 data points each). Is there any type of stat that such a general framework might not work well with?
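
Sketched out, such a general framework might look roughly like this (purely illustrative names; the four timeframes and the 1000-point cap just mirror the numbers in the comment above):

#include <cstddef>
#include <cstdint>
#include <deque>

// One fixed-interval history: a value is appended at most once per interval
// and the front is trimmed so no more than maxPoints are kept.
struct StatHistory {
    explicit StatHistory(int64_t interval) : intervalSeconds(interval) {}
    int64_t intervalSeconds;
    size_t maxPoints = 1000;
    int64_t lastSampleTime = 0;
    std::deque<double> values;

    void maybeAdd(int64_t now, double value) {
        if (now - lastSampleTime < intervalSeconds) return;
        lastSampleTime = now;
        values.push_back(value);
        if (values.size() > maxPoints) values.pop_front();
    }
};

// A generic stat logged at any frequency, kept in four auto-trimmed
// histories (by second, minute, hour, day).
struct Stat {
    StatHistory bySecond{1}, byMinute{60}, byHour{3600}, byDay{86400};

    void log(int64_t now, double value) {
        bySecond.maybeAdd(now, value);
        byMinute.maybeAdd(now, value);
        byHour.maybeAdd(now, value);
        byDay.maybeAdd(now, value);
    }
};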

src/stats/stats.cpp Outdated
{
std::string strUsage = HelpMessageGroup(_("Statistic options:"));
strUsage += HelpMessageOpt("-statsenable=", strprintf("Enable statistics (default: %u)", DEFAULT_STATISTICS_ENABLED));
strUsage += HelpMessageOpt("-statsmaxmemorytarget=<n>", strprintf(_("Set the memory limit target for statictics in bytes (default: %u)"), DEFAULT_MAX_STATS_MEMORY));


@paveljanik

paveljanik May 12, 2017
Contributor

statictics -> statistics

src/stats/stats_mempool.cpp Outdated
const static unsigned int MAX_SAMPLES = 5000;


const static unsigned int fallbackMaxSamplesPerPercision = 1000;


@paveljanik

paveljanik May 12, 2017
Contributor

fallbackMaxSamplesPerPercision -> fallbackMaxSamplesPerPrecision

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch May 12, 2017
@jonasschnelli
Member Author

@jonasschnelli jonasschnelli commented May 12, 2017

Fixed @paveljanik's points and nits.

src/stats/rpc_stats.cpp Outdated

};

void RegisterStatsRPCCommands(CRPCTable& tableRPC)


@paveljanik

paveljanik May 12, 2017
Contributor

-Wshadow emits:

stats/rpc_stats.cpp:78:42: warning: declaration shadows a variable in the global namespace [-Wshadow]


@jonasschnelli

jonasschnelli May 12, 2017
Author Member

Fixed.

@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch 2 times, most recently May 12, 2017
Contributor

@ryanofsky ryanofsky left a comment

Few more comments

// use "flat" json encoding for performance reasons
UniValue samplesObj(UniValue::VARR);
for (const struct CStatsMempoolSample& sample : samples) {
UniValue singleSample(UniValue::VARR);


@ryanofsky

ryanofsky Jun 13, 2017
Contributor

In commit "Add mempool statistics collector"

It seems like it would be more user friendly and extensible if this were an object instead of an array, so the elements could be named. It's a little awkward to have to remember that the first element in the array is the time delta, the second element is the transaction count, etc.
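
For illustration, the object-per-sample variant suggested here could look roughly like this (a sketch only, mirroring the quoted loop; it assumes UniValue's pushKV convenience overloads and reuses the field names seen elsewhere in this thread):

// Emit one self-describing JSON object per sample instead of a positional array.
UniValue samplesObj(UniValue::VARR);
for (const struct CStatsMempoolSample& sample : samples) {
    UniValue singleSample(UniValue::VOBJ);
    singleSample.pushKV("timeDelta", (uint64_t)sample.timeDelta);
    singleSample.pushKV("txCount", (uint64_t)sample.txCount);
    singleSample.pushKV("dynMemUsage", (uint64_t)sample.dynMemUsage);
    singleSample.pushKV("minFeePerK", (int64_t)sample.minFeePerK);
    samplesObj.push_back(singleSample);
}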


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Thread #8501 (comment)

Any thoughts on this? Named fields seems more natural than tuples in a JSON data structure.


@Sjors

Sjors Jan 8, 2018
Member

I tend to agree. I would also use the timestamp directly instead of an offset.

@@ -545,6 +546,7 @@ bool AcceptToMemoryPoolWorker(CTxMemPool& pool, CValidationState& state, const C
{
const CTransaction& tx = *ptx;
const uint256 hash = tx.GetHash();
CFeeRate poolMinFeeRate;


@ryanofsky

ryanofsky Jun 13, 2017
Contributor

In commit "Add mempool statistics collector"

Would be good to move declaration closer to where this is being set (at least as close as possible).


const size_t DEFAULT_MAX_STATS_MEMORY = 10 * 1024 * 1024; // 10 MB
const bool DEFAULT_STATISTICS_ENABLED = false;
const static unsigned int STATS_COLLECT_INTERVAL = 2000; // 2 secs


@ryanofsky

ryanofsky Jun 13, 2017
Contributor

In commit "Add mempool statistics collector"

Most of the other settings seem to be in seconds instead of milliseconds. Seems like it would be good to switch this one to seconds for consistency, and to be able to get rid of the division by 1000 in CStatsMempool::addMempoolSamples.
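
For example, the constant could be expressed in seconds and scaled only at the point where the scheduler expects milliseconds (a sketch under that assumption; collectCallback is a placeholder name, not from the PR):

const static unsigned int STATS_COLLECT_INTERVAL = 2; // collection interval in seconds

// At the call site, scale only where the scheduler interval is expected in milliseconds:
// scheduler.scheduleEvery(collectCallback, STATS_COLLECT_INTERVAL * 1000);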


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Thread #8501 (comment)

Would be nice to use either seconds or milliseconds internally in the stats classes instead of both.

startTime += vSamplesPerPrecision[i][j].timeDelta;
}
// remove element(s) at vector front
vSamplesPerPrecision[i].erase(vSamplesPerPrecision[i].begin(), vSamplesPerPrecision[i].begin() + (vSamplesPerPrecision[i].size() - vMaxSamplesPerPrecision[i]));


@ryanofsky

ryanofsky Jun 13, 2017
Contributor

In commit "Add mempool statistics collector"

Seems like it would be more efficient to switch to a circular buffer, or use std::deque instead of deleting from the beginning of the vector.
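
For comparison, a std::deque-based version of this trim could look roughly like this (a sketch, not the PR's code):

#include <cstddef>
#include <cstdint>
#include <deque>

struct Sample { uint16_t timeDelta; /* ...other fields... */ };

// Popping from the front of a deque is O(1) per element and avoids shifting
// the remaining samples, unlike vector::erase(begin(), begin() + n).
void TrimToLimit(std::deque<Sample>& samples, size_t maxSamples, uint64_t& startTime)
{
    while (samples.size() > maxSamples) {
        startTime += samples.front().timeDelta; // keep the absolute start time anchored
        samples.pop_front();
    }
}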


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Thread #8501 (comment)

Still worth considering alternatives to erasing from the beginning of a vector, I think.

std::atomic<uint64_t> startTime; //start time
std::vector<MempoolSamplesVector> vSamplesPerPrecision; //!<multiple samples vector per precision level
std::vector<size_t> vMaxSamplesPerPrecision; //!<vector with the maximum amount of samples per precision level
std::vector<size_t> vTimeLastSample; //!<vector with the maximum amount of samples per precision level


@ryanofsky

ryanofsky Jun 13, 2017
Contributor

In commit "Add mempool statistics collector"

All three of these vectors have the same length (number of precision levels). Seems like it would be cleaner to have one vector containing a struct with all the information that needs to be stored for a given precision level. It would also make the c++ data structure more consistent with the json data structure returned from rpc.


@jonasschnelli

jonasschnelli Jun 13, 2017
Author Member

That would be possible, though I think the current approach makes the code more readable.


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Do you think having parallel lists is more readable in c++ and having one list is more readable in json? Also, why are the first two vectors labeled perPrecision, and the third one not? I guess I think something like the following would be less ambiguous and duplicative:

struct PrecisionLevel {
    std::vector<CStatsMempoolSample> samples;
    size_t max_num_samples;
    int64_t last_sample_time;
};
std::vector<PrecisionLevel> m_precision_levels;

Could also rename cs_mempool_stats to cs_precision_levels to make it clearer what the lock is protecting.
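
With a struct like that, the trim loop quoted earlier could then read along these lines (sketch only; the startTime bookkeeping is omitted):

for (PrecisionLevel& level : m_precision_levels) {
    if (level.samples.size() > level.max_num_samples) {
        // drop the oldest samples for this precision level
        level.samples.erase(level.samples.begin(),
                            level.samples.begin() + (level.samples.size() - level.max_num_samples));
    }
}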


intervalCounter++;

if (intervalCounter > biggestInterval) {


@ryanofsky

ryanofsky Jun 13, 2017
Contributor

In commit "Add mempool statistics collector"

I don't understand what this is doing. It seems unnecessary, and strange because it is comparing the counter against numbers of seconds without taking collectInterval into account. If you will keep this, I think it'd be good to add a comment with an example of interval and counter values that explains how this is supposed to work.


@jonasschnelli

jonasschnelli Jun 13, 2017
Author Member

I have added this to avoid an overflow of intervalCounter. I thought resetting it once we exceed the highest possible interval seemed sane, but I agree that a size_t overflow will very likely never happen... would you recommend removing it?


@ryanofsky

ryanofsky Jun 13, 2017
Contributor

would you recommend to remove it?

I'd say remove it unless there's a way to reset it that makes sense (I'm sure there is one but nothing immediately comes to mind).


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Thread #8501 (comment)

Should follow up by removing this or adding comments suggested above.

@luke-jr
Member

@luke-jr luke-jr commented Aug 21, 2017

Rebased on my repo.

git checkout stats_rpc && git fetch git://github.com/luke-jr/bitcoin stats_rpc && git reset --hard FETCH_HEAD && git push ...
@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch 2 times, most recently from b3f506e Oct 5, 2017
@jonasschnelli jonasschnelli force-pushed the jonasschnelli:2016/08/stats_rpc branch to 7af0ea4 Oct 5, 2017
@jonasschnelli
Member Author

@jonasschnelli jonasschnelli commented Oct 5, 2017

Rebased (rebased @luke-jr's version).

# test mempool stats
time.sleep(15)
mps = self.nodes[0].getmempoolstats()
assert_equal(len(mps), 3) #fixed amount of percision levels


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

s/percision/precision

// use "flat" json encoding for performance reasons
UniValue samplesObj(UniValue::VARR);
for (const struct CStatsMempoolSample& sample : samples) {
UniValue singleSample(UniValue::VARR);

This comment has been minimized.

@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Thread #8501 (comment)

Any thoughts on this? Named fields seems more natural than tuples in a JSON data structure.


const size_t DEFAULT_MAX_STATS_MEMORY = 10 * 1024 * 1024; // 10 MB
const bool DEFAULT_STATISTICS_ENABLED = false;
const static unsigned int STATS_COLLECT_INTERVAL = 2000; // 2 secs

This comment has been minimized.

@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Thread #8501 (comment)

Would be nice to use either seconds or milliseconds internally in the stats classes instead of both.

startTime += vSamplesPerPrecision[i][j].timeDelta;
}
// remove element(s) at vector front
vSamplesPerPrecision[i].erase(vSamplesPerPrecision[i].begin(), vSamplesPerPrecision[i].begin() + (vSamplesPerPrecision[i].size() - vMaxSamplesPerPrecision[i]));

This comment has been minimized.

@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Thread #8501 (comment)

Still worth considering alternatives to erasing from the beginning of a vector, I think.

const bool DEFAULT_STATISTICS_ENABLED = false;
const static unsigned int STATS_COLLECT_INTERVAL = 2000; // 2 secs

CStats* CStats::sharedInstance = NULL;


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Maybe use unique_ptr


void startCollecting(CScheduler& scheduler);

/* set the target for the maximum memory consumption (in bytes) */
void setMaxMemoryUsageTarget(size_t maxMem);


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Should this be private since it is called by parameterInteraction?

Either way, it seems like this needs a comment about when it is safe to be called. E.g. it seems like this will not always work properly if it is called after startCollecting because it won't schedule the callback. But maybe it will work to change the size if it was previously greater than 0?

{
private:
static CStats* sharedInstance; //!< singleton instance
std::atomic<bool> statsEnabled;


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Why have this as a separate variable when it seems to just be true when maxStatsMemory != 0? Maybe eliminate it, or else add comments documenting the intended interaction between the two variables.


/* SIGNALS
* ======= */
boost::signals2::signal<void(void)> MempoolStatsDidChange; //mempool signal


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Nobody seems to be listening to this signal. Should add a comment explaining who the intended listeners are, if keeping this.


MempoolSamplesVector samples = CStats::DefaultStats()->mempoolCollector->getSamplesForPrecision(i, timeFrom);

// use "flat" json encoding for performance reasons


@ryanofsky

ryanofsky Oct 5, 2017
Contributor

Should expand comment, unclear what it's referring to.

Member

@Sjors Sjors left a comment

I think a table would be more human readable. To also make it easier for other software to parse, you could make the sample interval an argument and add the timestamp to each sample:

bitcoin-cli getmempoolstats 2
timestamp  | tx_count | dynamic_mem_usage | min_fee_per_k  
------------------------------------------------------------
1515419437 | 1042     | 1797344           | 0       
1515419439 | 1050     | 1837344           | 0       
...

You could add another argument for the output format (CSV, JSON, etc).

Multiple precision intervals seem redundant, or at least something that could be added later, if users want long-term charts without storing too much detail.

std::string CStats::getHelpString(bool showDebug)
{
std::string strUsage = HelpMessageGroup(_("Statistic options:"));
strUsage += HelpMessageOpt("-statsenable=", strprintf("Enable statistics (default: %u)", DEFAULT_STATISTICS_ENABLED));


@Sjors

Sjors Jan 8, 2018
Member

Should there be an RPC method to turn stats collecting on? If not, maybe the getmempoolstats help message can inform the user to launch bitcoind with -statsenable=1?

// use "flat" json encoding for performance reasons
UniValue samplesObj(UniValue::VARR);
for (const struct CStatsMempoolSample& sample : samples) {
UniValue singleSample(UniValue::VARR);

This comment has been minimized.

@Sjors

Sjors Jan 8, 2018
Member

I tend to agree. I would also use the timestamp directly instead of an offset.

@laanwj laanwj modified the milestones: 0.16.0, 0.17.0 Jan 11, 2018
@jonasschnelli jonasschnelli removed this from the 0.17.0 milestone Jul 19, 2018
@DrahtBot
Contributor

@DrahtBot DrahtBot commented Nov 8, 2018

There hasn't been much activity lately and the patch still needs rebase, so I am closing this for now. Please let me know when you want to continue working on this, so the pull request can be re-opened.