
Fee changes from ArticMine #7819

Merged · 2 commits · Apr 18, 2022

Conversation

@moneromooo-monero (Collaborator)
No description provided.

@moneromooo-monero (Collaborator, Author):

For the record, it conflicts with both the BP+ patch and the triptych patch.

@erciccione (Contributor) commented Jul 30, 2021:

Attaching the PDF linked in the commit to my comment, so that it will be available in this PR in case ArticMine's repository is deleted/moved/whatever. It's good to have references directly on the PR.

MoneroScaling2021-02.pdf

@SamsungGalaxyPlayer (Collaborator):

Allowing the maximum block size to continuously grow at ~32x per year seems overly generous to me. I would rather lower the value from 2 to at most 1.7. This will still allow substantial maximum growth of 14x per year.

In cases where we exceed such a huge growth rate, it will be dealt with by a higher fee market, incentives to use slightly more efficient transactions, and incentives to use some off-chain transfer method (such as a wrapped asset). We will still have an enormously generous growth rate, which will provide substantial pressure to keep transaction costs down as available block space grows.

I don't see temporary situations of full blocks as a failure case. This is a natural situation in other blockchains like Bitcoin and Ethereum, and providing a generous growth rate as a form of escape valve is more than sufficient imo.

Reference: monero-project/research-lab#70
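The annual figures quoted here are just the per-window cap compounded over roughly five long-term-median periods per year; a minimal sketch (the ~73-day period length is an assumption used for illustration, not a consensus constant):

```cpp
#include <cassert>
#include <cmath>

// Annualized growth factor if the long term median may grow by `cap`
// once per adjustment period of `period_days` days. The ~73-day period
// (about five periods per year) is an illustrative assumption.
double annual_growth(double cap, double period_days = 73.0) {
    return std::pow(cap, 365.0 / period_days);
}
// annual_growth(2.0) gives ~32 (the "32x per year" figure);
// annual_growth(1.7) gives ~14 (the "14x per year" figure).
```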

@ArticMine (Collaborator) commented Aug 3, 2021:

I commented on this in monero-project/research-lab#70.

With respect to the cap, I considered 1.6 / 0.625, 2 / 0.5 and 2.5 / 0.4. I considered transaction rate changes not only in Monero but also in Bitcoin (before the limit was reached), Litecoin, Dogecoin, Dash, and Bitcoin Cash. Here is a chart for Monero, Bitcoin and Litecoin: https://bitinfocharts.com/comparison/transactions-btc-ltc-xmr.html#log. Increases of 8x over a six month period, and even 80x over a 1 year period, have occurred in the past and can occur in the future. This is especially the case if there is a disruption in a competing coin, as occurred in 2017 when Bitcoin reached a hard limit, triggering an 80x increase in Litecoin transactions.

Setting this cap below 2 does run the real risk of triggering this issue if one allows the short term median to remain above the long term median for many months. On the other hand, increasing the limit to say 2.5 / 0.4 would lead to a much larger spread in fees. There is also little evidence for uncontrolled growth over multiple years, even in chains with no penalty for increasing the blocksize below the hard cap, such as Litecoin. One can look at Litecoin transactions in 2018 or Monero transactions in early 2017, for example. Also, it is best to use figures where the reciprocal can be expressed as an exact decimal: 2 / 0.5 vs say 1.7 / 0.58823529... .

Since I made the above comment there have been developments in Dogecoin that illustrate how a sharp increase in fees can have a very significant negative impact on adoption: https://bitinfocharts.com/comparison/transactionfees-transactions-doge.html#1y. Note how the fall in transactions per day, to a level well below where it was before the fee increase, followed the sharp increase in fees. Furthermore, just recently the daily transactions of Dogecoin have fallen significantly below those of Monero. This has never happened before: https://bitinfocharts.com/comparison/transactions-doge-xmr.html#log. I will note that in Dogecoin full blocks were not the issue; rather it was a very high minimum fee caused by an increase in price of ~200x over a period of a few months. This is a real risk in Monero if we lower the growth of the long term median below 2.

I do see temporary sharp increases in fees as a very significant failure. In fact this is what research-lab issue 70 is about. I also consider that such sharp increases in fees can have long lasting adverse impacts that can be very difficult to foresee. Bitcoin has chosen high fees and constrained block sizes because of the eventual need to replace the block rewards with transaction fees. The result has been the failure of Bitcoin as "A Peer-to-Peer Electronic Cash System". The fact that the Bitcoin community has very successfully re-positioned Bitcoin as a store of value does not negate Bitcoin's failure as a transactional currency. The critical difference here is that Monero has a tail emission, so competitive fees do not mean a loss in PoW security.

@SamsungGalaxyPlayer (Collaborator) commented Aug 9, 2021:

I think we simply disagree on this point. I see DOGE's increase as mostly temporary, and Monero would also be somewhat susceptible to that. This is a cap on long-term growth, and it seems aggressively permissive. DOGE's network also became extremely unreliable for other reasons, including antiquated code and weak node infrastructure. The increase in transactions and their accompanying sizes certainly didn't help matters.

I would like to hear other comments on this point. Should we allow Monero's block size to increase 32 times per year with this proposal? This seems reckless to me, with a harmful bias towards avoiding temporarily high fees at enormous permanent cost.

@ArticMine (Collaborator) commented Aug 9, 2021:

@SamsungGalaxyPlayer

I guess we disagree. The recent events with DOGE are just one of many examples. Furthermore, focusing on the annual rate is also very arbitrary, since the rapid rises can occur over much shorter time frames, typically a 2 - 6 month period. The current limit of 1.4 for the long term median means a limit of less than 2 over 4.5 months. This has been exceeded on a regular basis, not only in Monero in the past but also in many other coins.

I suggest taking a close look at the historical data not only in XMR

https://bitinfocharts.com/comparison/monero-transactions.html#log

Note the sustained rate of ~2.4x a year going back to May 2016, and the significant volatility in transaction rates. The current rate (1.4) provides ~5.4x over a year. A rate of 1.7 provides 14.2x a year. On the surface this sounds like it might work, until one factors in the current ~2.4x long term rate. This leaves under 6x a year. That simply does not work to cover historical fluctuations, let alone an external event such as COVID-19 or a serious problem in a competing coin. With 2 we would have a net rate of ~13.3x, which would at least cover most historical fluctuations and at least mitigate external events. I would consider 2 to be the absolute minimum. There is actually a case for 2.5, even with the much higher fee spreads.

but also in other POW coins

Here are additional charts that show how anything under 2 could under many scenarios cause serious problems.

BTC

https://bitinfocharts.com/comparison/transactions-btc-xmr.html#log

For BTC the period before 2013 is very relevant, since there was no constraint on the blocksize.

LTC

https://bitinfocharts.com/comparison/transactions-ltc-xmr.html#log

BCH

https://bitinfocharts.com/comparison/transactions-bch-xmr.html#log

I will note that artificially inflating fees, by allowing the short term median to remain well above the long term median for an extended period of time, opens the door to, among other things, miners accepting transactions directly on a website, providing a lower price in exchange for personal information.

We must keep in mind that the long term median was introduced to deter spam attacks such as Flood XMR, by forcing an attacker to sustain the short term median for over 2 months while the long term median catches up. It was never intended to put an arbitrarily low cap in place over a period of 3 months to a year.
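ArticMine's net-rate arithmetic above (the annualized cap divided by the observed ~2.4x organic growth) can be sketched as follows; the five-periods-per-year compounding is an illustrative assumption:

```cpp
#include <cassert>
#include <cmath>

// Headroom left for demand spikes: the per-period cap compounded over an
// assumed five long-term-median periods per year, divided by the organic
// annual growth rate (~2.4x in the historical charts cited above).
double net_headroom(double cap, double organic_annual = 2.4) {
    const double annual_cap = std::pow(cap, 5.0);
    return annual_cap / organic_annual;
}
// net_headroom(1.7) is just under 6 ("under 6x a year");
// net_headroom(2.0) is ~13.3 ("a net rate of ~13.3x").
```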

@UkoeHB (Contributor) left a comment:

Review part 1

Resolved (outdated) review threads on: src/cryptonote_basic/cryptonote_format_utils.cpp (×4), src/cryptonote_config.h, src/cryptonote_core/blockchain.cpp
@UkoeHB (Contributor) left a comment:

Review part 2

Resolved (outdated) review threads on: src/cryptonote_core/blockchain.cpp (×4)
@UkoeHB (Contributor) left a comment:

Review part 3 (fin)

Comment on lines 3780 to 3852
// we want Mlw = median of max((min(Mbw, 2Ml), Zm), Ml/2)
// Mbw: block weight for the last 99990 blocks, 0 for the next 10
// Ml: penalty free zone (dynamic), aka long_term_median, aka median of max((min(Mb, 2Ml), Zm), Ml/2)
// Zm: 300000 (minimum penalty free zone)
//
// So we copy the current rolling median state, add 10 (grace_blocks) zeroes to it, and get back Mlw

epee::misc_utils::rolling_median_t<uint64_t> rm = m_long_term_block_weights_cache_rolling_median;
for (size_t i = 0; i < grace_blocks; ++i)
rm.insert(0);
@UkoeHB (Contributor):

Suggested change — replace the final insert of 0 with the recursive worst case, i.e. change `rm.insert(0);` to `rm.insert(rm.median() / 2);`:

// we want Mlw = median of max((min(Mbw, 2Ml), Zm), Ml/2)
// Mbw: block weight for the last 99990 blocks, 0 for the next 10
// Ml: penalty free zone (dynamic), aka long_term_median, aka median of max((min(Mb, 2Ml), Zm), Ml/2)
// Zm: 300000 (minimum penalty free zone)
//
// So we copy the current rolling median state, add 10 (grace_blocks) zeroes to it, and get back Mlw
epee::misc_utils::rolling_median_t<uint64_t> rm = m_long_term_block_weights_cache_rolling_median;
for (size_t i = 0; i < grace_blocks; ++i)
rm.insert(rm.median() / 2);

I think it is supposed to be recursive using the median of max((min(Mb, 2Ml), Zm), Ml/2) equation. This change may have no practical effect, but better correct than sorry.

@moneromooo-monero (Collaborator, Author):
The intent here is to have a fee level that's known to be good for grace_blocks blocks, and the worst case is 0 block size. So I think this is right.

@UkoeHB (Contributor):
the worst case is 0 block size

max(min(0, 2Ml), Zm, Ml/2) = max(Zm, Ml/2)

The max() calculation is on the next line (3790). A fee level that is good for grace_blocks blocks should therefore recurse over Ml/2, as in my suggestion.

@moneromooo-monero (Collaborator, Author):
I do not understand that, and since it works, you can change it later if you think it should be.

@UkoeHB (Contributor) commented Sep 16, 2021:

The long term median is recursive over max((min(Mb, 2Ml), Zm), Ml/2). If your block weight is Mb = 0, then it simplifies to recursive over max(Zm, Ml/2) (for the blocks where Mb = 0). It is incorrect to push back 0s (note that it is ok to push back 0s in the short term median, which is not computed recursively).

It is not a matter of what I think it 'should' be.
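The recursive update being discussed, max((min(Mb, 2Ml), Zm), Ml/2), can be sketched as a single step (Zm = 300000 as in the quoted comment; the function name is illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// One step of the recursive long term weight clamp from the quoted
// comment: max((min(Mb, 2*Ml), Zm), Ml/2), with Mb the raw block weight,
// Ml the current long term median and Zm the minimum penalty free zone.
uint64_t long_term_weight_step(uint64_t Mb, uint64_t Ml, uint64_t Zm = 300000) {
    const uint64_t clamped = std::min(Mb, 2 * Ml);
    return std::max({clamped, Zm, Ml / 2});
}
// With Mb = 0 this reduces to max(Zm, Ml/2), the value UkoeHB argues
// should be pushed for each of the grace blocks.
```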

@UkoeHB (Contributor) commented Sep 22, 2021:

@ArticMine This function copies the ledger's rolling long term median variable, then inserts 0 at the end for num_graceblocks. It doesn't affect consensus, if that was your concern. I may be misunderstanding your proposal though.

I have been arguing this function should push back max((min(Mb, 2Ml), Zm), Ml/2) with Mb = 0 for num_graceblocks.

@ArticMine (Collaborator) commented Sep 22, 2021:

This function copies the ledger's rolling long term median variable, then inserts 0 at the end for num_graceblocks. It doesn't affect consensus, if that was your concern. I may be misunderstanding your proposal though.

That is my understanding also. What I am missing is why we have recursion after inserting the 0 at the end for num_graceblocks. For the next block we repeat the process starting again with the rolling long term median variable.

Edit 1: I understand what @UkoeHB is proposing: The worst scenario is max(Zm, Ml/2) calculated recursively for the next 10 blocks, with Mb=0. Still if we replace this with 0 or even Zm I do not see how this would change the wallet fee end result since we would be replacing the lowest values for only 10 blocks in the median calculation with even lower values. This should work even in a recursive calculation. Still one can argue that using max(Zm, Ml/2) is more accurate.

As far as I can see this will produce the same result.

Edit 2: My understanding is that none of the wallet fee calculations are consensus.
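ArticMine's Edit 1 point — that appending a few values below the median to a large window leaves the median unchanged, whether those values are 0, Zm or Ml/2 — can be checked with a small sketch (window size and weights are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Median of `v` after appending `n` copies of `sentinel`, mimicking the
// grace-block padding of the rolling median window.
uint64_t median_with_padding(std::vector<uint64_t> v, uint64_t sentinel, size_t n) {
    v.insert(v.end(), n, sentinel);
    std::sort(v.begin(), v.end());
    return v[v.size() / 2];
}
// For a large window, any sentinel at or below the current median
// (0, Zm or Ml/2) shifts the median index by at most n/2 slots, so 10
// grace blocks leave the wallet-fee median effectively unchanged.
```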

@UkoeHB (Contributor) commented Sep 22, 2021:

What I am missing is why we have recursion after inserting the 0 at the end for num_graceblocks.

I am a bit confused by this sentence. In the PR's code, there is no recursion in this spot (right now). So, 0 is pushed back into the vector used to find the long term median, then 0 is pushed back for the next block.

In the consensus code, m_long_term_block_weights_cache_rolling_median is built recursively: each block weight pushed back goes through max((min(Mb, 2Ml), Zm), Ml/2) first.

@ArticMine (Collaborator) commented Sep 22, 2021:

Yes, to clarify: there is no recursion in the PR code for the wallet fees using 0, but if I understand this correctly, if we use max(Zm, Ml/2) instead, we would also have to apply recursion over the 10 blocks each time we calculate the wallet fee median from the consensus long term median.

Edit 1: The way the code is proposed Ml/2 does not change over the 10 grace blocks so it could be higher than if recursion is applied.

Edit 2: I am going to recommend we stay with 0 since the proposed change does not include recursion for the 10 grace blocks.

Edit 3: I would not have a problem with changing to the recursive version (include recursion for the 10 grace blocks) of max(Zm, Ml/2). As @moneromooo-monero has pointed out this could be done at a later time.

@UkoeHB (Contributor):
Ok since @ArticMine thinks this is fine, I will accept it.

Resolved review threads on: src/cryptonote_core/blockchain.cpp (×3), src/wallet/wallet2.cpp (×3)
Comment on lines 184 to 189
#define HF_VERSION_DETERMINISTIC_UNLOCK_TIME 13
#define HF_VERSION_2021_SCALING 17

#define PER_KB_FEE_QUANTIZATION_DECIMALS 8
#define CRYPTONOTE_SCALING_2021_FEE_ROUNDING_PLACES 2
@UkoeHB (Contributor):

Btw there is a line in this file: "// New constants are intended to go here". Are these new constants supposed to go there?

@moneromooo-monero (Collaborator, Author):

Someone clearly thought so :) AFAICT this is an attempt at having new constants be constants (as opposed to macros) in namespaces. I prefer adding things in the groups where they belong. Others may add elsewhere.

@UkoeHB (Contributor):

I see

Comment on lines 25 to 35
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#define IN_UNIT_TESTS

#include "gtest/gtest.h"
@UkoeHB (Contributor):

Suggested change — insert reference links before the include:

// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
////
// References:
// - https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021.pdf
// - https://github.com/monero-project/research-lab/issues/70
///
#define IN_UNIT_TESTS
#include "gtest/gtest.h"

@UkoeHB (Contributor) left a comment:

Thank you

@ArticMine (Collaborator):

I am proposing changing the rate of growth of ML from 2 to 1.7 as per monero-project/research-lab#70 (comment)

@moneromooo-monero (Collaborator, Author):

I think that's all that's needed to switch the patch to 1.7. I'll squash before it gets merged.

{
// effective median = short_term_median bounded to range [long_term_median, 50*long_term_median], but it can't be smaller than the
// minimum penalty free zone (a.k.a. 'full reward zone')
effective_median_block_weight = std::min<uint64_t>(std::max<uint64_t>(m_long_term_effective_median_block_weight, short_term_median), CRYPTONOTE_SHORT_TERM_BLOCK_WEIGHT_SURGE_FACTOR * m_long_term_effective_median_block_weight);
(Collaborator):

short_term_constraint isn't updated a few lines up for the change from 1.4->1.7 (which feeds into m_long_term_effective_median_block_weight here)

@ArticMine I hope I get some cred for my position in the other thread for spotting this :)
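For reference, the clamp in the quoted snippet amounts to the following sketch (the surge factor of 50 comes from the quoted comment; the separate penalty-free-zone floor applied elsewhere is omitted):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Effective median for the penalty calculation: the short term median
// bounded to the range [Ml, surge_factor * Ml], Ml being the long term
// effective median block weight.
uint64_t effective_median(uint64_t short_term_median, uint64_t Ml,
                          uint64_t surge_factor = 50) {
    return std::min(std::max(Ml, short_term_median), surge_factor * Ml);
}
```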

@@ -4448,6 +4544,7 @@ bool Blockchain::check_blockchain_pruning()
return m_db->check_pruning();
}
//------------------------------------------------------------------
// returns min(Mb, 2Ml) as per https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021-02.pdf from HF_VERSION_LONG_TERM_BLOCK_WEIGHT
(Collaborator):

I'm a bit confused in this function -- it looks like it's returning min(max(Mb, max(Zm, Ml)/1.7), max(Zm, Ml)*1.7)

But the spec is max(min(Mb , 1.7Ml), Zm, Ml/1.7)

I'm not seeing any issues with this because when the rolling long term median gets set, it's set based on the max of Zm and whatever value is returned here, but maybe I'm missing something.. can't tell if this is intended or not

@moneromooo-monero (Collaborator, Author):

AFAICT it's returning min(Mb, 1.7x) with x being max(long term median, 300000), and it seems to be assuming that median(max(x,y)) == max(median(x),y), which I think is true (a test program I've just made seems to corroborate it). So I'm not sure why you think this function should return something else. Do you disagree with how it's used, maybe?
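The identity relied on here, median(max(x_i, y)) == max(median(x), y), holds for odd-length windows because a monotone map commutes with order statistics; a quick check along the lines of the test program mentioned (names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Median of an odd-length vector (by sorting a copy).
uint64_t median_of(std::vector<uint64_t> v) {
    std::sort(v.begin(), v.end());
    return v[v.size() / 2];
}

// True iff taking max(x_i, y) elementwise and then the median gives the
// same result as taking the median first and then max with y.
bool identity_holds(const std::vector<uint64_t>& v, uint64_t y) {
    std::vector<uint64_t> clamped;
    for (uint64_t x : v) clamped.push_back(std::max(x, y));
    return median_of(clamped) == std::max(median_of(v), y);
}
```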

(Collaborator):

No, I think it's ok given how it's used. I was just a little confused by it since it seemed a little different, and wanted to double check.

const uint8_t version = get_current_hard_fork_version();
const uint64_t db_height = m_db->height();
const uint64_t nblocks = std::min<uint64_t>(m_long_term_block_weights_window, db_height);
const uint64_t long_term_median = get_long_term_block_weight_median(db_height - nblocks, nblocks);
(Collaborator):

Unused, and doesn't seem like it matters but I can't tell for certain

{
// min_fee_per_byte = 0.2 * block_reward * ref_weight / (min_penalty_free_zone * fee_median)
div128_64(hi, lo, min_block_weight, &hi, &lo, NULL, NULL);
lo /= 5;
(Collaborator):

missing the assert(hi == 0) from above?
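The formula in the quoted comment can be sketched with 128-bit intermediates in place of div128_64 (unsigned __int128 is a GCC/Clang extension; the final division by 5 is the 0.2 factor, and the ordering of the divisions is a simplification that may truncate slightly differently from the real code):

```cpp
#include <cassert>
#include <cstdint>

// min_fee_per_byte = 0.2 * block_reward * ref_weight
//                    / (min_penalty_free_zone * fee_median)
uint64_t min_fee_per_byte(uint64_t block_reward, uint64_t ref_weight,
                          uint64_t min_penalty_free_zone, uint64_t fee_median) {
    unsigned __int128 n = (unsigned __int128)block_reward * ref_weight;
    n /= min_penalty_free_zone;
    n /= fee_median;
    return (uint64_t)(n / 5);
}
```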

@@ -182,8 +182,10 @@
#define HF_VERSION_EXACT_COINBASE 13
#define HF_VERSION_CLSAG 13
#define HF_VERSION_DETERMINISTIC_UNLOCK_TIME 13
#define HF_VERSION_2021_SCALING 17
(Collaborator):

15?

base_reward = BLOCK_REWARD_OVERESTIMATE;
}

return get_dynamic_base_fee_estimate_2021_scaling(grace_blocks, base_reward, Mnw, Mlw_penalty_free_zone_for_wallet, fees);
@j-berman (Collaborator) commented Feb 4, 2022:

also surprised the compiler doesn't catch returning a value on a function declared void

@moneromooo-monero (Collaborator, Author):

That call is also void.

{
const uint8_t version = get_current_hard_fork_version();
const uint64_t db_height = m_db->height();
const uint64_t nblocks = std::min<uint64_t>(m_long_term_block_weights_window, db_height);
(Collaborator):

Sorry, I hate to do this, but this is also unused. I was going to make a new PR for this, but it makes sense to double check with you that it is safe to remove now. Everything looks good to me.

@j-berman (Collaborator) left a comment:

priority = 1;
else if (priority > 4)
priority = 4;
--priority;
(Collaborator):

I think this general logic (starting from the comment) belongs inside if (use_2021_scaling); otherwise it causes a discrepancy with current v17.3.0 clients until the hard fork (get_fee_multiplier(priority) gets called with a priority value 1 less than it should be, down in the else branch).

@@ -238,13 +239,24 @@ boost::optional<std::string> NodeRPCProxy::get_dynamic_base_fee_estimate(uint64_
m_dynamic_base_fee_estimate = resp_t.fee;
m_dynamic_base_fee_estimate_cached_height = height;
m_dynamic_base_fee_estimate_grace_blocks = grace_blocks;
m_dynamic_base_fee_estimate_vector = std::move(resp_t.fees);
(Collaborator):

Suggestion:

m_dynamic_base_fee_estimate_vector = !resp_t.fees.empty() ? std::move(resp_t.fees) : std::vector<uint64_t>{m_dynamic_base_fee_estimate};

Fixes a seg fault when the wallet tries to construct a transaction before the fork (it tries to access fees[0] on an empty vector in get_dynamic_base_fee_estimate below).

@@ -80,10 +80,10 @@ def _test_generateblocks(self, blocks):
assert ok

res = daemon.get_fee_estimate()
assert res.fee == 234562
assert res.fee == 1200000
(Collaborator):

I can't see why yet but this is causing a test to fail. Working through it

@moneromooo-monero (Collaborator, Author):

Probably because it's not on v15 yet. You need to add that fork to the table.

(Collaborator):

Yep, adding it to the end of the mainnet table does the trick. I had added it in the wrong way and was getting a weird result.

@luigi1111 luigi1111 merged commit 2b999f5 into monero-project:master Apr 18, 2022
7 participants