# Next generation SDiff #584

Closed · opened Feb 20, 2017 · 36 comments · 9 participants

### raedah commented Feb 20, 2017 • edited

Price change percentage would correspond with the percent change in ticket pool size. I have modeled this here: https://play.golang.org/p/QSV4ssuZ4O

Example, starting with the pool at the target size of 40960. The maximum number of tickets purchased in a window is 144 * 20 = 2880, and the tickets redeemed (voted) in that window is 144 * 5 = 720, so the actual maximum ticket increase is 2880 - 720 = 2160. Pool size 40960 + 2160 added tickets = 43120. New window pool size 43120 divided by last window pool size 40960 = 1.0527, so 5.27% would be the maximum price increase in a window when all possible ticket purchase slots are filled.

If no tickets were bought, the target pool size of 40960 would fall by 144 * 5 = 720 to 40240. New window pool size 40240 divided by last window pool size 40960 = 0.9824, which is a maximum ~1.76% decrease in ticket price from the last window when no tickets are bought.

So, a ticket price of 50 DCR could adjust up 5.27% to 52.63 DCR, or adjust down ~1.76% to 49.12 DCR.
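The window arithmetic above can be sketched in a few lines of Go (an illustration only; `nextPrice` and the constant names are not from the linked model):

```go
package main

import "fmt"

// nextPrice scales the current ticket price by the relative change in
// the pool size over a purchase window, which is the core of the idea.
func nextPrice(cur float64, prevPool, newPool int64) float64 {
	return cur * float64(newPool) / float64(prevPool)
}

func main() {
	const (
		target = 40960
		bought = 144 * 20 // max tickets purchased per window (2880)
		voted  = 144 * 5  // tickets voted per window (720)
	)
	fmt.Printf("%.2f\n", nextPrice(50, target, target+bought-voted)) // every slot filled: ~52.64 DCR
	fmt.Printf("%.2f\n", nextPrice(50, target, target-voted))        // no purchases: ~49.12 DCR
}
```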

### davecgh commented Feb 20, 2017

I'll have to look at this more closely, but the biggest issue I see just from the numbers is that it allows the pool size to get far too large. After just five rounds, the pool size will have grown to 51,760, which is a whopping increase of roughly 26% over the target, while the current algorithm keeps it within a 5% band. Such a large pool would undesirably skew the average payout times, reduce the probability of payout before expiration, and therefore significantly change the social contract. The primary purpose of the algorithm must be to keep the ticket pool size relatively constant, with the secondary concern being the actual price fluctuations.
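The compounding behind this concern is easy to verify (a quick sketch; `poolAfter` is an illustrative helper, not from the proposal):

```go
package main

import "fmt"

// poolAfter applies consecutive windows of maximal net growth
// (2880 tickets bought, 720 voted per window) to the target pool.
func poolAfter(windows int) int {
	pool := 40960
	for i := 0; i < windows; i++ {
		pool += 2880 - 720
	}
	return pool
}

func main() {
	p := poolAfter(5)
	fmt.Println(p) // 51760
	fmt.Printf("%.1f%% over target\n", 100*float64(p-40960)/40960) // 26.4% over target
}
```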

### raedah commented Feb 20, 2017

 The example shows 5 rounds of full increase, but this is probably not possible in practice because it would require people to have funds to make those purchases. I will look into adding the PoS pool total funds value into the model.

### animedow commented Feb 23, 2017 • edited

Decred new PoS sdiff algorithm proposal

This proposal is based on an asymptotic function to calculate the new ticket price, or sdiff, in a new PoS pool buying window. The function has two variables: the average ticket price in the PoS pool and the pool size. x=0 should be defined as the desired "target pool size". Due to its asymptotic nature, the function has two boundary values, which effectively act as the maximum and minimum pool size.

This function aims to assess a fair price for the tickets in the PoS system based on demand, expressed as the deviation from the desired target pool size, and on the current average price of tickets in the PoS pool. If the current pool size is over the target pool size, demand is high and the price goes up accordingly; if it is under the target, demand is low and the price goes down accordingly.

The asymptotic function determining the sdiff has the form shown in the attached image, where:

• x = amount of ticket deviation from the target pool size
• a = a modifier controlling the slope of the function
• b = the maximum boundary
• c = the minimum boundary
• d = the average ticket price in pool

By adding the average ticket price in the pool to the function and pivoting around it, the average ticket price will slowly adjust as the demand for tickets grows or declines. In the following example the values for the modifier a and the boundaries b and c have been chosen as:

• a = 100000
• b = 2880
• c = 2880
• d = 40

The value for a is somewhat arbitrarily chosen: it close to doubles the sdiff at a full buying window of 2880 tickets at the current average price d, while growing the price by 50% and 25% at an average ticket price of 100 DCR and 200 DCR, respectively. Boundaries b and c have been chosen as the size of a full buying window, namely 2880 tickets, or four times the number of voting tickets per window. d is the current average price in the PoS pool as of the writing of this document.

Inputting these values gives the graph in the attached image. In line with the value chosen for d, the current pool size is 42068 tickets, and the "target pool size" at x=0 is assumed to be 40960 tickets. This function would put the sdiff price for the next window at 57 DCR; in other words, it evaluates the fair price based on the pool size and current average price at 57 DCR for the next window. If no tickets are bought at that price and the pool size drops to 41348 tickets, the price for the next window would be approximately 44 DCR. If 2880 tickets are bought at the target pool size, the next price will be ~99.5 DCR at a pool size of 43210 tickets.

In scenarios where the maximum boundary is exceeded, a linear price increase function should take over, starting at the maximum average ticket price in DCR's lifetime, e.g. 512.69 DCR. This linear function should take over at a pool size deviation of 2880 in order to avoid the infinite price that would break the function at exactly x=2880, and to avoid overshooting on the other segment of the function which develops at x=2881. If this is undesirable, the modifier a could be turned into a function itself, taking into account the total amount of DCR available for staking (total DCR – DCR in PoS pool). This requires further research and is arguably the biggest improvement to be made to the function. (Note: my intuition tells me this is undesirable, as you'd have to guarantee that at (max size - 2160) the price equals the average price for the total available DCR and is even higher after that. This gimps the slope tremendously at the current boundaries of 2880 and allows an effective buy window of ±720 tickets around the origin. If the boundaries were set at 4000-5000 or higher this could be considered.) In scenarios where the pool size would require a ticket price of 0, the minimum price should be set at a fraction above zero.
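The exact equation was provided as an image and is not reproduced in the thread. The following Go sketch shows one asymptotic form consistent with the described behavior (pivoting around d, blowing up at the boundary x=b); it is an assumption, lands near but not exactly on the prices quoted above, and should be treated as purely illustrative:

```go
package main

import "fmt"

// sdiff is one plausible asymptotic form: the price pivots around the
// average pool price d and diverges as the deviation x approaches the
// maximum boundary b, with c as the symmetric minimum boundary.
// This is a guess at the imaged equation, not the author's exact formula.
func sdiff(x, a, b, c, d float64) float64 {
	return d + a*x/((b-x)*(c+x))
}

func main() {
	const a, b, c, d = 100000, 2880, 2880, 40
	fmt.Printf("%.2f\n", sdiff(0, a, b, c, d))    // at the target pool size: the average price d
	fmt.Printf("%.2f\n", sdiff(1108, a, b, c, d)) // pool over target (42068 tickets): above d
	fmt.Printf("%.2f\n", sdiff(388, a, b, c, d))  // pool at 41348 tickets: closer to d
}
```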


### davecgh commented Mar 15, 2017 • edited

 As an update, I wrote a simulation tool which models the full Decred proof-of-stake system behavior. It needs a better demand distribution function to properly model expected user demand as the current one is very basic and not very realistic, however the infrastructure is fully fleshed out and nice graphs of the resulting pool sizes and ticket prices per block are generated. The graphs can be explored by selecting the desired sections. https://github.com/davecgh/dcrstakesim

### davecgh commented Apr 5, 2017

The following is a suggested algorithm from @coblee.

Simple: `ticket_price = locked_dcr / target_pool_size`

Expanded: `ticket_price = x*(locked_dcr/target_pool_size) + (1-x)*(locked_dcr/pool_size_actual)`

The expanded equation reduces to the simple equation when x=1.
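The expanded form translates directly to code (a sketch; the function name and example numbers are illustrative, not from the thread):

```go
package main

import "fmt"

// ticketPrice blends the price implied by the target pool size with
// the price implied by the actual pool size, weighted by the constant x.
func ticketPrice(lockedDCR, targetPoolSize, actualPoolSize, x float64) float64 {
	return x*(lockedDCR/targetPoolSize) + (1-x)*(lockedDCR/actualPoolSize)
}

func main() {
	// With x=1 the expanded equation reduces to the simple one.
	fmt.Println(ticketPrice(3e6, 40960, 36000, 1) == 3e6/40960) // true
	// With x<1 the actual pool size pulls the price away from the
	// purely target-based value.
	fmt.Printf("%.2f\n", ticketPrice(3e6, 40960, 36000, 0.5))
}
```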

### davecgh commented Apr 5, 2017 • edited

I've updated dcrstakesim to include a demand distribution function (DDF) that takes into account both the yield and the volume-weighted average price (VWAP). In the case the price is higher than the VWAP (indicating zero demand from that term) while the yield that would be produced by purchasing tickets indicates 100% demand, the DDF will chase the yield. This more closely models typical purchasing behavior, since the typical criteria a rational ticket buyer will use are whether or not the yield produced is acceptable, and whether or not delaying until a future interval with a lower price than the VWAP would increase their yield.

Running the simulation with this updated DDF and the existing sdiff algorithm shows that the form produced by the simulation is similar to what actually happened on mainnet. In particular, it produces wild ticket price swings, ticket prices that increase over time and are within a similar range, and a pool size that is consistently higher than the target. One notable difference is that the simulation doesn't ramp up quite as slowly as mainnet did, but that is most definitely due to external factors such as the relative lack of a user-friendly GUI for staking early on and the fact that the devs intentionally did not stake more than their proportion of the coins in the name of fairness and ensuring their voting power was never greater than the rest of the community.

For reference, the following are the results using the real mainnet data and existing sdiff algorithm, followed by a simulation run using the DDF and existing sdiff algorithm. Given these results, I will be running the proposed algorithms and posting the results of the simulation using them.

For the simulation results, the algorithms will be named/numbered as follows:

Proposal 1: The proposal from @raedah
Proposal 2: The proposal from @animedow
Proposal 3: The proposal from @coblee

There will be an additional proposal forthcoming that is a joint effort between myself and jy-p. More details will be posted when finalized.

### jyap808 commented Apr 5, 2017

 I'll write up a quick proposal too. Thanks.

## Proposal 1 Results

Here are the simulation results of Proposal 1 using two different demand distribution functions (DDFs):

DDF A -- The DDF discussed above that produces the most realistic results.

DDF B -- Purely based upon the yield and used as a point of comparison

Pros:

1. Both the pool size and ticket price are relatively stable
2. The algorithm itself is easy to understand and efficiently computable

Cons:

1. The proposal fails to encourage enough ticket purchases to reach the target pool size.
2. Since it's based on the relative pool change, it does not work well during the initial population of the pool, since adding 2160 tickets to a pool of 10000 is a much larger relative change than adding them to a pool of 40960. This is important because `testnet` and `simnet` chains are frequently reset, thus any algorithm must work well under all circumstances, not just the current state of `mainnet`.
3. It does not handle a sudden change in overall stake participation well since it is too slow to react.
4. It does not handle steady state properly. That is to say that even though the pool size is significantly under target, purchasing the same number of tickets as are voting does not exert upward pressure.
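Con 2 can be made concrete with a quick calculation (a sketch for illustration; the helper name is not from the proposal):

```go
package main

import "fmt"

// windowIncreasePct returns the maximum percent price move in a single
// window under proposal 1: the relative pool growth when a full window
// of 2880 tickets is bought and 720 vote (a net gain of 2160 tickets).
func windowIncreasePct(poolSize float64) float64 {
	return 100 * ((poolSize+2160)/poolSize - 1)
}

func main() {
	// The same absolute influx is a far larger relative move for a
	// small pool, which is why bring-up behaves poorly.
	fmt.Printf("pool 10000: +%.2f%%\n", windowIncreasePct(10000)) // +21.60%
	fmt.Printf("pool 40960: +%.2f%%\n", windowIncreasePct(40960)) // +5.27%
}
```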

## Summary

I think the simplicity and general form of the proposal are on the right track when the pool size is near the target, however it is not usable as is due to the aforementioned issues.


## Proposal 2 Results

Here are the simulation results of Proposal 2 using two different demand distribution functions (DDFs):

DDF A -- The DDF discussed above that produces the most realistic results.

DDF B -- Purely based upon the yield and used as a point of comparison

These graphs are using the proposed values, however, I also simulated with other variants such as `b = 2 * targetPoolSize` and `c = 0` along with various slopes for `a`. The conclusions below are primarily based upon the proposed values, however, they mostly hold for other values as well.

Pros:

1. After the large initial spike during bringup, the ticket price is relatively stable
2. It handles steady state properly

Cons:

1. The proposal does not work when the pool size deviation is more than the boundaries, which is always the case during initial conditions. This is important because `testnet` and `simnet` chains are frequently reset, thus any algorithm must work well under all circumstances, not just the current state of `mainnet`.
2. While it might work for an already established system with low volatility in the pool size, it doesn't handle unexpected swings such as a huge influx of new coins to stake or a large loss of staked coins due to a large portion of coins being taken offline. This could certainly be mitigated to a degree by increasing the boundaries, but the problem would still persist due to the asymptotic nature of the function.

## Summary

As previously discussed, I personally think that relatively linear adjustments whenever the pool size is close to the target, combined with increasingly large price adjustments the further away you are from the target, is the most desirable behavior. Unfortunately, as this proposal stands, it would only work in a very narrow range of pool sizes and doesn't handle the extreme cases. I suspect the general form could work well as part of a piece-wise function that handles the extreme cases differently.


### davecgh commented Apr 9, 2017

 As an update, Wednesday of this coming week (Apr 12, 2017) is the deadline for proposals. We need time to implement, simulate, and analyze them in the simulator and then the actual software before release along with a testing period.

# Next generation SDiff #584 - Proposal

Being an active staker and stakepool admin with some understanding of the dynamics and desirability of this targeting mechanism, I thought I would give my input. Some of this has been discussed before, but here I am trying to distill it down to something simple and flexible (it works with different network parameters) while maintaining the essential goals and requirements of a healthy staking network.

I didn't have much time to refine this but I wanted to make sure I put in my submission before the deadline in 2 days.

In the Decred Mainnet we find these variables. Short variable names are also defined for this submission:

• PoolTarget = Target Pool size = 40960
• MaxTickCycle = Maximum Tickets per Purchase cycle (20 * 144) = 2880
• TickVotesCycle = Tickets Voted per Purchase cycle (5 * 144) = 720

Here are some additionally defined variables:

• TickPrice = Current Ticket Price
• NewTickPrice = New Ticket Price
• TickBought = Tickets Bought
• PoolTickets = Tickets in Pool

The goals are:

• Maintaining a steady pool size is of great importance to follow a predictable Voting probability.
• Ticket price should favor a slow linear increase in price as overall coin supply increases.
• Ticket price should have low swings to discourage fee wars.

## Generalized formula

```
If PoolTarget >= PoolTickets:
    NewTickPrice = TickPrice + (Bounds * ScalingFactor)
Else:
    NewTickPrice = TickPrice + (-Bounds * ScalingFactor)
```

## Detailed formula

```
Bounds = TickPrice * TickVotesCycle / MaxTickCycle
ScalingFactor = (TickBought - TickVotesCycle) / (MaxTickCycle - TickVotesCycle)
```

This formula can probably be improved when looking over a larger window of past blocks by adding additional scaling factors per block window.

This formula is designed to work on a Mid-staking cycle. This would be the typical case where Staking has reached a point of balancing out equilibrium forces. Further consideration would be needed when staking on the network has first started out.

I probably need to revisit the Bounds calculation to make it higher. The ScalingFactor looks OK to me.

## Example Calculations

Ticket price: 50
Tickets purchased in cycle: 2880
Pool tickets are less than PoolTarget
NewTickPrice = 50 + (50.0 * 720 / 2880) * (2880 - 720) / (2880 - 720)
NewTickPrice = 62.5

Ticket price: 75
Tickets purchased in cycle: 0
Pool tickets are greater than PoolTarget
NewTickPrice = 75 - (-75.0 * 720 / 2880) * (0 - 720) / (2880 - 720)
NewTickPrice = 68.75
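As read from the two worked examples, both calculations reduce to `NewTickPrice = TickPrice + Bounds * ScalingFactor`. A Go sketch of that reading (an interpretation of the examples, not the author's code):

```go
package main

import "fmt"

// newTickPrice computes the next ticket price from the current price
// and the tickets bought in the window, per the worked examples above.
func newTickPrice(tickPrice float64, tickBought int64) float64 {
	const (
		maxTickCycle   = 20 * 144 // 2880
		tickVotesCycle = 5 * 144  // 720
	)
	bounds := tickPrice * tickVotesCycle / maxTickCycle
	scaling := float64(tickBought-tickVotesCycle) / float64(maxTickCycle-tickVotesCycle)
	return tickPrice + bounds*scaling
}

func main() {
	fmt.Println(newTickPrice(50, 2880)) // full window of purchases: 62.5
	fmt.Println(newTickPrice(75, 0))    // no purchases: 68.75
}
```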


## Proposal 3 Results

Here are the simulation results of Proposal 3 using two different demand distribution functions (DDFs) and two different constants for the x value:

DDF A -- The DDF discussed above that produces the most realistic results.

`x=1`

`x=2`

DDF B -- Purely based upon the yield and used as a point of comparison

`x=1`

`x=2`

Pros:

1. The algorithm itself is easy to understand and efficiently computable
2. The pool size eventually converges on the target value

Cons:

1. The results vary significantly depending on the demand distribution. For example, looking at the difference between DDF A and DDF B, we can see that the shape of the curves varies significantly.
2. Purely chasing the yield allows the pool size to reach around 116k, and it takes nearly 40000 blocks to bring the pool size back to the target.
3. Using a more moderate demand distribution such that users wait for lower prices under the VWAP before purchasing as opposed to purely chasing yield still results in the pool size reaching around 62k and it takes nearly 100k blocks to bring the pool size back to the target.
4. It fails to prevent runaway ticket purchasing during the initial population of the pool size since the ticket price rises too slowly even when the pool size is significantly over the target. This is important because testnet and simnet chains are frequently reset, thus any algorithm must work well under all circumstances, not just the current state of mainnet.
5. While it might work for an already established system that is near the target pool size and has low volatility, it doesn't handle unexpected swings such as a huge influx of new coins to stake or a large loss of staked coins due to a large portion of coins being taken offline.

## Summary

The algorithm itself is really simple, which is quite attractive, and it does eventually converge on the target pool size. However, as mentioned in the cons, the algorithm is a bit too slow to react to various scenarios, such as a huge influx or a large loss of staked coins, which causes the pool size to remain significantly over the target for an extended period of time. Alternatively, when using higher values of `X`, a relatively high amount of volatility is introduced in both the pool size and the ticket price.

EDIT: I added a bit more to the summary based on the post below with different `X` values.

### coblee commented Apr 11, 2017

You should try with a larger X. I think that will address most of the cons.

### davecgh commented Apr 11, 2017 • edited

@coblee, I did try with higher X values as well, namely 4, 5, 6, 10, and 100. However, I didn't post all the results in the name of brevity. The end result was effectively the same, although with some differences: lower X values see the initial period (and any huge influx of coins at any point) cause the pool size to shoot up significantly over target and then take a long time to relax back down to the target pool size, while higher X values introduce more volatility in both the pool size and the ticket price. For example, here is DDF A with `x=10` (and a large influx of coins around block 80k). And DDF A with `x=100`: Purely buying if the current yield is acceptable without considering the VWAP (which granted isn't super likely behavior, as users typically will wait for a price drop to boost their yield, which is what DDF A models, but is still useful for studying the algorithm's behavior) with `X = 10` results in an initial pool size of around 70k and roughly 10k blocks to return to the target. Then, with an influx of coins around block 80k, it shoots up to just over 44k and takes around 6k blocks to return to the target. With `X=100`, the results are quite similar to DDF A with `X=100`.

### davecgh commented Apr 11, 2017 • edited

I've updated dcrstakesim to include the code for the first three proposals analyzed above. I also added an additional behavior which increases the percentage of staked coins from 40% to 50% at the block that is 60% of the way through the simulation and then drops it back down to 40% at the 80% mark, in order to simulate a sudden surge and drop in the number of staked coins. Please feel free to download the code and play around with the parameters in the `proposals.go` file to independently verify these results and analyze what happens as the parameters are tuned. The graphs generated by the simulator are interactive, unlike the images I'm posting here, so you can zoom in and really explore the data.

### edsonbrusque commented Apr 12, 2017

I was reading this and thinking about applying PID (Proportional Integral Derivative) control theory.

A simple Proportional controller would be something like:

```
NewTickPrice = (PoolTarget - PoolTickets) * Kp
```

A simple Integral controller could be:

```
NewTickPrice = TickPrice + (PoolTarget - PoolTickets) * Ki
```

I don't believe a Derivative controller would give good results; it can become wild with a sudden change in pool size.

A PI (Proportional Integral) controller could be implemented with something like:

```
error = PoolTarget - PoolTickets
integral = integral + error * dt
NewTickPrice = Kp * error + Ki * integral
```

I think we have sufficient historical data to use the Ziegler–Nichols method to find good Kp and Ki values. It could also be adaptive, but that could be overkill and too complex. Anyway, I think I'm late. :( Best regards!
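The PI controller described above can be sketched in Go as follows (a minimal illustration, keeping the pseudocode's sign convention with error = PoolTarget − PoolTickets and dt = 1 window; the gains are placeholders, not tuned values):

```go
package main

import "fmt"

// piController holds the gains and accumulated integral state for a
// proportional-integral controller over the pool size error.
type piController struct {
	kp, ki   float64
	integral float64
}

// next returns the controller output for the current pool size.
func (c *piController) next(poolTarget, poolTickets float64) float64 {
	err := poolTarget - poolTickets
	c.integral += err // dt = 1 purchase window
	return c.kp*err + c.ki*c.integral
}

func main() {
	c := &piController{kp: 0.0017, ki: 0.00005}
	// A pool that stays under target keeps accumulating integral action.
	fmt.Printf("%.3f\n", c.next(40960, 40000)) // 1.680
	fmt.Printf("%.3f\n", c.next(40960, 40000)) // 1.728
}
```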

### davecgh commented Apr 13, 2017 • edited

For an update, I've updated dcrstakesim to further improve the modelling. In particular, I've modified it to use the DDF prior to the stake validation height instead of just assuming 50% demand. While the 50% assumption more closely matched what happened on mainnet, it was artificially limited by the fact that the system was new and relatively difficult to use at the time, as opposed to purely rational behavior. It's pretty clear that if things were to reset today, tickets at the starting price of 2 DCR would have 100% demand! Second, I updated the yield DDF such that it calculates the yield based on the expected future return at the average expected vote time (28 days).

In addition, the two newest proposals will be named/numbered as follows:

Proposal 4: The proposal from @jyap808
Proposal 5: The proposal from @edsonbrusque

There are also several more proposals that have not been pasted here and are being carried out via other channels such as slack. There has been a lot more simulation going on, but it's too much to post all of the results here. More results to come.

EDIT: I should have also noted that I reran the proposals that have already been reported with the changes to the simulator and the conclusions are the same. The values for the initial bringup phase are slightly different, but the end result did not change.

### chappjc commented Apr 13, 2017

This was put forth in Slack, so I guess I'll declare the algorithm implemented in my branch finalized and make it a proposal as well. Well, I will this evening at least. https://github.com/chappjc/dcrstakesim/tree/chappjc

### chappjc commented Apr 13, 2017

 I've tersely documented my proposal with code comments. Please see the following lines for a description: https://github.com/chappjc/dcrstakesim/blob/chappjc/proposals.go#L37-L83 There's nothing complicated (or clever, really), but it seems to work. The semantics of the parameter are a bit different since the last commit, and it's something like a limit on the percent change in price from one window to the next, and the default of 9 is OK.

### edsonbrusque commented Apr 13, 2017

Hello! Posting test results after some tuning.

```go
// Static state across purchase windows.
var integral = 0.0
var previousError = 0.0

// Ticket calculation using a PID (Proportional Integral Derivative) controller.
Kp := 0.0017
Ki := 0.00005
Kd := 0.0024
curPoolSize := int64(s.tip.poolSize)
error := float64(curPoolSize - 40960)
integral = integral + error
derivative := error - previousError
nextDiff := int64(100000000 * (error*Kp + integral*Ki + derivative*Kd))
previousError = error
```

Regards!

### davecgh commented Apr 13, 2017

 I just pushed an update to dcrstakesim to allow `-ddf=b` to be specified on the CLI to use the alternate demand distribution function mentioned above for those playing along and running their own simulations.

### davecgh commented Apr 14, 2017

 I've just pushed another update to dcrstakesim to include a graph of the total versus staked supply to help show the changes in the participation that are being driven by the ticket price algorithm. For reference, the following shows the results using the real mainnet data and existing sdiff algorithm:

### chappjc commented Apr 14, 2017 • edited

 Looking at DDF B results makes it pretty clear that starting the simulation with a low stake difficulty – a ridiculously high yield – does not make a ton of sense. I added a flag (`-minsdiff`) to test the impact of this initial condition. After doing this, the difficulty algorithm from my branch gives reasonable results for both DDF A and B, where previously the ramp-up period was trouble with DDF B.

### davecgh commented Apr 14, 2017 • edited

I'd say it's still useful, even though I agree with the sentiment. Both `testnet` and `simnet` are often reset, which means the algorithm needs to handle bring-up. I'm pretty positive that if things were to reset today with the software more user-friendly, there would likely be incredibly high demand for min-price tickets, and I doubt users would care at all about the pool size; as long as the ticket price was offering super high yields, they might be inclined to purchase. That said, the reason DDF A is the default is that it's a bit more rational in that it effectively says: even if the yield is super high, if I just wait a period or two, my yield will be even higher, so I'm going to wait a bit. As we know, though, people aren't always particularly rational either.

EDIT: I have not looked at the most recent algorithms in depth yet as I've been working on improving the simulator and a joint proposal as well. I will however offer an analysis of them, and if they are better they will be chosen.

# Ticket Price Algorithm Proposal (Proposal 7)

## Abstract

The following is a joint proposal by @raedah, @behindtext, and @davecgh that is primarily based on the original proposal in this issue (proposal 1), but has been expanded to address its shortcomings and to incorporate some of the qualitative behavioral insights made available during the process of simulating the various proposals. Many thanks to everyone who has participated in this extremely important process.

## Requirements and Goals

Before diving into the specifics of the proposal, let's address the primary requirements and goals of an ideal ticket price algorithm:

• The pool size must remain fairly stable around the target pool size since sizes that are too large or too small would substantially alter the social contract between stakeholders and the network, disrupting the predictable nature of average time to vote a ticket, percentage of tickets that expire, and the rate of return. This is by far the most important requirement.
• The ticket price must not fall into a resonant pattern that only encourages ticket purchases during a single interval.
• The ticket price must adjust quickly enough to encourage or discourage the desired purchasing behavior necessary to maintain pool size stability and should ideally adjust smoothly when the system is running at or near equilibrium to encourage good price exploration and discourage fee wars.
• The algorithm must be able to absorb sudden surges and ebbs in demand and staking proportions and recover back to expected steady state behavior.
• The algorithm must not be overfitted to a specific demand distribution function (DDF) or the specific `mainnet` parameters since it will be used on the `testnet` and `simnet` networks and the parameters of `mainnet` could change in a future hard fork vote.
• The algorithm must be reasonably efficient to compute
• The algorithm should ideally be simple to understand and implement

## Theory of Operation

The general idea of this proposal is rooted in the concept of forces that react to and drive changes to the pool size. The first force acts to counteract the relative change in the pool size, while the second force acts as a restorative force to push the pool size towards the target value. There are minimum and maximum bounds imposed to prevent runaway behavior.

## Formulas

### Generalized

Explanation of terms:

`S_N` = Stake difficulty of the current interval
`S_N-1` = Stake difficulty of the previous interval
`F_c` = Force to counteract relative change in pool size
`F_r` = Restorative force to push the pool size towards target
`S_lb` = Lower bound of final computed stake difficulty
`S_ub` = Upper bound of final computed stake difficulty

### Detailed

Explanation of new terms:

`totalPoolSize_N` = Current pool size including immature tickets
`totalPoolSize_N-1` = Previous interval's pool size including immature tickets
`totalTargetPoolSize` = Target pool size to maintain including ticket maturity period (42,240 on `mainnet`)
`totalSupply` = Total current coin supply
`targetPoolSize` = Target pool size to maintain (40,960 on `mainnet`)
`votesPerBlock` = Number of votes in each block (5 on `mainnet`)

## Simulator Notes

It is worth noting that dcrstakesim intentionally aims for several worst case scenarios in order to help ensure the algorithms behave well under extreme conditions and attack scenarios.

• All coins are managed as a single entity conspiring to manipulate the price. In real world usage scenarios, the coins are distributed among different parties who act in their own best interest which is more stable than the simulated behavior.
• The simulator provides much larger surges and ebbs in demand and staking proportions as compared to what is more likely to happen in the real world. More concretely, it instantly surges the amount of staked coins by a full 20% of the entire supply while simultaneously doubling the demand and then drops them by the same.
• One of the demand distribution functions (DDF B) solely chases nominal estimated yield which is quite irrational in that it completely ignores things like opportunity cost, the volume-weighted average price, and the fact that the actual expected yield is lower when the pool size is larger. The vast majority of real-world stakers generally make much more rational decisions.

## Results

Pros:

• Both the pool size and ticket price are quite stable
• The algorithm itself is easy to understand and can be efficiently computed
• The algorithm handles the initial bringup period well with realistic and rational staking behavior, which is also a good indicator that it will behave nicely under extreme surge conditions
• The algorithm handles steady state properly
• The algorithm recovers well from sudden shocks to the system

Cons:

• The initial bringup period with the more irrational staking behavior (DDF B) allows the pool size to surge to roughly 70k

## Summary

I believe this proposal satisfies all of the hard requirements and manages to achieve the ideal goals as well. For example, the mathematical form of the algorithm is simple and its theory of operation is easy to understand.

After running over 300 different simulations with this algorithm and manually tweaking the DDFs to intentionally throw various forms of irrational and attack-like behavior at it, the only notable con I can find is that, under unrealistically extreme conditions, the pool size can spike higher than would be ideal. However, this is mitigated by the fact that it fairly quickly self-corrects and returns to the desired behavior, in contrast to every other algorithm I've tested up to this point, including the existing algorithm on `mainnet`, which exhibit runaway behavior under those same circumstances.

Member

### davecgh commented Apr 15, 2017 • edited

The two newest proposals will be named/numbered as follows:

Proposal 6: The proposal from @chappjc
Proposal 7: The joint proposal from @raedah, @behindtext, and @davecgh

I will be simulating the remaining proposals which have not already been done (4, 5, and 6) and providing an analysis of each this coming week.

### kandiru commented Apr 15, 2017 • edited

 @davecgh Nice work :) If you remove the `S_ub` term, should that also remove the initial pool size surge? Does the `S_ub` max come into play much, or does it only allow that pool size surge? To me, it looks like `S_ub` serves to let the maximum pool size be exceeded in exchange for more level ticket prices when staking massively increases. Is this what we want? You could try removing it, or doubling `S_ub`? That should remove the initial surge to 70k.
Member

### davecgh commented Apr 15, 2017 • edited

 @kandiru: Thanks! The goal of `S_ub` is primarily to limit the amount of force that can be applied in a single interval in order to smooth out shocks to the system. Without it, a huge surge would cause the pool size to oscillate too much, since the ticket price could rise to astronomical heights and would then take a while to relax back down to a range where purchasing makes sense again. Meanwhile, the pool size would have dropped to the point that it needs to encourage heavy buying again. The system will eventually converge without the upper bound, but it would take a lot longer. For example, here is the simulation with DDF-B without the upper bound:
Member

### marcopeereboom commented Apr 15, 2017

 @davecgh I have given this a once-over and the last suggestion seems to get us where we want to be. I am going to spend a bit more time on this to think through the corner cases.
Member

## Proposal 4 Results

Here are the simulation results of Proposal 4 using two different demand distribution functions (DDFs):

DDF A

NOTE: The simulation had to be manually stopped early because the algorithm results in the pool size falling to zero due to failing to encourage ticket purchases which in turn causes the chain to be unable to advance due to a lack of votes.

DDF B

NOTE: The simulation also had to be manually stopped early for this DDF for the same reason as DDF A.

## Summary

I've skipped listing pros and cons since (as suspected by the author himself) this algorithm would only potentially work within an already established system, and it failed to complete the simulation for either DDF. As a result, I don't believe it is suitable, since one of the primary requirements for the new algorithm is that it works for bringup, absorbs sudden surges and ebbs in demand and staking proportions, and recovers back to the expected steady state behavior. The algorithm in its current form is unable to meet that requirement.

### jyap808 commented Apr 18, 2017

 @davecgh thanks for taking the time to run through the algorithm proposal. It was mostly meant to provide feedback, so it's nice to see how Proposal 7 is progressing.
Member

## Proposal 5 Results

Here are the simulation results of Proposal 5 using two different demand distribution functions (DDFs) and the constants proposed by the author, namely, `Kp=0.0017`, `Ki=0.00005`, and `Kd=0.0024`.

DDF A

DDF B

Pros:

1. The pool size eventually converges on the target and the ticket price is relatively stable
2. The algorithm handles instant surges and ebbs reasonably well once the system has been running for a while
3. The algorithm is mathematically sound since it uses the well-known PID controller model which is used for control feedback loops

Cons:

1. It fails to prevent runaway ticket purchasing during the initial bringup phase with both DDFs and takes nearly 25,000 blocks (over 2 months at the target block time) to bring the pool size back to the target.
2. The introduction of specific constants causes the algorithm to be overfitted to the specific `mainnet` parameters and DDFs used by the simulator
3. Implementing the algorithm would pose some complexity issues for consensus. It requires the error and integral terms to be calculated all the way back to the genesis block in order to maintain consensus, and that information is not available in the headers, unlike the previous ticket price. This would create an additional burden to somehow cache that information in a database, or otherwise calculate and cache it during startup, for every actor involved.

## Summary

The algorithm itself is based on the mathematically sound PID controller, and it does eventually converge on the target pool size, as well as being able to handle instant surges and ebbs in demand after the system has been operational for a while, both of which are desirable properties.

However, as mentioned in the cons, this algorithm struggles with the initial bringup phase with both DDFs, is relatively slow to converge, and also would not react well to any future changes to the parameters used by `mainnet` since it is overfitted to its specific parameters by the introduction of the three constant terms. Also, implementing it would bring additional complexity due to the requirement of external state that is not already available in the block headers.

Member

### davecgh commented Apr 18, 2017

 @jyap808 No problem. Thank you for participating! Regardless of the final proposal that gets selected, all of the proposals, feedback, and results have helped zero in on a solution.


Member

## Proposal 6 Results

Here are the simulation results of Proposal 6 using two different demand distribution functions (DDFs) and the constants proposed by the author, namely, `s1 = 100` and exponentiation by a factor of `2`.

Pros:

• Both the pool size and ticket price are quite stable
• The algorithm itself is fairly easy to understand and can be computed with reasonable efficiency
• The algorithm handles the initial bringup period well with realistic and rational staking behavior
• The algorithm handles steady state properly
• The algorithm recovers well from sudden shocks to the system

Cons:

• The initial bringup period with the more irrational staking behavior (DDF B) allows the pool size to surge to roughly 63k

## Summary

The results of this proposal are nearly identical to those of proposal 7, which is not surprising since it shares the same general principle of forces, was updated to include some of the details from proposal 7, such as including immature tickets in the calculation, and even has the same dominant term. It is interesting to note that both proposals were originally worked on independently and arrived at very similar methodologies.

As such, I believe this proposal, like proposal 7, satisfies all of the hard requirements and manages to achieve the ideal goals. Since the two proposals are virtually identical, both in form and function, further analysis with different DDFs and a more in-depth look at the minor differences in the qualitative behavior and the complexity of the math and implementation will be needed to make a final determination.

Member

### chappjc commented Apr 20, 2017 • edited

 Perhaps the right way to look at prop 6 is to decide if there is anything of value (via further analysis and/or theoretical assessment), and simply incorporate the ideas into prop 7. I'm not sure how prop 7 has evolved recently, but I personally don't have any further plans to improve my proposal. In any case, I want to be clear that I am only interested in reaching the best possible solution. Given the theoretical similarities, I think the only questions are: (1) Does the different pool velocity term (similar in effect to the `Fc` term in prop 7) achieve the goal any better? (2) Is there a need for a term to attenuate price change based on the previous price change magnitude (the exponential in prop 6)? The idea was to slow down price swings, and I've considered a second derivative as well.
Member

### davecgh commented Apr 21, 2017 • edited

 We have been working with various tweaks to proposal 7 as well, one of which includes an exponential term which either attenuates or amplifies the ratio depending on the periodic distance from the target pool size (as opposed to price change). It has shown some excellent results in terms of addressing the final con of proposal 7. I believe the difference in the `Fc` term between the two proposals really is a wash without applying any further transformations; I've exchanged them in both algorithms and the results don't change in any appreciable way. We are experimenting with some other approaches to attenuating and amplifying the relative velocity as well. One of the challenges is that the final algorithm will need to be implemented without the use of floating point math: because of its role in consensus, it has to produce identical results in all languages and on all machines, and floating point math is notoriously unreliable at producing exactly identical results across machines and languages due to a variety of factors.
