Conversions with predictable oracle prices #190

Closed
AroundTheBox opened this issue Jul 31, 2019 · 38 comments

@AroundTheBox commented Jul 31, 2019

PegNet conversions allow unlimited volume at a predetermined price, so any case where the price is stale or predictable gives users a free embedded option that is not resourced by the network. For volatile crypto-assets these options are very valuable, and users could extract roughly 8% per day of value from PNT holders, compounded indefinitely with no limit.

For example, I ran an Excel simulation with ETH and USD pairs, using 5 days' worth of price data at 10-second intervals. At 9 minutes and 30 seconds into each block, current ETH prices are compared to average prices for the block so far. If the current ETH price is higher than the average, the simulation converts any USD to ETH. If the current price is lower, it converts any ETH to USD. Otherwise it waits for the next block. I assumed the oracle price for the block converged on the average over the period.

For the data period I looked at, the simulation made a conversion about every 2.8 blocks and had an average profit of 15 bps per conversion. This equated to about 8% a day, or ~10x if extrapolated to a month. The results were still positive (although with more variance) if I assumed oracle prices converged to a single random price over each 10-minute block, or converged to the median block price (which is closest to what I think the grading method will actually return, assuming continuous polling). Although the algorithm can certainly lose on any specific conversion, any variance is pretty quickly washed away since gains and losses are realized in 10-minute blocks.
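
To make the decision rule concrete, here is a minimal sketch of the simulation loop in Go. The price series, the 60-sample blocks, and the loadBlocks helper are illustrative stand-ins for the Excel data, and the oracle is assumed to settle at the block average, as above.

    package main

    import "fmt"

    // block holds one 10-minute block of ETH/USD prices sampled every 10
    // seconds (60 samples). Synthetic stand-in for the historical data.
    type block struct {
        prices [60]float64
    }

    // oracle is the assumed settled oracle price for the block: the simple
    // average of all samples, matching the "converges on the average" assumption.
    func (b block) oracle() float64 {
        sum := 0.0
        for _, p := range b.prices {
            sum += p
        }
        return sum / float64(len(b.prices))
    }

    // loadBlocks would read the historical 10-second series; here it returns a
    // tiny synthetic example so the sketch runs on its own.
    func loadBlocks() []block {
        var b block
        for i := range b.prices {
            b.prices[i] = 200 + 0.1*float64(i) // a steadily rising price
        }
        return []block{b}
    }

    func main() {
        usd, eth := 100.0, 0.0 // start with $100 of pUSD
        for _, b := range loadBlocks() {
            // At 9m30s (sample 57) compare spot to the block average so far.
            avgSoFar := 0.0
            for _, p := range b.prices[:57] {
                avgSoFar += p
            }
            avgSoFar /= 57
            spot := b.prices[57]

            px := b.oracle() // the conversion settles at the block's oracle price
            switch {
            case spot > avgSoFar && usd > 0: // ETH is cheap at the oracle price: buy
                eth, usd = usd/px, 0
            case spot < avgSoFar && eth > 0: // ETH is rich at the oracle price: sell
                usd, eth = eth*px, 0
            }
        }
        fmt.Printf("final holdings: %.2f pUSD, %.6f pETH\n", usd, eth)
    }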

I think there are two general approaches to addressing this, both of which have problems:

1. Ensure transaction requests happen before OPR mining. The most obvious way to do this is to process conversions in the next block, but that requires users to wait up to 20 minutes.

2. Force miners to converge on prices near the end of the block. This strategy has its own issues, because even if you define the target as prices at minute 9, for example, it's in miners' interest to continue mining in the first 9 minutes at the most recent available price, since this is their best prediction for the minute 9 price and the alternative is idling.

One possibility would be to include in the hash an externally verifiable, public, random value that appears in minute 9. For example, you could force OPRs to include the PoW value of the next Ethereum block mined after minute 9. That would be difficult to implement, would add a dependency on another blockchain, and would add a ton of complexity.

I don't know the best solution, but I think one needs to be found before pM2, and possibly at least discussed before pM1 is complete.

EDIT: Related to issue #187, which has been closed; consolidating the discussion here.

@WhoSoup (Contributor) commented Aug 1, 2019

2. Force miners to converge on prices near the end of the block. This strategy has its own issues, because even if you define the target as prices at minute 9, for example, it's in miners' interest to continue mining in the first 9 minutes at the most recent available price, since this is their best prediction for the minute 9 price and the alternative is idling.

This would leave miners with less than a minute of mining. Adding entries to factom isn't instant and has some time overhead. I haven't measured it but you'll likely have to give it at least a five second buffer for your #oprs*2 messages to be propagated through the p2p network. More if the network is busy, which it would be if every miner is submitting their OPRs.

Please keep that technical restriction in mind when doing theorycrafting!

@PaulSnow (Contributor) commented Aug 1, 2019

We could mine twice in a block, so conversions in minutes 1 to 5 are done at the prices that settle in minute 9, and conversions in minutes 6 to 9 settle at the prices in minute 5 of the next block.

Conversions would settle in 10 minutes, and we would be doing more mining, but that's not necessarily a bad thing.

@Emyrk (Member) commented Aug 1, 2019

@AroundTheBox I was pondering this too, and was going to write a script that would go back over the last year as a simulation. I figured this would be possible. Can you post your findings/excel somewhere?

Do you believe that if we take the prices from the next block rather than the current one for a conversion, that would prevent this kind of converting? I know you suggested this, just curious if you ran an experiment on this setup too.

@Emyrk (Member) commented Aug 1, 2019

Also this conversion affects the conversion price: #11

@PaulSnow (Contributor) commented Aug 1, 2019

I think this is interesting and compelling enough to say that the current mining approach will require 20 minutes to settle.

While mining twice in a block is easily possible, it is too complex to drop in right now, and isn't needed if we have a workaround for conversions (the 20-minute settlement works for that).

A huge range of solutions is possible, but they take time to consider: fees, or different weightings that give later prices an edge over earlier ones.

@AroundTheBox (Author) commented Aug 1, 2019

Do you believe that if we take the prices from the next block rather than the current one for a conversion, that would prevent this kind of converting?

Yes, to formalize the issue a little bit, the conversion price will be some function of prices between:
St - The mining start time
Et - The mining end time

We'll call Ct the conversion request deadline time.

Right now there's a long period between St and Et, allowing for significant price movements over the period. Because Ct is roughly equal to Et, users can predict the oracle price and compare it to current prices and execute on an embedded option.

One way to solve the problem is to change the function to somehow weight prices near the end of the period, but I can't think of a way to do that. Another way is to significantly shorten the St-to-Et period, but as Who pointed out, that has other problems.

The only way to guarantee users can't take advantage of price information is to require Ct to be before St. With the current setup that means processing transactions in the next block.

Can you post your findings/excel somewhere?

I stuck a 3-day subset of the data in the attached Excel file. The larger set I was looking at was non-contiguous and it's simpler not to worry about the breaks. It's nothing fancy, but you can copy any column you want from the "block data" tab into the conversions tab. So, for example, you can pick a random trial and drop it in and it will act as if the oracle prices converged to that result. It's also really easy to add a gap threshold or transaction costs or whatever you'd like to columns F and G in the trading tab, but it won't really impact the conclusions.

I also left Bitcoin data if you want to experiment with that as well. You can get a very small boost from finding 3 way conversions between BTC, ETH, and USD, but it's smaller than you'd think because BTC and ETH are so heavily correlated over short time frames.

Also note in the trading formula that the current price has no impact at all on the conversion except as information on whether to convert or not. In the end the oracle prices are all that matter, but the next block's oracle price is correlated more heavily with the current price than it is with the last block's oracle price. That's where the information advantage is actually realized.
Stale OPR Trading Data.xlsx

EDIT: Forgot to mention, take the "OPR" column in the "block data" tab with a grain of salt. I used the square of squares grading methodology as if ETH was the only price. This converges to something close to the median. For a series of prices any one price might be closer to the mean, but it's really complex and depends on the relative volatility of each asset. In actuality, if miners are truly optimizing their chances, I think the oracle prices will converge to the beginning-of-period prices. Either way it doesn't change any broad conclusions.

@WhoSoup (Contributor) commented Aug 5, 2019

thanks @AroundTheBox for providing the 3 day data.

I just ran a very simple arbitrage script on the data that followed two very simple rules:

  1. If at minute 9, BTC is worth more than 9 minutes ago, convert all USD to BTC
  2. otherwise, convert all BTC to USD

I started with $100 and after three days ended up with $137.44 minus $0.23 worth of transactions. There's no way this is a feasible system.

Not processing transactions until the next block is an unfortunate solution from a UX standpoint, but it would fix the problem.

Another potential solution that no one has brought up is transaction fees. The fees could be recycled to the miners to give additional incentive. I tried it with both a 0.05% and a flat 20 us-cent fee, and both broke my arbitrage script, so it likely doesn't have to be very high.
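
As a rough illustration of how such a fee would gate the strategy, here is a small sketch (the function name, the "predicted oracle" input, and the numbers are illustrative assumptions, not part of the protocol):

    package main

    import (
        "fmt"
        "math"
    )

    // worthConverting reports whether a conversion looks profitable after fees,
    // given the spot price now and the price the oracle is predicted to settle
    // at. feeRate is proportional (0.0005 = 5 bps); feeFlat is in pUSD.
    func worthConverting(spot, predictedOracle, notional, feeRate, feeFlat float64) bool {
        edge := math.Abs(spot-predictedOracle) / predictedOracle * notional
        cost := feeRate*notional + feeFlat
        return edge > cost
    }

    func main() {
        // A 15 bps predicted gap on a $100 conversion clears a 5 bps fee...
        fmt.Println(worthConverting(200.30, 200.00, 100, 0.0005, 0)) // true
        // ...but a flat $0.20 fee kills the same trade at this size.
        fmt.Println(worthConverting(200.30, 200.00, 100, 0, 0.20)) // false
    }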

@AroundTheBox (Author)

Another potential solution that no one has brought up is transaction fees. The fees could be recycled to the miners to give additional incentive. I tried it with both a 0.05% and a flat 20 us-cent fee, and both broke my arbitrage script, so it likely doesn't have to be very high.

Fees would help the issue, but not fix it. Even for a very high fee like 20 bps, if you set your script to only trade when the difference is above the fee you'll still see a decent profit.

Processing transactions in the next block helps from a process perspective, but as you said has drawbacks from a UX perspective and could have big impacts on appetite for arbitrage. An arbitrager would be trading spot prices on an exchange against processing through the protocol at an unknown price at least 10 minutes into the future. As a result pure arbitrage would no longer be possible.

I don't think there are any easy answers.

@PaulSnow (Contributor) commented Aug 5, 2019

How does a commitment to conversion prior to Oracle data settlement not fix the issue?

What I was suggesting is OPRs in minute 5 and OPRs in minute 9. Conversions in minutes 9-4 use the next minute-5 OPR, and conversions in minutes 5-8 use the OPRs in minute 9.

@AroundTheBox (Author)

With your proposed method I'd just check at minute 4.5 and minute 8.5 of every block to see how spot prices compare to predicted OPR prices. I haven't done the math, but I bet I'd get similar results. Two OPRs per block allows for less profit per transaction but twice the transaction opportunities. The only way to ensure you can't ever take advantage of the protocol is to require that transactions happen before the start of the OPR mining period.

Besides the worse UX, my point is that requiring the wait kills any opportunity for pure arbitrage as described in the whitepaper. If pBTC is trading for $1,050 on an exchange while BTC is at $1,000, an arb trade is to sell my pBTC on the exchange and convert pUSD to pBTC through the protocol. If we force waiting a block, however, I end up selling pBTC on the exchange at current prices while my conversion will happen at a future unknown price. On average it will still be profitable, but it's no longer the risk-free arb opportunity I think you're hoping will attract liquidity providers.

@PaulSnow (Contributor) commented Aug 5, 2019

But can't you do that with simple trades on an Exchange? Guess that the price isn't going to change for 30 seconds? Most arbitrage has to deal with slippage on trades between exchanges, so it isn't risk free.

As you approach a boundary where you are pretty sure the price isn't going to change much, you can arbitrage on that basis against a token off its peg on an exchange. And traders can do that even with the 20 minute max settlement, because they can do a conversion in minute 9 of one block, and the prices of minute 0 (or minute 1) will not likely change so much, even if 10 minutes are required to see the settlement.

@PaulSnow (Contributor) commented Aug 5, 2019

A very good off-peg price on an exchange isn't going to carry that much currency risk even earlier, as PegNet conversions (at least in the beginning) aren't going to move the market on real exchanges.

@AroundTheBox (Author)

...because they can do a conversion in minute 9 of one block, and the prices of minute 0 (or minute 1) will not likely change so much, even if 10 minutes are required to see the settlement.

Great point. OPRs converging to prices at the beginning of a mining period is a liability with conversions in the same block, but it's an asset with conversions submitted the block before.

@PaulSnow (Contributor) commented Aug 5, 2019

Right, so as long as we create two conversion points (minute 5 and minute 9) then I think this solves the issue, and gives two good points for arbitrage. We would need a data element in minute 5 to prove the prices were collected in minute 5. Right now that doesn't easily exist. Until then, we realize that arbitrage will be safer towards the end of the block.

@PaulSnow (Contributor) commented Aug 5, 2019

Also, arbitrage is quite complex. Simple arbitrage is relatively risk free, but there are other strategies that gain more return for more risk. I'm not deep enough into the process to make strong statements, but I do get the sense that the liquidity in the PegNet for the conversion of assets benefits the process. Particularly with assets that otherwise would have very little liquidity in some markets.

@WhoSoup (Contributor) commented Aug 6, 2019

I've been thinking about the more-than-once-per-block approach and I'd like to pitch the following idea:

Part One: One Entry, Multiple OPRs

More OPRs mean both more TPS required as well as a higher cost of mining. If we replace the massively bloated JSON format with binary, we could fit multiple OPRs into a single factom entry. We would lose human readability in the factom explorer but gain a lot of additional space.

What the OPR (a factom entry) would look like:

ExtId[0] = Nonce0
ExtId[1] = Nonce1
...
ExtId[X] = NonceX
Content:
    32 bytes    payout address in bytes (decoded base58 address)
    variable    id as string
    4 bytes     height
    32 bytes    winners (sha256 hash of the previous 10 winners)

    assets {
        256 bytes   the 0th list of 32 assets with 8 bytes per assets
        256 bytes   the 1st list of 32 assets
        ...
        256 bytes   the Xth list of 32 assets
    }

The formula if we wanted to write our own binary format would be:

       asset data          nonces          overhead
(#assets * 8 * #oprs) + (#oprs * 8) + (32 + 32 + 4 + #id)

For 3 samples per block and 32 assets, that'd be 860 + space for factom's overhead and the miner id.

If we used protobuf, there'd be some additional protobuf overhead. I wrote a sample and with a 20-byte miner id, the protobuf was 911 bytes vs the 880 theoretical minimum. Protobuf is however highly compatible with all other programming languages.
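
As a quick sanity check of those numbers, the formula can be written out directly (a sketch; idLen covers the variable-length miner id, and factom's own per-entry overhead is not included):

    package main

    import "fmt"

    // oprEntrySize returns the theoretical size in bytes of one factom entry
    // holding multiple OPR price lists, per the layout above:
    // asset data + nonces + (payout address + winners hash + height + id).
    func oprEntrySize(numAssets, numOPRs, idLen int) int {
        assetData := numAssets * 8 * numOPRs
        nonces := numOPRs * 8
        overhead := 32 + 32 + 4 + idLen
        return assetData + nonces + overhead
    }

    func main() {
        fmt.Println(oprEntrySize(32, 3, 0))  // 860, three samples of 32 assets
        fmt.Println(oprEntrySize(32, 3, 20)) // 880, the theoretical minimum with a 20-byte id
        fmt.Println(oprEntrySize(32, 9, 20)) // 2464, the nine-samples-per-block case
    }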

Part Two: How would it work?

The difficulty continues to be the first 8 bytes of LXRHash(SHA256(data) | nonce).

Each section has its own difficulty. Miners would:

  1. Start mining a nonce0 for base0 = SHA256(payout | id | height | winners | 0th list of assets), starting at minute 1
  2. At the next breakpoint, get your highest difficulty and now start mining nonce1 for base1 = SHA256(base0 | nonce0 | 1st list of assets)
  3. At every breakpoint X, mine nonceX for baseX = SHA256(baseX-1 | nonceX-1 | Xth list of assets)

This approach would force miners to mine these records sequentially, since X depends upon data from X-1. If we now grade that OPR like before, except we compare all X breakpoints at once, miners would have to spend equal time on all sections in order to maximize their difficulty for each breakpoint. This would make the grading algorithm a lot harder, but filtering by deviation should still work.
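
A sketch of that chained mining loop is below. SHA-256 stands in for LXRHash, the nonce search is a simple incrementing counter, the attempt budget stands in for "mine until the next breakpoint", and all the OPR fields are placeholder strings.

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // difficulty is the first 8 bytes of Hash(base | nonce) read as a
    // big-endian integer. SHA-256 is used here so the sketch is self-contained;
    // the real scheme uses LXRHash(SHA256(data) | nonce).
    func difficulty(base, nonce []byte) uint64 {
        h := sha256.Sum256(append(append([]byte{}, base...), nonce...))
        return binary.BigEndian.Uint64(h[:8])
    }

    // mineSection tries a fixed number of incrementing nonces and returns the
    // best one found for this breakpoint.
    func mineSection(base []byte, attempts uint64) (bestNonce, bestDiff uint64) {
        buf := make([]byte, 8)
        for i := uint64(0); i < attempts; i++ {
            binary.BigEndian.PutUint64(buf, i)
            if d := difficulty(base, buf); d > bestDiff {
                bestNonce, bestDiff = i, d
            }
        }
        return
    }

    func main() {
        // base0 = SHA256(payout | id | height | winners | 0th asset list);
        // every field here is just a placeholder.
        h := sha256.Sum256([]byte("payout|id|height|winners|assets0"))
        base := h[:]

        for section := 0; section < 3; section++ {
            nonce, diff := mineSection(base, 200000)
            fmt.Printf("section %d: nonce=%d difficulty=%d\n", section, nonce, diff)

            // baseX = SHA256(baseX-1 | nonceX-1 | Xth asset list): section X can
            // only start once nonceX-1 is fixed, which forces sequential mining.
            nonceBytes := make([]byte, 8)
            binary.BigEndian.PutUint64(nonceBytes, nonce)
            next := sha256.Sum256(append(append(append([]byte{}, base...), nonceBytes...),
                []byte(fmt.Sprintf("assets%d", section+1))...))
            base = next[:]
        }
    }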

Part Three: How many breakpoints

At 1 EC, the most we can do with 32 assets is three breakpoints, i.e. at minutes 1-3, 3-6, and 6-9.

In order to have one opr per minute (9 total), we'd need 3 EC. (theoretical 2464 bytes, protobuf 2532 bytes, each with a few bytes of factom overhead).

To have 9 entries and 2 EC, we'd need to drop down to 25 assets.
To have 9 entries and 1 EC, we'd need to drop down to 10 assets.

To have 4 entries and 1 EC, we'd need to drop down to 26 assets.
We can do 4 entries and 2 EC with 32 assets. (~1182 bytes)

Part Four: Transactions

Transactions would be subject to the price for the factom minute they were submitted in for that block. This would present a problem if we ever have a block that doesn't have any winners, though.

Anyway, just wanted to provide this as an alternative to submitting multiple factom entries per block. Doing it in one go is possible and you can soft-enforce miners to mine each breakpoint for an equal amount of time.

@AroundTheBox (Author)

Wow. Really interesting proposal. The technical points on format of the binary are beyond me, but I have a question on how you'd grade the final resulting difficulty.

With a single breakpoint per block, mining is linear in that if I have 10 times the hashpower I should expect 10 times the payout.

How are you proposing we calculate the difficulty with two breakpoints per block, for example? If you sum the difficulty or use a minimum threshold per block, I think you get nonlinear results, where with 10 times the hashpower I might expect 100 times the payout. This would of course quickly lead to centralization.

@Emyrk (Member) commented Aug 7, 2019

I really like the increased resolution of 1 min vs 10 min between prices.
I was assuming the payout is still 5k per block, where the winner has to win overall across all 10 breakpoints. We'd have to tighten down the grading for sure.

If we took the average hashpower across all 10 breakpoints, then if you mine super hard on one, it doesn't leave you much time for another. But I think we'd have to see the discovery rates to figure out the best way to prevent over-mining one minute on purpose to rank higher. Maybe take the quartile ranges to exclude, say, the top 3 most difficult breakpoints, and average the rest? Box-and-whisker plots can get rid of outliers pretty easily.

@AroundTheBox (Author)

Emyrk,

I don't think you can just add or average the hashpower for each segment because a mining pool or large entity can easily dominate and the protocol will either quickly centralize or be 51% attacked.

Right now if you have 5 times the hashpower as me you'll get a higher PoW score than me 5 times as often, just as you'd hope. With the average of PoW scores over 10 segments/breakpoints, if you have 5 times the hashpower as me you'll get a higher PoW over 99.9% of the time.

One solution would be to grade and payout for each segment individually, although as usual this creates its own problems.

@WhoSoup (Contributor) commented Aug 7, 2019

How are you proposing we calculate the difficulty with two breakpoints per block, for example?

Part Five: Grading & Difficulty

Right now we're already grading 32 different assets. There shouldn't be any functional difference if we start grading 64, 96, etc. Grade all sets of assets at the same time using the current process and then one entry wins and decides the prices-per-section for that block.

I don't think we should have a separate winner for every minute, that doesn't make sense in my format. The point is that people can't just mine an individual section, they have to mine all sections.

For the difficulty, I would use the minimum of all breakpoints. Theoretically, mining the same amount of time for every section will result in a similar difficulty, in which case the minimum is okay. If someone wants to spend more time mining a section because they want a higher difficulty, it leaves them less time to mine the next section, which increases the likelihood they get an even lower difficulty there.

@WhoSoup (Contributor) commented Aug 7, 2019

Right now if you have 5 times the hashpower as me you'll get a higher PoW score than me 5 times as often, just as you'd hope. With the average of PoW scores over 10 segments/breakpoints, if you have 5 times the hashpower as me you'll get a higher PoW over 99.9% of the time.

99.756% 🐱

I agree that taking the average of 10 entries eliminates a lot of the variance, which is not good.

@AroundTheBox (Author)

I think taking the minimum PoW across the segments has the same issue as taking the average, where with 5x the hashpower you'll beat me... let's say over 99% of the time.

@WhoSoup (Contributor) commented Aug 7, 2019

I think taking the minimum PoW across the segments has the same issue as taking the average, where with 5x the hashpower you'll beat me... let's say over 99% of the time.

Can you elaborate? My thought was that taking the minimum difficulty is no different than what we're doing now, which is one difficulty per factom entry.

I do think there's a different problem, though, and that is that it would be harder to submit different OPRs.

In the first round, you have miners all trying to get the highest difficulty, so you end up with a top X records. In the second round, why would you ever take anything but the highest difficulty from round 1 to continue from?

Essentially you'd end up with a list of OPRs where the first N-1 sections all use the same nonce, only nonceN is different. If the lowest difficulty is somewhere in nonce0...nonceN-1, then all those records would be identical.

@AroundTheBox (Author)

Let's say PoW is just rolling a 20-sided die and the highest roll wins. I have one die and you have 5.

Score after one trial and you'll beat me 5 times as often as I beat you. Take the minimum score out of 10 trials and how often will you beat me?

If each of your five dice (die?) was an independent entity then we'd be on an even playing field, but you get the advantage of picking your best result from all 5 dice in round 1, the best result from all 5 in round 2, etc.
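
For anyone who wants to check the intuition, here is a minimal Monte Carlo sketch of that dice model (one die vs. five, score = minimum of the best roll in each of ten rounds; the trial count and seed are arbitrary):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // bestOfN rolls n twenty-sided dice and keeps the highest roll.
    func bestOfN(r *rand.Rand, n int) int {
        best := 0
        for i := 0; i < n; i++ {
            if roll := r.Intn(20) + 1; roll > best {
                best = roll
            }
        }
        return best
    }

    // score plays ten rounds with n dice and returns the minimum of the
    // per-round bests, mirroring "take the minimum difficulty over the sections".
    func score(r *rand.Rand, n int) int {
        min := 21
        for round := 0; round < 10; round++ {
            if b := bestOfN(r, n); b < min {
                min = b
            }
        }
        return min
    }

    func main() {
        r := rand.New(rand.NewSource(1))
        const trials = 200000
        bigWins := 0
        for i := 0; i < trials; i++ {
            if score(r, 5) > score(r, 1) {
                bigWins++
            }
        }
        fmt.Printf("the 5-dice player wins outright in %.1f%% of trials\n",
            100*float64(bigWins)/float64(trials))
    }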

@WhoSoup (Contributor) commented Aug 7, 2019

If each of your five dice (die?) was an independent entity then we'd be on an even playing field, but you get the advantage of picking your best result from all 5 dice in round 1, the best result from all 5 in round 2, etc.

Just double-checked it via a simulation and you're right. It turns out to be exactly the same as taking the average.

@Emyrk (Member) commented Aug 7, 2019

@WhoSoup I do like the idea of a finer resolution for the pricing. With 10 breakpoints, a trade is at most 1 min away from a price point (assuming miners actually poll at 1-min intervals).

From @AroundTheBox's comments, it is obvious this complicates things like grading. There are other side effects, like bumping us out of the free-tier limits for, I think, every data source we support. This would take a development effort to effectively rewrite the grading, update the miner, and figure out polling solutions.

Do you think it would be feasible to treat this as a protocol upgrade, and therefore not need it at the launch of pM1 (where we are focused on mining and PNT distribution)? I'm not sure at what point we'd say this finer resolution is required (pM2? Future?).

Does the current worst case of 20min resolution provide any problems that someone can profit from?

@PaulSnow (Contributor) commented Aug 7, 2019

To level set, I think we have time to figure this out. I do think this can be done as an upgrade later. It would be best to do prior to being on the exchanges, but we have some time to do so.

@mberry (Contributor) commented Aug 8, 2019

Have you tried these tests using the median instead of the arithmetic mean?

Offered a few other suggestions for robust methods of statistical averaging here: #106

In particular Winsorizing to balance outliers: https://en.wikipedia.org/wiki/Winsorizing

Though I suspect it's not ideal, especially for smaller miners that may get lucky.

@WhoSoup (Contributor) commented Aug 8, 2019

If each of your five dice (die?) was an independent entity then we'd be on an even playing field, but you get the advantage of picking your best result from all 5 dice in round 1, the best result from all 5 in round 2, etc.

Okay, I triple-checked this and you're only half right. Your math relies on the assumption that each miner only submits one OPR. If you only submit one OPR, then someone with 5 times the hashpower will always beat your submissions. However, this is not the case. The goal is just to get into the top 10 in order to get paid.

With my system, submitting ten OPRs would mean the first 9 difficulties are the same, but the last one contains your top ten mined in the last minute. That means for diff1 through diff9 to contain the minimum, you'd have to find ten difficulties in that last minute all better than what you mined in the first nine. Since all sections have equal time, that's not very likely.

In a perfect world, you'd end up with one entry where the minimum is in the first 9 sections, and nine entries where the minimum is in the last section. I don't think it's quite that clean-cut, but the principle still applies.

I wrote another simulator that simulates this, and if someone has 5 times your hashpower, the likelihood of your best entry beating at least one of their ten entries is about 25%.

This of course changes drastically if we enter the top 50 instead of just beating one person. I'll have to get back to you on that.

@AroundTheBox (Author)

You're right, I left it at two participants because it gets really complex with more, and I hadn't fully thought through the math like you're doing now, but I think the basic premise holds that there's a network effect which makes it unsustainable. In any other PoW scheme I have no incentive to join one mining pool over another except for reliability, frequency of payout, etc. With this scheme my only incentive is to join the largest mining pool because it provides the largest expected return. The bigger the pool the larger the advantage over any other participants.

In your simulation, what's the total difference in expected payout? If the 5x miner is getting 100% of it 75% of the time and 80-90% of it 25% of the time the total must still be really high given only a 5x advantage in hashpower.

@PaulSnow (Contributor) commented Aug 9, 2019

@AroundTheBox Almost all mining pools split their rewards across their members, so the largest pool does get the biggest reward, but it pays the most people too. As these numbers grow, the payout begins to better fit the proportion of hashing power of each pool.

One of the other ideas was to not simply take the top 50 by difficulty, but to promote to the top 50 a survey of the entries submitted. You sort the entries by difficulty and split them into 50 bins holding roughly the same hash power: a few entries per bin at the top, and many more OPRs per bin at the bottom, so each bin represents more or less the same total difficulty. Then take the highest difficulty in each bin.

Now a mining pool's difficulty increases the odds of winning, but doesn't exclude other pools from winning.
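
One possible reading of this binning idea, as a rough sketch (treating each entry's difficulty as a crude proxy for the hash power it represents, which is an assumption on my part, not something specified above):

    package main

    import (
        "fmt"
        "sort"
    )

    // OPR is a placeholder record: only the difficulty matters for this sketch.
    type OPR struct {
        ID         string
        Difficulty uint64
    }

    // selectByBins sorts entries by difficulty, splits them into `bins` groups
    // of roughly equal total difficulty (a crude stand-in for equal hash power),
    // and keeps the best entry of each bin.
    func selectByBins(entries []OPR, bins int) []OPR {
        sorted := append([]OPR{}, entries...)
        sort.Slice(sorted, func(i, j int) bool {
            return sorted[i].Difficulty > sorted[j].Difficulty
        })

        var total uint64
        for _, e := range sorted {
            total += e.Difficulty
        }
        target := total / uint64(bins)

        var winners []OPR
        var acc uint64
        binStart := 0
        for i, e := range sorted {
            acc += e.Difficulty
            // Close the current bin once it holds its share of the total.
            if acc >= target*uint64(len(winners)+1) || i == len(sorted)-1 {
                winners = append(winners, sorted[binStart]) // best entry in the bin
                binStart = i + 1
            }
            if len(winners) == bins {
                break
            }
        }
        return winners
    }

    func main() {
        entries := []OPR{
            {"pool-A-1", 900}, {"pool-A-2", 880}, {"pool-A-3", 870},
            {"solo-1", 400}, {"solo-2", 350}, {"solo-3", 300}, {"solo-4", 250},
        }
        // The big pool tops two of the three bins, but a solo miner still wins one.
        for _, w := range selectByBins(entries, 3) {
            fmt.Println(w.ID, w.Difficulty)
        }
    }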

@AroundTheBox (Author)

Paul, I agree. As long as mining pools or large entities receive payouts proportional to hashpower there's no problem. Any system with network effects, where a mining pool or large entity gets a greater payout than their proportion of hashpower, will be unsustainable. This applies even if we find clever ways to lower the effect, so if a mining pool with 30% of the hashpower gets 40% of the rewards (instead of the 90%+ we're talking about in these examples) everyone still has an incentive to consolidate into the biggest pool.

I think most schemes that have some kind of private checkpoint, where you select your best hash before continuing to the next step will have this kind of network effect, even if we change how we rank and select by difficulty.

I think the obvious solution from a PoW perspective would be to incentivize the partial OPR data to be made public at each step. If everyone can build the minute 2 OPR off of the best minute 1 hash then you basically have a mini blockchain within each 10 minute block and there wouldn't be a problem.

The way to do this would be to have payouts shared for each step in the final OPR. So as part of the segment 1 hash I include my address and publicize the partial OPR. It's then in everyone's interest to use the segment 1 OPR data since it has the best PoW score so far, even though it contains my address. The final OPR submitted could include payouts to up to 10 addresses.

Unfortunately this would solve the PoW grading but at the cost of consolidating the asset price data too early (the top 50 records would all have the same asset prices for the first 9 minutes). The challenge is to find a system that both encourages submitting unconsolidated asset prices for each breakpoint and has expected rewards proportional to hashpower, assuming asset prices are submitted honestly.

@WhoSoup (Contributor) commented Aug 11, 2019

Had some more time to think about my multi-OPR idea, and one way to make the difficulty work would be to filter non-unique nonce0s, just like we're doing now. This would force the multi-OPR to have a unique nonce0 as a "starting point" (the top 10 for minute 1). Since the following breakpoints are all based on nonce0, you'd have to find an individual best result for each chain. This would mean every OPR you submit has a unique chain of X difficulties, rather than X-1 of them being the same. We still take the minimum from that, so the goal is to find a PoW higher than nonce0's for each minute.

The drawback is that it would be hard for basic miners to still submit ten OPRs. We've seen LXRHash perform best at one OPR per core, so an 8-core miner would likely only be able to submit 8 OPRs. Personally, I think that would be an acceptable tradeoff. Mining has strong hardware requirements; it's reasonable to expect that someone just running a miner for fun on consumer hardware would not be able to reach the best conditions.

If mining pools get involved, personal miners would be shut out anyway, and mining pools would have no problems dealing with the above system.

@WhoSoup (Contributor) commented Aug 11, 2019

With this scheme my only incentive is to join the largest mining pool because it provides the largest expected return. The bigger the pool the larger the advantage over any other participants.

This is the case for any proof-of-work system, in my opinion, though mining pools typically do take a cut. The higher the overall hashpower of the network is, the less likely an individual will be able to win. Mining pools would compete with each other through their cut %. Three mining pools make up 51% of the bitcoin network.

@AroundTheBox (Author)

Who, I think your proposal to require a unique nonce0 is a simple but great idea. There's still a network effect / nonlinear response to hashpower, but you're now mostly focusing it at the level of the OPR. This leaves a mining pool or large entity with an interesting strategy. After the first segment they should decide how many OPRs they will attempt to complete. Choose a low enough number of OPRs and you can virtually guarantee they all make it to the top 50. Choose more and you may lower the chance of each one making it, but increase your overall expected value. This is different from the current system, where you just submit as many OPRs as meet a threshold.

The two open questions I'm curious about (and If I get a chance I might write a simulation for) are:

  1. In equilibrium what would be the minimum hashpower to effectively contribute, as a percent of the network? After X% you'd need to join a mining pool or you'll end up with no rewards.

  2. Once an entity has over 2% of the total hashpower (so enough to efficiently submit at least one OPR) what advantage remains for having a greater proportion of the total network's power? There is definitely some advantage, but it might be small. For example if I'm working on segment 3 of a single OPR and I find a PoW score higher than the minimum of segments 1 and 2, there's no point in working further on this segment. I'd just idle until segment 4 starts, unless I want to risk working on segment 4 early with early prices. If I'm working on segment 3 with multiple OPRs then I would have all my resources find a PoW hash higher than the minimum for OPR 1, and then instead of idling all my resources I'd find a PoW hash higher than the minimum for OPR2, and so on. More efficient use of resources must be an advantage, but I don't know how big.

We've seen LXRHash perform best at one OPR per core

Can you explain that? Maybe there's something I'm missing. From each core's perspective, why does it matter if the data it is hashing is from one OPR, a second OPR, or is shifted halfway through the segment?

an 8-core miner would likely only be able to submit 8 OPRs

A single person with an 8-core miner would realistically be able to work on at most 1 OPR. A single OPR with 8 cores should be hundreds or thousands of times more likely to make the top 50 than 8 individual OPRs with a single core running each.

The higher the overall hashpower of the network is, the less likely an individual will be able to win. Mining pools would compete with each other through their cut %. Three mining pools make up 51% of the bitcoin network.

As long as expected return is proportional to hashpower that's fine. In Bitcoin if I have 0.01% of the total hashpower I have the same expected return whether I solo mine, join a pool that has 5% of the hashpower, or a pool that has 30%. The only difference is the frequency of payment and specifics like the miner's cut.

If the return is not proportional I might have an expectation of ~0% as a solo miner and maybe 0.02% if I join the pool with 30% of the hashpower. It's not just the payout frequency that changes, but the actual expected return.

@Cahl-Dee

My understanding is that we have settled on 20-minute settlement. Unless someone opposes (in which case, re-open this issue), we will stick with that for now.

PegNet pM2 - Conversions & Transactions automation moved this from To Do to Done Aug 28, 2019
@ilzheev commented Oct 24, 2019

I think we should re-initiate the discussion about conversions.
The current implementation of conversions creates a delay of 3-11 minutes between the moment you create a conversion and the moment the exchange rate for your transaction is calculated.

This makes arbitrage trading on PegNet essentially impossible. Show me an arbitrage trader who would accept either of the two options PegNet provides:

  1. An unknown exchange rate (a 3-11 minute delay)
  2. Limiting the exchange rate when creating the tx, but at the cost that your conversion may never happen

As arbitrage trading is about creating 2 opposite transactions at the same time, using PegNet for one of these txs may result in:

  1. Losing money because of the unknown exchange rate
  2. Completing the 1st leg of the trade on a traditional exchange but not completing the 2nd leg on PegNet

This makes arbitrage trading with PegNet completely useless, because a risk appears in either case, and arbitrage is about being riskless.

@Cahl-Dee

I have not heard complaints from arbitrage traders either, though. So there's no data at this point, just speculation from developers and product managers :-)

I think we need to host the trading/arbitrage competition and seek feedback from experienced traders.
