zArchive~Important Statements Regarding PoS

NeuralMiner edited this page Oct 18, 2018 · 1 revision
  1. For all blocks: reject any block with an invalid CPID. We already have a good function for that, and it's necessary for us to even EXIST. (It already ensures the CPID is genuine and was on team Gridcoin when the user signed up.)

  2. Only verify the magnitude on unconfirmed network blocks (so we really only need 6 confirms for the magnitude). We'll standardize the magnitude so the value in the block always equals the value verified - so that will make it simple.

  3. For old blocks (with 6+ confirms) we don't need to verify the magnitude with Netsoft - so that solves the efficiency problem.

The only downside is that new blocks will incur a small delay when being accepted, but I think we can ensure that runs on a background thread so the user won't notice. I think that solves all of the block-acceptor problems using single magnitudes.

We will still have to ensure no block is accepted where it results in a PoR total average daily payout *BY CPID* greater than the CPID magnitude * 1: this is important since a user can stake on multiple machines with one CPID. So to summarize, this last rule is the big one that needs to be verified to work properly. I'm not as concerned about the top 3 - if the client is syncing, we can tell it is out of sync when the last block is over an hour old (relative to the node itself), and we'll make that part of the check for rules 1-3. For the final rule, we have consensus issues to deal with.
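The rules above can be sketched roughly as follows. This is a minimal illustration, not the actual Gridcoin client code: `Block`, `accept_block`, and the dictionaries are hypothetical names, and the CPID check is a stand-in for the real validity function (assuming a CPID is a 32-character hex string).

```python
from dataclasses import dataclass

CONFIRMS_REQUIRED = 6

@dataclass
class Block:
    height: int
    cpid: str
    magnitude: float   # magnitude claimed inside the block
    por_reward: float  # PoR payout this block claims for the CPID

def is_valid_cpid(cpid: str) -> bool:
    # Stand-in for the real rule-1 check mentioned in the text.
    return len(cpid) == 32

def accept_block(block, chain_height, avg_daily_payout, consensus_mag):
    """Rules 1-3 plus the final per-CPID payout cap, as described above."""
    # Rule 1: for all blocks, reject any block with an invalid CPID.
    if not is_valid_cpid(block.cpid):
        return False
    confirms = chain_height - block.height + 1
    # Rules 2-3: verify the claimed magnitude only while the block is
    # unconfirmed; blocks with 6+ confirms skip the verification round-trip.
    if confirms < CONFIRMS_REQUIRED:
        if block.magnitude != consensus_mag.get(block.cpid):
            return False
    # Final rule: the CPID's average daily PoR payout must never exceed
    # magnitude * 1 (one CPID may be staking from several machines).
    if avg_daily_payout.get(block.cpid, 0.0) + block.por_reward \
            > consensus_mag.get(block.cpid, 0.0):
        return False
    return True
```

Note how an old block (6+ confirms) passes even with a bogus claimed magnitude, which is exactly the efficiency trade-off described, while the payout cap applies to every block.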

Rob Halford July 30th, 2014

Gridcoin Stake Changes:

- Please delete and upgrade (not compatible)

- Added boost version to rpcconsole

- Added showblock

- Incremented protocol version

- Added Gridcoin genesis block

- Fixed execute testhash

- Includes POS 2.0

 (If this works properly the p2p checkpoint feature should work with no central checkpoints required)

  • I believe we need 5 nodes on the Net to verify syncing. I have 3 on the net and now 2 are in agreement.

Rob Halford July 31st, 2014

- Increased last PoW block to 1500

- Confirmed the PoW blocks take 500 confirms to mature - hence the reason we did not start staking

I restarted my 3 nodes. Let me know if anyone feels this version does not sync- I have no opinion yet as this behavior may be normal on POS 2.0 (chain trust, node trust etc).

It's possible that as we start staking with balances, node trust may rise - and therefore chain trust will rise.

I observed one of my nodes being banned by the other node for 30 minutes yesterday - you can check for this using getpeerinfo and looking for "banscore".

As long as you connect without the testnet flag, I think it should sync.

However, I am thinking this version of stake is not trusting any other nodes. I just compared the hashes of block 1000 (getblockhash 1000) across my 3 nodes, and none of them agree (in other words, I have 3 of my own chains here).

I verified the genesis block is good and the chain start time is correct, and I don't see anything glaring in the code that looks bad. The zerocoin trusted modulus looks OK.

I'll do a diff between a production PoS coin and the original source and see if we need an additional parameter set for this to work properly.

Rob Halford August 3rd, 2014

I think all the coins based on Bitcoin are considered Gen 1: they have elements using the original BerkeleyDB/LevelDB blockchain, the standard RPC interface, base58-encoded keypairs for sending and receiving, and the Bitcoin ScriptSig language for processing payments to multiple recipients (i.e. a TX's inputs and outputs). To be a Gen 2 coin you really have to throw all that foundation away and rewrite the entire coin from scratch.

Yes, there are some pros to doing that, but it's very risky: you are throwing out thousands of hours of debugging of the original code base - things that deal with security, the per-block difficulty calculation - literally thousands of hours of refinement spread across at least 60 developers.

I think there is a possibility that some day we can make a decentralized p2p SQL protocol that sits on top of the existing Gen 1 base and can be tested in a safe manner alongside the current coin. But it's such a huge undertaking to debug that it would be at least a one-year project. If it were ever debugged, and the day ever came when we could rely on the new blockchain and retire the old one, we could then consider it a Gen 2 coin.

However, we do have a lot of features in PoR that add value, such as Upgrade, all the RPC commands, the unreleased .NET code for counterpartyd, etc.

- Enabled Stake + PoW rewards

- Added Gridcoin Vouchering System:

 a.  Nodes tally network averages, then pick a random CPID to vouch for (once every 6 minutes - we can change this ASAP to be more efficient)
 b.  Nodes scan the entire vouched CPID, calculate the magnitude, CPID RAC and network RAC, and store these until the node stakes a block
 c.  The vouched CPID, magnitude, CPID RAC and network RAC are stored inside the block along with the solver's information
 d.  Modified showblock to show the vouching information

- Modified Tally Network Averages to create a consensus including Node Magnitude and Node Payment information

 a.  This will be used to calculate outstanding payments owed to pure PoR nodes (allowing catch-up payments)
 b.  This allows mining with one CPID across Gridcoin nodes for accurate payments (BAM support)

- Added listitem magnitude

 a. This report shows all CPIDs network-wide:
    for each CPID, you will find the network-consensus magnitude, total payments in the last 6 months, total owed amount, and average daily payments

- Modified Magnitude to use Verified RAC for an accurate consensus per node
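Steps a-c of the vouchering system might look roughly like this sketch. Everything here is illustrative: `create_voucher`, `credit_check`, and the field names are hypothetical, and the magnitude formula (each project RAC divided by the network RAC, summed) is an assumption based on the description Rob gives later in the thread.

```python
import random

def create_voucher(candidate_cpids, credit_check, network_rac, rng=random):
    """Steps a-c, sketched. `credit_check` stands in for the Netsoft
    lookup and returns {project: RAC} for the chosen CPID."""
    cpid = rng.choice(sorted(candidate_cpids))   # a: pick a random CPID
    project_rac = credit_check(cpid)             # b: scan the whole CPID
    # b: magnitude derived from each project RAC over the network RAC
    magnitude = sum(rac / network_rac[p] for p, rac in project_rac.items())
    # c: the dedicated fields stored in the staked block with the
    # solver's information
    return {
        "vouched_cpid": cpid,
        "vouched_magnitude": magnitude,
        "vouched_rac": sum(project_rac.values()),
        "vouched_net_rac": sum(network_rac[p] for p in project_rac),
    }
```

The random pick (step a) is what later draws the collusion concern: nothing in this flow forces a node to actually choose at random.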

Lederstrumpf, on 04 Aug 2014 - 01:35 AM, said:

Synced and solved some blocks - waiting for them to stake.

Have some questions regarding the vouching:

From what list does your client pick a CPID to vouch for? Is it CPIDs in the blockchain? From the Gridcoin team? Entirely random? (OK, it appears to pick solely from the Gridcoin team.)

How long does a CPID that has been vouched for remain valid in the chain before it must be updated with info from Netsoft or another credit-checking farm? Do these credits degenerate via the RAC formula or another mechanism, or do 30-day-old verified CPIDs retain their value like fresh ones?

How do we punish nodes that fake or alter the magnitude of the CPID they vouch for?

I see how this could be very useful if we have to go into DR, or even just to take load off Netsoft (at least for CPIDs that are being vouched for that appear in the blockchain regularly).

What happens if someone modifies the client to not pick these CPIDs at random, but to choose them from a list of owned CPIDs, or colluding friends? This would be in their interest, since if we go into DR, their CPIDs would remain verified while many of the random CPIDs they should have verified are not. Since we're talking PoS here, this is naturally not a security issue (doublespend), but more of a fairness/integrity problem.

Hi, thanks for testing...

Note: I did not want to give the impression this was the "final" version or the "best" release candidate - it's just a new feature that I think *can* help improve the integrity/performance of the client once we work together to polish it.

Even if we find it cannot be the primary credit-check algorithm, it can be left to work alongside other things. For example, the same process that collects the magnitudes also collects the total payments - obviously a plus, so that a hard consensus can be drawn from the network CPID list for more accurate stake payments.

But anyway, on to your observations:

1 - It's harvesting its list of CPIDs from the distinct CPIDs in the chain over the last 6 months, in the TallyNetworkAverages() function.

2 - It picks a random CPID from this list, runs a credit check on it, looks through the entire CPID, and divides each project RAC by the network RAC (yes, so it does use the network-consensus project averages) for the calculation, thus arriving at a vouched magnitude for that CPID based on real conditions.

3 - This is the initial idea, subject to tweaks. Suggestions: all vouchers are stored on every block in dedicated fields (vouched CPID, vouched magnitude, vouched network RAC, vouched user RAC), and they can obviously be duplicates. When we re-tally net averages over 6 months, we always look at the most recent 6 months to come to a consensus. All CPID magnitudes are averaged out, so dupes only help create a more accurate mag for the report.

4 - Collusion: I thought about this too - I knew you would be the first person to mention it, thanks. I realized it was an issue after I deployed it. We can't really rely 100% on the vouched mags, since a fraudulent client can alter the magnitude readings for friends and family and vouch for itself. So yes, it's very hard to get around checking every block with Netsoft - in fact, if we don't solve this problem, we will have to do exactly that. But as I said in point 1, it was not a useless effort, since there is a good use for the report and for the outstanding payments. So this is something I could use some help on. And yes, punishing users who alter things may be one way to skin the cat - but how do we do that?
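The averaging in point 3 can be sketched in a few lines. This is a minimal illustration under the same assumed field names as before (`vouched_cpid`, `vouched_magnitude`), not the actual tally code:

```python
from collections import defaultdict

def consensus_magnitudes(recent_blocks):
    """Average every vouched magnitude per CPID over the most recent
    6-month window; duplicate vouchers simply sharpen the estimate."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for blk in recent_blocks:
        cpid = blk.get("vouched_cpid")
        if cpid:  # blocks with no voucher contribute nothing
            sums[cpid] += blk["vouched_magnitude"]
            counts[cpid] += 1
    return {cpid: sums[cpid] / counts[cpid] for cpid in sums}
```

Averaging duplicates also illustrates the collusion weakness in point 4: a dishonest client that repeatedly vouches inflated magnitudes for its own CPID pulls that CPID's average up unless honest vouchers outnumber it.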

Rob Halford:

1. Trying to research unpopular projects to raise your magnitude will not work in the long run. You are underestimating the power of arbitrage and the fallacies involved in taking advantage of the unfair averages. By virtue of participating, you will be changing the average very quickly, and it is highly unlikely, with 1300 participating CPU miners, that the averages will stay bent for a long period of time. I went as far as adding code to the program to deliberately create unpopular projects; they only lasted 30 days before the arbs were gone. I call the BOINC user who searches for the highest-paying project a "hunter". The hunters only made marginal gains for short periods before being stabbed by their own device. The problem is that once you attach a project and build up RAC, when you detach it to move on to the next lowest project you must build up RAC on the newly attached project - losing pent-up credit on the first as it decays (or, if detached, it's completely gone from your averages) - and starting over on the new project, spreading resources out. (Don't worry, I tried it both ways - leaving projects attached, adding the lowest projects every 7 days, 30 days, 6 months.) It does not matter; it harms your average more than it helps. I had an intuition this happens, and the program confirms it.

2. Creating a botnet of virtualized devices to get around our payment system and earn an unfair amount relative to others: this actually does not work at all, and the simulation confirms that doing this lowers your average mag. The reason is that when you spread your resources this thin (I used 25 VMs as an example, with 25 CPIDs in the code), you end up with a lot of projects barely meeting the minimum RAC, and some below 100 RAC. This drags down your total averages and affects the mag negatively - by about 50% over the long run.

3. Confirmation that our magnitude system actually works, i.e. that a power user with more crunching cores (computing units) is compensated proportionately, mag to core count: the simulation proved - without a doubt - that magnitude scales in proportion to computing units. 25-core machines earned 500 mag, while 4-core machines earned 80-100. More details later. Cobblestones are built in, and cobbles per project are in.

4. More CPIDs than normal: similar results to #2. Payments went down slightly due to the propensity for working projects with RAC lingering near or below 100. This arb only works if your mag is > 500 (as I stated before programming this).

5. One CPID, normal use: the program confirms that normal users, or users who work with one CPID, can earn fair rewards without fear and without playing games.
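The project-hopping penalty in point 1 follows from how BOINC's recent average credit behaves: RAC decays exponentially, roughly halving for each week a host stops crunching a project. The toy model below illustrates the effect; the simple ramp/decay formula is my own simplification, not the simulator Rob describes.

```python
HALF_LIFE_DAYS = 7.0  # BOINC RAC roughly halves per idle week

def rac_after(days_active, days_idle, steady_state):
    """Toy model: RAC ramps toward steady_state while crunching,
    then decays exponentially once the host detaches or goes idle."""
    built = steady_state * (1 - 0.5 ** (days_active / HALF_LIFE_DAYS))
    return built * 0.5 ** (days_idle / HALF_LIFE_DAYS)

# A "hunter" who hops to a new project after 30 days: by the time the
# second project ramps up, the first project's RAC has decayed for 30 days.
hopper = rac_after(days_active=30, days_idle=30, steady_state=100.0)
# A user who stays attached for the same 60 days keeps nearly full RAC.
stayer = rac_after(days_active=60, days_idle=0, steady_state=100.0)
```

Under these assumptions the hopper retains only a few percent of the stayer's RAC on the abandoned project, which matches the "stabbed by their own device" observation above.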

By all means, if you want to test the system in Prod or in stake (stake is perfect for your testing, as most projects are unpopular with only 5 CPIDs participating), you will quickly see the net averages converge - please test it and reaffirm these results.

As I said, I'll spend more time explaining the detailed logs once I have time to post them.

Moving on to one more aspect of magnitude measurements we started to consider at the beginning of the thread: I added the ability to measure two types of mags. Type #1: our current system, which weighs your mag based only on the projects you are participating in. Type #2: mag weighted across all Gridcoin projects in the program. The program simulates a payment per day for 100 researchers over two years using both mag types and all 4 researcher types.

This particular result was actually valuable and offers an option for us to pursue - switching magnitude calculations over to Type #2:

For normal users through multiple-CPID users, while Mag #1 varied between 100-200, Mag #2 was much smaller - a reading between 10-20, about 10% of our standard reading. However, with Mag #2, power users (i.e. 5x the cores) did measure 110, versus a 500 reading in our current Mag #1 system.

So, given that it does compensate power users appropriately for their cores, it is a *possible* algorithm for us to consider switching to - if the group wants it.

RTM, I think we should create a survey item for this: calculate mag based on participating projects, or calculate mag based on all projects.

To clarify: if we go with #2, the code penalizes you with a weight of 0 for all projects except the ones you participate in, bringing the mag way down to 15.

*Obviously*, we would then naturally consider paying a higher subsidy multiplier * mag and keeping the cap, but we can discuss that later.
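The two magnitude types can be contrasted with a small sketch. The x100 scaling and the per-project averaging here are assumptions chosen only to reproduce the 100-200 vs 10-20 readings quoted above; the real formula is in the simulator, not shown in the thread.

```python
def mag_type1(user_rac, network_rac, scale=100.0):
    """Type #1: average RAC share over only the projects the user joined."""
    shares = [user_rac[p] / network_rac[p] for p in user_rac]
    return scale * sum(shares) / len(shares)

def mag_type2(user_rac, network_rac, scale=100.0):
    """Type #2: the same shares averaged over ALL whitelisted projects;
    non-participating projects contribute a weight of 0."""
    shares = [user_rac.get(p, 0.0) / network_rac[p] for p in network_rac]
    return scale * sum(shares) / len(shares)
```

With 10 network projects and participation in one of them at 1.5x the network RAC, Type #1 reads 150 while Type #2 reads 15, mirroring the roughly 10% ratio reported above.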
