Merge pull request #3 from greggdourgarian/master
small typo corrections
robkorn committed Sep 15, 2020
2 parents 87609c4 + 1e2ea50 commit c7da78f
Showing 4 changed files with 14 additions and 14 deletions.
8 changes: 4 additions & 4 deletions ergo/Governance-Stake-Slashing-Pool-Spec.md
@@ -115,7 +115,7 @@ Progression into the proceeding epoch from this stage (and thus into the [Live E
During this epoch preparation period, collateral slashing can be initiated and [Pool Deposit](<#Stage-Pool-Deposit>) boxes can be collected. This provides a period for establishing equilibrium after every datapoint collection, both for the individual oracles who may have been wronged and for the oracle pool's funds to be collected so that the protocol can continue.

If the oracle pool has insufficient funds and no [Pool Deposit](<#Stage-Pool-Deposit>) boxes are available to collect, the pool may skip an epoch due to it being underfunded.
If an epoch is skipped then a new epoch (following a new posting schedule) must be created via [Create New Epoch](<#Action-Create-New-Epoch>). This is only possible once the pool box has had its funds replenished and it can pay out the oracles once more.

The oracle pool box at this stage must also hold the pool's NFT/singleton token. This NFT is required in order to guarantee the identity of the pool, thereby differentiating it from another instance of the same contract posted by an unknown bad actor. [Read more about the NFT here.](<#Action-Bootstrap-Oracle>)
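To make the Epoch Preparation flow concrete, here is a minimal editorial sketch (not part of the original spec) of the decision an off-chain pool operator might face at this stage: start the next Live Epoch if the pool box is funded, otherwise collect [Pool Deposit](<#Stage-Pool-Deposit>) boxes and, if the schedule was missed, create a brand-new epoch. The class names, register layout, and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PoolBox:
    value_nanoerg: int          # funds currently held by the pool box
    next_epoch_end_height: int  # assumed R5: height at which the scheduled epoch would end
    holds_pool_nft: bool        # singleton token guaranteeing this is the genuine pool

def epoch_preparation_action(pool: PoolBox, current_height: int,
                             oracle_payout: int, num_oracles: int) -> str:
    """Pick the next action for a pool box sitting in Epoch Preparation (illustrative only)."""
    if not pool.holds_pool_nft:
        return "reject: box is not the genuine pool (missing NFT/singleton)"
    required = oracle_payout * num_oracles
    if pool.value_nanoerg >= required:
        return "start next epoch"
    if current_height > pool.next_epoch_end_height:
        # The scheduled epoch was missed while underfunded; after deposits are
        # collected, a brand-new epoch (new posting schedule) must be created.
        return "collect Pool Deposit boxes, then Create New Epoch"
    return "collect Pool Deposit boxes and wait"

print(epoch_preparation_action(PoolBox(2_000_000, 5_000, True), 5_100, 1_000_000, 4))
```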

@@ -246,7 +246,7 @@ In order to commit a datapoint, the output [Datapoint](<#Stage-Datapoint>) box m

If an oracle never had their collateral slashed, then their previously held collateral is sufficient and therefore they do not need to provide extra input boxes holding Ergs.

When a new datapoint is committed, the [Live Epoch](<#Stage-Live-Epoch>) must be used as a data-input in order to acquire its box id. This box id is then put in R5 of the new [Datapoint](<#Stage-Datapoint>) output, thereby ensuring that the datapoint was posted in the current epoch.

An oracle can also include a "vote" for a new oracle pool price by placing an integer in R7.
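As a hedged illustration of the register layout described above (only R5 and R7 are specified in this document; storing the datapoint value itself in R6 is an assumption made for this sketch), an oracle client might assemble its new [Datapoint](<#Stage-Datapoint>) output along these lines:

```python
import hashlib
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class DatapointBox:
    """Toy model of a [Datapoint] output's registers (not the real box format)."""
    registers: Dict[str, object] = field(default_factory=dict)

def build_datapoint_box(live_epoch_box_id: str, datapoint: int,
                        price_vote: Optional[int] = None) -> DatapointBox:
    box = DatapointBox()
    box.registers["R5"] = live_epoch_box_id   # ties the commit to the current Live Epoch
    box.registers["R6"] = datapoint           # assumed register for the datapoint value
    if price_vote is not None:
        box.registers["R7"] = price_vote      # optional vote for a new oracle pool price
    return box

# The Live Epoch box is referenced as a data-input, so its id can be read
# without spending it (hypothetical id below).
epoch_id = hashlib.blake2b(b"live-epoch-box", digest_size=32).hexdigest()
print(build_datapoint_box(epoch_id, datapoint=289_540, price_vote=2_000_000))
```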

@@ -422,7 +422,7 @@ If a pool is ever underfunded, then this action must be performed to increase th
1. Input #1 holds the oracle pool NFT (the NFT id is hardcoded in the [Pool Deposit](<#Stage-Pool-Deposit>) contract)
2. Output #1 holds the oracle pool NFT.
3. Output #1 has exactly the same registers as Input #1.
4. Output #1 holds a value equivalent to its previous total plus the summed value of all input [Pool Deposit](<#Stage-Pool-Deposit>) boxes (see the sketch below).
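A hedged sketch of how the four conditions above might be checked off-chain, assuming a simplified box model (the `Box` class and the `POOL_NFT` id are hypothetical, not taken from the deployed contract):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Box:
    value: int                                       # nanoErgs held by the box
    tokens: List[str] = field(default_factory=list)
    registers: Dict[str, object] = field(default_factory=dict)

POOL_NFT = "pool-nft-id"  # hardcoded in the Pool Deposit contract (hypothetical id)

def collect_funds_is_valid(inputs: List[Box], outputs: List[Box]) -> bool:
    pool_in, pool_out = inputs[0], outputs[0]
    deposits = inputs[1:]
    return (
        POOL_NFT in pool_in.tokens                      # 1. input #1 holds the pool NFT
        and POOL_NFT in pool_out.tokens                 # 2. output #1 holds the pool NFT
        and pool_out.registers == pool_in.registers     # 3. registers are unchanged
        and pool_out.value == pool_in.value + sum(d.value for d in deposits)  # 4. funds summed
    )

pool = Box(5_000_000, [POOL_NFT], {"R4": 42})
deposits = [Box(1_000_000), Box(2_500_000)]
print(collect_funds_is_valid([pool, *deposits], [Box(8_500_000, [POOL_NFT], {"R4": 42})]))  # True
```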
---


@@ -464,7 +464,7 @@ If the finish block height of an epoch has passed without the live epoch being s

## Action: Create New Epoch

If the oracle pool is in the [Epoch Preparation](<#Stage-Epoch-Preparation>) stage and is underfunded, it can miss starting its next Live Epoch (because [Start Next Epoch](<#Action-Start-Next-Epoch>) requires sufficient funds).

Therefore, this action allows creating a brand new upcoming epoch after funds have been collected and a previous epoch has been missed. This is done by checking R5 of the [Epoch Preparation](<#Stage-Epoch-Preparation>) box and seeing if the block height has passed. If so, it means that none of the oracles started said epoch (which they have a game-theoretic incentive to do because they get paid) due to the pool not having sufficient funds to pay out the oracles for the next epoch.
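A small sketch, assuming hypothetical heights and a made-up preparation buffer, of the check described above and of how the fresh epoch's finish height might be scheduled:

```python
def can_create_new_epoch(prep_r5_epoch_end: int, current_height: int,
                         pool_funds: int, required_funds: int) -> bool:
    """Create New Epoch only makes sense if the scheduled epoch was missed
    (the height in R5 has already passed) and the pool has since been refunded."""
    return current_height > prep_r5_epoch_end and pool_funds >= required_funds

def new_epoch_end_height(current_height: int, epoch_length: int, prep_buffer: int = 4) -> int:
    # Hypothetical schedule: the fresh epoch ends one epoch length (plus a small
    # preparation buffer) after the block in which it is created.
    return current_height + prep_buffer + epoch_length

if can_create_new_epoch(prep_r5_epoch_end=5_000, current_height=5_120,
                        pool_funds=6_000_000, required_funds=4_000_000):
    print("new Live Epoch ends at height", new_epoch_end_height(5_120, epoch_length=30))
```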

10 changes: 5 additions & 5 deletions oracles/Oracle-Pools.md
@@ -107,7 +107,7 @@ Oracle Pools As Core Infrastructure
---
One of the great things about oracle pools on top of extended UTXO systems (which support data-inputs) is that everyone on the network can benefit from utilizing the same oracle pool for a given datapoint.

Rather than each dApp creating its own price feed made of custom oracles & accumulator contracts, the blockchain ecosystem can focus on creating large oracle pools which provide a highly accurate and trustworthy data source. Thanks to incentives being a key part of oracle pools (especially with collateral slashing), the larger the set of oracles which take part, the harder it is for a subset of the oracles to go rogue and try to disrupt the system.

Since oracle pool postings are on a schedule and provide publicly available data in a UTXO, this means that oracle pools act as a public good for all users on the network. Thus even tiny p2p dApps, which only involve 2 participants, still have the ability to utilize the oracle pool datapoints for free by using the pool UTXO as a data-input. This potentially has a waterfall effect in aiding the development of the dApp ecosystem as the barrier of entry for new developers shrinks. Oracle pool datafeeds can one day become commonplace and the equivalent of public/core infrastructure.
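To illustrate the point (using a toy transaction model, not any particular node or SDK API), a dApp can reference the pool box as a read-only data-input rather than spending it:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OraclePoolBox:
    box_id: str
    datapoint: int   # e.g. nanoErg per USD; purely illustrative

@dataclass
class Transaction:
    inputs: List[str]        # boxes that are spent (destroyed)
    data_inputs: List[str]   # boxes that are only read
    outputs: List[str]

def build_dapp_tx(user_box_id: str, pool_box: OraclePoolBox) -> Transaction:
    # The pool box is referenced read-only: it is not spent, so any number of
    # dApps can use the same datapoint in the same block, for free.
    payout = 100_000_000 // pool_box.datapoint  # toy calculation using the feed
    return Transaction(
        inputs=[user_box_id],
        data_inputs=[pool_box.box_id],
        outputs=[f"payout-box({payout} units)"],
    )

print(build_dapp_tx("user-box-1", OraclePoolBox("pool-box-9", datapoint=4_000)))
```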

@@ -122,7 +122,7 @@ Stake slashing oracle pools by themselves provide a good balance of both incenti

That said, one of the possible strategies that oracles can pursue in order to try to maximize earnings in the short term is to simply copy the datapoint of the first oracle who posts a datapoint. This guarantees that they will be within the margin of error and relieves the oracle of their duty of sourcing the data themselves. Thus this becomes a way to leech off the pool's funds while decreasing the accuracy and trustworthiness of the oracle pool data.

Do note, this is not as major of an issue as it may seem at first. Oracle pools are open for the world to see, meaning it is reasonably trivial for other oracles, or external actors, to notice when an oracle always copies another oracle's datapoint and never posts first. This means market incentives can come into play such that the oracle pool may lose the trust of its users, and thus with no userbase, the funds dry up.

Furthermore, oracle pools with governance mechanisms in place can be responsive to these market dynamics. Oracles that are part of a pool wish to preserve their image as a trustworthy source of oracle data so that they can keep operations moving smoothly and thereby keep earning money. Oracles within a pool are incentivized to check that all other actors in the pool are indeed doing their job properly in sourcing their own data. This means that through governance means, a vote can be held to remove a specific oracle who is clearly malicious and not sourcing their own data.

@@ -135,7 +135,7 @@ For this goal, we have two approaches available. Either we completely prevent th
#### Direct Prevention
In this approach, an epoch is divided into two periods: a hash submission period and a datapoint reveal period.

All oracles are first required to submit a hash (with salt added) of their datapoint on-chain during the hash submission period. This period is a predefined number of blocks within an epoch, and no oracle is allowed to submit a datapoint without first submitting its hash.

Once the oracle pool epoch moves into the datapoint reveal period, then oracles can reveal their datapoint by posting it on-chain to the pool (along with the salt used). The datapoint + salt must hash to the same result as the oracle posted in the hash submission period. Otherwise their datapoint is considered invalid and not accepted.
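A minimal commit-reveal sketch of the two periods described above (Blake2b is used here purely for illustration; the salt size and hash choice are assumptions, not the spec's parameters):

```python
import hashlib
import secrets
from typing import Tuple

def commit(datapoint: int) -> Tuple[str, bytes]:
    """Hash submission period: publish only the salted hash of the datapoint."""
    salt = secrets.token_bytes(16)
    digest = hashlib.blake2b(salt + str(datapoint).encode(), digest_size=32).hexdigest()
    return digest, salt

def reveal_is_valid(commitment: str, datapoint: int, salt: bytes) -> bool:
    """Datapoint reveal period: the revealed value plus salt must hash to the commitment."""
    digest = hashlib.blake2b(salt + str(datapoint).encode(), digest_size=32).hexdigest()
    return digest == commitment

commitment, salt = commit(289_540)
print(reveal_is_valid(commitment, 289_540, salt))   # True: honest reveal
print(reveal_is_valid(commitment, 300_000, salt))   # False: datapoint changed after committing
```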

@@ -194,11 +194,11 @@ An oracle pool is technically a tier-2 datapoint hierarchy of confidence. That i

Now, what if we were to build out a tier-3 datapoint hierarchy of confidence? If a tier-2 entity is a collection of oracles into a pool, then a tier-3 entity would be a collection of pools into a "pool of pools". Thus a tier-3 entity collects the datapoints of numerous pools which are all sourcing the same datapoint in order to then finalize them into a new high-assurance tier-3 datapoint.

Thus we can now classify any oracle datapoint that is posted to a blockchain by its hierarchy of confidence tier. If an oracle simply posts a datapoint by themselves, then that is a tier-1 datapoint. If an oracle pool (or some other scheme where a group of oracles average their datapoints together) produces a datapoint, then that is considered a tier-2 datapoint. And of course, a pool of oracle pools produces a tier-3 datapoint.

With oracle pools used within the datapoint hierarchy of confidence, the higher the tier of a datapoint, the higher the level of assurance that can be expected from said datapoint. This is because at the core of oracle pools we have strong incentives/disincentives in place to keep oracles acting properly. This fact is magnified further with datapoint hierarchies of confidence. (Unfortunately, current oracle schemes typically have limited incentive/disincentive mechanisms, thus it is harder to build trustworthy hierarchies out of them.)

The very same carrots/sticks used by oracle pools to keep oracles in check can be used on the tier-3 level for the oracle pools themselves. As such, in a 3-tier datapoint hierarchy of confidence, an oracle pool does not acquire funds directly from its users. Instead, it receives funds if it acted properly (and thus is required to put up stake). Thus, just like an individual oracle, the oracle pool must (see the sketch after this list):
- Provide a datapoint that is within a margin of error of the averaged out final datapoint (in order to be paid out)
- Post the pool's finalized datapoint on time within the tier-3 entity's epoch (or get stake slashed)
- The tier-3 datapoint collector must accurately collect all oracle pool datapoints submitted in the current epoch (or get stake slashed)
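A hedged sketch of the tier-3 settlement logic implied by the list above, assuming a simple arithmetic mean and a made-up 5% margin of error (the real rules would live in the tier-3 entity's contracts):

```python
from statistics import mean
from typing import Dict, List, Tuple

def settle_tier3_epoch(pool_datapoints: Dict[str, int],
                       margin: float = 0.05) -> Tuple[float, List[str], List[str]]:
    """Average the tier-2 pool datapoints into a tier-3 datapoint and decide
    which pools get paid and which have their stake slashed (illustrative only)."""
    final = mean(pool_datapoints.values())
    paid, slashed = [], []
    for pool, dp in pool_datapoints.items():
        if abs(dp - final) <= final * margin:
            paid.append(pool)       # within the margin of error: receives funds
        else:
            slashed.append(pool)    # too far off (or missing/late): stake slashed
    return final, paid, slashed

datapoints = {"pool-A": 1_000, "pool-B": 1_002, "pool-C": 998, "pool-D": 1_001, "pool-E": 1_100}
print(settle_tier3_epoch(datapoints))   # pool-E falls outside the margin and is slashed
```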
@@ -13,7 +13,7 @@ Given that smart contract powered UTXO systems are nascent, there is a distinct

That said, UTXO-based smart contract protocols have a direct correlation with state machines. Basic protocols rely on a simple state machine which transitions from state to state across transactions with only a single UTXO carrying the data/coins. More complex protocols, however, are made of more than a single state machine, with certain state transitions requiring two or more state machines to converge. During this convergence the data/tokens within the current state of each state machine are used as input in order to transition both forward. These convergence-based transitions may result in full convergence, where they join into a single state machine, or partial convergence, where they only use each other as inputs and still continue separately in parallel.

Furthermore, in this new extended UTXO model of smart contracts, it is possible for a state transition to generate a whole new and parallelized state machine that will function on its own. This can happen during a convergence of state machines or merely in the running of a single one. This new facet is where much of the novel complexity originates from.
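The following toy model (an editorial sketch, not tied to any particular chain's box format) shows a convergence-based transition in which two state machines are used together as inputs, both move forward, and a brand-new parallel machine is spawned:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class StateBox:
    machine: str   # which state machine this UTXO belongs to
    stage: str     # current stage of that machine
    data: int      # state carried by the box

def converge(a: StateBox, b: StateBox) -> List[StateBox]:
    """Partial convergence: both machines are consumed as inputs and continue
    separately, while a new parallelized machine is generated from the interaction."""
    return [
        StateBox(a.machine, stage="settled", data=a.data + b.data),
        StateBox(b.machine, stage="settled", data=b.data),
        StateBox("receipt", stage="init", data=a.data),   # newly spawned state machine
    ]

for box in converge(StateBox("escrow", "funded", 50), StateBox("order", "open", 7)):
    print(box)
```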

This document summarizes several UTXO-based smart contract design patterns at the highest level, thereby abstracting out any blockchain-specific details. Any smart contract powered extended UTXO system which provides the ability to read the coins, data, and the address (contract/contract hash) of both inputs and outputs should be able to use all of the following design patterns. These patterns start out relying on a single state machine, and progress into the realm of multi-state machine protocols with convergence and new state machine generation.

@@ -89,7 +89,7 @@ Branching protocols can be useful when the decisions of actors taking part in th

Parallelized Protocols
---
As we saw in the previous section, branching acts as an OR path, meaning that only one output UTXO from the spending transaction is created. In contrast, parallelized protocols act as an AND path where two or more output UTXOs are generated, each in its own stage/phase. These parallelized UTXOs can then converge back together once they have performed all of the required computations for a new consolidated state of the protocol.

This allows for multiple actors to perform actions in the same instance of a protocol in the same block, thereby increasing the “throughput” of the specific protocol (not the blockchain itself). This also makes txs cheaper for each party if the state is split between the parallelized UTXOs, as the tx size will be smaller.
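A small sketch of the AND path described above, using a hypothetical `ProtocolBox` model: one transaction forks the state into two parallel UTXOs, and a later transaction converges them back into one consolidated state.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProtocolBox:
    stage: str
    state: dict

def fork(box: ProtocolBox) -> List[ProtocolBox]:
    """AND path: one spending tx creates two parallel UTXOs, each carrying a slice
    of the state so different actors can act on them in the same block."""
    return [
        ProtocolBox("parallel-A", {"bids": box.state["bids"]}),
        ProtocolBox("parallel-B", {"asks": box.state["asks"]}),
    ]

def join(a: ProtocolBox, b: ProtocolBox) -> ProtocolBox:
    """Convergence: the parallel branches are spent together into one consolidated state."""
    return ProtocolBox("consolidated", {**a.state, **b.state})

left, right = fork(ProtocolBox("start", {"bids": [3, 5], "asks": [7]}))
print(join(left, right).state)   # {'bids': [3, 5], 'asks': [7]}
```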

6 changes: 3 additions & 3 deletions smart-contracts/Unlocking The Potential Of The UTXO Model.md
@@ -43,9 +43,9 @@ When a UTXO at `smart contract address Y` is used as an input in a transaction,

The smart contract reads the above listed data as input and if it executes to the equivalent of `True`, then the transaction is valid and passes. This is the core workflow which UTXO-based smart contracts use.

What this means is that every time you wish to update data held by a dApp (inside of a UTXO), you must spend the original UTXO (thereby destroying it) and create a new UTXO at the same address & holding the same assets. This new UTXO however has a new value in its data, thereby causing a state transition to happen from the old data value(s) to the new data value(s).

Each UTXO holds its own personal state in the data it has attached to it. As the data and assets move from one UTXO to another, they pass through state transitions which can cause them to split, accumulate, be deleted, or be used with other assets/data from other UTXOs. These higher-order actions allow for more complex logic to be encoded with potential for multiple input UTXOs and multiple output UTXOs. This ends up being one of the key basic building blocks for developing dApps.
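A minimal sketch of the spend-and-recreate pattern just described, assuming a toy `UTXO` model: the old box is destroyed and a new one is created at the same address with the same assets but updated data.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class UTXO:
    address: str   # the guarding smart contract (address/contract hash)
    assets: int    # coins/tokens locked in the box
    data: int      # state attached to the box

def update_dapp_state(current: UTXO, new_data: int) -> UTXO:
    """Spend the old box (destroying it) and recreate it at the same address with
    the same assets but new data: a single state transition."""
    return replace(current, data=new_data)

counter_v0 = UTXO(address="counter-contract", assets=1_000_000, data=0)
counter_v1 = update_dapp_state(counter_v0, new_data=1)
print(counter_v0, counter_v1, sep="\n")
```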



@@ -55,7 +55,7 @@ As we have seen, spending UTXOs is at the core of the extended UTXO smart contra

The astute reader may have already noticed that since we have state(data) attached individually to each UTXO, every time a state transition happens the result is reflected in said data. As such the data often is "preprocessed", wherein it already exists and contains information that could be useful for other dApps/contracts to reference without any further execution required.

An example of useful information that could be used by other smart contracts would be oracle data. Using such data held in a UTXO in a naive manner would entail spending the UTXO. By using the UTXO (that has oracle data) as an input you are spending it and thereby providing access to its data to your other transaction inputs. This is how your dApp can gain access to data held in UTXOs locked under other smart contracts.
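To make the naive spend-to-read pattern concrete (a hypothetical transaction model, not a real node API), note that the oracle box has to be consumed, its contract executed, and the box recreated just so another contract can read its data:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UTXO:
    box_id: str
    contract: str
    data: int

def spend_to_read(oracle_box: UTXO, my_inputs: List[UTXO]) -> dict:
    """Naive pattern: reading the oracle's data means spending its box, which forces
    the oracle contract to execute and the box to be recreated afterwards."""
    return {
        "inputs": [oracle_box.box_id] + [b.box_id for b in my_inputs],
        "reads": oracle_box.data,
        # the oracle box must be recreated so the feed survives the spend
        "outputs": [f"recreated({oracle_box.contract})", "my-dapp-output"],
    }

oracle = UTXO("oracle-box-3", "oracle-contract", data=4_000)
print(spend_to_read(oracle, [UTXO("my-box-1", "dapp-contract", data=0)]))
```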

That said, having to spend every single UTXO which you wish to read data from has a number of strong drawbacks:
- The smart contract of the UTXO with the data must execute, thereby increasing computation complexity/cost.
