Commit
Merge branch 'dnadales/add-first-ideation-phase-properties' of github.com:input-output-hk/decentralized-software-updates into dnadales/add-first-ideation-phase-properties
dnadales committed Dec 12, 2019
2 parents 94a123c + d9065f1 commit 4cb10a8
Showing 5 changed files with 109 additions and 70 deletions.
70 changes: 1 addition & 69 deletions formal-spec/candidateproperties.tex
@@ -1,34 +1,4 @@
\section{Requirements for the Software Update Mechanism}

\begin{enumerate}
\item \textbf{Updates are open to the community.} \label{req:upd-open}
Any stakeholder should be able to propose a software update to be voted on by the stakeholders' community; proposals are therefore open to the entire community.

\item \textbf{Updates utilize the blockchain.} \label{req:upd-blockchain}
Update events are stored in the blockchain as immutable events.

\item \textbf{Decentralized Governance.} \label{req-dec-gov}
The stakeholders' community is the only authority to decide on the approval, or rejection, of the proposed software updates.

\item \textbf{Secure Activation.} \label{req-sec-act}
Upon activation of a software update, the update mechanism guarantees a secure transition to the updated ledger; a transition in which:
\begin{itemize}
\item The security assumptions of the updated ledger will hold (e.g., there will be a sufficient percentage of upgraded honest stake).
\item The updated ledger successfully incorporates the old ledger, i.e., the state of the old ledger will be successfully moved into the new ledger.
\item There will be no chain splits due to the activation.
\end{itemize}

\item \textbf{Conflict Resolution.}\label{req:conflict-res}
Two or more software updates with conflicting changes will not both be applied; instead, a conflict-resolution process will take place.

\item \textbf{Update Dependencies.}\label{req:update-dep}
A software update will never be applied if its dependency requirements are not met.

\item \textbf{Extensible voting period.} \label{req:ext-vper} The period required for approving a software update is not fixed but can be extended by the stakeholders' community.

\end{enumerate}

\section{Candidate Properties}
\section{Properties} \label{sec:cand-properties}

\subsection{Global Properties}

@@ -88,44 +58,6 @@ \subsection{Global Properties}

\paragraph{Classify:} Property-based testing. Property fulfilling requirement \ref{req:ext-vper}.

\subsection{Properties based on Non-Functional Requirements}

\begin{property}[Transaction Throughput]\label{prop:tr-throughput}
The ledger transaction throughput is not significantly impacted by the presence of software updates.
\end{property}

\paragraph{Classify:} By experiment.

\paragraph{How to test:}
By experiment: measure how the throughput of the blockchain (i.e., transactions included in the blockchain per unit of time) is affected by the number of users $N$ participating in the update mechanism.
\paragraph{Single Node Simulation:}
We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. As $N$ increases, we expect the number of transactions included in a block to decrease (linearly), since more update events will be stored as well.

In the absence of an integration with Cardano, the generators generate the blockchain and not just individual transactions. Therefore, the nodes don't create blocks and don't actually build the blockchain. So the only thing we can do is pack transactions and update events into blocks and measure the effect of the update payload on the inclusion of transactions in a block. We take a trace that comprises a fixed number of blocks (corresponding to a specific duration of running the protocol) and count the number of transactions in the trace, excluding the update events. The number of transactions included in a trace of a specific fixed length (corresponding to a specific time window) is the logical equivalent of the transaction throughput.

\paragraph{Networked Simulation:}
We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. $N$ and $M$ could be separate processes/threads that generate transactions and transmit them to the network. We assume that a node runs some version of the Cardano client and thus reads transactions and blocks from the network and builds the blockchain or, in the case of a slot leader, issues a new block. Let's assume that, with no update events, it takes 20 seconds for a transaction to be included in a block; this is the transaction throughput. If the node also runs the update protocol (i.e., an integration of Cardano with our update protocol), we need to measure the effect of this on the throughput for various numbers of users $N$.

\begin{property}[Blockchain Size]\label{prop:blockchain-size}
The blockchain size is not significantly impacted by the presence of software updates.
\end{property}

\paragraph{Classify:} By experiment.

\paragraph{How to test:}
We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. We assume traces of fixed size and measure the number of transactions stored per unit of storage. As $N$ increases, we expect this measure to decrease linearly.

\begin{property}[Update Throughput]\label{prop:update-thr}
The update throughput reduces linearly in the number of users running the update protocol.
\end{property}

\paragraph{Classify:} By experiment.

\paragraph{How to test:}
We want to run our update protocol end-to-end, from proposal submission to activation, and measure the elapsed time for a software update, excluding human delays (e.g., implementation delay). We start with a single user and scale up to $N$ users running the protocol concurrently.

We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. We assume traces of fixed size and measure the end-to-end elapsed time for all software updates. We then calculate the 80th percentile of these elapsed times; this will be our basic metric.

\subsection{Consensus Protocol Properties}

\begin{property}[Bootstrapping]\label{prop:bootstrap}
2 changes: 2 additions & 0 deletions formal-spec/decentralized-updates.tex
@@ -98,8 +98,10 @@
\tableofcontents
\listoffigures

\input{requirements.tex}
\input{candidateproperties.tex}
\input{properties.tex}
\input{measurements.tex}


\addcontentsline{toc}{section}{References}
60 changes: 60 additions & 0 deletions formal-spec/measurements.tex
@@ -0,0 +1,60 @@
\section{Measurements Specification} \label{sec:measurements}
In this section we describe an experimental evaluation of our proposed update mechanism. This evaluation will help verify to what degree our proposal fulfills non-functional requirements, such as those described in section \ref{sec:non-func-reqs}.
\subsection{What to Measure}
Our experimental evaluation will mainly focus on the following metrics:
\paragraph{Transaction Throughput}
If $N_{Tx}$ is the number of transactions that fit in a block, defined as $N_{Tx} = \frac{\text{block size}}{\text{average transaction size}}$, and we get a new block every $T_B$ units of time (e.g., seconds), then we define the \emph{transaction throughput} $Tx_{th}$ as the ratio $Tx_{th} = \frac{N_{Tx}}{T_B}$, usually measured in \emph{transactions per second (tps)}. We want to evaluate the impact of the number of users $N_u$ that actively participate in the update mechanism on this metric.
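As a purely illustrative worked example (the numbers are placeholders, not Cardano parameters): with a block size of $1\,\mathrm{MB}$, an average transaction size of $500$ bytes, and a new block every $T_B = 20$ seconds,
\[
  N_{Tx} = \frac{1{,}000{,}000\ \text{bytes}}{500\ \text{bytes}} = 2000,
  \qquad
  Tx_{th} = \frac{N_{Tx}}{T_B} = \frac{2000}{20\ \text{s}} = 100\ \text{tps}.
\]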

\paragraph{Blockchain Size}
Consider a blockchain system running a consensus protocol. At time point $T_{start}$ we omit $k$ blocks from the end of the chain ($k$ is the security parameter of the protocol) and mark the slot of the last block in the remaining chain as $S_{start}$. We let the consensus protocol run for a fixed time window of $T_w$ units of time until $T_{end}$ ($T_w = T_{end} - T_{start}$). Similarly, we omit $k$ blocks from the end of the chain and mark the slot of the last block in the remaining chain as $S_{end}$. We define the size of the chain between slot $S_{start}$ and slot $S_{end}$ (inclusive) as the \emph{blockchain size of time window} $T_w$ and denote it $BSize_{T_w}$. We want to evaluate the impact of the number of users $N_u$ that actively participate in the update mechanism on this metric.
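One way to make this definition explicit (a sketch; $\mathcal{C}$, $\mathrm{slot}(b)$ and $|b|$ are notation introduced here only for illustration) is
\[
  BSize_{T_w} \;=\; \sum_{b \in \mathcal{C},\; S_{start} \le \mathrm{slot}(b) \le S_{end}} |b|,
\]
where $\mathcal{C}$ is the chain at $T_{end}$ with its last $k$ blocks omitted, $\mathrm{slot}(b)$ is the slot of block $b$, and $|b|$ is its serialised size.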

\paragraph{Update Time to Activation}
A software update $SU$ starts its life with the submission into the blockchain of a \emph{system improvement proposal} (SIP) and ends upon the activation of $SU$. We define the elapsed time it takes for a software update to go from start to finish, excluding human delays (e.g., the time it takes to implement the software update), as the \emph{update time to activation}, and denote it $T_{act}$. We want to evaluate the impact of the number of users $N_u$ that actively participate in the update mechanism on this metric.
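As a sketch of the intended quantity (the symbols $t_{\mathrm{submit}}$, $t_{\mathrm{activate}}$ and $D_{\mathrm{human}}$ are introduced here only for illustration):
\[
  T_{act}(SU) \;=\; t_{\mathrm{activate}}(SU) - t_{\mathrm{submit}}(SIP) - D_{\mathrm{human}}(SU),
\]
where $D_{\mathrm{human}}(SU)$ is the total human-induced delay (e.g., implementation time) attributed to $SU$.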

\subsection{How to Measure}
We consider two different approaches for conducting the measurements. The two alternatives differ in whether or not they are integrated with the Cardano blockchain. In the \emph{single node} alternative, we have a node implementation that does not run the consensus protocol; it only runs the software update protocol over a generated set of update events. In the \emph{networked node} alternative, the node runs the consensus protocol as well as the software update protocol and, to this end, communicates with other nodes in the network.

\subsubsection{Single Node Simulation}
\paragraph{Setup}
We assume a single generator process. This generator process will simulate $N_u$ users running the software update protocol, thus generating update events (i.e., transactions), and $M_u$ users running the consensus protocol, generating common transactions.

We assume that a single user generates update transactions with a specific weight compared to common transactions (e.g., 1 update transaction for every 1000 common transactions).
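A minimal sketch of such a generator, assuming a QuickCheck-style \texttt{Gen} monad (the type \texttt{TxEvent}, the function \texttt{genEvent} and the weight parameter are illustrative, not part of the prototype's actual code):
\begin{verbatim}
import Test.QuickCheck (Gen, frequency)

-- Illustrative event type: either a common transaction or an update event.
data TxEvent = CommonTx | UpdateEvent
  deriving (Eq, Show)

-- Generate one event, emitting on average one update event
-- for every 'w' common transactions (e.g. w = 1000).
genEvent :: Int -> Gen TxEvent
genEvent w = frequency
  [ (w, pure CommonTx)
  , (1, pure UpdateEvent)
  ]
\end{verbatim}
Sampling \texttt{genEvent} repeatedly simulates one of the $N_u$ users; the $M_u$ consensus-only users would use a generator that always yields \texttt{CommonTx}.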

\paragraph{Transaction Throughput}
\paragraph{Blockchain Size}
\paragraph{Update Time to Activation}

\subsubsection{Networked Node Simulation}
\paragraph{Setup}
\paragraph{Transaction Throughput}
\paragraph{Blockchain Size}
\paragraph{Update Time to Activation}


\paragraph{Classify:} By experiment.

\paragraph{How to test:}
By experiment: measure how the throughput of the blockchain (i.e., transactions included in the blockchain per unit of time) is affected by the number of users $N$ participating in the update mechanism.
\paragraph{Single Node Simulation:}
We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. As $N$ increases, we expect the number of transactions included in a block to decrease (linearly), since more update events will be stored as well.

In the absence of an integration with Cardano, the generators generate the blockchain and not just individual transactions. Therefore, the nodes don't create blocks and don't actually build the blockchain. So the only thing we can do is pack transactions and update events into blocks and measure the effect of the update payload on the inclusion of transactions in a block. We take a trace that comprises a fixed number of blocks (corresponding to a specific duration of running the protocol) and count the number of transactions in the trace, excluding the update events. The number of transactions included in a trace of a specific fixed length (corresponding to a specific time window) is the logical equivalent of the transaction throughput.
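A minimal Haskell sketch of this throughput proxy (the \texttt{Tx}, \texttt{Block} and \texttt{Trace} types are illustrative stand-ins for whatever the trace generator actually produces):
\begin{verbatim}
-- Illustrative trace representation.
data Tx = CommonTx | UpdateEvent
  deriving (Eq, Show)

newtype Block = Block { blockTxs :: [Tx] }

type Trace = [Block]

-- Number of common transactions in a fixed-length trace, excluding
-- update events; used as the logical equivalent of throughput.
throughputProxy :: Trace -> Int
throughputProxy trace =
  length [ tx | b <- trace, tx <- blockTxs b, tx /= UpdateEvent ]
\end{verbatim}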

\paragraph{Networked Simulation:}
We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. $N$ and $M$ could be separate processes/threads that generate transactions and transmit them to the network. We assume that a node runs some version of the Cardano client and thus reads transactions and blocks from the network and builds the blockchain or, in the case of a slot leader, issues a new block. Let's assume that, with no update events, it takes 20 seconds for a transaction to be included in a block; this is the transaction throughput. If the node also runs the update protocol (i.e., an integration of Cardano with our update protocol), we need to measure the effect of this on the throughput for various numbers of users $N$.


\paragraph{Classify:} By experiment.

\paragraph{How to test:}
We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. We assume traces of fixed size and measure the number of transactions stored per unit of storage. As $N$ increases, we expect this measure to decrease linearly.
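One way to phrase the measured quantity (the symbol $D_{T_w}$ is introduced here only for illustration) is the density
\[
  D_{T_w} \;=\; \frac{\#\{\text{common transactions stored in the trace}\}}{BSize_{T_w}},
\]
i.e., common transactions per unit of storage; the expectation above is that $D_{T_w}$ decreases roughly linearly as $N$ grows.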

\paragraph{Classify:} By experiment.

\paragraph{How to test:}
We want to run our update protocol end-to-end, from proposal submission to activation, and measure the elapsed time for a software update, excluding human delays (e.g., implementation delay). We start with a single user and scale up to $N$ users running the protocol concurrently.

We assume $N$ users running the update protocol. Each user generates from 1 to 10 update events per 100 transactions. We need a generator that simulates $N$ users generating update events, where $N = 0, 1, 2, \ldots$, at the above rate. We also need a generator that produces common transactions for $M$ fixed users. We assume traces of fixed size and measure the end-to-end elapsed time for all software updates. We then calculate the 80th percentile of these elapsed times; this will be our basic metric.
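A small Haskell sketch of the percentile computation (a simple nearest-rank percentile over the observed activation times; the function names are illustrative):
\begin{verbatim}
import Data.List (sort)

-- Nearest-rank percentile; assumes a non-empty list of observations.
percentile :: Double -> [Double] -> Double
percentile p xs = sort xs !! idx
  where
    n   = length xs
    idx = min (n - 1) (ceiling (p / 100 * fromIntegral n) - 1)

-- 80th percentile of the end-to-end activation times (e.g., in seconds).
p80ActivationTime :: [Double] -> Double
p80ActivationTime = percentile 80
\end{verbatim}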


\subsection{Prerequisites}
2 changes: 1 addition & 1 deletion formal-spec/properties.tex
@@ -1,4 +1,4 @@
\section{Properties}
\section{Formal Specification of Properties}
\label{sec:properties}

The notation used in this section is explained in Section~8 of
45 changes: 45 additions & 0 deletions formal-spec/requirements.tex
@@ -0,0 +1,45 @@
\section{Requirements for the Software Update Mechanism} \label{sec:requirements}

\subsection{Functional Requirements} \label{sec:func-reqs}
\begin{enumerate}
\item \textbf{Updates are open to the community.} \label{req:upd-open}
Any stakeholder should be able to propose a software update to be voted on by the stakeholders' community; proposals are therefore open to the entire community.

\item \textbf{Updates utilize the blockchain.} \label{req:upd-blockchain}
Update events are stored in the blockchain as immutable events.

\item \textbf{Decentralized Governance.} \label{req-dec-gov}
The stakeholders' community is the only authority to decide on the approval, or rejection, of the proposed software updates.

\item \textbf{Secure Activation.} \label{req-sec-act}
Upon activation of a software update, the update mechanism guarantees a secure transition to the updated ledger; a transition in which:
\begin{itemize}
\item The security assumptions of the updated ledger will hold (e.g., there will be a sufficient percentage of upgraded honest stake).
\item The updated ledger successfully incorporates the old ledger, i.e., the state of the old ledger will be successfully moved into the new ledger.
\item There will be no chain splits due to the activation.
\end{itemize}

\item \textbf{Conflict Resolution.}\label{req:conflict-res}
Two or more software updates with conflicting changes will not both be applied; instead, a conflict-resolution process will take place.

\item \textbf{Update Dependencies.}\label{req:update-dep}
A software update will never be applied if its dependency requirements are not met.

\item \textbf{Extensible voting period.} \label{req:ext-vper} The period required for approving a software update is not fixed but can be extended by the stakeholders' community.

\end{enumerate}

\subsection{Non-Functional Requirements} \label{sec:non-func-reqs}

\begin{enumerate}
\item \textbf{Transaction Throughput}
\label{req:tr-throughput}
The ledger transaction throughput is not significantly impacted by the presence of software updates (see section \ref{sec:measurements}).

\item \textbf{Blockchain Size} \label{req:blockchain-size}
The blockchain size is not significantly impacted by the presence of software updates (see section \ref{sec:measurements}).

\item \textbf{Update Time to Activation}\label{req:update-thr}
The time to activation of a software update reduces linearly in the number of users running the update protocol (see section \ref{sec:measurements}).

\end{enumerate}
