
Commit ce0f02c

aeyakovenko authored and overleaf committed
Update on Overleaf.
1 parent 1a197ca commit ce0f02c

File tree

3 files changed: +159 -3 lines changed


figures/fig_1.png (16.6 KB)

figures/fig_2.png (20.3 KB)

main.tex

Lines changed: 159 additions & 3 deletions
@@ -1,5 +1,161 @@
-\documentclass{article}
-\usepackage[utf8]{inputenc}
\documentclass[12pt]{article}

\usepackage{graphicx}

\title{Loom: High performance blockchain}

\author{
Anatoly Yakovenko \\
aeyakovenko@gmail.com\\
}
\date{}

\begin{document}
-(Type your content here.)
\maketitle

\begin{abstract}
A new Proof of History (PoH) algorithm is proposed for global read consistency. It can be used alongside a consensus algorithm to minimize messaging overhead in a Byzantine Fault Tolerant replicated state machine. It achieves performance by creating a single, globally agreed upon order of events independent of network consensus. Nodes participating in the network only vote on a binary choice: accept or reject the ordering. Without hardware failures, all participating nodes are expected to agree with the proposed ordering with minimal communication overhead beyond the transaction data itself. Any consensus algorithm can be used, such as Proof of Work or Proof of Stake; a simple Proof of Stake consensus algorithm is proposed. To ensure high availability of data, an efficient streaming Proof of Replication (PoRep) is proposed which takes advantage of the time keeping properties provided by Proof of History. The combination of PoRep and PoH provides a substantial defense against forgery of the ledger in terms of both time and storage. The protocol is analyzed on a 1~Gbps network, and it is shown that throughput is limited by the network or by ECDSA verification; with a GPU dedicated to ECDSA verification, over \textbf{350k} and up to \textbf{700k} transactions per second with high availability are theoretically possible.

\end{abstract}

\section{Introduction}
Now is the time for all good men to come to the aid of their party!

%\paragraph{Outline}
%The remainder of this article is organized as follows.
%Section~\ref{previous work} gives an account of previous work.
%Our new and exciting results are described in Section~\ref{results}.
%Finally, Section~\ref{conclusions} gives the conclusions.

\section{Proof of History}\label{proof_of_history}

Proof of History provides a way to cryptographically verify the passage of time between two events. It uses a cryptographically secure function whose output cannot be predicted from the input and must be completely executed to generate the output. The function is run in a sequence, using its previous output as the current input, while periodically recording the current output and how many times the function has been called. The output can then be recomputed and verified by external computers in parallel, with each period checked on a separate core. Data can be timestamped into this sequence by recording the data and the index at which it was mixed into the sequence. The timestamp then guarantees that the data was created sometime before that hash was generated in the sequence. Multiple generators can synchronize amongst each other by mixing their state into each other's sequences. \\

\subsection{Description}

Take a cryptographic function whose output cannot be predicted without
running the function, such as a cryptographic hash (sha256, md5,
sha-1). Run the function from some random starting value, take its
output, and pass it back in as the input to the same function. Record
the number of times the function has been called and the output at
each call. \\\\
\noindent For example: \\\\\noindent
\texttt{
sha256(\char`\"any random starting value\char`\") $\rightarrow$
hash1, (n\_count~$=~1$) \\
sha256(hash1) $\rightarrow$ hash2, (n\_count~$=~2$)\\
sha256(hash2) $\rightarrow$ hash3, (n\_count~$=~3$)\\
}

\noindent Where \texttt{hashN} represents the actual hash output. \\

Instead of publishing every hash on every index, only a subset of
these hashes could be published at an interval.\\

\noindent For example:\\\\\noindent
\texttt{
sha256(\char`\"any starting value\char`\") $\rightarrow$ hash1, (n\_count~$=1$)\\
\ldots\\
sha256(hash199) $\rightarrow$ hash200, (n\_count~$=200$)\\
\ldots\\
sha256(hash299) $\rightarrow$ hash300, (n\_count~$=300$)\\
}

This set of events can only be computed in sequence by a single computer thread, because there is no way to predict what the hash value at index $300$ is going to be without actually running the algorithm from the starting value $300$ times.
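
A minimal sketch of the loop just described, assuming a Python environment with the standard \texttt{hashlib} module and an illustrative publishing interval of $200$ hashes:

\begin{verbatim}
import hashlib

def poh_generate(seed: bytes, iterations: int, publish_every: int):
    """Run a sha256 hash chain, recording (count, hash) at an interval."""
    published = []
    current = hashlib.sha256(seed).digest()
    count = 1
    while count < iterations:
        current = hashlib.sha256(current).digest()
        count += 1
        if count % publish_every == 0:
            published.append((count, current.hex()))
    return published

# Example: publish every 200th hash out of 1000 iterations.
records = poh_generate(b"any random starting value", 1000, 200)
\end{verbatim}

Only the published \texttt{(count, hash)} pairs need to be distributed; each interval between them can be recomputed independently by a verifier.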

\begin{figure}
\begin{center}
\centering
\includegraphics[width=0.6\textwidth]{figures/fig_1.png}
\caption[Fig 1]{A Proof of History sequence \label{fig_1}}
\end{center}
\end{figure}
%A much longer \LaTeXe{} example was written by Gil~\cite{Gil:02}.
In the example in Figure~\ref{fig_1}, hash \texttt{62f51643c1} was produced on
count $510144806912$ and hash \texttt{c43d862d88} was produced on
count $510146904064$. Real time passed between count $510144806912$
and count $510146904064$.

\subsection{Timestamp for Events}

This sequence of hashes can also be used to record that some piece of data was created before a particular hash index was generated. This is done with a \texttt{combine} function that combines the piece of data with the current hash at the current index. The data can simply be a cryptographically unique hash of arbitrary event data. The combine function can be a simple append of the data, or any operation that is collision resistant.\\

Arithmetic operations like addition or multiplication would not work, because an attacker could have precomputed a separate sequence in parallel and could join the two by inserting a piece of data that would add up to the starting value of the parallel sequence. Append forces the attacker to try to create a collision between a hash and the data they are trying to append.\\
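
In code, such an append-based combine step could look like the following minimal sketch (assuming Python's \texttt{hashlib}; the function and variable names are illustrative only):

\begin{verbatim}
import hashlib

def record_event(current_hash: bytes, count: int, event_data: bytes):
    """Mix an event into the sequence by appending its hash to the
    current hash; the (count, event_hash) pair is published so that
    verifiers can replay the insertion."""
    event_hash = hashlib.sha256(event_data).digest()
    next_hash = hashlib.sha256(current_hash + event_hash).digest()
    return next_hash, count + 1, event_hash
\end{verbatim}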

\noindent For example:\\\\\noindent
\texttt{
sha256(\char`\"any starting value\char`\") $\rightarrow$ hash1,
(n\_count $=~1$)\\
\ldots\\
sha256(hash199) $\rightarrow$ hash200, (n\_count $=~200$)\\
\ldots\\
sha256(hash299) $\rightarrow$ hash300, (n\_count $=~300$)\\
}

\noindent Some external event occurs, like a photograph being taken, or
any arbitrary digital data being created:\\\\\noindent
\texttt{
sha256(hash334) $\rightarrow$ hash335, (n\_count $=~335$), photograph\_sha256\\
sha256(append(hash335, photograph\_sha256)) $\rightarrow$ hash336,
(n\_count $=~336$)\\
\ldots\\
sha256(hash399) $\rightarrow$ hash400, (n\_count $=~400$)\\
}
105+
106+
\texttt{Hash336} is computed from the appended binary data of
107+
\texttt{hash335} and the \texttt{sha256} of the photograph. The index,
108+
and the \texttt{sha256} of the photograph are recorded as part of the
109+
sequence output. So anyone verifying this sequence can then recreate
110+
this change to the sequence. The verifying can still be done in
111+
parallel:\\\\\noindent
112+
\texttt{
113+
sha256(hash299) $\rightarrow$ hash300, (n\_count $=~300$)\\
114+
sha256(hash334) $\rightarrow$ hash335, (n\_count $=~335$), photograph\_sha256\\
115+
}\\\noindent
116+
And\\\\\noindent
117+
\texttt{
118+
sha256(append(hash335, photograph\_sha256) $\rightarrow$ hash336,
119+
(n\_count $=~336$)\\
120+
sha256(hash399) $\rightarrow$ hash400, (n\_count $=~400$)\\
121+
}
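
A minimal sketch of this parallel check, assuming Python's \texttt{hashlib} and \texttt{multiprocessing}; the segment layout and the convention for when an event hash is appended are illustrative assumptions:

\begin{verbatim}
import hashlib
from multiprocessing import Pool

def verify_segment(segment):
    """Recompute one published interval of the chain and compare the
    result with the published end hash."""
    start_hash, start_count, end_hash, end_count, events = segment
    current = start_hash
    for count in range(start_count + 1, end_count + 1):
        data = events.get(count - 1)  # event recorded at the previous count
        if data is not None:
            current = hashlib.sha256(current + data).digest()
        else:
            current = hashlib.sha256(current).digest()
    return current == end_hash

def verify_parallel(segments):
    """Segments are independent, so each can be checked on its own core."""
    with Pool() as pool:
        return all(pool.map(verify_segment, segments))
\end{verbatim}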

Because the initial process is still sequential, we can then tell that things entered into the sequence must have occurred sometime before the future hashed value was computed.\\\\\noindent
\texttt{
sha256(hash334) $\rightarrow$ hash335, (n\_count $=~335$), photograph1\_sha256\\
sha256(append(hash335, photograph1\_sha256)) $\rightarrow$ hash336,
(n\_count $=~336$)\\
\ldots\\
sha256(hash599) $\rightarrow$ hash600, (n\_count $=~600$), photograph2\_sha256\\
sha256(append(hash600, photograph2\_sha256)) $\rightarrow$ hash601,
(n\_count $=~601$)\\
}

So \texttt{photograph2} was created before \texttt{hash601}, and
\texttt{photograph1} was created before \texttt{hash336}. Inserting this extra data into the sequence of hashes results in an unpredictable change to all subsequent values in the sequence, so it is impossible to precompute any future sequences based on prior knowledge of what data will be mixed into the sequence.\\

The sequence only needs to mix and publish a hash of the event data into the event sequence. The mapping of the hash to the event data can be stored outside of the sequence, and the event data can contain other metadata within itself, such as real-time stamps and connection IPs.\\
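
A minimal sketch of keeping that mapping outside the sequence (assuming Python; the store layout and field names are illustrative):

\begin{verbatim}
import hashlib
import time

# Only the 32-byte event hash is mixed into the sequence; the full
# payload and its metadata live in this out-of-sequence store.
event_store = {}

def submit_event(payload: bytes, source_ip: str) -> bytes:
    event_hash = hashlib.sha256(payload).digest()
    event_store[event_hash] = {
        "payload": payload,
        "wallclock": time.time(),  # real time stamp kept with the event
        "source_ip": source_ip,    # connection metadata kept with the event
    }
    return event_hash  # this hash is what gets mixed into the sequence
\end{verbatim}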

\begin{figure}
\begin{center}
\centering
\includegraphics[width=0.9\textwidth]{figures/fig_2.png}
\caption[Fig 2]{Inserting data into a Proof of History sequence \label{fig_2}}
\end{center}
\end{figure}

In the example in Figure~\ref{fig_2}, input \texttt{cfd40df8\ldots} was inserted into the Proof of History sequence. The count at which it was inserted is $510145855488$ and the state at which it was inserted is \texttt{3d039eef3}.\\

Every node observing this sequence can determine the order in which all events have been inserted. Generating a reverse order would require an attacker to start the malicious sequence after the second event. This delay would allow any non-malicious peer-to-peer nodes to communicate about the original order.\\

\section{Results}\label{results}
In this section we describe the results.

\section{Conclusions}\label{conclusions}
We worked hard, and achieved very little.

\bibliographystyle{abbrv}
\bibliography{simple}

\end{document}
This is never printed
