example comparing method weights
alistra committed May 2, 2013
1 parent deaa4fa commit 109ce2e
Showing 2 changed files with 29 additions and 1 deletion.
19 changes: 19 additions & 0 deletions thesis-pics/dsu-weight-bad-example.c
@@ -0,0 +1,19 @@
/*
 * Example from the thesis: many operations carrying a low weight can
 * dominate the real cost even though a single highly weighted, fast
 * operation looks more important to a purely weight-based metric.
 * ds.h is presumably the data structure interface described in the
 * thesis; the last argument of every call is taken here to be the
 * operation's weight (an assumption based on the surrounding text).
 */
#include <ds.h>

const int n = 4096; /* number of inserted elements */
const int x = 10;   /* number of searches and their weight */

int main(int argc, char **argv)
{
    ds d = init_d();

    /* n insertions, each with weight 0 */
    for (int i = 0; i < n; i++)
        insert_d(d, i, 0);

    /* one deletion of the maximum and one of the minimum, weight 1 each */
    delete_max_d(d, 1);
    delete_min_d(d, 1);

    /* x searches, each with weight x */
    for (int i = 0; i < x; i++)
        search_d(d, i, x);

    return 0;
}
11 changes: 10 additions & 1 deletion thesis.tex
@@ -531,8 +531,17 @@ \section{Extensions of the idea}
the operation for that data structure identity. That is not a very precise solution, because many heavy
operations carrying low weights can overpower a single crucial, fast operation that is the real bottleneck.

\todo{example}
\begin{figure}[!h]
\lstinputlisting{thesis-pics/dsu-weight-bad-example.c}

\caption{An example program in which many operations with low weights can overpower a crucial fast operation.}

\label{fig:dsu-weight-bad-example}
\end{figure}

\begin{verbatim}
http://www.wolframalpha.com/input/?i=x+%2B+2+*n+%3E+%28x+%2B+2%29*+log_2%28n%29%2C+n+between+0+and+1000%2C+x+between+0+and+1000&dataset=&equal=Submit
\end{verbatim}
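For readability, the Wolfram Alpha query in the URL above decodes to the inequality below. The decoding
itself is mechanical; reading the left-hand side as $x$ constant-time searches plus two linear-time
deletions, and the right-hand side as the same $x + 2$ operations at logarithmic cost, is only an
assumption about the intended comparison, not something stated in the commit:
\[
x + 2n > (x + 2)\log_2 n, \qquad 0 \le n \le 1000,\ 0 \le x \le 1000.
\]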
A better approach would be to approximate the current element count $N$ and actually evaluate each
operation's complexity function at that element count, multiplied by the operation's weight; the sum of
those products would then be our metric of the profitability of a data structure implementation. That would
more accurately describe the cost and would be
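A minimal sketch of that weighted-cost metric in C, assuming hypothetical complexity functions and a plain
array of per-operation weights (the names implementation_cost, complexity_fn and the cost_* helpers are
illustrative, not part of the ds.h interface):

#include <math.h>
#include <stddef.h>

/* Complexity function of one operation, evaluated at the approximated
 * element count N, e.g. O(1), O(log N) or O(N). */
typedef double (*complexity_fn)(double n);

double cost_constant(double n)    { (void)n; return 1.0; }
double cost_logarithmic(double n) { return log2(n < 2.0 ? 2.0 : n); }
double cost_linear(double n)      { return n; }

/* The metric described above: for every operation kind, evaluate its
 * complexity function at the approximated element count N, multiply by
 * the operation's weight, and sum the products.  The candidate
 * implementation with the smallest total is considered the most
 * profitable. */
double implementation_cost(const complexity_fn *complexities,
                           const double *weights, size_t nops, double N)
{
    double total = 0.0;
    for (size_t i = 0; i < nops; i++)
        total += weights[i] * complexities[i](N);
    return total;
}

Two candidate implementations would then be compared by calling implementation_cost with their respective
complexity functions (for example cost_linear searches versus cost_logarithmic ones) and picking the
smaller result.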
