
Normalizing Data

The first step is to normalize your data sets; details on how to do this can be found here. Be sure to keep all of your normalized files in a single directory. This step is the same for both the serverless and server deployments.
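For context, a normalized file is just one record per line. The sketch below shows roughly what that transformation looks like, assuming a colon-delimited `email:password` input and a JSON-lines output; the field names and schema here are illustrative only, so see the normalization documentation for the exact format LeakDB expects.

```go
// normalize.go - a minimal sketch of normalizing a colon-delimited
// "email:password" dump into JSON lines. The Record schema below is an
// assumption, not LeakDB's actual normalized format.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Record is a hypothetical normalized entry.
type Record struct {
	Email    string `json:"email"`
	User     string `json:"user"`
	Domain   string `json:"domain"`
	Password string `json:"password"`
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	out := bufio.NewWriter(os.Stdout)
	defer out.Flush()

	for scanner.Scan() {
		// Split on the first colon only; passwords may contain colons.
		parts := strings.SplitN(scanner.Text(), ":", 2)
		if len(parts) != 2 || !strings.Contains(parts[0], "@") {
			continue // skip malformed lines
		}
		email := strings.ToLower(strings.TrimSpace(parts[0]))
		at := strings.LastIndex(email, "@")
		rec := Record{
			Email:    email,
			User:     email[:at],
			Domain:   email[at+1:],
			Password: parts[1],
		}
		b, err := json.Marshal(rec)
		if err != nil {
			continue
		}
		fmt.Fprintln(out, string(b))
	}
}
```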

Next you'll need to decide whether you want to set up a traditional server deployment or a serverless deployment; there are benefits and drawbacks to each, as described below.

Server Deployment

The server deployment is the cheapest way to set up LeakDB, but you'll have to compute the indexes yourself. The time it takes to compute an index varies with the data set and your hardware. LeakDB can sort multi-terabyte data sets on almost any hardware, but it can take a very long time; domain indexes in particular are slow to compute due to the high number of collisions. However, you only pay this computational price once per data set. If you've already computed an index and want to add more data, you'll need to re-sort the entire index (though you can save bloom filter results, etc.), so be sure to collect and normalize all of your data before computing the index.
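To illustrate why the bloom filter results are worth saving between index builds, here is a minimal sketch of deduplicating normalized records with a bloom filter. This is not LeakDB's actual indexer; the filter library (github.com/bits-and-blooms/bloom/v3), the sizing parameters, and the output file name are all assumptions for the example.

```go
// dedupe.go - a minimal sketch of bloom-filter deduplication ahead of an
// index build. Illustrative only; not LeakDB's real indexing pipeline.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"

	"github.com/bits-and-blooms/bloom/v3"
)

func main() {
	// Size the filter for an assumed 100M entries at a 0.1% false-positive
	// rate; tune these numbers to your own data set.
	filter := bloom.NewWithEstimates(100_000_000, 0.001)

	scanner := bufio.NewScanner(os.Stdin)
	out := bufio.NewWriter(os.Stdout)
	defer out.Flush()

	for scanner.Scan() {
		line := scanner.Bytes()
		// TestAndAdd reports whether the line was (probably) seen before,
		// and records it either way; duplicates are dropped.
		if !filter.TestAndAdd(line) {
			fmt.Fprintln(out, string(line))
		}
	}

	// Persisting the filter state means new data can be deduplicated
	// against the old set later, even though the sorted index itself
	// still has to be rebuilt.
	f, err := os.Create("dedupe.bloom")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := filter.WriteTo(f); err != nil {
		log.Fatal(err)
	}
}
```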

Serverless Deployment

The serverless deployment generally costs more to run because the backend is BigQuery. With BigQuery you pay for data storage as well as per query; it isn't particularly expensive unless you start running lots of queries against the data set. It's also much faster to set up, since you don't have to compute the indexes yourself.
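As a rough sketch, querying the BigQuery backend with the official Go client (cloud.google.com/go/bigquery) might look like the following. The project, dataset, table, and column names here are placeholders, not the identifiers LeakDB actually creates; check your own deployment for the real ones.

```go
// query.go - a minimal sketch of a parameterized query against a BigQuery
// table of normalized records. All identifiers below are placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// BigQuery bills per byte scanned, which is why heavy query volume is
	// what drives up the cost of the serverless deployment.
	q := client.Query(
		"SELECT email, password FROM `my-project.leakdb.credentials` " +
			"WHERE domain = @domain")
	q.Parameters = []bigquery.QueryParameter{
		{Name: "domain", Value: "example.com"},
	}

	it, err := q.Read(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for {
		var row []bigquery.Value
		err := it.Next(&row)
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(row)
	}
}
```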
