Lossy Counting and Sticky Sampling implementation for efficient frequency counts on data streams.
Frequency Count Algorithms for Data Streams

The project provides Scala implementations of the Lossy Counting and Sticky Sampling algorithms for efficient approximate frequency counting on data streams. You can find a description of the algorithms in this post.

We want to know which items exceed a certain frequency, and to identify events and patterns. Answering such questions in real time over a continuous data stream is not easy when serving millions of hits, due to the following challenges:

  • Single pass: each element can be examined only once
  • Limited memory: the full stream cannot be stored
  • Volume: data arrives continuously and at high rate

These constraints call for approximate counting algorithms. Data stream mining to identify events and patterns can be performed with two such algorithms: Lossy Counting and Sticky Sampling.
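To make the idea concrete, here is a minimal sketch of Lossy Counting (Manku and Motwani). The class and method names are illustrative assumptions, not the repo's actual API: the stream is split into buckets of width 1/epsilon, every item is counted together with a maximum-error bound, and entries whose count plus error falls below the current bucket id are pruned at each bucket boundary.

```scala
// Illustrative sketch of Lossy Counting; names are assumptions, not the repo's API.
class LossyCounting[T](epsilon: Double) {
  // item -> (count, maxError), where maxError bounds how much was missed before tracking began
  private val counts = scala.collection.mutable.Map.empty[T, (Long, Long)]
  private var n = 0L
  private val bucketWidth = math.ceil(1.0 / epsilon).toLong

  def add(item: T): Unit = {
    n += 1
    val currentBucket = n / bucketWidth + 1
    counts.get(item) match {
      case Some((c, d)) => counts(item) = (c + 1, d)
      case None         => counts(item) = (1L, currentBucket - 1)
    }
    // At each bucket boundary, prune entries whose count + error <= bucket id
    if (n % bucketWidth == 0) {
      val b = n / bucketWidth
      val toDrop = counts.collect { case (k, (c, d)) if c + d <= b => k }
      counts --= toDrop
    }
  }

  // Report items whose estimated frequency exceeds (support - epsilon) * n
  def frequentItems(support: Double): Map[T, Long] =
    counts.collect { case (k, (c, _)) if c >= (support - epsilon) * n => k -> c }.toMap
}
```

The guarantee is one-sided: all truly frequent items are reported, some items with frequency between (support - epsilon) * n and support * n may be reported as well, and memory stays within O((1/epsilon) * log(epsilon * n)) entries.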

How to run

Use sbt to build and run:

Lossy Counting:
sbt "run-main frequencycount.lossycounting.LossyCountingModel"

Sticky Sampling:
sbt "run-main frequencycount.stickysampling.StickySamplingModel"

Run the tests with sbt test
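For comparison with Lossy Counting, a rough sketch of what a Sticky Sampling model computes is shown below. Again, the class and method names are assumptions for illustration, not the repo's actual API: new items are admitted probabilistically at a rate that decays as the stream grows, and tracked counts are thinned by coin tosses whenever the sampling rate doubles.

```scala
import scala.util.Random

// Illustrative sketch of Sticky Sampling (Manku and Motwani); names are assumptions.
class StickySampling[T](support: Double, epsilon: Double, failureProb: Double,
                        rng: Random = new Random(42)) {
  private val counts = scala.collection.mutable.Map.empty[T, Long]
  // t = (1/eps) * ln(1 / (support * delta)); the first 2t elements are sampled at rate 1
  private val t = math.ceil(math.log(1.0 / (support * failureProb)) / epsilon).toLong
  private var rate = 1L        // new items are sampled with probability 1/rate
  private var windowEnd = 2L * t
  private var n = 0L

  def add(item: T): Unit = {
    n += 1
    if (n > windowEnd) {       // windows of 2t, 2t, 4t, 8t, ... elements; rate doubles
      rate *= 2
      windowEnd *= 2
      // For each tracked entry, toss a fair coin until heads,
      // decrementing the count once per tails; drop entries that reach zero.
      counts.keys.toList.foreach { k =>
        var c = counts(k)
        while (c > 0 && rng.nextBoolean()) c -= 1
        if (c == 0) counts -= k else counts(k) = c
      }
    }
    if (counts.contains(item)) counts(item) += 1
    else if (rng.nextDouble() < 1.0 / rate) counts(item) = 1
  }

  // Report items estimated to exceed (support - epsilon) * n occurrences
  def frequentItems: Map[T, Long] =
    counts.filter { case (_, c) => c >= (support - epsilon) * n }.toMap
}
```

Unlike Lossy Counting, Sticky Sampling is randomized: it guarantees (with probability 1 - failureProb) that every truly frequent item is reported, while using space independent of the stream length.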


Have you found any issues? Want to contribute?

Help me finish the distributed implementation on Spark (see branch).

Please contact me or create a new Issue. Pull requests are always welcome.
