From 2ff57c506cdde7e479014fbe7d982e1b8208c125 Mon Sep 17 00:00:00 2001
From: Jiang Bian
Date: Thu, 14 Nov 2013 08:26:38 -0600
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index dcdf572..d3f1805 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@ A crawler that helps you collect data from Twitter for research. Most of the hea
 To workaround Twitter's rate limits, ``tweetf0rm`` can spawn multiple crawlers each with different proxy and twitter dev account on a single computer (or on multiple computers) and collaboratively ``farm`` user tweets and twitter relationship networks (i.e., friends and followers). The communication channel for coordinating among multiple crawlers is built on top of [redis](http://redis.io/) -- a high performance in-memory key-value store. It has its own scheduler that can balance the load of each worker (or worker machines).
 
-It's quite stable for the things that I want to do. I have collected billions of tweets from **2.8 millions** twitter users in about 2 weeks with a single machine.
+It's quite stable for the things that I want to do. I have collected billions of tweets from **2.6 millions** twitter users in about 2 weeks with a single machine.
 
 Dataset
 ------------