Building a Big-Data product using the power of Python and Hadoop
We performed data aggregation, big-data analysis, and visualization: we collected data from the APIs exposed by the NYTimes website and Twitter, applied the classic big-data analytic method of MapReduce to perform word counts on the collected unstructured data (text corpora), and built a visualized data product in D3.js showing the most trending words for each topic searched on NYTimes.
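The word-count step described above can be sketched as a pair of Hadoop Streaming-style mapper and reducer functions. This is a minimal illustration, not the project's actual scripts: the function names and the in-memory shuffle simulation are our own assumptions.

```python
import re
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Emit (word, 1) for every word in one line of the corpus."""
    for word in re.findall(r"[a-z']+", line.lower()):
        yield word, 1

def reducer(word, counts):
    """Sum the partial counts for a single word."""
    yield word, sum(counts)

def run_local(lines):
    """Simulate Hadoop's shuffle-and-sort phase in memory for testing."""
    pairs = sorted(kv for line in lines for kv in mapper(line))
    return dict(
        out
        for key, group in groupby(pairs, key=itemgetter(0))
        for out in reducer(key, (count for _, count in group))
    )

# Example: count words in a tiny two-line corpus.
print(run_local(["trade war news", "trade talks"]))
```

On a real cluster the same `mapper` and `reducer` would run as separate scripts reading stdin and writing tab-separated key-value pairs; Hadoop performs the shuffle-and-sort that `run_local` fakes here.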
The report and documentation can be found at the Report link.
We created two scripts for collecting the data from NY Times and Twitter for five search topics. Data was collected for a whole week, from 1 April to 7 April 2018.
a. We chose five topics: trade war, Donald Trump, South China Sea, Cambridge Analytica, and gun crime.
b. We then collected the data in the textcorpus folder and applied MapReduce to compute the word count and the word co-occurrence count.
c. The generated CSV files were then used in the D3.js visualization.
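The NYTimes collection step above can be sketched against the Article Search API, which accepts a query and a `YYYYMMDD` date range. This is a hedged sketch, not the project's actual script: the function names are ours, the API key is a placeholder, and the network call is left as a commented usage example.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ARTICLE_SEARCH_URL = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def build_url(topic, begin_date, end_date, api_key, page=0):
    """Build the Article Search request URL for one topic and date range."""
    params = {
        "q": topic,
        "begin_date": begin_date,  # YYYYMMDD, e.g. "20180401"
        "end_date": end_date,      # YYYYMMDD, e.g. "20180407"
        "page": page,              # the API returns 10 docs per page
        "api-key": api_key,
    }
    return ARTICLE_SEARCH_URL + "?" + urlencode(params)

def fetch_snippets(topic, begin_date, end_date, api_key, page=0):
    """Fetch one page of results and return the article snippets."""
    with urlopen(build_url(topic, begin_date, end_date, api_key, page)) as resp:
        docs = json.load(resp)["response"]["docs"]
    return [doc.get("snippet", "") for doc in docs]

# Usage (requires a real key from developer.nytimes.com):
#   snippets = fetch_snippets("trade war", "20180401", "20180407", "YOUR_API_KEY")
```

Looping `fetch_snippets` over the five topics and the seven dates, then writing the returned text to the textcorpus folder, would reproduce the collection step described above.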
Screencast description of the Live Project
- report contains a summary report detailing our implementation and results.
- codes contains the source code as Python scripts and IPython notebooks.
- Part1: Contains the data and script files for Lab1, organized chapter-wise (CH3, CH4, and CH5 subfolders).
- Part2: Contains the data and script files for Lab2. The Nytimes and twitter folders contain the final CSV files used by the web visualization. The input, output, and output_co folders store the text files used to perform the MapReduce operations.
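The co-occurrence counts stored in output_co and the CSV files consumed by the D3.js visualization can be sketched as below. This is an illustrative assumption of the pipeline, not the project's actual code: it counts unordered word pairs appearing on the same line and serializes them to a hypothetical `word1,word2,count` CSV layout.

```python
import csv
import io
import re
from collections import Counter
from itertools import combinations

def cooccurrence_counts(lines):
    """Count unordered word pairs that co-occur in the same line."""
    counts = Counter()
    for line in lines:
        # Deduplicate and sort so each unordered pair has one canonical key.
        words = sorted(set(re.findall(r"[a-z']+", line.lower())))
        counts.update(combinations(words, 2))
    return counts

def to_csv(counts):
    """Serialize pair counts to the word1,word2,count layout assumed here."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["word1", "word2", "count"])
    for (w1, w2), n in counts.most_common():
        writer.writerow([w1, w2, n])
    return buf.getvalue()

# Example: two lines that share the pair (trade, war).
print(to_csv(cooccurrence_counts(["trade war", "the trade war news"])))
```

A CSV in this shape is straightforward to load in D3.js with `d3.csv`, mapping each row to an edge or bubble in the trending-words view.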
- Jupyter Notebook
- Python 3.6 Environment based on Anaconda
- Ubuntu 17, Intel core i7 processor