- data mining using Tweepy (search by keywords, e.g. 'trump')
- sentiment analysis using NLTK, TextBlob, etc.
- word frequency count over a batch of Twitter messages (with stopword removal and stemming)
- calling the methods through an HTTP API, as sketched below
  (e.g. localhost:5000/search?query=trump&tweet_limit=100&word_freq=20)
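The endpoint below is a minimal sketch of how that example URL could be served. It is illustrative rather than the project's exact code: Flask, Tweepy 3.x (where the search call is api.search; Tweepy 4.x renamed it api.search_tweets), TextBlob polarity, and NLTK stopwords with Porter stemming are all assumptions based on the feature list above.

# sketch of the /search endpoint; Flask, Tweepy 3.x, TextBlob and NLTK usage
# here are assumptions, not the project's exact implementation
from collections import Counter

import tweepy
from flask import Flask, jsonify, request
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from textblob import TextBlob

import app_config  # consumer key / access token, see the example file below

app = Flask(__name__)

# Tweepy 3.x style authentication
auth = tweepy.OAuthHandler(app_config.consumer_key, app_config.consumer_secret)
auth.set_access_token(app_config.access_token, app_config.access_token_secret)
api = tweepy.API(auth)

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

@app.route("/search")
def search():
    query = request.args.get("query", "")
    tweet_limit = int(request.args.get("tweet_limit", 100))
    word_freq = int(request.args.get("word_freq", 20))

    # fetch up to tweet_limit tweets matching the keyword
    tweets = [t.text for t in tweepy.Cursor(api.search, q=query, lang="en").items(tweet_limit)]

    # average TextBlob polarity over the batch (-1.0 negative .. +1.0 positive)
    polarity = sum(TextBlob(t).sentiment.polarity for t in tweets) / max(len(tweets), 1)

    # word frequency: lowercase, drop stopwords, stem what remains
    counts = Counter(
        stemmer.stem(w)
        for t in tweets
        for w in t.lower().split()
        if w.isalpha() and w not in stop_words
    )

    return jsonify({
        "query": query,
        "tweet_count": len(tweets),
        "avg_polarity": polarity,
        "top_words": counts.most_common(word_freq),
    })

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)

Run with python app.py (a hypothetical entry-point name) and the URL above should return JSON with the average polarity and the top word_freq stemmed words.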
TODO:
- MySQL for storage
- front-end page
- get all tweets from a single user
- proper auth
- python 3.7.0
- MySQL
- a Twitter account with a consumer key and access token (apply for Twitter developer approval first)
Example of an app_config.py file:
consumer_key=""
consumer_secret=""
access_token=""
access_token_secret=""
HOST = "127.0.0.1"
USER = "root"
PASSWD = "mypwd"
DATABASE = "twt_database"
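The MySQL values are not wired up yet (storage is still on the TODO list), but a minimal sketch of how they could be consumed, assuming the mysql-connector-python driver (an assumption, not a project requirement), might look like:

# sketch: using the MySQL settings from app_config.py for the planned storage
# (mysql-connector-python is an assumed driver choice)
import mysql.connector

import app_config

db = mysql.connector.connect(
    host=app_config.HOST,
    user=app_config.USER,
    password=app_config.PASSWD,
    database=app_config.DATABASE,
)
cursor = db.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())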
- we need Tweepy
- we need TextBlob for sentiment analysis (note that the full NLTK download is about 3.5 GB)
pip install tweepy
pip install nltk
pip install textblob
$ python
>>> import nltk
>>> nltk.download()
In the NLTK downloader window that opens, select “all” and click Download.
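If you would rather skip the interactive downloader (and the ~3.5 GB full corpus), a smaller non-interactive sketch is below; downloading only stopwords and punkt is an assumption about what the stopword-removal and tokenization steps need, not a guarantee that it covers every NLTK feature.

# sketch: non-interactive alternative to the full nltk.download()
# (the choice of corpora here is an assumption)
import nltk

nltk.download("stopwords")
nltk.download("punkt")

# quick sanity check that TextBlob sentiment works
from textblob import TextBlob
print(TextBlob("I love this!").sentiment)  # Sentiment(polarity=..., subjectivity=...)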