- Clone the repo
  ```
  git clone https://github.com/dhrubomoy/sa-backend.git
  ```
- Install virtualenv if not installed
  ```
  pip install virtualenv
  ```
- Create a virtual environment
  ```
  cd sa-backend
  virtualenv env
  ```
- Activate the environment
  ```
  source env/bin/activate
  ```
- Install packages
  ```
  pip install -r requirements.txt
  ```
- Open the `src/sentiment_analysis/utils/twitter_analysis_utils.py` file and replace the following lines with your actual consumer key and access token for the Twitter API:
  ```
  consumer_key = 'YOUR CONSUMER KEY'
  consumer_secret = 'YOUR CONSUMER SECRET'
  access_token = 'YOUR ACCESS TOKEN'
  access_token_secret = 'YOUR ACCESS TOKEN SECRET'
  ```
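Rather than hard-coding the keys, they can be read from environment variables so credentials stay out of version control. A minimal sketch, assuming illustrative environment variable names (these are not defined anywhere in the repo):

```python
import os

# Read Twitter API credentials from the environment instead of hard-coding
# them; the placeholder strings are used as fallbacks when a variable is unset.
# The TWITTER_* names below are assumptions, not part of this repository.
consumer_key = os.environ.get("TWITTER_CONSUMER_KEY", "YOUR CONSUMER KEY")
consumer_secret = os.environ.get("TWITTER_CONSUMER_SECRET", "YOUR CONSUMER SECRET")
access_token = os.environ.get("TWITTER_ACCESS_TOKEN", "YOUR ACCESS TOKEN")
access_token_secret = os.environ.get("TWITTER_ACCESS_TOKEN_SECRET", "YOUR ACCESS TOKEN SECRET")
```

With this in place, the keys can be supplied via `export TWITTER_CONSUMER_KEY=...` before starting the server.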
- Copy the models into `src/files/models`
  - To use an existing model or tokenizer: create a folder called `models` inside `src/files/`. Download these models and save them inside the `models` folder: `rnn-glove-model-01-0.8362.hdf5` and `rnn-word2vec-model-03-0.8425.hdf5`. All the `.hdf5` files are available in our Google Drive shared folder (`mlsa-project/models`); save them inside the `src/files/models/` folder you created.
  - If you create a new model and tokenizer: create a folder called `models` inside `src/files/`. After training the model, copy the saved model into the `models` folder and save the tokenizer in the `src/files/tokenizers` folder. Make sure that the names of the files are reflected in the `src/sentiment_analysis/utils/sentiment_analysis_utils/rnn_model.py` file by updating variables such as `RNN_W2V_MODEL_PATH`, `RNN_GLOVE_MODEL_PATH`, etc.
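The path variables in `rnn_model.py` might look something like the sketch below. The model file names follow the ones listed above, but the exact variable layout and the tokenizer file name are assumptions, so adjust them to match your own artifacts:

```python
import os

# Base directory holding model and tokenizer artifacts.
FILES_DIR = os.path.join("src", "files")

# Saved Keras models (names taken from the download step above).
RNN_GLOVE_MODEL_PATH = os.path.join(FILES_DIR, "models", "rnn-glove-model-01-0.8362.hdf5")
RNN_W2V_MODEL_PATH = os.path.join(FILES_DIR, "models", "rnn-word2vec-model-03-0.8425.hdf5")

# Tokenizer produced during training; the file name here is hypothetical.
TOKENIZER_PATH = os.path.join(FILES_DIR, "tokenizers", "tokenizer.pickle")
```

If you retrain and the checkpoint names change (the numbers encode epoch and validation accuracy), update these constants to match the new file names.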
- Apply migrations
  ```
  cd src
  python manage.py migrate
  ```
- Run server
  ```
  python manage.py runserver
  ```
The server should be up and running. Check http://127.0.0.1:8000/api/searched_tweets/
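To check the endpoint from a script rather than a browser, a small standard-library helper works. A minimal sketch; `is_server_up` is an illustrative helper, not part of the repo:

```python
import urllib.request
import urllib.error

def is_server_up(url, timeout=3):
    """Return True if the URL answers with any HTTP status, False if unreachable."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, even if with an error status (e.g. 404),
        # so it is up and reachable.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False

# Check the endpoint started by `python manage.py runserver`:
is_server_up("http://127.0.0.1:8000/api/searched_tweets/")
```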
Follow the instructions in the sa-frontend repository README.
Unfortunately, deploying both the backend and frontend to AWS is quite tedious, and I couldn't find a simpler way to do it. Follow these instructions:
Once you've set up the server, every time you stop and start it you must repeat steps 6, 8, and 9 mentioned in this file.