
chattr

It's a Twitter sentiment analysis thing: scrape tweets for a query, score their sentiment, show the results.

chattr operation

Basically it does this (a rough sketch of the pipeline follows the list):

  1. Scrapes tweets using twint.
  2. Passes the scraped tweets through RoBERTa (cardiffnlp's twitter-roberta-base-sentiment, via Hugging Face 🤗), which gives sentiment probabilities.
  3. All of the above is served using FastAPI.
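
For the curious, here's a rough single-file sketch of that pipeline. It's illustrative only: the helper names (scrape_tweets, score_sentiment) and the /sentiment endpoint are made up for this example, and the real code under api/ is organised differently.

```python
# Illustrative sketch of the scrape -> score -> serve pipeline.
# Function names and the endpoint are invented for this example.
from typing import List

import torch
import twint
from fastapi import FastAPI
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
LABELS = ["negative", "neutral", "positive"]  # label order used by this model

app = FastAPI()


def scrape_tweets(query: str, limit: int = 20) -> List[str]:
    # twint collects results into twint.output.tweets_list when Store_object is set.
    c = twint.Config()
    c.Search = query
    c.Limit = limit
    c.Store_object = True
    c.Hide_output = True
    twint.run.Search(c)
    return [t.tweet for t in twint.output.tweets_list]


def score_sentiment(text: str) -> dict:
    # Tokenize, run the model, and softmax the logits into probabilities.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze().tolist()
    return dict(zip(LABELS, probs))


@app.get("/sentiment")
def sentiment(query: str):
    tweets = scrape_tweets(query)
    return [{"tweet": t, "scores": score_sentiment(t)} for t in tweets]
```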

Running It

Steps for getting the code to do what's in the gif.

API Server

Installing Dependencies

  • Install the dependencies in api/requirements.txt (pip install -r api/requirements.txt).
  • For torch, I suggest following the steps here.
  • Also, if twint doesn't install properly using pip, use pip3 install --upgrade -e git+https://github.com/twintproject/twint.git@origin/master#egg=twint; this worked for me.

Running the api server

  • uvicorn api.main:app --reload from the root of this repo.
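
Since the frontend (port 3000) and the API (port 5000) run on different origins, the browser needs CORS to be allowed. I haven't checked whether api/main.py already handles this, but if requests from the frontend get blocked, FastAPI's CORSMiddleware is the standard fix; a minimal sketch:

```python
# Hypothetical snippet for api/main.py: allow the React dev server
# (http://localhost:3000) to call this API across origins.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # frontend dev server
    allow_methods=["*"],
    allow_headers=["*"],
)
```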

If Hugging Face throws an error, rename the files in api/core/model to match this layout:

model
└── cardiffnlp
    └── twitter-roberta-base-sentiment
        ├── config.json
        └── pytorch_model.bin

and the files in api/core/tokenizer to match this:

tokenizer
└── cardiffnlp
    └── twitter-roberta-base-sentiment
        ├── config.json
        ├── merges.txt
        ├── special_tokens_map.json
        └── vocab.json

Note: these files are downloaded when the server runs.
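
The layout matters because Hugging Face's from_pretrained looks for specific file names inside the directory it's given. Here's a sketch of what loading from those local folders looks like; the exact paths used inside api/core are an assumption on my part:

```python
# Illustrative only: loading the model and tokenizer from local directories.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_DIR = "api/core/model/cardiffnlp/twitter-roberta-base-sentiment"
TOKENIZER_DIR = "api/core/tokenizer/cardiffnlp/twitter-roberta-base-sentiment"

# from_pretrained expects config.json and pytorch_model.bin in the model
# directory, and the vocab/merges/config files in the tokenizer directory,
# which is why the file names above have to match.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_DIR)
```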

Frontend

  • npm install to install the dependencies.
  • npm start to serve the frontend.

A few more things

  • The API server should be served at http://localhost:5000 and the frontend at http://localhost:3000. (uvicorn defaults to port 8000, so pass --port 5000 if needed.)
  • Impact is calculated using likes, retweets, and replies (a purely hypothetical example of such a score follows this list).
  • This was done without much thought; it's more of an exercise than a project and isn't meant to be serious, so more than a few things may be broken.
  • Tweet scores aren't aggregated; it would be more meaningful if they were.
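
The README doesn't spell out the impact formula, so purely as an illustration, an impact score built from those three counts could be a weighted sum; the weights and function below are invented, not taken from the repo:

```python
# Hypothetical impact score; the weights and name are invented for illustration.
def impact(likes: int, retweets: int, replies: int) -> float:
    # Arbitrary weights: retweets spread a tweet furthest, then replies, then likes.
    return 1.0 * likes + 2.0 * retweets + 1.5 * replies
```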