Crawl and search "I need backup!"
related data from the Twitter streaming API.
You need to register a Twitter app first,
then put the authentication credentials in auth.ini
under the gbf_backmeup
directory:
[AUTH]
consumer key = xxxxxxxxxxxxxxx
consumer secret = xxxxxxxxxxxxxxxx
token = xxxxxxxxxxxxxxxxx
token secret = xxxxxxxxxxxxxxxx
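The package presumably parses this file at startup; a minimal sketch of how such a file can be read with Python's standard configparser (the file path and key names below mirror the snippet above, but the actual loading code inside gbf_backmeup may differ):

```python
# Hypothetical sketch: reading auth.ini with the standard library configparser.
# A sample file is written to a temporary directory here so the snippet is
# self-contained; in the real project the file lives under gbf_backmeup/.
import configparser
import os
import tempfile

sample = """[AUTH]
consumer key = my_consumer_key
consumer secret = my_consumer_secret
token = my_token
token secret = my_token_secret
"""

path = os.path.join(tempfile.mkdtemp(), "auth.ini")
with open(path, "w") as f:
    f.write(sample)

config = configparser.ConfigParser()
config.read(path)
auth = config["AUTH"]

# Keys containing spaces are valid in configparser, so the README's
# "consumer key = ..." style works as-is.
consumer_key = auth["consumer key"]
consumer_secret = auth["consumer secret"]
token = auth["token"]
token_secret = auth["token secret"]
```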
Create the table schema and the predefined dataset:
$ make model
or
from gbf_backmeup import model
model()
Crawl "I need backup!"
related data from the Twitter streaming API and save it in the database:
$ make crawl
or
from gbf_backmeup import crawl
crawl()
It will crawl data from Twitter continuously until interrupted manually.
Search for battles you want to help with:
from gbf_backmeup import search_battles
boss_name = 'Luminiera Omega'
boss_level = 75
search_battles(boss_name, boss_level)
Currently, because there is no threading design,
it is suggested to run make crawl
and the battle search simultaneously, in different processes.
You could also refer to search_battles_example.py
as an example.
Wipe out your Granblue Fantasy related tweets and battle data from the database:
$ make wipeout
or
from gbf_backmeup import wipeout
wipeout()