martinbednar/web_crawler_data_analysis

Web crawler - Data analysis

The scripts in this repository analyze data obtained while crawling the web with the Web Crawler tool.

Previously crawled data, which can be analyzed with the scripts in this repository, is published on the FIT cloud.

The results of previous crawls are also published on the FIT cloud.

Get crawl results for FingerPrint Detector

FingerPrint Detector (first published in JShelter version 0.6) needs a JSON configuration file listing JavaScript endpoints and their weights, where a weight expresses how heavily the endpoint is abused to obtain a device fingerprint.
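As a rough illustration, such a configuration might map endpoint names to integer weights. The endpoints and weights below are purely hypothetical, not taken from real crawled data or from the actual FPD configuration format:

```python
import json

# Hypothetical shape of the FPD configuration: JavaScript endpoints mapped to
# weights. Both the endpoint names and the weight values are illustrative only.
fpd_config = {
    "CanvasRenderingContext2D.prototype.getImageData": 10,
    "Navigator.prototype.plugins": 4,
    "WebGLRenderingContext.prototype.getParameter": 8,
}

# Serialize to JSON, as FingerPrint Detector consumes a JSON file.
print(json.dumps(fpd_config, indent=2))
```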

The results (endpoints and their weights) can be obtained from the crawled data by running the Python script fpd_get_results.py. This script expects the following parameters:

  • --dbs
  • --dbs_uMatrix
  • --dbs_uBlock
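A plausible argparse wiring for these parameters is sketched below. The help strings are assumptions inferred from the parameter names (presumably the uMatrix/uBlock variants point at databases produced by crawls with those extensions enabled), not the script's actual documentation:

```python
import argparse

# Sketch of the CLI that fpd_get_results.py appears to expose. Help texts and
# folder paths are assumptions, not taken from the script itself.
parser = argparse.ArgumentParser(
    description="Derive FPD endpoint weights from crawled data")
parser.add_argument("--dbs",
                    help="folder with SQLite crawl databases (no extension)")
parser.add_argument("--dbs_uMatrix",
                    help="folder with databases from a crawl with uMatrix")
parser.add_argument("--dbs_uBlock",
                    help="folder with databases from a crawl with uBlock Origin")

# Example invocation with hypothetical paths.
args = parser.parse_args(["--dbs", "./data/plain",
                          "--dbs_uMatrix", "./data/umatrix",
                          "--dbs_uBlock", "./data/ublock"])
print(args.dbs, args.dbs_uMatrix, args.dbs_uBlock)
```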

The output will be saved in the ./results directory.

The output will look like these results.

Comprehensive analysis of crawled data

Much more information can be mined from the crawled data than just endpoints and their weights. The Python script start_analysis.py performs a comprehensive analysis of the crawled data.

The script start_analysis.py expects the following parameters:

  • --dbs
  • --dbs_p <the path to the folder where the SQLite databases containing captured JavaScript calls with a privacy extension (e.g. uBlock Origin) are stored>
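To give an idea of what a per-endpoint analysis over such a database could look like, here is a self-contained sketch on an in-memory database. The `javascript` table and `symbol` column are assumptions about the crawl schema; adjust them to match the databases passed via --dbs / --dbs_p:

```python
import sqlite3

# Build a tiny in-memory stand-in for one crawl database. The schema below
# (table "javascript" with a "symbol" column naming the called endpoint) is an
# assumption for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE javascript (symbol TEXT, top_level_url TEXT)")
rows = [
    ("CanvasRenderingContext2D.prototype.getImageData", "https://example.com"),
    ("Navigator.prototype.plugins", "https://example.com"),
    ("CanvasRenderingContext2D.prototype.getImageData", "https://example.org"),
]
conn.executemany("INSERT INTO javascript VALUES (?, ?)", rows)

# Count how often each endpoint was called across the crawl.
counts = dict(conn.execute(
    "SELECT symbol, COUNT(*) FROM javascript GROUP BY symbol"))
print(counts)
```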

The output will be saved in the ./results directory.

The output will look like these results.
