Web scraping tool written in Python 3 that uses regex to extract CVEs, sources and URLs.
Generates a CSV file in the current directory.
Uses the NIST API v2 to get its information.
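As a minimal sketch of what a NIST API v2 lookup looks like, the query URL below uses NIST's documented v2 endpoint; the helper name is illustrative, not taken from the script:

```python
# Hedged sketch of the NVD API v2 lookup URL the tool relies on; the
# endpoint is NIST's documented v2 base, the helper name is illustrative.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cve_id: str) -> str:
    # The v2 API accepts a single CVE id via the cveId query parameter
    return f"{NVD_API}?cveId={cve_id}"

# With requests installed: requests.get(nvd_query_url("CVE-2020-1472")).json()
```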
`requests`, `bs4` (or `beautifulsoup4`) and `prettytable` must be installed.
You might want to create a venv before installing the dependencies.
pip install -r requirements.txt
If you need to use a proxy, set it at the beginning of the script in the `proxy` variable:
# Your proxy here...
proxy = "http://your.proxy.there:8080"
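A sketch of how that proxy string can be handed to `requests` (the `make_proxies` helper is an illustration, not part of the script):

```python
# Sketch only: requests expects a scheme -> proxy-URL mapping in its
# proxies= argument; the make_proxies helper is ours, not the script's.
proxy = "http://your.proxy.there:8080"

def make_proxies(proxy: str) -> dict:
    # An empty proxy string means "no proxy": return an empty mapping
    return {"http": proxy, "https": proxy} if proxy else {}

# usage: requests.get(url, proxies=make_proxies(proxy))
```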
python3 searchcve.py -u https://us-cert.cisa.gov/ncas/alerts/aa21-209a
python3 searchcve.py -u https://www.kennasecurity.com/blog/top-vulnerabilities-of-the-decade/
python3 searchcve.py --url https://arstechnica.com/gadgets/2021/07/feds-list-the-top-30-most-exploited-vulnerabilities-many-are-years-old/
python3 searchcve.py --url https://nvd.nist.gov/
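The regex-extraction step behind the examples above can be sketched as follows, assuming the page body has already been fetched; the pattern and names are illustrative, not copied from the script:

```python
# Rough sketch of scraping CVE ids out of fetched HTML; the regex and
# helper name are illustrative assumptions, not the script's own code.
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}", re.IGNORECASE)

def extract_cves(html: str) -> list:
    # Deduplicate while keeping first-seen order, normalising case
    seen = dict.fromkeys(m.upper() for m in CVE_RE.findall(html))
    return list(seen)
```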
Just in Bash (Ubuntu 18+):
chmod +x developer.sh
./developer.sh
chmod +x searchcve.py
./searchcve.py -u https://us-cert.cisa.gov/ncas/alerts/aa21-209a
./searchcve.py --url https://nvd.nist.gov/
Command line tool that uses the NIST API to get resources.
usage: searchcve_api.py [-h] [-c CVE] [-k KEYWORD] [-u URL] [-i INPUT_FILE]
optional arguments:
-h, --help show this help message and exit
-c CVE, --cve CVE Choose CVE e.g. "CVE-2020-1472"
-k KEYWORD, --keyword KEYWORD
Choose keyword e.g. "microsoft" -- it will give the 20 latest vulnerabilities and export to csv in the current directory
-u URL, --url URL Choose URL e.g. "https://nvd.nist.gov/" -- it will export to csv in the current directory
-i INPUT_FILE, --input-file INPUT_FILE
Choose the path to input file containing CVEs or URLs e.g. "test.csv" -- it will export to csv in the current directory
python3 searchcve_api.py -c CVE-2020-1472
python3 searchcve_api.py -k microsoft
python3 searchcve_api.py -i cves.csv
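Since the `-i` input file mixes CVE ids and URLs, each line has to be dispatched to the right lookup path. A hypothetical sketch of that dispatch (the `classify` helper is an assumption, not the script's code):

```python
# Hypothetical sketch of dispatching lines from the -i input file;
# the classify helper and labels are ours, not the script's.
import re

CVE_LINE = re.compile(r"^CVE-\d{4}-\d{4,7}$", re.IGNORECASE)

def classify(entry: str) -> str:
    entry = entry.strip()
    if CVE_LINE.match(entry):
        return "cve"   # look up via the NIST API
    if entry.startswith(("http://", "https://")):
        return "url"   # scrape the page for CVE ids
    return "unknown"
```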