Feature latest #161

Closed
wants to merge 33 commits into from

33 commits
b0caccd
fixed issues plus jira comment formatting
qmontal Feb 12, 2019
ccf2e4b
fix #147
qmontal Feb 12, 2019
8c53987
tracking of processing was in debug instead of info logging
qmontal Feb 12, 2019
bc3367e
exception of empty scans
qmontal Feb 12, 2019
177c254
allow jira sync module to run after the rest
qmontal Feb 12, 2019
587546a
fix typo
qmontal Feb 14, 2019
c2d80c7
made host resolution optional from the config file with dns_resolv var
qmontal Feb 15, 2019
521184d
Update bug_report.md
qmontal Feb 21, 2019
2c7965d
fix #151
qmontal Feb 25, 2019
5dd6503
Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/Vul…
qmontal Feb 25, 2019
f170dcb
reorg resources files
qmontal Feb 25, 2019
bdbe31d
resources reorg 2
qmontal Feb 25, 2019
05420dd
readding docker-compose credentials template
qmontal Feb 25, 2019
b36e315
fix #142
qmontal Feb 25, 2019
46ddee3
confirm openvas 9 works
qmontal Feb 25, 2019
a3da41e
added to readme openvas supported versions
qmontal Feb 26, 2019
4e94bef
fix bug not detecting existent label due to string format
qmontal Feb 26, 2019
623c881
fix jira issue index when comparing created tickets
qmontal Feb 27, 2019
a288f41
added label *false positive* for reporting on jira
qmontal Feb 27, 2019
86e792f
workaround regarding ignoring ticket updates after risk accepted
qmontal Mar 1, 2019
401dfec
fix #143, added a temporary container to upload through kibana API
qmontal Mar 4, 2019
e7bd4d2
deleting dependency and pulling qualysapi official library, vulnwhisp…
qmontal Mar 15, 2019
936c4a3
added automatic jira server_decommission label removal after x time
qmontal Mar 19, 2019
2d3a140
fix bug
qmontal Mar 19, 2019
70e1d77
fix missing section specification on qualys was connector #156
qmontal Mar 20, 2019
5cdb255
Merge branch 'beta-1.8' of https://github.com/HASecuritySolutions/Vul…
qmontal Mar 20, 2019
9d52596
fix xml encoding issue #156
qmontal Mar 20, 2019
a4420b7
reverse unintended change on frameworks_example.ini
qmontal Mar 20, 2019
47df1ee
typo
qmontal Mar 20, 2019
843aac6
fixing issue with new vulns of already risk accepted issues not being…
qmontal Mar 20, 2019
a4b1b9c
fixed issue where, asset after a removed one, was ignored due to pyth…
qmontal Mar 21, 2019
97e4f07
added logging to file
qmontal Mar 22, 2019
3601ace
improved file logging format
qmontal Mar 22, 2019
10 changes: 7 additions & 3 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -11,7 +11,10 @@ assignees: ''
A clear and concise description of what the bug is.

**Affected module**
Which one is the module that is not working as expected, e.g. Nessus, Qualys WAS, Qualys VM, OpenVAS, ELK, Jira...)
Which one is the module that is not working as expected, e.g. Nessus, Qualys WAS, Qualys VM, OpenVAS, ELK, Jira...).

**VulnWhisperer debug trail**
If applicable, paste the VulnWhisperer debug trail of the execution for further detail (execute with '-d' flag).

**To Reproduce**
Steps to reproduce the behavior:
@@ -27,8 +30,9 @@ A clear and concise description of what you expected to happen.
If applicable, add screenshots to help explain your problem.

**System in which VulnWhisperer runs (please complete the following information):**
- OS: [e.g. iOS]
- Version [e.g. 22]
- OS: [e.g. Ubuntu Server]
- Version: [e.g. 18.04.2 LTS]
- VulnWhisperer Version: [e.g. 1.7.1]

**Additional context**
Add any other context about the problem here.
1 change: 1 addition & 0 deletions .gitignore
@@ -2,6 +2,7 @@
data/
logs/
elk6/vulnwhisperer.ini
resources/elk6/vulnwhisperer.ini
configs/frameworks_example.ini

# Byte-compiled / optimized / DLL files
3 changes: 0 additions & 3 deletions .gitmodules

This file was deleted.

2 changes: 1 addition & 1 deletion README.md
@@ -19,7 +19,7 @@ Currently Supports
- [X] [Nessus (**v6**/**v7**/**v8**)](https://www.tenable.com/products/nessus/nessus-professional)
- [X] [Qualys Web Applications](https://www.qualys.com/apps/web-app-scanning/)
- [X] [Qualys Vulnerability Management](https://www.qualys.com/apps/vulnerability-management/)
- [X] [OpenVAS](http://www.openvas.org/)
- [X] [OpenVAS (**v7**/**v8**/**v9**)](http://www.openvas.org/)
- [X] [Tenable.io](https://www.tenable.com/products/tenable-io)
- [ ] [Detectify](https://detectify.com/)
- [ ] [Nexpose](https://www.rapid7.com/products/nexpose/)
17 changes: 16 additions & 1 deletion bin/vuln_whisperer
@@ -41,7 +41,13 @@ def main():
stream=sys.stdout,
level=logging.DEBUG if args.debug else logging.INFO
)
logger = logging.getLogger(name='main')
logger = logging.getLogger()
# set up the logger to also write the log to a file
fh = logging.FileHandler('vulnwhisperer.log')
fh.setLevel(logging.DEBUG if args.debug else logging.INFO)
fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(funcName)s:%(message)s", "%Y-%m-%d %H:%M:%S"))
logger.addHandler(fh)

if args.fancy:
import coloredlogs
coloredlogs.install(level='DEBUG' if args.debug else 'INFO')
@@ -68,6 +74,7 @@ def main():

vw.whisper_vulnerabilities()
# TODO: fix this to NOT be exit 1 unless in error
close_logging_handlers()
sys.exit(1)

else:
@@ -82,6 +89,7 @@ def main():

vw.whisper_vulnerabilities()
# TODO: fix this to NOT be exit 1 unless in error
close_logging_handlers()
sys.exit(1)

except Exception as e:
@@ -90,8 +98,15 @@
logger.error('{}'.format(str(e)))
print('ERROR: {error}'.format(error=e))
# TODO: fix this to NOT be exit 2 unless in error
close_logging_handlers()
sys.exit(2)

close_logging_handlers()

def close_logging_handlers():
    # close and detach every handler from the root logger
    for handler in list(logging.getLogger().handlers):
        handler.close()
        logging.getLogger().removeHandler(handler)

if __name__ == '__main__':
main()
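
For context, the diff above attaches a `FileHandler` to the root logger alongside the existing stdout handler. A condensed, runnable sketch of the resulting setup, using the file name and format string from the diff (`debug` stands in for `args.debug`):

```python
# condensed sketch of the dual console/file logging configured above
import logging
import sys

debug = False  # stands in for args.debug

logging.basicConfig(stream=sys.stdout,
                    level=logging.DEBUG if debug else logging.INFO)
logger = logging.getLogger()

fh = logging.FileHandler('vulnwhisperer.log')
fh.setLevel(logging.DEBUG if debug else logging.INFO)
fh.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s %(name)s - %(funcName)s:%(message)s',
    '%Y-%m-%d %H:%M:%S'))
logger.addHandler(fh)

logger.info('this line reaches both stdout and vulnwhisperer.log')
```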
2 changes: 2 additions & 0 deletions configs/frameworks_example.ini
@@ -89,12 +89,14 @@ verbose=true
#proxy_password = proxypass

[jira]
enabled = false
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False

#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]
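
Both new keys are plain ini booleans: `enabled` gates the whole Jira module, and `dns_resolv` toggles the optional host DNS resolution introduced in commit c2d80c7. A minimal sketch of reading them, assuming standard `ConfigParser` semantics rather than VulnWhisperer's own config class:

```python
# minimal sketch of consuming the new [jira] options; the path is illustrative
try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import ConfigParser  # Python 2

config = ConfigParser()
config.read('configs/frameworks_example.ini')

if config.getboolean('jira', 'enabled'):
    # dns_resolv = False keeps hosts as raw IPs instead of resolved names
    resolve_dns = config.getboolean('jira', 'dns_resolv')
    print('jira module enabled, dns_resolv={}'.format(resolve_dns))
else:
    print('jira module disabled')
```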
1 change: 0 additions & 1 deletion deps/qualysapi
Submodule qualysapi deleted from 42c3b4
22 changes: 17 additions & 5 deletions docker-compose.v6.yml
@@ -6,7 +6,7 @@ services:
environment:
- cluster.name=vulnwhisperer
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- xpack.security.enabled=false

ulimits:
@@ -21,7 +21,7 @@ services:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
restart: always
#restart: always
networks:
esnet:
aliases:
@@ -40,13 +40,24 @@
esnet:
aliases:
- kibana.local
kibana-config:
image: alpine
container_name: kibana-config
volumes:
- ./resources/elk6/init_kibana.sh:/opt/init_kibana.sh
- ./resources/elk6/kibana_APIonly.json:/opt/kibana_APIonly.json
command: sh -c "apk add --no-cache curl bash && chmod +x /opt/init_kibana.sh && chmod +r /opt/kibana_APIonly.json && cd /opt/ && /bin/bash /opt/init_kibana.sh" # /opt/kibana_APIonly.json"
networks:
esnet:
aliases:
- kibana-config.local
logstash:
image: docker.elastic.co/logstash/logstash:6.6.0
container_name: logstash
volumes:
- ./elk6/pipeline/:/usr/share/logstash/pipeline
#- ./elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./resources/elk6/pipeline/:/usr/share/logstash/pipeline
- ./data/:/opt/vulnwhisperer/data
#- ./resources/elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
environment:
- xpack.monitoring.enabled=false
depends_on:
@@ -64,8 +75,9 @@ services:
"/opt/vulnwhisperer/vulnwhisperer.ini"
]
volumes:
- /opt/vulnwhisperer/data/:/opt/vulnwhisperer/data
- ./data/:/opt/vulnwhisperer/data
- ./elk6/vulnwhisperer.ini:/opt/vulnwhisperer/vulnwhisperer.ini
- ./resources/elk6/vulnwhisperer.ini:/opt/vulnwhisperer/vulnwhisperer.ini
network_mode: host
volumes:
esdata1:
1 change: 1 addition & 0 deletions requirements.txt
@@ -8,3 +8,4 @@ bs4
jira
bottle
coloredlogs
qualysapi>=5.1.0
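
With the `deps/qualysapi` submodule deleted, the official `qualysapi` package pulled in above supplies the connector. A hedged sketch of standalone usage; the ini path and endpoint are illustrative, and VulnWhisperer itself wires the connection up through `QualysConnectConfig`:

```python
# minimal standalone sketch, assuming qualysapi >= 5.1.0 and an ini file
# containing Qualys credentials (the path is an assumption)
import qualysapi

qgc = qualysapi.connect('vulnwhisperer.ini')
# request() mirrors the qgc.request(...) calls in vulnwhisp/frameworks/
xml_output = qgc.request('/api/2.0/fo/scan/', {'action': 'list'})
print(xml_output[:200])
```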
@@ -21,7 +21,7 @@ services:
- 9200:9200
environment:
- xpack.security.enabled=false
restart: always
#restart: always
networks:
esnet:
aliases:
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
33 changes: 33 additions & 0 deletions resources/elk6/init_kibana.sh
@@ -0,0 +1,33 @@
#!/bin/bash

#kibana_url="localhost:5601"
kibana_url="kibana.local:5601"
add_saved_objects="curl -u elastic:changeme -k -XPOST 'http://"$kibana_url"/api/saved_objects/_bulk_create' -H 'Content-Type: application/json' -H \"kbn-xsrf: true\" -d @"

#Create all saved objects - including index pattern
saved_objects_file="kibana_APIonly.json"

#if [ `curl -I localhost:5601/status | head -n1 |cut -d$' ' -f2` -eq '200' ]; then echo "Loading VulnWhisperer Saved Objects"; eval $(echo $add_saved_objects$saved_objects_file); else echo "waiting for kibana"; fi

until [ "`curl -I "$kibana_url"/status | head -n1 |cut -d$' ' -f2`" == "200" ]; do
curl -I "$kibana_url"/status
echo "Waiting for Kibana"
sleep 5
done

echo "Loading VulnWhisperer Saved Objects"
echo $add_saved_objects$saved_objects_file
eval $(echo $add_saved_objects$saved_objects_file)

#set "*" as default index
#id_default_index="87f3bcc0-8b37-11e8-83be-afaed4786d8c"
#os.system("curl -X POST -H \"Content-Type: application/json\" -H \"kbn-xsrf: true\" -d '{\"value\":\""+id_default_index+"\"}' http://elastic:changeme@"+kibana_url+"kibana/settings/defaultIndex")

#Create vulnwhisperer index pattern
#index_name = "logstash-vulnwhisperer-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")

#Create jira index pattern, kept separate so the Discover tab is not cluttered with extraneous fields by default
#index_name = "logstash-jira-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")
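
A minimal Python rendering of the same wait-then-load flow the script implements, assuming Kibana 6.x's saved-objects `_bulk_create` API, the default `elastic:changeme` credentials, and that `kibana_APIonly.json` holds the bulk payload:

```python
# sketch of init_kibana.sh's logic: poll Kibana, then bulk-create saved objects
import json
import time

import requests

KIBANA_URL = 'http://kibana.local:5601'

# poll the status endpoint until Kibana answers 200
while True:
    try:
        if requests.head(KIBANA_URL + '/status').status_code == 200:
            break
    except requests.ConnectionError:
        pass
    print('Waiting for Kibana')
    time.sleep(5)

print('Loading VulnWhisperer Saved Objects')
with open('kibana_APIonly.json') as f:
    payload = json.load(f)

resp = requests.post(KIBANA_URL + '/api/saved_objects/_bulk_create',
                     auth=('elastic', 'changeme'),
                     headers={'kbn-xsrf': 'true'},
                     json=payload)
print(resp.status_code)
```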

File renamed without changes.
428 changes: 428 additions & 0 deletions resources/elk6/kibana_APIonly.json

Large diffs are not rendered by default.

File renamed without changes.
File renamed without changes.
@@ -95,6 +95,7 @@ password = password
write_path = /opt/vulnwhisperer/data/jira/
db_path = /opt/vulnwhisperer/data/database
verbose = true
dns_resolv = False

#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]
30 changes: 17 additions & 13 deletions vulnwhisp/frameworks/qualys_web.py
@@ -43,7 +43,7 @@ def __init__(self, config=None):
self.logger.error('Could not connect to Qualys: {}'.format(str(e)))
self.headers = {
"content-type": "text/xml"}
self.config_parse = qcconf.QualysConnectConfig(config)
self.config_parse = qcconf.QualysConnectConfig(config, 'qualys_web')
try:
self.template_id = self.config_parse.get_template_id()
except:
@@ -74,7 +74,7 @@ def get_was_scan_count(self, status):
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status))))
xml_output = self.qgc.request(self.COUNT_WASSCAN, parameters)
root = objectify.fromstring(xml_output)
root = objectify.fromstring(xml_output.encode('utf-8'))
return root.count.text

def get_reports(self):
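
The `.encode('utf-8')` added in `get_was_scan_count` matters because `lxml.objectify.fromstring` refuses unicode input that carries an XML encoding declaration, which is what the Qualys API returns. A small illustration:

```python
# why the encode('utf-8') fix works: lxml rejects unicode strings that
# declare their own encoding, but accepts the equivalent bytes
from lxml import objectify

doc = u'<?xml version="1.0" encoding="UTF-8"?><count>5</count>'

try:
    objectify.fromstring(doc)
except ValueError as e:
    print('unicode input rejected: {}'.format(e))

root = objectify.fromstring(doc.encode('utf-8'))  # bytes parse cleanly
print(root.text)  # 5
```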
@@ -127,17 +127,21 @@ def get_all_scans(self, limit=1000, offset=1, status='FINISHED'):
qualys_api_limit = limit
dataframes = []
_records = []
total = int(self.get_was_scan_count(status=status))
self.logger.info('Retrieving information for {} scans'.format(total))
for i in range(0, total):
if i % limit == 0:
if (total - i) < limit:
qualys_api_limit = total - i
self.logger.info('Making a request with a limit of {} at offset {}'.format((str(qualys_api_limit), str(i + 1))))
scan_info = self.get_scan_info(limit=qualys_api_limit, offset=i + 1, status=status)
_records.append(scan_info)
self.logger.debug('Converting XML to DataFrame')
dataframes = [self.xml_parser(xml) for xml in _records]
try:
total = int(self.get_was_scan_count(status=status))
self.logger.debug('Already have WAS scan count')
self.logger.info('Retrieving information for {} scans'.format(total))
for i in range(0, total):
if i % limit == 0:
if (total - i) < limit:
qualys_api_limit = total - i
self.logger.info('Making a request with a limit of {} at offset {}'.format(str(qualys_api_limit), str(i + 1)))
scan_info = self.get_scan_info(limit=qualys_api_limit, offset=i + 1, status=status)
_records.append(scan_info)
self.logger.debug('Converting XML to DataFrame')
dataframes = [self.xml_parser(xml) for xml in _records]
except Exception as e:
self.logger.error("Couldn't process all scans: {}".format(e))

return pd.concat(dataframes, axis=0).reset_index().drop('index', axis=1)

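
The exception-wrapped loop above pages through the scan list in chunks of `limit`, shrinking the final request to whatever remains. A standalone sketch of that paging arithmetic, with `fetch_page` standing in for the real Qualys call:

```python
# standalone sketch of the get_all_scans paging arithmetic;
# fetch_page is a stub for the real API request
def fetch_page(limit, offset):
    print('requesting limit={} offset={}'.format(limit, offset))
    return []

def get_all_pages(total, limit=1000):
    records = []
    page_size = limit
    for i in range(0, total, limit):
        # the last page may be smaller than the configured limit
        if total - i < limit:
            page_size = total - i
        records.extend(fetch_page(page_size, i + 1))
    return records

get_all_pages(2500)  # pages of 1000, 1000, 500 at offsets 1, 1001, 2001
```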