
How does it work? #4

Closed

ethicalhack3r opened this issue May 15, 2017 · 1 comment

@ethicalhack3r

I love the idea, but I was wondering how it works (without having to spend hours going through the source code)?

  1. Do you spider the target site first? Then spider Wayback? Then compare the two?
  2. Do you just spider Wayback and show the links?
  3. Do you use Burp's current spider results and compare against Wayback findings?

Would be cool if this was elaborated on in the main readme.

Thanks!

@P3GLEG (Owner) commented May 15, 2017

  1. It spiders only Wayback; it never directly touches the target site. I did this to make the tool completely passive, so the target is never alerted that you are looking at it.

  2. I spider Wayback for the root page (parsing it for links), robots.txt, and sitemap.xml, and filter the query by the SHA digest stored in Wayback to avoid processing duplicate captures (see the sketch after this list). Each result is then parsed and added to the sitemap tree. If you double-click an individual element, you can see the timestamp of when it was indexed.

  3. No, I don't at the moment. The source code itself is pretty simple; the hardest part was figuring out Java Swing.
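
If it helps, here's a rough standalone sketch of the idea in plain Java (this is not the extension's actual code; the class name, the `example.com` target, and the regex-based link extraction are made up for illustration). The Wayback CDX API's `collapse=digest` parameter is what does the digest-based dedup: it collapses consecutive rows with the same content hash, so identical captures are only returned once. The `id_` flag on the snapshot URL asks Wayback for the original response body without its replay banner. Note everything goes to web.archive.org; the target is never contacted.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WaybackSpiderSketch {

    static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        String target = "example.com"; // hypothetical target domain

        // Step 1: ask the CDX API for capture history. collapse=digest keeps
        // only one row per unique content hash, so identical captures of the
        // same page are not processed twice. limit=5 just keeps output small.
        String cdx = "https://web.archive.org/cdx/search/cdx?url=" + target
                + "&fl=timestamp,original,digest&collapse=digest&limit=5";
        String[] rows = get(cdx).split("\n");

        for (String row : rows) {
            if (row.isBlank()) continue;
            String[] f = row.split(" "); // timestamp, original URL, digest
            String timestamp = f[0], original = f[1];

            // Step 2: fetch the raw archived bytes from Wayback, never from
            // the live site. "id_" returns the capture without replay markup.
            String snapshot = "https://web.archive.org/web/" + timestamp
                    + "id_/" + original;
            String html = get(snapshot);

            // Step 3: pull out links to grow the sitemap tree. A real tool
            // would use an HTML parser; a regex keeps this sketch short.
            Matcher m = Pattern.compile("href=[\"']([^\"']+)[\"']").matcher(html);
            System.out.println("capture " + timestamp + " of " + original + ":");
            while (m.find()) {
                System.out.println("  link: " + m.group(1));
            }
        }
    }

    static String get(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
        return CLIENT.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

The same pattern applies to robots.txt and sitemap.xml: query their capture history, skip digest-duplicates, and feed each unique version into the tree with its timestamp.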

I'll elaborate on this in the README soon.

@P3GLEG closed this as completed May 15, 2017