[REVIEW] S3 scraper fix #843

merged 8 commits into from Apr 30, 2013

2 participants

Sonatype member

The S3 scraper is broken if the bucket being listed returns a "paged" response (which happens when the repository has over 1000 unique files in it). The paging logic was creating broken URLs, resulting in only the 1st page being fetched and used to build the prefixes file. More about it in the linked issue.
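To illustrate the kind of mistake described above, here is a hedged sketch of next-page URL construction for an S3 bucket listing. The `marker` query parameter name matches the S3 REST "List Objects" API, but the class name, method, and base URL are illustrative, not the actual Nexus scraper code:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative: building the URL for the next listing "page" of an S3
// bucket. The marker must be the key of the last entry on the current
// page, properly URL-encoded; building this URL incorrectly is exactly
// the kind of bug that leaves only page 1 fetchable.
public class NextPageUrl {

    static String nextPageUrl(String bucketUrl, String lastKey) {
        return bucketUrl + "?marker="
                + URLEncoder.encode(lastKey, StandardCharsets.UTF_8);
    }
}
```

For example, `nextPageUrl("https://bucket.s3.amazonaws.com/", "org/foo bar")` yields `https://bucket.s3.amazonaws.com/?marker=org%2Ffoo+bar`.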

Related issue:




I need a small standalone example project and exact steps to reproduce the problem without this pull applied.

For example, I see no change in the proxy repository prefix file with or without this pull for the http://repository.springsource.com/maven/bundles/external/ repository mentioned in NEXUS-5696. In both cases the prefix file looks like

# Prefix file generated by Sonatype Nexus
# Do not edit, changes will be overwritten!
Sonatype member

@ifedorenko I think something went wrong with your testing, as in my case (virgin boot) the list is 152 (+2 comment) lines long.

My guess is that you reused a working directory from an "old" (broken) Nexus (with a prefixes.txt already scraped by the broken S3 scraper), and hence at boot the WL was "detected" but its update would only happen at the next tick (24h). To verify this, either delete the prefixes.txt file and reboot, or perform a forced update by clicking the "Update now" button.

Here is the prefix file you should have after figuring out what went wrong:

# Prefix file generated by Sonatype Nexus
# Do not edit, changes will be overwritten!

Ok, I can confirm the pull works for the repository mentioned in NEXUS-5696. I think I had problems with browser cache, but I am not 100% sure.

I still don't understand how the code works. What happens if the bucket has 5k keys, for example?

Sonatype member

@ifedorenko Amazon S3 caps the response "page" at a maximum of 1000 entries. This is a hard constraint, no client can modify it, so it can be taken for granted. If a bucket contains 1k+ keys, they will be "paged" (that's the only way to list them all). In your example, it will be 5 HTTP requests to retrieve the 5k entries.
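The paging described above can be sketched as follows. This is an illustrative simulation, not the scraper's actual code: `fetchPage` stands in for an HTTP GET against the bucket, and the `truncated` flag mirrors the `IsTruncated` element of the real S3 XML response.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of S3 "marker" paging: each response holds at most
// 1000 keys; while the response is truncated, the next request passes the
// last key of the current page as the "marker" parameter.
public class S3PagingSketch {

    static final int MAX_KEYS = 1000; // hard S3 limit per response "page"

    static class Page {
        final List<String> keys;
        final boolean truncated; // mirrors <IsTruncated> in the response
        Page(List<String> keys, boolean truncated) {
            this.keys = keys;
            this.truncated = truncated;
        }
    }

    // Stand-in for one HTTP listing request against the bucket.
    static Page fetchPage(List<String> allKeys, String marker) {
        int start = (marker == null) ? 0 : allKeys.indexOf(marker) + 1;
        int end = Math.min(start + MAX_KEYS, allKeys.size());
        return new Page(new ArrayList<>(allKeys.subList(start, end)),
                end < allKeys.size());
    }

    // Sequential listing loop: for a 5000-key bucket this issues 5
    // "requests", holding only one page in memory at a time.
    static int listAll(List<String> allKeys, List<String> collected) {
        int requests = 0;
        String marker = null;
        while (true) {
            Page page = fetchPage(allKeys, marker);
            requests++;
            collected.addAll(page.keys);
            if (!page.truncated) {
                break; // last page reached
            }
            marker = page.keys.get(page.keys.size() - 1);
        }
        return requests;
    }
}
```

With 5000 simulated keys, `listAll` performs exactly 5 requests and collects all 5000 entries.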

But now that you mention this, I think the code should be reworked from recursive to sequential, as in that example five jsoup Document instances would be held in memory at once.

Still, I think this pull should be merged, as it fixes the bug, and then another one should be made that refactors the diveIn method and makes its execution sequential (looped) rather than recursive.

Sonatype member

The diveIn method was made non-recursive, to make it more heap-friendly.
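The recursion-to-loop rework can be sketched like this. This is a hypothetical illustration of the technique, not the actual diveIn code: `Lister` stands in for fetching and parsing one listing page, and the work queue replaces the recursive call stack so each parsed page can be garbage-collected before the next one is fetched.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of turning a recursive directory "dive" into a
// loop: instead of holding one parsed page per recursion frame, a work
// queue of prefixes is drained, so only the current page stays in memory.
public class DiveInSketch {

    interface Lister {
        // Stand-in for fetching+parsing one listing page for a prefix;
        // returns the child prefixes ("common prefixes") found on it.
        List<String> childPrefixesOf(String prefix);
    }

    static List<String> diveIn(String root, Lister lister) {
        List<String> visited = new ArrayList<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(root);
        while (!work.isEmpty()) {
            String prefix = work.pop();
            visited.add(prefix);
            // Children are pushed onto the queue instead of recursed
            // into; nothing from this iteration is retained afterwards.
            for (String child : lister.childPrefixesOf(prefix)) {
                work.push(child);
            }
        }
        return visited;
    }
}
```

The traversal order differs from the recursive version, but every prefix is still visited exactly once.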

Sonatype member


The CI job had one unrelated (search-related) failure; the added test passed (it is a unit test).



I did look for recursion but I could not find it. I do see iteration and can see how the code is expected to work now :-)

@cstamas cstamas merged commit 1a148aa into master Apr 30, 2013
@cstamas cstamas deleted the s3-scraper-fix branch Apr 30, 2013