Adding new tests using mozhttpd server #137
Conversation
Note: The changes regarding whitespace were forced upon me by a pep8 pre-commit hook ;)
wdir = '/'.join([server_address, mhttpd.WDIR])
scraper = ReleaseScraper(os.getcwd(), 'latest', platform='win32',
                         base_url=wdir)
scraper.download()
Do we need to perform a download? We of course want to test that the download method works, but I don't think we'll need to do this for every test. Essentially we want to know that given a bunch of files/directories, the correct one is selected by the parser for download. We could even do this without mozhttpd.
Hmmm... it's true that we could just have some tests in place that cover all the code of the general scraper.download() method, and have the rest just test that the correct path is created...
Let me just think about what tests need to be run to cover all of this.
It's a good start. But to get a better picture about mozhttpd I would like to see how you mimic the ftp.mo server for content, and how you enhance the DirectoryParser for that.
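The idea of verifying selection without any download could be sketched roughly like this. All names here are hypothetical stand-ins (mozdownload's real parser API may differ); the point is only that a regex-based filter over a known list of entries can be checked without mozhttpd or any network traffic:

```python
import re

def filter_entries(entries, regex):
    """Return the entries a parser would select for download.

    Hypothetical stand-in for the real directory-parsing logic,
    for illustration only.
    """
    pattern = re.compile(regex)
    return [entry for entry in entries if pattern.match(entry)]

# A fake server listing -- no server needed at all.
listing = ['3.6.28', '10.0.2', 'latest', 'latest-beta']

# Only proper version directories should match this made-up regex.
releases = filter_entries(listing, r'^\d+\.\d+(\.\d+)?$')
```

A test would then just assert that `releases` contains exactly the expected candidates.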
# TODO:
# Tests of base class:
# - detect_platform
# - show_matching_builds
I had trouble finding a good method of testing those. Any suggestions are welcome.
What exactly will you test here?
At the time of writing this, I was looking to get good code coverage. On review, it does not make much sense to test these two: detect_platform relies on mozinfo, so if there is a mistake it will be found there, not in mozdownload; and show_matching_builds is just a bunch of print-outs, so nothing crucial.
So let's get those lines removed.
I think this is enough for a start. The rest we could do in issues once this has all been generally overhauled by you two. I used coverage to make sure the tests cover as much code as possible; if you are interested, I can send the output file to you. There are still some gaps (I couldn't catch that NotFoundException), but I find it a good start. Thanks for looking over it!
                         version='latest',
                         platform='win32',
                         base_url=self.wdir)
parser1 = DirectoryParser(scraper.path)
In retrospect, I am not sure how clever that is when testing the DirectoryParser. This test will then also be affected by changes to the ReleaseScraper regex.
Maybe I should pass in a regex manually. Any comments on that?
Why are you using the ReleaseScraper here? I don't think you need any scraper class at all. Just prepare a folder with a couple of entries and compare the listing via DirectoryParser with the list you get from the file system.
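The suggested approach -- serve a prepared folder and compare the parsed listing against what the file system reports -- can be sketched with nothing but the standard library. Here http.server stands in for mozhttpd, and list_remote is a hypothetical stand-in for DirectoryParser; both are assumptions for illustration, not the real APIs:

```python
import os
import re
import tempfile
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler
from urllib.request import urlopen

def list_remote(url):
    # Hypothetical stand-in for DirectoryParser: read the generated
    # index page and pull out the linked entry names.
    html = urlopen(url).read().decode('utf-8', 'replace')
    return sorted(re.findall(r'href="([^"/?]+)/?"', html))

# Prepare a folder with a couple of entries ...
docroot = tempfile.mkdtemp()
for name in ('firefox-1.exe', 'firefox-2.exe'):
    open(os.path.join(docroot, name), 'w').close()

# ... serve it on an ephemeral port ...
server = HTTPServer(('127.0.0.1', 0),
                    partial(SimpleHTTPRequestHandler, directory=docroot))
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/' % server.server_address[1]

# ... and compare both listings.
remote = list_remote(url)
local = sorted(os.listdir(docroot))
server.shutdown()
```

The test then only has to assert that `remote == local` -- no scraper class involved.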
Even with a lot of comments I think that this is a nice start. Personally I would have reduced it to the DirectoryParser without taking any scraper class in the patch. But it's your choice how deep you want to go here; it makes a follow-up patch smaller. Also it gives us an idea about the usage of mozhttpd and whether we can slim down the extra mozhttpd testing class. I will check that the next time. Thanks for the work you already put in here, Johannes!
I asked @wlach about some of the mozhttpd questions you had, as I didn't know and couldn't find a satisfactory answer in the documentation.
"I think you should be able to delay responses fairly easily by doing a subclass which does a quick sleep when static content is gotten."
General comment to the above: so, in short, I guess we would have to improvise a little for that, but it should be possible.
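The subclass-with-a-sleep idea might look roughly like this. It is sketched against the standard library's SimpleHTTPRequestHandler; how exactly a custom handler plugs into mozhttpd would still need checking, and the class name and delay value are made up:

```python
import time
from http.server import SimpleHTTPRequestHandler

class DelayedHandler(SimpleHTTPRequestHandler):
    """Static file handler that waits before answering.

    Illustration of the 'quick sleep when static content is gotten'
    idea; the hook into mozhttpd itself would need to be verified.
    """
    delay = 0.2  # seconds -- made-up value, tune per test

    def do_GET(self):
        time.sleep(self.delay)  # simulate a slow server / network
        SimpleHTTPRequestHandler.do_GET(self)
```

A test could then time a request against a server using this handler and assert the response took at least `delay` seconds.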
I think we should file both as follow-up work. For basic HTTP auth we have issue #141, but for delaying the response we might have to wait until we have time to go into details, unless @wlach can give us further hints. I myself will not have time the next couple of weeks. :S
m = hashlib.md5()
# rb necessary to run correctly on Windows.
fopen = open(fpath, "rb")
Please use with so we don't lose the handle in case of an exception.
I did not use with as it was only introduced in Python 2.6; in 2.5 we would need a __future__ import. I remember a case where a feature that came in with 2.6 caused a regression elsewhere. Should I still use it?
@davehunt do we care about Python 2.5? I don't. Any tool we have sets Python 2.6 as the lower limit.
Yeah, 2.6 should be the lowest we support, IMO.
import mozhttpd
import requests

from mozdownload import *
As I have learned lately, that causes issues with pyflakes. Just do an `import mozdownload`.
We should consider adding pyflakes to our Travis CI job and pre-commit hooks.
I filed issue #157 for it.
Besides the mentioned changes, please do not forget to update the list of manual tests and remove those lines which are covered by the automation now. Also, this PR doesn't apply cleanly anymore at the moment.
test_directory_parser: replaced all Scraper instances with regex or strings
test_release_scraper: tests target attribute
utils: added create_md5 function
def setUp(self):
    """Starts server that lists all files in the directory"""
    self.logger = mozlog.getLogger(' ')
This is a suggestion on how to solve the question on the print statement. If we need additional info, we just set the level to debug and it will be shown (still not pretty but shown).
I have forgotten to add mozlog to the ./runtests.sh script, so I shall just let it stand here for you guys to decide; if you agree on it, I shall add it, otherwise I will revert to the original again.
Sounds good to me, although we should pass a class name to the logger, otherwise it won't be clearly identifiable from the mozdownload log entries.
Agreed with Dave. Let's add a name.
There are no manual tests to be removed except one for the ReleaseScraper, which I would like to do with issue #156, if you don't mind. Also, do not review the last commit unless you agree with the changes highlighted in this comment.
OK, that would work for me then. I just missed feedback from you about it in an earlier comment.
m = hashlib.md5()
# rb necessary to run correctly on Windows.
with open(path, "rb") as fopen:
Instead of `fopen` please call it `f`. One could get the impression it's a modified open method. :)
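Putting the review comments together (use `with`, open in `"rb"` mode, call the handle `f`), the helper might end up looking something like this. `create_md5` is the name mentioned in the commit message; the chunked reading is my own addition, an assumption made so large installers don't have to fit in memory:

```python
import hashlib

def create_md5(path):
    """Return the hex MD5 digest of the file at path.

    Sketch of the helper discussed above; the chunk size is arbitrary.
    """
    m = hashlib.md5()
    # 'rb' is necessary to run correctly on Windows.
    with open(path, "rb") as f:
        # Read in fixed-size chunks so the whole file never
        # lands in memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            m.update(chunk)
    return m.hexdigest()
```

Using `with` here also addresses the earlier concern about losing the handle when an exception is raised mid-read.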
I think we are very close to getting it landed. Just fix the remaining items pointed out. Next time I will do a test run on my own.
I did the one change you suggested, so it should be all set to go. I tested it on my machine on Ubuntu and it ran without problems. Let me know if it throws any errors on yours. Cheers! P.S. Sorry about not responding to the manual tests comment. I must have missed it... or forgotten about it (blush)
This has been merged as 2c6a9e6
Awesome! :)
Hi folks,
This PR is for discussion only. It is just the easiest way of sharing ideas.
I set up two very basic tests for the DirectoryParser and for the ReleaseScraper (-p win32 -v latest).
Note that there is a basic unittest class setting up mozhttpd and terminating it, so we just need to inherit from it.
downloadable_tests is the folder that mimics the Mozilla server, so we can just add downloadable content into it. The things I would like your input on are:
Thanks. I'm looking forward to your input :)