Run all S3 tests on travis #1790

Closed
lopuhin opened this issue Feb 18, 2016 · 12 comments · Fixed by #4804
Comments

@lopuhin
Member

lopuhin commented Feb 18, 2016

There are some S3 tests that are currently skipped on Travis (S3FeedStorageTest, TestS3FilesStore) because they need real S3. It would be nice to run them; here are some options:

  • Test against real S3, using https://docs.travis-ci.com/user/environment-variables/ to hide AWS keys. But there are some problems: tests might be unstable if S3 is down, and this depends on someone providing AWS keys and paying the (tiny) bills.
  • Use something like https://github.com/jubos/fake-s3 to fake S3 - if this works, then it will be much more convenient.
  • Mock S3 (I did not find anything satisfactory upfront, but maybe there is something).

From discussion here: #1761 (comment)

@darshanime
Contributor

Hey @lopuhin!
I would like to attempt to set up fake-s3. I installed fake-s3 on my local machine and successfully tried a few put and rb requests. Now, for our tests, I would need to install Ruby and fake-s3 in the tox and Travis environments by editing tox.ini and .travis.yml. In the TestS3FilesStore test itself, what about this:

    # rest of the test
    uri = "s3://test.bucket/scrapy-test"
    # or we can set S3_TEST_FILE_URI to this in tox.ini
    store = S3FilesStore(uri)
    store.port = 4567
    store.host = 'localhost'
    # rest of the test

(We can start the fake-s3 server on port 4567 in tox.ini and .travis.yml.)

Am I on the correct path?

@lopuhin
Member Author

lopuhin commented Feb 19, 2016

Hey @darshanime , I think your approach is correct, please go ahead! Maybe it would also be nice to check that fake-s3 is running at the start of the test, and skip the test if it's not.
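A minimal sketch of such a skip check, assuming fake-s3 listens on its default port 4567 (the fakes3_running helper name is mine, not something in the Scrapy test suite):

```python
import socket
import unittest


def fakes3_running(host="localhost", port=4567, timeout=0.5):
    """Return True if something accepts TCP connections on the fake-s3 port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


@unittest.skipUnless(fakes3_running(), "fake-s3 server is not running")
class TestS3FilesStore(unittest.TestCase):
    pass  # the real test methods would go here
```

This only proves a listener exists on the port, not that it is actually fake-s3, but that is usually enough to avoid confusing failures on machines without it.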

@kmike
Member

kmike commented Feb 19, 2016

It may be better to start the fake-s3 server as a pytest fixture, ideally on a random port. We already have technical debt of this kind in Scrapy with mockserver and mitmproxy: they run on fixed ports, which makes it impossible to execute tests in parallel.
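A sketch of what such a random-port fixture could look like (the fakes3 command-line flags here are an assumption, and free_port is a hypothetical helper):

```python
import socket
import subprocess
import time

import pytest


def free_port():
    """Ask the OS for a currently unused TCP port."""
    with socket.socket() as sock:
        sock.bind(("127.0.0.1", 0))
        return sock.getsockname()[1]


@pytest.fixture(scope="session")
def fakes3(tmp_path_factory):
    """Start fake-s3 on a random port for the session and yield the port."""
    port = free_port()
    root = tmp_path_factory.mktemp("fakes3")
    proc = subprocess.Popen(
        ["fakes3", "--root", str(root), "--port", str(port)])
    try:
        # Poll until the server accepts connections (give up after ~5s).
        for _ in range(50):
            try:
                socket.create_connection(("127.0.0.1", port), timeout=0.1).close()
                break
            except OSError:
                time.sleep(0.1)
        yield port
    finally:
        proc.terminate()
        proc.wait()
```

There is a small race between picking the port and the server binding it, but for CI purposes that is usually acceptable.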

I also wonder how good https://github.com/spulec/moto is - it seems to have a server mode.

@darshanime
Contributor

Hey @kmike, @lopuhin!
I tried to use moto today to simulate S3, with mixed success.
I used a pytest fixture to set up the S3 session, as suggested.

import unittest

import boto
import pytest
from moto import mock_s3


class TestS3FilesStore(unittest.TestCase):

    @pytest.fixture(autouse=True)
    def setup_s3(self):
        mock = mock_s3()
        mock.start()
        conn = boto.connect_s3()
        bucket = conn.create_bucket('test.bucket')
        # rest of the test

The tests passed when I ignored the is_botocore() checks everywhere (by setting it to False). However, with botocore, I get an SSLError: [Errno bad handshake] (32, 'EPIPE').

@kmike
Member

kmike commented Feb 20, 2016

Hey @darshanime - with mock_s3 we won't be able to test botocore, so it is worse than fake-s3; but maybe we can use moto in server mode for tests, if it provides a server that emulates S3 the way fake-s3 does.

@darshanime
Contributor

Yes, it does have a standalone server mode. Is it okay to start it using import subprocess; subprocess.call(["moto_server", "s3"]), or should I look at moto's source code and see how it is started?

@kmike
Member

kmike commented Feb 20, 2016

@darshanime subprocess / Popen is fine

@darshanime
Contributor

Hi!

I came across a problem when using moto_server. To use moto in standalone server mode, we have to provide proxies - something that is not supported in botocore. This can be worked around by setting environment variables in the pytest fixture:

import os
import subprocess

import boto

os.environ["HTTP_PROXY"] = "127.0.0.1:5000"
os.environ["HTTPS_PROXY"] = "127.0.0.1:5000"
subprocess.Popen(["moto_server", "s3"])
# Using boto, the bucket is created successfully when the connection is
# initialized like this:
conn = boto.connect_s3("", "", is_secure=False, proxy_port=5000, proxy="127.0.0.1")
bucket = conn.create_bucket('test.bucket')

However, when the test runs, botocore requests this URL from moto_server, which ends in a 404 and fails the test:
127.0.0.1 - - [22/Feb/2016 13:21:38] "GET http://169.254.169.254/latest/meta-data/iam/security-credentials/ HTTP/1.1" 404 -
It looks like we aren't able to fake our credentials, and botocore doesn't seem to have any feature to tone down security (from the botocore documentation):

botocore does not provide higher-level abstractions on top of these services, operations and responses. That is left to the application layer.
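One thing that might help with the 404 above: botocore only falls back to the EC2 instance-metadata service when it finds no credentials earlier in its lookup chain, and environment variables are checked before the metadata service. So exporting dummy keys before the test runs can short-circuit that request. A sketch (the values are arbitrary placeholders):

```python
import os

# Dummy credentials: botocore reads AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY
# from the environment before trying 169.254.169.254, so setting them
# avoids the metadata-service lookup entirely.
os.environ["AWS_ACCESS_KEY_ID"] = "testing"
os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
```

Since moto_server accepts any credentials, the actual values do not matter.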

@jersub

jersub commented Mar 15, 2016

Hi,

For the record, several months ago, I wrote some code for testing S3FeedStorage using moto in PR #1559.

For botocore, an alternative might be to use the stubbing approach described in https://botocore.readthedocs.org/en/latest/reference/stubber.html

@jersub

jersub commented Mar 16, 2016

I see moto is supposed to support both botocore and boto3, even if there are still some open issues: getmoto/moto#348

@Gallaecio
Member

Specific S3 tests that seem to be skipped when running tox locally without the AWS environment variables:

tests/test_downloader_handlers.py::S3TestCase::test_request_signing5
tests/test_feedexport.py::S3FeedStorageTest::test_parse_credentials
tests/test_feedexport.py::S3FeedStorageTest::test_store
tests/test_pipeline_files.py::TestS3FilesStore::test_persist

@Gallaecio
Member

I think we should postpone this work until after #1866 is implemented.
