doc: dev guide: how to run s3-tests locally against vstart #14508

Merged: 1 commit, Apr 13, 2017
File changed: doc/dev/index.rst (91 additions, 0 deletions)


Testing - how to run s3-tests locally
=====================================

RGW (the Ceph object gateway) code can be tested by building Ceph locally from
source, starting a vstart cluster, and running the ``s3-tests`` suite against it.

The following instructions should work on jewel and above.

Step 1 - build Ceph
-------------------

Refer to :doc:`install/build-ceph`.

Step 2 can be done in parallel while the build runs.
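
As a rough sketch only (the linked document is authoritative), a jewel-era
autotools build from the top of the source tree looks something like this::

   ./install-deps.sh    # install build dependencies; may require sudo
   ./autogen.sh
   ./configure
   make -j$(nproc)      # this can take a long time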

Step 2 - s3-tests
-----------------

The test suite lives in a separate git repository and is written in Python.
For jewel, perform the following steps::

git clone git://github.com/ceph/s3-tests
cd s3-tests
git checkout ceph-jewel
./bootstrap

For kraken, check out the ``ceph-kraken`` branch instead of ``ceph-jewel``; for
master, use ``ceph-master``.
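
For example, to prepare the suite for kraken instead::

   git checkout ceph-kraken
   ./bootstrap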

Step 3 - vstart
---------------

When the build completes, and while still in the top-level directory of the
Ceph git clone, do the following (``-n`` creates a brand-new cluster, ``-r``
starts an RGW instance, and ``--mds_num 0`` skips the MDS daemons, which are
not needed for S3 testing)::

cd src/
./vstart.sh -n -r --mds_num 0

This will produce a lot of output as the vstart cluster is started up. At the
end you should see a message like::

started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.

This means the cluster is running.
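
If you want to confirm that RGW in particular is answering, you can poke at it
from the same ``src/`` directory. This is an optional sanity check and assumes
the vstart defaults: the ``./ceph`` wrapper and ``ceph.conf`` living in
``src/``, and RGW listening on port 8000::

   ./ceph -c ceph.conf -s      # overall cluster status
   curl http://localhost:8000  # anonymous S3 request; RGW should reply with ListAllMyBucketsResult XML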

Step 4 - prepare S3 environment
-------------------------------

The s3-tests suite expects to run in a particular environment (S3 users, keys,
configuration file).

Before you try to prepare the environment, make sure you don't have any
existing keyring or ``ceph.conf`` files in ``/etc/ceph``.
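
For example (``/etc/ceph.bak`` is just an arbitrary backup location)::

   ls /etc/ceph
   sudo mv /etc/ceph /etc/ceph.bak   # only if the directory is not empty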

For jewel, Abhishek Lekshmanan wrote a script that prepares this environment.
Run the following commands from the ``src/`` directory of your Ceph clone
(where you just started the vstart cluster)::

pushd ~
wget https://gist.githubusercontent.com/theanalyst/2fee6bc2780f67c79cad7802040fcddc/raw/b497ddba053d9a6fb5d91b73924cbafcfc32f137/s3tests-bootstrap.sh
popd
sh ~/s3tests-bootstrap.sh

If the script is successful, it will display a blob of JSON and create a file
called ``s3.conf`` in the current directory.
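
The exact contents depend on the users the script creates, but ``s3.conf`` is
an INI file in the same shape as the ``s3tests.conf.SAMPLE`` shipped with the
suite: a ``[DEFAULT]`` section pointing at the vstart RGW, plus per-user
sections carrying credentials. The values below are placeholders, not literal
output::

   [DEFAULT]
   host = localhost
   port = 8000
   is_secure = no

   [s3 main]
   user_id = <generated by the script>
   access_key = <generated by the script>
   secret_key = <generated by the script>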

Step 5 - run s3-tests
---------------------

To actually run the tests, note the full path to the ``s3.conf`` file created
in the previous step, then change to the directory where you cloned
``s3-tests`` in Step 2.

First, verify that the test suite is there and can be run::

S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v --collect-only

This should complete quickly; it is effectively a dry run that lists every
test in the suite without executing any of them.

Finally, run the test suite itself::

S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v

Note: the following test is expected to error; this is a known problem in the
test setup (still a work in progress), not an actual test failure::

ERROR: s3tests.functional.test_s3.test_bucket_acl_grant_email
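
If you would rather skip that test entirely, nose can exclude tests whose
names match a regular expression via ``-e``; this is an optional convenience,
not part of the original procedure::

   S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v -e test_bucket_acl_grant_email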

