Merge pull request #14508 from smithfarm/wip-doc-dev
doc: dev guide: how to run s3-tests locally against vstart

Reviewed-by: Loic Dachary <ldachary@redhat.com>
Loic Dachary committed Apr 13, 2017
2 parents 9967fce + 0dd2dfc commit 6e20d64
Showing 1 changed file with 91 additions and 0 deletions: doc/dev/index.rst
@@ -1386,6 +1386,97 @@ server list`` on the teuthology machine, but the target VM hostnames (e.g.
cluster.


Testing - how to run s3-tests locally
=====================================

RGW code can be tested by building Ceph locally from source, starting a vstart
cluster, and running the ``s3-tests`` suite against it.

The following instructions should work on jewel and above.

Step 1 - build Ceph
-------------------

Refer to :doc:`install/build-ceph`.
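
As a very rough sketch only (the linked document is authoritative; jewel-era
trees build with autotools, while newer branches use cmake via
``do_cmake.sh``)::

    ./install-deps.sh
    ./autogen.sh
    ./configure
    make -j$(nproc)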

You can do Step 2 in parallel while the build is running.

Step 2 - s3-tests
-----------------

The test suite lives in a separate git repository and is written in Python.
Perform the following steps for jewel::

    git clone git://github.com/ceph/s3-tests
    cd s3-tests
    git checkout ceph-jewel
    ./bootstrap

For kraken, check out the ``ceph-kraken`` branch instead of ``ceph-jewel``. For
master, use ``ceph-master``.
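
For example, the checkout step for kraken would be::

    git checkout ceph-kraken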

Step 3 - vstart
---------------

Once the build completes, from the top-level directory of the git clone where
you built Ceph, do the following::

    cd src/
    ./vstart.sh -n -r --mds_num 0

This will produce a lot of output as the vstart cluster starts up. At the end
you should see a message like::

    started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.

This means the cluster is running.
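
At this point you can optionally sanity-check the cluster. A minimal sketch,
assuming the ``ceph`` wrapper script in ``src/`` and the ``ceph.conf`` that
vstart writes there (cmake builds keep the binaries under ``build/bin``
instead)::

    ./ceph -c ceph.conf status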

Step 4 - prepare S3 environment
-------------------------------

The s3-tests suite expects to run in a particular environment (S3 users, keys,
configuration file).

Before you try to prepare the environment, make sure you don't have any
existing keyring or ``ceph.conf`` files in ``/etc/ceph``.
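
A hypothetical way to check (moving any leftovers aside is safer than
deleting them)::

    ls /etc/ceph
    # if the above lists anything, move it out of the way, e.g.:
    sudo mv /etc/ceph/ceph.conf /etc/ceph/ceph.conf.saved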

For jewel, Abhishek Lekshmanan wrote a script that can be used for this
purpose. Assuming you are testing jewel, run the following commands from the
``src/`` directory of your Ceph clone (where you just started the vstart
cluster)::

    pushd ~
    wget https://gist.githubusercontent.com/theanalyst/2fee6bc2780f67c79cad7802040fcddc/raw/b497ddba053d9a6fb5d91b73924cbafcfc32f137/s3tests-bootstrap.sh
    popd
    sh ~/s3tests-bootstrap.sh

If the script is successful, it will display a blob of JSON and create a file
called ``s3.conf`` in the current directory.
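
For reference, the generated file follows the same INI layout as the
``s3tests.conf.SAMPLE`` shipped in the s3-tests repository. An illustrative
excerpt with made-up values (your file will differ)::

    [DEFAULT]
    host = localhost
    port = 8000
    is_secure = no

    [s3 main]
    user_id = s3test-main
    display_name = s3test-main
    email = main@example.com
    access_key = 0123456789abcdef0123
    secret_key = 0123456789abcdef0123456789abcdef01234567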

Step 5 - run s3-tests
---------------------

To actually run the tests, take note of the full path to the ``s3.conf`` file
created in the previous step and then change to the directory where you cloned
``s3-tests`` in Step 2.

First, verify that the test suite is there and can be run::

    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v --collect-only

This should complete quickly; it is effectively a "dry run" of all the tests
in the suite.

Finally, run the test suite itself::

    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v
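
While iterating it can be handy to run a single test instead of the whole
suite; nosetests accepts a dotted ``module:function`` path (the test picked
here is just an example)::

    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -v s3tests.functional.test_s3:test_bucket_list_empty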

Note: the following test is expected to error; this is a problem in the test
setup (WIP), not an actual test failure::

    ERROR: s3tests.functional.test_s3.test_bucket_acl_grant_email


.. WIP
.. ===
..
