From 0dd2dfce1d5e28249c7f7116426833477c775598 Mon Sep 17 00:00:00 2001
From: Nathan Cutler
Date: Thu, 13 Apr 2017 19:14:52 +0200
Subject: [PATCH] doc: dev guide: how to run s3-tests locally against vstart

Add a bunch of verbiage to the Developer Guide

Signed-off-by: Abhishek Lekshmanan
Signed-off-by: Nathan Cutler
---
 doc/dev/index.rst | 91 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/doc/dev/index.rst b/doc/dev/index.rst
index fa19d307e6aa1..38b64eaf636a0 100644
--- a/doc/dev/index.rst
+++ b/doc/dev/index.rst
@@ -1386,6 +1386,97 @@ server list`` on the teuthology machine, but the target VM hostnames (e.g.
 cluster.
+
+Testing - how to run s3-tests locally
+=====================================
+
+RGW code can be tested by building Ceph locally from source, starting a vstart
+cluster, and running the "s3-tests" suite against it.
+
+The following instructions should work on jewel and above.
+
+Step 1 - build Ceph
+-------------------
+
+Refer to :doc:`install/build-ceph`.
+
+You can do Step 2 separately while the build is running.
+
+Step 2 - s3-tests
+-----------------
+
+The test suite lives in a separate git repository and is written in Python.
+Perform the following steps for jewel::
+
+    git clone git://github.com/ceph/s3-tests
+    cd s3-tests
+    git checkout ceph-jewel
+    ./bootstrap
+
+For kraken, check out the ``ceph-kraken`` branch instead of ``ceph-jewel``. For
+master, use ``ceph-master``.
+
+Step 3 - vstart
+---------------
+
+When the build completes, and still in the top-level directory of the git
+clone where you built Ceph, do the following::
+
+    cd src/
+    ./vstart.sh -n -r --mds_num 0
+
+This will produce a lot of output as the vstart cluster starts up. At the end
+you should see a message like::
+
+    started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.
+
+This means the cluster is running.
+
+Step 4 - prepare S3 environment
+-------------------------------
+
+The s3-tests suite expects a particular environment to be in place before it
+will run (S3 users, keys, and a configuration file).
+
+Before you try to prepare the environment, make sure you don't have any
+existing keyring or ``ceph.conf`` files in ``/etc/ceph``.
+
+For jewel, Abhishek Lekshmanan wrote a script that can be used for this
+purpose. Assuming you are testing jewel, run the following commands from the
+``src/`` directory of your Ceph clone (where you just started the vstart
+cluster)::
+
+    pushd ~
+    wget https://gist.githubusercontent.com/theanalyst/2fee6bc2780f67c79cad7802040fcddc/raw/b497ddba053d9a6fb5d91b73924cbafcfc32f137/s3tests-bootstrap.sh
+    popd
+    sh ~/s3tests-bootstrap.sh
+
+If the script succeeds, it displays a blob of JSON and creates a file called
+``s3.conf`` in the current directory.
+
+Step 5 - run s3-tests
+---------------------
+
+To actually run the tests, take note of the full path to the ``s3.conf`` file
+created in the previous step and then move to the directory where you cloned
+``s3-tests`` in Step 2.
+
+First, verify that the test suite is there and can be run::
+
+    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v --collect-only
+
+This should complete quickly - it is a "dry run" that lists all the tests in
+the suite without executing them.
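+
+If you want to try a single test first, nose also accepts an individual test in
+``module:function`` form. The test name below is only an example - substitute
+any name printed by the ``--collect-only`` run above, as the exact set of tests
+varies between s3-tests branches::
+
+    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -v \
+        s3tests.functional.test_s3:test_bucket_list_empty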
+
+Finally, run the test suite itself::
+
+    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v
+
+Note: the following test is expected to error - this is a problem in the test
+setup (WIP), not an actual test failure::
+
+    ERROR: s3tests.functional.test_s3.test_bucket_acl_grant_email
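+
+Until the test setup is fixed, that known error can be filtered out of the run
+with nose's ``--exclude`` option, which takes a regular expression matched
+against test names. The pattern below is only an illustration::
+
+    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v \
+        --exclude='test_bucket_acl_grant_email'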