cls: optimize header file dependency #15165

Merged: 14 commits into ceph:master, Jun 5, 2017

Conversation

@badone
Contributor

badone commented May 19, 2017

Follows on from #9663

cxwshawn and others added some commits Jun 13, 2016

cls: optimize cephfs header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize journal header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize lock header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize log header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize numops header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize rbd header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize refcount header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize replica_log header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize rgw header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize statelog header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize timeindex header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize user header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: optimize version header file dependency
Signed-off-by: Xiaowei Chen <chen.xiaowei@h3c.com>
cls: Formatting changes and merge fixup
Fix white space inconsistencies and resolve compile error.

Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
@liewegas

Member

liewegas commented May 23, 2017

suspicious of this upgrade failure: /a/sage-2017-05-23_06:32:52-upgrade:jewel-x-wip-sage-testing---basic-smithi/1220951

2017-05-23T07:37:43.205 INFO:tasks.workunit:Running workunit cls/test_cls_hello.sh...
2017-05-23T07:37:43.205 INFO:teuthology.orchestra.run.smithi022:Running (workunit test cls/test_cls_hello.sh): 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_hello.sh'
2017-05-23T07:37:43.274 INFO:tasks.workunit.client.1.smithi022.stdout:Running main() from gmock_main.cc
2017-05-23T07:37:43.274 INFO:tasks.workunit.client.1.smithi022.stdout:[==========] Running 6 tests from 1 test case.
2017-05-23T07:37:43.274 INFO:tasks.workunit.client.1.smithi022.stdout:[----------] Global test environment set-up.
2017-05-23T07:37:43.274 INFO:tasks.workunit.client.1.smithi022.stdout:[----------] 6 tests from ClsHello
2017-05-23T07:37:43.275 INFO:tasks.workunit.client.1.smithi022.stdout:[ RUN      ] ClsHello.SayHello
2017-05-23T10:37:43.262 INFO:tasks.workunit:Stopping ['rados/test-upgrade-v11.0.0.sh', 'cls'] on client.1...
2017-05-23T10:37:43.336 INFO:teuthology.orchestra.run.smithi022:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.1 /home/ubuntu/cephtest/clone.client.1'
2017-05-23T10:37:43.559 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 86, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 65, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/parallel.py", line 55, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 85, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 99, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 22, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/parallel.py", line 63, in _run_spawned
    mgr = run_tasks.run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 65, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/sequential.py", line 46, in task
    mgr = run_tasks.run_one_task(taskname, ctx=ctx, config=confg)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 65, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-sage-testing/qa/tasks/workunit.py", line 176, in task
    config.get('env'), timeout=timeout)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 85, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 99, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 22, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-sage-testing/qa/tasks/workunit.py", line 450, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 193, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 414, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 149, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 171, in _raise_for_status
    node=self.hostname, label=self.label
CommandFailedError: Command failed (workunit test cls/test_cls_hello.sh) on smithi022 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_hello.sh'
@tchaikov

Contributor

tchaikov commented Jun 5, 2017

All of the upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} tests passed in the above upgrade suite, while the failure referenced by @liewegas is upgrade:jewel-x/point-to-point-x/{distros/centos_7.3.yaml point-to-point-upgrade.yaml}.

I think that, as far as header changes are concerned, the failure is orthogonal to Ubuntu vs. CentOS.

@tchaikov tchaikov merged commit 4722abe into ceph:master Jun 5, 2017

3 checks passed

Signed-off-by: all commits in this PR are signed
Unmodified Submodules: submodules for project are unmodified
default: Build finished.

@badone badone deleted the badone:wip-cls-optimize-header-file-dependency branch Jun 5, 2017