
Changes for orchestra.daemon to work with systemd #934

Merged
merged 21 commits into master from wip-daemon-helper-systemd on Oct 18, 2017

Conversation

vasukulkarni
Contributor

Use systemd when the cluster is set up with ceph-deploy or ceph-ansible; still in testing.

Signed-off-by: Vasu Kulkarni vasu@redhat.com

@vasukulkarni
Contributor Author

@ktdreyer @athanatos once this works, we can replace half of the install task with ceph-deploy and the other half with ceph-ansible, which will do more customer-like testing for thrash test cases.

Currently waiting on http://tracker.ceph.com/issues/17050 to be fixed as well; I also have to check what the wait function does once I can reach that stage.

@athanatos
Contributor

Neat!

@tchaikov
Contributor

@vasukulkarni I just closed http://tracker.ceph.com/issues/17050 as "rejected".

@vasukulkarni
Contributor Author

@tchaikov thanks, that makes sense; I will make the changes and rerun.

@vasukulkarni vasukulkarni changed the title [DNM] Changes for daemon-helper to work with systemd Changes for daemon-helper to work with systemd Jan 13, 2017
@vasukulkarni
Contributor Author

@zmc this is tested and ready for merge.

zmc previously requested changes Mar 6, 2017

@zmc zmc left a comment (Member)

What controls whether or not we use systemd vs. the "old way" of managing processes?

I'm a little confused about the overall use of the term 'init' - this implementation won't easily support any other init system - not that I think it needs to.

I think that instead of all the copy/pasting re: systemctl commands, it would be much nicer to e.g. use a template string for all of them and perhaps call a function with the action, role and id to get the command string.
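
For illustration, a minimal sketch of that template approach (the helper name and exact template string are hypothetical, not the merged implementation):

```python
# Hypothetical helper: one template for every systemctl invocation,
# rather than a copy/pasted command string per action.
SYSTEMCTL_TEMPLATE = 'sudo systemctl {action} {role}@{id_}'

def build_systemctl_command(action, role, id_):
    """Return the systemctl command string for one daemon instance."""
    return SYSTEMCTL_TEMPLATE.format(action=action, role=role, id_=id_)

# build_systemctl_command('start', 'ceph-osd', '0')
# -> 'sudo systemctl start ceph-osd@0'
```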

Start this daemon instance.
"""
if not self.running():
    self.log.error('Restarting a running daemon')

This should probably be a warning

if self.use_init:
    self.log.info("using init to restart")
    if not self.running():
        self.log.error('starting a non-running daemon')

This should also be a warning I think


class DaemonState(object):
    """
    Daemon State. A daemon exists for each instance of each role.
    """
-    def __init__(self, remote, role, id_, *command_args, **command_kwargs):
+    def __init__(self, remote, role, id_, use_init=False, *command_args, **command_kwargs):

What is use_init? Should it be use_systemd?

-        self.log.info('Restarting daemon')
+        self.log.info('Restarting daemon with args')
+        if self.use_init:
+            self.log.info("restart with args not supported in init")

Perhaps this should be a warning - and it should mention systemd, no?


or raise NotImplementedError ?
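
A sketch of that alternative (method name and message are illustrative):

```python
def restart_with_args(self, *args):
    # Instead of only logging, fail loudly when the operation is
    # unsupported under systemd (hypothetical method, shown for shape).
    if self.use_init:
        raise NotImplementedError(
            'restart with args is not supported with systemd')
```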

@@ -108,6 +193,16 @@ def signal(self, sig, silent=False):

:param sig: signal to send
"""
if self.use_init:
    self.log.info("using init to send signal")
    self.log.info("WARNING init may restart after kill signal")

  • More vague usage of the term "init"
  • log.info() and 'WARNING' ?
  • What exactly does "init may restart" mean?

@@ -117,18 +212,35 @@ def running(self):
Are we running?
:return: True if remote run command value is set, False otherwise.
"""
if self.use_init:
    self.log.info("using init to send signal")
    self.log.info("WARNING init may restart after kill signal")

See above comment

return self.proc is not None

def reset(self):
    """
    clear remote run command value.
    """
    if self.use_init:
        self.log.info("reset not supported with init")

Does this really need to log anything?

self.proc = None

def wait_for_exit(self):
    """
    clear remote run command value after waiting for exit.
    """
    if self.use_init:
        self.log.info("wait_for_exit not supported with init")

Is logging necessary here?

    self.id = remote.shortname
else:
    self.id = id_
self.proc_regex = '"' + self.proc_name + '.*--id ' + self.id + '"'

Let's split out the pid-getting functionality into a separate method
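
A sketch of that refactor, wrapping the ps pipeline reviewed below in one method (the method name is illustrative; assumes teuthology's usual remote.run with a StringIO stdout buffer):

```python
from StringIO import StringIO  # teuthology was Python 2 at the time

from teuthology.orchestra import run

def get_pid(self):
    """Hypothetical DaemonState method: look up the daemon's pid via ps."""
    proc_regex = '"' + self.proc_name + '.*--id ' + self.id + '"'
    args = ['ps', '-ef',
            run.Raw('|'), 'grep', run.Raw(proc_regex),
            run.Raw('|'), 'grep', '-v', 'grep',
            run.Raw('|'), 'awk', run.Raw("{'print $2'}")]
    out = StringIO()
    self.remote.run(args=args, stdout=out)
    pid_string = out.getvalue().strip()
    return int(pid_string) if pid_string.isdigit() else None
```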

'grep', '-v', 'grep',
run.Raw('|'),
'awk', run.Raw("{'print $2'}")]

Instead of the above, could we use systemctl status ?
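
For reference, a sketch of that approach; the test logs later in this thread query unit state with `systemctl show <unit> | grep -i state` (the unit name below is an example):

```python
from teuthology.orchestra import run

# Example unit; the role and id would come from the DaemonState instance.
args = ['sudo', 'systemctl', 'show', 'ceph-osd@0',
        run.Raw('|'), 'grep', '-i', 'state']
# remote.run(args=args) prints lines such as:
#   LoadState=loaded
#   ActiveState=active
#   SubState=running
```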

@zmc
Member

zmc commented Mar 17, 2017

@vasukulkarni, any ETA on this one?

@vasukulkarni
Contributor Author

The internal task PR should be ready by Monday; soon after that I will pick this one up.

@zmc
Member

zmc commented Apr 13, 2017

@vasukulkarni and I agreed that I should help out with this PR. I'll rebase it and make changes as needed. In the meantime I will mark this DNM, as it hasn't been tested.

@zmc zmc changed the title Changes for daemon-helper to work with systemd DNM: Changes for orchestra.daemon to work with systemd Apr 13, 2017
@zmc zmc force-pushed the wip-daemon-helper-systemd branch 4 times, most recently from 440a5aa to 48a0322 on April 18, 2017 20:50
@zmc zmc force-pushed the wip-daemon-helper-systemd branch from 7029e62 to 93b05b4 on April 28, 2017 19:12
vasukulkarni and others added 11 commits September 21, 2017 12:49
When the cluster is set up using ceph-ansible or ceph-deploy,
use systemd commands to kill/revive daemons.

Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
... we already had self.id_

Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
Signed-off-by: Zack Cerza <zack@redhat.com>
We had this right for systemctl commands, but not for the journalctl
command.

Signed-off-by: Zack Cerza <zack@redhat.com>
@zmc
Member

zmc commented Sep 21, 2017

@vasukulkarni I've cleaned up the branch. Note:

  • It is still enabled by default on machines that use systemd. I need to think of how to make it configurable in a sane way
  • It will probably not work just yet (needs testing), so I'm less immediately concerned about its default state
  • It needs the changes in https://github.com/zmc/ceph/tree/wip-master-init to run

@vasukulkarni
Contributor Author

@zmc awesome thanks!

For 1) we can try to use config at a higher level, e.g. overrides: disable-daemon-helper: true or something, so that it's easier to run suites with overrides.
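
For example, a hypothetical job-config fragment along those lines (the key name and placement were not settled):

```yaml
overrides:
  # Hypothetical flag; exact name/placement still to be decided.
  disable-daemon-helper: true
```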

@vasukulkarni
Contributor Author

These are the first 10 tests that I ran with these changes; I have checked the logs for a few and don't see any issues 💯. If/when we enable the option to configure it from yaml, we could start enabling it on a per-suite basis.

http://pulpito.ceph.com/vasu-2017-09-29_22:02:40-smoke-luminous-distro-basic-ovh/

2017-09-29T22:23:17.464 INFO:tasks.ceph:Starting mon daemons in cluster ceph...
2017-09-29T22:23:17.464 INFO:teuthology.orchestra.run.ovh081:Running: 'which systemctl'
2017-09-29T22:23:17.620 INFO:teuthology.orchestra.run.ovh081.stdout:/bin/systemctl
2017-09-29T22:23:17.620 INFO:tasks.ceph.mon.a:Restarting daemon using systemd
2017-09-29T22:23:17.620 INFO:teuthology.orchestra.run.ovh081:Running: 'ps -ef | grep "ceph-mon.*--id a" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:17.806 INFO:tasks.ceph.mon.a:starting a non-running daemon
2017-09-29T22:23:17.806 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo systemctl start ceph-mon@a'
2017-09-29T22:23:17.993 INFO:teuthology.orchestra.run.ovh013:Running: 'which systemctl'
2017-09-29T22:23:18.096 INFO:teuthology.orchestra.run.ovh013.stdout:/bin/systemctl
2017-09-29T22:23:18.097 INFO:tasks.ceph.mon.b:Restarting daemon using systemd
2017-09-29T22:23:18.097 INFO:teuthology.orchestra.run.ovh013:Running: 'ps -ef | grep "ceph-mon.*--id b" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:18.255 INFO:tasks.ceph.mon.b:starting a non-running daemon
2017-09-29T22:23:18.256 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo systemctl start ceph-mon@b'
2017-09-29T22:23:18.444 INFO:tasks.ceph.mon.c:Restarting daemon using systemd
2017-09-29T22:23:18.445 INFO:teuthology.orchestra.run.ovh013:Running: 'ps -ef | grep "ceph-mon.*--id c" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:18.604 INFO:tasks.ceph.mon.c:starting a non-running daemon
2017-09-29T22:23:18.605 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo systemctl start ceph-mon@c'
2017-09-29T22:23:18.773 INFO:tasks.ceph:Starting mgr daemons in cluster ceph...
2017-09-29T22:23:18.773 INFO:tasks.ceph.mgr.x:Restarting daemon using systemd
2017-09-29T22:23:18.773 INFO:teuthology.orchestra.run.ovh081:Running: 'ps -ef | grep "ceph-mgr.*--id x" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:18.900 INFO:tasks.ceph.mgr.x:starting a non-running daemon
2017-09-29T22:23:18.900 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo systemctl start ceph-mgr@x'
2017-09-29T22:23:19.068 INFO:tasks.ceph.mgr.y:Restarting daemon using systemd
2017-09-29T22:23:19.068 INFO:teuthology.orchestra.run.ovh013:Running: 'ps -ef | grep "ceph-mgr.*--id y" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:19.182 INFO:tasks.ceph.mgr.y:starting a non-running daemon
2017-09-29T22:23:19.183 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo systemctl start ceph-mgr@y'
2017-09-29T22:23:19.352 INFO:tasks.ceph:Setting crush tunables to default
2017-09-29T22:23:19.353 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo ceph --cluster ceph osd crush tunables default'
2017-09-29T22:23:28.929 INFO:teuthology.orchestra.run.ovh081.stderr:adjusted tunables profile to default
2017-09-29T22:23:28.953 INFO:tasks.ceph:Starting osd daemons in cluster ceph...
2017-09-29T22:23:28.953 INFO:teuthology.orchestra.run.ovh081:Running: "python -c 'import os; import tempfile; import sys;(fd,fname) = tempfile.mkstemp();os.close(fd);sys.stdout.write(fname.rstrip());sys.stdout.flush()'"
2017-09-29T22:23:29.128 INFO:teuthology.orchestra.run.ovh081.stdout:/tmp/tmpELMXtd
2017-09-29T22:23:29.128 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo cp /var/lib/ceph/osd/ceph-0/fsid /tmp/tmpELMXtd'
2017-09-29T22:23:29.291 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo chmod 0666 /tmp/tmpELMXtd'
2017-09-29T22:23:29.717 DEBUG:teuthology.orchestra.remote:ovh081:/tmp/tmpELMXtd is 37B
2017-09-29T22:23:30.094 INFO:teuthology.orchestra.run.ovh081:Running: 'rm -fr /tmp/tmpELMXtd'
2017-09-29T22:23:30.197 INFO:teuthology.orchestra.run.ovh081:Running: "python -c 'import os; import tempfile; import sys;(fd,fname) = tempfile.mkstemp();os.close(fd);sys.stdout.write(fname.rstrip());sys.stdout.flush()'"
2017-09-29T22:23:30.354 INFO:teuthology.orchestra.run.ovh081.stdout:/tmp/tmpjm5VFo
2017-09-29T22:23:30.355 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo cp /var/lib/ceph/osd/ceph-1/fsid /tmp/tmpjm5VFo'
2017-09-29T22:23:30.500 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo chmod 0666 /tmp/tmpjm5VFo'
2017-09-29T22:23:30.926 DEBUG:teuthology.orchestra.remote:ovh081:/tmp/tmpjm5VFo is 37B
2017-09-29T22:23:31.305 INFO:teuthology.orchestra.run.ovh081:Running: 'rm -fr /tmp/tmpjm5VFo'
2017-09-29T22:23:31.412 INFO:teuthology.orchestra.run.ovh013:Running: "python -c 'import os; import tempfile; import sys;(fd,fname) = tempfile.mkstemp();os.close(fd);sys.stdout.write(fname.rstrip());sys.stdout.flush()'"
2017-09-29T22:23:31.540 INFO:teuthology.orchestra.run.ovh013.stdout:/tmp/tmpRkW24A
2017-09-29T22:23:31.541 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo cp /var/lib/ceph/osd/ceph-2/fsid /tmp/tmpRkW24A'
2017-09-29T22:23:31.689 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo chmod 0666 /tmp/tmpRkW24A'
2017-09-29T22:23:32.110 DEBUG:teuthology.orchestra.remote:ovh013:/tmp/tmpRkW24A is 37B
2017-09-29T22:23:32.451 INFO:teuthology.orchestra.run.ovh013:Running: 'rm -fr /tmp/tmpRkW24A'
2017-09-29T22:23:32.555 INFO:teuthology.orchestra.run.ovh013:Running: "python -c 'import os; import tempfile; import sys;(fd,fname) = tempfile.mkstemp();os.close(fd);sys.stdout.write(fname.rstrip());sys.stdout.flush()'"
2017-09-29T22:23:32.679 INFO:teuthology.orchestra.run.ovh013.stdout:/tmp/tmpyKCeG6
2017-09-29T22:23:32.680 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo cp /var/lib/ceph/osd/ceph-3/fsid /tmp/tmpyKCeG6'
2017-09-29T22:23:32.827 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo chmod 0666 /tmp/tmpyKCeG6'
2017-09-29T22:23:33.256 DEBUG:teuthology.orchestra.remote:ovh013:/tmp/tmpyKCeG6 is 37B
2017-09-29T22:23:33.596 INFO:teuthology.orchestra.run.ovh013:Running: 'rm -fr /tmp/tmpyKCeG6'
2017-09-29T22:23:33.698 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo ceph --cluster ceph osd new 7acbb9d6-5c31-4e38-b263-d10e47c81467 0'
2017-09-29T22:23:34.162 INFO:teuthology.orchestra.run.ovh013.stdout:0
2017-09-29T22:23:34.181 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo ceph --cluster ceph osd new cea06024-b985-4071-8f31-6255ec64aeb1 1'
2017-09-29T22:23:34.597 INFO:teuthology.orchestra.run.ovh013.stdout:1
2017-09-29T22:23:34.618 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo ceph --cluster ceph osd new 908f5340-b3c8-4077-afd7-fb39d049848e 2'
2017-09-29T22:23:35.025 INFO:teuthology.orchestra.run.ovh013.stdout:2
2017-09-29T22:23:35.043 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo ceph --cluster ceph osd new 699ab681-3549-4950-8111-733baf792edd 3'
2017-09-29T22:23:35.429 INFO:teuthology.orchestra.run.ovh013.stdout:3
2017-09-29T22:23:35.443 INFO:tasks.ceph.osd.0:Restarting daemon using systemd
2017-09-29T22:23:35.443 INFO:teuthology.orchestra.run.ovh081:Running: 'ps -ef | grep "ceph-osd.*--id 0" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:35.561 INFO:tasks.ceph.osd.0:starting a non-running daemon
2017-09-29T22:23:35.561 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo systemctl start ceph-osd@0'
2017-09-29T22:23:35.776 INFO:tasks.ceph.osd.1:Restarting daemon using systemd
2017-09-29T22:23:35.776 INFO:teuthology.orchestra.run.ovh081:Running: 'ps -ef | grep "ceph-osd.*--id 1" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:35.924 INFO:tasks.ceph.osd.1:starting a non-running daemon
2017-09-29T22:23:35.924 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo systemctl start ceph-osd@1'
2017-09-29T22:23:36.142 INFO:tasks.ceph.osd.2:Restarting daemon using systemd
2017-09-29T22:23:36.142 INFO:teuthology.orchestra.run.ovh013:Running: 'ps -ef | grep "ceph-osd.*--id 2" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:36.232 INFO:tasks.ceph.osd.2:starting a non-running daemon
2017-09-29T22:23:36.232 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo systemctl start ceph-osd@2'
2017-09-29T22:23:36.445 INFO:tasks.ceph.osd.3:Restarting daemon using systemd
2017-09-29T22:23:36.445 INFO:teuthology.orchestra.run.ovh013:Running: 'ps -ef | grep "ceph-osd.*--id 3" | grep -v grep | awk {\'print $2\'}'
2017-09-29T22:23:36.603 INFO:tasks.ceph.osd.3:starting a non-running daemon
2017-09-29T22:23:36.603 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo systemctl start ceph-osd@3'
2017-09-29T22:23:36.829 INFO:tasks.ceph:Waiting for OSDs to come up
2017-09-29T22:23:36.830 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo systemctl show ceph-osd@1 | grep -i state'
2017-09-29T22:23:36.944 INFO:teuthology.orchestra.run.ovh081.stdout:LoadState=loaded
2017-09-29T22:23:36.944 INFO:teuthology.orchestra.run.ovh081.stdout:ActiveState=active
2017-09-29T22:23:36.944 INFO:teuthology.orchestra.run.ovh081.stdout:SubState=running
2017-09-29T22:23:36.945 INFO:teuthology.orchestra.run.ovh081.stdout:UnitFileState=disabled
2017-09-29T22:23:36.945 INFO:teuthology.orchestra.run.ovh081.stdout:StateChangeTimestamp=Fri 2017-09-29 22:23:36 UTC
2017-09-29T22:23:36.945 INFO:teuthology.orchestra.run.ovh081.stdout:StateChangeTimestampMonotonic=687976204
2017-09-29T22:23:36.945 INFO:teuthology.orchestra.run.ovh081:Running: 'sudo systemctl show ceph-osd@0 | grep -i state'
2017-09-29T22:23:37.095 INFO:teuthology.orchestra.run.ovh081.stdout:LoadState=loaded
2017-09-29T22:23:37.096 INFO:teuthology.orchestra.run.ovh081.stdout:ActiveState=active
2017-09-29T22:23:37.096 INFO:teuthology.orchestra.run.ovh081.stdout:SubState=running
2017-09-29T22:23:37.096 INFO:teuthology.orchestra.run.ovh081.stdout:UnitFileState=disabled
2017-09-29T22:23:37.096 INFO:teuthology.orchestra.run.ovh081.stdout:StateChangeTimestamp=Fri 2017-09-29 22:23:35 UTC
2017-09-29T22:23:37.096 INFO:teuthology.orchestra.run.ovh081.stdout:StateChangeTimestampMonotonic=687612366
2017-09-29T22:23:37.096 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo systemctl show ceph-osd@3 | grep -i state'
2017-09-29T22:23:37.215 INFO:teuthology.orchestra.run.ovh013.stdout:LoadState=loaded
2017-09-29T22:23:37.215 INFO:teuthology.orchestra.run.ovh013.stdout:ActiveState=active
2017-09-29T22:23:37.215 INFO:teuthology.orchestra.run.ovh013.stdout:SubState=running
2017-09-29T22:23:37.216 INFO:teuthology.orchestra.run.ovh013.stdout:UnitFileState=disabled
2017-09-29T22:23:37.216 INFO:teuthology.orchestra.run.ovh013.stdout:StateChangeTimestamp=Fri 2017-09-29 22:23:36 UTC
2017-09-29T22:23:37.216 INFO:teuthology.orchestra.run.ovh013.stdout:StateChangeTimestampMonotonic=683024266
2017-09-29T22:23:37.216 INFO:teuthology.orchestra.run.ovh013:Running: 'sudo systemctl show ceph-osd@2 | grep -i state'
2017-09-29T22:23:37.370 INFO:teuthology.orchestra.run.ovh013.stdout:LoadState=loaded
2017-09-29T22:23:37.371 INFO:teuthology.orchestra.run.ovh013.stdout:ActiveState=active
2017-09-29T22:23:37.371 INFO:teuthology.orchestra.run.ovh013.stdout:SubState=running
2017-09-29T22:23:37.371 INFO:teuthology.orchestra.run.ovh013.stdout:UnitFileState=disabled
2017-09-29T22:23:37.371 INFO:teuthology.orchestra.run.ovh013.stdout:StateChangeTimestamp=Fri 2017-09-29 22:23:36 UTC
2017-09-29T22:23:37.371 INFO:teuthology.orchestra.run.ovh013.stdout:StateChangeTimestampMonotonic=682640410
2017-09-29T22:23:37.372 INFO:teuthology.orchestra.run.ovh081:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'

@vasukulkarni
Contributor Author

I will also schedule a full smoke suite today

@vasukulkarni
Contributor Author

I think we are probably missing some setting; I need to check with @jdurgin on what needs to be done for this to pass:

http://qa-proxy.ceph.com/teuthology/vasu-2017-10-03_18:00:35-smoke-luminous-distro-basic-ovh/1699376/teuthology.log

2017-10-03T18:32:34.026 INFO:teuthology.orchestra.run.ovh028:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0'
2017-10-03T18:32:34.219 INFO:teuthology.orchestra.run.ovh028.stderr:2017-10-03 18:32:34.193614 7f45c6cfd700 -1 WARNING: all dangerous and experimental features are enabled.
2017-10-03T18:32:34.241 INFO:teuthology.orchestra.run.ovh028.stderr:2017-10-03 18:32:34.215396 7f45c6cfd700 -1 WARNING: all dangerous and experimental features are enabled.
2017-10-03T18:32:34.434 INFO:teuthology.orchestra.run.ovh028.stdout:0
2017-10-03T18:32:34.434 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574867 got 0 for osd.0

b) http://tracker.ceph.com/issues/21671

@vasukulkarni
Contributor Author

I will fix the current cluster configuration to properly test with systemd without overloads and rerun; not a really great run with the current cluster config here: http://pulpito.ceph.com/vasu-2017-10-03_18:00:35-smoke-luminous-distro-basic-ovh/

To use systemd, any tasks which create a DaemonGroup object must pass
use_systemd=True. This could be accomplished by looking for a flag in
the job config. Note that this option is mainly for regression testing;
it may be removed in the future, at which point the default will be to use
systemd when possible.

Signed-off-by: Zack Cerza <zack@redhat.com>
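
As a sketch of what that could look like in a task (the config key and helper function are hypothetical; only the use_systemd=True keyword comes from the commit message above):

```python
from teuthology.orchestra.daemon import DaemonGroup

def create_daemon_group(ctx, config):
    # Hypothetical job-config flag controlling systemd usage; the commit
    # message only says tasks "must pass use_systemd=True" to DaemonGroup.
    use_systemd = bool(config.get('use_systemd', False))
    return DaemonGroup(use_systemd=use_systemd)
```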
@zmc zmc force-pushed the wip-daemon-helper-systemd branch from d64addb to 51464ba on October 12, 2017 19:42
@vasukulkarni
Contributor Author

@zmc this is ready to be merged!

Tested here: http://pulpito.ceph.com/vasu-2017-10-14_17:53:49-smoke-luminous-distro-basic-ovh/
The cephfs failures are due to the distro kernel.

I also used the qa branch at https://github.com/zmc/ceph/tree/wip-master-init, so we should be able to merge both to master.


@zmc
Member

zmc commented Oct 17, 2017

@vasukulkarni is there a run that mirrors one of those but uses master branches?

@vasukulkarni
Contributor Author

@zmc scheduled here now; I had just used the stable branch before: http://pulpito.ceph.com/vasu-2017-10-17_22:00:28-smoke-master-testing-basic-ovh/

@zmc
Member

zmc commented Oct 17, 2017

@vasukulkarni that's not using master for teuthology :-/

@vasukulkarni
Contributor Author

@zmc it's not clear what you are asking in that comment; I don't understand why the 'master' branch of teuthology needs to be tested.

@zmc
Member

zmc commented Oct 17, 2017 via email

@vasukulkarni
Contributor Author

@zmc ok, if you need it for comparison, here is one from the nightly runs: http://pulpito.ceph.com/teuthology-2017-10-17_05:02:02-smoke-master-testing-basic-ovh/

The additional 2 rgw failures are due to using our own qa branch, which does not have an equivalent branch in the s3tests git repo.

@zmc
Member

zmc commented Oct 17, 2017

@vasukulkarni ah, that's definitely helpful, thanks!

@vasukulkarni
Contributor Author

Can't wait for this to be merged! Please merge when you are done reviewing; I can merge the qa part.

@zmc zmc dismissed their stale review October 18, 2017 18:50

Need a review from @vasukulkarni now

@zmc
Member

zmc commented Oct 18, 2017

@vasukulkarni because of the way we've done this PR, it won't let you formally review. So if you could give a thumbs up or something, I can review and then we can merge.

@zmc zmc changed the title DNM: Changes for orchestra.daemon to work with systemd Changes for orchestra.daemon to work with systemd Oct 18, 2017
@vasukulkarni
Contributor Author

👍 👍 LGTM; ready for merge along with the qa branch at https://github.com/zmc/ceph/tree/wip-master-init

@zmc
Member

zmc commented Oct 18, 2017

@vasukulkarni - just filed the ceph PR

@zmc zmc left a comment (Member)

@vasukulkarni
Contributor Author

@zmc thanks, will merge both once the qa is green.

@vasukulkarni vasukulkarni merged commit c0439f3 into master Oct 18, 2017
@vasukulkarni vasukulkarni deleted the wip-daemon-helper-systemd branch October 18, 2017 19:59