Fix GlusterFS module for version 4.0 and above #48222

Merged
merged 6 commits into saltstack:develop from Martin819:develop on Jun 28, 2018

Conversation

2 participants
@Martin819
Contributor

Martin819 commented Jun 20, 2018

What does this PR do?

Fixes the GlusterFS module for GlusterFS 4.0 and above. The module contains conditions that check whether the GlusterFS version is lower than 3.6, because older versions need a different syntax to parse the response. Unfortunately, those conditions only compare the minor version, so running against version 4.0 fails, because 0 < 6 is true.

What issues does this PR fix or reference?

Fixed by changing the condition to compare both the major and minor versions, as sketched below.
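
As a rough sketch of the idea only (the helper name below is hypothetical and this is not the exact code in salt/modules/glusterfs.py), comparing the full (major, minor) tuple instead of only the minor version stops 4.0 from being treated as older than 3.6:

    # Hypothetical sketch of the version check, not the actual module code.
    def _get_version(version_line):
        '''Parse e.g. "glusterfs 4.0.2" into a (major, minor) tuple.'''
        parts = version_line.split()[-1].split('.')
        return int(parts[0]), int(parts[1])

    version = _get_version('glusterfs 4.0.2')

    # Broken check: only the minor version is compared, so 4.0 looks
    # "older than 3.6" because 0 < 6.
    legacy = version[1] < 6          # True, which is wrong

    # Fixed check: tuple comparison considers the major version first,
    # so (4, 0) < (3, 6) is False.
    legacy = version < (3, 6)        # False, which is correct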

Previous Behavior

The module failed to parse the response, causing the state to fail.

    Function: glusterfs.peered
        Name: 172.16.10.93
      Result: False
     Comment: An exception occurred in this state: Traceback (most recent call last):
                File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1905, in call
                  **cdata['kwargs'])
                File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1830, in wrapper
                  return f(*args, **kwargs)
                File "/usr/lib/python2.7/dist-packages/salt/states/glusterfs.py", line 84, in peered
                  peers = __salt__['glusterfs.peer_status']()
                File "/usr/lib/python2.7/dist-packages/salt/modules/glusterfs.py", line 162, in peer_status
                  root = _gluster_xml('peer status')
                File "/usr/lib/python2.7/dist-packages/salt/modules/glusterfs.py", line 86, in _gluster_xml
                  raise CommandExecutionError('\n'.join(result.splitlines()[:-1]))
              CommandExecutionError: peer status
              gluster> peer status
              <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
              <cliOutput>
                <opRet>0</opRet>
                <opErrno>0</opErrno>
                <opErrstr/>
                <peerStatus>
                   <peer>
                    <uuid>28df4ec7-066d-411a-8025-62f6389175aa</uuid>
                    <hostname>172.16.10.92</hostname>
                    <hostnames>
                      <hostname>172.16.10.92</hostname>
                    </hostnames>
                    <connected>1</connected>
                    <state>3</state>
                    <stateStr>Peer in Cluster</stateStr>
                  </peer>
                </peerStatus>
              </cliOutput>
     Started: 15:31:27.224687
    Duration: 488.657 ms

New Behavior

State successfully finished.

          ID: glusterfs_peers
    Function: glusterfs.peered
        Name: 172.16.10.93
      Result: True
     Comment: Host 172.16.10.93 successfully peered
     Started: 08:10:31.702237
    Duration: 647.8 ms
     Changes:   
              ----------
              new:
                  ----------
                  28df4ec7-066d-411a-8025-62f6389175aa:
                      ----------
                      connected:
                          1
                      hostnames:
                          - 172.16.10.92
                      state:
                          3
                      stateStr:
                          Peer in Cluster
                  7688e4c0-2eab-402f-90cb-dc52dc306d30:
                      ----------
                      connected:
                          1
                      hostnames:
                          - 172.16.10.93
                      state:
                          4
                      stateStr:
                          Accepted peer request
              old:
                  ----------
                  28df4ec7-066d-411a-8025-62f6389175aa:
                      ----------
                      connected:
                          1
                      hostnames:
                          - 172.16.10.92
                      state:
                          3
                      stateStr:
                          Peer in Cluster

@Martin819 Martin819 force-pushed the Martin819:develop branch from 4ffc902 to 3449833 Jun 20, 2018

@rallytime

Contributor

rallytime commented Jun 25, 2018

Hi @Martin819 - This is failing some tests. Can you take a look?

unit.modules.test_glusterfs.GlusterfsTestCase.test_add_volume_bricks
unit.modules.test_glusterfs.GlusterfsTestCase.test_create_volume
unit.modules.test_glusterfs.GlusterfsTestCase.test_delete_volume
unit.modules.test_glusterfs.GlusterfsTestCase.test_get_max_op_version
unit.modules.test_glusterfs.GlusterfsTestCase.test_get_op_version
unit.modules.test_glusterfs.GlusterfsTestCase.test_list_volumes
unit.modules.test_glusterfs.GlusterfsTestCase.test_peer
unit.modules.test_glusterfs.GlusterfsTestCase.test_peer_status
unit.modules.test_glusterfs.GlusterfsTestCase.test_set_op_version
unit.modules.test_glusterfs.GlusterfsTestCase.test_start_volume
unit.modules.test_glusterfs.GlusterfsTestCase.test_status
unit.modules.test_glusterfs.GlusterfsTestCase.test_stop_volume
unit.modules.test_glusterfs.GlusterfsTestCase.test_volume_info

https://jenkins.saltstack.com/job/PR/job/salt-pr-linode-cent7-py3/5912/#showFailuresLink

@Martin819 Martin819 force-pushed the Martin819:develop branch from c352b4b to 943aab1 Jun 26, 2018

@Martin819

Contributor

Martin819 commented Jun 26, 2018

@rallytime Hi, thanks for pinging me about it.
It was a stupid mistake. Should be fixed now.

@rallytime

Thanks @Martin819 for submitting this.

@rallytime

Contributor

rallytime commented Jun 26, 2018

Hi @Martin819 - There's just one more glusterfs test failure that needs to be fixed.

  • unit.modules.test_glusterfs.GlusterfsTestCase.test_get_max_op_version

https://jenkins.saltstack.com/job/PR/job/salt-pr-linode-ubuntu14-n/23969/

@Martin819 Martin819 force-pushed the Martin819:develop branch 2 times, most recently from 2922ee3 to e57bea7 Jun 27, 2018

@Martin819

Contributor

Martin819 commented Jun 28, 2018

@rallytime This should be fixed now; I updated the test case. It uses a mocked version, so I had to adjust it to the new format.
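
For illustration only (the helper and test names below are hypothetical and do not match the real GlusterfsTestCase), the adjustment amounts to making the mock return the full version tuple the module now expects instead of a bare minor version:

    # Hypothetical sketch of updating a version mock, not the actual Salt test.
    try:
        from unittest.mock import patch   # Python 3
    except ImportError:
        from mock import patch            # Python 2

    def _get_version():
        '''Stand-in for the helper that parses `gluster --version`.'''
        return (3, 6)

    def _uses_legacy_output():
        '''Stand-in for code that branches on the GlusterFS version.'''
        return _get_version() < (3, 6)

    def test_new_version_format():
        # The mocked value used to be just a minor version (e.g. 6);
        # with the new format it has to be the (major, minor) tuple.
        with patch(__name__ + '._get_version', return_value=(4, 0)):
            assert not _uses_legacy_output()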

@Martin819 Martin819 force-pushed the Martin819:develop branch from 064e1d3 to 1f98842 Jun 28, 2018

@Martin819 Martin819 force-pushed the Martin819:develop branch from 1f98842 to fce8877 Jun 28, 2018

Martin819 added some commits Jun 28, 2018

@rallytime rallytime merged commit bd5da83 into saltstack:develop Jun 28, 2018

10 of 17 checks passed

  • default Build started sha1 is merged.
  • jenkins/PR/salt-pr-linode-cent7-py3 Pull Requests » Salt PR - Linode CentOS 7 - PY3 #6104 — RUNNING
  • jenkins/PR/salt-pr-linode-ubuntu16-py3 Pull Requests » Salt PR - Linode Ubuntu16.04 - PY3 #11074 — RUNNING
  • jenkins/PR/salt-pr-rs-cent7-n Pull Requests » Salt PR - RS CentOS 7 #20157 — RUNNING
  • jenkins/pr/py2-centos-7 running py2-centos-7...
  • jenkins/pr/py3-centos-7 running py3-centos-7...
  • jenkins/pr/py3-ubuntu-1604 running py3-ubuntu-1604...
  • WIP ready for review
  • codeclimate All good!
  • continuous-integration/jenkins/pr-merge This commit looks good
  • jenkins/PR/salt-pr-clone Pull Requests » Salt PR - Clone #26308 — SUCCESS
  • jenkins/PR/salt-pr-docs-n Pull Requests » Salt PR - Docs #18354 — SUCCESS
  • jenkins/PR/salt-pr-linode-ubuntu14-n Pull Requests » Salt PR - Linode Ubuntu14.04 #24032 — SUCCESS
  • jenkins/PR/salt-pr-lint-n Pull Requests » Salt PR - Code Lint #22989 — SUCCESS
  • jenkins/pr/docs The docs job has passed
  • jenkins/pr/lint The lint job has passed
  • jenkins/pr/py2-ubuntu-1604 The py2-ubuntu-1604 job has passed
@Martin819

Contributor

Martin819 commented Jun 28, 2018

@rallytime
Thanks for merging. Do you think this could be backported to 2017.7?

@rallytime

Contributor

rallytime commented Jun 28, 2018

Yeah, for sure.

@rallytime

Contributor

rallytime commented Jul 2, 2018

Hi @Martin819 - It looks like this module has diverged in develop from the earlier branches and this won't back-port cleanly. If you'd like this fix to be in 2017.7, then you'll need to make a separate fix against that branch. Would you mind submitting that?

@Martin819

Contributor

Martin819 commented Jul 11, 2018

Hi @rallytime, I can do that for sure.

cachedout added a commit that referenced this pull request Jul 11, 2018
