new module: AIX Volume Group creating, resizing, removing #30381
This is a module to manage Volume Groups through the AIX Logical Volume Manager.
@AugustusKling @ColOfAbRiX @DavidWittman @EvanK @LinusU @abulimov @adejoux @agaffney @ahtik @Akasurde @azaghal @dankeder @david_obrien @davixx @dougluce @dsummersl @giovannisciortino @goozbach @groks @haad @hryamzik @jasperla @jhoekx @jsumners @jtyr @kevensen @lberruti @matze @maxamillion @mcv21 @molekuul @mpdehaan @mulby @natefoo @nibalizer @ovcharenko @pmarkham @pyykkis @risaacson @rosmo @saito-hideki @sfromm @srvg @tdtrask @tmshn @xen0l
As a maintainer of a module in the same namespace this new module has been submitted to, your vote counts for shipits. Please review this module and add your shipit if appropriate.
I have some remarks on the module:
When a disk is already in use by a volume group, you exit the module with a failure.
If you give a list of disks, the module will fail if even one disk fails the "usage check". Is this what you want?
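A fail-fast check along these lines could be sketched as follows (the helper name is mine, and I'm assuming the usual `lspv` column layout of PV name, PVID, VG name or `None`, state):

```python
def find_disks_in_use(lspv_output, disks):
    """Return {disk: vg} for every requested disk that already
    belongs to a volume group, according to `lspv` output."""
    in_use = {}
    for line in lspv_output.splitlines():
        fields = line.split()
        # lspv prints: PV name, PV identifier, VG name (or 'None'), state
        if len(fields) >= 3 and fields[2] != 'None':
            in_use[fields[0]] = fields[2]
    return dict((d, in_use[d]) for d in disks if d in in_use)
```

The module could then call `fail_json` whenever the returned dict is non-empty, so a single bad disk aborts the whole operation before anything is changed.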
Check mode is a mode in Ansible where you report what would change if you ran the module with these parameters, without actually performing the action. You did not incorporate this in the module, yet you state that you support check mode.
There are limits on how many PPs a disk can contain. An AIX system calculates the PP size from the size of the disk if you do not specify it. Why not let the system decide the PP size when it is not specified in the playbook?
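One way to do that is simply to omit the `-s` flag when no PP size is given in the playbook, so `mkvg` computes it itself. A sketch (the function name and parameters are illustrative, not the module's actual code):

```python
def build_mkvg_command(mkvg_path, vg, disks, pp_size=None, force=False):
    """Assemble a mkvg command line; leave out -s so AIX
    picks the PP size itself when pp_size is None."""
    cmd = [mkvg_path]
    if force:
        cmd.append('-f')
    if pp_size is not None:
        cmd.extend(['-s', str(pp_size)])  # PP size in MB
    cmd.extend(['-y', vg])
    cmd.extend(disks)
    return cmd
```

With this shape, `pp_size` can stay an optional module parameter with no default, and AIX handles the sizing rules.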
When you remove a PV from a volume group, you do not check whether the PV is actually in that volume group, so the command will fail. The end result is still correct (the disk is not part of the volume group, as you want), but you should handle this situation better.
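To make the removal idempotent, the module could first parse `lsvg -p <vg>` and only pass PVs that are actually members on to `reducevg`; anything else is already in the desired state. A sketch (the helper name is mine):

```python
def pvs_in_vg(lsvg_p_output):
    """Parse `lsvg -p <vg>` output and return the set of member PV names.
    The first line is '<vg>:' and the second is the column header row."""
    members = set()
    for line in lsvg_p_output.splitlines()[2:]:
        fields = line.split()
        if fields:
            members.add(fields[0])
    return members
```

Then something like `to_remove = [pv for pv in requested if pv in pvs_in_vg(out)]`; an empty list means the module can report `changed=False` without running `reducevg` at all.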
Take into account that a volume group must be varied on before you can take actions on it.
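Since `lsvg -o` lists only the varied-on volume groups (one name per line), a guard for this could be as small as (helper name is hypothetical):

```python
def vg_is_active(lsvg_o_output, vg):
    """True if `vg` appears in the `lsvg -o` listing of active VGs."""
    return vg in lsvg_o_output.split()
```

The module would then fail early, or varyon the group first, whenever this returns False before attempting an extend or reduce.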
I hope you can do something with my remarks.
I would like to ask some things regarding your remarks.
I agree with that point and I'm implementing it.
My approach is wrong; the return should be the same for the ASM and VG cases. IMHO, if you give a list of disks and one disk is wrong, the module should fail and force the administrator to correct the pool, avoiding wrong resizes. Anyway, I'm open to opinions.
I want to implement it, but I'm quite confused about how to do it.
Totally agree, I'm changing it.
It is handled by AIX as well, but to be safe I will do it in the module too.
We will also have this situation when the volume group still has mounted filesystems or open LVs. I think it is better to leave that up to AIX, no?
I'm including this verification at the top.
Check mode should be implemented so that the module computes what would change, but only runs the commands when not in check mode. For example:

```python
def state_vg(module, vg, state, vg_state):
    changed = False
    ...
    if state == 'varyon':
        if vg_state is False:
            changed = True
            if not module.check_mode:
                varyonvg_cmd = module.get_bin_path("varyonvg", True)
                rc, varyonvg_out, err = module.run_command(
                    "%s %s" % (varyonvg_cmd, vg))
                if rc != 0:
                    module.fail_json(
                        msg="...", stdout=varyonvg_out, stderr=err)
    ...
    return changed


def main():
    ...
    if state == 'present':
        changed = create_extend_vg(...)
    elif state == 'absent':
        changed = reduce_vg(...)
    elif state in ('varyon', 'varyoff'):
        changed = state_vg(...)
    module.exit_json(changed=changed, state=state)
```

(Note: `module.failed_json` is not a real method, it must be `module.fail_json`; `state == 'varyon' or 'varyoff'` is always true, so the membership test above is needed; and `exit_json` should report the computed `changed`, not a hard-coded `True`.)
Looks like all that is needed is to change

```yaml
force:
  description:
    - Forces volume group creation.
  choices: [True, False]
  default: "no"
```

to

```yaml
force:
  description:
    - Forces volume group creation.
  type: bool
  default: "no"
```
If I'm reading the error and the documentation correctly.