
Common: changed civetweb line in rgw section(conf) #1731

Merged
merged 1 commit into from Aug 23, 2017

Conversation

SirishaGuduru
Contributor

@SirishaGuduru SirishaGuduru commented Jul 31, 2017

Resolves issue: Multiple RGW Ceph.conf Issue #1258

In a multi-RGW setup, the RGW sections in ceph.conf all
contain the same bind IP on the civetweb line. This
change fixes that and sets the correct IP for
each RGW.

Closes: #1767
Signed-off-by: SirishaGuduru SGuduru@walmartlabs.com
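The fix described above can be sketched as a Jinja2 fragment of the kind ceph.conf.j2 would need: one rgw section per host, each rendered with its own address. Variable and group names here are illustrative assumptions, not the exact ceph-ansible template.

```jinja
{# Hypothetical sketch: emit one [client.rgw.*] section per RGW host, #}
{# each with its own bind IP, instead of one shared IP for all.       #}
{% for host in groups.get('rgws', []) %}
[client.rgw.{{ hostvars[host]['ansible_hostname'] }}]
host = {{ hostvars[host]['ansible_hostname'] }}
rgw frontends = civetweb port={{ hostvars[host]['radosgw_address'] }}:{{ radosgw_civetweb_port }}
{% endfor %}
```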

@leseb
Member

leseb commented Jul 31, 2017

jenkins test this please

@leseb leseb left a comment
Member

@SirishaGuduru
Contributor Author

@leseb, in this section "https://github.com/SirishaGuduru/ceph-ansible/blob/341af20e6820a924df11cb983aaa674a07954606/roles/ceph-common/templates/ceph.conf.j2#L67-L73" the monitor address is basically the address of the mon nodes. If we have RGWs residing on nodes other than the mon nodes, how will this work? Also, this condition gives the same IP to all RGW sections.

@leseb
Member

leseb commented Aug 2, 2017

@SirishaGuduru sorry, I might not have been very clear. You need to re-use this entire block https://github.com/SirishaGuduru/ceph-ansible/blob/341af20e6820a924df11cb983aaa674a07954606/roles/ceph-common/templates/ceph.conf.j2#L67-L80, keeping the block structure, but you have to create a set of variables for rgw:

  • radosgw_interface
  • radosgw_address

Then, you need to make these variables mandatory by adding a check in the rgw role; see the examples for the mon role here: https://github.com/ceph/ceph-ansible/blob/149f21b2eff8d37c979148fb838bb73bfd5b00a6/roles/ceph-mon/tasks/check_mandatory_vars.yml

Hope it's clearer now.
Thanks!
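The mandatory-variable check referenced above could look roughly like the following for rgw; the task wording and the sentinel default values are illustrative assumptions, not the actual ceph-ansible tasks.

```yaml
# Hypothetical sketch of an rgw check_mandatory_vars task: fail early
# unless one of the rgw address variables was set by the user.
# The sentinel values ('interface', '0.0.0.0', 'subnet') are assumed defaults.
- name: make sure radosgw_interface, radosgw_address or radosgw_address_block is configured
  fail:
    msg: "you must set radosgw_interface, radosgw_address or radosgw_address_block"
  when:
    - radosgw_interface == 'interface'
    - radosgw_address == '0.0.0.0'
    - radosgw_address_block == 'subnet'
```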

@SirishaGuduru
Contributor Author

@leseb, it's clear now. I will look at how mon is implemented and try doing the same.

@leseb
Member

leseb commented Aug 3, 2017

@SirishaGuduru thanks!

@leseb leseb left a comment
Member

You don't need to delete manually, you need to rebase your branch on master and then push force

@SirishaGuduru
Contributor Author

SirishaGuduru commented Aug 4, 2017

@leseb, I did try rebasing on master. There was a merge conflict with this file on my branch, since my PR also changes it, so I had to make a commit.

@leseb
Member

leseb commented Aug 4, 2017

@SirishaGuduru if there is a conflict you have to resolve it, then you can continue your rebase and push cleanly.
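The rebase-then-force-push workflow described here can be sketched as a self-contained demo in a throwaway repository; branch and file names below are made up for illustration, not the actual PR branch state.

```shell
#!/bin/sh
# Demo of "rebase on master, then push force" in a scratch repository.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git defaults
echo one > base.txt; git add base.txt; git commit -qm "base"
git checkout -q -b rgw-civetwebIP-conf
echo feature > feature.txt; git add feature.txt; git commit -qm "feature work"
git checkout -q "$base"
echo upstream > upstream.txt; git add upstream.txt; git commit -qm "upstream moved on"
git checkout -q rgw-civetwebIP-conf
git rebase -q "$base"                   # on conflict: fix files, git add, git rebase --continue
git rev-list --count HEAD               # the branch now carries all three commits
# finally, overwrite the remote branch: git push --force origin rgw-civetwebIP-conf
```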

@guits
Collaborator

guits commented Aug 7, 2017

@SirishaGuduru I think you forgot to modify roles/ceph-defaults/defaults/main.yml.

You shouldn't edit *.yml.sample files directly.
Basically you edit roles/ceph-*/defaults/main.yml and then run generate_group_vars_sample.sh at the repository root to generate the *.yml.sample files.
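As a rough illustration of why regeneration touches every sample file, here is a hypothetical, simplified Python model of what a generator like generate_group_vars_sample.sh does conceptually: copy a defaults file and comment every active variable out, so any drift between defaults and samples shows up as a diff. The real script is a shell script and differs in detail.

```python
# Hypothetical, simplified model of a group_vars sample generator:
# keep blanks and comments, comment out every active variable line.
def to_sample(defaults_text: str) -> str:
    out = []
    for line in defaults_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            out.append(line)        # keep blank lines and existing comments
        else:
            out.append("#" + line)  # comment out active variables
    return "\n".join(out) + "\n"

defaults = "radosgw_civetweb_port: 8080\n# docs\nradosgw_interface: eth0\n"
print(to_sample(defaults), end="")
```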

@SirishaGuduru
Contributor Author

SirishaGuduru commented Aug 7, 2017

@guits, sorry, I didn't change roles/ceph-defaults/defaults/main.yml because I hadn't noticed the recent commits. But when I run generate_group_vars_sample.sh, some other sample files also get changed. Is there a way to run it only for the changes in all.yml.sample (because my changes are only in that file)?

@guits
Collaborator

guits commented Aug 7, 2017

As far as I know, if you rebased properly on master, you shouldn't have any changes other than yours when running generate_group_vars_sample.sh

@SirishaGuduru
Contributor Author

@guits, I'm up to date with master except for the commit on my branch. But when I run generate_group_vars_sample.sh, I still see:
modified: group_vars/all.yml.sample
modified: group_vars/osds.yml.sample
modified: group_vars/rhcs.yml.sample
where group_vars/all.yml.sample contains my changes.

@guits
Collaborator

guits commented Aug 7, 2017

@SirishaGuduru it doesn't matter; commit the changes and push them. Unless I see something wrong, it should be ok.

@SirishaGuduru
Contributor Author

@leseb, @guits, I have made necessary changes. Please review.

@@ -294,7 +294,15 @@ radosgw_civetweb_num_threads: 100
# For additional civetweb configuration options available such as SSL, logging,
# keepalive, and timeout settings, please see the civetweb docs at
# https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md
radosgw_civetweb_options: "port={{ radosgw_civetweb_bind_ip }}:{{ radosgw_civetweb_port }} num_threads={{ radosgw_civetweb_num_threads }}"
#radosgw_civetweb_options: "num_threads={{ radosgw_civetweb_num_threads }}"
Collaborator

all the variables declared here should be uncommented.

@SirishaGuduru
Contributor Author

@leseb, @guits, can you please review this?

@guits
Collaborator

guits commented Aug 10, 2017

@SirishaGuduru
Tests need to be updated, could you update them?

@SirishaGuduru
Contributor Author

@guits will do that. Are there any specific tests I should target, or should I follow how monitor_interface is used in the tests?

@guits
Collaborator

guits commented Aug 11, 2017

@SirishaGuduru yes, please reproduce the same thing that was done for monitor_interface/monitor_address in each test.

@guits
Collaborator

guits commented Aug 17, 2017

jenkins test luminous-ansible2.3-update_dmcrypt

@guits
Collaborator

guits commented Aug 17, 2017

jenkins test luminous-ansible2.3-docker_dmcrypt_journal_collocation

@SirishaGuduru
Contributor Author

@guits, does this look good?

@SirishaGuduru
Contributor Author

@gfidente, in this PR I have implemented radosgw_address_block just as "monitor_address_block" is done.
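Conceptually, an address_block variable selects whichever host IP falls inside the configured CIDR. A minimal Python sketch of that selection logic (function name and inputs are illustrative, not ceph-ansible code):

```python
import ipaddress

def pick_address(host_ips, address_block):
    """Return the first host IP that falls inside the given CIDR block."""
    net = ipaddress.ip_network(address_block)
    for ip in host_ips:
        if ipaddress.ip_address(ip) in net:
            return ip
    raise ValueError(f"no address of {host_ips} is in {address_block}")

# A host with two addresses: the block decides which one rgw binds to.
print(pick_address(["10.0.0.5", "192.168.1.20"], "192.168.1.0/24"))  # 192.168.1.20
```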

@gfidente
Contributor

@SirishaGuduru yes, this is great indeed; I pointed to this PR from issue #1767. I don't think it addresses setting the civetweb bind IP address for the docker container, though; see also PR #1773.

@leseb leseb left a comment
Member

One thing only, please run ./generate_group_vars_sample.sh. Do not edit group_vars/rhcs.yml.sample. Just let the script run and do its business :). Thanks! Once we have that I'll merge.

@SirishaGuduru
Contributor Author

@leseb, I did not edit this file manually. It got modified when I ran generate_group_vars_sample.sh

@leseb leseb left a comment
Member

Can you rebase on master and then run ./generate_group_vars_sample.sh. We are almost there, thanks!

@@ -0,0 +1,8 @@
---
Member

Having a check in roles/ceph-common/tasks/checks/check_mandatory_vars.yml is sufficient, so you can remove this one.

@SirishaGuduru SirishaGuduru force-pushed the rgw-civetwebIP-conf branch 3 times, most recently from e7fee01 to 4289864 on August 23, 2017 09:31
Resolves issue: Multiple RGW Ceph.conf Issue ceph#1258

In a multi-RGW setup, the RGW sections in ceph.conf all
contain the same bind IP on the civetweb line. This
change fixes that and sets the correct IP for
each RGW.

Signed-off-by: SirishaGuduru SGuduru@walmartlabs.com

Modified ceph-defaults and ran generate_group_vars_sample.sh

group_vars/osds.yml.sample and group_vars/rhcs.yml.sample are
not part of the changes, but they got modified when
generate_group_vars_sample.sh was run to generate
group_vars/all.yml.sample.

Uncommented added variables in ceph-defaults

Updated tests by adding value for radosgw_interface

Added radosgw_interface to centos cluster tests

Modified ceph-rgw role,rebased and ran generate_group_vars_sample.sh

In the ceph-rgw role, removed check_mandatory_vars.yml.
Rebased on master.
Ran generate_group_vars_sample.sh, and the files below got
modified.

@leseb
Member

leseb commented Aug 23, 2017

jenkins test luminous-ansible2.3-bluestore_dmcrypt_journal

@leseb
Member

leseb commented Aug 23, 2017

jenkins test luminous-ansible2.3-update_docker_cluster

@leseb
Member

leseb commented Aug 23, 2017

jenkins test luminous-ansible2.3-xenial_cluster

@leseb
Member

leseb commented Aug 23, 2017

jenkins test luminous-ansible2.3-update_dmcrypt

@leseb
Member

leseb commented Aug 23, 2017

jenkins test ansible2.3-bluestore_docker_dmcrypt_journal_collocation

@leseb
Member

leseb commented Aug 23, 2017

@SirishaGuduru did you run ./generate_group_vars_sample.sh?
Your group vars files are wrong; if I pull your branch and run it, I get the following diff: https://pastebin.com/qQKddyRM

Thanks!

@leseb leseb merged commit d9b3d4a into ceph:master Aug 23, 2017
Development

Successfully merging this pull request may close these issues.

Set mds_address_block and radosgw_address_block like monitor_address_block
4 participants