
Add support for Satellite 6.9 #386

Merged: 1 commit merged into RedHatSatellite:master from sat_69 on Jun 14, 2021

Conversation

@jturel (Collaborator) commented Jun 8, 2021

No description provided.

roles/satellite-clone/vars/satellite_6.9.yml:
satellite_installer_cmd: satellite-installer
satellite_scenario: satellite
capsule_puppet_module: foreman-proxy
satellite_installer_options: "--foreman-ipa-authentication false --reset-puppet-server-ssl-chain-filepath --disable-system-checks"
Collaborator:

Why are these two options always passed in? (I understand disabling system checks, at least partly.)

Collaborator Author:

foreman-ipa-authentication false -> #268
reset-puppet-server-ssl-chain-filepath -> #349

Collaborator:

> foreman-ipa-authentication false -> #268

I guess that makes sense.

> reset-puppet-server-ssl-chain-filepath -> #349

Hah, just last Friday I was playing with this: theforeman/puppet-puppet@c05e0a9

That said, it very much feels like a 6.3 -> 6.4 change due to Puppet 3 -> 4. In current versions I don't think it would be needed.

Collaborator Author:

> Hah, just last Friday I was playing with this: theforeman/puppet-puppet@c05e0a9

Beautiful coincidence!

> That said, it very much feels like a 6.3 -> 6.4 change due to Puppet 3 -> 4. In current versions I don't think it would be needed.

Let's give it a whirl (about to start a test of this PR right now to see)

Collaborator:

> foreman-ipa-authentication false -> #268
> reset-puppet-server-ssl-chain-filepath -> #349

TYVM. Could you add this as a comment in the YAML so that future us / future somebody else can find it more easily?

Collaborator Author:

Can do. I feel a utility such as this (expected to run on old, old versions of Satellite) warrants excessive commenting.
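For illustration, such a comment in the vars file might look like the following sketch (the option string is the one from the hunk above; the comment wording itself is only an example):

```yaml
# These options are always passed; see the linked issues for background:
#   --foreman-ipa-authentication false       -> issue #268
#   --reset-puppet-server-ssl-chain-filepath -> issue #349
satellite_installer_options: "--foreman-ipa-authentication false --reset-puppet-server-ssl-chain-filepath --disable-system-checks"
```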

@jturel (Collaborator Author) commented Jun 8, 2021

New one for me here:

--> Finished Dependency Resolution                                                                                                                            
Error: Package: satellite-common-6.9.2-1.el7sat.noarch (rhel-7-server-satellite-6.9-rpms)                                                                     
           Requires: ansible >= 2.9                                                                                                                           
           Installed: ansible-2.4.2.0-2.el7.noarch (@rhel-7-server-extras-rpms)                                                                               
               ansible = 2.4.2.0-2.el7                                                                                                                        
           Available: ansible-2.4.0.0-1.el7ae.noarch (rhel-7-server-satellite-maintenance-6-rpms)                                                             
               ansible = 2.4.0.0-1.el7ae        

Which repo should ansible >= 2.9 be coming from? I need to update the list to auto-enable it.

Is it rhel-7-server-ansible-2.9-rpms?

@sthirugn (Collaborator) commented Jun 8, 2021

Looks good to me, but I don't have time to test it. Feel free to merge after @ekohl acks it.

@evgeni (Member) commented Jun 8, 2021

> New one for me here:
>
> --> Finished Dependency Resolution
> Error: Package: satellite-common-6.9.2-1.el7sat.noarch (rhel-7-server-satellite-6.9-rpms)
>            Requires: ansible >= 2.9
>            Installed: ansible-2.4.2.0-2.el7.noarch (@rhel-7-server-extras-rpms)
>                ansible = 2.4.2.0-2.el7
>            Available: ansible-2.4.0.0-1.el7ae.noarch (rhel-7-server-satellite-maintenance-6-rpms)
>                ansible = 2.4.0.0-1.el7ae
>
> Which repo should ansible >= 2.9 be coming from? I need to update the list to auto-enable it.
>
> Is it rhel-7-server-ansible-2.9-rpms?

Exactly this.

@@ -60,6 +60,10 @@
    register: enable_repos_result
    when: enable_repos

- name: Enable Ansible 2.9 repository
  command: subscription-manager repos --enable rhel-{{ ansible_distribution_major_version}}-server-ansible-2.9-rpms
Member:

The el8 repo has a completely different name, so no need to template it here

Member:

But you could template the Ansible version, and put that into vars :)
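A minimal sketch of that suggestion, assuming a new per-version variable (the name ansible_repo_version is made up for this example and does not appear in the PR):

```yaml
# Hypothetical addition to roles/satellite-clone/vars/satellite_6.9.yml
ansible_repo_version: "2.9"
```

```yaml
# Sketch of the task; only the Ansible version is templated, since the el8
# repository uses a completely different name anyway (see the comment above).
- name: Enable Ansible repository
  command: subscription-manager repos --enable rhel-7-server-ansible-{{ ansible_repo_version }}-rpms
  when: ansible_repo_version is defined
```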

@@ -60,6 +60,10 @@
    register: enable_repos_result
    when: enable_repos

- name: Enable Ansible 2.9 repository
  command: subscription-manager repos --enable rhel-{{ ansible_distribution_major_version}}-server-ansible-2.9-rpms
  when: satellite_version in ["6.9"]
Member:

This can (and should) be done on 6.8 too; 6.8 just didn't have the more explicit dependency in the RPM that would trigger the error you posted.

Collaborator Author:

I was testing this as-is and ran into an error in the middle of execution. I think it had to do with the fact that I started the playbook while running ansible 2.4, and ansible was upgraded out from underneath it:

TASK [satellite-clone : Install packages necessary to check DB status] ***************************************************************************************
Tuesday 08 June 2021  15:28:19 +0000 (0:07:30.279)       0:10:16.888 **********  
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Task' object has no attribute 'async_val'
fatal: [localhost]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}

msg: Unexpected failure during module execution.

This step completed successfully when I ran it a second time (with ansible 2.9 loaded from the start).

To me it seems the initial steps in the docs should list the correct repo to enable, so the right ansible version is installed before the clone process is started.

Member:

I don't understand where this issue comes from, but if using 2.9 fixes it, we should be good.

@evgeni (Member) commented Jun 9, 2021

Overall this reads correctly. I still wonder which Ansible should be used for s-clone, but maybe @mccun934 can shed some light on it.

Seems weird to have to enable the right ansible repo, and then s-clone does enable it again?

@jturel (Collaborator Author) commented Jun 9, 2021

> Overall this reads correctly. I still wonder which Ansible should be used for s-clone, but maybe @mccun934 can shed some light on it.
>
> Seems weird to have to enable the right ansible repo, and then s-clone does enable it again?

It's a little wonky, but the way it plays out is that it just makes sure that, by the time the clone process is done, all of the repos are enabled as they would have been if one had followed the installation docs. This matters especially because Clone has a step that disables all repos fairly early on (likely to make sure packages are pulled from the expected locations).
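A hedged sketch of the flow being described; the task names, commands, and the required_repos variable below are assumptions for illustration, not the role's actual tasks:

```yaml
# Early in the clone: make sure packages only come from known locations
- name: Disable all repositories
  command: subscription-manager repos --disable '*'

# Near the end: re-enable the repos the installation docs expect
- name: Enable required repositories
  command: subscription-manager repos --enable {{ item }}
  with_items: "{{ required_repos }}"  # hypothetical variable name
```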

@jturel (Collaborator Author) commented Jun 9, 2021

I'm hitting an issue at the end of the process where, after the restoration is performed via foreman-maintain, the installer is run to upgrade to the latest z-stream.

TASK [satellite-clone : Run installer upgrade to match latest z-version] *********************************************************************************************************************
Wednesday 09 June 2021  19:13:20 +0000 (0:00:00.228)       0:55:28.751 ********
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["satellite-installer", "--upgrade", "--disable-system-checks"], "delta": "0:00:07.912125", "end": "2021-06-09 19:13:29.012080", "msg": "non-zero return code", "rc": 1, "start": "2021-06-09 19:13:21.099955", "stderr": "ERROR: Unrecognised option '--upgrade'\n\nSee: 'satellite-installer --help'", "stderr_lines": ["ERROR: Unrecognised option '--upgrade'", "", "See: 'satellite-installer --help'"], "stdout": "2021-06-09 19:13:24 [NOTICE] [root] Loading default values from puppet modules...\n2021-06-09 19:13:28 [NOTICE] [root] ... finished", "stdout_lines": ["2021-06-09 19:13:24 [NOTICE] [root] Loading default values from puppet modules...", "2021-06-09 19:13:28 [NOTICE] [root] ... finished"]}

Will update to use foreman-maintain for 6.9 and later

  when: satellite_version is version("6.8", "<=")

- name: Upgrade to latest z-version with satellite-maintain
  command: 'satellite-maintain upgrade run --target-version={{ satellite_version }}.z --assumeyes'
Collaborator Author:

Will this work as far back as 6.5? If so I can remove the above equivalent which used satellite-installer

Collaborator:

Hi @jturel, great question!

As you saw, --upgrade was removed (with theforeman/foreman-installer@3c60c74). In fact, by that point it did nothing except print a notice that --upgrade is deprecated, since we automatically run any pending db migrations (theforeman/foreman-installer@1cc2378, which was included with Foreman 2.1, corresponding to Satellite 6.8).

IMO, rather than changing this to use satellite-maintain for 6.9 and later, I would simply remove --upgrade from the task here and include it in satellite_upgrade_options only when: satellite_version is version("6.7", "<="). satellite-maintain introduces a lot of additional overhead (running various pre-upgrade checks, unlocking packages and updating them, etc.) that is intended to simplify the upgrade experience for customers, but it doesn't have much benefit in this specific context.

In other words, logically, just running the installer still makes sense at this point; we just need to stop passing --upgrade, since that was deprecated in Satellite 6.8 and removed in Satellite 6.9.

Member:

Slightly different solution: why don't we add --upgrade to satellite_upgrade_options for 6.5, 6.6, and 6.7 and leave it out in 6.8+? That removes the need for doing wild satellite_version checks here in the playbook, as we already have satellite_upgrade_options.
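A rough sketch of how that could look, assuming satellite_upgrade_options lives in the per-version vars files (the values shown in the comments are illustrative, not the files' real contents):

```yaml
# roles/satellite-clone/vars/satellite_6.7.yml (illustrative):
#   satellite_upgrade_options: "--upgrade --disable-system-checks"
# roles/satellite-clone/vars/satellite_6.9.yml (illustrative):
#   satellite_upgrade_options: "--disable-system-checks"

# The z-stream upgrade task itself then needs no satellite_version check:
- name: Run installer upgrade to match latest z-version
  command: "satellite-installer {{ satellite_upgrade_options }}"
```

This keeps the version-specific behavior in the vars files, the same pattern the role already uses for satellite_installer_options.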

Collaborator Author:

Thanks @wbclark & @evgeni - gonna update the vars file to prevent adding back a satellite_version check

Collaborator Author:

Updated!

Member:

Nice!

@evgeni (Member) left a comment:

From an Ansible perspective this looks good (I added one comment, but nothing major).

Does a clone run succeed with these changes?

# The location of this file changed in 6.9
- name: Remove cpdb_done file
  file:
    path: /var/lib/candlepin/.puppet-candlepin-cpdb-create-done
Member:

You could fold this and the legacy path into a list and do a with_items.

Or put the filepath in a variable, since for each satellite version there is always just one matching path.

Or you leave it like this, as it doesn't hurt ;)
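A minimal sketch of the with_items variant; state: absent is assumed from the task name, and the second entry is only a placeholder for the pre-6.9 location, which isn't shown in this hunk:

```yaml
- name: Remove cpdb_done file
  file:
    path: "{{ item }}"
    state: absent
  with_items:
    - /var/lib/candlepin/.puppet-candlepin-cpdb-create-done
    # placeholder for the legacy pre-6.9 path (not shown in this diff)
    - /path/to/legacy/cpdb-create-done
```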

Collaborator Author:

I like with_items - will do

@jturel (Collaborator Author) commented Jun 10, 2021

Updated & squashed!

@jturel merged commit 0fd2377 into RedHatSatellite:master on Jun 14, 2021.
@jturel deleted the sat_69 branch on June 14, 2021 at 17:56.
@sthirugn (Collaborator):
yay! @jturel++
