
Fixes/improvements for docker compose and terraform quickstarts #174

Conversation

douglaswainer
Contributor

Description

The primary purpose of this PR is to make the docker-compose quickstart easier to get working out of the box for people trying to run it on bare metal machines with NVMe drives. It also addresses some issues I encountered when testing these changes with the other docker-compose based quickstart guides.

  • Docker-compose (on bare metal)
  • Terraform (Equinix Metal)
  • Vagrant (with USE_HELM=false)

Plus some cleanup to remove code/files that I believe are legacy and no longer used.

Why is this needed

This is my attempt to address issues #172 and #173.

How Has This Been Tested?

I've tested by running through the quickstart documentation for all compose-based quickstarts:

  • Docker-compose (on bare metal Lenovo ThinkCentre Tiny M900 machines, with NVMe drives)
  • Terraform
  • Vagrant/Libvirt with USE_HELM=true
  • Vagrant/Libvirt with USE_HELM=false

How are existing users impacted? What migration steps/scripts do we need?

All statements like {{ index .Hardware.Disks 0 }} in deploy/stack/compose/manifests/template.yaml have been replaced with environment variables (set in the .env file) that get substituted by the manifest-update container task. This should make it more likely to work for new users, but might confuse users who add their own manifest files.
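For illustration, a minimal sketch of the kind of substitution this describes; the environment key and variable name below are hypothetical, not the exact contents of template.yaml or the .env file:

```yaml
# Before (sketch): the target disk is looked up from the hardware data by
# Go templating when the workflow template is rendered.
environment:
  DEST_DISK: "{{ index .Hardware.Disks 0 }}"

# After (sketch): the target disk is a plain environment variable, set once
# in the .env file and substituted into the manifest by the manifest-update
# container task. The variable name here is illustrative.
environment:
  DEST_DISK: "${TINKERBELL_HARDWARE_DISK}"
```

With the value coming from .env, a bare metal user can point the quickstart at an NVMe device such as /dev/nvme0n1 by editing a single file.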

Checklist:

I have:

  • updated the documentation and/or roadmap (if required)
  • added unit or e2e tests
  • provided instructions on how to upgrade

@jacobweinstock self-assigned this on Jun 19, 2023
@jacobweinstock self-requested a review on Jun 19, 2023 at 17:28
@douglaswainer
Contributor Author

I've been looking into the hardware types and I can see how I could have used MetadataInstanceStorageMount, which I believe could be substituted more naturally (or more in line with Tinkerbell templates) with:

{{ (index .Hardware.Metadata.Instance.Storage.Filesystems 0).Mount.Device }}

Provided the hardware.yaml supplies this metadata. I might be wrong; I'm not very familiar with Go templates, so I'll need to test.
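For reference, a hedged sketch of the metadata shape that expression would rely on in hardware.yaml, assuming the field names in the Tinkerbell Hardware types map directly onto YAML keys (the device path and format are hypothetical):

```yaml
spec:
  metadata:
    instance:
      storage:
        filesystems:
          # The expression above would select this first filesystem's
          # mount device, i.e. /dev/nvme0n1 in this sketch.
          - mount:
              device: /dev/nvme0n1
              format: ext4
```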

I'm happy to close this and go and rework/retest my changes and then reopen again at a later stage, if that is preferable?


@jacobweinstock left a comment


Tremendous work, @douglaswainer, thank you!

Review comments were left on:
  • deploy/infrastructure/terraform/.env
  • deploy/stack/compose/manifests/template.yaml
  • deploy/stack/compose/setup.sh (two threads)
@jacobweinstock
Member

jacobweinstock commented Jul 8, 2023

I've been looking into the hardware types and I can see how I could have used MetadataInstanceStorageMount, which I believe could be substituted more naturally (or more in line with Tinkerbell templates) with:

{{ (index .Hardware.Metadata.Instance.Storage.Filesystems 0).Mount.Device }}

Provided the hardware.yaml supplies this metadata. I might be wrong; I'm not very familiar with Go templates, so I'll need to test.

I'm happy to close this and go and rework/retest my changes and then reopen again at a later stage, if that is preferable?

Hey @douglaswainer, at the moment only the disks (hardware.Spec.Disks) are available for use like this. This was deliberate on our part so that we could evolve this functionality with purpose as use-cases arise. So no need to rework this. Thanks though!
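A short sketch of the currently supported path, then: the disks come from the Hardware resource's spec.disks list, and the template expression picks the first entry (the device path here is hypothetical):

```yaml
apiVersion: tinkerbell.org/v1alpha1
kind: Hardware
metadata:
  name: machine1
spec:
  disks:
    # {{ index .Hardware.Disks 0 }} refers to the first disk listed here.
    - device: /dev/nvme0n1
```

Anything under spec.metadata, per the comment above, is not exposed to templates at the moment.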

@douglaswainer
Contributor Author

Hey @douglaswainer, at the moment only the disks (hardware.Spec.Disks) are available for use like this. This was deliberate on our part so that we could evolve this functionality with purpose as use-cases arise. So no need to rework this. Thanks though!

Oh interesting, I assumed the whole hardware data JSON could be referenced. Thank you for clarifying!

@@ -0,0 +1,35 @@
# Can be set to your own hook builds
Member


I'm not too excited about duplicate .env files to maintain. Any chance we could have only one?

Contributor Author


Yeah, I'm not happy about it either. I can look into options for merging them into one, but I don't know of an easy solution based on .env files.

There are a few bits of reused variables and bash code in the project. I was thinking of converting the bash provisioning to Ansible and splitting out the more specific provisioning tasks into roles based on whether the machine is bare metal, Equinix, Vagrant, etc. Though I think that would be a separate issue/PR for later on.

Member


Thanks for the response, and especially for your work and patience on this!

@jacobweinstock added the ready-to-merge label (Signal to Mergify to merge the PR.) on Jul 21, 2023
@mergify (bot) merged commit 4195ea1 into tinkerbell:main on Jul 21, 2023
5 checks passed