Support for Installation Workflows #64

Closed
thadc23 opened this issue Jun 11, 2018 · 9 comments

Comments

@thadc23

thadc23 commented Jun 11, 2018

Are there any future considerations being given to expanding this provider to actually lay down the Manager, Controllers, and Edges, or would the expectation be to use the vSphere provider?

If the latter, where would the logic go to connect the controllers and edges to the manager?

Thanks.

@avoltmer
Contributor

avoltmer commented Jun 11, 2018

Assuming you are deploying the NSX Manager to vSphere, I recommend you use the vSphere provider to deploy the Manager .ova. The other items would generally be a new feature request / enhancement to support installation workflows. Currently the provider is oriented toward logical networking and security.
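
For reference, a minimal sketch of deploying the NSX Manager .ova with the vSphere provider. This is only an illustration under stated assumptions: the datacenter, cluster, datastore, host, and port group names, the OVA path, and the OVF network name ("Network 1") are placeholders, and vApp property keys depend on the appliance OVA.

```hcl
data "vsphere_datacenter" "dc" {
  name = "dc-01"
}

data "vsphere_datastore" "datastore" {
  name          = "datastore-01"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_compute_cluster" "cluster" {
  name          = "cluster-01"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_host" "host" {
  name          = "esxi-01.example.com"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "mgmt" {
  name          = "mgmt-portgroup"
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "nsx_manager" {
  name             = "nsx-manager-01"
  datacenter_id    = data.vsphere_datacenter.dc.id
  resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id
  host_system_id   = data.vsphere_host.host.id

  # The appliance manages its own guest networking; don't block on it.
  wait_for_guest_net_timeout = 0

  ovf_deploy {
    local_ovf_path    = "/path/to/nsx-unified-appliance.ova" # placeholder path
    disk_provisioning = "thin"
    # OVF network name -> port group mapping; the key is defined by the OVA.
    ovf_network_map = {
      "Network 1" = data.vsphere_network.mgmt.id
    }
  }

  # vApp properties (hostname, passwords, IP settings, ...) would also be set
  # via a vapp { properties = { ... } } block; the keys come from the OVA.
}
```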

@avoltmer avoltmer changed the title NSXT Paving Support Support for Installation Workflows Jun 11, 2018
@johnuopini

Deploying the OVA files using the vSphere provider is quite easy. There are a few things, however, that would be nice to see in this plugin and that I am currently doing via API calls to the manager:

  • Register a controller, an edge, or an ESXi host with the management plane, especially having the edge and the controller as new resource objects (the OVA could potentially be provided as a parameter)
  • Create a T0 router
  • Create basic configuration, e.g. transport zones (see the sketch below)

Having these three items would make the solution quite complete. I have everything done via Python calls already, so maybe I can fork and try to create a pull request for this; I'm a bit new to Go, so not sure if I can make it, but I will try.
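
As a point of reference for the T0 item above: in later provider versions a Tier-0 gateway can be declared through the policy resources. A minimal sketch, assuming an edge cluster named edge-cluster-01 already exists:

```hcl
data "nsxt_policy_edge_cluster" "ec" {
  display_name = "edge-cluster-01" # placeholder name
}

resource "nsxt_policy_tier0_gateway" "t0" {
  display_name      = "t0-gateway"
  ha_mode           = "ACTIVE_STANDBY"
  edge_cluster_path = data.nsxt_policy_edge_cluster.ec.path
}
```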

@aegershman

@johnuopini I believe the creation of T0/T1 routers can be set up as resources? Please correct me if I'm off-base.

@johnuopini

@aegershman Yes, when I wrote that I hadn't seen the head repo, only the one on the official Terraform site. The one here on GitHub supports T0 and T1 resources, but it still doesn't allow creating the Manager, Controllers, and Edges. AFAIK 2.5/2.6 might provide this in some way, either through the Manager itself (which would make the requirement here unnecessary) or by supporting it directly in TF.
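
For completeness, a Tier-1 gateway attached to a Tier-0 can be expressed like this in later provider versions. A minimal sketch that reuses the nsxt_policy_tier0_gateway.t0 resource from the sketch above:

```hcl
resource "nsxt_policy_tier1_gateway" "t1" {
  display_name = "t1-gateway"
  tier0_path   = nsxt_policy_tier0_gateway.t0.path

  # Advertise connected networks upstream to the Tier-0.
  route_advertisement_types = ["TIER1_CONNECTED"]
}
```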

@aegershman

aegershman commented Feb 27, 2019

Gotcha, just confirming. Yes, I also hope that most of the installation & configuration of those components can be represented as Terraform resources. I'm opposed to using multiple automation toolchains (like Ansible/Python scripts calling REST endpoints, or, bleh, manual config in the UI) versus keeping the source of truth entirely in Terraform. Thanks @johnuopini 👍

@ghost

ghost commented Nov 7, 2019

My company operates NSX-T under PKS. We have several PKS foundations and wanted to handle the basic installation (T0 router, T1 infra router, infra segment, IP pools, ...) for these with Terraform, because we already manage the DFW and Edge FW configurations with Terraform.

Because the Terraform provider cannot create any edge systems, we have to do it the laborious way with API calls. We are missing the rollout of the edge VMs, the configuration of the edge VMs (e.g. logging server, time server, password expiration, ...), the configuration of uplinks in the VLAN transport zone, and so on.

We also want to solve disaster recovery with Terraform. Other automation systems require servers or appliances, but if you have to start from scratch to rebuild the infrastructure, Terraform can do this from a simple Linux machine and the .tf files.

There may be ways to do that with various Terraform providers (vSphere, shell, REST API, ...), but that is a painful approach.
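
For illustration, the "laborious way with API calls" tends to look something like the following when forced into Terraform. This is only a hypothetical sketch: the manager address, password variable, and payload file are placeholders, and the JSON body has to be built by hand against the NSX-T management API.

```hcl
variable "nsx_password" {
  type      = string
  sensitive = true
}

# Stop-gap: call the NSX-T management API directly because the provider has no
# resource for this object. Terraform only records that the command ran; it
# does not manage or track the created object.
resource "null_resource" "edge_transport_node" {
  provisioner "local-exec" {
    command = <<-EOT
      curl -k -u "admin:${var.nsx_password}" \
        -H "Content-Type: application/json" \
        -X POST \
        -d @edge-transport-node.json \
        https://nsx-manager.example.com/api/v1/transport-nodes
    EOT
  }
}
```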

@github-actions

github-actions bot commented Jun 5, 2021

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

@github-actions github-actions bot added the stale label Jun 5, 2021
@annakhm
Collaborator

annakhm commented Jun 9, 2021

This is being considered for the next major release.

@github-actions github-actions bot closed this as completed Jul 9, 2021
@annakhm annakhm removed the stale label Apr 3, 2023
@annakhm annakhm reopened this Apr 3, 2023
@annakhm
Collaborator

annakhm commented Oct 13, 2023

This is now supported, hence closing

@annakhm annakhm closed this as completed Oct 13, 2023