
First ovh working iteration #100

Closed
wants to merge 1 commit

Conversation

yanndegat

Hi @dghubble,

As mentioned in the December emails, I've worked on porting Typhoon to the OpenStack implementation of OVH's cloud.

Here's a first working iteration. My primary objective is to push it to the Terraform Registry soon. Either I create a repo on OVH's GitHub org and mention Typhoon in the README, or maybe we can work out together how to make it upstream, if that's something you would like.

Anyway, whatever you decide, any feedback on this module would be great.

Note: I've converted the Container Linux Config provider configs to ignition provider configs.
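For reference, Typhoon's usual pattern is to render Container Linux Configs with the terraform-provider-ct data source and pass the rendered Ignition JSON as instance user data. A minimal sketch of that pattern is below; the file path, resource names, and the flavor/image values are illustrative placeholders, not Typhoon's actual code.

```hcl
# Render a Container Linux Config (YAML) into Ignition JSON using the
# terraform-provider-ct data source. The path and names are illustrative.
data "ct_config" "worker" {
  content = file("${path.module}/cl/worker.yaml")
}

# Pass the rendered Ignition JSON to an OpenStack instance as user data.
# Flavor, image, and instance names are placeholders for this sketch.
resource "openstack_compute_instance_v2" "worker" {
  name        = "worker-0"
  flavor_name = "s1-4"
  image_name  = "CoreOS Stable"
  user_data   = data.ct_config.worker.rendered
}
```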

@dghubble
Member

dghubble commented Jan 20, 2018

First, our email conversation did not green-light OVH inclusion.

Writing the module isn't the issue; it's the maintenance, validation, testing, and support that go into each platform, for every Typhoon release, for the foreseeable future. You can read the same answer given for VMware vSphere. Adding more platforms is far out on the roadmap, and when that time comes, maintainers I know and trust from different cloud providers may be brought on board. I thought I was clear in email, but I apologize if not.

Next, OpenStack is explicitly called out as a non-goal of the project, as I said in email.

Taking a quick glance at the PR, I have so many issues with this implementation:

  • OpenStack (and its many components) is not a goal and is not going to be supported
  • Consul should not be a part of these clusters in any form
  • Hardcoded networkd units
  • Cruft in the loadbalancers
  • "i've converted cl provider configs to ignition provider config"
    • The Terraform ignition provider is not maintained by CoreOS at all. Typhoon chose to use Container Linux Configs on every platform. It's not acceptable for a potential module to go off on its own and make different decisions; it would not be maintainable at all. It's a red flag that this was considered a minor note.
  • Embedded password hashes
  • Bastion setup that somehow takes passwords?
  • "OVH doesn't provide neither managed private dns service nor managed private lb. Yet such feature is on OVH public cloud roadmap."
    • Typhoon doesn't need to settle if a platform is missing features. Adding new platforms is far out on the priority list and when we get to that point we can choose from the platforms that are good matches.
  • Random new remote exec steps
  • bastion_flavor_name, consul_flavor_name, lb_flavor_name - ugh, no
  • You somehow have counts for controllers, apis, workers, and consuls - ugh, no
  • OVH server groups can only hold 5 nodes? Seems arbitrary and odd.
  • Generally overly complex

OVH is welcome to create its own Terraform modules in its own repos. Please don't imply that Typhoon endorses or is affiliated with them.

@dghubble dghubble closed this Jan 20, 2018
@dghubble
Member

Closing, as this is far from something I'd accept, and I don't want to give false hope that it is mergeable.

@yanndegat
Author

Hi @dghubble,

Thank you very much for the time you took to answer this PR and look at it. Our email conversation wasn't "that" clear to me.

Firstly, many of the issues you mention are definitely not intended to last. This is very much a work in progress, but I was at a point where I had to know whether it would have a chance of being hosted in the Typhoon repo, whatever the effort needed to comply with Typhoon's design considerations. That's mainly why I submitted it "as is".

You could have decided to create some kind of "unofficial & unsupported external providers" space, leaving the core of your project clean.

As that doesn't seem to be the case for the moment, I'm left with the other option, which is to host our own repo, possibly make different design decisions, and deal with our missing features in a more appropriate way, because if you're not satisfied with some of the dirty shortcuts I've made, I'm not either.

Lastly, I'd still like to mention Typhoon somehow, not as an "affiliated" or "endorsed" project, but as "originally forked from / inspired by / credits...".

If you don't want that either, then I won't. We'll take no offense.

Regards,

Yann

dghubble added a commit that referenced this pull request Jan 15, 2019
* Fix a regression caused by lowering the Kubelet TLS client
certificate to system:nodes group (#100) since dropping
cluster-admin dropped the Kubelet's ability to delete nodes.
* On clouds where workers can scale down (manual terraform apply,
AWS spot termination, Azure low priority deletion), worker shutdown
runs the delete-node.service to remove a node to prevent NotReady
nodes from accumulating
* Allow Kubelets to delete cluster nodes via system:nodes group. Kubelets
acting with system:node and kubelet-delete ClusterRoles is still an
improvement over acting as cluster-admin
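
To illustrate the RBAC side of that commit message: Typhoon ships this rule as a YAML manifest via bootkube, but roughly the same permission can be sketched with the Terraform kubernetes provider. The resource names and the exact rule shape below are an assumption for illustration, not Typhoon's actual manifest.

```hcl
# Illustrative sketch only: a narrow ClusterRole allowing deletion of Node
# objects, standing in for the "kubelet-delete" role the commit describes.
resource "kubernetes_cluster_role" "kubelet_delete" {
  metadata {
    name = "kubelet-delete"
  }
  rule {
    api_groups = [""]
    resources  = ["nodes"]
    verbs      = ["delete"]
  }
}

# Bind the narrow delete permission to the system:nodes group so Kubelets
# can remove their own Node objects on shutdown without cluster-admin.
resource "kubernetes_cluster_role_binding" "kubelet_delete" {
  metadata {
    name = "kubelet-delete"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.kubelet_delete.metadata[0].name
  }
  subject {
    kind      = "Group"
    name      = "system:nodes"
    api_group = "rbac.authorization.k8s.io"
  }
}
```
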
Snaipe pushed a commit to aristanetworks/monsoon that referenced this pull request Apr 13, 2023