This repository has been archived by the owner on Apr 4, 2023. It is now read-only.

Configure Cassandra nodes to look up their IP address by their fully qualified domain name #334

Merged

Conversation

@wallrj (Member) commented Apr 14, 2018

  • Also ensure that they only ever use the Kubernetes seed provider service.
  • And ensure that this service publishes the seed IP addresses as soon as they are known, so that the seeds themselves can find themselves when they are starting up (a sketch of this appears below).

This is a cut down version of #330

  • without the headless service per statefulset, which I think may be unnecessary, since nodes only have to be able to look up their own FQDN.
  • and without the E2E tests, because I haven't yet figured out how to force minikube to change the IP address of a pod when it's recreated.

Fixes: #319

Release note:

NONE
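For readers unfamiliar with how a Service can publish seed addresses before the pods report Ready, here is a minimal sketch using client-go's corev1 types. This is not the code from this PR: the service name, selector label, and port are illustrative assumptions, and only the use of PublishNotReadyAddresses matters for the behaviour described above.

```go
// Sketch only, assuming client-go's corev1 types; names and labels are
// illustrative, not taken from this PR.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// seedProviderService builds a Service whose endpoint IPs are published even
// while the seed pods are still unready, so a bootstrapping seed node can
// resolve its own address from the service straight away.
func seedProviderService(clusterName, namespace string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      clusterName + "-seeds", // hypothetical name
			Namespace: namespace,
		},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{
				"navigator.jetstack.io/cassandra-seed": "true", // hypothetical label
			},
			// Publish endpoint IPs as soon as they are known, before the
			// pods report Ready.
			PublishNotReadyAddresses: true,
			Ports: []corev1.ServicePort{
				{Name: "intra-node", Port: 7000},
			},
		},
	}
}
```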

@munnerz (Contributor) commented Apr 17, 2018

and without the E2E tests, because I haven't yet figured out how to force minikube to change the IP address of a pod when it's recreated.

When a pod is deleted, in 99% of cases it should have a new IP address. Can we make a test that deletes the pod and verifies the IP has changed? (perhaps we delete the pod again if it hasn't changed first time?).

We definitely need an e2e test for this - ideally the test would be labelled "Resiliency" too, as the first of a suite of resiliency tests.
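A rough sketch of the resiliency test being suggested, assuming a pre-context-argument client-go API (circa 2018) and a hypothetical waitForPodRunning helper; this is not the e2e code that was eventually written for navigator.

```go
// Sketch of the suggested resiliency test, not actual navigator e2e code.
package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning is a hypothetical helper that polls until the StatefulSet
// controller has recreated the named pod and it reports Running.
func waitForPodRunning(cs kubernetes.Interface, ns, name string) (*corev1.Pod, error) {
	// ... polling omitted for brevity ...
	return cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
}

// expectNewPodIPAfterDelete deletes a Cassandra pod and verifies that its
// replacement gets a different IP, retrying once in case the replacement
// happens to be assigned the same address.
func expectNewPodIPAfterDelete(cs kubernetes.Interface, ns, name string) error {
	old, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	oldIP := old.Status.PodIP
	for attempt := 0; attempt < 2; attempt++ {
		if err := cs.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{}); err != nil {
			return err
		}
		fresh, err := waitForPodRunning(cs, ns, name)
		if err != nil {
			return err
		}
		if fresh.Status.PodIP != oldIP {
			return nil
		}
	}
	return fmt.Errorf("pod %s/%s kept IP %s after deletion", ns, name, oldIP)
}
```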

leavingNodes, joiningNodes, movingNodes, mappedNodes,
)
}

Contributor:

I'm confused - isn't this meant to be part of another PR?

If so, and this is just stacked, it'd be great if we could stop stacking PRs so much, as it makes it far harder to know which to review first. If we keep them separate, PRs that depend on [some other] PR will fail, and/or we can add /hold and a comment to tell a reviewer that "this PR depends on PR xyz being merged first".

{
Name: "CASSANDRA_RPC_ADDRESS",
Value: " ",
},
Contributor:

Do we use the Cassandra RPC interface, and do we want to listen on the default pod IP for RPC connections? It seems to me like we'd only want to listen on 127.0.0.1?

Member Author:

We don't use it, but I'm not sure whether Cassandra nodes talk to each other over this protocol. The default is to listen on all addresses, so setting it here may not be ideal, but I can improve or remove it later.
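For illustration only, and not what this PR merges: if the client RPC listener should be restricted to loopback as suggested above, the environment variable could be set as below. How the value is applied is up to the image entrypoint.

```go
// Illustrative only, not what this PR merges: restrict the client RPC
// listener to loopback instead of the image default of all addresses.
package example

import corev1 "k8s.io/api/core/v1"

var rpcLoopbackEnv = corev1.EnvVar{
	Name:  "CASSANDRA_RPC_ADDRESS",
	Value: "127.0.0.1", // loopback only; inter-node gossip uses a separate listener
}
```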

Contributor:

👍

// Set a non-existent default seed.
// The Kubernetes Seed Provider will fall back to a default seed host if it can't look up seeds via the CASSANDRA_SERVICE.
// And if the CASSANDRA_SEEDS environment variable is not set, it defaults to localhost.
// Which could cause confusion if a non-seed node is temporarily unable to lookup the seed nodes from the service.
Contributor:

What would happen if we were to set this to localhost instead? Would the node fail to boot, or would it cause a split brain of some sort?

Member Author:

I think it would mean that new nodes might consider themselves seeds if the KubernetesSeedProvider plugin encounters an error. So yes, a split-brain situation, I think.

// https://github.com/kubernetes/examples/blob/cabf8b8e4739e576837111e156763d19a64a3591/cassandra/go/main.go#L51
{
Name: "CASSANDRA_SEEDS",
Value: "black-hole-dns-name",
Contributor:

I'm slightly concerned this could become an attack vector. What if someone creates a service named black-hole-dns-name? Could an attacker gain access to the contents of the Cassandra DB as a result?

If possible, I'd rather set this to a value that causes Cassandra to fail hard/crash (as this value should only be referenced in the event of a DNS resolution failure)

Member Author:

I think the answer is to remove the fallback code from the Kubernetes Seed Provider. If it can't look up the seeds from the Kubernetes API, it should log an error and return an empty list, rather than returning a default.
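The real KubernetesSeedProvider is a Java plugin inside the Cassandra image, so the following Go sketch is only a behavioural illustration of the change described above (function and parameter names are hypothetical): on a lookup failure, log and return an empty seed list instead of substituting a default host.

```go
// Behavioural sketch only; the actual seed provider is Java, and the
// endpoint lookup shown here stands in for the Kubernetes API call.
package example

import "log"

func lookupSeeds(listSeedEndpoints func() ([]string, error)) []string {
	seeds, err := listSeedEndpoints()
	if err != nil {
		// Log and return no seeds rather than falling back to localhost or
		// any other default host.
		log.Printf("unable to look up seeds from the Kubernetes API: %v", err)
		return nil
	}
	return seeds
}
```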

Member Author:

I'll do this in a follow-up branch, where I remove the reliance on the Docker image entrypoint and instead create a Navigator default config.

Configure Cassandra nodes to look up their IP address by their fully qualified domain name

* Also ensure that they only ever use the Kubernetes seed provider service
* And ensure that this service publishes the seed IP addresses as soon as they are known, so that the seeds themselves can find themselves when they are starting up

Fixes: jetstack#319
@wallrj (Member Author) commented May 8, 2018

/retest

@wallrj (Member Author) left a comment:

I rebased and answered your comments @munnerz

@munnerz (Contributor) commented May 8, 2018

/lgtm
/approve

@jetstack-bot (Collaborator) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: munnerz

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

3 participants