
Review Elasticsearch Status Subresource #1781

Closed
pebrc opened this issue Sep 24, 2019 · 4 comments · Fixed by #1790
Assignees: pebrc
Labels: discuss (We need to figure this out), v1.0.0-beta1

Comments

pebrc (Collaborator) commented on Sep 24, 2019

// ElasticsearchStatus defines the observed state of Elasticsearch
type ElasticsearchStatus struct {
	commonv1alpha1.ReconcilerStatus
	Health          ElasticsearchHealth             `json:"health,omitempty"`
	Phase           ElasticsearchOrchestrationPhase `json:"phase,omitempty"`
	ClusterUUID     string                          `json:"clusterUUID,omitempty"`
	MasterNode      string                          `json:"masterNode,omitempty"`
	ExternalService string                          `json:"service,omitempty"`
	ZenDiscovery    ZenDiscoveryStatus              `json:"zenDiscovery,omitempty"`
}

Four of the six attributes exist only for internal debugging and orchestration purposes. We should reconsider whether they have a place in the status subresource.

  • ClusterUUID: used to test whether we lose cluster state on upgrades; no relevance for users
  • MasterNode: purely informational, replicating information readily available from ES; we already generate an event when a change is observed
  • ExternalService: static information, available via documentation; there is no point in having this as a status attribute
  • ZenDiscovery: used to persist the current minimum master nodes between reconciliation cycles. I am not entirely sure why we need it; it currently seems to serve as an optimisation to avoid unnecessary calls to the ES API. Maybe @barkbay can verify whether I am missing something important here.

@sebgl suggested that the attributes that cannot be removed could be turned into annotations.

I suggest:

  • ClusterUUID: keep
  • MasterNode: remove without replacement
  • ExternalService: remove without replacement
  • ZenDiscovery: remove, or, if we decide we want to keep this optimisation, turn it into an annotation (why? Zen1 is on its way out and gets way too much prominence in the status)

Related to #1141

pebrc added the discuss (We need to figure this out) label on Sep 24, 2019
barkbay (Contributor) commented on Sep 24, 2019

  • ZenDiscovery: used to persist the current minimum master nodes between reconciliation cycles. I am not entirely sure why we need it; it currently seems to serve as an optimisation to avoid unnecessary calls to the ES API. Maybe @barkbay can verify whether I am missing something important here.

Yes, it's just an optimization to avoid an HTTP request to ES; it can be removed.

pebrc (Collaborator, Author) commented on Sep 24, 2019

After some offline discussions the consensus seems to be to remove all fields mentioned in the OP (ClusterUUID, MasterNode, ExternalService, ZenDiscovery) without replacement.

pebrc self-assigned this on Sep 25, 2019
sebgl (Contributor) commented on Sep 25, 2019

Yes, it's just an optimization to avoid an http request on ES, it can be removed.

At least as a first step, I think it would be better to move that one (minimum_master_nodes) to an annotation, so we avoid setting it over and over again on every single reconciliation.
Longer term, depending on where we land with #1161, maybe we could replace it with another mechanism.
I think it would be a bit of a shame to remove the optimization altogether?

pebrc (Collaborator, Author) commented on Sep 25, 2019

so we avoid setting it over and over again at every single reconciliation.

Agreed and I did just that in #1790.

But I think we are somewhat arbitrary in what we deem worthy of optimisation. To give an example: DELETE _cluster/voting_config_exclusions is called on every reconciliation without any attempt at optimisation (if there are no ongoing pod removals).
