upgrade: remove unused ValidationTimeout knob
After refactoring, this is no longer used.
The only supported failure strategy is infinite retry, so there's nothing to time out.
zimnx committed Dec 28, 2020
1 parent 5bc4e30 commit b0bc61a
Showing 7 changed files with 10 additions and 35 deletions.
@@ -1407,10 +1407,7 @@ spec:
             description: FailureStrategy specifies which logic is executed when upgrade failure happens. Currently only Retry is supported.
             type: string
           pollInterval:
-            description: PollInterval specifies how often upgrade logic polls on state updates. Increasing this value should lower number of requests sent to apiserver, but it may affect overall time required spent during upgrade.
-            type: string
-          validationTimeout:
-            description: ValidationTimeout specifies how long it can take for Scylla to boot and enter ready state after upgrade until FailureStrategy is executed.
+            description: PollInterval specifies how often upgrade logic polls on state updates. Increasing this value should lower number of requests sent to apiserver, but it may affect overall time spent during upgrade.
             type: string
           type: object
         network:
@@ -1438,7 +1435,7 @@ spec:
             description: Host to repair, by default all hosts are repaired
             type: string
           intensity:
-            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that DO NOT SUPPORT ROW-LEVEL REPAIR, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
+            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that *do not support row-level repair*, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
             type: string
           interval:
             description: Interval task schedule interval e.g. 3d2h10m, valid units are d, h, m, s (default "0").
@@ -1616,7 +1613,7 @@ spec:
           id:
             type: string
           intensity:
-            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that DO NOT SUPPORT ROW-LEVEL REPAIR, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
+            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that *do not support row-level repair*, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
             type: string
           interval:
             description: Interval task schedule interval e.g. 3d2h10m, valid units are d, h, m, s (default "0").
9 changes: 3 additions & 6 deletions examples/common/operator.yaml
@@ -1422,10 +1422,7 @@ spec:
             description: FailureStrategy specifies which logic is executed when upgrade failure happens. Currently only Retry is supported.
             type: string
           pollInterval:
-            description: PollInterval specifies how often upgrade logic polls on state updates. Increasing this value should lower number of requests sent to apiserver, but it may affect overall time required spent during upgrade.
-            type: string
-          validationTimeout:
-            description: ValidationTimeout specifies how long it can take for Scylla to boot and enter ready state after upgrade until FailureStrategy is executed.
+            description: PollInterval specifies how often upgrade logic polls on state updates. Increasing this value should lower number of requests sent to apiserver, but it may affect overall time spent during upgrade.
             type: string
           type: object
         network:
@@ -1453,7 +1450,7 @@ spec:
             description: Host to repair, by default all hosts are repaired
             type: string
           intensity:
-            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that DO NOT SUPPORT ROW-LEVEL REPAIR, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
+            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that *do not support row-level repair*, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
             type: string
           interval:
             description: Interval task schedule interval e.g. 3d2h10m, valid units are d, h, m, s (default "0").
@@ -1631,7 +1628,7 @@ spec:
           id:
             type: string
           intensity:
-            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that DO NOT SUPPORT ROW-LEVEL REPAIR, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
+            description: Intensity how many token ranges (per shard) to repair in a single Scylla repair job. By default this is 1. If you set it to 0 the number of token ranges is adjusted to the maximum supported by node (see max_repair_ranges_in_parallel in Scylla logs). Valid values are 0 and integers >= 1. Higher values will result in increased cluster load and slightly faster repairs. Changing the intensity impacts repair granularity if you need to resume it, the higher the value the more work on resume. For Scylla clusters that *do not support row-level repair*, intensity can be a decimal between (0,1). In that case it specifies percent of shards that can be repaired in parallel on a repair master node. For Scylla clusters that are row-level repair enabled, setting intensity below 1 has the same effect as setting intensity 1.
             type: string
           interval:
             description: Interval task schedule interval e.g. 3d2h10m, valid units are d, h, m, s (default "0").
3 changes: 0 additions & 3 deletions pkg/api/v1alpha1/cluster_types.go
@@ -76,9 +76,6 @@ const (

 // GenericUpgradeSpec hold generic upgrade procedure parameters.
 type GenericUpgradeSpec struct {
-	// ValidationTimeout specifies how long it can take for Scylla to boot and enter ready state
-	// after image upgrade until FailureStrategy is executed.
-	ValidationTimeout *metav1.Duration `json:"validationTimeout,omitempty"`
 	// FailureStrategy specifies which logic is executed when upgrade failure happens.
 	// Currently only Retry is supported.
 	FailureStrategy GenericUpgradeFailureStrategy `json:"failureStrategy,omitempty"`
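For reference, after this commit the spec type reduces to the following sketch. The FailureStrategy lines come straight from the context above; the PollInterval declaration is not visible in this hunk, so its exact shape is an assumption inferred from the CRD's pollInterval field and the webhook defaulting below.

// GenericUpgradeSpec hold generic upgrade procedure parameters.
type GenericUpgradeSpec struct {
	// FailureStrategy specifies which logic is executed when upgrade failure happens.
	// Currently only Retry is supported.
	FailureStrategy GenericUpgradeFailureStrategy `json:"failureStrategy,omitempty"`

	// PollInterval specifies how often upgrade logic polls on state updates.
	// Assumed declaration; this field is not shown in the hunk above.
	PollInterval *metav1.Duration `json:"pollInterval,omitempty"`
}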
7 changes: 1 addition & 6 deletions pkg/api/v1alpha1/cluster_webhook.go
@@ -46,8 +46,7 @@ var _ webhook.Defaulter = &ScyllaCluster{}
 var _ webhook.Validator = &ScyllaCluster{}

 const (
-	DefaultGenericUpgradePollInterval      = time.Second
-	DefaultGenericUpgradeValidationTimeout = 30 * time.Minute
+	DefaultGenericUpgradePollInterval = time.Second
 )

 func (c *ScyllaCluster) Default() {
@@ -111,10 +110,6 @@ func (c *ScyllaCluster) Default() {
 		c.Spec.GenericUpgrade.FailureStrategy = GenericUpgradeFailureStrategyRetry
 	}

-	if c.Spec.GenericUpgrade.ValidationTimeout == nil {
-		c.Spec.GenericUpgrade.ValidationTimeout = &metav1.Duration{Duration: DefaultGenericUpgradeValidationTimeout}
-	}
-
 	if c.Spec.GenericUpgrade.PollInterval == nil {
 		c.Spec.GenericUpgrade.PollInterval = &metav1.Duration{Duration: DefaultGenericUpgradePollInterval}
 	}
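The remaining defaulting path can be exercised directly. A minimal in-package sketch (package v1alpha1, standard testing and time imports); the test name and assertion values are illustrative, not part of this commit:

func TestDefaultGenericUpgrade(t *testing.T) {
	c := &ScyllaCluster{}
	c.Spec.GenericUpgrade = &GenericUpgradeSpec{}

	// Default() fills in the Retry strategy and the 1s poll interval when unset.
	c.Default()

	if c.Spec.GenericUpgrade.FailureStrategy != GenericUpgradeFailureStrategyRetry {
		t.Errorf("expected default failure strategy Retry, got %q", c.Spec.GenericUpgrade.FailureStrategy)
	}
	if got := c.Spec.GenericUpgrade.PollInterval.Duration; got != time.Second {
		t.Errorf("expected default poll interval 1s, got %v", got)
	}
}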
5 changes: 0 additions & 5 deletions pkg/api/v1alpha1/zz_generated.deepcopy.go

Some generated files are not rendered by default.

9 changes: 2 additions & 7 deletions pkg/controllers/cluster/actions/upgrade_version.go
@@ -43,9 +43,8 @@ type ClusterVersionUpgrade struct {
 	ScyllaClient   *scyllaclient.Client
 	ClusterSession cqlSession

-	ipMapping         map[string]string
-	pollInterval      time.Duration
-	validationTimeout time.Duration
+	ipMapping    map[string]string
+	pollInterval time.Duration

 	currentRack *scyllav1alpha1.RackSpec
 	currentNode *corev1.Pod
@@ -143,15 +142,11 @@ func (a *ClusterVersionUpgrade) Execute(ctx context.Context, s *State) error {
 	a.recorder = s.recorder

 	a.pollInterval = scyllav1alpha1.DefaultGenericUpgradePollInterval
-	a.validationTimeout = scyllav1alpha1.DefaultGenericUpgradeValidationTimeout

 	if a.Cluster.Spec.GenericUpgrade != nil {
 		if a.Cluster.Spec.GenericUpgrade.PollInterval != nil {
 			a.pollInterval = a.Cluster.Spec.GenericUpgrade.PollInterval.Duration
 		}
-		if a.Cluster.Spec.GenericUpgrade.ValidationTimeout != nil {
-			a.validationTimeout = a.Cluster.Spec.GenericUpgrade.ValidationTimeout.Duration
-		}
 	}

 	switch a.upgradeProcedure(ctx) {
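This is where the commit message's rationale lands: with the timeout gone, readiness checks can simply poll forever at pollInterval. A hedged sketch of that pattern using apimachinery's wait package — waitForReady and the nodeIsReady condition are hypothetical names, since the actual wait sites are outside this hunk:

import "k8s.io/apimachinery/pkg/util/wait"

// waitForReady blocks until the condition reports done, polling forever at
// the configured interval. There is no deadline: the only supported failure
// strategy is infinite retry.
func (a *ClusterVersionUpgrade) waitForReady(nodeIsReady wait.ConditionFunc) error {
	return wait.PollImmediateInfinite(a.pollInterval, nodeIsReady)
}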
@@ -54,8 +54,7 @@ var _ = Describe("Cluster controller", func() {
 	BeforeEach(func() {
 		scylla = testEnv.SingleRackCluster(ns)
 		scylla.Spec.GenericUpgrade = &scyllav1alpha1.GenericUpgradeSpec{
-			PollInterval:      &metav1.Duration{Duration: 200 * time.Millisecond},
-			ValidationTimeout: &metav1.Duration{Duration: 5 * time.Second},
+			PollInterval: &metav1.Duration{Duration: 200 * time.Millisecond},
 		}

 		Expect(testEnv.Create(ctx, scylla)).To(Succeed())
