
Harden new tsid hashing transport version #106897

Closed
wants to merge 1 commit

Conversation

@dnhatn (Member) commented Mar 29, 2024

This commit introduces a transport version for the new tsid hash, which was missing from #98023. At the time, the latest version was ASYNC_SEARCH_STATUS_SUPPORTS_KEEP_ALIVE = def(8_582_00_0), so I introduced a new version: TransportVersion TIME_SERIES_ID_HASHING = def(8_582_10_0). This change aims to minimize potential issues during upgrades from 8.13.0 to later versions in serverless.

public static final TransportVersion ASYNC_SEARCH_STATUS_SUPPORTS_KEEP_ALIVE = def(8_582_00_0);

Relates #98023
Relates #106878
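
For context, a transport version constant like this is typically consumed by onOrAfter checks during wire serialization. Below is a minimal sketch of that pattern, assuming the constant is added to TransportVersions as in the diff shown later in this review; the class and field names (TsidExample, tsidHash, legacyTsid) are invented for illustration and are not code from this PR:

    import java.io.IOException;

    import org.apache.lucene.util.BytesRef;
    import org.elasticsearch.TransportVersions;
    import org.elasticsearch.common.io.stream.StreamOutput;
    import org.elasticsearch.common.io.stream.Writeable;

    // Sketch only: choose what goes on the wire based on the peer's transport version.
    class TsidExample implements Writeable {
        private final BytesRef tsidHash;   // hashed _tsid, understood by newer nodes (hypothetical field)
        private final BytesRef legacyTsid; // pre-hashing _tsid, for older nodes (hypothetical field)

        TsidExample(BytesRef tsidHash, BytesRef legacyTsid) {
            this.tsidHash = tsidHash;
            this.legacyTsid = legacyTsid;
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            if (out.getTransportVersion().onOrAfter(TransportVersions.TIME_SERIES_ID_HASHING)) {
                out.writeBytesRef(tsidHash);   // peer is on or after the new version
            } else {
                out.writeBytesRef(legacyTsid); // fall back for peers before the new version
            }
        }
    }

The point of slotting TIME_SERIES_ID_HASHING between 8_582_00_0 and the next server version is that nodes can make exactly this kind of per-connection decision during a rolling upgrade.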

@dnhatn dnhatn added >non-issue :StorageEngine/Downsampling Downsampling (replacement for rollups) - Turn fine-grained time-based data into coarser-grained data labels Mar 29, 2024
@dnhatn dnhatn marked this pull request as ready for review March 29, 2024 05:40
@elasticsearchmachine (Collaborator)

Pinging @elastic/es-storage-engine (Team:StorageEngine)

@ldematte (Contributor) left a comment

Overall this seems OK; after a conversation on Slack I understand this is a refinement of the previous PR.
I'm sure you double-checked that this is the correct version. There is one change to be made, though: the version "part" you used is not for patching.

@@ -120,6 +120,7 @@ static TransportVersion def(int id) {
public static final TransportVersion DESIRED_NODE_VERSION_OPTIONAL_STRING = def(8_580_00_0);
public static final TransportVersion ML_INFERENCE_REQUEST_INPUT_TYPE_UNSPECIFIED_ADDED = def(8_581_00_0);
public static final TransportVersion ASYNC_SEARCH_STATUS_SUPPORTS_KEEP_ALIVE = def(8_582_00_0);
public static final TransportVersion TIME_SERIES_ID_HASHING = def(8_582_10_0);
@ldematte (Contributor) commented on the added line:

You should not use that part for a patch version -- it is reserved for serverless.
From the docs a few lines below:

     * M_NNN_SS_P
     *
     * M - The major version of Elasticsearch
     * NNN - The server version part
     * SS - The serverless version part. It should always be 00 here, it is used by serverless only.
     * P - The patch version part
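
To make the M_NNN_SS_P layout concrete, here is a small standalone sketch (not part of this PR; the class name is made up) that decomposes the proposed id 8_582_10_0 into its parts:

    // Illustrative only: split a transport version id into its M_NNN_SS_P parts.
    public class TransportVersionIdBreakdown {
        public static void main(String[] args) {
            int id = 8_582_10_0;                    // the id proposed in this PR (8582100)

            int patch      = id % 10;               // P   -> 0
            int serverless = (id / 10) % 100;       // SS  -> 10 (should be 00 outside serverless)
            int server     = (id / 1_000) % 1_000;  // NNN -> 582
            int major      = id / 1_000_000;        // M   -> 8

            System.out.printf("M=%d NNN=%d SS=%02d P=%d%n", major, server, serverless, patch);
            // Prints: M=8 NNN=582 SS=10 P=0
            // i.e. the increment landed in the serverless part, not the patch part.
        }
    }

Under the documented scheme, a pure patch bump on 8_582_00_0 would presumably be 8_582_00_1 rather than 8_582_10_0.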

@salvatore-campagna (Contributor) commented Mar 29, 2024

My understanding is that this is not really required, provided no customer in a Serverless deployment has ever used downsampling. If they had, they would have ended up with a corrupted cluster state and we would have seen the issue in Serverless. We also know that all Serverless deployments are already on 8.13 and we didn't see any issue there, which means that none of our Serverless customers were using downsampling before the latest Serverless release. Any customer using version 8.13, either on-prem or on Serverless, is already using the new cluster state and would not experience any issue.

Am I wrong?

@kkrik-es (Contributor)

Since there's no tangible benefit from submitting it, I'd say we skip it.

@dnhatn dnhatn closed this Mar 29, 2024
@dnhatn dnhatn removed >non-issue :StorageEngine/Downsampling Downsampling (replacement for rollups) - Turn fine-grained time-based data into coarser-grained data Team:StorageEngine v8.13.1 labels Mar 29, 2024
@dnhatn dnhatn deleted the 8.13-harden-version branch March 29, 2024 16:22