Fix SecurityException when HDFS Repository used against HA Namenodes #27196

Merged
merged 14 commits into elastic:master from jbaiera:hdfs-repo-ha-support on Dec 1, 2017

@jbaiera
Contributor

jbaiera commented Oct 31, 2017

When running the HDFS Repository Plugin against an HDFS cluster secured with Kerberos and configured with an HA Namenode topology, the repository will throw security exceptions and become unusable when the active Namenode fails over to the standby Namenode. The internal failover client code for HDFS attempts to establish a new connection to a different Namenode under the hood, but is unable to as it lacks the permissions to do so. After the client is created, all regular operations are done with restricted permissions to further police the client's behavior. These permissions do not allow the re-negotiation of the Namenode connection, which happens inside the client code.

This pull request attempts to sense when HA HDFS settings are present and remove permission restrictions during regular execution.

  • Adding integration test for HA-Namenode-Enabled HDFS, regular and secured.
  • Upgrade the MiniHDFS fixture to stand up an HA Namenode topology when configured with a nameservice.
  • Add debug statements to log the configurations being added to the underlying HDFS configuration at repository creation time.
  • Include a failing integration test for running an HDFS repository against an HDFS cluster, transitioning active Namenodes between repository operations.
  • Include configurations for a new HA HDFS Fixture, and a Secure HA HDFS Fixture in the build script.
  • Include new integration test tasks for both HA fixtures.
  • HDFS Repository will still be subject to the permissions in the policy file but will not restrict them further when executing against HA configurations.
  • Move doPrivileged blocks to HdfsSecurityContext so that it can decide when to restrict permissions.
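The shape of the change described above can be sketched as follows. This is a minimal illustration of the approach, not the plugin's actual code: the class, method names, and map-based settings handling are assumptions made for the example (the real plugin reads a Hadoop `Configuration`). The point is only that the restricted `doPrivileged` path is skipped when a nameservice is configured.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.Map;

// Hypothetical sketch: detect HA settings and skip the extra permission
// restriction when they are present.
public class HaAwareSecurityContextSketch {

    // HA is configured when a nameservice is defined ("dfs.nameservices").
    static boolean isHaEnabled(Map<String, String> hadoopSettings) {
        String nameservices = hadoopSettings.get("dfs.nameservices");
        return nameservices != null && !nameservices.isEmpty();
    }

    // Non-HA: run the operation under doPrivileged (where a restricted
    // AccessControlContext could be supplied). HA: rely on the policy file
    // alone, so the client can re-negotiate its Namenode connection on failover.
    static <T> T execute(Map<String, String> settings, PrivilegedAction<T> action) {
        if (isHaEnabled(settings)) {
            return action.run(); // no restriction beyond the policy file
        }
        return AccessController.doPrivileged(action); // restricted context omitted for brevity
    }

    public static void main(String[] args) {
        Map<String, String> haSettings = Map.of("dfs.nameservices", "ha-hdfs");
        System.out.println(isHaEnabled(haSettings)); // true
        System.out.println(isHaEnabled(Map.of()));   // false
        System.out.println(execute(haSettings, () -> "snapshot ok"));
    }
}
```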
@risdenk
Contributor

risdenk commented Nov 1, 2017

@jbaiera I got the following exception on my Mac.

...
    java.lang.RuntimeException: Unable to bind on specified streaming port in secure context. Needed 0, got 62869
        at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:112)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1485)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:847)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:483)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:442)
        at hdfs.MiniHDFS.main(MiniHDFS.java:119)

I think it is related to https://issues.apache.org/jira/browse/HDFS-9213

I had previously run into this and worked around it by copying the SecureDataNodeStarter class into the project and applying the patch from HDFS-9213. An example of that class is here: risdenk@fa3892c

I have not tried that with your PR yet.

@jbaiera
Contributor

jbaiera commented Nov 1, 2017

That's strange; it seems to have still passed CI. There may be something in your local environment that is keeping the minicluster from acquiring those privileged ports. Based on the issue you linked, it seems this is a known issue with MiniDFSCluster, despite the fact that we're already configuring it to not use the privileged ports...

I did some digging and found the following in the MiniDFSCluster class (excuse the decompiled variable names):

if(UserGroupInformation.isSecurityEnabled() && conf.get("dfs.data.transfer.protection") == null) {
    try {
        secureResources = SecureDataNodeStarter.getSecureResources(dnConf);
    } catch (Exception var27) {
        var27.printStackTrace();
    }
}

Setting dfs.data.transfer.protection would enable SASL authentication for the data transfer protocol and work around the minicluster's issues with privileged ports. I'll update the tests to use SASL auth for the data transfer protocol.
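The guard quoted above means the secure startup path (and thus the privileged-port binding) is skipped entirely once the property is set. A self-contained sketch of that decision, with a plain Map standing in for Hadoop's Configuration (class and method names here are illustrative, not the actual Hadoop code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of MiniDFSCluster's guard: secure datanode resources (privileged
// ports) are only requested when security is on AND
// dfs.data.transfer.protection is unset.
public class DataTransferProtectionSketch {

    static boolean needsSecureResources(boolean securityEnabled, Map<String, String> conf) {
        return securityEnabled && conf.get("dfs.data.transfer.protection") == null;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(needsSecureResources(true, conf));  // true: would try privileged ports

        // The workaround: enable SASL on the data transfer protocol.
        conf.put("dfs.data.transfer.protection", "authentication");
        System.out.println(needsSecureResources(true, conf));  // false: privileged ports avoided
    }
}
```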

jbaiera added some commits Oct 30, 2017

Sense HA HDFS settings and remove permission restrictions during regular execution.

Adding integration test for HA-Namenode-Enabled HDFS, regular and secured.
Upgrade the MiniHDFS fixture to stand up an HA Namenode topology when configured with a nameservice.
Add debug statements to log the configurations being added to the underlying HDFS configuration at repository creation time.
Include a failing integration test for running an HDFS repository against an HDFS cluster, transitioning active Namenodes between repository operations.
Include configurations for a new HA HDFS Fixture, and a Secure HA HDFS Fixture in the build script.
Include new integration test tasks for both HA fixtures.
HDFS Repository will still be subject to the permissions in the policy file
but will not restrict them further when executing against HA configurations.
Move doPrivileged blocks to HdfsSecurityContext so that it can decide when
to restrict permissions.
Set HDFS Repository tests to use SASL auth on datanode data transfer protocol to work around MiniDFSCluster's privileged port limitations
@jbaiera
Contributor

jbaiera commented Nov 3, 2017

Looks like CI has been broken on this PR. Gave it a kick.

@risdenk
Contributor

risdenk commented Nov 4, 2017

@jbaiera I looked through the test logs and don't see the integTestSecure* tests. Is Jenkins running the secure tests?

With the latest change I see:

2017-11-04 11:17:37,443 WARN  [main] server.KerberosAuthenticationHandler (KerberosAuthenticationHandler.java:init(284)) - HTTP principal: [hdfs/hdfs.build.elastic.co@BUILD.ELASTIC.CO] is invalid for SPNEGO!

The SASL auth requires HTTP/_HOST principals usually for SPNEGO.

As for the secure ports: I don't run the tests as root or a privileged user, so the HDFS code can't bind to those low port numbers. I think the JIRA I pointed out just bails out early, which makes it hard to test without a privileged user.

@jbaiera
Contributor

jbaiera commented Nov 6, 2017

Seems the CI build is not running the secure tests, most likely because those machines lack a Vagrant installation. There are definitely other CI boxes that do have Vagrant installed; they're just a different class. I'll double check to make sure that these tests are getting run on those.

Regarding the workaround - I'm hesitant to patch any of the HDFS tools for testing purposes. Even though it's a test-related artifact, I'd rather make sure that our tests work as-is to make future upgrades simpler.

The SASL auth requires HTTP/_HOST principals usually for SPNEGO.

We're not really using any of the HTTP facilities for HDFS in the tests at the moment. If this is causing errors in your build other than just warning messages, I can take a look at configuring the HTTP principals, but if not I'd rather not delay the PR further if it can be avoided.

@jbaiera
Contributor

jbaiera commented Nov 6, 2017

@rjernst Can you take a look at this when you're available?

@jbaiera jbaiera requested a review from rjernst Nov 6, 2017

@risdenk
Contributor

risdenk commented Nov 7, 2017

@jbaiera I haven't been able to get a secure test run to succeed. If there is a CI build that you have seen pass with the secure tests that would be great. I'm not sure the warning is related just wanted to point it out. I agree about not wanting to patch HDFS code.

Regardless, the changes look reasonable to me.

@jbaiera
Contributor

jbaiera commented Nov 7, 2017

@risdenk are you still seeing issues with privileged ports or is it a different problem? Are there any stack traces you can post here?

@risdenk
Contributor

risdenk commented Nov 7, 2017

@jbaiera - I couldn't find a stack trace the last few times I tried; I would have provided one if I could. Everything looks good, but it just comes back with a failure. Even with --debug there isn't a stack trace that I could find. I'll try again in the next few days. It could be something on my machine, if the tests pass in other environments.

As a side note, what is the command you run to run the tests (without running the whole ES test suite)?

@risdenk
Contributor

risdenk commented Nov 7, 2017

@jbaiera - I might have figured out my issue; I should have an update later tonight. I might have been running from the wrong directory instead of the top level of the elasticsearch directory structure.

@risdenk
Contributor

risdenk commented Nov 8, 2017

@jbaiera - I haven't been able to get the tests to pass on my Mac (I run into an unable to start HDFS fixture error). I ran on a Linux box and all the tests pass. I'll figure out the Mac test issue some other time. Sorry for the confusion.

@jbaiera
Contributor

jbaiera commented Nov 8, 2017

I run into a unable to start HDFS fixture

@risdenk You mentioned that you are not getting any exceptions. The build will abort the fixture if it takes longer than 30 seconds to stand up and initialize (i.e. write a port and pid file at the end of initialization). Are you seeing this timeout?

@risdenk
Contributor

risdenk commented Nov 10, 2017

@jbaiera definitely looks like 30 seconds in the logs. It doesn't say that it timed out though.

The below log is with --info. Even with --debug the logs are more verbose but don't say anything about a timeout.

...
  [log]
    2017-11-07 21:22:25,500 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
...
    2017-11-07 21:22:54,918 INFO  [main] mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:63639
-----------------------------------------
:plugins:repository-hdfs:secureHdfsFixture FAILED
:plugins:repository-hdfs:secureHdfsFixture (Thread[main,5,main]) completed. Took 30.414 secs.
@jbaiera
Contributor

jbaiera commented Nov 10, 2017

@risdenk re-running with --stacktrace might help as well. I think you might be running into an issue with macOS and its localhost domain name resolution. The last time people reported the fixture not working on a Mac, it was fixed by mapping 127.0.0.1 and ::1 to localhost in their /etc/hosts file.
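For reference, the mapping in question is the standard pair of loopback entries in /etc/hosts:

```
127.0.0.1   localhost
::1         localhost
```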

@risdenk
Contributor

risdenk commented Nov 10, 2017

Both 127.0.0.1 and ::1 are mapped to localhost in my /etc/hosts. It looks like the HDFS fixture is starting correctly but just times out. Is there an easy way to increase the timeout to see if that helps?

@risdenk
Contributor

risdenk commented Nov 10, 2017

For reference, I modified https://github.com/elastic/elasticsearch/blob/master/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/AntFixture.groovy#L132 from 30 seconds to 60. The tests seem to get further. Currently looking into why the secure tests seem to stall when talking to HDFS.

@risdenk
Contributor

risdenk commented Nov 10, 2017

@jbaiera - Finally got closer to figuring out what is happening:

At this point I have the following tests working:

  • ./gradlew integTestSecure --info
  • ./gradlew integTestSecureHa --info

Thanks so much for your patience on this.

@jbaiera
Contributor

jbaiera commented Nov 10, 2017

@risdenk Hey man, yeah, no problem! We also appreciate your feedback and work on verifying the fix on this as well.

@jbaiera
Contributor

jbaiera commented Nov 16, 2017

Thanks @ywelsch, I collapsed a good chunk of the build file down. Let me know if we're all set to go on this.

@ywelsch

Thanks, LGTM. I've left one small suggestion, which you can address at will. Please also add version labels to the PR.

@jbaiera jbaiera added the v7.0.0 label Dec 1, 2017

@jbaiera
Contributor

jbaiera commented Dec 1, 2017

Thanks all, I'll work on backporting this to the appropriate branches. Will update the tags to reflect which ones receive the fix.

@jbaiera jbaiera merged commit e16f127 into elastic:master Dec 1, 2017

2 checks passed

CLA: Commit author is a member of Elasticsearch
elasticsearch-ci: Build finished.

@jbaiera jbaiera deleted the jbaiera:hdfs-repo-ha-support branch Dec 1, 2017

jbaiera added a commit that referenced this pull request Dec 4, 2017

Fix SecurityException when HDFS Repository used against HA Namenodes (#27196)

* Sense HA HDFS settings and remove permission restrictions during regular execution.

This PR adds integration tests for HA-enabled HDFS deployments, both regular and secured.
The MiniHDFS fixture has been updated to optionally run in HA mode. A new test suite has
been added for reproducing the effects of a Namenode failing over during regular repository
usage. Going forward, the HDFS Repository will still be subject to its self-imposed permission
restrictions during normal use, but will no longer restrict them when running against an
HA-enabled HDFS cluster. Instead, the plugin will rely on the provided security policy and not
restrict the permissions further, so that the client's transparent failover to a different
Namenode does not raise security exceptions. Additionally, we are now testing secure mode
with SASL-based wire encryption of data between Elasticsearch and HDFS. This includes a
missing library (commons-codec) needed to support this change.

jbaiera added a commit that referenced this pull request Dec 4, 2017

Fix SecurityException when HDFS Repository used against HA Namenodes (#27196)

jbaiera added a commit that referenced this pull request Dec 4, 2017

Fix SecurityException when HDFS Repository used against HA Namenodes (#27196)

jbaiera added a commit that referenced this pull request Dec 4, 2017

Fix SecurityException when HDFS Repository used against HA Namenodes (#27196)
@risdenk
Contributor

risdenk commented Dec 5, 2017

Thanks @jbaiera!

@risdenk
Contributor

risdenk commented Dec 14, 2017

@jbaiera - I just saw ES 6.1 was released. It looks like the v6.1.x tag was missed for this?

git branch -r --contains f1ba986
  origin/6.1

I also don't see a v6.1.0 tag on GitHub to check whether the tag contains the commit.

@risdenk
Contributor

risdenk commented Dec 14, 2017

I did some quick sleuthing and found that PR 27652 is in the release notes for 6.1.0 (https://www.elastic.co/guide/en/elasticsearch/reference/current/release-notes-6.1.0.html) and is the next commit (ea588e5) on the 6.1 branch after this issue's commit (f1ba986).

I think that means this change is in ES 6.1.0 but isn't in the release notes.

@jasontedor jasontedor added the v6.1.0 label Dec 14, 2017

@jbaiera
Contributor

jbaiera commented Dec 14, 2017

@risdenk Thanks for pointing that out. It looks like this slipped into 6.1.0 instead of 6.1.1. I'll see about getting an update to the release notes out.
