
ES v5.0.1 throws java.lang.SecurityException while snapshotting #22156

Closed

ervinyang opened this issue Dec 14, 2016 · 28 comments
Labels
>bug :Distributed/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs

Comments

@ervinyang

ervinyang commented Dec 14, 2016

Elasticsearch version: 5.0.1

Plugins installed: repository-hdfs

JVM version:

java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)

OS version:

CentOS release 6.7 (Final)
Linux version 2.6.32-573.26.1.el6.x86_64 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Wed May 4 00:57:44 UTC 2016

Description of the problem including expected versus actual behavior:
When I create the repository, ES responds with

{
  "acknowledged": true
}

but when I create a snapshot of an index, it throws an exception:

[2016-12-12T11:38:04,417][WARN ][r.suppressed             ] path: /_snapshot/my_hdfs_repo/20161209-snapshot, params: {repository=my_hdfs_repo, snapshot=20161209-snapshot}
org.elasticsearch.transport.RemoteTransportException: [node-2][10.90.6.234:9340][cluster:admin/snapshot/create]
Caused by: org.elasticsearch.repositories.RepositoryException: [my_hdfs_repo] could not read repository data from index blob
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:751) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_92]
Caused by: java.io.IOException: com.google.protobuf.ServiceException: java.security.AccessControlException: access denied ("javax.security.auth.PrivateCredentialPermission" "org.apache.hadoop.security.Credentials" "read")
        at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[?:?]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:580) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_92]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
        at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]
        at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_92]
        at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_92]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:849) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:818) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:721) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]
Caused by: org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper: service_exception: java.security.AccessControlException: access denied ("javax.security.auth.PrivateCredentialPermission" "org.apache.hadoop.security.Credentials" "read")
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:243) ~[?:?]
        at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_92]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
        at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]
        at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_92]
        at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_92]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:849) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:818) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:721) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]
Caused by: java.lang.SecurityException: access denied ("javax.security.auth.PrivateCredentialPermission" "org.apache.hadoop.security.Credentials" "read")
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_92]
        at java.security.AccessControlContext.checkPermission2(AccessControlContext.java:538) ~[?:1.8.0_92]
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:481) ~[?:1.8.0_92]
        at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_92]
        at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_92]
        at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1414) ~[?:1.8.0_92]
        at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372) ~[?:1.8.0_92]
        at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767) ~[?:1.8.0_92]
        at org.apache.hadoop.security.UserGroupInformation.getCredentialsInternal(UserGroupInformation.java:1499) ~[?:?]
        at org.apache.hadoop.security.UserGroupInformation.getTokens(UserGroupInformation.java:1464) ~[?:?]
        at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:436) ~[?:?]
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519) ~[?:?]
        at org.apache.hadoop.ipc.Client.call(Client.java:1446) ~[?:?]
        at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[?:?]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[?:?]
        at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_92]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
        at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]
        at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]
        at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_92]
        at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_92]
        at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]
        at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:849) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:818) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:721) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_92]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_92]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_92]

Steps to reproduce:
1. Create the repository:

PUT /_snapshot/my_backup
{
  "type": "hdfs",
  "settings": {
    "path": "/path/on/hadoop",
    "uri": "hdfs://hadoop_cluster_domain:[port]",
    "conf_location": "/hadoop/hdfs-site.xml,/hadoop/core-site.xml",
    "user": "hadoop"
  }
}

2. Snapshot an index:

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true

3. The exception shown above is thrown.
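
As an aside, the repository can also be checked directly, without creating a snapshot, using the standard verify API (repository name taken from step 1):

POST /_snapshot/my_backup/_verify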

@clintongormley

@jbaiera could you take a look at this please?

@ervinyang
Author

Dear @jbaiera @clintongormley, have you fixed the bug, or should I provide more information to help you solve it?

@jbaiera
Member

jbaiera commented Dec 18, 2016

@ervinyang Could you provide some information about how you have HDFS set up? (distribution, version, security on/off)

Thanks!

@ervinyang
Author

ervinyang commented Dec 18, 2016

@jbaiera

  • hdfs-version: 2.2.0
  • plugin-security.policy:
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

grant {
  // Allow connecting to the internet anywhere
  permission java.net.SocketPermission "*", "connect,resolve";
  
  // Basic permissions needed for Lucene to work:
  permission java.util.PropertyPermission "*", "read,write";
  permission java.lang.reflect.ReflectPermission "*";
  permission java.lang.RuntimePermission "*";

  // These two *have* to be spelled out separately
  permission java.lang.management.ManagementPermission "control";
  permission java.lang.management.ManagementPermission "monitor";

  // Solr needs those:
  permission java.net.NetPermission "*";
  permission java.sql.SQLPermission "*";
  permission java.util.logging.LoggingPermission "control";
  permission javax.management.MBeanPermission "*", "*";
  permission javax.management.MBeanServerPermission "*";
  permission javax.management.MBeanTrustPermission "*";
  permission javax.security.auth.AuthPermission "*";
  permission javax.security.auth.PrivateCredentialPermission "org.apache.hadoop.security.Credentials * \"*\"", "read";
  permission java.security.SecurityPermission "putProviderProperty.SaslPlainServer";
  permission java.security.SecurityPermission "insertProvider.SaslPlainServer";
  permission javax.xml.bind.JAXBPermission "setDatatypeConverter";
  
  // TIKA uses BouncyCastle and that registers new provider for PDF parsing + MSOffice parsing. Maybe report as bug!
  permission java.security.SecurityPermission "putProviderProperty.BC";
  permission java.security.SecurityPermission "insertProvider.BC";

  // Needed for some things in DNS caching in the JVM
  permission java.security.SecurityPermission "getProperty.networkaddress.cache.ttl";
  permission java.security.SecurityPermission "getProperty.networkaddress.cache.negative.ttl";

  // SSL related properties for Solr tests
  permission java.security.SecurityPermission "getProperty.ssl.*";
};

Thanks!

@mrauter

mrauter commented Dec 22, 2016

I'm facing the same problem. If I grant all permissions to the plugin, it works, so this seems to happen because of missing grant permissions (for org.apache.hadoop.security.Credentials)?

@floydlin

@mrauter What do you mean by "If I grant all permissions to the plugin"? Could you paste the plugin-security.policy file? We are hitting this too.

@mrauter

mrauter commented Dec 27, 2016

@tangfl
permission java.security.AllPermission;
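
For context, the workaround mrauter describes amounts to a plugin-security.policy containing only a blanket grant, sketched below. It disables the sandbox for the plugin entirely, so treat it as a diagnostic aid rather than a fix:

grant {
  // WARNING: grants the plugin every permission; useful only to confirm
  // that the failure is permission-related
  permission java.security.AllPermission;
};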

@Tetraneon

Tetraneon commented Dec 28, 2016

Hi everyone,

Same problem:

  • It works on my VM prototype (Ubuntu 16.04 / Elasticsearch 5.0.2 from zip) with 2 nodes and a repository on a Hubic file system (sudo hubicfuse /mnt/hubic -o noauto_cache,sync_read,allow_other,uid=XXX,gid=XXX,nonempty).
  • But with my VPS setup (Ubuntu 16.04, 2 nodes from the Elastic packages), it's a drama... The first snapshot begins: I can see it in /mnt/hubic/..., but when it ends it's impossible to list the snapshots or take a new one.

Curl
curl -XPUT 'http://XXX.XXX.XXX.XXX:9200/_snapshot/sauvegarde/all?pretty'

Answer:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_exception",
        "reason" : "[sauvegarde] could not read repository data from index blob"
      }
    ],
    "type" : "repository_exception",
    "reason" : "[sauvegarde] could not read repository data from index blob",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Repérage non permis"
    }
  },
  "status" : 500
}

Log ("Repérage non permis" is French for roughly "seek not permitted"):
[2016-12-28T11:30:50,215][WARN ][r.suppressed ] path: /_snapshot/sauvegarde/all, params: {pretty=, repository=sauvegarde, snapshot=all}
org.elasticsearch.transport.RemoteTransportException: [XX-XXXXXXX][XXX.XXX.XXX.XXX:9300][cluster:admin/snapshot/create]
Caused by: org.elasticsearch.repositories.RepositoryException: [sauvegarde] could not read repository data from index blob
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:751) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.1.1.jar:5.1.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: java.io.IOException: Repérage non permis
        at sun.nio.ch.FileChannelImpl.position0(Native Method) ~[?:?]
        at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:263) ~[?:?]
        at sun.nio.ch.ChannelInputStream.available(ChannelInputStream.java:116) ~[?:?]
        at java.io.BufferedInputStream.read(BufferedInputStream.java:353) ~[?:1.8.0_111]
        at java.io.FilterInputStream.read(FilterInputStream.java:107) ~[?:1.8.0_111]
        at org.elasticsearch.common.io.Streams.copy(Streams.java:76) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.common.io.Streams.copy(Streams.java:57) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:737) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.snapshots.SnapshotsService.createSnapshot(SnapshotsService.java:226) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:82) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.admin.cluster.snapshots.create.TransportCreateSnapshotAction.masterOperation(TransportCreateSnapshotAction.java:41) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:86) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.doRun(TransportMasterNodeAction.java:170) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.1.1.jar:5.1.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_111]

I don't know how to produce the plugin-security.policy extract.

@Steven-Z-Yang

@mrauter Setting "permission java.security.AllPermission;" also throws the exception.

@netmanito

Hi everyone,
I've managed to reproduce the same error when trying to create a snapshot on HDFS from Elasticsearch.
Tried with ES 5.1.1 and repository-hdfs installed through elasticsearch-plugin on CentOS 7.
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
It worked the first time and I was able to create a first snapshot.
Once done, I couldn't access it or create any other new snapshot, and the error logs were just the same every time.

Caused by: java.security.AccessControlException: access denied ("javax.security.auth.PrivateCredentialPermission" "org.apache.hadoop.security.Credentials" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_111]

I tried setting ALL permissions in the Java policy, but it's as if it doesn't read the config or just ignores it.

If you need any more info or tests, I'm happy to help.
Regards

@kukuxiahuni

same problem

@wyzssw

wyzssw commented Mar 1, 2017

so bad

@rjernst
Member

rjernst commented Mar 1, 2017

@netmanito Can you please paste the entire stack trace you see in the logs?

@netmanito

Hi, for the GET request GET _snapshot/hdfs_repository/syslog_test, I get the following message:

[2017-03-01T08:23:44,286][WARN ][r.suppressed ] path: /_snapshot/hdfs_repository/syslog_test, params: {repository=hdfs_repository, snapshot=syslog_test}
org.elasticsearch.repositories.RepositoryException: [hdfs_repository] could not read repository data from index blob
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:796) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.snapshots.SnapshotsService.getRepositoryData(SnapshotsService.java:142) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:91) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction.masterOperation(TransportGetSnapshotsAction.java:50) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:87) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.doRun(TransportMasterNodeAction.java:167) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.1.jar:5.2.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: java.io.IOException: com.google.protobuf.ServiceException: java.security.AccessControlException: access denied ("javax.security.auth.PrivateCredentialPermission" "org.apache.hadoop.security.Credentials" "read")
at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:580) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]
at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_111]
at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_111]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:917) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:900) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:753) ~[elasticsearch-5.2.1.jar:5.2.1]
... 10 more
Caused by: com.google.protobuf.ServiceException: java.security.AccessControlException: access denied ("javax.security.auth.PrivateCredentialPermission" "org.apache.hadoop.security.Credentials" "read")
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:243) ~[?:?]
at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]
at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_111]
at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_111]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:917) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:900) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:753) ~[elasticsearch-5.2.1.jar:5.2.1]
... 10 more
Caused by: java.security.AccessControlException: access denied ("javax.security.auth.PrivateCredentialPermission" "org.apache.hadoop.security.Credentials" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_111]
at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_111]
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_111]
at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1414) ~[?:1.8.0_111]
at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372) ~[?:1.8.0_111]
at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767) ~[?:1.8.0_111]
at org.apache.hadoop.security.UserGroupInformation.getCredentialsInternal(UserGroupInformation.java:1499) ~[?:?]
at org.apache.hadoop.security.UserGroupInformation.getTokens(UserGroupInformation.java:1464) ~[?:?]
at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:436) ~[?:?]
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519) ~[?:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1446) ~[?:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[?:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[?:?]
at com.sun.proxy.$Proxy33.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
at com.sun.proxy.$Proxy34.getListing(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077) ~[?:?]
at org.apache.hadoop.fs.Hdfs.listStatus(Hdfs.java:254) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1798) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util$1.next(FileContext.java:1794) ~[?:?]
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1800) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1759) ~[?:?]
at org.apache.hadoop.fs.FileContext$Util.listStatus(FileContext.java:1718) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:145) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer$6.run(HdfsBlobContainer.java:142) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore$4.run(HdfsBlobStore.java:136) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_111]
at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_111]
at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:133) ~[?:?]
at org.elasticsearch.repositories.hdfs.HdfsBlobContainer.listBlobsByPrefix(HdfsBlobContainer.java:142) ~[?:?]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.listBlobsToGetLatestIndexId(BlobStoreRepository.java:917) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.latestIndexBlobId(BlobStoreRepository.java:900) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.getRepositoryData(BlobStoreRepository.java:753) ~[elasticsearch-5.2.1.jar:5.2.1]
... 10 more

Also, if I restart any node, there's a connection error on startup although connectivity is correct.
Here's the pastebin link http://pastebin.com/GW8TDymK

Regards

@wyzssw

wyzssw commented Mar 12, 2017

I resolved this problem by modifying the plugin source.

plugin-security.policy

grant {
  // Hadoop UserGroupInformation, HdfsConstants, PipelineAck clinit
  permission java.lang.RuntimePermission "getClassLoader";

  // UserGroupInformation (UGI) Metrics clinit
  permission java.lang.RuntimePermission "accessDeclaredMembers";
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";

  // org.apache.hadoop.util.StringUtils clinit
  permission java.util.PropertyPermission "*", "read,write";

  // org.apache.hadoop.util.ShutdownHookManager clinit
  permission java.lang.RuntimePermission "shutdownHooks";

  // JAAS is used always, we use a fake subject, hurts nobody
  permission javax.security.auth.AuthPermission "getSubject";
  permission javax.security.auth.AuthPermission "doAs";
  permission javax.security.auth.AuthPermission "modifyPrivateCredentials";
  permission javax.security.auth.PrivateCredentialPermission "org.apache.hadoop.security.Credentials * \"*\"", "read";
};

In HdfsBlobStore.java, remove new ReflectPermission("suppressAccessChecks"), new AuthPermission("modifyPrivateCredentials"), and new SocketPermission("*", "connect") from the doPrivileged call, so that execute becomes:

  <V> V execute(Operation<V> operation) throws IOException {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // unprivileged code such as scripts do not have SpecialPermission
            sm.checkPermission(new SpecialPermission());
        }
        if (closed) {
            throw new AlreadyClosedException("HdfsBlobStore is closed: " + this);
        }
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<V>() {
                @Override
                public V run() throws IOException {
                    return operation.run(fileContext);
                }
            });
        } catch (PrivilegedActionException pae) {
            throw (IOException) pae.getException();
        }
    }

@YDHui

YDHui commented Mar 16, 2017

I solved it by adding a Java Security Manager setting in jvm.options and modifying "plugin-security.policy":

grant {
  // Hadoop UserGroupInformation, HdfsConstants, PipelineAck clinit
  permission java.lang.RuntimePermission "getClassLoader";

  // UserGroupInformation (UGI) Metrics clinit
  permission java.lang.RuntimePermission "accessDeclaredMembers";
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";

  // org.apache.hadoop.util.StringUtils clinit
  permission java.util.PropertyPermission "*", "read,write";

  // org.apache.hadoop.util.ShutdownHookManager clinit
  permission java.lang.RuntimePermission "shutdownHooks";

  // JAAS is used always, we use a fake subject, hurts nobody
  permission javax.security.auth.AuthPermission "getSubject";
  permission javax.security.auth.AuthPermission "doAs";
  permission javax.security.auth.AuthPermission "modifyPrivateCredentials";
  permission java.security.AllPermission;
  permission javax.security.auth.PrivateCredentialPermission "org.apache.hadoop.security.Credentials * \"*\"", "read";
};

My policy file path is /data/soft/elasticsearch-5.0.1/plugins/repository-hdfs/plugin-security.policy,
so I added -Djava.security.policy=file:///data/soft/elasticsearch-5.0.1/plugins/repository-hdfs/plugin-security.policy to "/data/soft/elasticsearch-5.0.1/config/jvm.options",
then restarted Elasticsearch and ran:

curl -XPUT http://localhost:9200/_snapshot/my_hdfs_repository/snapshot_1?wait_for_completion=true

The result:

{"snapshot":{"snapshot":"snapshot_1","uuid":"SprY4aHXTE6crhi5duJGAQ","version_id":5000199,"version":"5.0.1","indices":["ttst","test"],"state":"SUCCESS","start_time":"2017-03-16T07:23:54.568Z","start_time_in_millis":1489649034568,"end_time":"2017-03-16T07:24:03.961Z","end_time_in_millis":1489649043961,"duration_in_millis":9393,"failures":[],"shards":{"total":10,"failed":0,"successful":10}}}

Done!
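
A note for anyone applying this: config/jvm.options takes one JVM flag per line, and with a single "=" the java.security.policy property loads the named policy file in addition to the default policy rather than replacing it. The addition is a single line (path as in the comment above):

-Djava.security.policy=file:///data/soft/elasticsearch-5.0.1/plugins/repository-hdfs/plugin-security.policy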

@jasontedor
Member

@YDHui You have included permission java.security.AllPermission; which is a security issue (it grants everything) and your other permissions are redundant.
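
For contrast, the narrowest grant targeting the permission named in the stack traces would be the sketch below. It is illustrative only: the original report already carried this line and still failed, and per the rest of this thread the reliable fix was the code change in #23439 rather than a policy tweak.

grant {
  // the exact permission named in the AccessControlException above
  permission javax.security.auth.PrivateCredentialPermission "org.apache.hadoop.security.Credentials * \"*\"", "read";
};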

@MrGarry2016

Any update on this? I got the same problem.

@jasontedor
Member

There's an open PR for it: #23439. This is not a simple issue.

@atomic77

atomic77 commented Apr 9, 2017

Not sure if it's helpful at this point, but if you need an easy way to reproduce this problem, I ran into this right away with an out-of-the-box hadoop docker image - in fact, all I was looking to do was to give the HDFS plugin a quick test drive.

@jasontedor jasontedor removed the help wanted adoptme label Apr 9, 2017
@MrGarry2016

Now that Elasticsearch v5.4.0 is out, #23439 seems not to have helped.
How to reproduce:

  1. ES 5.4.0 installed
  2. repository-hdfs-5.4.0.zip installed
  3. PUT /_snapshot/my_hdfs_repository (with the necessary payload; see the sketch below)
  4. POST /_snapshot/my_hdfs_repository/_verify still throws the same exception
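
For step 3, the payload would be along these lines (the URI and path here are placeholders, mirroring the settings from the original report):

PUT /_snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020",
    "path": "elasticsearch/repositories/my_hdfs_repository"
  }
}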

@clintongormley

@MrGarry2016 this is fixed in 5.4.1

@adkhare

adkhare commented May 15, 2017

@clintongormley We want to install version 5.4.1 of the plugin. How can we install that specific version of the HDFS plugin on a running 5.4.0 ES cluster?

@clintongormley

you have to wait until it is released

@jasontedor
Member

@adkhare Also, you simply can't install version 5.4.1 of the plugin (when it is released) on a 5.4.0 node.
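
For anyone planning the move: the plugin version must match the node version exactly, so the upgrade sequence is to remove the plugin, upgrade the node, then reinstall, roughly (assuming the standard archive layout):

bin/elasticsearch-plugin remove repository-hdfs
# upgrade Elasticsearch itself to 5.4.1, then install the matching plugin build
bin/elasticsearch-plugin install repository-hdfs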

@MaxFlanders

Is there a way to track when this is released?? If I subscribe to this thread would that be sufficient??

@jasontedor
Member

jasontedor commented May 16, 2017

Is there a way to track when this is released?? If I subscribe to this thread would that be sufficient??

@326TimesBetter Yes: subscribing to this thread is not sufficient, but you can track releases on the Elastic website.

@abdbaddude

@YDHui Used your solution and it worked, except I didn't have to set "permission java.security.AllPermission;" in the plugin-security.policy, thereby not compromising the entire security definitions. Thanks.

By the way, my system configuration is:
OS CentOS 7.3.1
Docker 17.05.0-ce
ES 5.4.1
Hadoop/HDFS 2.8

N.B. I wonder why the plugin-security.policy file was not detected by default; the java.security.policy flag in the jvm.options file did the trick, i.e. the line:

-Djava.security.policy=file:///path/to/plugins/repository-hdfs/plugin-security.policy

@clintongormley clintongormley added :Distributed/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs and removed :Plugin Repository HDFS labels Feb 14, 2018