
java.lang.SecurityException: "putProviderProperty.SaslPlainServer" and "insertProvider.SaslPlainServer" for Plugin Repository HDFS #26868

Closed

Description

@risdenk

Elasticsearch version (bin/elasticsearch --version):
Version: 5.6.2, Build: 57e20f3/2017-09-23T13:16:45.703Z, JVM: 1.8.0_121
and
Version: 6.0.0-rc1, Build: b9c0df2/2017-09-25T19:11:45.815Z, JVM: 1.8.0_121

Plugins installed:

  • repository-hdfs
  • x-pack

JVM version (java -version):
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-tdc1-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

OS version (uname -a if on a Unix-like system):
Linux HOSTNAME 3.0.101-0.113.TDC.1.R.0-default #1 SMP Fri Dec 9 04:51:20 PST 2016 (ca32437) x86_64 x86_64 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:
The Elasticsearch repository-hdfs plugin fails to create snapshot repositories. The JVM security manager denies two permissions in turn: java.security.AccessControlException: access denied ("java.security.SecurityPermission" "putProviderProperty.SaslPlainServer") and java.security.AccessControlException: access denied ("java.security.SecurityPermission" "insertProvider.SaslPlainServer").

I worked around each denial in turn by granting the permission in a java.policy file and passing it to Elasticsearch on startup (see the sketch below). The second permission error only surfaced after the first one had been granted.
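
For reference, a minimal sketch of the policy additions (the grant block is reconstructed from the two denied permissions; the file name and paths are placeholders, not my actual values):

```
// extra.policy (hypothetical name)
// Grants the two permissions the security manager denied. A global grant
// is shown for simplicity; scoping it to the plugin codeBase would be tighter.
grant {
  permission java.security.SecurityPermission "putProviderProperty.SaslPlainServer";
  permission java.security.SecurityPermission "insertProvider.SaslPlainServer";
};
```

Passed to Elasticsearch with something like ES_JAVA_OPTS="-Djava.security.policy=file:///path/to/extra.policy"; a single = appends the file to the default policy rather than replacing it.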

Steps to reproduce:

  1. Install Elasticsearch
  2. Install repository-hdfs plugin
  3. Create Elasticsearch snapshot repository pointing to HDFS
  4. Try to create the repository (I am still validating the complete reproduction steps); a sketch of the request follows this list.
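
For completeness, the repository creation request looks roughly like this (repository name, URI, and path are placeholders, not the values from my cluster):

```
PUT _snapshot/REPOSITORY
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "/user/elasticsearch/snapshots"
  }
}
```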

Provide logs (if relevant):
The stacktraces below are from 5.6.2. I can grab them from 6.0.0-rc1 if necessary.

Stacktrace from missing security policy permission putProviderProperty.SaslPlainServer

[2017-10-03T11:24:39,212][WARN ][o.e.r.h.HdfsRepository   ] Hadoop authentication method is set to [SIMPLE], but a Kerberos principal is specified. Continuing with [KERBEROS] authentication.
[2017-10-03T11:24:39,246][WARN ][o.e.r.RepositoriesService] [master-HOSTNAME] failed to create repository [hdfs][REPOSITORY]
java.security.AccessControlException: access denied ("java.security.SecurityPermission" "putProviderProperty.SaslPlainServer")
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_121]
	at java.security.AccessControlContext.checkPermission2(AccessControlContext.java:538) ~[?:1.8.0_121]
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:481) ~[?:1.8.0_121]
	at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_121]
	at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_121]
	at java.lang.SecurityManager.checkSecurityAccess(SecurityManager.java:1759) ~[?:1.8.0_121]
	at java.security.Provider.check(Provider.java:658) ~[?:1.8.0_121]
	at java.security.Provider.put(Provider.java:317) ~[?:1.8.0_121]
	at org.apache.hadoop.security.SaslPlainServer$SecurityProvider.<init>(SaslPlainServer.java:41) ~[?:?]
	at org.apache.hadoop.security.SaslRpcServer.init(SaslRpcServer.java:181) ~[?:?]
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:581) ~[?:?]
	at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343) ~[?:?]
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:170) ~[?:?]
	at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider$DefaultProxyFactory.createProxy(ConfiguredFailoverProxyProvider.java:67) ~[?:?]
	at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:151) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.failover(RetryInvocationHandler.java:221) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processRetryInfo(RetryInvocationHandler.java:147) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:140) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]
	at com.sun.proxy.$Proxy34.mkdirs(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2525) ~[?:?]
	at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:311) ~[?:?]
	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738) ~[?:?]
	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734) ~[?:?]
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
	at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:741) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:65) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:62) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.lambda$execute$0(HdfsBlobStore.java:132) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]
	at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_121]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:129) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.mkdirs(HdfsBlobStore.java:62) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.<init>(HdfsBlobStore.java:55) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsRepository.doStart(HdfsRepository.java:116) ~[?:?]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.repositories.RepositoriesService.createRepository(RepositoriesService.java:384) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.repositories.RepositoriesService.applyClusterState(RepositoriesService.java:303) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:814) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:768) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.6.2.jar:5.6.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-10-03T11:24:39,256][WARN ][o.e.r.RepositoriesService] [master-HOSTNAME] failed to create repository [HDFSREPOSITORY]
org.elasticsearch.repositories.RepositoryException: [HDFSREPOSITORY] failed to create repository
	at org.elasticsearch.repositories.RepositoriesService.createRepository(RepositoriesService.java:388) ~[elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.repositories.RepositoriesService.applyClusterState(RepositoriesService.java:303) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:814) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:768) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.6.2.jar:5.6.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.security.AccessControlException: access denied ("java.security.SecurityPermission" "putProviderProperty.SaslPlainServer")
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_121]
	at java.security.AccessControlContext.checkPermission2(AccessControlContext.java:538) ~[?:1.8.0_121]
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:481) ~[?:1.8.0_121]
	at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_121]
	at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_121]
	at java.lang.SecurityManager.checkSecurityAccess(SecurityManager.java:1759) ~[?:1.8.0_121]
	at java.security.Provider.check(Provider.java:658) ~[?:1.8.0_121]
	at java.security.Provider.put(Provider.java:317) ~[?:1.8.0_121]
	at org.apache.hadoop.security.SaslPlainServer$SecurityProvider.<init>(SaslPlainServer.java:41) ~[?:?]
	at org.apache.hadoop.security.SaslRpcServer.init(SaslRpcServer.java:181) ~[?:?]
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:581) ~[?:?]
	at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343) ~[?:?]
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:170) ~[?:?]
	at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider$DefaultProxyFactory.createProxy(ConfiguredFailoverProxyProvider.java:67) ~[?:?]
	at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:151) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.failover(RetryInvocationHandler.java:221) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processRetryInfo(RetryInvocationHandler.java:147) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:140) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]
	at com.sun.proxy.$Proxy34.mkdirs(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2525) ~[?:?]
	at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:311) ~[?:?]
	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738) ~[?:?]
	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734) ~[?:?]
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
	at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:741) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:65) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:62) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.lambda$execute$0(HdfsBlobStore.java:132) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]
	at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_121]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:129) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.mkdirs(HdfsBlobStore.java:62) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.<init>(HdfsBlobStore.java:55) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsRepository.doStart(HdfsRepository.java:116) ~[?:?]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.repositories.RepositoriesService.createRepository(RepositoriesService.java:384) ~[elasticsearch-5.6.2.jar:5.6.2]
	... 13 more
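
Both denials come from the same Hadoop code path: SaslRpcServer.init first constructs SaslPlainServer.SecurityProvider, whose constructor calls Provider.put (triggering the putProviderProperty.SaslPlainServer check), and then registers it via Security.addProvider (triggering the insertProvider.SaslPlainServer check). That ordering is why the second denial only shows up once the first permission is granted. A simplified sketch of that path, reconstructed from the stack trace rather than copied from the Hadoop source:

```java
import java.security.Provider;
import java.security.Security;

// Simplified reconstruction of SaslPlainServer.java:41 and
// SaslRpcServer.java:181 from the stack trace above; not verbatim Hadoop code.
public class SaslPlainSketch {
    static class SecurityProvider extends Provider {
        SecurityProvider() {
            super("SaslPlainServer", 1.0, "SASL PLAIN mechanism server");
            // Provider.put() performs the "putProviderProperty.SaslPlainServer"
            // security check -- the first AccessControlException.
            put("SaslServerFactory.PLAIN",
                "org.apache.hadoop.security.SaslPlainServer$SaslPlainServerFactory");
        }
    }

    static void init() {
        // Security.addProvider() performs the "insertProvider.SaslPlainServer"
        // check -- the second AccessControlException, reachable only after
        // the constructor above succeeds.
        Security.addProvider(new SecurityProvider());
    }
}
```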

Stacktrace from missing security policy permission insertProvider.SaslPlainServer

[2017-10-03T11:51:58,287][WARN ][o.e.r.h.HdfsRepository   ] Hadoop authentication method is set to [SIMPLE], but a Kerberos principal is specified. Continuing with [KERBEROS] authentication.
[2017-10-03T11:51:58,320][WARN ][o.e.r.RepositoriesService] [master-HOSTNAME] failed to create repository [hdfs][REPOSITORY]
java.security.AccessControlException: access denied ("java.security.SecurityPermission" "insertProvider.SaslPlainServer")
	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_121]
	at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_121]
	at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_121]
	at java.lang.SecurityManager.checkSecurityAccess(SecurityManager.java:1759) ~[?:1.8.0_121]
	at java.security.Security.checkInsertProvider(Security.java:862) ~[?:1.8.0_121]
	at java.security.Security.insertProviderAt(Security.java:359) ~[?:1.8.0_121]
	at java.security.Security.addProvider(Security.java:403) ~[?:1.8.0_121]
	at org.apache.hadoop.security.SaslRpcServer.init(SaslRpcServer.java:181) ~[?:?]
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:581) ~[?:?]
	at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343) ~[?:?]
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:170) ~[?:?]
	at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider$DefaultProxyFactory.createProxy(ConfiguredFailoverProxyProvider.java:67) ~[?:?]
	at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:151) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.failover(RetryInvocationHandler.java:221) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processRetryInfo(RetryInvocationHandler.java:147) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:140) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]
	at com.sun.proxy.$Proxy34.mkdirs(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2525) ~[?:?]
	at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:311) ~[?:?]
	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738) ~[?:?]
	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734) ~[?:?]
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
	at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:741) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:65) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:62) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.lambda$execute$0(HdfsBlobStore.java:132) ~[?:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]
	at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_121]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:129) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.mkdirs(HdfsBlobStore.java:62) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsBlobStore.<init>(HdfsBlobStore.java:55) ~[?:?]
	at org.elasticsearch.repositories.hdfs.HdfsRepository.doStart(HdfsRepository.java:116) ~[?:?]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.repositories.RepositoriesService.createRepository(RepositoriesService.java:384) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.repositories.RepositoriesService.applyClusterState(RepositoriesService.java:303) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:814) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:768) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.6.2.jar:5.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.6.2.jar:5.6.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
	Suppressed: java.security.AccessControlException: access denied ("java.security.SecurityPermission" "insertProvider.SaslPlainServer")
		at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_121]
		at java.security.AccessControlContext.checkPermission2(AccessControlContext.java:538) ~[?:1.8.0_121]
		at java.security.AccessControlContext.checkPermission(AccessControlContext.java:481) ~[?:1.8.0_121]
		at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_121]
		at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_121]
		at java.lang.SecurityManager.checkSecurityAccess(SecurityManager.java:1759) ~[?:1.8.0_121]
		at java.security.Security.checkInsertProvider(Security.java:865) ~[?:1.8.0_121]
		at java.security.Security.insertProviderAt(Security.java:359) ~[?:1.8.0_121]
		at java.security.Security.addProvider(Security.java:403) ~[?:1.8.0_121]
		at org.apache.hadoop.security.SaslRpcServer.init(SaslRpcServer.java:181) ~[?:?]
		at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:581) ~[?:?]
		at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343) ~[?:?]
		at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:170) ~[?:?]
		at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider$DefaultProxyFactory.createProxy(ConfiguredFailoverProxyProvider.java:67) ~[?:?]
		at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:151) ~[?:?]
		at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.failover(RetryInvocationHandler.java:221) ~[?:?]
		at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processRetryInfo(RetryInvocationHandler.java:147) ~[?:?]
		at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:140) ~[?:?]
		at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107) ~[?:?]
		at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) ~[?:?]
		at com.sun.proxy.$Proxy34.mkdirs(Unknown Source) ~[?:?]
		at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2525) ~[?:?]
		at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:311) ~[?:?]
		at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738) ~[?:?]
		at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734) ~[?:?]
		at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) ~[?:?]
		at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:741) ~[?:?]
		at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:65) ~[?:?]
		at org.elasticsearch.repositories.hdfs.HdfsBlobStore$2.run(HdfsBlobStore.java:62) ~[?:?]
		at org.elasticsearch.repositories.hdfs.HdfsBlobStore.lambda$execute$0(HdfsBlobStore.java:132) ~[?:?]
		at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]
		at java.security.AccessController.doPrivileged(AccessController.java:713) ~[?:1.8.0_121]
		at org.elasticsearch.repositories.hdfs.HdfsBlobStore.execute(HdfsBlobStore.java:129) ~[?:?]
		at org.elasticsearch.repositories.hdfs.HdfsBlobStore.mkdirs(HdfsBlobStore.java:62) ~[?:?]
		at org.elasticsearch.repositories.hdfs.HdfsBlobStore.<init>(HdfsBlobStore.java:55) ~[?:?]
		at org.elasticsearch.repositories.hdfs.HdfsRepository.doStart(HdfsRepository.java:116) ~[?:?]
		at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.repositories.RepositoriesService.createRepository(RepositoriesService.java:384) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.repositories.RepositoriesService.applyClusterState(RepositoriesService.java:303) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:814) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:768) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.6.2.jar:5.6.2]
		at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.6.2.jar:5.6.2]
		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
		at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
