HCFS -put gives java.lang.NoSuchFieldError: INSTANCE #1080

Closed
wazy opened this issue Oct 8, 2019 · 15 comments

@wazy

wazy commented Oct 8, 2019

Describe the bug
I started a filer and tried to run -put as follows:
hdfs dfs -Djava.net.preferIPv4Stack=true -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem -libjars ./seaweedfs-hadoop2-client-1.1.6.jar -put /tmp/testing seaweedfs://10.49.74.155:8888/tmp/testing
And I get this exception
put: java.util.concurrent.ExecutionException: java.lang.NoSuchFieldError: INSTANCE

System Setup

  • Ran ./weed server -filer=true
  • Centos 7.5
  • version 30GB 1.43 linux amd64
  • leveldb2 is only thing enabled in filer.toml
[leveldb2]
enabled = true
dir = "/data0/weed/storage"

Expected behavior
I'd expect the file to be put into the FS.

I do see a reference to INSTANCE here, in the context of the SSL factory, which is where I believe this is coming from:
https://github.com/chrislusf/seaweedfs/blob/cb299dfaa279e14def8bf3f26816913213a91097/other/java/client/src/main/java/seaweedfs/client/FilerSslContext.java#L62


Okay, so the above was from a node with Cloudera CDH Hadoop, which is what I'd like to get working. Could this potentially be an issue with multiple jars on the path?

When I use a default Apache Hadoop download (both the HDFS 2 and 3 binaries) and run the same hdfs dfs -put command, I get the exception below (hdfs dfs -ls works against the SeaweedFS filer without issue):

put: java.util.concurrent.ExecutionException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)

The default Apache Hadoop issue with -put has something to do with the client getting the wrong URL for the volume server to write to. When I ran the -put on the same machine as the server/volumes/filer, it went through. I did try setting the public IP of the volume server when starting the weed server, but that didn't seem to help.

@chrislusf
Collaborator

chrislusf commented Oct 9, 2019

  1. I can not reproduce.
client$ hdfs version
Hadoop 3.1.1
Source code repository https://github.com/apache/hadoop -r 2b9a8c1d3a2caf1e733d57f346af3ff0d5ba529c
Compiled by leftnoteasy on 2018-08-02T04:26Z
Compiled with protoc 2.5.0
From source with checksum f76ac55e5b5ff0382a9f7df36a3ca5a0
This command was run using /usr/local/Cellar/hadoop/3.1.1/libexec/share/hadoop/common/hadoop-common-3.1.1.jar
  2. Need to set the IP address: "weed server -ip=xxxx".

@chrislusf
Collaborator

The InsecureTrustManagerFactory.INSTANCE is from grpc-netty-shaded-1.23.0.jar

Maybe check whether your distribution also has the grpc-netty shaded jar.

@wazy
Author

wazy commented Oct 9, 2019

For 1:

It has to be something related to a jar duplication somewhere, perhaps.
I searched with find / -name "*netty*" and found /usr/share/cmf/common_jars/netty-all-4.0.23.Final.jar. Do you think the netty-all jar could be causing this issue?

For 2:

It took a few tries with:
./weed server -dir /data0/weed -ip=10.49.74.155 -filer=true
I kept getting
panic: assertion failed: leader.elected.at.same.term.1
before it started okay and elected itself as the leader. I was then able to successfully use the Apache HDFS binary to put a file. The file gets sent to the FS, but a warning does appear:

19/10/09 05:30:46 WARN hdfs.SeaweedFileSystemStore: rename source: seaweedfs://10.49.74.155:8888/tmp/testing/tsting1._COPYING_ destination:seaweedfs://10.49.74.155:8888/tmp/testing/tsting1

Is the above WARN expected?

@chrislusf
Collaborator

  1. Not sure, since I cannot tell exactly where the INSTANCE comes from. It could be from one of the dependencies.

  2. WARN is fine. I will change it to INFO later.

@wazy
Author

wazy commented Oct 9, 2019

  1. Okay, using a -cat command I am able to see a more complete stack trace for the issue:
 hdfs dfs -Djava.net.preferIPv4Stack=true -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem -libjars ./seaweedfs-hadoop2-client-1.1.6.jar -cat seaweedfs://10.49.74.155:8888/tmp/testing/testing
Exception in thread "SeaweedFS-prefetch-6" java.lang.NoSuchFieldError: INSTANCE
	at org.apache.http.conn.ssl.SSLConnectionSocketFactory.<clinit>(SSLConnectionSocketFactory.java:144)
	at org.apache.http.impl.client.HttpClientBuilder.build(HttpClientBuilder.java:966)
	at seaweedfs.client.SeaweedRead.readChunkView(SeaweedRead.java:62)
	at seaweedfs.client.SeaweedRead.read(SeaweedRead.java:51)
	at seaweed.hdfs.SeaweedInputStream.readRemote(SeaweedInputStream.java:216)
	at seaweed.hdfs.ReadBufferWorker.run(ReadBufferWorker.java:62)
	at java.lang.Thread.run(Thread.java:748)

I am still looking into whether something else on the Hadoop classpath is causing this; one way to check which jar a class actually comes from is sketched below.
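
Something like the following sketch can show which jar each suspect class is actually loaded from at runtime (it is not part of SeaweedFS, and the AllowAllHostnameVerifier entry is only an assumption about where the missing INSTANCE field might live; the first class is the one from the stack trace above):

import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws Exception {
        String[] suspects = {
                "org.apache.http.conn.ssl.SSLConnectionSocketFactory", // class whose <clinit> fails above
                "org.apache.http.conn.ssl.AllowAllHostnameVerifier",   // assumed holder of the INSTANCE field
                "org.apache.http.impl.client.HttpClientBuilder"
        };
        for (String name : suspects) {
            // load without initializing, so the failing static initializer does not run here
            Class<?> clazz = Class.forName(name, false, WhichJar.class.getClassLoader());
            CodeSource src = clazz.getProtectionDomain().getCodeSource();
            System.out.println(name + " -> " + (src == null ? "bootstrap/unknown" : src.getLocation()));
        }
    }
}

Running it with the same classpath the hdfs command sees, e.g. java -cp "$(hadoop classpath):./seaweedfs-hadoop2-client-1.1.6.jar" WhichJar, should reveal which copy of httpclient/httpcore wins.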

@chrislusf
Collaborator

Thanks for the stack trace!

Please check hdfs client jar 1.19

@wazy
Author

wazy commented Oct 10, 2019

Hi @chrislusf, that fix works great for the -cat and other read commands.

However, -put (and the other write commands) still fails with the same NoSuchFieldError: INSTANCE exception, and I think it is due to the same issue in SeaweedWrite.java that you fixed in SeaweedRead.java.

Is there any way to change SeaweedWrite.java to use DefaultHttpClient as well?
https://github.com/chrislusf/seaweedfs/blob/6ed69de6bd5dcabc6fa70185bfcb772786b27517/other/java/client/src/main/java/seaweedfs/client/SeaweedWrite.java#L62
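
For reference, this is roughly the kind of change I mean, as a sketch only (approximate names, not the actual SeaweedWrite code): replace the HttpClientBuilder-based construction, whose static initializers trip over the older httpclient on the CDH classpath, with the older DefaultHttpClient API, the same way SeaweedRead now does.

import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;

public class WriteClientSketch {
    public static void main(String[] args) throws Exception {
        // before (roughly): CloseableHttpClient client = HttpClientBuilder.create().build();
        HttpClient client = new DefaultHttpClient(); // deprecated, but present in both old and new 4.x
        HttpPost post = new HttpPost("http://127.0.0.1:8080/3,01637037d6"); // hypothetical volume server URL and fid
        // ... build the multipart body and execute it, as the SeaweedWrite upload path does ...
        client.execute(post);
    }
}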

@chrislusf
Collaborator

Please check hdfs client jar 1.20

@wazy
Author

wazy commented Oct 11, 2019

Using 1.20 gives the following for both mkdir and put:

hdfs dfs -Djava.net.preferIPv4Stack=true -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem -libjars ./seaweedfs-hadoop2-client-1.2.0.jar -mkdir seaweedfs://10.49.83.128:8888/tmp/
Exception in thread "main" java.lang.VerifyError: Inconsistent stackmap frames at branch target 108
Exception Details:
  Location:
    seaweedfs/client/SeaweedRead.readChunkView(J[BILseaweedfs/client/SeaweedRead$ChunkView;Lseaweedfs/client/FilerProto$Locations;)I @108: aload
  Reason:
    Type 'org/apache/http/impl/client/DefaultHttpClient' (current frame, locals[6]) is not assignable to 'org/apache/http/impl/client/CloseableHttpClient' (stack map, locals[6])
  Current Frame:
    bci: @52
    flags: { }
    locals: { long, long_2nd, '[B', integer, 'seaweedfs/client/SeaweedRead$ChunkView', 'seaweedfs/client/FilerProto$Locations', 'org/apache/http/impl/client/DefaultHttpClient', 'org/apache/http/client/methods/HttpGet' }
    stack: { integer }
  Stackmap Frame:
    bci: @108
    flags: { }
    locals: { long, long_2nd, '[B', integer, 'seaweedfs/client/SeaweedRead$ChunkView', 'seaweedfs/client/FilerProto$Locations', 'org/apache/http/impl/client/CloseableHttpClient', 'org/apache/http/client/methods/HttpGet' }
    stack: { }
  Bytecode:
    0x0000000: bb00 9559 b700 963a 06bb 0098 5912 9a05
    0x0000010: bd00 0459 0319 0503 b600 9eb6 00a2 5359
    0x0000020: 0419 04b4 004d 53b8 00a8 b700 ab3a 0719
    0x0000030: 04b4 00af 9a00 3819 0712 b112 b3b6 00b7
    0x0000040: 1907 12b9 12bb 05bd 0004 5903 1904 b400
    0x0000050: beb8 00c4 5359 0419 04b4 00be 1904 b400
    0x0000060: c761 b800 c453 b800 a8b6 00b7 1906 1907
    0x0000070: b600 cd3a 0819 08b9 00d3 0100 3a09 1904
    0x0000080: b400 d61e 6519 04b4 00c7 6188 360a bb00
    0x0000090: d859 2c1d 150a b800 deb7 00e1 3a0b 1909
    0x00000a0: 190b b900 e702 0015 0a36 0c19 06b6 00ea
    0x00000b0: 150c ac3a 0d19 06b6 00ea 190d bf       
  Exception Handler Table:
    bci [108, 171] => handler: 179
    bci [179, 181] => handler: 179
  Stackmap Table:
    append_frame(@108,Object[#201],Object[#152])
    same_locals_1_stack_item_extended(@179,Object[#236])

	at seaweed.hdfs.SeaweedFileSystemStore.doGetFileStatus(SeaweedFileSystemStore.java:114)
	at seaweed.hdfs.SeaweedFileSystemStore.getFileStatus(SeaweedFileSystemStore.java:88)
	at seaweed.hdfs.SeaweedFileSystem.getFileStatus(SeaweedFileSystem.java:250)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1418)
	at org.apache.hadoop.fs.shell.Mkdir.processNonexistentPath(Mkdir.java:73)
	at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:273)
	at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
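
If I read the verifier message right, the jar seems to have been compiled against an httpclient where DefaultHttpClient extends CloseableHttpClient, while the older httpclient on this CDH classpath does not have that relationship, so the merged stackmap frames disagree. One hedged way to avoid that is to declare the local against the plain HttpClient interface, which both versions agree on; a rough sketch (not the actual SeaweedRead code):

import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.DefaultHttpClient;

public class VerifierFriendlySketch {
    // Using the HttpClient interface for the local keeps the verifier's stackmap
    // consistent whether or not the runtime's DefaultHttpClient extends CloseableHttpClient.
    static void readChunkSketch(String url) throws Exception {
        HttpClient client = new DefaultHttpClient();
        try {
            // ... issue the GET for `url` and copy the chunk bytes here ...
        } finally {
            client.getConnectionManager().shutdown(); // available on the plain interface in all 4.x
        }
    }
}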

@chrislusf
Collaborator

Thanks for checking! Please check hdfs client jar 1.21

@wazy
Author

wazy commented Oct 11, 2019

Hey @chrislusf thanks a lot for working on this and sorry for the back and forth.

The mkdir command works now but the -put command causes a similar exception:

[centos@ip-10-49-84-39 ~]$  hdfs dfs -Djava.net.preferIPv4Stack=true -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem -libjars ./seaweedfs-hadoop2-client-1.2.1.jar -mkdir seaweedfs://10.49.83.128:8888/tmp

[centos@ip-10-49-84-39 ~]$ hdfs dfs -Djava.net.preferIPv4Stack=true -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem -libjars ./seaweedfs-hadoop2-client-1.2.1.jar -put /tmp/localfile_test seaweedfs://10.49.83.128:8888/tmp/
put: java.util.concurrent.ExecutionException: java.lang.VerifyError: Inconsistent stackmap frames at branch target 71

@chrislusf
Collaborator

Please check hdfs client jar 1.22

chrislusf added a commit that referenced this issue Oct 12, 2019
fix put gives java.lang.NoSuchFieldError: INSTANCE related to Cloudera CDH Hadoop #1080
@wazy
Author

wazy commented Oct 12, 2019

I really appreciate you working on this as it seems good progress has been made.

After running with 1.22, I get what appears to be multiple HTTP client jars causing a conflict:

[centos@ip-10-49-84-39 ~]$  hdfs dfs -Djava.net.preferIPv4Stack=true -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem -libjars ./seaweedfs-hadoop2-client-1.2.2.jar -put /tmp/localfile_test seaweedfs://10.49.83.128:8888/tmp/
put: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.http.entity.ContentType.create(Ljava/lang/String;[Lorg/apache/http/NameValuePair;)Lorg/apache/http/entity/ContentType;
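
This looks like the same pattern again: the client was compiled against an httpcore/httpclient that has the ContentType.create(String, NameValuePair...) overload, while the older copy that CDH puts first on the classpath only has the earlier overloads. A small illustration of the difference, assuming a mime type plus charset is all the write path needs here (the shading change referenced below sidesteps this whole class of conflicts instead):

import java.nio.charset.StandardCharsets;
import org.apache.http.entity.ContentType;
import org.apache.http.message.BasicNameValuePair;

public class ContentTypeCompatSketch {
    public static void main(String[] args) {
        // Overload the client was compiled against; this is the method the older
        // httpcore on the CDH classpath reports as missing:
        ContentType newer = ContentType.create(
                "application/octet-stream", new BasicNameValuePair("charset", "UTF-8"));

        // Overload available in older httpcore releases as well; if a charset is
        // all that is needed, it avoids the missing method:
        ContentType older = ContentType.create(
                "application/octet-stream", StandardCharsets.UTF_8);

        System.out.println(newer + " vs " + older);
    }
}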

@chrislusf
Collaborator

I just did not want to mess up my local HDFS installation. Please help test hdfs client jar 1.2.3

chrislusf added a commit that referenced this issue Oct 12, 2019
shade org.apache.http in #1080
@wazy
Author

wazy commented Oct 13, 2019

@chrislusf After your last commit with shading org.apache.http, all of the commands now work correctly out of the box on a Cloudera CDH node!

Thanks so much and I believe this issue is now 100% solved.

@wazy wazy closed this as completed Oct 13, 2019