HCFS -put gives java.lang.NoSuchFieldError: INSTANCE #1080
Comments
The InsecureTrustManagerFactory.INSTANCE is from grpc-netty-shaded-1.23.0.jar. Maybe check whether your distribution's grpc-netty jar is also shaded.
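A NoSuchFieldError like this usually means the class was loaded from a different (older) jar than the one the code was compiled against. A minimal diagnostic sketch (not part of SeaweedFS; the two resource paths are just the classes discussed in this thread) that prints which jar, if any, supplies a given class on the current classpath:

```java
// Hedged diagnostic sketch: report which jar on the classpath supplies a class.
// Substitute the class from your own stack trace as needed.
public class WhichJar {
    // Returns "<resource> -> <jar URL>", or "-> not on classpath" if absent.
    public static String locate(String resource) {
        java.net.URL url = WhichJar.class.getClassLoader().getResource(resource);
        return resource + " -> " + (url == null ? "not on classpath" : url);
    }

    public static void main(String[] args) {
        // Where does the httpclient class with the INSTANCE field come from?
        System.out.println(locate("org/apache/http/conn/ssl/SSLConnectionSocketFactory.class"));
        // And the shaded netty trust manager mentioned above?
        System.out.println(locate("io/grpc/netty/shaded/io/netty/handler/ssl/util/InsecureTrustManagerFactory.class"));
    }
}
```

Running it with the Hadoop classpath (e.g. `java -cp "$(hadoop classpath):." WhichJar`, assuming the `hadoop` CLI is available) can reveal whether an older vendor jar shadows the one bundled with the client.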
For 1: it has to be something related to jar duplication somewhere, perhaps. For 2: it took a few tries with:
Is the above WARN expected?
I am still looking into whether something else on the Hadoop classpath is causing this.
Thanks for the stack trace! Please check hdfs client jar 1.19.
Hi @chrislusf, that fix works great for -cat and the other read commands. However, -put (and the other write commands) still fails with the same NoSuchFieldError: INSTANCE exception, and I think it is due to the same issue in SeaweedWrite.java that you changed in SeaweedRead.java. Is there any way to get SeaweedWrite.java changed to use DefaultHttpClient as well?
Please check hdfs client jar 1.20.
Using 1.20 gives the following for both mkdir and put:
Thanks for checking! Please check hdfs client jar 1.21.
Hey @chrislusf, thanks a lot for working on this, and sorry for the back and forth. The mkdir command works now, but the -put command causes a similar exception:
Please check hdfs client jar 1.22.
fix put gives java.lang.NoSuchFieldError: INSTANCE related to Cloudera CDH Hadoop #1080
I really appreciate you working on this, as good progress has clearly been made. After running with 1.22, I get what appears to be a conflict caused by multiple httpclient jars:
I just did not want to mess up my local HDFS installation. Please help test hdfs client jar 1.2.3.
@chrislusf After your last commit with shading org.apache.http, all of the commands now work correctly out of the box on a Cloudera CDH node! Thanks so much and I believe this issue is now 100% solved. |
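For reference, the kind of shading fix described above can be done with the maven-shade-plugin's relocation feature. The sketch below is illustrative, not copied from the SeaweedFS build; the relocated package prefix is an assumption:

```xml
<!-- Sketch: relocate the bundled org.apache.http classes so they cannot
     clash with the older httpclient jar that CDH puts on the classpath. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.http</pattern>
            <shadedPattern>seaweedfs.shaded.org.apache.http</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

After relocation, the client jar carries its own renamed copy of httpclient, so whichever version the Hadoop distribution ships is irrelevant to it.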
Describe the bug
I started a filer and tried to run -put as follows:
hdfs dfs -Djava.net.preferIPv4Stack=true -Dfs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem -libjars ./seaweedfs-hadoop2-client-1.1.6.jar -put /tmp/testing seaweedfs://10.49.74.155:8888/tmp/testing
And I get this exception
put: java.util.concurrent.ExecutionException: java.lang.NoSuchFieldError: INSTANCE
System Setup
filer.toml
Expected behavior
I'd expect the file to be put into the FS.
I do see a reference to INSTANCE here, in the context of the SSL factory, which is where I believe this is coming from:
https://github.com/chrislusf/seaweedfs/blob/cb299dfaa279e14def8bf3f26816913213a91097/other/java/client/src/main/java/seaweedfs/client/FilerSslContext.java#L62
Okay, so the above was from a node running Cloudera CDH Hadoop, which is what I'd like to get working. Could this be an issue with multiple jars on the path?
When I use a default Apache Hadoop download (both the hdfs 2 and 3 binaries) and run the same hdfs dfs -put command, I get the exception below (hdfs dfs -ls works against the seaweedfs filer without issue):
The default Apache Hadoop issue with put has something to do with getting the wrong URL for the volume server to write to. Running the put on the same machine as the server/volumes/filer succeeded. I did try setting the public IP of the volume server when starting the weed server, but that didn't seem to help.