The recent changes to sorted recovery in #2117 broke the LogReader utility. It is still using RecoveryLogReader to read from sorted WALs. LogReader needs to be changed to use the updated RecoveryLogsIterator.
accumulo wal-info /accumulo/recovery/9cbe3382-7b2a-477e-b1f6-5edeaa401001
2021-06-22T11:12:33,271 [start.Main] ERROR: Thread 'wal-info' died.
java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/accumulo/recovery/9cbe3382-7b2a-477e-b1f6-5edeaa401001/part-r-00000.rf/data
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1729) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1722) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1737) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.hadoop.io.SequenceFile$Reader.&lt;init&gt;(SequenceFile.java:1863) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.hadoop.io.MapFile$Reader.createDataFileReader(MapFile.java:460) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:433) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.hadoop.io.MapFile$Reader.&lt;init&gt;(MapFile.java:403) ~[hadoop-client-api-3.3.0.jar:?]
at org.apache.accumulo.tserver.log.RecoveryLogReader.&lt;init&gt;(RecoveryLogReader.java:137) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at org.apache.accumulo.tserver.log.RecoveryLogReader.&lt;init&gt;(RecoveryLogReader.java:120) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at org.apache.accumulo.tserver.logger.LogReader.execute(LogReader.java:150) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:126) ~[accumulo-start-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at java.lang.Thread.run(Thread.java:829) [?:?]
Also, if you specify one of the sorted WALs as the file to read, this error is reported:
accumulo wal-info /accumulo/recovery/9cbe3382-7b2a-477e-b1f6-5edeaa401001/part-r-00000.rf
2021-06-22T11:13:23,898 [start.Main] ERROR: Thread 'wal-info' died.
java.lang.IllegalArgumentException: Unsupported write ahead log version ######
at org.apache.accumulo.tserver.log.DfsLogger.getDecryptingStream(DfsLogger.java:378) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at org.apache.accumulo.tserver.logger.LogReader.execute(LogReader.java:134) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:126) ~[accumulo-start-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at java.lang.Thread.run(Thread.java:829) [?:?]
We may want to keep the old technique for reading the map files and give an option to delete the old files.
I have a workable solution for the first half of the ticket (it still needs some revision) and am now looking into the second part. It appears the log file header is incorrect, or at least not something DfsLogger.getDecryptingStream expects. I am still investigating, but if someone has an idea for a possible solution, I am all ears.
EDIT: the magic buffer read from the file matches neither magic4 nor magic3 inside DfsLogger.getDecryptingStream.
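The failure mode can be illustrated generically: DfsLogger reads the leading bytes of the file and compares them against known magic headers to pick a WAL version, and a sorted-recovery RFile begins with RFile data rather than a WAL header, so neither magic matches. A minimal, self-contained sketch of that kind of check (the `MAGIC_V4`/`MAGIC_V3` values here are placeholders, not Accumulo's real header strings):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of a magic-header version check, analogous in spirit to what
// DfsLogger.getDecryptingStream does. The magic strings are illustrative
// placeholders, not Accumulo's actual header values.
public class MagicCheck {
    static final byte[] MAGIC_V4 = "WALv4".getBytes(StandardCharsets.UTF_8);
    static final byte[] MAGIC_V3 = "WALv3".getBytes(StandardCharsets.UTF_8);

    static int detectVersion(byte[] fileStart) {
        // Compare only the leading magic-sized prefix of the file.
        byte[] buf = Arrays.copyOf(fileStart, MAGIC_V4.length);
        if (Arrays.equals(buf, MAGIC_V4)) return 4;
        if (Arrays.equals(buf, MAGIC_V3)) return 3;
        // An RFile's leading bytes match neither magic, which is the path
        // that produces the "Unsupported write ahead log version" error above.
        throw new IllegalArgumentException("Unsupported write ahead log version");
    }

    public static void main(String[] args) {
        System.out.println(detectVersion("WALv4 rest of header".getBytes(StandardCharsets.UTF_8)));
    }
}
```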
The DfsLogger class won't be able to read the sorted recovery RFiles; it can only read the original WALs. One solution I was considering is to refactor LogReader to use RecoveryLogsIterator to read the sorted recovery RFiles when that is what the user passes as a parameter.
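That refactor amounts to a dispatch on the path type before choosing a reader. A minimal sketch of the idea; `RecoveryLogsIterator` and `DfsLogger` are the real classes, but the path heuristic below (recovery paths live under a recovery directory or end in `.rf`) is an illustrative assumption, not Accumulo's actual check:

```java
// Sketch: decide which reader a LogReader-style utility should use for a
// given path. Sorted recovery output is stored as RFiles, which DfsLogger
// cannot parse, so those paths should route to RecoveryLogsIterator.
public class WalReaderSelector {

    enum ReaderKind { RECOVERY_LOGS_ITERATOR, DFS_LOGGER }

    static ReaderKind select(String path) {
        // Assumed heuristic: sorted recovery files sit under a "recovery"
        // directory or carry the RFile extension; everything else is
        // treated as an original WAL.
        if (path.contains("/recovery/") || path.endsWith(".rf")) {
            return ReaderKind.RECOVERY_LOGS_ITERATOR;
        }
        return ReaderKind.DFS_LOGGER;
    }

    public static void main(String[] args) {
        System.out.println(select("/accumulo/recovery/<uuid>/part-r-00000.rf"));
        System.out.println(select("/accumulo/wal/tserver+9997/<uuid>"));
    }
}
```

This keeps DfsLogger untouched for original WALs while letting the same CLI entry point handle both layouts.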