java.nio.channels.ClosedChannelException #384

Closed
heeckhau opened this issue Sep 29, 2014 · 9 comments

@heeckhau

I am experimenting with MapDB to implement a persistent cache. It seemed to work nicely, but when I stress-tested it with concurrent reads and writes I ran into ClosedChannelExceptions.

I created the db and hashmaps with:

File dbFile = new File(cacheLocation, FILE_NAME);
db = DBMaker.newFileDB(dbFile).compressionEnable().commitFileSyncDisable().closeOnJvmShutdown().sizeLimit(MAX_SIZE_HARD).make();
HTreeMap<String, Tuple2<byte[], byte[]>> cache = db.createHashMap(TABLE_CONTENT).keySerializer(Serializer.STRING_ASCII).makeOrGet();
HTreeMap<String, Long> lastUsed = db.createHashMap(TABLE_LAST_USED).keySerializer(Serializer.STRING_ASCII).makeOrGet();
store = Store.forEngine(db.getEngine());

On cache I only use get, put and remove. On lastUsed I use get, put, remove and replace.

Here are the first stack traces:
!ENTRY org.eclipse.core.jobs 4 2 2014-09-29 21:45:59.201
!STACK 0
java.io.IOError: java.nio.channels.ClosedChannelException
at org.mapdb.Volume$FileChannelVol.getLong(Volume.java:837)
at org.mapdb.StoreWAL.getLinkedRecordsFromLog(StoreWAL.java:911)
at org.mapdb.StoreWAL.update(StoreWAL.java:412)
at org.mapdb.Caches$HashTable.update(Caches.java:269)
at org.mapdb.EngineWrapper.update(EngineWrapper.java:63)
at org.mapdb.HTreeMap.putInner(HTreeMap.java:525)
at org.mapdb.HTreeMap.replace(HTreeMap.java:1250)
...
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:695)
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:603)
at org.mapdb.Volume$FileChannelVol.readFully(Volume.java:806)
at org.mapdb.Volume$FileChannelVol.getLong(Volume.java:834)
... 22 more

370692 [Thread-1] ERROR ...
java.io.IOError: java.nio.channels.ClosedChannelException
at org.mapdb.Volume$FileChannelVol.putByte(Volume.java:780)
at org.mapdb.StoreWAL.walIndexVal(StoreWAL.java:309)
at org.mapdb.StoreWAL.put(StoreWAL.java:257)
at org.mapdb.Caches$HashTable.put(Caches.java:216)
at org.mapdb.EngineWrapper.put(EngineWrapper.java:53)
at org.mapdb.HTreeMap.putInner(HTreeMap.java:581)
at org.mapdb.HTreeMap.put(HTreeMap.java:480)
...
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:629)
at org.mapdb.Volume$FileChannelVol.writeFully(Volume.java:725)
at org.mapdb.Volume$FileChannelVol.putByte(Volume.java:778)
... 76 more

Any tips on where to start looking for the real problem?

Thanks,
Hendrik.

jankotek (Owner) commented Oct 1, 2014

There is a similar issue: #235

In general, ClosedChannelException happens when a thread is interrupted while reading or writing. The channel is then closed and all future reads and writes fail. I would check whether any of your threads are being interrupted.

I need to add a better error message for this exception, so I will keep this bug open for a few days.
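
To make the failure mode concrete, here is a small self-contained sketch (plain NIO, no MapDB, file name is arbitrary) of what is described above: FileChannel is an InterruptibleChannel, so an interrupt hitting a thread that is using the channel closes the channel for every thread, and all later operations fail with ClosedChannelException, just like the Volume calls in the stack traces above.

import java.io.File;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

public class InterruptClosesChannel {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("mapdb-demo", ".bin");
        f.deleteOnExit();
        FileChannel ch = FileChannel.open(f.toPath(),
                StandardOpenOption.READ, StandardOpenOption.WRITE);
        ch.write(ByteBuffer.wrap(new byte[]{1, 2, 3, 4}));

        // FileChannel is an InterruptibleChannel: if the calling thread has been
        // interrupted, the channel closes itself and the call fails
        // (ClosedByInterruptException, a subclass of ClosedChannelException).
        Thread.currentThread().interrupt();
        try {
            ch.read(ByteBuffer.allocate(4), 0);
        } catch (ClosedChannelException e) {
            System.out.println("interrupt closed the channel: " + e);
        }
        Thread.interrupted(); // clear the interrupt flag again

        // The channel stays closed, so every later call fails the same way the
        // MapDB Volume calls in the stack traces above do.
        try {
            ch.read(ByteBuffer.allocate(4), 0);
        } catch (ClosedChannelException e) {
            System.out.println("all subsequent reads fail: " + e);
        }
    }
}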

@yujanshrestha

I have a question about this issue. Will the asyncWriteEnable option prevent this from occurring?

jankotek (Owner) commented Jan 1, 2015

I would recommend using mmap files as an alternative to prevent this.

Also make sure that no thread is being interrupted while doing IO; that would close the FileChannel.
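
If I read the 1.0 DBMaker API correctly, that means enabling memory-mapped volumes when building the DB; a sketch, reusing the configuration from the original post:

// Memory-mapped files instead of the default FileChannel-backed volume.
// mmapFileEnableIfSupported() falls back to the default store on 32-bit JVMs.
db = DBMaker.newFileDB(dbFile)
        .mmapFileEnableIfSupported()
        .compressionEnable()
        .commitFileSyncDisable()
        .closeOnJvmShutdown()
        .sizeLimit(MAX_SIZE_HARD)
        .make();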

gravelld commented Aug 3, 2015

I'm trying to work out what to do about this. It isn't necessarily realistic to expect that interrupts never occur, is it? How would one write a system with long-running threads that you want to be able to stop? Would you roll your own flag instead of interrupting?

Is the answer simply to re-create the MapDB database if this ever happens?

Sorry, I'm not familiar with best practice in this area.

jankotek (Owner) commented Aug 7, 2015

It is another quirk in the JVM; there is not much I can do about it. And yes, most frameworks I know do not use interrupts, but use other mechanisms.
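
For what it's worth, a minimal sketch of such a mechanism, assuming a worker loop of your own design (CacheWorker and requestStop are hypothetical names, not MapDB API): a cooperative stop flag that the worker polls between MapDB calls, so nothing ever calls Thread.interrupt() on a thread that may be inside channel I/O.

import java.util.concurrent.atomic.AtomicBoolean;

// Cooperative cancellation instead of Thread.interrupt().
class CacheWorker implements Runnable {
    private final AtomicBoolean stopRequested = new AtomicBoolean(false);

    public void requestStop() {       // called instead of worker.interrupt()
        stopRequested.set(true);
    }

    @Override
    public void run() {
        while (!stopRequested.get()) {
            // ... cache.put(...) / lastUsed.replace(...) calls go here; the flag
            // is only checked between MapDB calls, so no FileChannel operation
            // is ever interrupted mid-flight.
        }
        // clean shutdown (e.g. db.commit() and db.close()) goes here
    }
}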

MapDB 2.0 changes the default storage from FileChannel to RandomAccessFile, but RAF storage is not available in 1.0.

jankotek closed this as completed Aug 7, 2015
gravelld commented Aug 7, 2015

The way I fixed this was to make all calls to MapDB part of a critical section; when I know an interrupt may occur, I inform the thing that calls MapDB, locking access to the DB until the chance of an interrupt has passed.
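
A sketch of that arrangement with hypothetical names (GuardedCache, dbLock and interruptWorker are mine, not from this thread): every MapDB call and every Thread.interrupt() goes through the same lock, so an interrupt can only be delivered while no MapDB I/O is in flight. As the next paragraph notes, this is not airtight, since the interrupt flag may already be set when the worker reaches its next MapDB call.

import org.mapdb.HTreeMap;

// Wrapper that serializes MapDB access and interrupt delivery on one lock.
class GuardedCache {
    private final Object dbLock = new Object();
    private final HTreeMap<String, Long> lastUsed;

    GuardedCache(HTreeMap<String, Long> lastUsed) {
        this.lastUsed = lastUsed;
    }

    Long getLastUsed(String key) {
        synchronized (dbLock) {       // critical section around every MapDB call
            return lastUsed.get(key);
        }
    }

    void interruptWorker(Thread worker) {
        synchronized (dbLock) {       // no MapDB I/O can be in flight here
            worker.interrupt();
        }
    }
}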

I'm not 100% sure this is deterministically correct, because I'm not sure if interrupts are delivered to threads performing NIO work with any guarantees about timing, e.g. will the interrupt be delivered before the call to interrupt() returns? In my case though it's not critical and any error conditions can be recovered from.

Out of interest, is RAF slower?

jankotek (Owner) commented Aug 7, 2015 via email

gravelld commented Aug 7, 2015

It'll be interesting to compare when the new release is out.

jankotek (Owner) commented Aug 7, 2015

2.0 is already out in the form of a beta.

