
ArrayIndexOutOfBoundsException with BTreeMap.put using v0.9.11 #308

Closed
fmannhardt opened this issue Mar 25, 2014 · 16 comments
@fmannhardt

I updated to MapDB 0.9.11 and I'm now getting an ArrayIndexOutOfBoundsException on BTreeMap.put after adding several entries. Watching the MapDB file while adding entries, it never exceeds 16 MB, whereas it grew much larger before (~800 MB).

I'm running on Win7 64-bit, Java JDK 1.7.0_25, using DBMaker.newFileDB to create the DB. The same error also happens if I disable the cache or the asyncWrite feature. I will try to create a test case, but maybe this helps already.

java.lang.ArrayIndexOutOfBoundsException: 147448
    at org.mapdb.Volume$ByteBufferVol.getLong(Volume.java:327)
    at org.mapdb.StoreDirect.get2(StoreDirect.java:440)
    at org.mapdb.StoreDirect.get(StoreDirect.java:428)
    at org.mapdb.EngineWrapper.get(EngineWrapper.java:60)
    at org.mapdb.AsyncWriteEngine.get(AsyncWriteEngine.java:399)
    at org.mapdb.Caches$HashTable.get(Caches.java:230)
    at org.mapdb.BTreeMap.put2(BTreeMap.java:664)
    at org.mapdb.BTreeMap.put(BTreeMap.java:644)

The ArrayIndexOutOfBoundsException occurs at a different size each time.

@fmannhardt
Author

I found that the error sometimes goes away if I don't select the option valuesOutsideNodesEnable(), but it does not actually depend on that option.
Moreover, if I use the option checksumEnable(), the following IOError is thrown instead:

java.io.IOError: java.io.IOException: Checksum does not match, data broken
    at org.mapdb.StoreDirect.get(StoreDirect.java:430)
    at org.mapdb.EngineWrapper.get(EngineWrapper.java:60)
    at org.mapdb.AsyncWriteEngine.get(AsyncWriteEngine.java:399)
    at org.mapdb.Caches$HashTable.get(Caches.java:230)
    at org.mapdb.BTreeMap.put2(BTreeMap.java:661)
    at org.mapdb.BTreeMap.put(BTreeMap.java:644)
    at org.test.SimpleMapDBTestCase.testCreateReadRandomLogDisk(SimpleMapDBTestCase.java:42)
Caused by: java.io.IOException: Checksum does not match, data broken
    at org.mapdb.Store.deserialize(Store.java:238)
    at org.mapdb.StoreDirect.get2(StoreDirect.java:475)
    at org.mapdb.StoreDirect.get(StoreDirect.java:428)
    ... 28 more

This is how I create the DBMaker:

File tempFile = File.createTempFile("test", ".db");
DBMaker dbMaker = DBMaker.newFileDB(tempFile)
                // .closeOnJvmShutdown(), we do not use this, as it causes memory leaks
                .deleteFilesAfterClose().transactionDisable()
                .mmapFileEnableIfSupported();
dbMaker = dbMaker.cacheSize(1024 * 16);
dbMaker = dbMaker.asyncWriteEnable().asyncWriteFlushDelay(100).asyncWriteQueueSize(1024 * 64);
dbMaker = dbMaker.checksumEnable();
DB db = dbMaker.make();

And this is how I create the Map:

KeyStringPool keyPool = new KeyStringPool();        
ConcurrentNavigableMap<Long, XAttributeExternalImpl> mapStorage = db.createTreeMap("attributeStore").keySerializer(BTreeKeySerializer.ZERO_OR_POSITIVE_LONG)
                .valueSerializer(new MapDBXAttributeSerializer(keyPool)).valuesOutsideNodesEnable().nodeSize(64)
                .make();
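The MapDBXAttributeSerializer used above is not shown in this thread. Independent of MapDB, a value serializer must satisfy a symmetry contract: deserialization has to consume exactly the bytes serialization wrote, or the store sees a size mismatch (the "data were not fully read" assertion that appears later in this thread). A minimal stand-alone sketch of that contract, using only java.io (class and field names are hypothetical, not the real serializer):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical stand-in for a custom value serializer: the read side
// must consume exactly what the write side produced, field by field.
public class AttributeSerializerSketch {

    // Write every field in a fixed order.
    static void serialize(DataOutput out, long key, String text) throws IOException {
        out.writeLong(key);
        out.writeUTF(text);
    }

    // Read the fields back in the same order, consuming every byte.
    static String deserialize(DataInput in) throws IOException {
        long key = in.readLong();   // mirror of writeLong above
        return key + ":" + in.readUTF();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        serialize(new DataOutputStream(bytes), 42L, "value");

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        String roundTripped = deserialize(in);

        // Symmetry check: no bytes may remain unread after deserialize().
        System.out.println(roundTripped + " leftover=" + in.available());
    }
}
```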

I prepared a small example project that I could send you by mail.

@jankotek
Owner

Hi,

a race condition in BTreeMap.put was fixed recently in #304, so perhaps try the newest 1.0.0 snapshot. If that does not help, please send me the example project; my email is in my GitHub profile.

@jankotek jankotek added the bug label Mar 27, 2014
@reuschling

I have a similar error with 0.9.11; I also tried the 1.0.0 snapshot:

java.lang.RuntimeException: Writer thread failed
    at org.mapdb.AsyncWriteEngine.checkState(AsyncWriteEngine.java:328)
    at org.mapdb.AsyncWriteEngine.close(AsyncWriteEngine.java:491)
    at org.mapdb.EngineWrapper.close(EngineWrapper.java:82)
    at org.mapdb.Caches$HashTable.close(Caches.java:312)
    at org.mapdb.EngineWrapper.close(EngineWrapper.java:82)
    at org.mapdb.EngineWrapper$CloseOnJVMShutdown.close(EngineWrapper.java:659)
    at org.mapdb.EngineWrapper$CloseOnJVMShutdown$1.run(EngineWrapper.java:644)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 3108142
    at org.mapdb.Volume$ByteBufferVol.getByte(Volume.java:336)
    at org.mapdb.Volume.getUnsignedShort(Volume.java:96)
    at org.mapdb.StoreDirect.longStackTake(StoreDirect.java:932)
    at org.mapdb.StoreDirect.freePhysTake(StoreDirect.java:1065)
    at org.mapdb.StoreDirect.physAllocate(StoreDirect.java:655)
    at org.mapdb.StoreDirect.update2(StoreDirect.java:535)
    at org.mapdb.StoreDirect.update(StoreDirect.java:491)
    at org.mapdb.EngineWrapper.update(EngineWrapper.java:65)
    at org.mapdb.AsyncWriteEngine.access$101(AsyncWriteEngine.java:74)
    at org.mapdb.AsyncWriteEngine.runWriter(AsyncWriteEngine.java:220)
    at org.mapdb.AsyncWriteEngine$WriterRunnable.run(AsyncWriteEngine.java:170)
    ... 1 more

I create my db as follows:

DB db = DBMaker.newFileDB(new File("dbPath")).closeOnJvmShutdown().asyncWriteEnable().deleteFilesAfterClose()
                        .transactionDisable().mmapFileEnableIfSupported().make();

Map<String, NerEntity> hsId2Entity = db.getTreeMap("id2entity");

public class NerEntity extends Entity implements Serializable, Comparable<NerEntity>
{
    private static final long serialVersionUID = 1334795226689152608L;
    public float score = 0f;
    public String textTrigger;
    public String textTriggerTermPOS;
...}

@JensBee

JensBee commented Mar 31, 2014

Looks like the error gets triggered when using the AsyncWriteEngine. Disabling all .async* options does not trigger the error and may be used as a workaround.
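Applied to the configuration from the first report in this thread, that workaround would look roughly like this (an untested sketch against the 0.9.x builder API already shown above; the asyncWrite* calls are simply omitted):

```java
// Same DBMaker chain as in the original report, minus the asyncWrite* options
// (asyncWriteEnable, asyncWriteFlushDelay, asyncWriteQueueSize).
File tempFile = File.createTempFile("test", ".db");
DB db = DBMaker.newFileDB(tempFile)
        .deleteFilesAfterClose()
        .transactionDisable()
        .mmapFileEnableIfSupported()
        .cacheSize(1024 * 16)
        .checksumEnable()
        .make();
```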

@ghost

ghost commented Mar 31, 2014

Hi, I have the problem when I pump data into a BTree and there is too much data. I tried disabling the async options, but that doesn't seem to help in this case.

@ghost

ghost commented Mar 31, 2014

I did a very simple test and I could reproduce the issue.

It seems the .mmapFileEnableIfSupported() option causes the problem.

Here is my test code:

@Test
public void test() {
    DB db = DBMaker.newTempFileDB()
            .mmapFileEnableIfSupported()
            .compressionEnable()
            .transactionDisable()
            .checksumEnable()
            .syncOnCommitDisable()
            .make();
    Iterator<Fun.Tuple2<Long, String>> newIterator = new Iterator<Fun.Tuple2<Long, String>>() {
        private AtomicLong value = new AtomicLong(10000000);

        @Override
        public boolean hasNext() {
            return value.get() > 0;
        }

        @Override
        public Fun.Tuple2<Long, String> next() {
            Long v = value.decrementAndGet();
            return new Fun.Tuple2<Long, String>(v, v.toString());
        }
    };
    BTreeMap<Long, String> cubeData = db.createTreeMap("data").pumpSource(newIterator).make();
}

This throws an ArrayIndexOutOfBoundsException if and only if mmap is enabled.

I use Java 8, by the way, in case that matters.

@jankotek
Owner

jankotek commented Apr 1, 2014

Thanks, the last test case is very helpful. I will fix it very soon.

@ghost

ghost commented Apr 1, 2014

Great! Keep in touch, and keep up the good work.

@jankotek
Owner

jankotek commented Apr 5, 2014

This is now fixed.

@jankotek
Owner

0.9.13 released with the fix.

@abarnashankar

I am using MapDB version 1.0.1, and in PRODUCTION I am getting the following error.
I need help from someone, as this is in PRODUCTION and I have a strict SLA. Any help is appreciated.

I am getting the error java.lang.ArrayIndexOutOfBoundsException: 238011748
at org.mapdb.Volume$ByteBufferVol.getByte(Volume.java:385), and also Exception in thread "main" java.lang.AssertionError: data were not fully read, check your serializer when I use MapDB.

@abarnashankar

I am using MapDB version 1.0.1 and my code is pasted below. Please let me know whether I need to modify anything in it. It would be great if someone could help me with this.

db = DBMaker
        .newFileDB(new File("file"))
        .mmapFileEnable()
        .closeOnJvmShutdown()
        .transactionDisable()
        .cacheSize(1000)
        .make();

@jankotek
Owner

@abarnashankar 1.0.1 is way too old; a number of issues have been fixed since then. I would recommend backing up your data for a start.

@abarnashankar

Hi,
Thanks for your response… I am happy to see it; someone is there to help me :)

It would be great if you could advise me on the steps I need to take. I even tried MapDB version 1.0.8 in my pom.xml, and when I tested, the error was the same.
Please let me know what the reason for this error may be. I even took the same file from PRODUCTION that is used by MapDB and tested it in my QA region, and I get the same error.
You suggested taking a backup of the file… I have a copy of it… But please tell me whether I need to restart my QA or PRODUCTION server? It is difficult for me to ask for a production server restart.

NOTE: It was working fine until a few weeks ago, but then all of a sudden it stopped working.

Code:

db = DBMaker
        .newFileDB(new File("file"))
        .mmapFileEnable()
        .closeOnJvmShutdown()
        .transactionDisable()
        .cacheSize(1000)
        .make();

Map<Long, String> treeMap = db.getTreeMap("<string>"); // this line throws the error at some point in time


Exception Stack Trace:

java.lang.ArrayIndexOutOfBoundsException: 238011748
at org.mapdb.Volume$ByteBufferVol.getByte(Volume.java:385)
at org.mapdb.Volume.getUnsignedShort(Volume.java:97)
at org.mapdb.StoreDirect.longStackTake(StoreDirect.java:1027)
at org.mapdb.StoreDirect.freePhysTake(StoreDirect.java:1160)
at org.mapdb.StoreDirect.physAllocate(StoreDirect.java:741)
15/08/27 19:18:48 INFO compress.CodecPool: Got brand-new compressor [.deflate]
Exception in thread "main" java.lang.AssertionError: data were not fully read, check your serializer
at org.mapdb.Store.deserialize(Store.java:332)
at org.mapdb.StoreDirect.get2(StoreDirect.java:532)



@jankotek
Owner

That exception says that the size of the stored data is different from the number of bytes read by the serializer. So perhaps the compression or the serializer has changed?
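In other words, the stored record length no longer matches what the (changed) serializer consumes. A small stdlib-only sketch of that failure mode (class and method names here are illustrative, not MapDB's API): a reader that skips a field leaves bytes unread, which is exactly the condition MapDB's "data were not fully read" assertion detects.

```java
import java.io.*;

// Sketch of the failure mode: the writer emits two fields, but a changed
// reader consumes only one, so bytes remain in the buffer. MapDB's
// Store.deserialize raises an AssertionError in the same situation.
public class UnderReadDemo {

    static byte[] write(long id, String payload) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeLong(id);
        out.writeUTF(payload);  // second field
        return bytes.toByteArray();
    }

    // Buggy reader: skips the UTF payload, mimicking a serializer change.
    static int leftoverAfterBuggyRead(byte[] record) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
        in.readLong();          // reads only the first field
        return in.available();  // > 0 means "data were not fully read"
    }

    public static void main(String[] args) throws IOException {
        byte[] record = write(1L, "abc");
        int leftover = leftoverAfterBuggyRead(record);
        System.out.println("unread bytes: " + leftover);  // 2-byte UTF length + 3 chars = 5
    }
}
```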
