
Support needed on Performance Issue #6063

Open
77aditya77 opened this issue Nov 21, 2019 · 1 comment

77aditya77 commented Nov 21, 2019

Hi,

I am using the RocksDB JNI bindings, and I found that reads become progressively slower in this program:

```java
long n = 0;
int counter = 0;
try (final Options options = new Options()
        .setCreateIfMissing(true)
        .setComparator(new Comparator(new ComparatorOptions()) {
            @Override
            public String name() {
                return "default";
            }

            @Override
            public int compare(final Slice a, final Slice b) {
                long x = ByteBuffer.wrap(a.data()).getLong();
                long y = ByteBuffer.wrap(b.data()).getLong();
                return (x < y) ? -1 : ((x == y) ? 0 : 1);
            }
        })
        .setWriteBufferSize(64 * SizeUnit.MB)
        .setMaxWriteBufferNumber(6)
        .setMinWriteBufferNumberToMerge(2);
     final RocksDB db = RocksDB.open(options, "/PathToDB/")) {
    boolean loop = true;
    while (loop) {
        if (n == Long.MAX_VALUE) {
            loop = false;
        }
        // Write 4 batches of 200 entries: key = 8-byte big-endian score, value = URL.
        for (int j = 0; j < 4; j++) {
            try (WriteBatch writeBatch = new WriteBatch();
                 WriteOptions writeOptions = new WriteOptions()) {
                for (int i = 0; i < 200; i++) {
                    String urlStr = "dummy" + counter;
                    counter++;
                    Long score = getScore(urlStr);
                    ByteBuffer buf = ByteBuffer.allocate(Long.BYTES);
                    buf.putLong(score);
                    writeBatch.put(buf.array(), urlStr.getBytes(UTF_8));
                }
                long st = System.currentTimeMillis();
                db.write(writeOptions, writeBatch);
                long et = System.currentTimeMillis();
                logger.logMessage(Test.class.getName(), Level.INFO,
                        "RocksDB write of 200 URLs successful. Time taken - {0}",
                        new Object[]{et - st});
            } catch (RocksDBException ex) {
                logger.logMessage(Level.SEVERE, Test.class.getName(), ex);
            }
        }
        // Read the 50 smallest keys, then delete the range that was read.
        byte[] firstKey = null, lastKey = null;
        int readC = 0;
        long st = System.currentTimeMillis();
        try (final RocksIterator it = db.newIterator()) {
            it.seekToFirst();
            while (it.isValid() && readC < 50) {
                lastKey = it.key();
                if (firstKey == null) {
                    firstKey = lastKey;
                }
                it.next();
                readC++;
            }
        }
        long et = System.currentTimeMillis();
        logger.logMessage(Test.class.getName(), Level.INFO,
                "RocksDB read of 50 URLs successful. Time taken - {0}",
                new Object[]{et - st});
        if (lastKey != null) {
            // Note: deleteRange removes [firstKey, lastKey) - the end key is exclusive.
            db.deleteRange(firstKey, lastKey);
        }
        n++;
    }
} catch (Exception e) {
    logger.logMessage(Level.SEVERE, Test.class.getName(), e);
}
```
Initially, the read times are in an acceptable range. But once the program has written about 1 million records in total, the read logger prints around 200-300 ms, and it keeps getting worse as the program runs. Am I using the JNI bindings wrong?
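One thing worth checking (an observation about the posted code, not a confirmed diagnosis of the slowdown): the custom `Comparator` forces a JNI round-trip into Java for every key comparison RocksDB performs, which is costly. Since `ByteBuffer.putLong` writes big-endian bytes, unsigned lexicographic byte order already matches numeric order for non-negative scores, so the default bytewise comparator may give the same ordering for free. The sketch below demonstrates that equivalence in plain Java; the class name `KeyOrderDemo` and the sample scores are illustrative, and it assumes scores are never negative.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class KeyOrderDemo {
    // Encode a score exactly as the issue's code does: an 8-byte big-endian long.
    static byte[] key(long score) {
        return ByteBuffer.allocate(Long.BYTES).putLong(score).array();
    }

    public static void main(String[] args) {
        long[] scores = {0L, 1L, 255L, 256L, 1_000_000L, Long.MAX_VALUE};
        for (int i = 0; i < scores.length - 1; i++) {
            // Unsigned lexicographic byte order (what RocksDB's default
            // bytewise comparator uses) vs. numeric order of the longs.
            int byteCmp = Arrays.compareUnsigned(key(scores[i]), key(scores[i + 1]));
            int numCmp = Long.compare(scores[i], scores[i + 1]);
            if (Integer.signum(byteCmp) != Integer.signum(numCmp)) {
                throw new AssertionError("order mismatch at index " + i);
            }
        }
        System.out.println("bytewise order matches numeric order");
    }
}
```

If that holds for your key space, dropping the comparator from `Options` removes one per-comparison overhead without changing iteration order.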

77aditya77 (Author) commented:

Forgive me for the breakage in the code paste. The code starts with the character "`".
