
Single write optimization #36

Closed
DzmitryShylovich opened this issue Nov 17, 2015 · 3 comments

Comments

@DzmitryShylovich

Hi.
I have HA requirements similar to #29. To achieve HA I'm going to write a persistent log into nfsdb and replicate it. I will write 99% of the time and read 1%, so my main criterion is write performance. I must not lose any data during an active-node failure, so I think the best option for me is to commit (persist + replicate) after every append.

I took SimplestAppend.java as a foundation.

Results:

Persisted 1000000 objects in 196ms.

I changed this test slightly (commit after every append):

public class SimplestAppend {
    public static void main(String[] args) throws JournalException {
        String basePath = "D:\\tmp";
        try (JournalFactory factory = new JournalFactory(ModelConfiguration.CONFIG.build(basePath))) {
            // delete existing price journal
            Files.delete(new File(factory.getConfiguration().getJournalBase(), Price.class.getName()));
            final int count = 1000000;

            try (JournalWriter<Price> writer = factory.writer(Price.class)) {
                long tZero = System.nanoTime();
                Price p = new Price();

                for (int i = 0; i < count; i++) {
                    p.setTimestamp(tZero + i);
                    writer.append(p);
                    writer.commit();
                }

                long end = System.nanoTime();
                System.out.println("Persisted " + count + " objects in " +
                        TimeUnit.NANOSECONDS.toMillis(end - tZero) + "ms.");
            }
        }
    }
}

Results:

0.640: [GC (Allocation Failure) [PSYoungGen: 33280K->864K(38400K)] 33280K->872K(125952K), 0.0013587 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
0.868: [GC (Allocation Failure) [PSYoungGen: 34144K->976K(38400K)] 34152K->984K(125952K), 0.0031355 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
1.118: [GC (Allocation Failure) [PSYoungGen: 34256K->1152K(38400K)] 34264K->1160K(125952K), 0.0586353 secs] [Times: user=0.03 sys=0.00, real=0.06 secs] 
1.437: [GC (Allocation Failure) [PSYoungGen: 34432K->1408K(71680K)] 34440K->1416K(159232K), 0.0045581 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] 
Persisted 1000000 objects in 1294ms.

Is there any way to reduce garbage and improve performance for single append/commit approach?

Thank you.

@bluestreak01
Member

The major commit() overhead is maintaining the index. If you don't index symbols, performance should improve. commit() is also not optimised to be called frequently, but it should be better once indexing is switched off.
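To illustrate the cost being discussed, here is a hypothetical micro-benchmark (plain JDK only, not nfsdb code) that writes fixed-size records to a file and forces them to disk either after every record or once per batch. The per-record variant models commit-after-every-append; the class name, record layout, and counts are illustrative assumptions.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: compares forcing every record to disk
// (analogous to commit() per append) with one force per batch.
public class CommitCostSketch {

    // Writes `count` 16-byte records, forcing the channel to disk
    // every `commitEvery` records. Returns elapsed nanoseconds.
    static long writeRecords(Path file, int count, int commitEvery) throws IOException {
        ByteBuffer record = ByteBuffer.allocate(16); // reused, no per-record allocation
        long start = System.nanoTime();
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int i = 0; i < count; i++) {
                record.clear();
                record.putLong(start + i).putLong(i).flip();
                ch.write(record);
                if ((i + 1) % commitEvery == 0) {
                    ch.force(false); // durability point, analogous to commit()
                }
            }
            ch.force(false); // make the tail durable too
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("commit-cost", ".dat");
        int count = 1000;
        long perRecord = writeRecords(file, count, 1);     // commit every append
        long batched = writeRecords(file, count, count);   // one commit at the end
        System.out.println("commit per record: " + perRecord / 1_000_000 + "ms");
        System.out.println("one commit per batch: " + batched / 1_000_000 + "ms");
        Files.deleteIfExists(file);
    }
}
```

The per-record run is typically much slower, since each force() pays a durability round-trip regardless of how little data was written; the same amortisation argument applies to any commit that touches index structures.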

@DzmitryShylovich
Author

Thanks. Removing indexing from the configuration helped a bit.

Do you have any plans to add a separate Writer implementation optimised for frequent commits?

@bluestreak01
Copy link
Member

Yes, absolutely. This is a very frequent pattern and it should be optimal and GC free.
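As a rough sketch of what a GC-free append path can look like (this is not nfsdb code; the class and method names are made up for illustration), the idea is to allocate every buffer once up front so the hot loop creates no objects, eliminating the young-gen collections visible in the GC log above:

```java
import java.nio.ByteBuffer;

// Hypothetical allocation-free appender: one direct buffer allocated in the
// constructor, zero allocation per append in steady state.
public class GcFreeAppender {
    private final ByteBuffer buf;

    public GcFreeAppender(int capacity) {
        this.buf = ByteBuffer.allocateDirect(capacity); // off-heap, allocated once
    }

    // Appends a fixed-size 16-byte record without allocating.
    // Returns false when the buffer is full; the caller should then
    // commit/flush the pending bytes and call reset().
    public boolean append(long timestamp, long value) {
        if (buf.remaining() < 16) {
            return false;
        }
        buf.putLong(timestamp).putLong(value);
        return true;
    }

    // Number of bytes buffered since the last reset().
    public int pending() {
        return buf.position();
    }

    public void reset() {
        buf.clear();
    }
}
```

In steady state the only garbage is whatever the commit path itself produces, which is why a writer designed for this pattern also has to keep its commit bookkeeping allocation-free.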
