
bytes.copyTo(outputStream) fails #80

Closed
joa23 opened this issue Nov 11, 2018 · 10 comments

Comments

@joa23

joa23 commented Nov 11, 2018

Hi there,
Is there a known issue with bytes.copyTo(outputStream)?
I'm seeing strange behavior where bytes.copyTo(outputStream) stops at 2 GB.
However, simply doing the following works fine:

byte[] buffer = new byte[4096];
int len;
while ((len = bytes.read(buffer)) > 0) {
    outputStream.write(buffer, 0, len);
}
@robaustin123

robaustin123 commented Nov 11, 2018

You have provided an example of what does work. Please provide an example of what does NOT work. Thanks.

@joa23
Author

joa23 commented Nov 11, 2018

Sorry, here is a fully reproducible example.


import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Random;

import net.openhft.chronicle.bytes.Bytes;

public class BytesTest {

    public static void main(String[] args) throws IOException {
        int initialCapacity = 10 * 1024 * 1024;
        long fileSize = 5368709120L; // 5 GiB, an exact multiple of initialCapacity
        byte[] buffer = new byte[initialCapacity];
        new Random().nextBytes(buffer);

        Bytes bytes = Bytes.allocateElasticDirect(initialCapacity);
        while (bytes.writePosition() < fileSize) {
            bytes.write(buffer);
        }
        System.out.println("Writing file 1");
        Path path = Paths.get("./textFile1.bin");
        try (OutputStream outputStream = Files.newOutputStream(path, StandardOpenOption.CREATE_NEW)) {
            int len;
            while ((len = bytes.read(buffer)) > 0) {
                outputStream.write(buffer, 0, len);
            }
        }
        long result = path.toFile().length();
        if (fileSize != result) {
            throw new RuntimeException(String.format("Expecting %s but file size is %s", fileSize, result));
        }

        bytes = Bytes.allocateElasticDirect(initialCapacity);
        new Random().nextBytes(buffer);
        while (bytes.writePosition() < fileSize) {
            bytes.write(buffer);
        }
        path = Paths.get("./textFile2.bin");
        System.out.println("Writing file 2");
        // crashing...
        try (OutputStream outputStream = Files.newOutputStream(path, StandardOpenOption.CREATE_NEW)) {
            bytes.copyTo(outputStream);
        }
        result = path.toFile().length();
        if (fileSize != result) {
            throw new RuntimeException(String.format("Expecting %s but file size is %s", fileSize, result));
        }
    }
}

This results in:
5.0G textFile1.bin
2.0G textFile2.bin

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x0000000117c3ec87, pid=65814, tid=0x0000000000002603
#
# JRE version: Java(TM) SE Runtime Environment (8.0_112-b16) (build 1.8.0_112-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.112-b16 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# J 3456% C2 net.openhft.chronicle.bytes.BytesInternal.copy(Lnet/openhft/chronicle/bytes/RandomDataInput;Ljava/io/OutputStream;)V (57 bytes) @ 0x0000000117c3ec87 [0x0000000117c3ea80+0x207]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/joa23/Documents/workspace/saleshero-flow/hs_err_pid65814.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#

on a:
Model Name: MacBook Pro
Model Identifier: MacBookPro13,3
Processor Name: Intel Core i7
Processor Speed: 2.9 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 8 MB
Memory: 16 GB

using:
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode)

BTW, is there a mailing list to discuss these kinds of things?

Amazing library beyond that....
Thanks!

@joa23
Author

joa23 commented Nov 12, 2018

I wrote a chaos monkey test with 100 threads writing/reading up to 3 GB native memory buffers in parallel. With that, I regularly (but not always) manage to get the buffer.read method to crash as well. Dump attached.
hs_err_pid68867.log

Any help is very appreciated.

@RobAustin
Member

From your log, we see:

Stack: [0x0000700008e73000,0x0000700008f73000],  sp=0x0000700008f721d8,  free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.dylib+0xdb8ea]
J 871  sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V (0 bytes) @ 0x00000001151541a1 [0x00000001151540c0+0xe1]
J 964 C1 net.openhft.chronicle.core.UnsafeMemory.copyMemory0(Ljava/lang/Object;JLjava/lang/Object;JJ)V (54 bytes) @ 0x000000011519d15c [0x000000011519cf20+0x23c]
j  net.openhft.chronicle.core.UnsafeMemory.copyMemory(JJJ)V+31
j  net.openhft.chronicle.bytes.NativeBytesStore.copyToDirect(Lnet/openhft/chronicle/bytes/BytesStore;)J+42
j  net.openhft.chronicle.bytes.NativeBytesStore.copyTo(Lnet/openhft/chronicle/bytes/BytesStore;)J+11
j  net.openhft.chronicle.bytes.NativeBytes.resize(J)V+340
j  net.openhft.chronicle.bytes.NativeBytes.checkResize(J)V+9
J 920 C2 companyName.flow.platform.io.memory.ManagedOutputStream.write([B)V (64 bytes) @ 0x0000000115190b50 [0x00000001151908c0+0x290]
J 928% C2 companyName.flow.platform.io.memory.MemoryManagerChaosMonkeyTest$ChaosMonkeyRunnable.call()Ljava/lang/Boolean; (245 bytes) @ 0x0000000115193080 [0x0000000115192f40+0x140]
j  companyName.flow.platform.io.memory.MemoryManagerChaosMonkeyTest$ChaosMonkeyRunnable.call()Ljava/lang/Object;+1
j  java.util.concurrent.FutureTask.run()V+42
j  java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
j  java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5
j  java.lang.Thread.run()V+11
v  ~StubRoutines::call_stub
V  [libjvm.dylib+0x2edbd2]
V  [libjvm.dylib+0x2ee360]
V  [libjvm.dylib+0x2ee50c]
V  [libjvm.dylib+0x348c59]
V  [libjvm.dylib+0x56b033]
V  [libjvm.dylib+0x56c720]
V  [libjvm.dylib+0x48a7ae]
C  [libsystem_pthread.dylib+0x3661]  _pthread_body+0x154
C  [libsystem_pthread.dylib+0x350d]  _pthread_body+0x0
C  [libsystem_pthread.dylib+0x2bf9]  thread_start+0xd
C  0x0000000000000000

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
J 871  sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V (0 bytes) @ 0x0000000115154127 [0x00000001151540c0+0x67]
J 964 C1 net.openhft.chronicle.core.UnsafeMemory.copyMemory0(Ljava/lang/Object;JLjava/lang/Object;JJ)V (54 bytes) @ 0x000000011519d15c [0x000000011519cf20+0x23c]
j  net.openhft.chronicle.core.UnsafeMemory.copyMemory(JJJ)V+31
j  net.openhft.chronicle.bytes.NativeBytesStore.copyToDirect(Lnet/openhft/chronicle/bytes/BytesStore;)J+42
j  net.openhft.chronicle.bytes.NativeBytesStore.copyTo(Lnet/openhft/chronicle/bytes/BytesStore;)J+11
j  net.openhft.chronicle.bytes.NativeBytes.resize(J)V+340
j  net.openhft.chronicle.bytes.NativeBytes.checkResize(J)V+9
J 920 C2 companyName.flow.platform.io.memory.ManagedOutputStream.write([B)V (64 bytes) @ 0x0000000115190b50 [0x00000001151908c0+0x290]
J 928% C2 companyName.flow.platform.io.memory.MemoryManagerChaosMonkeyTest$ChaosMonkeyRunnable.call()Ljava/lang/Boolean; (245 bytes) @ 0x0000000115193080 [0x0000000115192f40+0x140]
j  companyName.flow.platform.io.memory.MemoryManagerChaosMonkeyTest$ChaosMonkeyRunnable.call()Ljava/lang/Object;+1
j  java.util.concurrent.FutureTask.run()V+42
j  java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
j  java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5
j  java.lang.Thread.run()V+11

@RobAustin
Member

@joa23 I ran your test (above) on my iMac and let it run for a couple of minutes; it did not crash. I'm using the latest version of Chronicle-BOM, 2.17.39. Please retest with this version and let me know if you still see the issue.

@RobAustin
Member

It's likely that the long read value was incorrect in the following code:

net.openhft.chronicle.bytes.NativeBytesStore#copyToDirect

public long copyToDirect(@NotNull BytesStore store) {
    long read = Math.min(readRemaining(), writeRemaining());
    if (read > 0) {
        try {
            long addr = address;
            long addr2 = store.addressForWrite(0);
            memory.copyMemory(addr, addr2, read);
        } catch (BufferOverflowException e) {
            throw new AssertionError(e);
        }
    }
    return read;
}

I'm not able to reproduce this issue, so could you investigate it, perhaps by adding some trace code to record the values just before it crashes? It would also be good to see readRemaining() and writeRemaining().

What I think is happening here (but it's a guess) is that either addr2 is wrong or it's copying too much data; in other words, the read value is wrong. Alternatively, the memory mapping may have been closed, which could cause this issue.
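As a starting point for that instrumentation, something along these lines could log what is visible through the public API right before the crashing call. This is only a sketch; the class and method names are hypothetical, and the readRemaining()/writeRemaining() values seen inside copyToDirect itself would still need print statements added to that method:

import java.io.IOException;
import java.io.OutputStream;

import net.openhft.chronicle.bytes.Bytes;

// Hypothetical helper: wraps the crashing call and logs the sizes involved just before it.
public final class CopyToTrace {

    private CopyToTrace() {
    }

    public static void copyWithTrace(Bytes<?> bytes, OutputStream out) throws IOException {
        System.out.printf("before copyTo: readPosition=%d writePosition=%d readRemaining=%d%n",
                bytes.readPosition(), bytes.writePosition(), bytes.readRemaining());
        bytes.copyTo(out);   // the call that crashes at ~2 GB in the reproducer above
        System.out.println("copyTo completed");
    }
}

Calling CopyToTrace.copyWithTrace(bytes, outputStream) in place of bytes.copyTo(outputStream) in the reproducer should print the sizes immediately before the SIGSEGV.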

@joa23
Author

joa23 commented Nov 12, 2018

Rob,
Thanks for your help. I'm only using Chronicle-Bytes version 2.17.3, not the whole BOM project. Could that be an issue? That said, looking at the BOM dependencies, it also just references chronicle-bytes 2.17.3. I will try to check out the chronicle-bytes sources, add the instrumentation you suggest in the next few days, and let you know.
Maybe you could put a loop around my example code, randomize the size of the data written, run it in 100 threads, and see if that crashes your JVM?
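A stress loop along those lines might look like the sketch below. The class name, 1 MiB chunk size, temp-file output, and the releaseLast() call (the 2.20.x name for releasing the native buffer; older versions use release()) are all my assumptions, and 100 threads each holding up to ~3 GiB of native memory will need scaling down on most machines:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import net.openhft.chronicle.bytes.Bytes;

public class CopyToStressTest {

    public static void main(String[] args) throws InterruptedException {
        int threads = 100;                                   // scale down to fit available native memory
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            final int id = i;
            pool.submit(() -> {
                byte[] chunk = new byte[1024 * 1024];        // 1 MiB chunk of random data
                new Random().nextBytes(chunk);
                // randomize the amount written per task, from 1 MiB up to ~3 GiB
                long target = ThreadLocalRandom.current().nextLong(1L << 20, 3L << 30);
                Bytes<?> bytes = Bytes.allocateElasticDirect(chunk.length);
                try {
                    while (bytes.writePosition() < target)
                        bytes.write(chunk);
                    long expected = bytes.readRemaining();
                    Path path = Files.createTempFile("copyTo-stress-" + id + "-", ".bin");
                    try (OutputStream out = Files.newOutputStream(path)) {
                        bytes.copyTo(out);                   // the call under test
                    }
                    long written = Files.size(path);
                    if (written != expected)
                        System.err.printf("task %d: expected %d bytes but wrote %d%n", id, expected, written);
                    Files.deleteIfExists(path);
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    bytes.releaseLast();                     // free the native buffer
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}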

@peter-lawrey
Member

I have added a fix for this, sorry about the long delay.

@hft-team-city
Collaborator

Released in Chronicle-Bytes-2.20.43, BOM-2.20.67

@hft-team-city
Collaborator

Released in Chronicle-Bytes-2.20.101, BOM-2.20.134
