
Heap out of memory when using ReliableTopic with reconnect mode "ASYNC" [API-1514] #1347

Closed
marinrusu1997 opened this issue Aug 15, 2022 · 5 comments · Fixed by #1377

@marinrusu1997

Describe the bug
When using a ReliableTopic and the client tries to reconnect with reconnectMode "ASYNC", the Node.js process freezes and then crashes with FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory.
Note that this problem does not occur with reconnectMode "ON".

Expected behavior
The Hazelcast client should reconnect in the background, and calls to publish should fail with an error.

Sample Code

import { Client } from 'hazelcast-client';

const client = await Client.newHazelcastClient({
	clusterName: '@FIXME YOUR CLUSTER NAME',
	network: {
		clusterMembers: [
			'localhost:5701'
		]
	},
	connectionStrategy: {
		asyncStart: false,
		reconnectMode: "ASYNC", // <----
		connectionRetry: {
			clusterConnectTimeoutMillis: -1
		}
	}
});
const topic = await client.getReliableTopic('topic-reconnect');
topic.addMessageListener((message) => console.log(message.messageObject));

setInterval(async () => {
	try {
		console.log('*** PUBLISHING MESSAGE ***');
		await topic.publish('hello world!');
		console.log('*** MESSAGE WAS PUBLISHED ***');
	} catch (e) {
		console.error(e);
	}
}, 1000);
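
A possible publish-side guard while the bug is open (an untested sketch, assuming the lifecycleListeners config option delivers state strings such as 'CONNECTED' and 'DISCONNECTED' as documented; it only skips publishing while offline and does not change the client's internal ReliableTopic behaviour):

import { Client } from 'hazelcast-client';

let connected = false;

const client = await Client.newHazelcastClient({
	clusterName: '@FIXME YOUR CLUSTER NAME',
	network: {
		clusterMembers: ['localhost:5701']
	},
	connectionStrategy: {
		asyncStart: false,
		reconnectMode: "ASYNC",
		connectionRetry: {
			clusterConnectTimeoutMillis: -1
		}
	},
	// Track the lifecycle state so the publisher can back off while the
	// client reconnects in the background.
	lifecycleListeners: [
		(state) => { connected = (state === 'CONNECTED'); }
	]
});

const topic = await client.getReliableTopic('topic-reconnect');

setInterval(async () => {
	if (!connected) {
		console.log('*** CLIENT OFFLINE, SKIPPING PUBLISH ***');
		return;
	}
	try {
		await topic.publish('hello world!');
	} catch (e) {
		console.error(e);
	}
}, 1000);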

Environment (please complete the following information):

  1. Node.js version: v16.14.2

  2. Hazelcast Node.js client version: v5.1.0

  3. Cluster size: 1 member running inside a Docker container (Hazelcast v5.1.2)

  4. Number of clients: 1 client

  5. Operating system: macOS Monterey 12.5

  6. Logs and stack traces:

❯ node dev/scripts/hazelcast-bug.js
[DefaultLogger] INFO at LifecycleService: HazelcastClient is STARTING
[DefaultLogger] INFO at LifecycleService: HazelcastClient is STARTED
[DefaultLogger] INFO at ConnectionManager: Trying to connect to localhost:5701
[DefaultLogger] INFO at LifecycleService: HazelcastClient is CONNECTED
[DefaultLogger] INFO at ConnectionManager: Authenticated with server 172.16.57.2:5701:c660d883-0d94-4503-8f2c-561455c09fcd, server version: 5.1.2, local address: 127.0.0.1:50000
[DefaultLogger] INFO at ClusterService: 

Members [1] {
        Member [172.16.57.2]:5701 - c660d883-0d94-4503-8f2c-561455c09fcd
}

[DefaultLogger] INFO at Statistics: Client statistics is enabled with period 5 seconds.
*** PUBLISHING MESSAGE ***
*** MESSAGE WAS PUBLISHED ***
hello world!
*** PUBLISHING MESSAGE ***
*** MESSAGE WAS PUBLISHED ***
hello world!
[DefaultLogger] WARN at Connection: Connection{alive=false, connectionId=0, remoteAddress=172.16.57.2:5701} closed. Reason: Connection closed by the other side. - Connection closed by the other side.
[DefaultLogger] INFO at ConnectionManager: Removed connection to endpoint: 172.16.57.2:5701:c660d883-0d94-4503-8f2c-561455c09fcd, connection: Connection{alive=false, connectionId=0, remoteAddress=172.16.57.2:5701}
[DefaultLogger] INFO at LifecycleService: HazelcastClient is DISCONNECTED
[DefaultLogger] INFO at ConnectionManager: Trying to connect to Member [172.16.57.2]:5701 - c660d883-0d94-4503-8f2c-561455c09fcd
*** PUBLISHING MESSAGE ***
ClientOfflineError: No connection found to cluster
    at new ClientOfflineError (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/core/HazelcastError.js:90:9)
    at ConnectionRegistryImpl.checkIfInvocationAllowed (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/network/ConnectionRegistry.js:120:21)
    at InvocationService.invokeSmart (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/invocation/InvocationService.js:393:51)
    at InvocationService.invoke (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/invocation/InvocationService.js:260:14)
    at InvocationService.invokeOnPartition (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/invocation/InvocationService.js:288:21)
    at RingbufferProxy.encodeInvokeOnPartition (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/proxy/BaseProxy.js:115:39)
    at RingbufferProxy.encodeInvoke (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/proxy/PartitionSpecificProxy.js:28:21)
    at RingbufferProxy.add (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/proxy/ringbuffer/RingbufferProxy.js:63:21)
    at ReliableTopicProxy.trySendMessage (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/proxy/topic/ReliableTopicProxy.js:130:25)
    at ReliableTopicProxy.addWithBackoff (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/proxy/topic/ReliableTopicProxy.js:126:14) {
  cause: undefined,
  serverStackTrace: undefined
}

<--- Last few GCs --->

[4217:0x140008000]    66677 ms: Scavenge 4044.5 (4122.8) -> 4040.7 (4125.1) MB, 7.8 / 0.0 ms  (average mu = 0.752, current mu = 0.368) allocation failure 
[4217:0x140008000]    66707 ms: Scavenge 4046.6 (4125.1) -> 4042.5 (4142.1) MB, 9.1 / 0.0 ms  (average mu = 0.752, current mu = 0.368) allocation failure 
[4217:0x140008000]    68662 ms: Mark-sweep 4056.1 (4142.1) -> 4046.4 (4148.3) MB, 1902.6 / 0.0 ms  (average mu = 0.528, current mu = 0.108) allocation failure scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0x1024bd260 node::Abort() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 2: 0x1024bd3e8 node::errors::TryCatchScope::~TryCatchScope() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 3: 0x10260bed0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 4: 0x10260be64 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 5: 0x10278f57c v8::internal::Heap::GarbageCollectionReasonToString(v8::internal::GarbageCollectionReason) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 6: 0x10278e09c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 7: 0x1027993e4 v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 8: 0x102799478 v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 9: 0x102766b58 v8::internal::FactoryBase<v8::internal::Factory>::NewConsString(v8::internal::Handle<v8::internal::String>, v8::internal::Handle<v8::internal::String>, int, bool, v8::internal::AllocationType) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
10: 0x102ae9134 v8::internal::IncrementalStringBuilder::AppendString(v8::internal::Handle<v8::internal::String>) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
11: 0x10273cbc4 v8::internal::ErrorUtils::ToString(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
12: 0x10268cec0 v8::internal::Builtin_ErrorPrototypeToString(int, unsigned long*, v8::internal::Isolate*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
13: 0x102db8bcc Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
14: 0x102dea1cc Builtins_OrdinaryToPrimitive_Number [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
15: 0x102de9e6c Builtins_NonPrimitiveToPrimitive_Default [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
16: 0x102de69bc Builtins_StringAddConvertRight [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
17: 0x1070a6970 
18: 0x102e00cf8 Builtins_PromiseRejectReactionJob [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
19: 0x102d6e218 Builtins_RunMicrotasks [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
20: 0x102d4a3e4 Builtins_JSRunMicrotasksEntry [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
21: 0x10271bd2c v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
22: 0x10271c160 v8::internal::(anonymous namespace)::InvokeWithTryCatch(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
23: 0x10271c24c v8::internal::Execution::TryRunMicrotasks(v8::internal::Isolate*, v8::internal::MicrotaskQueue*, v8::internal::MaybeHandle<v8::internal::Object>*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
24: 0x10273ee88 v8::internal::MicrotaskQueue::RunMicrotasks(v8::internal::Isolate*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
25: 0x10273f71c v8::internal::MicrotaskQueue::PerformCheckpoint(v8::Isolate*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
26: 0x102409db4 node::InternalCallbackScope::Close() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
27: 0x1024097c4 node::InternalCallbackScope::~InternalCallbackScope() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
28: 0x1024660bc node::Environment::RunTimers(uv_timer_s*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
29: 0x102d2b5b8 uv__run_timers [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
30: 0x102d2e5f8 uv_run [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
31: 0x10240accc node::SpinEventLoop(node::Environment*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
32: 0x1024f65b0 node::NodeMainInstance::Run(int*, node::Environment*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
33: 0x1024f627c node::NodeMainInstance::Run(node::EnvSerializeInfo const*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
34: 0x10248f080 node::Start(int, char**) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
35: 0x106e5508c 
[1]    4217 abort      node dev/scripts/hazelcast-bug.js
@srknzl
Member

srknzl commented Aug 15, 2022

@marinrusu1997

If you can reproduce this, could you run it again and send the logs with TRACE logging enabled?

const client = await Client.newHazelcastClient({
	clusterName: '@FIXME YOUR CLUSTER NAME',
	network: {
		clusterMembers: [
			'localhost:5701'
		]
	},
	connectionStrategy: {
		asyncStart: false,
		reconnectMode: "ASYNC", // <----
		connectionRetry: {
			clusterConnectTimeoutMillis: -1
		}
	},
	properties: {
		'hazelcast.logging.level': 'TRACE'
	}
});

@yuce yuce added the to-jira label Aug 15, 2022
@github-actions github-actions bot changed the title Heap out of memory when using ReliableTopic with reconnect mode "ASYNC" Heap out of memory when using ReliableTopic with reconnect mode "ASYNC" [API-1514] Aug 15, 2022
@github-actions

Internal Jira issue: API-1514

@yuce yuce added this to the 5.2.0 milestone Aug 15, 2022
@marinrusu1997
Author

Sure. This is the output:

[DefaultLogger] INFO at LifecycleService: HazelcastClient is STARTING
[DefaultLogger] INFO at LifecycleService: HazelcastClient is STARTED
[DefaultLogger] INFO at ConnectionManager: Trying to connect to localhost:5701
[DefaultLogger] INFO at LifecycleService: HazelcastClient is CONNECTED
[DefaultLogger] INFO at ConnectionManager: Authenticated with server 172.16.57.4:5701:bcaf59db-3876-4a6b-8ef7-a6774ece7102, server version: 5.1.2, local address: 127.0.0.1:50564
[DefaultLogger] TRACE at ClusterViewListenerService: Register attempt of cluster view handler to Connection{alive=true, connectionId=0, remoteAddress=172.16.57.4:5701}
[DefaultLogger] TRACE at ClusterService: Resetting the member list version.
[DefaultLogger] INFO at ClusterService: 

Members [1] {
	Member [172.16.57.4]:5701 - bcaf59db-3876-4a6b-8ef7-a6774ece7102
}

[DefaultLogger] TRACE at ClusterViewListenerService: Registered cluster view handler to Connection{alive=true, connectionId=0, remoteAddress=172.16.57.4:5701}
[DefaultLogger] DEBUG at TranslateAddressProvider: Public address is not available on member bcaf59db-3876-4a6b-8ef7-a6774ece7102. Client will use internal addresses.
[DefaultLogger] INFO at Statistics: Client statistics is enabled with period 5 seconds.
[DefaultLogger] TRACE at ListenerService: Register attempt of [object Object] to Connection{alive=true, connectionId=0, remoteAddress=172.16.57.4:5701}
[DefaultLogger] TRACE at ListenerService: Registered c9714a1b-9fb4-96b8-6f55-b4aaf3415d4a to Connection{alive=true, connectionId=0, remoteAddress=172.16.57.4:5701}
[DefaultLogger] TRACE at SchemaService: There is no schemas to send to the cluster
[DefaultLogger] TRACE at InvocationService: Partition owner is not assigned yet
[DefaultLogger] TRACE at InvocationService: Partition owner is not assigned yet
[DefaultLogger] DEBUG at PartitionService: Handling new partition table with partitionStateVersion: 1
[DefaultLogger] TRACE at PartitionService: Event coming from a new connection. Old connection: undefined, new connection Connection{alive=true, connectionId=0, remoteAddress=172.16.57.4:5701}
*** PUBLISHING MESSAGE ***
hello world!
*** MESSAGE WAS PUBLISHED ***
*** PUBLISHING MESSAGE ***
*** MESSAGE WAS PUBLISHED ***
hello world!
*** PUBLISHING MESSAGE ***
hello world!
*** MESSAGE WAS PUBLISHED ***
*** PUBLISHING MESSAGE ***
*** MESSAGE WAS PUBLISHED ***
hello world!
[DefaultLogger] WARN at Connection: Connection{alive=false, connectionId=0, remoteAddress=172.16.57.4:5701} closed. Reason: Connection closed by the other side. - Connection closed by the other side.
[DefaultLogger] INFO at ConnectionManager: Removed connection to endpoint: 172.16.57.4:5701:bcaf59db-3876-4a6b-8ef7-a6774ece7102, connection: Connection{alive=false, connectionId=0, remoteAddress=172.16.57.4:5701}
[DefaultLogger] INFO at LifecycleService: HazelcastClient is DISCONNECTED
[DefaultLogger] INFO at ConnectionManager: Trying to connect to Member [172.16.57.4]:5701 - bcaf59db-3876-4a6b-8ef7-a6774ece7102
[DefaultLogger] DEBUG at InvocationService: Retrying(1) on correlation-id=16
TargetDisconnectedError: Connection closed by the other side.
    at new TargetDisconnectedError (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/core/HazelcastError.js:240:9)
    at Timeout._onTimeout (/Users/marin/Development/platform-svc-itpm-relay/node_modules/hazelcast-client/lib/invocation/InvocationService.js:232:50)
    at listOnTimeout (node:internal/timers:559:17)
    at processTimers (node:internal/timers:502:7) {
  cause: undefined,
  serverStackTrace: undefined
}
[DefaultLogger] TRACE at ReliableTopicListenerRunner: Listener of topic: topic-reconnect got error: Error: No connection found to cluster. Continuing from last known sequence: 4
[DefaultLogger] TRACE at ReliableTopicListenerRunner: Listener of topic: topic-reconnect got error: Error: No connection found to cluster. Continuing from last known sequence: 4
....
....

<--- Last few GCs --->

[53408:0x160008000]    92475 ms: Scavenge 4044.0 (4122.8) -> 4039.3 (4124.3) MB, 9.3 / 0.0 ms  (average mu = 0.744, current mu = 0.442) allocation failure
[53408:0x160008000]    92511 ms: Scavenge 4045.4 (4124.3) -> 4040.6 (4141.1) MB, 10.2 / 0.0 ms  (average mu = 0.744, current mu = 0.442) allocation failure
[53408:0x160008000]    93848 ms: Mark-sweep 4054.7 (4141.1) -> 4042.0 (4144.6) MB, 1278.4 / 0.0 ms  (average mu = 0.586, current mu = 0.203) allocation failure scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0x102ee9260 node::Abort() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 2: 0x102ee93e8 node::errors::TryCatchScope::~TryCatchScope() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 3: 0x103037ed0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 4: 0x103037e64 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 5: 0x1031bb57c v8::internal::Heap::GarbageCollectionReasonToString(v8::internal::GarbageCollectionReason) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 6: 0x1031ba09c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 7: 0x1031c53e4 v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 8: 0x1031c5478 v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
 9: 0x103190674 v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithFiller(v8::internal::Handle<v8::internal::Map>, int, v8::internal::Handle<v8::internal::Oddball>, v8::internal::AllocationType) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
10: 0x103156110 v8::internal::(anonymous namespace)::CaptureStackTrace(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::(anonymous namespace)::CaptureStackTraceOptions) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
11: 0x10315690c v8::internal::Isolate::CaptureAndSetSimpleStackTrace(v8::internal::Handle<v8::internal::JSReceiver>, v8::internal::FrameSkipMode, v8::internal::Handle<v8::internal::Object>) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
12: 0x1030b8de8 v8::internal::Builtin_ErrorCaptureStackTrace(int, unsigned long*, v8::internal::Isolate*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
13: 0x1037e4bcc Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
14: 0x107aced64
15: 0x107aea744
16: 0x107aeed6c
17: 0x10382ccf8 Builtins_PromiseRejectReactionJob [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
18: 0x10379a218 Builtins_RunMicrotasks [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
19: 0x1037763e4 Builtins_JSRunMicrotasksEntry [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
20: 0x103147d2c v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
21: 0x103148160 v8::internal::(anonymous namespace)::InvokeWithTryCatch(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
22: 0x10314824c v8::internal::Execution::TryRunMicrotasks(v8::internal::Isolate*, v8::internal::MicrotaskQueue*, v8::internal::MaybeHandle<v8::internal::Object>*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
23: 0x10316ae88 v8::internal::MicrotaskQueue::RunMicrotasks(v8::internal::Isolate*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
24: 0x10316b71c v8::internal::MicrotaskQueue::PerformCheckpoint(v8::Isolate*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
25: 0x1030a4d78 v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
26: 0x1030a4870 v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
27: 0x1030a40fc v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
28: 0x1037e4bcc Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
29: 0x107a204f8
30: 0x10377650c Builtins_JSEntryTrampoline [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
31: 0x1037761a4 Builtins_JSEntry [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
32: 0x103147d64 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
33: 0x1031473f8 v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
34: 0x103054888 v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
35: 0x102e35eec node::InternalCallbackScope::Close() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
36: 0x102e357c4 node::InternalCallbackScope::~InternalCallbackScope() [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
37: 0x102e920bc node::Environment::RunTimers(uv_timer_s*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
38: 0x1037575b8 uv__run_timers [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
39: 0x10375a5f8 uv_run [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
40: 0x102e36ccc node::SpinEventLoop(node::Environment*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
41: 0x102f225b0 node::NodeMainInstance::Run(int*, node::Environment*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
42: 0x102f2227c node::NodeMainInstance::Run(node::EnvSerializeInfo const*) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
43: 0x102ebb080 node::Start(int, char**) [/Users/marin/.nvm/versions/node/v16.14.2/bin/node]
44: 0x10778908c
[1]    53408 abort      node dev/scripts/hazelcast-bug.js > output.txt

The line [DefaultLogger] TRACE at ReliableTopicListenerRunner: Listener of topic: topic-reconnect got error: Error: No connection found to cluster. Continuing from last known sequence: 4 is logged continuously (millions of times) until the process crashes.

cat output.txt | wc -l
 3317205
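
For illustration only (this is not the client's code), the pattern below reproduces the same kind of crash in isolation: a reader loop that retries by returning a recursive call from .catch() never yields to the event loop, because every retry is scheduled as a microtask, and it builds an ever-growing chain of pending promises and Error objects that can never be collected:

// Illustration only: NOT hazelcast-client code, just the retry pattern
// in isolation. Each iteration rejects with a fresh Error (stack trace
// included), logs it, and immediately recurses. The outer promises can
// only settle once the innermost one does, so the whole chain stays
// retained and the heap grows until the process aborts.
function readLoop(sequence) {
	return Promise.reject(new Error('No connection found to cluster'))
		.catch((err) => {
			console.log(`got error: ${err}. Continuing from last known sequence: ${sequence}`);
			return readLoop(sequence); // no delay, no termination condition
		});
}

readLoop(4);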

@srknzl
Member

srknzl commented Sep 28, 2022

Opened a fix PR #1377

The leak probably exists in older versions of the client as well; we still need to confirm that. For example, the problematic code causing the bug is the same in the 4.2.0 client as on master.
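
Not claiming this is what #1377 does, but for reference, the usual shape of such a fix is to let each attempt's promise settle and schedule the retry on a timer, stopping once the runner is cancelled; attemptRead and isCancelled below are hypothetical placeholders:

// Hypothetical sketch, not the actual patch: attemptRead() stands in
// for the real ring buffer read and isCancelled() for the runner's
// termination flag.
function readLoop(sequence) {
	if (isCancelled()) {
		return;
	}
	attemptRead(sequence)
		.then((nextSequence) => readLoop(nextSequence))
		.catch((err) => {
			console.log(`got error: ${err}. Continuing from last known sequence: ${sequence}`);
			// Retry on a timer: the current promise chain settles, nothing
			// is retained, and the event loop keeps running.
			setTimeout(() => readLoop(sequence), 1000);
		});
}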

@srknzl
Member

srknzl commented Sep 30, 2022

@marinrusu1997 The memory leak is fixed. Thanks for reporting this. We appreciate it.
