
Seeing multiple concurrent locks on a distributed lock #168

Closed
dnedde opened this issue May 21, 2012 · 4 comments

@dnedde commented May 21, 2012

I'm seeing a case where the same distributed lock can be held on two different machines at the same time. Before attempting the lock, I ping the other machine to avoid trying to lock during a split-brain scenario.

Hazelcast version: Hazelcast Community Edition 2.0.3 (20120418)

I looked at the release notes for Hazelcast 2.1 and don't see any fixes that look relevant.

In the logs below, each line is tagged with the machine name, "dg0" or "dg1".

I had the Hazelcast logging set to FINEST, but I don't really understand the Hazelcast logging messages or how they might relate to my problem. Any help here would be appreciated.
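For reference, my lock usage follows roughly this pattern (a minimal sketch rather than my actual code; the peer address and ping timeout are illustrative):

```java
import java.net.InetAddress;
import java.util.concurrent.locks.Lock;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class SerializeAllWork {
    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(null); // default config

        // Ping the peer first; if it is unreachable we may be in a
        // split-brain situation, so refuse to take the lock at all.
        if (!InetAddress.getByName("169.254.0.2").isReachable(1000)) {
            throw new IllegalStateException("Peer unreachable; refusing to lock");
        }

        // Hazelcast 2.x distributed lock; ILock implements java.util.concurrent.locks.Lock
        Lock lock = hz.getLock("SERIALIZE_ALL_LOCK");
        lock.lock();
        try {
            // Critical section: only one thread cluster-wide should be here.
        } finally {
            lock.unlock();
        }
    }
}
```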

# Starting with the Hazelcast log messages and the successful locks right before the problem happened.

0516+12:44:24.027 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:44:24.027 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:1, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:44:24.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]

0516+12:44:25.278 dg0 [d3a30188] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1435]
  # Lock - ok

0516+12:44:35.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:35.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:35.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:44:35.591 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:44:46.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:46.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:46.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:46.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:57.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:44:57.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]

0516+12:44:57.282 dg1 [301e08c7] DistributedLock  Calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-181]
  # Thread-181 tries to lock, Thread-1435 currently has lock - ok
0516+12:44:57.284 dg0 [d3a30188] DistributedLock  Completed calling unlock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-1435]
  # Thread-1435 releases lock - ok
0516+12:44:57.284 dg1 [301e08c7] DistributedLock  Completed calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-181]
  # Thread-181 lock() call returns with true - ok
0516+12:44:57.309 dg1 [301e08c7] DistributedLock  Completed calling unlock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-181]
  # Thread-181 unlocks - ok

0516+12:44:57.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:44:57.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:1, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:45:08.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:08.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:1, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:08.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:45:08.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:45:19.020 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:19.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:19.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:19.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:29.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:29.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:30.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:30.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:40.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:40.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:40.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:40.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:51.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:51.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:45:51.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:51.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:45:54.580 dg0 Hazelcast: PartitionManager /169.254.0.1:5701 [dev] Checking partition table for repartitioning... [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:01.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:01.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:02.020 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:02.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:12.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:12.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:12.589 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:12.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:23.020 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:23.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:23.591 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:23.591 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:46:33.618 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:33.618 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:46:34.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:34.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:44.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:44.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:46:45.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:45.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:55.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:55.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:56.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:46:56.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:01.011 dg1 Hazelcast: PartitionManager /169.254.0.2:5701 [dev] Checking partition table for repartitioning... [hz._hzInstance_1_dev.scheduled.thread-3]
0516+12:47:06.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:06.608 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:07.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:07.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]


# Problems start here

0516+12:47:11.988 dg1 [301e08c8] DistributedLock  Completed calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-182]
  # Thread-182 locks - ok
  # dg0 is pingable at this point


0516+12:47:17.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:17.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-1]
0516+12:47:17.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:17.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-2]
0516+12:47:28.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:28.022 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:28.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:28.590 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] Address[169.254.0.2:5701] mapRecords: 1 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.021 dg1 Hazelcast: CMap /169.254.0.2:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.593 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] c:__hz_Locks Cleanup , dirty:0, purge:0, evict:0, unknown:0, stillOwned:0, backupPurge:0 [hz._hzInstance_1_dev.scheduled.thread-4]
0516+12:47:39.594 dg0 Hazelcast: CMap /169.254.0.1:5701 [dev] Address[169.254.0.1:5701] mapRecords: 0 indexes: 0 [hz._hzInstance_1_dev.scheduled.thread-4]


0516+12:47:48.301 dg0 [d3a301c6] DistributedLock  Calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-1455]
0516+12:47:48.302 dg0 [d3a301c6] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1455]
  # Thread-1455 gets a lock
  # This lock should not have been allowed, as it is currently held by Thread-182

0516+12:48:16.200 dg0 [d3a301c6] DistributedLock  Completed calling unlock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-1455]
  # Thread-1455 unlocked - ok

0516+12:48:16.200 dg0 [d3a301c8] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1457]
  # this lock should not have been allowed, currently held by Thread-182

0516+12:48:16.200 dg1 [301e08cb] DistributedLock  Calling lock() for DistributedLock@14115fb[owner=Thread-182,lockName=SERIALIZE_ALL_LOCK]  [Thread-185]
  # lock is correctly blocked because Thread-182 has it
  # dg0 is pingable at this time

0516+12:48:16.299 dg0 [d3a301c9] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1458]
  # this lock should not have been allowed, currently held by Thread-182

0516+12:48:16.459 dg0 [d3a301ca] DistributedLock  Calling lock() for DistributedLock@7614d6[owner=Thread-1458,lockName=SERIALIZE_ALL_LOCK]  [Thread-1459]
  # Lock is blocked. It should be blocked because Thread-182 holds the lock,
  # but it looks like it is blocked because Thread-1458 holds it

0516+12:48:26.227 dg1 Hazelcast: ConcurrentMapManager /169.254.0.2:5701 [dev] Still no response! Request{name='c:__hz_Locks',CONCURRENT_MAP_LOCK, redoCount='0', callId='188', lockThreadId='137'} [Thread-185]
0516+12:48:26.513 dg0 Hazelcast: ConcurrentMapManager /169.254.0.1:5701 [dev] Still no response! Request{name='c:__hz_Locks',CONCURRENT_MAP_LOCK, redoCount='0', callId='412', lockThreadId='255'} [Thread-1459]

0516+12:48:28.949 dg0 [d3a301ca] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1459]
0516+12:48:29.006 dg0 [d3a301e1] DistributedLock  Completed calling lock() for DistributedLock@7614d6[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-1461]
  # this lock should not have been allowed, currently held by Thread-182

0516+12:48:33.227 dg1 [301e08c8] DistributedLock  Completed calling unlock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK]  [Thread-182]
  # Thread-182 unlocks

0516+12:48:33.227 dg1 [301e08cb] DistributedLock  Completed calling lock() for DistributedLock@14115fb[owner=null,lockName=SERIALIZE_ALL_LOCK],locked=true  [Thread-185]
  # ok
@mdogan (Member) commented May 24, 2012 (comment minimized)

@mdogan (Member) commented Sep 20, 2012

Merging this issue into #267.

Fix for #267 is expected to apply to this issue too.

mdogan closed this Sep 20, 2012

ghost assigned mdogan Sep 20, 2012

@jentfoo commented Feb 13, 2013

I commented on issue #267 that I was still seeing that issue in 2.4.1. I have since updated to 2.5 and am still seeing both issues. I was able to hack around #267 fairly easily by manually tracking which thread owns the lock and catching the exception when it is not valid (a sketch of that workaround is below). This issue still occurs for me too, although I can also detect it and work around it in a somewhat hacky way. It seems that this issue may be more likely to occur when one node in the cluster fails.

Just wanted to make sure it is documented that both this issue and #267 appear to still be defects.
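The ownership-tracking workaround looks roughly like this (a simplified sketch, not my production code; the TrackedLock wrapper and its details are hypothetical):

```java
import java.util.concurrent.locks.Lock;

// Wrap the Hazelcast lock and track the owning thread ourselves, so we can
// detect when the cluster no longer considers us the owner and swallow the
// IllegalMonitorStateException that unlock() throws in that case (#267).
public class TrackedLock {
    private final Lock delegate;   // e.g. hazelcastInstance.getLock(name)
    private volatile Thread owner; // the thread we believe holds the lock

    public TrackedLock(Lock delegate) {
        this.delegate = delegate;
    }

    public void lock() {
        delegate.lock();
        owner = Thread.currentThread();
    }

    public void unlock() {
        if (owner != Thread.currentThread()) {
            return; // we never acquired (or already lost) the lock
        }
        owner = null;
        try {
            delegate.unlock();
        } catch (IllegalMonitorStateException e) {
            // The cluster disagrees about ownership; log and continue
            // instead of propagating the failure.
        }
    }
}
```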

@SingralliSeena commented Jan 9, 2015

We are seeing this issue in version 3.1.5. Should I be alarmed, or is this issue still not fixed?
