
Discovery: Support local (JVM level) discovery #2

Closed
kimchy opened this issue Feb 10, 2010 · 1 comment

@kimchy (Member) commented Feb 10, 2010

Allow JVM-level (well, actually class-loader-level) discovery for simple testing / embedding of a single node (which may coexist with other nodes in the same class loader).

Enable it using:

discovery:
    type: local

Or using:

node:
    local: true

(which will also enable other modules to be local, such as the transport - once we have that...)
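For embedded testing, usage would look roughly like the following - a minimal sketch assuming the `NodeBuilder` API of the time (illustrative only, not part of this issue):

```java
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class LocalNodeExample {
    public static void main(String[] args) {
        // Start an embedded node whose discovery (and, later, transport) stays
        // inside this JVM / class loader; `local(true)` maps to `node.local: true`.
        Node node = NodeBuilder.nodeBuilder()
                .local(true)
                .node();
        try {
            // ... index and search through node.client() ...
        } finally {
            node.close();
        }
    }
}
```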

@kimchy (Member, Author) commented Feb 10, 2010

Discovery: Support local (JVM level) discovery. Closed by b61964a.

martijnvg added a commit to martijnvg/elasticsearch that referenced this issue May 3, 2013
martijnvg added a commit to martijnvg/elasticsearch that referenced this issue Mar 25, 2014
kblincoe pushed a commit to kblincoe/elasticsearch that referenced this issue Apr 3, 2017
Merge pull request elastic#2 from kblincoe/master
Merge latest changes from kblincoe/elasticsearch
jasontedor added a commit to jasontedor/elasticsearch that referenced this issue May 29, 2017
matarrese added a commit to matarrese/elasticsearch that referenced this issue Jul 12, 2017
bleskes added a commit that referenced this issue Jul 21, 2017
Engine - do not index operations with seq# lower than the local checkpoint into lucene (#25827)

When a replica processes out of order operations, it can drop some due to version comparisons. In the past that would have resulted in a VersionConflictException being thrown and the operation being totally ignored. With the seq# push, we started storing these operations in the translog (but not indexing them into lucene) in order to have complete op histories to facilitate ops based recoveries. This in turn had the undesired effect that deleted docs may be resurrected during recovery in some extreme edge situations (see a complete explanation below). This PR contains a simple fix, which is also an optimization for the recovery process: incoming operations that have a seq# lower than the current local checkpoint (i.e., have already been processed) should not be indexed into lucene. Note that sometimes we can also skip storing them in the translog, but this is not required for the fix and is more complicated.

This is the equivalent of #25592

## More details on resurrected ops 

Consider two operations: 
 - Index d1, seq no 1
 - Delete d1, seq no 3

On a replica they come out of order:
 - Translog gen 1 contains:
    - delete (seqNo 3)
 - Translog gen 2 contains:
    - index (seqNo 1) (wasn't indexed into lucene, but put into the translog)
    - another operation (seqNo 10)
 - Translog gen 3 contains:
    - another op (seqNo 9)
 - Engine commits with:
    - local checkpoint 9
    - refers to gen 2 

If this replica becomes a primary:
    - Local recovery will replay translog gen 2 and up, causing the index op (seqNo 1) to be re-indexed. 
    - Even if recovery starts at gen 3, the translog retention policy will cause file based recovery to replay the entire translog. If it happens to start at gen 2 (but not 1), we run into the same problem.

#### Some context - out of order delivery involving deletes:

In normal operation, this relies on the gc_deletes setting. We assume that the setting represents an upper bound on the time between the index and the delete operation. The index operation will be detected as stale based on the tombstone map in the LiveVersionMap.

Recovery presents a challenge as it can replay an old index operation that was in the translog and override a delete operation that was done when the engine was opened (and is not part of the replayed snapshot). To deal with this situation, we disable GC deletes (i.e. retain all deletes) for the duration of recoveries. This means that the delete operation will be remembered and the index operation ignored.

Both of the above scenarios (local recovery + peer recovery) create a situation where the delete operation is never replayed. It is thus "lost", as lucene doesn't remember it happened and our LiveVersionMap is no longer populated with it.
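For reference, the window mentioned above is the per-index `index.gc_deletes` setting (default 60s). A minimal sketch of setting it programmatically, illustrative only:

```java
import org.elasticsearch.common.settings.Settings;

public class GcDeletesExample {
    public static void main(String[] args) {
        // index.gc_deletes bounds how long delete tombstones are retained in the
        // LiveVersionMap so that late, out-of-order index ops are detected as stale.
        Settings indexSettings = Settings.builder()
                .put("index.gc_deletes", "60s")
                .build();
        System.out.println(indexSettings.get("index.gc_deletes"));
    }
}
```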

#### Solution:

Note that both local and peer recovery represent a scenario where we replay translog ops on top of an existing lucene index, potentially with ongoing indexing. Therefore we can treat them the same.

The local checkpoint in Lucene represents a marker indicating that all operations below it were performed on the index. This is the only form of "memory" that we have that relates to deletes. If we can achieve the following:
1) All ops below the local checkpoint are not indexed into lucene.
2) All ops above the local checkpoint are indexed.

It will mean that all variants are covered: (i# == index op seq#, d# == delete op seq#, lc == local checkpoint in commit)
1) i# < d# <= lc - document is already deleted in lucene and stays that way.
2) i# <= lc < d# - delete is replayed on index - document is deleted
3) lc < i# < d# - index is replayed and then delete - document is deleted.

More formally - we want to make sure that for all ops o1 and o2 performed on the primary, if o2 is processed on a shard before o1, then o1 will be dropped. We have the following scenarios:

1) If neither o1 nor o2 is included in the replayed snapshot and both are above it (i.e., have a higher seq#), they fall under the gc deletes assumption.
2) If o1 is part of the replayed snapshot but o2 is above it:
	- if o2 arrives first, o1 must still arrive due to the recovery and potentially via replication as well. Since gc deletes are disabled, we are guaranteed to know of o2's existence.
3) If both o2 and o1 are part of the replayed snapshot:
	- we fall under the same scenario as #2 - disabling GC deletes ensures we know of o2 if it arrives first.
4) If o1 falls before the snapshot and o2 is either part of the snapshot or higher:
	- Since the snapshot is guaranteed to contain all ops that are not part of lucene and are above the lc in the commit used, this means that o1 is part of lucene and o1 < local checkpoint. This means it won't be processed, so we're not in the scenario we're discussing.
5) If o2 falls before the snapshot but o1 is part of it:
	- by the same reasoning as above, o2 < local checkpoint. Since o1 < o2, we also get o1 < local checkpoint, so o1 will be dropped.


#### Implementation:

For local recovery, we can filter the ops we read off the translog and avoid replaying them. For peer recovery this is tricky, as we do want to send the operations in order to have some history on the target shard. Filtering operations at the engine level (i.e., not indexing into lucene if op seq# <= lc) would work for both.
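A minimal sketch of that engine-level check - names are illustrative and do not match the actual InternalEngine code:

```java
// Hypothetical sketch of the engine-level filter described above.
public class LocalCheckpointFilterSketch {

    /**
     * Ops at or below the local checkpoint of the commit we recover from were
     * already applied to Lucene, so they must not be indexed again; they are
     * still recorded in the translog to keep the op history complete.
     */
    static boolean shouldIndexIntoLucene(long opSeqNo, long localCheckpointOfCommit) {
        return opSeqNo > localCheckpointOfCommit;
    }

    public static void main(String[] args) {
        long lc = 9;                                        // local checkpoint in the commit
        System.out.println(shouldIndexIntoLucene(1, lc));   // false: seq# 1 is already in Lucene
        System.out.println(shouldIndexIntoLucene(10, lc));  // true: replayed into Lucene as usual
    }
}
```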
dnhatn added a commit to dnhatn/elasticsearch that referenced this issue Mar 17, 2018
Harden periodically check to avoid endless flush loop
In elastic#28350, we fixed an endless flushing loop which can happen on
replicas by tightening the relation between the flush action and the
periodically flush condition.

1. The periodically flush condition is enabled only if it will be
disabled after a flush.

2. If the periodically flush condition is true then a flush will
actually happen regardless of Lucene state.

(1) and (2) guarantee a flushing loop will be terminated. Sadly, condition
(1) can be violated in edge cases as we used two different algorithms to
evaluate the current and future uncommitted size.

- We use method `uncommittedSizeInBytes` to calculate the current
uncommitted size. It is the sum of the translogs whose generation is at
least the minGen (determined by a given seqno). We pick a continuous range
of translogs from the minGen onward to evaluate the current uncommitted size.

- We use method `sizeOfGensAboveSeqNoInBytes` to calculate the future
uncommitted size. It is the sum of the translogs whose maxSeqNo is at least
the given seqNo. Here we don't pick a range but select translogs one
by one.

Suppose we have 3 translogs gen1={#1,#2}, gen2={}, gen3={#3} and
seqno=#1: uncommittedSizeInBytes is the sum of gen1, gen2, and gen3,
while sizeOfGensAboveSeqNoInBytes is the sum of gen1 and gen3. Gen2 is
excluded because its maxSeqNo is still -1.

This commit ensures sizeOfGensAboveSeqNoInBytes uses the same algorithm
as uncommittedSizeInBytes.

Closes elastic#29097
dnhatn added a commit that referenced this issue Mar 22, 2018
Harden periodically check to avoid endless flush loop (#29125)
In #28350, we fixed an endless flushing loop which may happen on 
replicas by tightening the relation between the flush action and the
periodically flush condition.

1. The periodically flush condition is enabled only if it will be disabled
after a flush.

2. If the periodically flush condition is enabled then a flush will
actually happen regardless of Lucene state.

(1) and (2) guarantee that a flushing loop will be terminated. Sadly,
condition (1) can be violated in edge cases as we used two different
algorithms to evaluate the current and future uncommitted translog size.

- We use method `uncommittedSizeInBytes` to calculate the current
  uncommitted size. It is the sum of the translogs whose generation is at
least the minGen (determined by a given seqno). We pick a continuous range
of translogs from the minGen onward to evaluate the current uncommitted size.

- We use method `sizeOfGensAboveSeqNoInBytes` to calculate the future
  uncommitted size. It is the sum of the translogs whose maxSeqNo is at
least the given seqNo. Here we don't pick a range but select translogs
one by one.

Suppose we have 3 translogs `gen1={#1,#2}, gen2={}, gen3={#3} and 
seqno=#1`, `uncommittedSizeInBytes` is the sum of gen1, gen2, and gen3
while `sizeOfGensAboveSeqNoInBytes` is the sum of gen1 and gen3. Gen2 is
excluded because its maxSeqNo is still -1.

This commit removes both the `sizeOfGensAboveSeqNoInBytes` and
`uncommittedSizeInBytes` methods, and enforces the engine to use only the
`sizeInBytesByMinGen` method to evaluate the periodically flush condition.

Closes #29097
Relates #28350
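A rough sketch of the idea behind the single remaining calculation - illustrative only, not the actual Translog code:

```java
import java.util.List;

// Sum a contiguous range of generations starting at minGen, so the "current"
// and "future" uncommitted sizes use the same algorithm and an empty
// generation (maxSeqNo still -1) is never skipped.
public class TranslogSizeSketch {

    static class Gen {
        final long generation;
        final long sizeInBytes;
        final long maxSeqNo;

        Gen(long generation, long sizeInBytes, long maxSeqNo) {
            this.generation = generation;
            this.sizeInBytes = sizeInBytes;
            this.maxSeqNo = maxSeqNo;
        }
    }

    static long sizeInBytesByMinGen(List<Gen> gens, long minGeneration) {
        long total = 0;
        for (Gen g : gens) {
            if (g.generation >= minGeneration) { // contiguous range; empty gens included
                total += g.sizeInBytes;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        List<Gen> gens = List.of(
                new Gen(1, 100, 2),  // gen1 = {#1, #2}
                new Gen(2, 10, -1),  // gen2 = {}  (maxSeqNo still -1)
                new Gen(3, 50, 3));  // gen3 = {#3}
        System.out.println(sizeInBytesByMinGen(gens, 1)); // 160: gen2 is counted too
    }
}
```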
dnhatn added a commit that referenced this issue Mar 22, 2018
Harden periodically check to avoid endless flush loop (#29125)
dnhatn added a commit that referenced this issue Mar 22, 2018
Harden periodically check to avoid endless flush loop (#29125)
dadoonet added a commit to dadoonet/elasticsearch that referenced this issue Jul 26, 2018
Move all classes under a single package
Adapt code for master changes.

We still have issues when running the code because the Azure client
is not Closeable and it fails when running the test:

```
Suite: org.elasticsearch.discovery.azure.arm.AzureArmClientTests
  1> [2018-07-26T17:27:39,577][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=78, reason=Function not implemented
  1> [2018-07-26T17:27:39,584][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
  1> [2018-07-26T11:27:47,547][INFO ][o.e.d.a.a.AzureArmClientTests] [testConnectWithKeySecret]: before test
  2> [pool-2-thread-1] INFO com.microsoft.aad.adal4j.AuthenticationAuthority - [Correlation ID: 3270e9d0-cd58-4fd9-8511-9cad48e2736f] Instance discovery was successful
  1> [2018-07-26T11:27:56,259][INFO ][o.e.d.a.a.AzureArmClientTests]  -> AzureVirtualMachine{groupName='ELASTIC-SA', name='base6', region='eastus', publicIp='null', privateIp='10.0.0.4', powerState='DEALLOCATED'}
  1> [2018-07-26T11:27:56,259][INFO ][o.e.d.a.a.AzureArmClientTests]  -> AzureVirtualMachine{groupName='LOGSTASH-DEMO', name='logstash', region='centralus', publicIp='13.89.222.47', privateIp='10.0.1.9', powerState='RUNNING'}
  1> [2018-07-26T11:27:56,259][INFO ][o.e.d.a.a.AzureArmClientTests]  -> AzureVirtualMachine{groupName='LOGSTASH-DEMO', name='lsdata-0', region='centralus', publicIp='null', privateIp='10.0.1.6', powerState='RUNNING'}
  1> [2018-07-26T11:27:56,259][INFO ][o.e.d.a.a.AzureArmClientTests]  -> AzureVirtualMachine{groupName='LOGSTASH-DEMO', name='lsdata-1', region='centralus', publicIp='null', privateIp='10.0.1.7', powerState='RUNNING'}
  1> [2018-07-26T11:27:56,260][INFO ][o.e.d.a.a.AzureArmClientTests]  -> AzureVirtualMachine{groupName='LOGSTASH-DEMO', name='lsdata-2', region='centralus', publicIp='null', privateIp='10.0.1.8', powerState='RUNNING'}
  1> [2018-07-26T11:27:56,260][INFO ][o.e.d.a.a.AzureArmClientTests]  -> AzureVirtualMachine{groupName='LOGSTASH-DEMO', name='lskibana', region='centralus', publicIp='13.89.232.140', privateIp='10.0.1.5', powerState='RUNNING'}
  1> [2018-07-26T11:27:56,260][INFO ][o.e.d.a.a.AzureArmClientTests]  -> AzureVirtualMachine{groupName='DPI-ARM-TEST', name='dpi-arm-test', region='null', publicIp='40.89.139.46', privateIp='10.0.2.4', powerState='RUNNING'}
  1> [2018-07-26T11:27:56,295][INFO ][o.e.d.a.a.AzureArmClientTests] [testConnectWithKeySecret]: after test
  2> juil. 26, 2018 5:28:56 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
  2> AVERTISSEMENT: Will linger awaiting termination of 2 leaked thread(s).
  2> juil. 26, 2018 5:29:01 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
  2> GRAVE: 2 threads leaked from SUITE scope at org.elasticsearch.discovery.azure.arm.AzureArmClientTests:
  2>    1) Thread[id=21, name=RxIoScheduler-1 (Evictor), state=TIMED_WAITING, group=TGRP-AzureArmClientTests]
  2>         at java.base@10.0.2/jdk.internal.misc.Unsafe.park(Native Method)
  2>         at java.base@10.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
  2>         at java.base@10.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
  2>         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
  2>         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
  2>         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
  2>         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
  2>         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
  2>         at java.base@10.0.2/java.lang.Thread.run(Thread.java:844)
  2>    2) Thread[id=20, name=Okio Watchdog, state=WAITING, group=TGRP-AzureArmClientTests]
  2>         at java.base@10.0.2/java.lang.Object.wait(Native Method)
  2>         at java.base@10.0.2/java.lang.Object.wait(Object.java:328)
  2>         at app//okio.AsyncTimeout.awaitTimeout(AsyncTimeout.java:338)
  2>         at app//okio.AsyncTimeout$Watchdog.run(AsyncTimeout.java:313)
  2> juil. 26, 2018 5:29:01 PM com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
  2> INFOS: Starting to interrupt leaked threads:
  2>    1) Thread[id=21, name=RxIoScheduler-1 (Evictor), state=TIMED_WAITING, group=TGRP-AzureArmClientTests]
  2>    2) Thread[id=20, name=Okio Watchdog, state=WAITING, group=TGRP-AzureArmClientTests]
  2> juil. 26, 2018 5:29:04 PM com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
  2> GRAVE: There are still zombie threads that couldn't be terminated:
  2>    1) Thread[id=21, name=RxIoScheduler-1 (Evictor), state=TIMED_WAITING, group=TGRP-AzureArmClientTests]
  2>         at java.base@10.0.2/jdk.internal.misc.Unsafe.park(Native Method)
  2>         at java.base@10.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
  2>         at java.base@10.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
  2>         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
  2>         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
  2>         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
  2>         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
  2>         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
  2>         at java.base@10.0.2/java.lang.Thread.run(Thread.java:844)
  2>    2) Thread[id=20, name=Okio Watchdog, state=WAITING, group=TGRP-AzureArmClientTests]
  2>         at java.base@10.0.2/java.lang.Object.wait(Native Method)
  2>         at java.base@10.0.2/java.lang.Object.wait(Object.java:328)
  2>         at app//okio.AsyncTimeout.awaitTimeout(AsyncTimeout.java:338)
  2>         at app//okio.AsyncTimeout$Watchdog.run(AsyncTimeout.java:313)
  2> REPRODUCE WITH: ./gradlew :plugins:discovery-azure-arm:test -Dtests.seed=25079E754DA2AE86 -Dtests.class=org.elasticsearch.discovery.azure.arm.AzureArmClientTests -Dtests.security.manager=true -Dtests.locale=fr-FR -Dtests.timezone=Europe/Paris
  2> REPRODUCE WITH: ./gradlew :plugins:discovery-azure-arm:test -Dtests.seed=25079E754DA2AE86 -Dtests.class=org.elasticsearch.discovery.azure.arm.AzureArmClientTests -Dtests.security.manager=true -Dtests.locale=fr-FR -Dtests.timezone=Europe/Paris
  2> NOTE: test params are: codec=Asserting(Lucene70): {}, docValues:{}, maxPointsInLeafNode=1723, maxMBSortInHeap=7.495275164367896, sim=RandomSimilarity(queryNorm=false): {}, locale=en-NR, timezone=America/Indiana/Vincennes
  2> NOTE: Mac OS X 10.13.6 x86_64/Oracle Corporation 10.0.2 (64-bit)/cpus=4,threads=3,free=416855592,total=536870912
  2> NOTE: All tests run in this JVM: [AzureArmClientTests]
ERROR   0.00s J0 | AzureArmClientTests (suite) <<< FAILURES!
   > Throwable #1: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.elasticsearch.discovery.azure.arm.AzureArmClientTests:
   >    1) Thread[id=21, name=RxIoScheduler-1 (Evictor), state=TIMED_WAITING, group=TGRP-AzureArmClientTests]
   >         at java.base@10.0.2/jdk.internal.misc.Unsafe.park(Native Method)
   >         at java.base@10.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
   >         at java.base@10.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
   >         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
   >         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
   >         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
   >         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
   >         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
   >         at java.base@10.0.2/java.lang.Thread.run(Thread.java:844)
   >    2) Thread[id=20, name=Okio Watchdog, state=WAITING, group=TGRP-AzureArmClientTests]
   >         at java.base@10.0.2/java.lang.Object.wait(Native Method)
   >         at java.base@10.0.2/java.lang.Object.wait(Object.java:328)
   >         at app//okio.AsyncTimeout.awaitTimeout(AsyncTimeout.java:338)
   >         at app//okio.AsyncTimeout$Watchdog.run(AsyncTimeout.java:313)
   >    at __randomizedtesting.SeedInfo.seed([25079E754DA2AE86]:0)Throwable #2: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   >    1) Thread[id=21, name=RxIoScheduler-1 (Evictor), state=TIMED_WAITING, group=TGRP-AzureArmClientTests]
   >         at java.base@10.0.2/jdk.internal.misc.Unsafe.park(Native Method)
   >         at java.base@10.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
   >         at java.base@10.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
   >         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
   >         at java.base@10.0.2/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
   >         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
   >         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
   >         at java.base@10.0.2/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
   >         at java.base@10.0.2/java.lang.Thread.run(Thread.java:844)
   >    2) Thread[id=20, name=Okio Watchdog, state=WAITING, group=TGRP-AzureArmClientTests]
   >         at java.base@10.0.2/java.lang.Object.wait(Native Method)
   >         at java.base@10.0.2/java.lang.Object.wait(Object.java:328)
   >         at app//okio.AsyncTimeout.awaitTimeout(AsyncTimeout.java:338)
   >         at app//okio.AsyncTimeout$Watchdog.run(AsyncTimeout.java:313)
   >    at __randomizedtesting.SeedInfo.seed([25079E754DA2AE86]:0)
Completed [2/2] on J0 in 86.95s, 1 test, 2 errors <<< FAILURES!
```
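One possible direction - sketched here under the assumption that the leaked `RxIoScheduler` evictor belongs to RxJava 1.x's standard schedulers; the class and approach are hypothetical, not the plugin's actual code - is a Closeable wrapper that shuts those schedulers down when the test finishes:

```java
import java.io.Closeable;

import rx.schedulers.Schedulers;

// Hypothetical wrapper: makes the (non-Closeable) Azure client usable in a
// try-with-resources block and releases the shared RxJava 1.x schedulers,
// whose io() evictor is the "RxIoScheduler-1" thread leaked in the test above.
public class CloseableAzureArmClient implements Closeable {
    // The real Azure management client would be held here.

    @Override
    public void close() {
        Schedulers.shutdown(); // stops the standard RxJava 1.x schedulers
    }
}
```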
tbrooks8 referenced this issue Oct 11, 2018
tbrooks8 referenced this issue Aug 22, 2019
This issue was closed.