@@ -44,12 +44,12 @@
import java.io.IOException;
import java.nio.ByteBuffer;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.stream.Collectors;

import static java.nio.charset.StandardCharsets.UTF_8;

@@ -424,6 +424,12 @@ public void testWriteShouldSuccessIfLessThanParityNodesFail()
testNodeFailuresWhileWriting(1, 2);
}

@Test
public void testWriteShouldSuccessIfLessThanParityNodesFail2()
throws IOException {
testNodeFailuresWhileWriting(2, 1);
}

@Test
public void testWriteShouldSuccessIf4NodesFailed()
throws IOException {
@@ -433,7 +439,7 @@ public void testWriteShouldSuccessIf4NodesFailed()
@Test
public void testWriteShouldSuccessIfAllNodesFailed()
throws IOException {
testNodeFailuresWhileWriting(4, 1);
testNodeFailuresWhileWriting(5, 1);
}

public void testNodeFailuresWhileWriting(int numFailureToInject,
@@ -450,14 +456,28 @@ public void testNodeFailuresWhileWriting(int numFailureToInject,
out.write(inputChunks[i]);
}

List<DatanodeDetails> failedDNs = new ArrayList<>();
Map<DatanodeDetails, MockDatanodeStorage> storages =
((MockXceiverClientFactory) factoryStub).getStorages();
DatanodeDetails[] dnDetails =
storages.keySet().toArray(new DatanodeDetails[storages.size()]);
for (int i = 0; i < numFailureToInject; i++) {
failedDNs.add(dnDetails[i]);
}
List<DatanodeDetails> failedDNs =
Contributor:
How about changing MockXceiverClientFactory#storage to a LinkedHashMap instead of a HashMap?
That should maintain the ordering.
If that does not solve it, I would suggest moving this sorting logic into the getStorages method, so that any other caller also gets the storages in order.
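As a side note, the suggestion above can be illustrated with a minimal, self-contained sketch (the map contents below are made up for illustration and are not the project's actual storages): a LinkedHashMap iterates its keys in insertion order, while a HashMap gives no ordering guarantee.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderingDemo {
    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order, so switching the
        // factory's storages map from HashMap to LinkedHashMap would
        // make keySet() iteration deterministic across test runs.
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("dn-3", 3);
        linked.put("dn-1", 1);
        linked.put("dn-2", 2);
        System.out.println(linked.keySet()); // prints [dn-3, dn-1, dn-2]
    }
}
```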

Contributor Author:
Do we want to maintain the ordering? :)
With the randomness here, I believe we gain test runs that exercise parity failures as well as runs that exercise data failures. Flaky results might still be a problem; testing all possible scenarios explicitly is more tedious, which is why I chose this way of fixing it.
Though I think it is still not optimal, as it relies on the internal representation, where the index of the DN within the block group is stored in the most significant bits of the DataNode's UUID. I am open to any better idea, but if we order the nodes and always select the first n to fail in this method, that might hide some issues that a proper random selection would uncover more easily.
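The internal representation mentioned above can be sketched in a few lines (a hypothetical illustration; the index value and least-significant bits below are made up, not the project's actual encoding): a UUID is constructed from two longs, so an index stored in the most significant bits can be read back directly.

```java
import java.util.UUID;

public class UuidIndexDemo {
    public static void main(String[] args) {
        // Assumed convention: the datanode's index in the block group
        // lives in the most significant bits of its UUID.
        long index = 2L;                     // hypothetical DN index
        UUID dnId = new UUID(index, 12345L); // msb = index, lsb = arbitrary
        System.out.println(dnId.getMostSignificantBits()); // prints 2
    }
}
```

This is why the comparator in the diff can compare getMostSignificantBits() against chunk and block counts at all.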

Contributor:
As we discussed, let's just hold this JIRA until we commit HDDS-6036.
HDDS-6036 is refactored to provide more flexibility for injecting failures into specific nodes in the block group.

Contributor Author:
Yep, moving this one to draft; I will probably force-push a new version after #2910 is merged. Thank you!

storages.keySet().stream()
// we need to sort the keyset, to ensure that nodes that we
// actually write to, are selected to fail, otherwise, we won't
// get to the error handling path, and won't open a second
// block to write to, hence keylocations size assertion fails
// below.
.sorted((d1, d2) -> {
long id1 = d1.getUuid().getMostSignificantBits();
long id2 = d2.getUuid().getMostSignificantBits();
if (id1 <= numChunksToWriteAfterFailure || id1 > dataBlocks) {
return -1;
}
if (id2 <= numChunksToWriteAfterFailure || id2 > dataBlocks) {
return 1;
}
return 0;
})
.limit(numFailureToInject)
.collect(Collectors.toList());

// First let's set storage as bad
((MockXceiverClientFactory) factoryStub).setFailedStorages(failedDNs);
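The sorted/limit/collect selection used in the diff reduces to a minimal, self-contained sketch (the node names and natural-order comparator below are made up for illustration; the test uses DatanodeDetails and its custom UUID-based comparator instead):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class SelectFirstN {
    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("dn-4", "dn-1", "dn-3", "dn-2");
        int numFailureToInject = 2;
        // Same pattern as the diff: order the candidates with a comparator,
        // then take the first n as the nodes to fail.
        List<String> failed = nodes.stream()
            .sorted(Comparator.naturalOrder())
            .limit(numFailureToInject)
            .collect(Collectors.toList());
        System.out.println(failed); // prints [dn-1, dn-2]
    }
}
```

One caveat worth noting: Stream#sorted requires a comparator that obeys the Comparator contract (notably transitivity), otherwise the sort may throw at runtime.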