[SPARK-7081] Faster sort-based shuffle path using binary processing cache-aware sort #5868
Closed
Changes from 18 commits (96 commits total)
Commits (all 96 authored by JoshRosen):

81d52c5  WIP on UnsafeSorter
abf7bfe  Add basic test case.
57a4ea0  Make initialSize configurable in UnsafeSorter
e900152  Add test for empty iterator in UnsafeSorter
767d3ca  Fix invalid range in UnsafeSorter.
3db12de  Minor simplification and sanity checks in UnsafeSorter
4d2f5e1  WIP
8e3ec20  Begin code cleanup.
253f13e  More cleanup
9c6cf58  Refactor to use DiskBlockObjectWriter.
e267cee  Fix compilation of UnsafeSorterSuite
e2d96ca  Expand serializer API and use new function to help control when new U…
d3cc310  Flag that SparkSqlSerializer2 supports relocation
87e721b  Renaming and comments
0748458  Port UnsafeShuffleWriter to Java.
026b497  Re-use a buffer in UnsafeShuffleWriter
1433b42  Store record length as int instead of long.
240864c  Remove PrefixComputer and require prefix to be specified as part of i…
bfc12d3  Add tests for serializer relocation property.
b8a09fe  Back out accidental log4j.properties change
c2fca17  Small refactoring of SerializerPropertiesSuite to enable test re-use:
f17fa8f  Add missing newline
8958584  Fix bug in calculating free space in current page.
595923a  Remove some unused variables.
5e100b2  Super-messy WIP on external sort
2776aca  First passing test for ExternalSorter.
f156a8f  Hacky metrics integration; refactor some interfaces.
3490512  Misc. cleanup
3aeaff7  More refactoring and cleanup; begin cleaning iterator interfaces
7ee918e  Re-order imports in tests
69232fd  Enable compressible address encoding for off-heap mode.
57f1ec0  WIP towards packed record pointers for use in optimized shuffle sort.
f480fb2  WIP in mega-refactoring towards shuffle-specific sort.
133c8c9  WIP towards testing UnsafeShuffleWriter.
4f70141  Fix merging; now passes UnsafeShuffleSuite tests.
aaea17b  Add comments to UnsafeShuffleSpillWriter.
b674412  Merge remote-tracking branch 'origin/master' into unsafe-sort
11feeb6  Update TODOs related to shuffle write metrics.
8a6fe52  Rename UnsafeShuffleSpillWriter to UnsafeShuffleExternalSorter
cfe0ec4  Address a number of minor review comments:
e67f1ea  Remove upper type bound in ShuffleWriter interface.
5e8cf75  More minor cleanup
1ce1300  More minor cleanup
b95e642  Refactor and document logic that decides when to spill.
9883e30  Merge remote-tracking branch 'origin/master' into unsafe-sort
722849b  Add workaround for transferTo() bug in merging code; refactor tests.
7cd013b  Begin refactoring to enable proper tests for spilling.
9b7ebed  More defensive programming RE: cleaning up spill files and memory aft…
e8718dd  Merge remote-tracking branch 'origin/master' into unsafe-sort
1929a74  Update to reflect upstream ShuffleBlockManager -> ShuffleBlockResolve…
01afc74  Actually read data in UnsafeShuffleWriterSuite
8f5061a  Strengthen assertion to check partitioning
67d25ba  Update Exchange operator's copying logic to account for new shuffle m…
fd4bb9e  Use own ByteBufferOutputStream rather than Kryo's
9d1ee7c  Fix MiMa excludes for ShuffleWriter change
fcd9a3c  Add notes + tests for maximum record / page sizes.
27b18b0  That for inserting records AT the max record size.
4a01c45  Remove unnecessary log message
f780fb1  Add test demonstrating which compression codecs support concatenation.
b57c17f  Disable some overly-verbose logs that rendered DEBUG useless.
1ef56c7  Revise compression codec support in merger; test cross product of con…
b3b1924  Properly implement close() and flush() in DummySerializerInstance.
0d4d199  Bump up shuffle.memoryFraction to make tests pass.
ec6d626  Add notes on maximum # of supported shuffle partitions.
ae538dc  Document UnsafeShuffleManager.
ea4f85f  Roll back an unnecessary change in Spillable.
1e3ad52  Delete unused ByteBufferOutputStream class.
39434f9  Avoid integer multiplication overflow in getMemoryUsage (thanks FindB…
e1855e5  Fix a handful of misc. IntelliJ inspections
7c953f9  Add test that covers UnsafeShuffleSortDataFormat.swap().
8531286  Add tests that automatically trigger spills.
69d5899  Remove some unnecessary override vals
d4e6d89  Update to bit shifting constants
4f0b770  Attempt to implement proper shuffle write metrics.
e58a6b4  Add more tests for PackedRecordPointer encoding.
e995d1a  Introduce MAX_SHUFFLE_OUTPUT_PARTITIONS.
56781a1  Rename UnsafeShuffleSorter to UnsafeShuffleInMemorySorter
0ad34da  Fix off-by-one in nextInt() call
85da63f  Cleanup in UnsafeShuffleSorterIterator.
fdcac08  Guard against overflow when expanding sort buffer.
2d4e4f4  Address some minor comments in UnsafeShuffleExternalSorter.
57312c9  Clarify fileBufferSize units
6276168  Remove ability to disable spilling in UnsafeShuffleExternalSorter.
4a2c785  rename 'sort buffer' to 'pointer array'
e3b8855  Cleanup in UnsafeShuffleWriter
c2ce78e  Fix a missed usage of MAX_PARTITION_ID
d5779c6  Merge remote-tracking branch 'origin/master' into unsafe-sort
5e189c6  Track time spend closing / flushing files; split TimeTrackingOutputSt…
df07699  Attempt to clarify confusing metrics update code
de40b9d  More comments to try to explain metrics code
4023fa4  Add @Private annotation to some Java classes.
51812a7  Change shuffle manager sort name to tungsten-sort
52a9981  Fix some bugs in the address packing code.
d494ffe  Fix deserialization of JavaSerializer instances.
7610f2f  Add tests for proper cleanup of shuffle data.
ef0a86e  Fix scalastyle errors
.gitignore (1 addition):

@@ -29,6 +29,7 @@ conf/*.properties
 conf/*.conf
 conf/*.xml
 conf/slaves
+core/build/py4j/
 docs/_site
 docs/api
 target/
core/src/main/java/org/apache/spark/shuffle/unsafe/UnsafeShuffleWriter.java (265 additions, 0 deletions):
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.shuffle.unsafe;

import scala.Option;
import scala.Product2;
import scala.reflect.ClassTag;
import scala.reflect.ClassTag$;

import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.LinkedList;

import com.esotericsoftware.kryo.io.ByteBufferOutputStream;

import org.apache.spark.Partitioner;
import org.apache.spark.ShuffleDependency;
import org.apache.spark.SparkEnv;
import org.apache.spark.TaskContext;
import org.apache.spark.executor.ShuffleWriteMetrics;
import org.apache.spark.scheduler.MapStatus;
import org.apache.spark.scheduler.MapStatus$;
import org.apache.spark.serializer.SerializationStream;
import org.apache.spark.serializer.Serializer;
import org.apache.spark.serializer.SerializerInstance;
import org.apache.spark.shuffle.IndexShuffleBlockManager;
import org.apache.spark.shuffle.ShuffleWriter;
import org.apache.spark.storage.BlockManager;
import org.apache.spark.storage.BlockObjectWriter;
import org.apache.spark.storage.ShuffleBlockId;
import org.apache.spark.unsafe.PlatformDependent;
import org.apache.spark.unsafe.memory.MemoryBlock;
import org.apache.spark.unsafe.memory.TaskMemoryManager;
import org.apache.spark.unsafe.sort.UnsafeSorter;
import static org.apache.spark.unsafe.sort.UnsafeSorter.*;

// IntelliJ gets confused and claims that this class should be abstract, but this actually compiles
public class UnsafeShuffleWriter<K, V> implements ShuffleWriter<K, V> {

  private static final int PAGE_SIZE = 1024 * 1024;  // TODO: tune this
  private static final int SER_BUFFER_SIZE = 1024 * 1024;  // TODO: tune this
  private static final ClassTag<Object> OBJECT_CLASS_TAG = ClassTag$.MODULE$.Object();

  private final IndexShuffleBlockManager shuffleBlockManager;
  private final BlockManager blockManager = SparkEnv.get().blockManager();
  private final int shuffleId;
  private final int mapId;
  private final TaskMemoryManager memoryManager;
  private final SerializerInstance serializer;
  private final Partitioner partitioner;
  private final ShuffleWriteMetrics writeMetrics;
  private final LinkedList<MemoryBlock> allocatedPages = new LinkedList<MemoryBlock>();
  private final int fileBufferSize;
  private MapStatus mapStatus = null;

  private MemoryBlock currentPage = null;
  // Initialized to PAGE_SIZE so that the first insert forces allocation of a fresh page.
  private long currentPagePosition = PAGE_SIZE;

  /**
   * Are we in the process of stopping? Because map tasks can call stop() with success = true
   * and then call stop() with success = false if they get an exception, we want to make sure
   * we don't try deleting files, etc twice.
   */
  private boolean stopping = false;

  public UnsafeShuffleWriter(
      IndexShuffleBlockManager shuffleBlockManager,
      UnsafeShuffleHandle<K, V> handle,
      int mapId,
      TaskContext context) {
    this.shuffleBlockManager = shuffleBlockManager;
    this.mapId = mapId;
    this.memoryManager = context.taskMemoryManager();
    final ShuffleDependency<K, V, V> dep = handle.dependency();
    this.shuffleId = dep.shuffleId();
    this.serializer = Serializer.getSerializer(dep.serializer()).newInstance();
    this.partitioner = dep.partitioner();
    this.writeMetrics = new ShuffleWriteMetrics();
    context.taskMetrics().shuffleWriteMetrics_$eq(Option.apply(writeMetrics));
    this.fileBufferSize =
      // Use getSizeAsKb (not bytes) to maintain backwards compatibility for units
      (int) SparkEnv.get().conf().getSizeAsKb("spark.shuffle.file.buffer", "32k") * 1024;
  }

  public void write(scala.collection.Iterator<Product2<K, V>> records) {
    try {
      final long[] partitionLengths = writeSortedRecordsToFile(sortRecords(records));
      shuffleBlockManager.writeIndexFile(shuffleId, mapId, partitionLengths);
      mapStatus = MapStatus$.MODULE$.apply(blockManager.shuffleServerId(), partitionLengths);
    } catch (Exception e) {
      PlatformDependent.throwException(e);
    }
  }

  private void ensureSpaceInDataPage(long requiredSpace) throws Exception {
    if (requiredSpace > PAGE_SIZE) {
      // TODO: throw a more specific exception?
      throw new Exception("Required space " + requiredSpace + " is greater than page size (" +
        PAGE_SIZE + ")");
    } else if (requiredSpace > (PAGE_SIZE - currentPagePosition)) {
      currentPage = memoryManager.allocatePage(PAGE_SIZE);
      currentPagePosition = currentPage.getBaseOffset();
      allocatedPages.add(currentPage);
    }
  }

  private void freeMemory() {
    final Iterator<MemoryBlock> iter = allocatedPages.iterator();
    while (iter.hasNext()) {
      memoryManager.freePage(iter.next());
      iter.remove();
    }
  }

  private Iterator<RecordPointerAndKeyPrefix> sortRecords(
      scala.collection.Iterator<? extends Product2<K, V>> records) throws Exception {
    final UnsafeSorter sorter = new UnsafeSorter(
      memoryManager,
      RECORD_COMPARATOR,
      PREFIX_COMPARATOR,
      4096  // Initial size (TODO: tune this!)
    );

    final byte[] serArray = new byte[SER_BUFFER_SIZE];
    final ByteBuffer serByteBuffer = ByteBuffer.wrap(serArray);
    // TODO: we should not depend on this class from Kryo; copy its source or find an alternative
    final SerializationStream serOutputStream =
      serializer.serializeStream(new ByteBufferOutputStream(serByteBuffer));

    while (records.hasNext()) {
      final Product2<K, V> record = records.next();
      final K key = record._1();
      final int partitionId = partitioner.getPartition(key);
      serByteBuffer.position(0);
      serOutputStream.writeKey(key, OBJECT_CLASS_TAG);
      serOutputStream.writeValue(record._2(), OBJECT_CLASS_TAG);
      serOutputStream.flush();

      final int serializedRecordSize = serByteBuffer.position();
      assert (serializedRecordSize > 0);
      // Need 4 bytes to store the record length.
      ensureSpaceInDataPage(serializedRecordSize + 4);

      final long recordAddress =
        memoryManager.encodePageNumberAndOffset(currentPage, currentPagePosition);
      final Object baseObject = currentPage.getBaseObject();
      PlatformDependent.UNSAFE.putInt(baseObject, currentPagePosition, serializedRecordSize);
      currentPagePosition += 4;
      PlatformDependent.copyMemory(
        serArray,
        PlatformDependent.BYTE_ARRAY_OFFSET,
        baseObject,
        currentPagePosition,
        serializedRecordSize);
      currentPagePosition += serializedRecordSize;

      sorter.insertRecord(recordAddress, partitionId);
    }

    return sorter.getSortedIterator();
  }

  private long[] writeSortedRecordsToFile(
      Iterator<RecordPointerAndKeyPrefix> sortedRecords) throws IOException {
    final File outputFile = shuffleBlockManager.getDataFile(shuffleId, mapId);
    final ShuffleBlockId blockId =
      new ShuffleBlockId(shuffleId, mapId, IndexShuffleBlockManager.NOOP_REDUCE_ID());
    final long[] partitionLengths = new long[partitioner.numPartitions()];

    int currentPartition = -1;
    BlockObjectWriter writer = null;

    final byte[] arr = new byte[SER_BUFFER_SIZE];
    while (sortedRecords.hasNext()) {
      final RecordPointerAndKeyPrefix recordPointer = sortedRecords.next();
      final int partition = (int) recordPointer.keyPrefix;
      assert (partition >= currentPartition);
      if (partition != currentPartition) {
        // Switch to the new partition
        if (currentPartition != -1) {
          writer.commitAndClose();
          partitionLengths[currentPartition] = writer.fileSegment().length();
        }
        currentPartition = partition;
        writer =
          blockManager.getDiskWriter(blockId, outputFile, serializer, fileBufferSize, writeMetrics);
      }

      final Object baseObject = memoryManager.getPage(recordPointer.recordPointer);
      final long baseOffset = memoryManager.getOffsetInPage(recordPointer.recordPointer);
      // The length header was written as a 4-byte int, so read it back with getInt.
      final int recordLength = PlatformDependent.UNSAFE.getInt(baseObject, baseOffset);
      PlatformDependent.copyMemory(
        baseObject,
        baseOffset + 4,
        arr,
        PlatformDependent.BYTE_ARRAY_OFFSET,
        recordLength);
      assert (writer != null);  // To suppress an IntelliJ warning
      writer.write(arr, 0, recordLength);
      // TODO: add a test that detects whether we leave this call out:
      writer.recordWritten();
    }

    if (writer != null) {
      writer.commitAndClose();
      partitionLengths[currentPartition] = writer.fileSegment().length();
    }

    return partitionLengths;
  }

  @Override
  public Option<MapStatus> stop(boolean success) {
    try {
      if (stopping) {
        return Option.apply(null);
      } else {
        stopping = true;
        freeMemory();
        if (success) {
          return Option.apply(mapStatus);
        } else {
          // The map task failed, so delete our output data.
          shuffleBlockManager.removeDataByMap(shuffleId, mapId);
          return Option.apply(null);
        }
      }
    } finally {
      freeMemory();
      // TODO: increment the shuffle write time metrics
    }
  }

  private static final RecordComparator RECORD_COMPARATOR = new RecordComparator() {
    @Override
    public int compare(
        Object leftBaseObject, long leftBaseOffset, Object rightBaseObject, long rightBaseOffset) {
      return 0;
    }
  };

  private static final PrefixComparator PREFIX_COMPARATOR = new PrefixComparator() {
    @Override
    public int compare(long prefix1, long prefix2) {
      return (int) (prefix1 - prefix2);
    }
  };
}
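The write path above stores each record in a memory page as a 4-byte length header followed by the serialized bytes, and then sorts only compact (record pointer, partition-id prefix) pairs; the record bytes themselves never move. The following is a minimal, self-contained sketch of that layout, using an on-heap ByteBuffer in place of Spark's TaskMemoryManager pages; the class and method names here are illustrative, not Spark's API.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the record layout used by UnsafeShuffleWriter: each
// record lives in a page as [4-byte length][serialized bytes], and only small
// (pointer, partition-id prefix) pairs are sorted -- the payload never moves.
public class RecordLayoutSketch {

  static List<String> readInPartitionOrder() {
    ByteBuffer page = ByteBuffer.allocate(1024);   // stand-in for one memory page
    List<long[]> pointers = new ArrayList<>();     // {offset-in-page, partitionId}

    int[] partitions = {2, 0, 1};
    String[] payloads = {"rec-a", "rec-b", "rec-c"};
    for (int i = 0; i < partitions.length; i++) {
      byte[] bytes = payloads[i].getBytes(StandardCharsets.UTF_8);
      int offset = page.position();
      page.putInt(bytes.length);                   // 4-byte length header
      page.put(bytes);                             // serialized record bytes
      pointers.add(new long[] {offset, partitions[i]});
    }

    // Sort the pointer array by partition id (the key prefix). This is the only
    // data the sort touches, which is what makes the sort cache-friendly.
    pointers.sort((a, b) -> Long.compare(a[1], b[1]));

    // Read records back in partition order by chasing the sorted pointers.
    List<String> result = new ArrayList<>();
    for (long[] p : pointers) {
      int off = (int) p[0];
      int len = page.getInt(off);
      byte[] out = new byte[len];
      page.position(off + 4);
      page.get(out);
      result.add(p[1] + ":" + new String(out, StandardCharsets.UTF_8));
    }
    return result;
  }

  public static void main(String[] args) {
    readInPartitionOrder().forEach(System.out::println);
  }
}
```

Running the sketch prints the three records grouped by partition (0, then 1, then 2), even though they were inserted out of order.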
core/src/main/java/org/apache/spark/unsafe/sort/UnsafeSortDataFormat.java (80 additions, 0 deletions):
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.unsafe.sort;

import static org.apache.spark.unsafe.sort.UnsafeSorter.RecordPointerAndKeyPrefix;
import org.apache.spark.util.collection.SortDataFormat;

/**
 * Supports sorting an array of (record pointer, key prefix) pairs. Used in {@link UnsafeSorter}.
 *
 * Within each long[] buffer, position {@code 2 * i} holds a pointer to the record at
 * index {@code i}, while position {@code 2 * i + 1} in the array holds an 8-byte key prefix.
 */
final class UnsafeSortDataFormat extends SortDataFormat<RecordPointerAndKeyPrefix, long[]> {

  public static final UnsafeSortDataFormat INSTANCE = new UnsafeSortDataFormat();

  private UnsafeSortDataFormat() { }

  @Override
  public RecordPointerAndKeyPrefix getKey(long[] data, int pos) {
    // Since we re-use keys, this method shouldn't be called.
    throw new UnsupportedOperationException();
  }

  @Override
  public RecordPointerAndKeyPrefix newKey() {
    return new RecordPointerAndKeyPrefix();
  }

  @Override
  public RecordPointerAndKeyPrefix getKey(long[] data, int pos, RecordPointerAndKeyPrefix reuse) {
    reuse.recordPointer = data[pos * 2];
    reuse.keyPrefix = data[pos * 2 + 1];
    return reuse;
  }

  @Override
  public void swap(long[] data, int pos0, int pos1) {
    long tempPointer = data[pos0 * 2];
    long tempKeyPrefix = data[pos0 * 2 + 1];
    data[pos0 * 2] = data[pos1 * 2];
    data[pos0 * 2 + 1] = data[pos1 * 2 + 1];
    data[pos1 * 2] = tempPointer;
    data[pos1 * 2 + 1] = tempKeyPrefix;
  }

  @Override
  public void copyElement(long[] src, int srcPos, long[] dst, int dstPos) {
    dst[dstPos * 2] = src[srcPos * 2];
    dst[dstPos * 2 + 1] = src[srcPos * 2 + 1];
  }

  @Override
  public void copyRange(long[] src, int srcPos, long[] dst, int dstPos, int length) {
    System.arraycopy(src, srcPos * 2, dst, dstPos * 2, length * 2);
  }

  @Override
  public long[] allocate(int length) {
    assert (length < Integer.MAX_VALUE / 2) : "Length " + length + " is too large";
    return new long[length * 2];
  }
}
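As a quick standalone illustration of the paired-long layout this data format assumes, the sketch below shows how a two-long "element" swap works; the swap logic mirrors the swap() method in the diff, but the class and method names are mine, not Spark's.

```java
import java.util.Arrays;

// Standalone illustration of the paired-long layout used by UnsafeSortDataFormat:
// slot 2*i holds record i's pointer, slot 2*i+1 holds its 8-byte key prefix, so
// swapping logical elements i and j moves two longs each.
public class PairedLongLayout {

  static void swap(long[] data, int pos0, int pos1) {
    long tempPointer = data[pos0 * 2];
    long tempKeyPrefix = data[pos0 * 2 + 1];
    data[pos0 * 2] = data[pos1 * 2];
    data[pos0 * 2 + 1] = data[pos1 * 2 + 1];
    data[pos1 * 2] = tempPointer;
    data[pos1 * 2 + 1] = tempKeyPrefix;
  }

  static long[] demo() {
    // Two logical records: (pointer=0xA, prefix=7) and (pointer=0xB, prefix=3).
    long[] data = {0xA, 7, 0xB, 3};
    swap(data, 0, 1);  // record (0xB, 3) now comes first
    return data;
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(demo()));
  }
}
```

Packing pointer and prefix side by side keeps each comparison's inputs in the same cache line, which is the point of the "cache-aware" sort in this PR's title.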
Review comment: Do you want to catch Throwable or Exception?

Reply: Catching Throwable is risky if we can't re-throw it, since that might cause us to drop an OutOfMemoryError. Instead, I think that we can use a finally block that checks a boolean flag to see whether cleanup needs to be performed. This avoids the bad practice in the current code of throwing from a finally block.
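The pattern described in that reply can be sketched as follows. This is a minimal illustration under stated assumptions, not Spark's actual code; writeRecords and cleanupResources are hypothetical names.

```java
// Minimal sketch of the cleanup pattern from the reply above: catch nothing, so
// errors such as OutOfMemoryError still propagate unmodified, and instead use a
// finally block guarded by a success flag. The finally block only cleans up --
// it never throws -- which avoids masking the original exception.
public class FinallyFlagPattern {

  static boolean cleanupRan = false;  // observable for this demo only

  static void writeRecords(boolean simulateFailure) {
    boolean success = false;
    try {
      if (simulateFailure) {
        throw new RuntimeException("simulated write failure");
      }
      success = true;
    } finally {
      if (!success) {
        cleanupResources();  // runs only on failure; must not throw
      }
    }
  }

  static void cleanupResources() {
    cleanupRan = true;  // e.g. delete spill files, free memory pages
  }

  public static void main(String[] args) {
    try {
      writeRecords(true);
    } catch (RuntimeException e) {
      System.out.println("caught: " + e.getMessage() + ", cleanupRan=" + cleanupRan);
    }
  }
}
```

On the success path the flag is set before the finally block runs, so cleanup is skipped; on the failure path the original exception propagates and cleanup happens on the way out.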