Merge master to upstream (#7945) (#7953)
* Shugeo strided slice zeros (#14)

* Modified strided_slice op to properly work with empty-like shapes.

* Fixed test for reduce_mean with empty-like input.

* [WIP] Last merge (#15)

* correct logsoftmax loss (#2)

* Small SameDiff listener fix (#4)

* Various fixes (#6)

* #7839 Fix for asXMatrix and tests

* #7866 EmbeddingSequenceLayer dtype fix + test

* #7856 SameDiff save/load stream methods

* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration

* EvaluationBinary 3d/4d

* More evaluation 3d/4d tests

* #7847 Evaluation empty checks

* Small test fix

* #7848 Fix median edge case

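For reference, the classic median edge case is even-length input, where the median is the mean of the two middle elements; the exact bug behind #7848 isn't described here. A minimal Java sketch of that convention (illustrative class name, not the nd4j code):

import java.util.Arrays;

public class MedianSketch {
    // Median with the even-length case handled explicitly
    static double median(double[] x) {
        double[] s = x.clone();
        Arrays.sort(s);
        int n = s.length;
        return (n % 2 == 1) ? s[n / 2] : 0.5 * (s[n / 2 - 1] + s[n / 2]);
    }

    public static void main(String[] args) {
        System.out.println(median(new double[]{1, 2, 3, 4})); // 2.5
    }
}
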
* Improve DL4J samediff layer tests

* [WIP] FastText wrapper implemented (#8)

* FastText implemented

* Some fixes

* Fix shapes for wordsNearest

* Validation of input vectors

* Fixes

* Fixed test

* Thread tagged

* Some tweaks

* setContextClassLoader for DeallocatorServiceThread

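The fix name suggests the standard pattern of propagating the caller's context class loader to a background thread, so reflective class loading inside it resolves the same classes. A minimal sketch of that pattern (illustrative, not the actual DeallocatorServiceThread code):

public class ContextLoaderSketch {
    public static void main(String[] args) {
        Thread t = new Thread(() -> System.out.println(
                Thread.currentThread().getContextClassLoader()));
        t.setDaemon(true);
        // Propagate the caller's loader so the worker sees the same classes
        t.setContextClassLoader(Thread.currentThread().getContextClassLoader());
        t.start();
    }
}
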
* Numpy format tests (#1)

* Various fixes (#11)

* #7852 SameDiff gather fix

* #7892 SameDiff placeholder to constant conversion

* #7890 validate input rank for MLN/CG init methods

* Fix broken permute shape calculation

* Permute and gather fixes

* Tests

* #7850 LogSumExp fix + test

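For reference, the numerically stable LogSumExp shifts by the maximum before exponentiating: logsumexp(x) = max(x) + log(sum_i exp(x_i - max(x))). A standalone sketch (not the nd4j op itself):

public class LogSumExpSketch {
    // Stable logsumexp: shifting by the max prevents exp() overflow
    static double logSumExp(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);
        double sum = 0.0;
        for (double v : x) sum += Math.exp(v - max);
        return max + Math.log(sum);
    }

    public static void main(String[] args) {
        // Naive log(sum(exp(x))) would overflow here; stable form gives ~1000.693
        System.out.println(logSumExp(new double[]{1000.0, 1000.0}));
    }
}
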
* Handful of test fixes

* Empty arrays with non-scalar shapes (#10)

* minor rearrangements for lambdas

* empty tensors with non-scalar shapes

* numpy empty tensors with non-scalar shapes

* few more empty tweaks

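These changes allow empty tensors whose shape carries zero extents along some dimensions (e.g. shape [0, 3]) rather than only scalar-shaped empties. A usage sketch against the nd4j factory API (the exact overload used here is an assumption):

import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class EmptyShapeSketch {
    public static void main(String[] args) {
        // Assumed overload: create(DataType, long... shape) with a zero extent
        INDArray empty = Nd4j.create(DataType.FLOAT, 0, 3);
        System.out.println(empty.isEmpty());      // true: zero elements
        System.out.println(empty.shape().length); // 2: rank is preserved
    }
}
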
* Small fixes

* conv3d signature update

* micro fix in batchnorm mkldnn

* Import fixes

* Fix

* MKL-DNN update

* Small fill fix

* fill with empty input + test

* Fixes

* Small error improvement

* Fix

* one special test

* couple of fixes for lstm

* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone

* Fixes

* FP16

* Unsigned

* BFloat16

* Fill op - empty tweaks

* - couple of fixes for empty arrays construction
- stack updated

* strided slice fix

* one transform test

* provide method for reducing shapeInfo in case the input array is empty

* Fixed reduceAlongDimensions to use empty input properly.

* couple of broadcast tests

* couple more broadcast tests + tweak to make them pass

* add non-empty check to methods producing sub-arrays

* Fixed reshapeC with zeros in shape.

* complete empty check in reduce_... legacy ops

* Concat and cumsum/prod

* Tweak to empty shape inference on import

* add empty check to the rest of reduce legacy ops

* one more test

* correct typo in evalReduceShapeInfoEmpty

* Added tests for reduce_* ops with zero shapes.

* few more tests for empty reductions

* Fixed strided_slice op with empty case and tests.

* one more empty reduction test

* Fixed strided_slice test.

* add empty check to NDArray::reshapei

* infOrMax

* empty min/max with infinity tests

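These follow the standard reduction identities: the maximum over an empty set is -infinity and the minimum is +infinity, so reducing an empty array yields the identity element instead of an error. A tiny illustration:

public class EmptyReduceSketch {
    public static void main(String[] args) {
        double max = Double.NEGATIVE_INFINITY; // identity for max
        double min = Double.POSITIVE_INFINITY; // identity for min
        for (double v : new double[0]) {       // empty input: body never runs
            max = Math.max(max, v);
            min = Math.min(min, v);
        }
        System.out.println(max + " " + min);   // -Infinity Infinity
    }
}
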
* made unstack work correctly with empty arrays

* few IndexReduce tests + tweaks for empty shapes

* add test for empty concat

* few tests fixed

* Validation fix for reductions on empty shapes

* Reverse fix

* Reduction shape calc fixes

* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs

* Range fix

* - NDArray constructor updated for scalars/empty arrays
- few tests fixed

* More fixes

* Empty creator fixes

* concat fix

* concat fix

* TF import tests: allow 'both all NaN' and 'both all inf' to pass

* Slice, zero fraction, and reshape fixes

* transpose, gather

* Zero fraction

* scalar cast fix

* Empty reduction axis support

* few more tests fixed

* Fixed input checks for concat op to conform with TF, and tests.

* few tests fixed

* matmul scalar shape fix

* Fixed check of data type and scalarity with concat to allow non-empty scalars with vector concats.

* broadcast bool fix

* few more tests

* few more tests

* correct evalReduceShapeInfoEmpty

* argmax/argmin + tests

* one more empty edge case + one more test

* argmax/argmin/realdiv_bp tweaks

* empty reshape test + fix

* Helper fixes

* Small fixes

* Gather test fix

* Gather test fix

* Small fixes

* reduce scalar zero values

* scalar mean workaround

* Remove debug code

* along dim mean workaround

* one more test

* - equalsTo() tweak for empty arrays
- one more test

* broadcast tweaks

* [WIP] Fixing outstanding issues for NLP (#9)

* Avoid using uninitialized objects

* Test fixed.

* Redundant method avoided for models like FastText

* KMeans++ implementation

* KMeans++ implementation

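k-means++ seeding picks the first centroid uniformly at random, then draws each subsequent centroid with probability proportional to its squared distance from the nearest centroid chosen so far. A compact sketch of the seeding step (illustrative, not the deeplearning4j implementation):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class KMeansPlusPlusSketch {
    static List<double[]> seed(double[][] points, int k, Random rng) {
        List<double[]> centers = new ArrayList<>();
        centers.add(points[rng.nextInt(points.length)]);
        while (centers.size() < k) {
            // d2[i] = squared distance from point i to its nearest center
            double[] d2 = new double[points.length];
            double total = 0;
            for (int i = 0; i < points.length; i++) {
                double best = Double.POSITIVE_INFINITY;
                for (double[] c : centers) {
                    double dist = 0;
                    for (int j = 0; j < c.length; j++) {
                        double diff = points[i][j] - c[j];
                        dist += diff * diff;
                    }
                    best = Math.min(best, dist);
                }
                d2[i] = best;
                total += best;
            }
            // Sample an index with probability proportional to d2
            double r = rng.nextDouble() * total;
            int idx = 0;
            double acc = d2[0];
            while (acc < r && idx < points.length - 1) acc += d2[++idx];
            centers.add(points[idx]);
        }
        return centers;
    }

    public static void main(String[] args) {
        double[][] pts = {{0, 0}, {0, 1}, {10, 10}, {10, 11}};
        System.out.println(seed(pts, 2, new Random(42)).size()); // 2
    }
}
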
* Disable parallel execution

* KMeans++

* Tests

* Dev branch merge (#16)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Fix some issues on master (#17)

* Fix DataVec test issue

* Fix issue with dl4j SameDiff output layer

* Dtype fix for lambda layers

* #7912 BertIterator dtype fix (use float32 not global default)

* [WIP] Next set of CUDA stuff (#7)

New CUDA implementations and improvements

* bad file

* Dev branch master merge (#23)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Compatibility of deserialization (#18)

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* SameDiff: add activation gradient checking support for debugging (#19)

* SameDiff gradient checker: first pass on activation gradient checks

* Fixes + tests for activation gradient checking

* Javadoc

* [WIP] Some nd4j data type corrections (#20)

* Adjust data type

* Set correct Data type.

* Size of proper data type.

* fix averaged CPU load (#22)

* SameDiff ops, TF import and fixes (#24)

* CheckNumerics tests + fixes + misc fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fake quant

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FakeQuantWithMinMaxArgs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* CheckNumerics fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix libnd4j ALL_INTS and ALL_FLOATS declaration (uint and bfloat types)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Exception tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for out of scope stack allocated var use

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignores

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignore for known failing test (already logged issue)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Merge upstream to fork (#25)

* Add thousand-separator commas to TotalParams (#7915)

* Add thousand-separator commas to TotalParams

The number of parameters can be quite large, and the summary printout is easier to read when the TotalParams column and the totals at the bottom include thousand-separator commas.

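In Java this is essentially a formatting change: the %,d conversion inserts locale-appropriate grouping separators. A one-line illustration (not the summary code itself):

public class CommaFormatSketch {
    public static void main(String[] args) {
        long totalParams = 18_080_000L; // example value
        // Prints 18,080,000 under an English locale
        System.out.println(String.format("%,d", totalParams));
    }
}
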
* Add thousand-separator commas to MultiLayerNetwork

Corresponding change to MultiLayerNetwork

Signed-off-by: Jxtps Jxtps <jxtps435@gmail.com>

* Update contributing and issue/PR templates (#7934)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix link to AdaDelta paper (#7942)

Fix link to AdaDelta paper hosted on matthewzeiler.com

Signed-off-by: Jxtps

* Fixes, and ignores for known/logged failing issues (#7943)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff + DL4J/SameDiff: Multiple fixes (#28)

* #7919 HDF5 attribute buffer length fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7909 Arbiter constructor exception ux improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7925 RNN output layer length checks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7939 Add listener for validating inputs are not incorrectly modified

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7939 Integrate NonInplaceValidationListener into tests

* #7844 DL4J SameDiff fixes for variable minibatch size

* DL4J SameDiff fixes - ensure gradient for input placeholder is available

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweaks to ExternalErrorsFunction - use placeholders, make more robust

* Another fix

* More fixes

* More SameDiff/DL4J fixes

* Scope out scalar array creation in BaseScalarOp

* Remove debug code

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] Final dev branch merge (#29)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Compatibility of deserialization (#18)

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* SameDiff: add activation gradient checking support for debugging (#19)

* SameDiff gradient checker: first pass on activation gradient checks

* Fixes + tests for activation gradient checking

* Javadoc

* [WIP] Some nd4j data type corrections (#20)

* Adjust data type

* Set correct Data type.

* Size of proper data type.

* fix averaged CPU load (#22)

* [WIP] Multiple dataset iterators (#27)

* Splitting dataset into an arbitrary number of parts

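The tests added later in this diff show the resulting API: DataSetIteratorSplitter wraps a backing iterator and returns one iterator per split. A usage sketch based on those tests (the import package is an assumption; the constructor and getIterators() are taken from the test code below):

import java.util.List;
import org.deeplearning4j.datasets.iterator.DataSetIteratorSplitter; // assumed package
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class SplitterUsageSketch {
    static void example(DataSetIterator backing) {
        // 50/30/20 split over 1000 examples, as in testSplitter_4 below
        DataSetIteratorSplitter splitter =
                new DataSetIteratorSplitter(backing, 1000, new double[]{0.5, 0.3, 0.2});
        List<DataSetIterator> parts = splitter.getIterators();
        DataSetIterator train = parts.get(0);
        DataSetIterator test = parts.get(1);
        DataSetIterator validation = parts.get(2);
    }
}
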
* Fixes

* Multiple splits of iterator

* Test

* Test

* Some fixes

* signature change

* one more tweak

Signed-off-by: raver119 <raver119@gmail.com>

* one more test for sequential use of DataSetIteratorSplitter

Signed-off-by: raver119 <raver119@gmail.com>

* Fixes

* Fixes

* one more test for Alexander

Signed-off-by: raver119 <raver119@gmail.com>

* Some fixes

* Some fixes

* one more test for Alexander

Signed-off-by: raver119 <raver119@gmail.com>

* minor test fix

Signed-off-by: raver119 <raver119@gmail.com>

* Some fixes

* Some fixes

* couple of assertions tweaked

Signed-off-by: raver119 <raver119@gmail.com>

* MDS splitter test :/

Signed-off-by: raver119 <raver119@gmail.com>

* Minor refactoring

* Multi dataset

* Some fixes

* More tests

* Small number of test fixes/improvements (failures on CI) (#31)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] More CUDA stuff (#26)

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* LRN BP CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* less memory

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed bug with crop_and_resize op helper.

* get rid of unnecessary index-calculation function

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed sort with nth_element cuda-based helper.

* Refactored nth_element.

* Refactored nth_element op and tests.

* Modified usage of dim array with sortTad routine.

* Refactored main routine of helper for non_max_image_suppression op.

* non_max_image_suppression op helper with cuda kernel implementation. Initial revision.

* fix vol2col cuda kernel

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* topK concept

Signed-off-by: raver119 <raver119@gmail.com>

* unsorted topK with scanWidth of 1

Signed-off-by: raver119 <raver119@gmail.com>

* correct vol2col tests

* sorted/unsorted topK

Signed-off-by: raver119 <raver119@gmail.com>

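For reference, top-k keeps the k largest values; a bounded min-heap does this in O(n log k) without sorting the whole input. A CPU reference sketch (unrelated to the CUDA kernels above):

import java.util.PriorityQueue;

public class TopKSketch {
    // Keep the k largest values using a min-heap capped at size k
    static double[] topK(double[] values, int k) {
        PriorityQueue<Double> heap = new PriorityQueue<>(k);
        for (double v : values) {
            if (heap.size() < k) heap.offer(v);
            else if (v > heap.peek()) { heap.poll(); heap.offer(v); }
        }
        double[] out = new double[heap.size()];
        for (int i = 0; i < out.length; i++) out[i] = heap.poll(); // ascending
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(
                topK(new double[]{5, 1, 9, 3, 7}, 2))); // [7.0, 9.0]
    }
}
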
* implemented and fixed col2im/col2vol

* Corrected usage of input/output flags with reverse op.

* dup is const now

Signed-off-by: raver119 <raver119@gmail.com>

* percentile op

Signed-off-by: raver119 <raver119@gmail.com>

* group tests for maxpool2d

Signed-off-by: Yurii <yurii@skymind.io>

* special test for george

Signed-off-by: raver119 <raver119@gmail.com>

* less threads for sortTad

Signed-off-by: raver119 <raver119@gmail.com>

* provide conv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* remove author in sort tad kernel code

Signed-off-by: Yurii <yurii@skymind.io>

* provide depthwise_conv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* - max_pooling_with_argmax
- null check for special use

Signed-off-by: raver119 <raver119@gmail.com>

* dts cuda

Signed-off-by: raver119 <raver119@gmail.com>

* provide sconv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* std cuda

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored non_max_suppression op to conform to the TF implementation.

* Improved suppression helper.

* provide pooling3d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* minor lstm rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* more of minor lstm rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* (bi)dynamic_rnn

Signed-off-by: raver119 <raver119@gmail.com>

* templates init order

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored non_max_suppression op.

* Added cuda kernel for non_max_suppression.

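Non-max suppression greedily keeps the highest-scoring box and drops any later box whose IoU with a kept box exceeds a threshold. A reference sketch of the selection loop, with boxes as [y1, x1, y2, x2] already sorted by descending score (simplified relative to the op):

import java.util.ArrayList;
import java.util.List;

public class NmsSketch {
    static double iou(double[] a, double[] b) {
        double y1 = Math.max(a[0], b[0]), x1 = Math.max(a[1], b[1]);
        double y2 = Math.min(a[2], b[2]), x2 = Math.min(a[3], b[3]);
        double inter = Math.max(0, y2 - y1) * Math.max(0, x2 - x1);
        double areaA = (a[2] - a[0]) * (a[3] - a[1]);
        double areaB = (b[2] - b[0]) * (b[3] - b[1]);
        return inter / (areaA + areaB - inter);
    }

    // boxes must be pre-sorted by descending score
    static List<Integer> nms(double[][] boxes, double iouThreshold, int maxOut) {
        List<Integer> keep = new ArrayList<>();
        for (int i = 0; i < boxes.length && keep.size() < maxOut; i++) {
            boolean suppressed = false;
            for (int j : keep)
                if (iou(boxes[i], boxes[j]) > iouThreshold) { suppressed = true; break; }
            if (!suppressed) keep.add(i);
        }
        return keep;
    }
}
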
* CPU sort by key/value

Signed-off-by: raver119 <raver119@gmail.com>

* CPU sort TAD by key/value

Signed-off-by: raver119 <raver119@gmail.com>

* CPU sort TAD by key/value tests

Signed-off-by: raver119 <raver119@gmail.com>

* Eliminate compiler error with cuda implementation.

* - repaired gradCheck in cuda
- provide conv2d_bp for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* missed signature

Signed-off-by: raver119 <raver119@gmail.com>

* provide depthwise_conv2d_bp for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of lup helper with cuda kernel. Initial commit.

* further work on backprops for convolutions

Signed-off-by: Yurii <yurii@skymind.io>

* CUDA linear sort by key/val

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA tad sort by key/val

Signed-off-by: raver119 <raver119@gmail.com>

* start providing backprop for pooling2d/3d

Signed-off-by: Yurii <yurii@skymind.io>

* Added atomicAdd for bool datatype.

* dynamic partition concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic partition concept

Signed-off-by: raver119 <raver119@gmail.com>

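Per the TensorFlow semantics this op mirrors, dynamic_partition routes element i of the input to output partitions[i], producing numPartitions outputs. A scalar-case reference sketch:

import java.util.ArrayList;
import java.util.List;

public class DynamicPartitionSketch {
    static List<List<Double>> partition(double[] data, int[] partitions, int numPartitions) {
        List<List<Double>> out = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) out.add(new ArrayList<>());
        for (int i = 0; i < data.length; i++) out.get(partitions[i]).add(data[i]);
        return out;
    }

    public static void main(String[] args) {
        // data [10,20,30,40], partitions [0,1,0,1] -> [[10.0, 30.0], [20.0, 40.0]]
        System.out.println(partition(new double[]{10, 20, 30, 40}, new int[]{0, 1, 0, 1}, 2));
    }
}
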
* dynamic partition scalar CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* important comment

Signed-off-by: raver119 <raver119@gmail.com>

* fix pooling2d/3d backprop helpers

Signed-off-by: Yurii <yurii@skymind.io>

* Added non-linear test with dynamic_partition.

* Improved test for dynamic_partition.

* dynamic_partition TAD concept

Signed-off-by: raver119 <raver119@gmail.com>

* - dynamic_partition TAD CUDA impl
- dynamic_partition TAD CPU fix

Signed-off-by: raver119 <raver119@gmail.com>

* - rewrite cpu code for upsampling2d/3d
- write cuda code for upsampling2d/3d

Signed-off-by: Yurii <yurii@skymind.io>

* dynamic_stitch CUDA vector case

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic_stitch CUDA TAD case concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic_stitch CUDA TAD case impl

Signed-off-by: raver119 <raver119@gmail.com>

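dynamic_stitch is the inverse operation: merged[indices[m][i]] = data[m][i], scattering the partitioned pieces back into a single array (again following the TF semantics). A scalar-case sketch:

import java.util.Arrays;

public class DynamicStitchSketch {
    static double[] stitch(int[][] indices, double[][] data, int size) {
        double[] merged = new double[size];
        for (int m = 0; m < indices.length; m++)
            for (int i = 0; i < indices[m].length; i++)
                merged[indices[m][i]] = data[m][i];
        return merged;
    }

    public static void main(String[] args) {
        // Inverse of the partition example: rebuilds [10.0, 20.0, 30.0, 40.0]
        System.out.println(Arrays.toString(stitch(new int[][]{{0, 2}, {1, 3}},
                new double[][]{{10, 30}, {20, 40}}, 4)));
    }
}
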
* Added tests for dynamic_stitch 3D-4D cases.

* minor test tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed type check for dynamic stitch.

* min/max bp

Signed-off-by: raver119 <raver119@gmail.com>

* rewrite code for upsampling2d/3d cpu

Signed-off-by: Yurii <yurii@skymind.io>

* reduce min/max/norm_max bp

Signed-off-by: raver119 <raver119@gmail.com>

* lup implementation. Additional enhancements.

* provide code for upsampling2d/3d backprop

Signed-off-by: Yurii <yurii@skymind.io>

* weightedCrossEntropyWithLogits

Signed-off-by: raver119 <raver119@gmail.com>

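As documented for the TensorFlow op of the same name, the per-element loss for target z, logit x and positive weight q is (1 - z)*x + (1 + (q - 1)*z)*log(1 + exp(-x)), computed in a form that stays stable for large |x|. A sketch:

public class WeightedXentSketch {
    // (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x)), stabilized
    static double loss(double target, double logit, double posWeight) {
        double l = 1 + (posWeight - 1) * target;
        // log(1 + exp(-x)) = log1p(exp(-|x|)) + max(-x, 0) avoids overflow
        double stableLog = Math.log1p(Math.exp(-Math.abs(logit))) + Math.max(-logit, 0);
        return (1 - target) * logit + l * stableLog;
    }

    public static void main(String[] args) {
        System.out.println(loss(1.0, 2.0, 3.0)); // ~0.381: positive term weighted by 3
    }
}
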
* Fixed template math atomicMul for 64-bit ints.

* Refactored dynamic_partition_bp op.

* inverseBroadcast fix

Signed-off-by: raver119 <raver119@gmail.com>

* DynamicPartitionBP test datatype fixed.

* - nd4j_atomicMul Windows fix
- cpu/NDArrayLambda.hpp excluded from CUDA

Signed-off-by: raver119 <raver119@gmail.com>
AlexDBlack committed Jun 28, 2019
1 parent d296026 commit 0eaaabb
Showing 331 changed files with 18,080 additions and 7,484 deletions.
@@ -31,7 +31,7 @@ public synchronized static TaskCreator defaultTaskCreatorFor(Class<? extends Par
}
return c.newInstance();
} catch (Exception e){
throw new RuntimeException("Could not create new instance of task creator class: " + c, e);
throw new RuntimeException("Could not create new instance of task creator class: " + c + " - missing no-arg constructor?", e);
}
}

@@ -83,7 +83,7 @@ private DataSetIteratorFactory create(Map<String, Object> dataParameters) {
(Class<? extends DataSetIteratorFactory>) Class.forName(value);
return clazz.newInstance();
} catch (Exception e) {
- throw new RuntimeException(e);
+ throw new RuntimeException("Could not create DataSetIteratorFactory instance - missing no-arg constructor?", e);
}
}
}
@@ -79,7 +79,7 @@ private DataSetIteratorFactory create(Map<String, Object> dataParameters) {
(Class<? extends DataSetIteratorFactory>) Class.forName(value);
return clazz.newInstance();
} catch (Exception e) {
- throw new RuntimeException(e);
+ throw new RuntimeException("Could not create DataSetIteratorFactory instance - missing no-arg constructor?", e);
}
}
}
@@ -54,7 +54,7 @@ public double score(Object model, Class<? extends DataSource> dataSource, Proper
ds.configure(dataSourceProperties);
}
} catch (Exception e){
- throw new RuntimeException(e);
+ throw new RuntimeException("Error creating DataSource instance - missing no-arg constructor?", e);
}
return score(model, ds.testData());
}
@@ -188,10 +188,15 @@ private OptimizationResult callHelper() throws Exception {
//For DataSetIterator: wraps in a MultiDataSetIterator, hence method can be used for both
MultiDataSetIterator iterator;
if(dataSource != null){
- DataSource dsInstance = dataSource.newInstance();
- if(dataSourceProperties != null)
-     dsInstance.configure(dataSourceProperties);
- iterator = ScoreUtil.getMultiIterator(dsInstance.trainData());
+ try {
+     DataSource dsInstance = dataSource.newInstance();
+     if (dataSourceProperties != null)
+         dsInstance.configure(dataSourceProperties);
+     iterator = ScoreUtil.getMultiIterator(dsInstance.trainData());
+ } catch (Exception e){
+     throw new RuntimeException("Error instantiating instance of DataSource for class " + dataSource.getName() +
+             " - no zero-arg constructor?",e);
+ }
} else {
iterator = ScoreUtil.getMultiIterator(dataProvider.trainData(candidate.getDataParameters()));
}
@@ -190,7 +190,8 @@ private OptimizationResult callHelper() {
try{
dsInstance = dataSource.newInstance();
} catch (Exception e){
throw new RuntimeException("Error instantiating instance of DataSource for class " + dataSource.getName());
throw new RuntimeException("Error instantiating instance of DataSource for class " + dataSource.getName() +
" - no zero-arg constructor?",e);
}
if(dataSourceProperties != null)
dsInstance.configure(dataSourceProperties);
@@ -26,6 +26,7 @@
import org.datavec.api.writable.Text;
import org.datavec.api.writable.Writable;
import org.junit.Test;
+ import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;
@@ -78,14 +79,14 @@ public void testNDArrayColumnsMathOpTransform() {
assertEquals(expColNames, tp.getFinalSchema().getColumnNames());


- List<Writable> in = Arrays.<Writable>asList(new DoubleWritable(0), new NDArrayWritable(Nd4j.linspace(0, 9, 10)),
-         new NDArrayWritable(Nd4j.valueArrayOf(1, 10, 2.0)));
+ List<Writable> in = Arrays.<Writable>asList(new DoubleWritable(0), new NDArrayWritable(Nd4j.linspace(DataType.DOUBLE,0, 10, 1).reshape(1,10)),
+         new NDArrayWritable(Nd4j.valueArrayOf(1, 10, 2.0).castTo(DataType.DOUBLE)));
List<Writable> out = tp.execute(in);

List<Writable> exp =
- Arrays.<Writable>asList(new DoubleWritable(0), new NDArrayWritable(Nd4j.linspace(0, 9, 10)),
-         new NDArrayWritable(Nd4j.valueArrayOf(1, 10, 2.0)),
-         new NDArrayWritable(Nd4j.linspace(0, 9, 10).addi(2.0)));
+ Arrays.<Writable>asList(new DoubleWritable(0), new NDArrayWritable(Nd4j.linspace(DataType.DOUBLE,0, 10, 1).reshape(1,10)),
+         new NDArrayWritable(Nd4j.valueArrayOf(1, 10, 2.0).castTo(DataType.DOUBLE)),
+         new NDArrayWritable(Nd4j.linspace(DataType.DOUBLE, 0, 10, 1).addi(2.0).reshape(1,10)));

assertEquals(exp, out);
}
@@ -20,9 +20,15 @@
import org.deeplearning4j.BaseDL4JTest;
import org.deeplearning4j.datasets.iterator.tools.DataSetGenerator;
import org.junit.Test;
+ import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
+ import org.nd4j.linalg.exception.ND4JIllegalStateException;
+ import org.nd4j.linalg.factory.Nd4j;

- import static org.junit.Assert.assertEquals;
+ import java.util.Collections;
+ import java.util.List;
+ import java.util.Random;

+ import static org.junit.Assert.*;

public class DataSetSplitterTests extends BaseDL4JTest {
@Test
@@ -39,7 +45,7 @@ public void testSplitter_1() throws Exception {
int gcntTest = 0;
int global = 0;
// emulating epochs here
- for (int e = 0; e < numEpochs; e++){
+ for (int e = 0; e < numEpochs; e++) {
int cnt = 0;
while (train.hasNext()) {
val data = train.next().getFeatures();
@@ -79,7 +85,7 @@ public void testSplitter_2() throws Exception {
int gcntTest = 0;
int global = 0;
// emulating epochs here
- for (int e = 0; e < numEpochs; e++){
+ for (int e = 0; e < numEpochs; e++) {
int cnt = 0;
while (train.hasNext()) {
val data = train.next().getFeatures();
@@ -117,7 +123,7 @@ public void testSplitter_3() throws Exception {
int gcntTest = 0;
int global = 0;
// emulating epochs here
- for (int e = 0; e < numEpochs; e++){
+ for (int e = 0; e < numEpochs; e++) {
int cnt = 0;
while (train.hasNext()) {
val data = train.next().getFeatures();
@@ -144,4 +150,245 @@ public void testSplitter_3() throws Exception {

assertEquals(1000 * numEpochs, global);
}

@Test
public void testSplitter_4() {
val back = new DataSetGenerator(1000, new int[]{32, 100}, new int[]{32, 5});

val splitter = new DataSetIteratorSplitter(back, 1000, new double[]{0.5, 0.3, 0.2});
List<DataSetIterator> iteratorList = splitter.getIterators();
val numEpochs = 10;
int global = 0;
// emulating epochs here
for (int e = 0; e < numEpochs; e++) {
int iterNo = 0;
int perEpoch = 0;
for (val partIterator : iteratorList) {
int cnt = 0;
partIterator.reset();
while (partIterator.hasNext()) {
val data = partIterator.next().getFeatures();
assertEquals("Train failed on iteration " + cnt + "; epoch: " + e,
(float) perEpoch, data.getFloat(0), 1e-5);
//gcntTrain++;
global++;
cnt++;
++perEpoch;
}
++iterNo;
}
}

assertEquals(1000* numEpochs, global);
}

@Test
public void testSplitter_5() {
val back = new DataSetGenerator(1000, new int[]{32, 100}, new int[]{32, 5});

val splitter = new DataSetIteratorSplitter(back, new int[]{900, 100});

List<DataSetIterator> iteratorList = splitter.getIterators();
val numEpochs = 10;

int global = 0;
// emulating epochs here
for (int e = 0; e < numEpochs; e++) {
int iterNo = 0;
int perEpoch = 0;
for (val partIterator : iteratorList) {
partIterator.reset();
while (partIterator.hasNext()) {
int cnt = 0;
val data = partIterator.next().getFeatures();

assertEquals("Train failed on iteration " + cnt + "; epoch: " + e,
(float) perEpoch, data.getFloat(0), 1e-5);
//gcntTrain++;
global++;
cnt++;
++perEpoch;
}
++iterNo;
}
}

assertEquals(1000 * numEpochs, global);
}

@Test
public void testSplitter_6() {
val back = new DataSetGenerator(1000, new int[]{32, 100}, new int[]{32, 5});

// we're going to mimic train+test+validation split
val splitter = new DataSetIteratorSplitter(back, new int[]{800, 100, 100});

assertEquals(3, splitter.getIterators().size());

val trainIter = splitter.getIterators().get(0);
val testIter = splitter.getIterators().get(1);
val validationIter = splitter.getIterators().get(2);

// we're going to have multiple epochs
int numEpochs = 10;
for (int e = 0; e < numEpochs; e++) {
int globalIter = 0;
trainIter.reset();
testIter.reset();
validationIter.reset();

boolean trained = false;
while (trainIter.hasNext()) {
trained = true;
val ds = trainIter.next();
assertNotNull(ds);

assertEquals("Failed at iteration [" + globalIter + "]", (double) globalIter, ds.getFeatures().getDouble(0), 1e-5f);
globalIter++;
}
assertTrue("Failed at epoch [" + e + "]", trained);
assertEquals(800, globalIter);


// test set is used every epoch
boolean tested = false;
//testIter.reset();
while (testIter.hasNext()) {
tested = true;
val ds = testIter.next();
assertNotNull(ds);

assertEquals("Failed at iteration [" + globalIter + "]", (double) globalIter, ds.getFeatures().getDouble(0), 1e-5f);
globalIter++;
}
assertTrue("Failed at epoch [" + e + "]", tested);
assertEquals(900, globalIter);

// validation set is used every 5 epochs
if (e % 5 == 0) {
boolean validated = false;
//validationIter.reset();
while (validationIter.hasNext()) {
validated = true;
val ds = validationIter.next();
assertNotNull(ds);

assertEquals("Failed at iteration [" + globalIter + "]", (double) globalIter, ds.getFeatures().getDouble(0), 1e-5f);
globalIter++;
}
assertTrue("Failed at epoch [" + e + "]", validated);
}

// all 3 iterators have exactly 1000 elements combined
if (e % 5 == 0)
assertEquals(1000, globalIter);
else
assertEquals(900, globalIter);
trainIter.reset();
}
}

@Test
public void testUnorderedSplitter_1() {
val back = new DataSetGenerator(1000, new int[]{32, 100}, new int[]{32, 5});

val splitter = new DataSetIteratorSplitter(back, new int[]{500, 500});

List<DataSetIterator> iteratorList = splitter.getIterators();
val numEpochs = 10;

int global = 0;
// emulating epochs here
for (int e = 0; e < numEpochs; e++) {

// Get data from second part, then rewind for the first one.
int cnt = 0;
int partNumber = 1;
while (iteratorList.get(partNumber).hasNext()) {
int farCnt = (1000 / 2) * (partNumber) + cnt;
val data = iteratorList.get(partNumber).next().getFeatures();

assertEquals("Train failed on iteration " + cnt + "; epoch: " + e, (float) farCnt, data.getFloat(0), 1e-5);
cnt++;
global++;
}
iteratorList.get(partNumber).reset();
partNumber = 0;
cnt = 0;
while (iteratorList.get(0).hasNext()) {
val data = iteratorList.get(0).next().getFeatures();

assertEquals("Train failed on iteration " + cnt + "; epoch: " + e, (float) cnt++, data.getFloat(0), 1e-5);
global++;
}
}
}

@Test
public void testUnorderedSplitter_2() {
val back = new DataSetGenerator(1000, new int[]{32, 100}, new int[]{32, 5});

val splitter = new DataSetIteratorSplitter(back, new int[]{2});

List<DataSetIterator> iteratorList = splitter.getIterators();

for (int partNumber = 0 ; partNumber < iteratorList.size(); ++partNumber) {
int cnt = 0;
while (iteratorList.get(partNumber).hasNext()) {
val data = iteratorList.get(partNumber).next().getFeatures();

assertEquals("Train failed on iteration " + cnt, (float) (500*partNumber + cnt), data.getFloat(0), 1e-5);
cnt++;
}
}
}

@Test
public void testUnorderedSplitter_3() {
val back = new DataSetGenerator(1000, new int[]{32, 100}, new int[]{32, 5});

val splitter = new DataSetIteratorSplitter(back, new int[]{10});

List<DataSetIterator> iteratorList = splitter.getIterators();
Random random = new Random();
int[] indexes = new int[iteratorList.size()];
for (int i = 0; i < indexes.length; ++i) {
indexes[i] = random.nextInt(iteratorList.size());
}

for (int partNumber : indexes) {
int cnt = 0;
while (iteratorList.get(partNumber).hasNext()) {
val data = iteratorList.get(partNumber).next().getFeatures();

assertEquals("Train failed on iteration " + cnt, (float) (500*partNumber + cnt), data.getFloat(0), 1e-5);
cnt++;
}
}
}

@Test
public void testUnorderedSplitter_4() {
val back = new DataSetGenerator(1000, new int[]{32, 100}, new int[]{32, 5});

// we're going to mimic train+test+validation split
val splitter = new DataSetIteratorSplitter(back, new int[]{80, 10, 5});

assertEquals(3, splitter.getIterators().size());

val trainIter = splitter.getIterators().get(0); // 0..79
val testIter = splitter.getIterators().get(1); // 80 ..89
val validationIter = splitter.getIterators().get(2); // 90..94

// we're skipping train/test and go for validation first. we're that crazy, right.
int valCnt = 0;
while (validationIter.hasNext()) {
val ds = validationIter.next();
assertNotNull(ds);

assertEquals("Validation failed on iteration " + valCnt, (float) valCnt + 90, ds.getFeatures().getFloat(0), 1e-5);
valCnt++;
}
assertEquals(5, valCnt);
}
}
