Baseline set of fixes for release #10052

Open
wants to merge 64 commits into base: master

64 commits
77aabe9
Fix ndarray compilation
agibsonccc Jul 30, 2023
40cac19
Fix ndarray compilation
agibsonccc Jul 30, 2023
0f780b2
Add new build scripts
agibsonccc Aug 4, 2023
9865169
Add deallocation and allocation logging behind a flag to allow bette…
agibsonccc Aug 8, 2023
b3abfdc
Fix up transpose op bugs. (deallocations and shape buffer related)
agibsonccc Aug 13, 2023
569337c
Clean up string assign support
agibsonccc Aug 15, 2023
3b2ada2
Add proper thread sanitizer support
agibsonccc Aug 21, 2023
9269620
Fix up scalar shape builders support
agibsonccc Aug 26, 2023
8aa0ef1
WIP:
agibsonccc Sep 19, 2023
0b96de1
Fix dynamic stitch in cuda
agibsonccc Sep 27, 2023
836f51f
Fix conv2d cuda invocation
agibsonccc Sep 27, 2023
2a9cd46
Fix cpu build
agibsonccc Sep 28, 2023
749ed8d
Add new result set printing
agibsonccc Oct 1, 2023
949e7f8
Remove print statements
agibsonccc Oct 1, 2023
44746f4
Add databuffer debugging
agibsonccc Oct 7, 2023
0ddaf46
Remove print statements
agibsonccc Oct 11, 2023
967fb47
Fix underlying tad cuda issue
agibsonccc Oct 14, 2023
a33c200
Fix solve to reach parity with cuda
agibsonccc Oct 21, 2023
09d3cfc
Fix array options data type
agibsonccc Oct 27, 2023
8da6147
Fix array options data type
agibsonccc Oct 27, 2023
e89c928
Ensure a check is performed for empty buffers when validating data types
agibsonccc Oct 27, 2023
abf2bfa
Add partitioned tensorflow tests to alleviate cuda resource constraints
agibsonccc Nov 1, 2023
1248eb5
Remove old assumption about reduce + empty shapes always being 0 length.
agibsonccc Nov 1, 2023
87ea11e
Fix range shape return type
agibsonccc Nov 2, 2023
d4419b8
Fix up more range invocations and empty shapes
agibsonccc Nov 2, 2023
d4df0a1
Fix all TF partitions
agibsonccc Nov 7, 2023
7923dc8
Add more order validation.
agibsonccc Nov 9, 2023
06f0e06
Exclude validation on certain shape descriptor constructors
agibsonccc Nov 9, 2023
7e092ae
Add reshape to flattened params
agibsonccc Nov 17, 2023
86e5975
Fix misc reshape issues with 2d biases
agibsonccc Dec 3, 2023
eb39721
Fix assign on flatten arrays
agibsonccc Dec 7, 2023
9dc5bff
Overhaul workspaces usage in deeplearning4j-nn.
agibsonccc Dec 15, 2023
3b57454
Update computation graph to achieve parity with new workspace managem…
agibsonccc Dec 15, 2023
fa037f2
Fix reverse time series mask
agibsonccc Dec 16, 2023
7e162fa
Add model manager in python in dl4j contrib.
agibsonccc Dec 19, 2023
650007a
Add first bidirectional test
agibsonccc Dec 19, 2023
2ecd289
Fix model manager running models
agibsonccc Dec 20, 2023
ec28aca
Finish fixing recurrent layer
agibsonccc Dec 22, 2023
3538c99
Add new ndarray logging to track changes to arrays over time.
agibsonccc Feb 10, 2024
891dbe6
Fix bidirectional tests
agibsonccc Mar 5, 2024
864caa5
Clean up print statements, delete unused classes.
agibsonccc Mar 5, 2024
4f195a5
Removes graveslstm, fix comp graph test for bidirectional
agibsonccc Mar 6, 2024
ca78a4b
Fix lstm gradient check tests
agibsonccc Mar 12, 2024
0d467ea
Fix up shape.h linking
agibsonccc Mar 15, 2024
f24a437
Add c++ based log ndarray events (print statements in ndarray and dat…
agibsonccc Mar 16, 2024
17e85ae
Fix subtle view issues
agibsonccc Mar 17, 2024
f2ecc7b
Fix simplernn gemm
agibsonccc Mar 18, 2024
7d0920e
Add missing function
agibsonccc Mar 18, 2024
f26dfa6
Fix conv3d gradient checks
agibsonccc Mar 26, 2024
a5bf886
Fix convolution output size calculations for 2/3d
agibsonccc Apr 4, 2024
837b3e2
Rewrite locallyconnected 2d.
agibsonccc Apr 18, 2024
127f7e0
WIP: update conv2d to work with different weight formats.
agibsonccc Apr 18, 2024
5ac76d0
Finish conv2d layer conversion
agibsonccc Apr 23, 2024
fe86569
Fix conv2d gradient checks (ports java logic to c++)
agibsonccc Apr 27, 2024
1ba16b3
Remove print statements
agibsonccc Apr 27, 2024
6887d2b
Fix nchw setting in conv2d.
agibsonccc Apr 28, 2024
07b7e07
Clean up print statements
agibsonccc Apr 29, 2024
d32edbf
Add creation stack trace tracking for constructors
agibsonccc May 2, 2024
ae93deb
Fix batched gemm indexing.
agibsonccc May 3, 2024
3a1db4c
Fix constant shape helper with reshape in place.
agibsonccc May 6, 2024
0b300c6
Fix more constant buffer cache in place modification.
agibsonccc May 6, 2024
dffcc13
Fix grad checks for conv2d bp.
agibsonccc May 9, 2024
0bc7237
Ensure compatibility with old dl4j conv2d implementation.
agibsonccc May 15, 2024
dc108a7
Get rid of tests.
agibsonccc May 15, 2024
12 changes: 5 additions & 7 deletions LICENSE
Original file line number Diff line number Diff line change
@@ -375,10 +375,8 @@ Apache License, Version 2.0

##########################

Kuromoji Code

Codebase: deeplearning4j/deeplearning4j-nlp-parent/deeplearning4j-nlp-japanese/src/main/java/com/atilika/kuromoji/

Copyright (c) 2010-2015 Atilika Inc. and contributors. All rights reserved.

Apache License, Version 2.0
##########################
Copyright 2016 Scalified <http://www.scalified.com>
src/main/java/org/nd4j/common/com/scalified/tree
https://github.com/Scalified/tree
##########################
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend-address-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -DskipTests -pl :libnd4j,:nd4j-native-preset,:nd4j-native -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=address,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend-debug-address-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -Dlibnd4j.calltrace=ON -Dlibnd4j.build=debug -DskipTests -pl :libnd4j,:nd4j-native-preset,:nd4j-native -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=address,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend-debug.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -Dlibnd4j.calltrace=ON -Dlibnd4j.build=debug -DskipTests -pl :libnd4j,:nd4j-native-preset,:nd4j-native
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend-onednn-address-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -DskipTests -Dlibnd4j.helper=onednn -pl :libnd4j,:nd4j-native-preset,:nd4j-native -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=address,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend-onednn-debug.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -DskipTests -Dlibnd4j.calltrace=ON -Dlibnd4j.helper=onednn -Dlibnd4j.build=debug -pl :libnd4j,:nd4j-native-preset,:nd4j-native
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend-onednn-thread-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -DskipTests -Dlibnd4j.helper=onednn -pl :libnd4j,:nd4j-native-preset,:nd4j-native -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=thread,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend-onednn.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -DskipTests -Dlibnd4j.helper=onednn -pl :libnd4j,:nd4j-native-preset,:nd4j-native
3 changes: 3 additions & 0 deletions build-scripts/build-cpu-backend.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcpu clean install -DskipTests -pl :libnd4j,:nd4j-native-preset,:nd4j-native
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend-address-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.chip=cuda clean install -DskipTests -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1 -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=address,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend-cudnn-address-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.compute=86 -Dlibnd4j.chip=cuda clean install -DskipTests -Dlibnd4j.helper=cudnn -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1 -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=address,undefined,float-divide-by-zero,float-cast-overflow
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.compute=86 -Dlibnd4j.chip=cuda -Dlibnd4j.helper=cudnn clean install -Dlibnd4j.build=debug -Dlibnd4j.calltrace=ON -DskipTests -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1 -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=address,undefined,float-divide-by-zero,float-cast-overflow
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.compute=86 -Dlibnd4j.chip=cuda -Dlibnd4j.helper=cudnn clean install -Dlibnd4j.build=debug -Dlibnd4j.calltrace=ON -DskipTests -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1 -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=thread,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend-cudnn-debug.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.compute=86 -Dlibnd4j.chip=cuda -Dlibnd4j.helper=cudnn clean install -Dlibnd4j.build=debug -Dlibnd4j.calltrace=ON -DskipTests -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend-cudnn-thread-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.compute=86 -Dlibnd4j.chip=cuda clean install -DskipTests -Dlibnd4j.helper=cudnn -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1 -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=thread,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend-cudnn.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.compute=86 -Dlibnd4j.chip=cuda clean install -DskipTests -Dlibnd4j.helper=cudnn -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend-debug.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.compute=86 -Dlibnd4j.chip=cuda clean install -Dlibnd4j.build=debug -Dlibnd4j.calltrace=ON -DskipTests -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend-thread-sanitizer.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.chip=cuda clean install -DskipTests -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1 -Dlibnd4j.sanitize=ON -Dlibnd4j.sanitizers=thread,undefined,float-divide-by-zero,float-cast-overflow
3 changes: 3 additions & 0 deletions build-scripts/build-cuda-backend.sh
@@ -0,0 +1,3 @@
#!/bin/bash
cd ..
mvn -Pcuda -Dlibnd4j.chip=cuda clean install -DskipTests -pl :libnd4j,:nd4j-cuda-12.1-preset,:nd4j-cuda-12.1
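All of the build scripts above follow one pattern: change to the repository root, then run `mvn` with a chip profile, a module list via `-pl`, and a set of `-Dlibnd4j.*` flags for debug builds, helpers (onednn/cudnn), and sanitizers. As a rough illustration of that pattern (not part of this PR; the helper function name and defaults are hypothetical), the command line for a given configuration could be assembled like this:

```python
# Hypothetical helper mirroring the shape of the build scripts above:
#   mvn -P<profile> clean install -DskipTests [flags] -pl <modules>
def libnd4j_build_command(profile="cpu",
                          modules=(":libnd4j", ":nd4j-native-preset", ":nd4j-native"),
                          sanitizers=None, debug=False, helper=None):
    cmd = ["mvn", f"-P{profile}", "clean", "install", "-DskipTests"]
    if debug:
        # The debug scripts also enable call tracing
        cmd += ["-Dlibnd4j.build=debug", "-Dlibnd4j.calltrace=ON"]
    if helper:
        cmd.append(f"-Dlibnd4j.helper={helper}")
    cmd += ["-pl", ",".join(modules)]
    if sanitizers:
        cmd += ["-Dlibnd4j.sanitize=ON",
                "-Dlibnd4j.sanitizers=" + ",".join(sanitizers)]
    return cmd


# Reproduces the flags of build-cpu-backend-address-sanitizer.sh (order aside)
print(" ".join(libnd4j_build_command(
    sanitizers=["address", "undefined", "float-divide-by-zero", "float-cast-overflow"])))
```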
2 changes: 1 addition & 1 deletion codegen/libnd4j-gen/pom.xml
@@ -43,7 +43,7 @@
<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>
<nd4j.version>1.0.0-SNAPSHOT</nd4j.version>
<maven-shade-plugin.version>3.1.1</maven-shade-plugin.version>
<maven-shade-plugin.version>3.5.1</maven-shade-plugin.version>
<javaparser.version>3.24.4</javaparser.version>
</properties>

9 changes: 7 additions & 2 deletions codegen/op-codegen/pom.xml
@@ -25,7 +25,7 @@
<junit.platform.launcher.version>1.8.0-M1</junit.platform.launcher.version>
<junit-jupiter.version>5.4.2</junit-jupiter.version>
<java.version>11</java.version>
<maven-shade-plugin.version>3.2.1</maven-shade-plugin.version>
<maven-shade-plugin.version>3.5.1</maven-shade-plugin.version>
<kotlin.compiler.jvmTarget>11</kotlin.compiler.jvmTarget>
<kotlin.compiler.incremental>true</kotlin.compiler.incremental>
<javapoet.version>1.13.0</javapoet.version>
@@ -53,7 +53,12 @@
<artifactId>slf4j-api</artifactId>
<version>1.7.28</version>
</dependency>

<!-- Redirect log4j to slf4j. -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
<version>1.7.28</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
@@ -103,7 +103,6 @@ private Nd4jNamespaceGenerator() { }

public static void generate(NamespaceOps namespace, GeneratorConfig config, File outputDirectory, String className,
String basePackage, String docsDirectory) throws IOException {
//String basePackage = "org.nd4j.linalg.factory";

generateEnums(outputDirectory, basePackage);
generateConfigs(outputDirectory, basePackage);
@@ -117,7 +116,6 @@ public static void generate(NamespaceOps namespace, GeneratorConfig config, File

public static void generate(NamespaceOps namespace, GeneratorConfig config, File outputDirectory, String className,
String basePackage, String parentClass, String docsDirectory) throws IOException {
//String basePackage = "org.nd4j.linalg.factory";

generateEnums(outputDirectory, basePackage);
generateConfigs(outputDirectory, basePackage);
@@ -280,8 +278,8 @@ private static void buildJavaDoc(Op op, Signature s, MethodSpec.Builder c, boole
}
List<Parameter> params = s.getParameters();
if(!params.isEmpty()){
for(Parameter p : params){
if(p instanceof Input){
for(Parameter p : params) {
if(p instanceof Input) {
Input i = (Input)p;
c.addJavadoc("@param " + i.getName() + " " + (i.getDescription() == null ? "" : DocTokens.processDocText(i.getDescription(), op, DocTokens.GenerationType.ND4J)) + " (" + i.getType() + " type)\n");
} else if(p instanceof Arg) {
@@ -454,7 +452,7 @@ private static void buildExecution(MethodSpec.Builder c, Op op, List<String> inN
return 0;
}
).map(it -> {
if(inNames.contains(it.name())){
if(inNames.contains(it.name())) {
return it.name();
}else{
if(!it.hasDefaultValue()) throw new IllegalStateException("The parameter "+it.name()+" has no default value, but is also not part of "+inNames.toString());
@@ -542,7 +540,7 @@ private static void buildExecution(MethodSpec.Builder c, Op op, List<String> inN

private static void enableVarargsOnLastArg(MethodSpec.Builder c, Op op, Signature s) {
List<Parameter> p = s.getParameters();
if(!p.isEmpty()){
if(!p.isEmpty()) {
Parameter lastP = p.get(p.size() - 1);
if (lastP instanceof Arg) {
Arg arg = (Arg) lastP;
@@ -634,7 +632,6 @@ else if (withName)
private static StringBuilder buildDocSectionText(List<DocSection> docSections) {
StringBuilder sb = new StringBuilder();
for (DocSection ds : docSections) {
//if(ds.applies(Language.JAVA, CodeComponent.OP_CREATOR)){
String text = ds.getText();
String[] lines = text.split("\n");
for (int i = 0; i < lines.length; i++) {
@@ -0,0 +1,131 @@
import abc
import os
from typing import Dict, List

import numpy as np
import tensorflow as tf
from keras import Sequential, Model
from keras.engine.functional import Functional


class ModelState:
    """
    The state of a model, including the model itself, the inputs, and the outputs.
    """

    def __init__(self, model_name: str, model: tf.keras.models.Model, inputs: List[tf.Tensor] = None,
                 outputs: List[tf.Tensor] = None):
        self.model_name = model_name
        self.model = model
        # Avoid mutable default arguments: allocate a fresh list per instance
        self.inputs = inputs if inputs is not None else []
        self.outputs = outputs if outputs is not None else []


class ModelManager(abc.ABC):
    """
    The ModelManager class is responsible for building and saving models. It is an abstract class;
    subclasses must implement the model_builder and set_default_inputs methods. model_builder builds
    the models and adds them to the model manager. set_default_inputs sets the default inputs for
    the models. compute_all_outputs computes the outputs for all models, and save_all saves all
    models and their inputs/outputs to the output directory.
    """

    def __init__(self):
        self.models: Dict[str, ModelState] = {}
        self.base_dir = ''

    def make_dir(self, path: str) -> None:
        os.makedirs(os.path.join(path, self.test_name()), exist_ok=True)
        self.base_dir = os.path.join(path, self.test_name())

    def test_name(self) -> str:
        """
        :return: the name of the test (the class name) - typically used as the directory name
        """
        return self.__class__.__name__

    def add_model(self, model_name: str, model: tf.keras.models.Model) -> None:
        """
        Adds a model to the model manager
        :param model_name: the name of the model
        :param model: the model itself
        :return: none
        """
        self.models[model_name] = ModelState(model_name, model)

    @abc.abstractmethod
    def model_builder(self):
        """
        Builds the models and adds them to the model manager
        :return:
        """
        pass

    @abc.abstractmethod
    def set_default_inputs(self) -> None:
        """
        Sets the default inputs for the models
        :return:
        """
        pass

    def compute_all_outputs(self) -> None:
        """
        Computes the outputs for all models
        :return:
        """
        for model_name in self.models.keys():
            self.compute_outputs(model_name)

    def set_inputs(self, model_name: str, inputs: List[tf.Tensor]) -> None:
        self.models[model_name].inputs = inputs

    def compute_outputs(self, model_name: str) -> None:
        """
        Computes the outputs for a single model
        :param model_name:
        :return:
        """
        model_state = self.models.get(model_name)
        if not model_state:
            raise ValueError(f"No model found with name: {model_name}")

        model_state.outputs = model_state.model(model_state.inputs)

    def save_all(self) -> None:
        """
        Saves all models and inputs/outputs to the output directory
        :return:
        """
        for model_name, model_state in self.models.items():
            model_dir = os.path.join(self.base_dir, model_name)
            os.makedirs(model_dir, exist_ok=True)

            # Save the model in .h5 format
            model_state.model.save(os.path.join(model_dir, "model.h5"), save_format='h5')
            if not isinstance(model_state.inputs, list):
                model_state.inputs = [model_state.inputs]
            if not isinstance(model_state.outputs, list):
                model_state.outputs = [model_state.outputs]

            # Save the inputs and outputs as .npy files
            inputs, outputs = model_state.inputs, model_state.outputs
            for i, (input_, output) in enumerate(zip(inputs, outputs)):
                np.save(os.path.join(model_dir, f"{model_name}_input_{i}.npy"), input_.numpy())
                np.save(os.path.join(model_dir, f"{model_name}_output_{i}.npy"), output.numpy())

            # Note: this is deliberately simplistic. All we want here is a human-readable
            # way of knowing what kind of model this is, without JSON/HDF5 complexity.
            if type(model_state.model) is Sequential:
                with open(os.path.join(model_dir, "model_type.txt"), 'w') as f:
                    f.write('Sequential')
            elif type(model_state.model) is Functional:
                with open(os.path.join(model_dir, "model_type.txt"), 'w') as f:
                    f.write('Functional')
            elif type(model_state.model) is Model:
                with open(os.path.join(model_dir, "model_type.txt"), 'w') as f:
                    f.write('Model')
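To make the manager pattern above concrete without pulling in TensorFlow, here is a framework-agnostic sketch (all names below are hypothetical and simplified; a plain callable stands in for a keras model): a subclass registers models in `model_builder`, sets inputs in `set_default_inputs`, and the base class drives output computation.

```python
import abc
from typing import Callable, Dict, List


class TinyState:
    # Simplified stand-in for ModelState: name, model, inputs, outputs
    def __init__(self, name: str, model: Callable):
        self.name, self.model = name, model
        self.inputs: List = []
        self.outputs: List = []


class TinyManager(abc.ABC):
    # Simplified stand-in for ModelManager
    def __init__(self):
        self.models: Dict[str, TinyState] = {}

    def add_model(self, name: str, model: Callable) -> None:
        self.models[name] = TinyState(name, model)

    @abc.abstractmethod
    def model_builder(self) -> None: ...

    @abc.abstractmethod
    def set_default_inputs(self) -> None: ...

    def compute_all_outputs(self) -> None:
        # Mirrors compute_all_outputs/compute_outputs: run each model on its stored inputs
        for state in self.models.values():
            state.outputs = [state.model(x) for x in state.inputs]


class DoublerManager(TinyManager):
    def model_builder(self) -> None:
        self.add_model("doubler", lambda x: 2 * x)

    def set_default_inputs(self) -> None:
        self.models["doubler"].inputs = [1, 2, 3]


mgr = DoublerManager()
mgr.model_builder()
mgr.set_default_inputs()
mgr.compute_all_outputs()
print(mgr.models["doubler"].outputs)  # → [2, 4, 6]
```

The real `ModelManager` follows the same lifecycle, with `save_all` additionally persisting the model and its tensors to disk.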
@@ -0,0 +1,38 @@
import argparse
import os
import tempfile

import keras.utils

from tests.bidirectional_test import BidirectionalModelManager

managers = [
    BidirectionalModelManager()
]


def main():
    # Ensure random weights are reproducible
    keras.utils.set_random_seed(42)
    parser = argparse.ArgumentParser(description='Save ModelManager subclasses.')
    parser.add_argument('--output_dir', type=str, required=False,
                        default=os.path.join(tempfile.gettempdir(), 'keras-dl4j-verification-models'),
                        help='Directory to store models')
    args = parser.parse_args()

    os.makedirs(args.output_dir, exist_ok=True)

    # Instantiate each ModelManager subclass and build its models
    for model_manager in managers:
        # Make the directory for the test under the output dir; models are stored there
        model_manager.make_dir(args.output_dir)
        # Build the models
        model_manager.model_builder()
        # Set the default inputs
        model_manager.set_default_inputs()
        model_manager.compute_all_outputs()
        # Save under the output directory passed in earlier, in a subdirectory named after the test
        model_manager.save_all()


if __name__ == "__main__":
    main()
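The artifacts written by `save_all` are plain `.npy` files named `<model_name>_input_<i>.npy` / `<model_name>_output_<i>.npy`, so a consumer (e.g. a JVM-side verification test) only needs a standard `.npy` reader. A minimal round-trip sketch of that naming convention (directory and model name here are hypothetical):

```python
import os
import tempfile

import numpy as np

# Round-trip an input tensor through the naming convention save_all uses
model_dir = tempfile.mkdtemp()
arr = np.arange(6, dtype=np.float32).reshape(2, 3)
path = os.path.join(model_dir, "bidirectional_input_0.npy")
np.save(path, arr)

loaded = np.load(path)
print(loaded.shape)  # → (2, 3)
assert np.array_equal(arr, loaded)
```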