Permalink
Find file Copy path
Fetching contributors…
Cannot retrieve contributors at this time
1 lines (1 sloc) 43.9 KB
{"paragraphs":[{"text":"%md\n### Note\n\nPlease view the [README](https://github.com/deeplearning4j/deeplearning4j/tree/master/dl4j-examples/tutorials/README.md) to learn about installing, setting up dependencies, and importing notebooks in Zeppelin","user":"admin","dateUpdated":"2018-05-17T02:01:28+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Note</h3>\n<p>Please view the <a href=\"https://github.com/deeplearning4j/deeplearning4j/tree/master/dl4j-examples/tutorials/README.md\">README</a> to learn about installing, setting up dependencies, and importing notebooks in Zeppelin</p>\n"}]},"apps":[],"jobName":"paragraph_1526096834389_-1277057876","id":"20180512-034714_1413764542","dateCreated":"2018-05-12T03:47:14+0000","dateStarted":"2018-05-17T02:01:28+0000","dateFinished":"2018-05-17T02:01:28+0000","status":"FINISHED","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:1500"},{"text":"%md\n### Background","user":"admin","dateUpdated":"2018-05-17T02:04:28+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Background</h3>\n"}]},"apps":[],"jobName":"paragraph_1526522662969_1247526316","id":"20180517-020422_16424839","dateCreated":"2018-05-17T02:04:22+0000","dateStarted":"2018-05-17T02:04:28+0000","dateFinished":"2018-05-17T02:04:28+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1501"},{"text":"%md\n\nIn this tutorial, we will apply a neural network model to a cloud detection application using satellite imaging data. The data is from NASA's Multi-angle Imaging SpectroRadiometer (MISR) which was launched in 1999. The MISR has nine cameras that view the Earth from nine different directions which allows the MISR to measure elevations and angular radiance signatures of objects. We will use the radiances measured from the MISR and features developed using domain expertise to learn to detect whether clouds are present in polar regions. This is a particularly challenging task due to the snow and ice covering the ground surfaces.\n","user":"admin","dateUpdated":"2018-05-17T06:00:03+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>In this tutorial, we will apply a neural network model to a cloud detection application using satellite imaging data. The data is from NASA's Multi-angle Imaging SpectroRadiometer (MISR) which was launched in 1999. The MISR has nine cameras that view the Earth from nine different directions which allows the MISR to measure elevations and angular radiance signatures of objects. We will use the radiances measured from the MISR and features developed using domain expertise to learn to detect whether clouds are present in polar regions. This is a particularly challenging task due to the snow and ice covering the ground surfaces.</p>\n"}]},"apps":[],"jobName":"paragraph_1526522677551_-1457627877","id":"20180517-020437_909678623","dateCreated":"2018-05-17T02:04:37+0000","dateStarted":"2018-05-17T06:00:03+0000","dateFinished":"2018-05-17T06:00:03+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1502"},{"text":"%md\n### Imports","user":"admin","dateUpdated":"2018-05-17T02:13:58+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Imports</h3>\n"}]},"apps":[],"jobName":"paragraph_1526523234038_1427185962","id":"20180517-021354_1905529525","dateCreated":"2018-05-17T02:13:54+0000","dateStarted":"2018-05-17T02:13:58+0000","dateFinished":"2018-05-17T02:13:58+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1503"},{"text":"import org.datavec.api.records.reader.impl.csv.CSVRecordReader;\nimport org.deeplearning4j.eval.ROC;\nimport org.deeplearning4j.nn.api.OptimizationAlgorithm;\nimport org.deeplearning4j.nn.conf.NeuralNetConfiguration;\nimport org.deeplearning4j.nn.conf.Updater;\nimport org.deeplearning4j.nn.conf.layers.DenseLayer;\nimport org.deeplearning4j.nn.weights.WeightInit;\nimport org.nd4j.linalg.activations.Activation;\nimport org.nd4j.linalg.api.ndarray.INDArray;\nimport org.datavec.api.records.reader.RecordReader;\nimport org.datavec.api.split.FileSplit;\nimport org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator;\nimport org.deeplearning4j.nn.conf.layers.OutputLayer;\nimport org.deeplearning4j.eval.Evaluation;\nimport org.nd4j.linalg.dataset.api.iterator.MultiDataSetIterator;\nimport org.nd4j.linalg.dataset.api.MultiDataSet;\nimport org.deeplearning4j.nn.conf.ComputationGraphConfiguration;\nimport org.nd4j.linalg.lossfunctions.LossFunctions;\nimport org.deeplearning4j.nn.conf.graph.MergeVertex;\nimport org.deeplearning4j.nn.graph.ComputationGraph;\n\nimport org.nd4j.linalg.api.ndarray.INDArray;\nimport java.io.File;\nimport java.net.URL;\nimport java.io.BufferedInputStream;\nimport java.io.FileInputStream;\nimport java.io.BufferedOutputStream;\nimport java.io.FileOutputStream;\nimport org.apache.commons.io.FilenameUtils;\nimport org.apache.commons.io.FileUtils;\nimport org.apache.commons.compress.archivers.tar.TarArchiveInputStream;\nimport org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;\nimport org.apache.commons.compress.archivers.tar.TarArchiveEntry;","user":"admin","dateUpdated":"2018-05-17T02:01:30+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":false},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"import org.datavec.api.records.reader.impl.csv.CSVRecordReader\nimport org.deeplearning4j.eval.ROC\nimport org.deeplearning4j.nn.api.OptimizationAlgorithm\nimport org.deeplearning4j.nn.conf.NeuralNetConfiguration\nimport org.deeplearning4j.nn.conf.Updater\nimport org.deeplearning4j.nn.conf.layers.DenseLayer\nimport org.deeplearning4j.nn.weights.WeightInit\nimport org.nd4j.linalg.activations.Activation\nimport org.nd4j.linalg.api.ndarray.INDArray\nimport org.datavec.api.records.reader.RecordReader\nimport org.datavec.api.split.FileSplit\nimport org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator\nimport org.deeplearning4j.nn.conf.layers.OutputLayer\nimport org.deeplearning4j.eval.Evaluation\nimport org.nd4j.linalg.dataset.api.iterator.MultiDataSetIterator\nimport org.nd4j.linalg.dataset.api.MultiDataSet\nimport org.deeplearning4j.nn.conf.ComputationGraphConfiguration\nimport org.nd4j.linalg.lossfunctions.LossFunctions\nimport org.deeplearning4j.nn.conf.graph.MergeVertex\nimport org.deeplearning4j.nn.graph.ComputationGraph\nimport org.nd4j.linalg.api.ndarray.INDArray\nimport java.io.File\nimport java.net.URL\nimport java.io.BufferedInputStream\nimport java.io.FileInputStream\nimport java.io.BufferedOutputStream\nimport java.io.FileOutputStream\nimport org.apache.commons.io.FilenameUtils\nimport org.apache.commons.io.FileUtils\nimport org.apache.commons.compress.archivers.tar.TarArchiveInputStream\nimport org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream\nimport org.apache.commons.compress.archivers.tar.TarArchiveEntry\n"}]},"apps":[],"jobName":"paragraph_1526096893400_-1942011245","id":"20180512-034813_871684599","dateCreated":"2018-05-12T03:48:13+0000","dateStarted":"2018-05-17T02:01:30+0000","dateFinished":"2018-05-17T02:01:45+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1504"},{"text":"%md\n### Data","user":"admin","dateUpdated":"2018-05-17T02:14:34+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Data</h3>\n"}]},"apps":[],"jobName":"paragraph_1526523266865_1778892430","id":"20180517-021426_915862275","dateCreated":"2018-05-17T02:14:26+0000","dateStarted":"2018-05-17T02:14:34+0000","dateFinished":"2018-05-17T02:14:34+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1505"},{"text":"%md\nThe data is taken from MISR measurements and expert features of 3 images of polar regions. For each location in the grid, there is an expert label whether or not clouds are present and 8 features (radiances + expert labels). Data from two images will comprise the training set and the left out image is in the test set.\n\nThe data can be found in a tar.gz file located at the url provided below in the next cell. It is organized into two directories (train and test). In each directory there are five subdirectories: n1, n2, n3, n4, and n5. The data in n1 contains expert features and the label pertaining to a particular location in an image. n2, n3, n4, and n5 contain the expert features corresponding to the nearest locations to the original location. \n\nWe will additionally use features from a location's nearest neighbors as features to feed into our model, because there are dependencies across neighboring locations. In other words, if a location's neighbors have a positive cloud label, it is more likely for the original location to have a positive cloud label as well. The reverse also applies as well. ","user":"admin","dateUpdated":"2018-05-17T06:01:17+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>The data is taken from MISR measurements and expert features of 3 images of polar regions. For each location in the grid, there is an expert label whether or not clouds are present and 8 features (radiances + expert labels). Data from two images will comprise the training set and the left out image is in the test set.</p>\n<p>The data can be found in a tar.gz file located at the url provided below in the next cell. It is organized into two directories (train and test). In each directory there are five subdirectories: n1, n2, n3, n4, and n5. The data in n1 contains expert features and the label pertaining to a particular location in an image. n2, n3, n4, and n5 contain the expert features corresponding to the nearest locations to the original location.</p>\n<p>We will additionally use features from a location's nearest neighbors as features to feed into our model, because there are dependencies across neighboring locations. In other words, if a location's neighbors have a positive cloud label, it is more likely for the original location to have a positive cloud label as well. The reverse also applies as well.</p>\n"}]},"apps":[],"jobName":"paragraph_1526523659954_-1454368761","id":"20180517-022059_199497961","dateCreated":"2018-05-17T02:20:59+0000","dateStarted":"2018-05-17T06:01:17+0000","dateFinished":"2018-05-17T06:01:17+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1506"},{"text":"val DATA_URL = \"https://bpstore1.blob.core.windows.net/tutorials/Cloud.tar.gz\"\nval DATA_PATH = FilenameUtils.concat(System.getProperty(\"java.io.tmpdir\"), \"dl4j_cloud/\")","user":"admin","dateUpdated":"2018-05-17T02:01:32+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":false},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"DATA_URL: String = https://bpstore1.blob.core.windows.net/tutorials/Cloud.tar.gz\nDATA_PATH: String = /tmp/dl4j_cloud/\n"}]},"apps":[],"jobName":"paragraph_1526097793080_-1994479422","id":"20180512-040313_2076451104","dateCreated":"2018-05-12T04:03:13+0000","dateStarted":"2018-05-17T02:01:32+0000","dateFinished":"2018-05-17T02:01:46+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1507"},{"text":"%md\n### Download Data","user":"admin","dateUpdated":"2018-05-17T02:30:52+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Download Data</h3>\n"}]},"apps":[],"jobName":"paragraph_1526524250611_-412609946","id":"20180517-023050_430162136","dateCreated":"2018-05-17T02:30:50+0000","dateStarted":"2018-05-17T02:30:52+0000","dateFinished":"2018-05-17T02:30:52+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1508"},{"text":"%md\nTo download the data, we will create a temporary directory that will store the data files, extract the tar.gz file from the url, and place it in the specified directory.","user":"admin","dateUpdated":"2018-05-17T02:31:14+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>To download the data, we will create a temporary directory that will store the data files, extract the tar.gz file from the url, and place it in the specified directory.</p>\n"}]},"apps":[],"jobName":"paragraph_1526524266469_486194133","id":"20180517-023106_1069621179","dateCreated":"2018-05-17T02:31:06+0000","dateStarted":"2018-05-17T02:31:14+0000","dateFinished":"2018-05-17T02:31:14+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1509"},{"text":"val directory = new File(DATA_PATH)\ndirectory.mkdir() \n\nval archizePath = DATA_PATH + \"Cloud.tar.gz\"\nval archiveFile = new File(archizePath)\nval extractedPath = DATA_PATH + \"Cloud\" \nval extractedFile = new File(extractedPath)\n\nFileUtils.copyURLToFile(new URL(DATA_URL), archiveFile) \n","user":"admin","dateUpdated":"2018-05-17T02:01:33+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":false},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"directory: java.io.File = /tmp/dl4j_cloud\nres1: Boolean = false\narchizePath: String = /tmp/dl4j_cloud/Cloud.tar.gz\narchiveFile: java.io.File = /tmp/dl4j_cloud/Cloud.tar.gz\nextractedPath: String = /tmp/dl4j_cloud/Cloud\nextractedFile: java.io.File = /tmp/dl4j_cloud/Cloud\n"}]},"apps":[],"jobName":"paragraph_1526097855835_-137757851","id":"20180512-040415_327009012","dateCreated":"2018-05-12T04:04:15+0000","dateStarted":"2018-05-17T02:01:46+0000","dateFinished":"2018-05-17T02:01:49+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1510"},{"text":"%md \n\nNext, we must extract the data from the tar.gz file, recreate directories within the tar.gz file into our temporary directory, and copy the files into our temporary directory. ","user":"admin","dateUpdated":"2018-05-17T02:31:30+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>Next, we must extract the data from the tar.gz file, recreate directories within the tar.gz file into our temporary directory, and copy the files into our temporary directory.</p>\n"}]},"apps":[],"jobName":"paragraph_1526524287402_-85948263","id":"20180517-023127_1349112347","dateCreated":"2018-05-17T02:31:27+0000","dateStarted":"2018-05-17T02:31:30+0000","dateFinished":"2018-05-17T02:31:30+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1511"},{"text":"var fileCount = 0\nvar dirCount = 0\nval BUFFER_SIZE = 4096\n\nval tais = new TarArchiveInputStream(new GzipCompressorInputStream( new BufferedInputStream( new FileInputStream(archizePath))))\n\nvar entry = tais.getNextEntry().asInstanceOf[TarArchiveEntry]\n\nwhile(entry != null){\n if (entry.isDirectory()) {\n new File(DATA_PATH + entry.getName()).mkdirs()\n dirCount = dirCount + 1\n fileCount = 0\n }\n else {\n \n val data = new Array[scala.Byte](4 * BUFFER_SIZE)\n\n val fos = new FileOutputStream(DATA_PATH + entry.getName());\n val dest = new BufferedOutputStream(fos, BUFFER_SIZE);\n var count = tais.read(data, 0, BUFFER_SIZE)\n \n while (count != -1) {\n dest.write(data, 0, count)\n count = tais.read(data, 0, BUFFER_SIZE)\n }\n \n dest.close()\n fileCount = fileCount + 1\n }\n if(fileCount % 1000 == 0){\n print(\".\")\n }\n \n entry = tais.getNextEntry().asInstanceOf[TarArchiveEntry]\n}","user":"admin","dateUpdated":"2018-05-17T02:01:35+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":false},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"fileCount: Int = 0\ndirCount: Int = 0\nBUFFER_SIZE: Int = 4096\ntais: org.apache.commons.compress.archivers.tar.TarArchiveInputStream = org.apache.commons.compress.archivers.tar.TarArchiveInputStream@43b19776\nentry: org.apache.commons.compress.archivers.tar.TarArchiveEntry = org.apache.commons.compress.archivers.tar.TarArchiveEntry@de0125e3\n............."}]},"apps":[],"jobName":"paragraph_1526097865722_2027624282","id":"20180512-040425_978515768","dateCreated":"2018-05-12T04:04:25+0000","dateStarted":"2018-05-17T02:01:46+0000","dateFinished":"2018-05-17T02:01:55+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1512"},{"text":"%md\n### DataSetIterators","user":"admin","dateUpdated":"2018-05-17T02:31:47+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>DataSetIterators</h3>\n"}]},"apps":[],"jobName":"paragraph_1526524298426_-649959502","id":"20180517-023138_1644366332","dateCreated":"2018-05-17T02:31:38+0000","dateStarted":"2018-05-17T02:31:47+0000","dateFinished":"2018-05-17T02:31:47+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1513"},{"text":"%md\nOur next goal is to convert the raw data (csv files) into a DataSetIterator, which can then be fed into a neural network for training. We will first obtain the paths containing the raw data, which is in csv file format. ","user":"admin","dateUpdated":"2018-05-17T02:32:54+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>Our next goal is to convert the raw data (csv files) into a DataSetIterator, which can then be fed into a neural network for training. We will first obtain the paths containing the raw data, which is in csv file format.</p>\n"}]},"apps":[],"jobName":"paragraph_1526524309803_-1789200995","id":"20180517-023149_786039763","dateCreated":"2018-05-17T02:31:49+0000","dateStarted":"2018-05-17T02:32:54+0000","dateFinished":"2018-05-17T02:32:54+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1514"},{"text":"val path = FilenameUtils.concat(DATA_PATH, \"Cloud/\") // set parent directory\n\nval trainBaseDir1 = FilenameUtils.concat(path, \"train/n1/train.csv\") \nval trainBaseDir2 = FilenameUtils.concat(path, \"train/n2/train.csv\")\nval trainBaseDir3 = FilenameUtils.concat(path, \"train/n3/train.csv\")\nval trainBaseDir4 = FilenameUtils.concat(path, \"train/n4/train.csv\")\nval trainBaseDir5 = FilenameUtils.concat(path, \"train/n5/train.csv\") \n\nval testBaseDir1 = FilenameUtils.concat(path, \"test/n1/test.csv\")\nval testBaseDir2 = FilenameUtils.concat(path, \"test/n2/test.csv\")\nval testBaseDir3 = FilenameUtils.concat(path, \"test/n3/test.csv\")\nval testBaseDir4 = FilenameUtils.concat(path, \"test/n4/test.csv\") \nval testBaseDir5 = FilenameUtils.concat(path, \"test/n5/test.csv\")\n\n","user":"admin","dateUpdated":"2018-05-17T02:01:37+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":false},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"path: String = /tmp/dl4j_cloud/Cloud/\ntrainBaseDir1: String = /tmp/dl4j_cloud/Cloud/train/n1/train.csv\ntrainBaseDir2: String = /tmp/dl4j_cloud/Cloud/train/n2/train.csv\ntrainBaseDir3: String = /tmp/dl4j_cloud/Cloud/train/n3/train.csv\ntrainBaseDir4: String = /tmp/dl4j_cloud/Cloud/train/n4/train.csv\ntrainBaseDir5: String = /tmp/dl4j_cloud/Cloud/train/n5/train.csv\ntestBaseDir1: String = /tmp/dl4j_cloud/Cloud/test/n1/test.csv\ntestBaseDir2: String = /tmp/dl4j_cloud/Cloud/test/n2/test.csv\ntestBaseDir3: String = /tmp/dl4j_cloud/Cloud/test/n3/test.csv\ntestBaseDir4: String = /tmp/dl4j_cloud/Cloud/test/n4/test.csv\ntestBaseDir5: String = /tmp/dl4j_cloud/Cloud/test/n5/test.csv\n"}]},"apps":[],"jobName":"paragraph_1526097960018_1853656483","id":"20180512-040600_827236004","dateCreated":"2018-05-12T04:06:00+0000","dateStarted":"2018-05-17T02:01:50+0000","dateFinished":"2018-05-17T02:01:56+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1515"},{"text":"%md\nWe then will create two DataSetIterators to feed the data into a neural network. But first, we will initialize CSVRecordReaders to parse the raw data and convert it to record-like format. We create separate CSVRecordReaders for the original location and each nearest neighbor. Since the data is contained in separate RecordReaders, we will use a RecordReaderMultiDataSetIterator, which allows for multiple inputs or outputs. We then add the RecordReaders to the DataSetIterator using the addReader method of the DataSetIterator.Builder() class. We specify the inputs using the addInput method and the label using the addOutputOneHot method. ","user":"admin","dateUpdated":"2018-05-17T02:37:33+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>We then will create two DataSetIterators to feed the data into a neural network. But first, we will initialize CSVRecordReaders to parse the raw data and convert it to record-like format. We create separate CSVRecordReaders for the original location and each nearest neighbor. Since the data is contained in separate RecordReaders, we will use a RecordReaderMultiDataSetIterator, which allows for multiple inputs or outputs. We then add the RecordReaders to the DataSetIterator using the addReader method of the DataSetIterator.Builder() class. We specify the inputs using the addInput method and the label using the addOutputOneHot method.</p>\n"}]},"apps":[],"jobName":"paragraph_1526524376752_1319493497","id":"20180517-023256_1491420426","dateCreated":"2018-05-17T02:32:56+0000","dateStarted":"2018-05-17T02:37:33+0000","dateFinished":"2018-05-17T02:37:33+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1516"},{"text":"val rrTrain1 = new CSVRecordReader(1);\nrrTrain1.initialize(new FileSplit(new File(trainBaseDir1)));\nval rrTrain2 = new CSVRecordReader(1);\nrrTrain2.initialize(new FileSplit(new File(trainBaseDir2)))\n\nval rrTrain3 = new CSVRecordReader(1);\nrrTrain3.initialize(new FileSplit(new File(trainBaseDir3)))\n\nval rrTrain4 = new CSVRecordReader(1);\nrrTrain4.initialize(new FileSplit(new File(trainBaseDir4)))\n\nval rrTrain5 = new CSVRecordReader(1);\nrrTrain5.initialize(new FileSplit(new File(trainBaseDir5)))\n\n\nval trainIter = new RecordReaderMultiDataSetIterator.Builder(20)\n .addReader(\"rr1\",rrTrain1)\n .addReader(\"rr2\",rrTrain2)\n .addReader(\"rr3\",rrTrain3)\n .addReader(\"rr4\",rrTrain4)\n .addReader(\"rr5\",rrTrain5)\n .addInput(\"rr1\", 1, 3)\n .addInput(\"rr2\", 0, 2)\n .addInput(\"rr3\", 0, 2)\n .addInput(\"rr4\", 0, 2)\n .addInput(\"rr5\", 0, 2)\n .addOutputOneHot(\"rr1\", 0, 2)\n .build();","user":"admin","dateUpdated":"2018-05-17T02:01:39+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":false},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"rrTrain1: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@8cbbdc5\nrrTrain2: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@224f554f\nrrTrain3: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@4e4ecbab\nrrTrain4: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@4fc9785d\nrrTrain5: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@1e5b313d\ntrainIter: org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator = org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator@6fdb6a30\n"}]},"apps":[],"jobName":"paragraph_1526098925541_-1024490219","id":"20180512-042205_1621500631","dateCreated":"2018-05-12T04:22:05+0000","dateStarted":"2018-05-17T02:01:56+0000","dateFinished":"2018-05-17T02:01:57+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1517"},{"text":"%md\nThe same process is applied to the testing data.","user":"admin","dateUpdated":"2018-05-17T02:37:55+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>The same process is applied to the testing data.</p>\n"}]},"apps":[],"jobName":"paragraph_1526524661598_1918561214","id":"20180517-023741_1733943542","dateCreated":"2018-05-17T02:37:41+0000","dateStarted":"2018-05-17T02:37:55+0000","dateFinished":"2018-05-17T02:37:55+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1518"},{"text":"val rrTest1 = new CSVRecordReader(1);\nrrTest1.initialize(new FileSplit(new File(testBaseDir1)));\n\nval rrTest2 = new CSVRecordReader(1);\nrrTest2.initialize(new FileSplit(new File(testBaseDir2)));\n\nval rrTest3 = new CSVRecordReader(1);\nrrTest3.initialize(new FileSplit(new File(testBaseDir3)));\n\nval rrTest4 = new CSVRecordReader(1);\nrrTest4.initialize(new FileSplit(new File(testBaseDir4)));\n\nval rrTest5 = new CSVRecordReader(1);\nrrTest5.initialize(new FileSplit(new File(testBaseDir5)));\n\nval testIter = new RecordReaderMultiDataSetIterator.Builder(20)\n .addReader(\"rr1\",rrTest1)\n .addReader(\"rr2\",rrTest2)\n .addReader(\"rr3\",rrTest3)\n .addReader(\"rr4\",rrTest4)\n .addReader(\"rr5\",rrTest5)\n .addInput(\"rr1\", 1, 3)\n .addInput(\"rr2\", 0, 2)\n .addInput(\"rr3\", 0, 2)\n .addInput(\"rr4\", 0, 2)\n .addInput(\"rr5\", 0, 2)\n .addOutputOneHot(\"rr1\", 0, 2)\n .build();","user":"admin","dateUpdated":"2018-05-17T02:01:42+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":false},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"rrTest1: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@3cf059ee\nrrTest2: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@6ea94a5c\nrrTest3: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@31fe4bc9\nrrTest4: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@20d4232a\nrrTest5: org.datavec.api.records.reader.impl.csv.CSVRecordReader = org.datavec.api.records.reader.impl.csv.CSVRecordReader@d9aa555\ntestIter: org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator = org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator@5ca9483d\nwarning: there were 1 deprecation warning(s); re-run with -deprecation for details\nconf: org.deeplearning4j.nn.conf.ComputationGraphConfiguration = \n{\n \"backprop\" : true,\n \"backpropType\" : \"Standard\",\n \"cacheMode\" : \"NONE\",\n \"defaultConfiguration\" : {\n \"cacheMode\" : \"NONE\",\n \"epochCount\" : 0,\n \"iterationCount\" : 0,\n \"l1ByParam\" : { },\n \"l2ByParam\" : { },\n \"layer\" : null,\n \"maxNumLineSearchIterations\" : 5,\n \"miniBatch\" : true,\n \"minimize\" : true,\n \"optimizationAlgo\" : \"STOCHASTIC_GRADIENT_DESCENT\",\n \"pretrain\" : false,\n \"seed\" : 1526522518162,\n \"stepFunction\" : null,\n \"variables\" : [ ]\n },\n \"epochCount\" : 0,\n \"inferenceWorkspaceMode\" : \"SEPARATE\",\n \"iterationCount\" : 0,\n \"networkInputs\" : [ \"input1\", \"input2\", \"input3\", \"input4\", \"input5\" ],\n \"networkOutputs\" : [ \"out\" ],\n \"pretrain\" : false,\n \"tbpttBackLength\" : 20..."}]},"apps":[],"jobName":"paragraph_1526098165460_1326240717","id":"20180512-040925_760248818","dateCreated":"2018-05-12T04:09:25+0000","dateStarted":"2018-05-17T02:01:56+0000","dateFinished":"2018-05-17T02:01:59+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1519"},{"text":"%md\n### Neural Net Configuration","user":"admin","dateUpdated":"2018-05-17T02:39:03+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Neural Net Configuration</h3>\n"}]},"apps":[],"jobName":"paragraph_1526524723171_1831500693","id":"20180517-023843_1359949045","dateCreated":"2018-05-17T02:38:43+0000","dateStarted":"2018-05-17T02:39:03+0000","dateFinished":"2018-05-17T02:39:03+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1520"},{"text":"%md\nNow that the DataSetIterators are initialized, we can now specify the configuration of the neural network. We will ultimately use a ComputationGraph since we will have multiple inputs to the network. MultiLayerNetworks cannot be used when there are multiple inputs and/or outputs. \n\nTo specify the network architecture and the hyperparameters, we use the NeuralNetConfiguraiton.Builder class. We can add each input using the addLayer method of the class. Because the inputs are separate, the addVertex method is used to add a MergeVertex to the network. This vertex will merge the outputs from the previous input layers into a combined representation. Finally, a fully connected layer is applied to the merged output, which passes the activations to the final output layer.\n\nThe other hyperparameters, such as the optimization algorithm, updater, number of hidden nodes, and etc are also specified in this block of code as well. ","user":"admin","dateUpdated":"2018-05-17T02:45:02+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>Now that the DataSetIterators are initialized, we can now specify the configuration of the neural network. We will ultimately use a ComputationGraph since we will have multiple inputs to the network. MultiLayerNetworks cannot be used when there are multiple inputs and/or outputs.</p>\n<p>To specify the network architecture and the hyperparameters, we use the NeuralNetConfiguraiton.Builder class. We can add each input using the addLayer method of the class. Because the inputs are separate, the addVertex method is used to add a MergeVertex to the network. This vertex will merge the outputs from the previous input layers into a combined representation. Finally, a fully connected layer is applied to the merged output, which passes the activations to the final output layer.</p>\n<p>The other hyperparameters, such as the optimization algorithm, updater, number of hidden nodes, and etc are also specified in this block of code as well.</p>\n"}]},"apps":[],"jobName":"paragraph_1526524744720_-839964162","id":"20180517-023904_615105650","dateCreated":"2018-05-17T02:39:04+0000","dateStarted":"2018-05-17T02:45:02+0000","dateFinished":"2018-05-17T02:45:02+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1521"},{"text":"val conf = new NeuralNetConfiguration.Builder()\n .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)\n .updater(Updater.ADAM)\n .graphBuilder()\n .addInputs(\"input1\", \"input2\", \"input3\", \"input4\", \"input5\")\n .addLayer(\"L1\", new DenseLayer.Builder()\n .weightInit(WeightInit.XAVIER)\n .activation(Activation.RELU)\n .nIn(3).nOut(50)\n .build(), \"input1\")\n .addLayer(\"L2\", new DenseLayer.Builder()\n .weightInit(WeightInit.XAVIER)\n .activation(Activation.RELU)\n .nIn(3).nOut(50)\n .build(), \"input2\")\n .addLayer(\"L3\", new DenseLayer.Builder()\n .weightInit(WeightInit.XAVIER)\n .activation(Activation.RELU)\n .nIn(3).nOut(50)\n .build(), \"input3\")\n .addLayer(\"L4\", new DenseLayer.Builder()\n .weightInit(WeightInit.XAVIER)\n .activation(Activation.RELU)\n .nIn(3).nOut(50)\n .build(), \"input4\")\n .addLayer(\"L5\", new DenseLayer.Builder()\n .weightInit(WeightInit.XAVIER)\n .activation(Activation.RELU)\n .nIn(3).nOut(50)\n .build(), \"input5\")\n .addVertex(\"merge\", new MergeVertex(), \"L1\", \"L2\", \"L3\", \"L4\", \"L5\")\n .addLayer(\"L6\", new DenseLayer.Builder()\n .weightInit(WeightInit.XAVIER)\n .activation(Activation.RELU)\n .nIn(250).nOut(125).build(), \"merge\")\n .addLayer(\"out\", new OutputLayer.Builder()\n .lossFunction(LossFunctions.LossFunction.MCXENT)\n .weightInit(WeightInit.XAVIER)\n .activation(Activation.SOFTMAX)\n .nIn(125)\n .nOut(2).build(), \"L6\")\n .setOutputs(\"out\")\n .pretrain(false).backprop(true)\n .build();","user":"admin","dateUpdated":"2018-05-17T06:02:21+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala","editOnDblClick":true},"editorMode":"ace/mode/scala","editorHide":false,"tableHide":true},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1526536897107_1581850766","id":"20180517-060137_422257459","dateCreated":"2018-05-17T06:01:37+0000","status":"FINISHED","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:3788","dateFinished":"2018-05-17T06:02:14+0000","dateStarted":"2018-05-17T06:02:14+0000","results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"warning: there were 1 deprecation warning(s); re-run with -deprecation for details\nconf: org.deeplearning4j.nn.conf.ComputationGraphConfiguration = \n{\n \"backprop\" : true,\n \"backpropType\" : \"Standard\",\n \"cacheMode\" : \"NONE\",\n \"defaultConfiguration\" : {\n \"cacheMode\" : \"NONE\",\n \"epochCount\" : 0,\n \"iterationCount\" : 0,\n \"l1ByParam\" : { },\n \"l2ByParam\" : { },\n \"layer\" : null,\n \"maxNumLineSearchIterations\" : 5,\n \"miniBatch\" : true,\n \"minimize\" : true,\n \"optimizationAlgo\" : \"STOCHASTIC_GRADIENT_DESCENT\",\n \"pretrain\" : false,\n \"seed\" : 1526536934374,\n \"stepFunction\" : null,\n \"variables\" : [ ]\n },\n \"epochCount\" : 0,\n \"inferenceWorkspaceMode\" : \"SEPARATE\",\n \"iterationCount\" : 0,\n \"networkInputs\" : [ \"input1\", \"input2\", \"input3\", \"input4\", \"input5\" ],\n \"networkOutputs\" : [ \"out\" ],\n \"pretrain\" : false,\n \"tbpttBackLength\" : 20..."}]}},{"text":"%md\n### Model Training","user":"admin","dateUpdated":"2018-05-17T02:45:42+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<h3>Model Training</h3>\n"}]},"apps":[],"jobName":"paragraph_1526525134533_-1342311597","id":"20180517-024534_1989847296","dateCreated":"2018-05-17T02:45:34+0000","dateStarted":"2018-05-17T02:45:42+0000","dateFinished":"2018-05-17T02:45:42+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1522"},{"text":"%md\nWe are now ready to train our model. We initialize our ComptutationGraph and loop over the number of epochs and call the fit method of the ComputationGraph to train our specified model. ","user":"admin","dateUpdated":"2018-05-17T02:46:11+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>We are now ready to train our model. We initialize our ComptutationGraph and loop over the number of epochs and call the fit method of the ComputationGraph to train our specified model.</p>\n"}]},"apps":[],"jobName":"paragraph_1526525107929_1858552421","id":"20180517-024507_618904260","dateCreated":"2018-05-17T02:45:07+0000","dateStarted":"2018-05-17T02:46:11+0000","dateFinished":"2018-05-17T02:46:11+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1523"},{"text":"val model = new ComputationGraph(conf);\nmodel.init()\nfor ( epoch <- 1 to 5) {\n println(\"Epoch number: \" + epoch );\n model.fit( trainIter );\n}","user":"admin","dateUpdated":"2018-05-17T02:59:49+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala"},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"model: org.deeplearning4j.nn.graph.ComputationGraph = org.deeplearning4j.nn.graph.ComputationGraph@622844cf\nEpoch number: 1\nEpoch number: 2\nEpoch number: 3\nEpoch number: 4\nEpoch number: 5\n"}]},"apps":[],"jobName":"paragraph_1526098688917_1410263762","id":"20180512-041808_1600004690","dateCreated":"2018-05-12T04:18:08+0000","dateStarted":"2018-05-17T02:59:49+0000","dateFinished":"2018-05-17T03:15:16+0000","status":"FINISHED","progressUpdateIntervalMs":500,"$$hashKey":"object:1524"},{"user":"admin","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1526536702107_440028757","id":"20180517-055822_1594110179","dateCreated":"2018-05-17T05:58:22+0000","status":"FINISHED","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:3541","text":"%md \nTo evaluate our model, we simply use the evaluateROC method of the ComptuationGraph class.","dateUpdated":"2018-05-17T05:58:55+0000","dateFinished":"2018-05-17T05:58:55+0000","dateStarted":"2018-05-17T05:58:55+0000","results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>To evaluate our model, we simply use the evaluateROC method of the ComptuationGraph class.</p>\n"}]}},{"user":"admin","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala"},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1526535166426_248245969","id":"20180517-053246_1106102363","dateCreated":"2018-05-17T05:32:46+0000","status":"FINISHED","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:3214","text":"val roc = model.evaluateROC(testIter, 100)","dateUpdated":"2018-05-17T05:35:13+0000","dateFinished":"2018-05-17T05:37:38+0000","dateStarted":"2018-05-17T05:35:13+0000","results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"roc: org.deeplearning4j.eval.ROC = ROC(thresholdSteps=100, countActualPositive=60510, countActualNegative=76634, counts={0.0=ROC.CountsForThreshold(threshold=0.0, countTruePositive=60510, countFalsePositive=76634), 0.01=ROC.CountsForThreshold(threshold=0.01, countTruePositive=60249, countFalsePositive=13655), 0.02=ROC.CountsForThreshold(threshold=0.02, countTruePositive=60057, countFalsePositive=12523), 0.03=ROC.CountsForThreshold(threshold=0.03, countTruePositive=59897, countFalsePositive=11806), 0.04=ROC.CountsForThreshold(threshold=0.04, countTruePositive=59757, countFalsePositive=11288), 0.05=ROC.CountsForThreshold(threshold=0.05, countTruePositive=59616, countFalsePositive=10868), 0.06=ROC.CountsForThreshold(threshold=0.06, countTruePositive=59476, countFalsePositive=10522), 0.07=R..."}]}},{"user":"admin","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1526536737634_208272001","id":"20180517-055857_961344903","dateCreated":"2018-05-17T05:58:57+0000","status":"FINISHED","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:3618","text":"%md\nFinally we can print out the area under the curve (AUC) metric!","dateUpdated":"2018-05-17T05:59:17+0000","dateFinished":"2018-05-17T05:59:17+0000","dateStarted":"2018-05-17T05:59:17+0000","results":{"code":"SUCCESS","msg":[{"type":"HTML","data":"<p>Finally we can print out the area under the curve (AUC) metric!</p>\n"}]}},{"user":"admin","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala"},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1526535258363_1790673661","id":"20180517-053418_246157808","dateCreated":"2018-05-17T05:34:18+0000","status":"FINISHED","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:3368","text":"println(\"FINAL TEST AUC: \" + roc.calculateAUC());","dateUpdated":"2018-05-17T05:35:49+0000","dateFinished":"2018-05-17T05:37:38+0000","dateStarted":"2018-05-17T05:35:49+0000","results":{"code":"SUCCESS","msg":[{"type":"TEXT","data":"FINAL TEST AUC: 0.9790427761190411\n"}]}},{"user":"admin","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"scala"},"editorMode":"ace/mode/scala"},"settings":{"params":{},"forms":{}},"apps":[],"jobName":"paragraph_1526535349464_132295343","id":"20180517-053549_375962833","dateCreated":"2018-05-17T05:35:49+0000","status":"READY","progressUpdateIntervalMs":500,"focus":true,"$$hashKey":"object:3467"}],"name":"Cloud Example","id":"2DFDX22U2","angularObjects":{"2DGE6S7QV:existing_process":[],"2DD5SRBWK:existing_process":[],"2DEG6MHGM:existing_process":[],"2DGEBUAP6:existing_process":[],"2DCUYBCGV:existing_process":[],"2DF7AZBAH:existing_process":[],"2DEA7TGVP:existing_process":[],"2DD8GS77B:existing_process":[]},"config":{"looknfeel":"default","personalizedMode":"false"},"info":{}}