You can run a bigdl-dllib program, e.g., the [Image Inference](https://github.com/intel-analytics/BigDL/blob/branch-2.0/scala/dllib/src/main/scala/com/intel/analytics/bigdl/dllib/example/nnframes/imageInference), as a standard Spark program (running on either a local machine or a distributed cluster) as follows:
1. Download the pretrained Caffe model and prepare the images
```
# Change this to your jar file if your download is not spark_2.4.3-0.14.0
${BIGDL_HOME}/jars/bigdl-dllib-0.14.0-SNAPSHOT-jar-with-dependencies.jar \
  -f DATA_PATH \
  -b 4 \
  --numLayers 2 --vocab 100 --hidden 6 \
  --numSteps 3 --learningRate 0.005 -e 1 \
  --learningRateDecay 0.001 --keepProb 0.5
```
The parameters used in the above command are:
* -f: The path where you put your PTB data.
* -b: The mini-batch size. The mini-batch size is expected to be a multiple of the *total cores* used in the job; in this example, it is suggested to be set to *total cores × 4* (see the sketch after this list).
* --learningRate: The learning rate for Adagrad.
* --learningRateDecay: The learning rate decay for Adagrad.
* --hidden: The hidden size of the LSTM layers.
* --vocabSize: The vocabulary size (default: 10000).
* --numLayers: The number of LSTM cells (default: 2).
* --numSteps: The number of words per record in the language model.
* --keepProb: The probability that an activation is kept during dropout.
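
For example, the batch-size guideline above (*total cores × 4*) works out as in this minimal sketch; the executor and core counts are hypothetical, for illustration only:

```scala
// Hypothetical cluster shape, for illustration only
val executors = 4                               // Spark executors in the job
val coresPerExecutor = 8                        // cores per executor
val totalCores = executors * coresPerExecutor   // 32
val suggestedBatchSize = totalCores * 4         // 128, a multiple of total cores
```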
If you run your own program, remember to initialize the NNContext before calling any other bigdl-dllib APIs, as shown below.
```scala
// Initialize the NNContext before calling any other bigdl-dllib APIs
import com.intel.analytics.bigdl.dllib.NNContext

NNContext.initNNContext()
```
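
Note that `initNNContext` returns a `SparkContext`, so a minimal sketch of using it in your own program might look like the following; the app name and the toy RDD are placeholders, not part of the example:

```scala
import com.intel.analytics.bigdl.dllib.NNContext

// initNNContext applies the Spark configuration BigDL needs and
// returns a ready-to-use SparkContext
val sc = NNContext.initNNContext("My dllib Application")
val data = sc.parallelize(1 to 100)   // placeholder RDD for your own data
```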
---
### 2.3 Get started
---
This section shows a simple example of how to use dllib to build a deep learning application on Spark using the Keras-style APIs.
---
#### **LeNet Model on MNIST using Keras-Style API**
This tutorial explains what happens in the [lenet](https://github.com/intel-analytics/BigDL/tree/branch-2.0/scala/dllib/src/main/scala/com/intel/analytics/bigdl/dllib/example/keras) example.
A bigdl-dllib program starts with initialization, as follows.
````scala
val conf = Engine.createSparkConf()
  .setAppName("Train Lenet on MNIST")
  .set("spark.task.maxFailures", "1")
val sc = new SparkContext(conf)
Engine.init
````
After the initialization, we need to:
1. Load the train and validation data by _**creating the [```DataSet```](https://github.com/intel-analytics/BigDL/blob/branch-2.0/scala/dllib/src/main/scala/com/intel/analytics/bigdl/dllib/feature/dataset/DataSet.scala)**_ and applying transformers (e.g., ````SampleToGreyImg````, ````GreyImgNormalizer```` and ````GreyImgToBatch````), as sketched below:
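
   A hedged sketch of such a pipeline, assuming an MNIST-style `load` helper plus the mean/std constants and `batchSize` from the lenet example (these names are assumptions, not defined on this page):

   ````scala
   // Sketch only: `load`, trainData, trainLabel, trainMean, trainStd and
   // batchSize are assumed to come from the lenet example's setup code
   val trainSet = DataSet.array(load(trainData, trainLabel), sc) ->
     SampleToGreyImg(28, 28) ->
     GreyImgNormalizer(trainMean, trainStd) ->
     GreyImgToBatch(batchSize)
   ````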
3. After that, we configure the learning process: set the ````optimization method```` and the ````Criterion```` (which, given input and target, computes the gradient for a given loss function):
````scala
model.compile(optimizer = optimMethod,
  loss = ClassNLLCriterion[Float](logProbAsInput = false))
````
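
After compiling, training is typically launched with ````fit````; a hedged sketch follows, where the epoch count and the validation set are assumptions, not shown on this page:

````scala
// Sketch: trainSet and validationSet come from the DataSet pipeline above;
// maxEpoch is an assumed training parameter
model.fit(trainSet, nbEpoch = maxEpoch, validationData = validationSet)
````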