@@ -211,13 +211,13 @@ Let's print the final MSE on the cross-validation data.

``` {code-cell} ipython3
print("Testing loss on the validation set.")
- regression_model.evaluate(x_validate, y_validate)
+ regression_model.evaluate(x_validate, y_validate, verbose=2)
```

Here's our output predictions on the cross-validation data.

``` {code-cell} ipython3
- y_predict = regression_model.predict(x_validate)
+ y_predict = regression_model.predict(x_validate, verbose=2)
```

We use the following function to plot our predictions along with the data.
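The definition of `plot_results` sits in lines elided between these hunks, so it is not shown here. Judging from the call `plot_results(x_validate, y_validate, y_predict, ax)` further down, a minimal sketch of such a helper might look like the following (hypothetical; the actual definition may differ, and NumPy arrays plus a matplotlib `Axes` are assumed):

```python
# Hypothetical sketch only: the real plot_results is defined in lines
# elided from this diff. Its signature is inferred from the call
# plot_results(x_validate, y_validate, y_predict, ax) below.
import numpy as np

def plot_results(x, y, y_predict, ax):
    # Show the validation data as a scatter plot.
    ax.scatter(x, y, alpha=0.5, label="validation data")
    # Sort by x so the predicted curve is drawn left to right.
    order = np.argsort(x.flatten())
    ax.plot(x.flatten()[order], y_predict.flatten()[order],
            color="red", label="predictions")
    ax.set_xlabel("x")
    ax.set_ylabel("y")
    ax.legend()
```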
@@ -265,7 +265,7 @@ Here's the final MSE for the deep learning model.

``` {code-cell} ipython3
print("Testing loss on the validation set.")
- nn_model.evaluate(x_validate, y_validate)
+ nn_model.evaluate(x_validate, y_validate, verbose=2)
```

You will notice that this loss is much lower than the one we achieved with
@@ -274,7 +274,7 @@ linear regression, suggesting a better fit.
To confirm this, let's look at the fitted function.

``` {code-cell} ipython3
- y_predict = nn_model.predict(x_validate)
+ y_predict = nn_model.predict(x_validate, verbose=2)
```

``` {code-cell} ipython3
@@ -290,4 +290,3 @@ fig, ax = plt.subplots()
plot_results(x_validate, y_validate, y_predict, ax)
plt.show()
```
-
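A note on the change itself: in recent versions of `tf.keras`, both `evaluate` and `predict` accept `verbose=0` (silent), `verbose=1` (live progress bar), or `verbose=2` (a single summary line), so `verbose=2` keeps rendered notebook output compact. A minimal sketch of the behavior, assuming a compiled `tf.keras` model named `regression_model` and NumPy arrays `x_validate`, `y_validate` as in the diff above:

```python
# Sketch only: assumes `regression_model` is a compiled tf.keras model and
# x_validate, y_validate are NumPy arrays, as in the diff above.

# verbose=2 prints one summary line instead of a live progress bar,
# which renders more cleanly in static notebook output.
mse = regression_model.evaluate(x_validate, y_validate, verbose=2)

# predict accepts the same verbosity levels in recent tf.keras releases.
y_predict = regression_model.predict(x_validate, verbose=2)
```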