From 32937c9cb6a19d3244a741fb62b2ed95c328e979 Mon Sep 17 00:00:00 2001 From: Lopes Date: Wed, 15 Apr 2026 15:32:40 +0200 Subject: [PATCH 1/2] Corrected minor typos in image classification notebook --- 31_image_classification.ipynb | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/31_image_classification.ipynb b/31_image_classification.ipynb index 9ff5698c..051a75fe 100644 --- a/31_image_classification.ipynb +++ b/31_image_classification.ipynb @@ -1932,7 +1932,8 @@ "metadata": {}, "source": [ "After initialising the trainer instance, check whether a trained model already exists.\n", - "If so, load the weights using ```model_weights = torch.load(model_path, weights_only=True)```.\n", + "If so, load the weights using ```model_weights = torch.load(model_path, weights_only=True, map_location=torch.device('cpu'))```. \n", + "The ```map_location=torch.device('cpu')``` argument is only needed if you are running the code on a machine without a CUDA-capable GPU.\n", "Then, load the weights into the model using (```model.load_state_dict(model_weights)```).\n", "Finally, set the model to evaluation mode (```model.eval()```).\n", "This step is essential because certain layers, such as batch normalization and dropout, behave differently during training and evaluation.\n", @@ -1988,7 +1989,7 @@ "import pandas as pd\n", "from matplotlib import pyplot as plt\n", "\n", - "# Load the training log file (In case you want to use the already trained model, replace this by model_path = dataset_folder / \"training_log.txt\")\n", + "# Load the training log file (In case you want to use the already trained model, replace this by training_log = dataset_folder / \"training_log.txt\")\n", "training_log = None\n", "\n", "plt.figure()\n", From b0f081a1c1b937dd4aac6bf7ca99ead0d120dede Mon Sep 17 00:00:00 2001 From: Lopes Date: Wed, 15 Apr 2026 15:36:35 +0200 Subject: [PATCH 2/2] Small fixes --- 31_image_classification.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) 
diff --git a/31_image_classification.ipynb b/31_image_classification.ipynb index 051a75fe..17a97e36 100644 --- a/31_image_classification.ipynb +++ b/31_image_classification.ipynb @@ -1975,7 +1975,7 @@ "Overfitting occurs when the model performs well on the training data but poorly on the validation data, usually indicated by a widening gap between the two curves.\n", "Underfitting, on the other hand, is suggested when both the training and validation curves show poor performance and fail to improve. By monitoring these curves, we can adjust hyperparameters or modify the model architecture to address such issues. \n", "\n", - "First, load the log file using ```pandas``` (```training_log = pd.read_csv(\"training_log.txt\")```).\n", + "First, load the log file using ```pandas``` (```training_log = pd.read_csv(\"training_log.txt\")``` or ```training_log = pd.read_csv(dataset_folder / \"training_log.txt\")``` in case you did not train the model yourself).\n", "Then, use the ```matplotlib``` library to plot the learning curves." ] }, @@ -1989,7 +1989,7 @@ "import pandas as pd\n", "from matplotlib import pyplot as plt\n", "\n", - "# Load the training log file (In case you want to use the already trained model, replace this by training_log = dataset_folder / \"training_log.txt\")\n", + "# Load the training log file\n", "training_log = None\n", "\n", "plt.figure()\n",
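The learning-curve cell that both patches touch can be sketched as follows. This is a minimal sketch, not the notebook's actual code: the column names `epoch`, `train_loss`, and `val_loss` are assumptions (check the header of your own `training_log.txt`), and a synthetic in-memory log stands in for the real file.

```python
import io

import pandas as pd
from matplotlib import pyplot as plt

# Synthetic stand-in for the training log; in the notebook, replace this with
# pd.read_csv("training_log.txt"), or
# pd.read_csv(dataset_folder / "training_log.txt") when using the pretrained model.
log_csv = io.StringIO(
    "epoch,train_loss,val_loss\n"
    "1,0.90,0.95\n"
    "2,0.60,0.70\n"
    "3,0.45,0.62\n"
)
training_log = pd.read_csv(log_csv)

# Plot training and validation loss against epoch to spot over/underfitting:
# a widening gap suggests overfitting, two flat high curves suggest underfitting.
plt.figure()
plt.plot(training_log["epoch"], training_log["train_loss"], label="training loss")
plt.plot(training_log["epoch"], training_log["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("learning_curves.png")  # or plt.show() inside the notebook
```

Saving to a file rather than calling `plt.show()` keeps the sketch runnable in headless environments; inside the notebook the inline backend renders the figure either way.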