Runtime error: expected 10 channels but got 3 channels instead #892
Here's my notebook if you need to check it:

RuntimeError                              Traceback (most recent call last)
11 frames
RuntimeError: Given groups=1, weight of size [10, 10, 3, 3], expected input[32, 3, 64, 64] to have 10 channels, but got 3 channels instead
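For context, the conv weight shape in that message is [out_channels, in_channels, kernel_h, kernel_w], so a weight of size [10, 10, 3, 3] expects 10 input channels while the RGB images only have 3. The check PyTorch is effectively doing can be sketched in plain Python (check_conv_channels is a hypothetical helper for illustration, not the actual torch internals):

```python
def check_conv_channels(weight_shape, input_shape):
    """Mimic the conv channel check: weight is (out_channels, in_channels,
    kh, kw), input is (N, C, H, W)."""
    out_channels, in_channels, kh, kw = weight_shape
    n, c, h, w = input_shape
    if c != in_channels:
        raise RuntimeError(
            f"expected input to have {in_channels} channels, "
            f"but got {c} channels instead"
        )
    return True

# The failing case from the traceback above:
try:
    check_conv_channels((10, 10, 3, 3), (32, 3, 64, 64))
except RuntimeError as e:
    print(e)  # expected input to have 10 channels, but got 3 channels instead

# The fix: the first conv layer's in_channels must match the image channels (3)
assert check_conv_channels((10, 3, 3, 3), (32, 3, 64, 64))
```

In other words, the model's first `nn.Conv2d` was built with `in_channels=10` instead of `in_channels=3`.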
Replies: 1 comment
from pathlib import Path
from torchvision import transforms
On section 6, paste this:

train_transform = transforms.Compose([
    transforms.Resize(size=(224, 224)),
    transforms.TrivialAugmentWide(num_magnitude_bins=31),
    transforms.ToTensor()
])
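Note that the augmentation (TrivialAugmentWide) only goes in the training pipeline, so evaluation stays deterministic. transforms.Compose itself just chains the transforms in order, roughly like this pure-Python sketch (illustrative only, not the torchvision implementation):

```python
class SimpleCompose:
    """Minimal stand-in for transforms.Compose: applies each
    transform to the input in order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

# Chaining two simple "transforms" on a number instead of an image:
pipeline = SimpleCompose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # (3 + 1) * 2 = 8
```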
test_transform = transforms.Compose([
    transforms.Resize(size=(224, 224)),
    transforms.ToTensor()
])

On section 7.1, I made these take the new transforms:

from torchvision import datasets

train_data_simple = datasets.ImageFolder(root=train_dir,
transform=train_transform)
test_data_simple = datasets.ImageFolder(root=test_dir,
                                        transform=test_transform)

On section 7.2, rename the class:

class TinyVGG(nn.Module):
    # ... (sequential CNN block logic unchanged) ...

    self.classifier = nn.Sequential(
        nn.Flatten(),
        nn.Linear(in_features=3136 * hidden_units,  # matches the flattened feature map: 224x224 -> 56x56 after two 2x2 max pools, 56*56 = 3136
                  out_features=output_shape)
    )

    # ...

    def forward(self, x):
        # ... (forward pass logic unchanged) ...
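That 3136 * hidden_units comes from the spatial size of the feature map: assuming the conv layers keep height and width unchanged (padding=1) and each of the two blocks ends in a 2x2 max pool, a 224x224 input shrinks to 56x56, and 56 * 56 = 3136. A quick plain-Python sanity check (flattened_features is a hypothetical helper, just for the arithmetic):

```python
def flattened_features(image_size: int, num_pools: int, hidden_units: int) -> int:
    """Flattened length after `num_pools` 2x2 max pools on a square image,
    assuming the conv layers preserve spatial size (padding=1)."""
    size = image_size
    for _ in range(num_pools):
        size //= 2  # each 2x2 max pool halves height and width
    return hidden_units * size * size

print(flattened_features(224, num_pools=2, hidden_units=10))  # 31360 == 3136 * 10
```

If your convs use padding=0 instead, the spatial size (and therefore the in_features) will be different, so recompute it for your exact layer stack.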
On section 7.5, under the train step function there is a typo; the loop should be:

for batch, (X, y) in enumerate(dataloader):
    # Send data to the target device
    X, y = X.to(device), y.to(device)

    # 1. Forward pass
    y_pred = model(X)  # output model logits

    # 2. Calculate the loss
    loss = loss_fn(y_pred, y)
    train_loss += loss.item()

On section 7.6, there were some typos in the train function (a misnamed dictionary key and the print statements); corrected version below:

from tqdm.auto import tqdm
# 1. Create a train function that takes in various model parameters + optimizer + dataloaders + loss functions
def train(model: torch.nn.Module,
          train_dataloader: torch.utils.data.DataLoader,
          test_dataloader: torch.utils.data.DataLoader,
          optimizer: torch.optim.Optimizer,
          loss_fn: torch.nn.Module = nn.CrossEntropyLoss(),
          epochs: int = 5,
          device=device):

    # 2. Create empty results dictionary
    results = {"train_loss": [],
               "train_acc": [],
               "test_loss": [],
               "test_acc": []}

    # 3. Loop through training and testing steps for a number of epochs
    for epoch in tqdm(range(epochs)):
        train_loss, train_acc = train_step(model=model,
                                           dataloader=train_dataloader,
                                           optimizer=optimizer,
                                           loss_fn=loss_fn,
                                           device=device)
        test_loss, test_acc = test_step(model=model,
                                        dataloader=test_dataloader,
                                        loss_fn=loss_fn,
                                        device=device)

        # 4. Print out what's happening
        print(f"Epoch: {epoch} | Train loss: {train_loss:.4f} | Train acc: {train_acc:.4f} | Test loss: {test_loss:.4f} | Test acc: {test_acc:.4f}")

        # 5. Update results dictionary
        results["train_loss"].append(train_loss)
        results["train_acc"].append(train_acc)
        results["test_loss"].append(test_loss)
        results["test_acc"].append(test_acc)

    # 6. Return the filled results at the end of the epochs
    return results

On section 7.7, while initiating the train and test loop there is another missing parenthesis; here is the corrected cell:

import timeit

# Set random seeds
torch.manual_seed(42)
torch.cuda.manual_seed(42)
# Set number of epochs
NUM_EPOCHS = 5
# Recreate an instance of TinyVGG
model_0 = TinyVGG(input_shape=3, # number of color channels of our target images
hidden_units=10,
output_shape=len(train_data.classes)).to(device)
# Setup loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model_0.parameters(),
lr=0.001)
# Start the timer
start_time = timeit.default_timer()
# Train model_0
model_0_results = train(model=model_0,
train_dataloader=train_dataloader_simple,
test_dataloader=test_dataloader_simple,
optimizer=optimizer,
loss_fn=loss_fn,
epochs=NUM_EPOCHS)
# End the timer and print out how long it took
end_time = timeit.default_timer()
print(f"Total training time: {end_time-start_time:.3f} seconds")

Results: now it runs fine.
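Since train() returns per-epoch lists, picking out the best epoch afterwards is straightforward. A small helper (plain Python; best_epoch is a hypothetical name, not part of the notebook):

```python
def best_epoch(results: dict) -> int:
    """Return the 0-indexed epoch with the highest test accuracy."""
    test_accs = results["test_acc"]
    return max(range(len(test_accs)), key=lambda i: test_accs[i])

# Toy results in the same shape train() produces:
toy_results = {"train_loss": [1.1, 0.9, 0.7],
               "train_acc":  [0.40, 0.55, 0.65],
               "test_loss":  [1.2, 1.0, 1.05],
               "test_acc":   [0.35, 0.60, 0.50]}
print(best_epoch(toy_results))  # 1
```

This is also a quick way to spot overfitting: train accuracy keeps rising while test accuracy peaks earlier.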
Also, there is a typo: random_image_paths should be random_image_path.