
How to get score? #294

Closed
changchun-zhou opened this issue Oct 26, 2021 · 8 comments

@changchun-zhou

Please, can anyone tell me how to get the score of a prediction, like the accuracy of a classification? Thank you.

@ellisdg
Owner

ellisdg commented Oct 28, 2021

You can get the dice score with a function like this:

```python
import numpy as np

def dice_coefficient(truth, prediction):
    return 2 * np.sum(truth * prediction) / (np.sum(truth) + np.sum(prediction))
```
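As a quick sanity check, the function gives 1.0 for identical binary masks and 0.0 for disjoint ones. A toy sketch (repeating the definition so the snippet runs standalone):

```python
import numpy as np

def dice_coefficient(truth, prediction):
    return 2 * np.sum(truth * prediction) / (np.sum(truth) + np.sum(prediction))

truth = np.array([1, 1, 0, 0])
pred_same = np.array([1, 1, 0, 0])      # identical mask
pred_disjoint = np.array([0, 0, 1, 1])  # no overlap at all

print(dice_coefficient(truth, pred_same))      # 1.0
print(dice_coefficient(truth, pred_disjoint))  # 0.0
```

Note that this simple form divides by zero when both masks are empty; a smoothing term in the numerator and denominator (as in the `_batch_loss` snippet below) avoids that.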

@changchun-zhou
Author

> You can get the dice score with a function like this:
>
> ```python
> def dice_coefficient(truth, prediction):
>     return 2 * np.sum(truth * prediction) / (np.sum(truth) + np.sum(prediction))
> ```

Thank you for your kind reply. Your framework is quite useful. I have a few questions:

  1. Following your reply, I inserted the score computation into the `_batch_loss` function as follows:

     ```python
     def _batch_loss(model, images, target, criterion, regularized=False, vae=False):
         output = model(images)
         batch_size = images.size(0)
         if regularized:
             try:
                 output, output_vae, mu, logvar = output
                 loss = criterion(output, output_vae, mu, logvar, images, target)
             except ValueError:
                 pred_y, pred_x = output
                 loss = criterion(pred_y, pred_x, images, target)
         elif vae:
             pred_x, mu, logvar = output
             loss = criterion(pred_x, mu, logvar, target)
         else:
             loss = criterion(output, target)

         # score
         iflat = output.view(-1).float()
         tflat = target.view(-1).float()
         intersection = (iflat * tflat).sum()
         smooth = 0
         score = (2. * intersection + smooth) / (iflat.sum() + tflat.sum() + smooth)
         return loss, batch_size, score
     ```

     I want to check whether this score is the `Mean dice` in Table 1 of your paper "Trialing U-Net Training Modifications for Segmenting Gliomas Using Open Source Deep Learning Framework"?

     (screenshot of Table 1 attached)

  2. Can you provide some experimental results of IoU?

  3. Can this framework be used for general segmentation datasets, such as ShapeNet? If so, how should the framework be modified?

I appreciate your help!

@ellisdg
Owner

ellisdg commented Nov 1, 2021

  1. Yes, the dice score should be consistent with the reported dice scores. However, those dice scores were computed on the validation set by the BRATS challenge organizers. I do not have access to their validation dataset.
  2. I have not calculated any IoU scores for my experiments.
  3. The way my framework operates is that it requires a "Sequence" or "Loader" that reads the input and target data from file and then passes it to the model for 3D data. The sequences that I have made read in NIFTI files which are designed for medical imaging. I'm not familiar with ShapeNet, but to get it to work you would need to either convert the data set to NIFTI format or create a custom sequence to load the data.
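Regarding question 2, IoU (the Jaccard index) is not part of the repository, but it can be computed in the same style as the dice function above; the two are related by IoU = dice / (2 - dice). A minimal NumPy sketch (an illustration, not code from this project):

```python
import numpy as np

def iou(truth, prediction):
    """Intersection over union (Jaccard index) for binary masks."""
    intersection = np.sum(truth * prediction)
    union = np.sum(truth) + np.sum(prediction) - intersection
    return intersection / union

truth = np.array([1, 1, 1, 0])
pred = np.array([0, 1, 1, 1])
print(iou(truth, pred))  # 2 / 4 = 0.5
```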

@changchun-zhou
Author

Thank you! I have another question: why is the number of channels different between images and target? To verify this, I print the shapes in the `epoch_training` function:

```python
for i, (images, target) in enumerate(train_loader):
    print("images.shape: ", images.shape)
    print("target.shape: ", target.shape)
```

Output:

```
images.shape: [1, 4, 112, 112, 112]
target.shape: [1, 3, 112, 112, 112]
```

As the output shows, the image has 4 channels, but the target has 3 channels. I thought both image and target would have 3 channels, meaning RGB. Could you explain why the image has an extra channel?
Thanks for your kindness.
@ellisdg

@ellisdg
Owner

ellisdg commented Nov 2, 2021

I assume you are referring to the BRATS dataset and model.
The input channels refer to separate MR acquisition parameters. They are T1 weighted (T1w), T1w with contrast enhancing agent, FLAIR, and T2 weighted.
The output channels refer to separate labeled regions: whole tumor, enhancing tumor, and necrotic core.
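With that channel layout, a per-channel dice score can be computed by indexing the channel axis. A toy sketch with random volumes (the channel ordering WT, TC, ET is an assumption here, confirmed later in this thread; `dice_coefficient` is the function from the earlier comment):

```python
import numpy as np

def dice_coefficient(truth, prediction):
    return 2 * np.sum(truth * prediction) / (np.sum(truth) + np.sum(prediction))

# Toy volumes shaped (channels, depth, height, width), one binary mask per region.
rng = np.random.default_rng(0)
truth = (rng.random((3, 8, 8, 8)) > 0.5).astype(float)
pred = truth.copy()  # a perfect prediction, for illustration only

for channel, name in enumerate(["WT", "TC", "ET"]):
    print(name, dice_coefficient(truth[channel], pred[channel]))  # 1.0 per channel
```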

@changchun-zhou
Author

Thank you! Do output channel [0], output channel [1], and output channel [2] represent WT, TC, and ET, respectively? I ask because the dice score I get decreases from channel [0] to channel [2].

> I assume you are referring to the BRATS dataset and model. The input channels refer to separate MR acquisition parameters. They are T1 weighted (T1w), T1w with contrast enhancing agent, FLAIR, and T2 weighted. The output channels refer to separate labeled regions: whole tumor, enhancing tumor, and necrotic core.

@ellisdg
Owner

ellisdg commented Dec 20, 2021

> Do output channel [0], output channel [1], and output channel [2] represent WT, TC, and ET, respectively?

Yes, that is correct.

@stale

stale bot commented Feb 19, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. If you are still wanting followup to this issue, please ping the thread by leaving a comment. You may also contact david.ellis@unmc.edu with questions.
