
Switch to multi-class classifier #30

Closed
dzenanz opened this issue Jun 9, 2021 · 6 comments · Fixed by #115
dzenanz commented Jun 9, 2021

Do not predict just good/bad classes. Predict each artifact separately, and regress the SNR, CNR, and overall QA numbers. The regressions could be categorical (categories 0, 1, ..., 10).

#23 should probably be done before this.
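One way to set up the multi-output targets described above is a small per-scan helper. This is only a sketch: the record schema and the field names (`ghosting`, `motion`, `overall_qa`) are hypothetical, not taken from the MIQA code.

```python
def make_targets(record, artifact_names):
    """Build one target per output head from a QA record.

    Assumed schema: a binary flag per artifact, plus an overall QA score
    that is clamped and rounded into the categories 0..10.
    """
    artifact_targets = [int(record[name]) for name in artifact_names]
    qa_target = max(0, min(10, round(record["overall_qa"])))
    return artifact_targets, qa_target
```

Each artifact head then gets a binary cross-entropy loss, while the overall QA head can be trained as an 11-way categorical output.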


dzenanz commented Aug 25, 2021

Progress made. Bad prediction results cause an exception in AUC computation:

epoch 1/44
epoch_len: 144
epoch 1 average loss: 181.2971
confusion matrix:
[[ 3  9 12  4  0  0  0  0  0]
 [ 2 10 14  3  0  0  0  0  0]
 [ 2 14 29  3  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0]
 [ 1  2  8  1  0  0  0  0  0]
 [ 3  2  4  0  0  0  0  0  0]
 [ 0  3  3  2  0  0  0  0  0]
 [ 0  1  2  0  0  0  0  0  0]
 [ 1  5  0  1  0  0  0  0  0]]

epoch 2/44
epoch_len: 144
epoch 2 average loss: 122.4680
confusion matrix:
[[ 0  0  0 22  2  0  0  0  0  0]
 [ 0  0  0 29  0  0  0  0  0  0]
 [ 0  0  0 39 11  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0 11  0  0  0  0  0  0]
 [ 0  0  1 12  0  0  0  0  0  0]
 [ 0  0  0  5  0  0  0  0  0  0]
 [ 0  0  0  4  0  0  0  0  0  0]
 [ 0  0  0  8  0  0  0  0  0  0]]

val_confusion_matrix:
[[ 0  0  6  0  0  0  0  0]
 [ 0  0  1  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0]
 [ 0  0 30  0  0  0  0  0]
 [ 0  0  7  0  0  0  0  0]
 [ 0  0 13  0  0  0  0  0]
 [ 0  0 11  0  0  0  0  0]
 [ 0  0 16  0  0  0  0  0]]


              precision    recall  f1-score   support
           0       0.00      0.00      0.00       6.0
           2       0.00      0.00      0.00       1.0
           3       0.00      0.00      0.00       0.0
           6       0.00      0.00      0.00      30.0
           7       0.00      0.00      0.00       7.0
           8       0.00      0.00      0.00      13.0
           9       0.00      0.00      0.00      11.0
          10       0.00      0.00      0.00      16.0

    accuracy                           0.00      84.0
   macro avg       0.00      0.00      0.00      84.0
weighted avg       0.00      0.00      0.00      84.0
Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.

Traceback (most recent call last):
  File "M:/MIQA/miqa/learning/nn_classifier.py", line 231, in evaluate_model
    torch.as_tensor(y_pred), torch.as_tensor(y_true), average=monai.utils.Average.MACRO
  File "C:\Program Files\Python37\lib\site-packages\monai\metrics\rocauc.py", line 144, in compute_roc_auc
    return _calculate(y_pred, y)
  File "C:\Program Files\Python37\lib\site-packages\monai\metrics\rocauc.py", line 69, in _calculate
    raise AssertionError("y values must be 0 or 1, can not be all 0 or all 1.")
AssertionError: y values must be 0 or 1, can not be all 0 or all 1.

I will let it run overnight on the big set.
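The assertion above fires because the AUC computation requires both classes to be present in `y`. A guard along these lines avoids the crash by skipping classes absent from the validation labels; it is sketched with scikit-learn's `roc_auc_score` (the `monai.metrics` call could be wrapped the same way):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def safe_macro_auc(y_true, y_score):
    """Macro one-vs-rest ROC AUC over the classes actually present in y_true.

    Returns NaN when fewer than two classes are present, instead of raising
    like monai.metrics.rocauc does.
    """
    y_true = np.asarray(y_true)
    present = np.unique(y_true)
    if present.size < 2:
        return float("nan")
    aucs = [roc_auc_score(y_true == c, y_score[:, c]) for c in present]
    return float(np.mean(aucs))
```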


dzenanz commented Aug 26, 2021

Results are better on the bigger set:

epoch 1 average loss: 112.5483
confusion matrix:
[[  0   6   7  26 243 178  14   3   1   0   0]
 [  1   4   7  37 255 138  18   6   0   0   0]
 [  2   2   2  22 179 101   6   3   0   0   0]
 [  3   5   8  67 322 130   3   0   0   0   0]
 [  3   7   3  48 245 202  30   5   0   0   0]
 [  1   5   4  27 190  97   1   0   0   0   0]
 [  2   1   2  13 154  91  16   0   0   0   0]
 [  3   2   1  10 108 122  35  11   0   0   0]
 [  0   0   1   4  86 110  35  15   2   0   0]
 [  1   3   3   4 100 139  57  17   1   1   0]
 [  1   2   1   8 136 146  55  14   4   0   0]]

val_confusion_matrix:
[[  0   0   1  10  26  14   3   2   1   0   0]
 [  0   0   0   0   0   2   2   0   0   0   0]
 [  0   0   1   0   2   1   0   1   0   0   0]
 [  0   0   0   0   0   2   0   0   0   0   0]
 [  0   0   0   1   1   2   0   0   0   0   0]
 [  0   0   0   0   1   0   2   0   0   0   0]
 [  0   0   4  23  83 151  41   3   0   0   0]
 [  0   0   0   1  10   8  14  14   0   0   0]
 [  0   0   0   2  22  58  66  58   8   0   0]
 [  0   0   0   0  12  40  64  84  21   2   0]
 [  0   0   0   3  20  55  49  36   6   0   0]]
Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.

              precision    recall  f1-score   support
           0       0.00      0.00      0.00        57
           1       0.00      0.00      0.00         4
           2       0.17      0.20      0.18         5
           3       0.00      0.00      0.00         2
           4       0.01      0.25      0.01         4
           5       0.00      0.00      0.00         3
           6       0.17      0.13      0.15       305
           7       0.07      0.30      0.11        47
           8       0.22      0.04      0.06       214
           9       1.00      0.01      0.02       223
          10       0.00      0.00      0.00       169

    accuracy                           0.06      1033
   macro avg       0.15      0.08      0.05      1033
weighted avg       0.32      0.06      0.07      1033


dzenanz commented Aug 26, 2021

I realized that the ROC AUC metric will not work with this approach. Switching to the RMS of the overall QA score allows the script to run to completion. Epoch 4 produced the highest RMS error:

epoch 4/44
epoch_len: 144
epoch 4 average loss: 105.1451
confusion matrix:
[[ 0  0  1 35  0  0  0  0  0]
 [ 0  0 17 16  0  0  0  0  0]
 [ 0  0 11 31  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0]
 [ 0  0  0 13  0  0  0  0  0]
 [ 0  0  0  6  0  0  0  0  0]
 [ 0  0  1  6  0  0  0  0  0]
 [ 0  0  1  2  0  0  0  0  0]
 [ 0  0  0  4  0  0  0  0  0]]

val_confusion_matrix:
[[ 0  4  2  0  0  0  0  0]
 [ 0  1  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0]
 [ 0 21  9  0  0  0  0  0]
 [ 0  5  2  0  0  0  0  0]
 [ 0 11  2  0  0  0  0  0]
 [ 0 11  0  0  0  0  0  0]
 [ 0 12  4  0  0  0  0  0]]

              precision    recall  f1-score   support
           0       0.00      0.00      0.00         6
           2       0.02      1.00      0.03         1
           3       0.00      0.00      0.00         0
           6       0.00      0.00      0.00        30
           7       0.00      0.00      0.00         7
           8       0.00      0.00      0.00        13
           9       0.00      0.00      0.00        11
          10       0.00      0.00      0.00        16

    accuracy                           0.01        84
   macro avg       0.00      0.12      0.00        84
weighted avg       0.00      0.01      0.00        84

saved new best metric model as M:\MIQA\miqa\learning/miqaT1-val0.pth
current epoch: 4 current AUC: 5.3081 best AUC: 5.3081 at epoch 4
Learning rate after epoch 4: 3.6125000000000004e-05

Here is the last epoch:

epoch 44 average loss: 93.5541
confusion matrix:
[[ 0  0  0 14 11  1  0  0  0  0  0]
 [ 0  0 35  0  0  0  0  0  0  0  0]
 [ 0  0  0 45  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  1  7  0  0  0  0  0  0]
 [ 0  0  0  2  9  0  0  0  0  0  0]
 [ 0  0  0  0  6  1  0  0  0  0  0]
 [ 0  0  0  0  4  0  0  0  0  0  0]
 [ 0  0  0  1  7  0  0  0  0  0  0]]

val_confusion_matrix:
[[ 0  0  5  1  0  0  0  0  0  0]
 [ 0  0  0  1  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0]
 [ 0  0  6 24  0  0  0  0  0  0]
 [ 0  0  1  4  2  0  0  0  0  0]
 [ 0  0  4  8  1  0  0  0  0  0]
 [ 0  0  4  6  1  0  0  0  0  0]
 [ 0  0  2 12  2  0  0  0  0  0]]

              precision    recall  f1-score   support
           0       0.00      0.00      0.00       6.0
           2       0.00      0.00      0.00       1.0
           3       0.00      0.00      0.00       0.0
           4       0.00      0.00      0.00       0.0
           5       0.00      0.00      0.00       0.0
           6       0.00      0.00      0.00      30.0
           7       0.00      0.00      0.00       7.0
           8       0.00      0.00      0.00      13.0
           9       0.00      0.00      0.00      11.0
          10       0.00      0.00      0.00      16.0

    accuracy                           0.00      84.0
   macro avg       0.00      0.00      0.00      84.0
weighted avg       0.00      0.00      0.00      84.0

current epoch: 44 current AUC: 4.1193 best AUC: 5.3081 at epoch 4
Learning rate after epoch 44: 1.4001880604280803e-06
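The RMS-of-overall-QA metric used above can be sketched as follows, assuming predictions and labels are integer QA scores; the exact reduction in `nn_classifier.py` may differ:

```python
import numpy as np

def rms_error(y_true, y_pred):
    """Root-mean-square error between predicted and true QA scores (0-10)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

Note that unlike AUC, lower is better here, which matches the metric dropping from 5.3081 at epoch 4 to 4.1193 at epoch 44.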


dzenanz commented Aug 30, 2021

Results are noticeably better after a first pass at adjusting hyper-parameters:

epoch 7 average loss: 46.0782
confusion matrix:
[[ 19  64  98  97 121  62  27  11   5   3   2]
 [ 35 258 185  27   0   0   0   0   0   0   0]
 [  0  30 190  97   3   0   0   0   0   0   0]
 [  0   0  85 410  46   0   0   0   0   0   0]
 [  0   0   2 133 304  72   1   0   0   0   0]
 [  0   0   0   1  38 200  49   0   0   0   0]
 [  1   3   8  17  39  67  82  75  21   6   0]
 [  0   1   1   4  11  18  50  66  48  36  12]
 [  0   0   1   4  10  16  39  59  60  36  24]
 [  0   0   3   7   9  13  33  80  77  79  26]
 [  0   1   5   8  21  40  70  78  61  56  27]]

.................................
val_confusion_matrix:
[[ 0  4 10 14 12  2  8  2  2  1  2]
 [ 0  0  0  3  0  0  0  0  1  0  0]
 [ 0  0  2  0  1  0  0  0  1  0  1]
 [ 0  0  1  1  0  0  0  0  0  0  0]
 [ 0  0  1  0  1  1  0  1  0  0  0]
 [ 0  0  0  1  1  1  0  0  0  0  0]
 [ 0  1  3 14 29 61 83 65 35 11  3]
 [ 0  0  0  1  1  2  7  7 15 10  4]
 [ 0  0  0  3  7 11 19 35 73 45 21]
 [ 0  0  0  3  4  6 21 26 51 66 46]
 [ 0  0  2  3  5 17 23 36 25 39 19]]

              precision    recall  f1-score   support
           0       0.00      0.00      0.00        57
           1       0.00      0.00      0.00         4
           2       0.11      0.40      0.17         5
           3       0.02      0.50      0.04         2
           4       0.02      0.25      0.03         4
           5       0.01      0.33      0.02         3
           6       0.52      0.27      0.36       305
           7       0.04      0.15      0.06        47
           8       0.36      0.34      0.35       214
           9       0.38      0.30      0.33       223
          10       0.20      0.11      0.14       169

    accuracy                           0.24      1033
   macro avg       0.15      0.24      0.14      1033
weighted avg       0.34      0.24      0.28      1033

saved new best metric model as /home/dzenan/miqa/learning/miqaT1-val0.pth
current epoch: 7 current metric: 0.12 best metric: 0.12 at epoch 7
Learning rate after epoch 7: 6.285035664843749e-05

This 10-class validation corresponds to the following binary confusion matrix:

[[ 56  19]
 [173 785]]

which is slightly better than the previous result:

[[ 54  21]
 [141 817]]
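The 10-class to binary conversion can be done directly on the confusion matrix by summing its quadrants. This is a sketch; the cutoff between "bad" and "good" classes is an assumption, since the thread does not state it:

```python
import numpy as np

def binarize_confusion(cm, first_good_class):
    """Collapse an (n x n) confusion matrix into 2x2.

    Rows/columns below `first_good_class` count as 'bad', the rest as 'good'.
    """
    cm = np.asarray(cm)
    k = first_good_class
    return np.array([
        [cm[:k, :k].sum(), cm[:k, k:].sum()],
        [cm[k:, :k].sum(), cm[k:, k:].sum()],
    ])
```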


dzenanz commented Sep 1, 2021

After some tuning, it became clear that the results on the small set (T1, 230 images) are no longer representative of the results on the large set (AllT1, 5k images). The T1bis set of 270 images was created by adding more images to the small set, in order to have all the overall QA classes represented (classes 1-5 were almost completely absent). Sadly, this did not improve the correspondence to the AllT1 set.
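A quick check for the class-coverage problem described above, assuming integer QA labels 0-10:

```python
import numpy as np

def missing_classes(labels, num_classes=11):
    """Return the QA classes (0..num_classes-1) with no samples in `labels`."""
    counts = np.bincount(np.asarray(labels), minlength=num_classes)
    return [c for c in range(num_classes) if counts[c] == 0]
```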

Best epoch on T1bis:

epoch 72 average loss: 18.1711
confusion matrix:
[[ 0  0  0  1  0  0  0  0  0  0]
 [ 0 27  5  0  0  0  0  0  0  0]
 [ 0  0 17  6  0  0  0  0  0  0]
 [ 0  0  0 36  0  0  0  0  0  0]
 [ 0  0  0  0 45  0  0  0  0  0]
 [ 0  0  0  0  6 19  0  0  0  0]
 [ 0  0  0  0  1  0  0  0  0  0]
 [ 0  0  0  1  1  1  0  0  0  0]
 [ 0  0  0  1  3  0  0  0  0  0]
 [ 0  0  0  0  3  0  0  0  0  0]]

val_confusion_matrix:
[[ 0  0  0  4  2  0  0  0  0  0  0]
 [ 0  0  1  2  1  0  0  0  0  0  0]
 [ 0  0  1  2  1  0  0  0  0  0  0]
 [ 0  0  0  1  1  0  0  0  0  0  0]
 [ 0  0  1  0  2  0  0  0  0  0  0]
 [ 0  0  0  1  1  0  0  0  0  0  0]
 [ 0  2  0 11 17  0  0  0  0  0  0]
 [ 0  0  0  1  5  1  0  0  0  0  0]
 [ 0  0  1  2  8  2  0  0  0  0  0]
 [ 0  0  0  4  7  0  0  0  0  0  0]
 [ 0  0  2  5  9  0  0  0  0  0  0]]

              precision    recall  f1-score   support
           0       0.00      0.00      0.00         6
           1       0.00      0.00      0.00         4
           2       0.17      0.25      0.20         4
           3       0.03      0.50      0.06         2
           4       0.04      0.67      0.07         3
           5       0.00      0.00      0.00         2
           6       0.00      0.00      0.00        30
           7       0.00      0.00      0.00         7
           8       0.00      0.00      0.00        13
           9       0.00      0.00      0.00        11
          10       0.00      0.00      0.00        16

    accuracy                           0.04        98
   macro avg       0.02      0.13      0.03        98
weighted avg       0.01      0.04      0.01        98

saved new best metric model as M:\MIQA\miqa\learning/miqaT1-val0.pth
current epoch: 72 current metric: -1.01 best metric: -1.01 at epoch 72
Learning rate after epoch 72: 2.0275559590445276e-06

Best epoch on AllT1:

epoch 11 average loss: 41.5171
confusion matrix:
[[ 28  73 117 115  78  46  24  13   4   0   0]
 [ 30 269 161   7   0   0   0   0   0   0   0]
 [  0  41 189 103   1   0   0   0   0   0   0]
 [  0   0  32 467  37   0   0   0   0   0   0]
 [  0   0   1  81 415  47   1   0   0   0   0]
 [  0   0   0   0  32 244  35   0   0   0   0]
 [  0   3   8  22  37  62  55  62  38   6   4]
 [  0   0   1  11  23  16  34  54  59  37  21]
 [  0   1   3   2   8  20  27  51  58  49  12]
 [  0   0   2   6  11  10  23  63 120  88  33]
 [  0   1   6   9  15  37  56  75  69  57  28]]

val_confusion_matrix:
[[ 2  8  7 16  8  2  4  3  3  2  2]
 [ 0  0  1  2  0  0  1  0  0  0  0]
 [ 1  0  0  1  1  0  0  0  2  0  0]
 [ 0  0  0  1  1  0  0  0  0  0  0]
 [ 0  0  0  2  0  1  0  0  1  0  0]
 [ 0  0  0  0  2  0  1  0  0  0  0]
 [ 0  2  2  6 20 53 85 66 54 14  3]
 [ 0  0  0  1  3  0  6 11 13 10  3]
 [ 0  0  0  2  9 12 22 46 68 38 17]
 [ 0  0  1  2  5  4 23 27 61 68 32]
 [ 0  0  0  4  7 16 32 32 34 32 12]]

              precision    recall  f1-score   support
           0       0.67      0.04      0.07        57
           1       0.00      0.00      0.00         4
           2       0.00      0.00      0.00         5
           3       0.03      0.50      0.05         2
           4       0.00      0.00      0.00         4
           5       0.00      0.00      0.00         3
           6       0.49      0.28      0.35       305
           7       0.06      0.23      0.09        47
           8       0.29      0.32      0.30       214
           9       0.41      0.30      0.35       223
          10       0.17      0.07      0.10       169

    accuracy                           0.24      1033
   macro avg       0.19      0.16      0.12      1033
weighted avg       0.36      0.24      0.27      1033

saved new best metric model as /home/dzenan/miqa/learning/miqaT1-val0.pth
current epoch: 11 current metric: 0.12 best metric: 0.12 at epoch 11
Learning rate after epoch 11: 2.8242953648100018e-05

Old results (binary confusion matrices with binary classification approach):

T1:
[[ 6  1]
 [58 19]]

AllT1:
[[ 54  21]
 [141 817]]

New results converted into binary confusion matrices:

T1:
[[21  0]
 [77  0]]

AllT1:
[[ 56  19]
 [149 809]]

The result for the small set is infinitely worse than before, while the results for the large set are slightly better.


dzenanz commented Sep 6, 2021

5-fold cross validation with the new approach:

[5217 rows x 22 columns]
Using fold 0 for validation
weights_array: [0.90607075 0.98852772 0.05258126 0.99498088 0.99163816 0.89243634
 0.88790631 0.94108704 0.83030593 0.75597514]
 
Loaded NN model from file "/home/dzenan/miqa/learning/miqaT1-val0.pth"
val_confusion_matrix:
[[ 0  3 21  9  7  5  3  3  0  3  3]
 [ 0  0  0  1  1  1  0  0  1  0  0]
 [ 0  0  1  0  2  0  1  0  0  1  0]
 [ 0  0  0  1  1  0  0  0  0  0  0]
 [ 0  0  1  1  0  1  0  0  0  1  0]
 [ 0  0  0  0  1  1  1  0  0  0  0]
 [ 1  0  4  5 19 32 72 93 52 22  5]
 [ 0  0  0  1  1  0  8 10 12 11  4]
 [ 0  0  0  3  8  7 20 38 65 54 19]
 [ 0  0  0  0  6  6 17 39 50 60 45]
 [ 0  0  2  1  8 10 21 39 32 39 17]]
 
              precision    recall  f1-score   support
           0       0.00      0.00      0.00        57
           1       0.00      0.00      0.00         4
           2       0.03      0.20      0.06         5
           3       0.05      0.50      0.08         2
           4       0.00      0.00      0.00         4
           5       0.02      0.33      0.03         3
           6       0.50      0.24      0.32       305
           7       0.05      0.21      0.07        47
           8       0.31      0.30      0.31       214
           9       0.31      0.27      0.29       223
          10       0.18      0.10      0.13       169

    accuracy                           0.22      1033
   macro avg       0.13      0.20      0.12      1033
weighted avg       0.31      0.22      0.25      1033


train_confusion_matrix:
[[  3  49  50 109 108  61  71  45  26  11   4]
 [  0 141 203 131  40  32   0   0   0   0   0]
 [  0   0 137  86  63  29   0   0   0   0   0]
 [  0   0   0 204 292   0   0   0   0   0   0]
 [  0   0   0  47 257 132  70   0   0   0   0]
 [  0   0   0   0   0 125 184   0   0   0   0]
 [  0   0   0   7  20  39  71  76  49  27   6]
 [  0   3   0   4  13  13  26  44  74  41  40]
 [  0   0   0   6   3   4  10  44  83  57  35]
 [  0   0   2   2   9  14  19  53  94  83  58]
 [  0   0   0  10   7  28  43  59  84  73  41]]
 
              precision    recall  f1-score   support
           0       1.00      0.01      0.01       537
           1       0.73      0.26      0.38       547
           2       0.35      0.43      0.39       315
           3       0.34      0.41      0.37       496
           4       0.32      0.51      0.39       506
           5       0.26      0.40      0.32       309
           6       0.14      0.24      0.18       295
           7       0.14      0.17      0.15       258
           8       0.20      0.34      0.25       242
           9       0.28      0.25      0.27       334
          10       0.22      0.12      0.16       345

    accuracy                           0.28      4184
   macro avg       0.36      0.29      0.26      4184
weighted avg       0.42      0.28      0.27      4184



Loaded NN model from file "/home/dzenan/miqa/learning/miqaT1-val1.pth"
val_confusion_matrix:
[[  3   5   9  10  10   7   5  10   2   0   0]
 [  0   0   0   1   0   1   0   1   1   0   0]
 [  0   0   2   1   0   0   0   0   0   0   0]
 [  0   0   0   2   0   0   0   0   0   0   0]
 [  0   0   1   2   0   1   0   1   0   0   0]
 [  0   0   0   1   0   0   1   0   0   0   0]
 [  0   2   0  13  29  59 101  87  45  13   2]
 [  0   0   0   1   3   7  12  10  14  15   4]
 [  0   2   0   2   6  11  29  61  60  45  19]
 [  0   0   0   1   3   4  21  48  47  35  19]
 [  0   0   3   5   3  11  28  28  28  15  12]]
 
              precision    recall  f1-score   support
           0       1.00      0.05      0.09        61
           1       0.00      0.00      0.00         4
           2       0.13      0.67      0.22         3
           3       0.05      1.00      0.10         2
           4       0.00      0.00      0.00         5
           5       0.00      0.00      0.00         2
           6       0.51      0.29      0.37       351
           7       0.04      0.15      0.06        66
           8       0.30      0.26      0.28       235
           9       0.28      0.20      0.23       178
          10       0.21      0.09      0.13       133

    accuracy                           0.22      1040
   macro avg       0.23      0.25      0.13      1040
weighted avg       0.38      0.22      0.25      1040

train_confusion_matrix:
[[ 11  25 115 111  88  64  20  19  23   7   0]
 [ 43 134 316   0   0   0   0   0   0   0   0]
 [  0   0 178 146   0   0   0   0   0   0   0]
 [  0   0   0 460   0   0   0   0   0   0   0]
 [  0   0   0  72 243 202   0   0   0   0   0]
 [  0   0   0   0  64 194 134   0   0   0   0]
 [  0   0   8  11  16  43  78  77  54  15   1]
 [  0   1   3   2  17  14  28  65  52  36   9]
 [  0   1   0   5   9  17  27  47  73  42   8]
 [  0   0   3   4   5  18  29  61 109  86  38]
 [  0   0   0   6  18  23  62  80 107  71  29]]
 
              precision    recall  f1-score   support
           0       0.20      0.02      0.04       483
           1       0.83      0.27      0.41       493
           2       0.29      0.55      0.38       324
           3       0.56      1.00      0.72       460
           4       0.53      0.47      0.50       517
           5       0.34      0.49      0.40       392
           6       0.21      0.26      0.23       303
           7       0.19      0.29      0.23       227
           8       0.17      0.32      0.23       229
           9       0.33      0.24      0.28       353
          10       0.34      0.07      0.12       396

    accuracy                           0.37      4177
   macro avg       0.36      0.36      0.32      4177
weighted avg       0.40      0.37      0.34      4177



Loaded NN model from file "/home/dzenan/miqa/learning/miqaT1-val2.pth"
val_confusion_matrix:
[[ 0  0  8  9 16 18  8  5  3  1  0]
 [ 0  0  0  0  2  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  1  0  1  0  0  1  0  0]
 [ 0  0  0  0  0  1  0  0  0  0  0]
 [ 0  0  1  6 12 42 94 98 56 17  4]
 [ 0  0  0  1  0  2  5  7 13  9  3]
 [ 0  0  0  4  2  3 18 36 57 65 13]
 [ 0  0  0  1  4 11 11 26 58 66 39]
 [ 0  0  0  0  3 12 20 36 33 39 27]]

              precision    recall  f1-score   support
           0       0.00      0.00      0.00        68
           1       0.00      0.00      0.00         2
           2       0.00      0.00      0.00         0
           3       0.00      0.00      0.00         0
           4       0.00      0.00      0.00         3
           5       0.01      1.00      0.02         1
           6       0.60      0.28      0.39       330
           7       0.03      0.17      0.06        40
           8       0.26      0.29      0.27       198
           9       0.34      0.31      0.32       216
          10       0.31      0.16      0.21       170

    accuracy                           0.25      1028
   macro avg       0.14      0.20      0.12      1028
weighted avg       0.37      0.25      0.28      1028

train_confusion_matrix:
[[  0  31  69  95  71  75  34  22   8   7   0]
 [  0  90 365  17   0   0   0   0   0   0   0]
 [  0   0  29 288  86   0   0   0   0   0   0]
 [  0   0   0   0 644   0   0   0   0   0   0]
 [  0   0   0   0  36 480   0   0   0   0   0]
 [  0   0   0   0   0   0 397   0   0   0   0]
 [  0   0   1   5  14  36  65  77  49  12   5]
 [  0   0   0   1  11   8  26  50  83  53  24]
 [  0   0   0   2   4   9  17  38  52  68  24]
 [  0   0   1   0   2   6  11  28  91  92  59]
 [  0   0   0   4  14  20  49  59  77  51  47]]

              precision    recall  f1-score   support
           0       0.00      0.00      0.00       412
           1       0.74      0.19      0.30       472
           2       0.06      0.07      0.07       403
           3       0.00      0.00      0.00       644
           4       0.04      0.07      0.05       516
           5       0.00      0.00      0.00       397
           6       0.11      0.25      0.15       264
           7       0.18      0.20      0.19       256
           8       0.14      0.24      0.18       214
           9       0.33      0.32      0.32       290
          10       0.30      0.15      0.20       321

    accuracy                           0.11      4189
   macro avg       0.17      0.13      0.13      4189
weighted avg       0.17      0.11      0.11      4189



Loaded NN model from file "/home/dzenan/miqa/learning/miqaT1-val3.pth"
val_confusion_matrix:
[[ 0  0  5 12 24  7  5  6  9  2  1]
 [ 0  0  0  2  1  1  0  0  0  0  0]
 [ 0  0  0  0  3  0  1  0  0  0  0]
 [ 0  0  0  1  1  0  0  0  0  0  0]
 [ 0  0  2  0  0  0  0  0  0  0  0]
 [ 0  0  0  1  0  0  0  0  0  0  0]
 [ 0  1  3  7 23 39 52 79 59 22  9]
 [ 0  0  1  1  1  4  4  8  3  5  4]
 [ 0  0  2  3 11  7 28 44 61 31 17]
 [ 0  0  1  2 10 13 14 43 87 58 26]
 [ 0  1  1  6 13  9 26 37 37 37 13]]

              precision    recall  f1-score   support
           0       0.00      0.00      0.00        71
           1       0.00      0.00      0.00         4
           2       0.00      0.00      0.00         4
           3       0.03      0.50      0.05         2
           4       0.00      0.00      0.00         2
           5       0.00      0.00      0.00         1
           6       0.40      0.18      0.25       294
           7       0.04      0.26      0.06        31
           8       0.24      0.30      0.27       204
           9       0.37      0.23      0.28       254
          10       0.19      0.07      0.10       180

    accuracy                           0.18      1047
   macro avg       0.11      0.14      0.09      1047
weighted avg       0.28      0.18      0.21      1047

train_confusion_matrix:
[[  3   1  53 102 134  72  38  22  25   6   0]
 [ 30 103 153 142   0  39   0   0   0   0   0]
 [  0   0  55 211  38  27   0   0   0   0   0]
 [  0   0   0 343  84   0   0   0   0   0   0]
 [  0   0   0   0 326 174  71   0   0   0   0]
 [  0   0   0   0   0 124 306   0   0   0   0]
 [  0   1   3  11  15  29  67  89  74  42   8]
 [  0   0   1  12   6  23  35  59  89  52   7]
 [  0   0   1   4  10  19  20  54  75  52  11]
 [  0   0   2   3  11   3  20  42  76  86  25]
 [  0   1   1   3  17  20  30  54 113  76  36]]
 
              precision    recall  f1-score   support
           0       0.09      0.01      0.01       456
           1       0.97      0.22      0.36       467
           2       0.20      0.17      0.18       331
           3       0.41      0.80      0.55       427
           4       0.51      0.57      0.54       571
           5       0.23      0.29      0.26       430
           6       0.11      0.20      0.14       339
           7       0.18      0.21      0.20       284
           8       0.17      0.30      0.21       246
           9       0.27      0.32      0.30       268
          10       0.41      0.10      0.16       351

    accuracy                           0.31      4170
   macro avg       0.32      0.29      0.26      4170
weighted avg       0.36      0.31      0.28      4170



Loaded NN model from file "/home/dzenan/miqa/learning/miqaT1-val4.pth"
val_confusion_matrix:
[[ 0  2  5  4 14 16 12  5  6  4  1]
 [ 0  0  0  1  1  1  0  0  0  0  0]
 [ 0  0  0  1  1  1  0  1  0  0  0]
 [ 0  0  0  0  0  1  0  0  0  0  0]
 [ 0  0  1  0  1  1  0  0  1  0  0]
 [ 0  0  0  1  0  0  0  0  0  0  0]
 [ 0  1  5 12 26 47 65 61 46 28  7]
 [ 0  1  2  0  1  2  6  9 16  9  6]
 [ 0  0  0  2  3 11 23 36 71 60 26]
 [ 3  1  0  3  0  3 13 42 63 62 42]
 [ 0  0  0  7 10 22 28 24 37 23 22]]
 
              precision    recall  f1-score   support
           0       0.00      0.00      0.00        69
           1       0.00      0.00      0.00         3
           2       0.00      0.00      0.00         4
           3       0.00      0.00      0.00         1
           4       0.02      0.25      0.03         4
           5       0.00      0.00      0.00         1
           6       0.44      0.22      0.29       298
           7       0.05      0.17      0.08        52
           8       0.30      0.31      0.30       232
           9       0.33      0.27      0.30       232
          10       0.21      0.13      0.16       173

    accuracy                           0.22      1069
   macro avg       0.12      0.12      0.11      1069
weighted avg       0.30      0.22      0.24      1069

train_confusion_matrix:
[[ 53  73 113 116  50  26  16   6   0   1   0]
 [  0 338 185   0   0   0   0   0   0   0   0]
 [  0   0 241  72   0   0   0   0   0   0   0]
 [  0   0   0 574   0   0   0   0   0   0   0]
 [  0   0   0   0 302 177   0   0   0   0   0]
 [  0   0   0   0   0 220 199   0   0   0   0]
 [  0   1   6   9  21  37  82  62  47  13   9]
 [  0   0   0   4   7   6  31  28  62  62  18]
 [  0   2   0   1   4   9  13  47  79  65  28]
 [  0   0   1   2   7   8  12  43  82  91  54]
 [  0   0   1   1  11  15  33  74  70  74  54]]
 
              precision    recall  f1-score   support
           0       1.00      0.12      0.21       454
           1       0.82      0.65      0.72       523
           2       0.44      0.77      0.56       313
           3       0.74      1.00      0.85       574
           4       0.75      0.63      0.69       479
           5       0.44      0.53      0.48       419
           6       0.21      0.29      0.24       287
           7       0.11      0.13      0.12       218
           8       0.23      0.32      0.27       248
           9       0.30      0.30      0.30       300
          10       0.33      0.16      0.22       333

    accuracy                           0.50      4148
   macro avg       0.49      0.44      0.42      4148
weighted avg       0.56      0.50      0.48      4148

wandb: Synced sage-pine-58: https://wandb.ai/dzenanz/miqaT1/runs/60magjcp
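The 5-fold split and the `weights_array` in the log above could be produced along these lines. The `1 - frequency` weighting formula is a guess from the printed values, not confirmed by the source, and the shuffle/seed choices are illustrative:

```python
import numpy as np
from sklearn.model_selection import KFold

def class_weights(labels, num_classes=11):
    """Inverse-frequency-style weights: rare classes get weight close to 1."""
    counts = np.bincount(np.asarray(labels), minlength=num_classes)
    return 1.0 - counts / counts.sum()

# 5-fold split over scan indices (100 is a placeholder dataset size)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
folds = list(kf.split(np.arange(100)))
```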
