Tensorflow implementation of Curriculum Adaptive Sampling for Extreme Data Imbalance (CASED) with multi-GPU support, using LUNA16
> all_in_one.py = convert_luna_to_npy + create_patch
> python all_in_one.py
- Check `src_root` and `save_path`
> python main_train.py
- See `main_train.py` for other arguments.
> python main_test.py
- The hyper-parameter settings are not listed in the paper, so I am still testing them.
- Use Snapshot Ensemble (M=10, init_lr=0.1)
- Or use a fixed learning rate of 0.01
```python
import numpy as np

def Snapshot(t, T, M, alpha_zero):
    """Cosine-annealed learning rate with warm restarts (Snapshot Ensembles).

    t          : current iteration
    T          : total number of iterations
    M          : number of snapshots (restart cycles)
    alpha_zero : initial learning rate
    """
    cycle = T // M                       # iterations per snapshot cycle
    x = (np.pi * (t % cycle)) / cycle    # position within the current cycle
    return (alpha_zero / 2) * (np.cos(x) + 1)
```
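For example, with the settings above and an assumed total of T = 100,000 iterations (T is not specified here), the rate restarts from 0.1 every 10,000 iterations:

```python
lr = Snapshot(t=15000, T=100000, M=10, alpha_zero=0.1)  # mid-cycle -> 0.05
```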
- Resample
> 1.25 mm isotropic spacing
- Hounsfield clipping
> minHU = -1000
> maxHU = 400
- Zero centering
> Pixel Mean = 0.25
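A minimal sketch of these preprocessing steps, assuming a NumPy volume already in Hounsfield units together with its original voxel spacing (the function and variable names here are illustrative, not the repository's API):

```python
import numpy as np
from scipy.ndimage import zoom

MIN_HU, MAX_HU = -1000.0, 400.0
PIXEL_MEAN = 0.25
NEW_SPACING = 1.25  # mm, isotropic

def preprocess(volume, spacing):
    # Resample to 1.25 mm isotropic voxels (zoom factor = old / new spacing).
    volume = zoom(volume, np.asarray(spacing) / NEW_SPACING, order=1)
    # Clip to the HU window and scale to [0, 1].
    volume = (np.clip(volume, MIN_HU, MAX_HU) - MIN_HU) / (MAX_HU - MIN_HU)
    # Zero-center with the dataset-wide pixel mean.
    return volume - PIXEL_MEAN
```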
If you want to do augmentation, see this link
- Affine rotate
> -2 to 2 degrees
- Scale
> 0.9 to 1.1 (see the sketch below)
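A sketch of such an augmentation with `scipy.ndimage`; the rotation axes, interpolation order, and center-crop/pad policy are assumptions, and the linked code may do this differently:

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment(patch):
    # Random rotation in [-2, 2] degrees in one plane of the 3D patch.
    angle = np.random.uniform(-2.0, 2.0)
    patch = rotate(patch, angle, axes=(1, 2), reshape=False,
                   order=1, mode='nearest')
    # Random isotropic scaling in [0.9, 1.1], then center-crop/pad
    # back to the original shape so the patch size stays fixed.
    factor = np.random.uniform(0.9, 1.1)
    scaled = zoom(patch, factor, order=1, mode='nearest')
    out = np.zeros_like(patch)
    src, dst = [], []
    for a, s in zip(scaled.shape, patch.shape):
        if a >= s:  # scaled up: crop the center
            start = (a - s) // 2
            src.append(slice(start, start + s))
            dst.append(slice(0, s))
        else:       # scaled down: pad around the center
            start = (s - a) // 2
            src.append(slice(0, a))
            dst.append(slice(start, start + a))
    out[tuple(dst)] = scaled[tuple(src)]
    return out
```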
The curriculum sampling loop (`nodule_patch`, `all_patch`, `Predictor`, `N`, `M`, and `iteration` as above):

```python
import numpy as np
from heapq import nlargest

# Curriculum sampling: start with nodule-only batches (p_x = 1) and let
# p_x decay each step, gradually handing batches over to hard-example mining.
p_x = 1.0
for i in range(iteration):
    p = np.random.uniform(0, 1)
    if p <= p_x:
        # Sample a batch from the N nodule (positive) patches.
        g_n_index = np.random.choice(N, size=batch_size, replace=False)
        batch_patch = nodule_patch[g_n_index]
        batch_y = nodule_patch_y[g_n_index]
    else:
        # Hard-example mining: take the batch_size patches with the largest loss.
        predictor_dict = Predictor(all_patch)  # key = index, value = loss
        g_r_index = nlargest(batch_size, predictor_dict, key=predictor_dict.get)
        batch_patch = all_patch[g_r_index]
        batch_y = all_patch_y[g_r_index]
    p_x *= pow(1 / M, 1 / iteration)
```
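Since p_x is multiplied by (1/M)^(1/iteration) at every step, after `iteration` steps it has decayed from 1 to exactly 1/M, so training shifts smoothly from nodule-only sampling toward loss-based mining over all patches.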
Junho Kim / @Lunit