
Checkpoint #7

Closed
Auth0rM0rgan opened this issue May 10, 2018 · 7 comments

@Auth0rM0rgan

Hey @dgurkaynak,

I want to use the checkpoint that was saved during training and validation, i.e. evaluate the model on test_data using the saved checkpoint. First I load my test data like this:

test_preprocessor = BatchPreprocessor(dataset_file_path=FLAGS.test_file, num_classes=FLAGS.num_classes, output_size=[FLAGS.img_size, FLAGS.img_size])
test_batches_per_epoch = np.floor(len(test_preprocessor.labels) / FLAGS.batch_size).astype(np.int16)

Then I restore the checkpoint, which I do in a new session:
saver.restore(sess, checkpoint_path)

As the final step, I have to feed the accuracy op with all my test images and labels:
print('Test_Accuracy: {:.3%}'.format(accuracy.eval({x: ??, y_true: ??})))

I have tried feeding it test_preprocessor.images and test_preprocessor.labels, but I got an error because the images and labels are paths and strings, not floats. What should the values for ?? be in the line above?

Thanks in advance.

@dgurkaynak
Owner

Hey @arminXerror, I gave sample code for performing just prediction in another issue; please check #4 (comment). I haven't tested that code, but you can get the idea.
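To make the batch-wise evaluation concrete, here is a minimal sketch of such a loop. It assumes the repo's BatchPreprocessor exposes reset_pointer() and next_batch() as in finetune.py; the evaluate helper itself and its argument names are hypothetical, not code from this repo:

```python
# Hypothetical helper: average the accuracy tensor over all test batches.
# `sess` is a session with the checkpoint already restored; `accuracy`,
# `x`, `y`, `is_training` are the corresponding graph tensors.
def evaluate(sess, accuracy, x, y, is_training, preprocessor, batch_size, n_batches):
    preprocessor.reset_pointer()  # start again from the first test image
    accs = []
    for _ in range(n_batches):
        # next_batch() loads the image files from disk and returns float arrays
        batch_xs, batch_ys = preprocessor.next_batch(batch_size)
        acc = sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys, is_training: False})
        accs.append(acc)
    return sum(accs) / len(accs)  # mean accuracy over the whole test set
```

Feeding the raw test_preprocessor.images / .labels fails precisely because those attributes hold file paths and label strings; next_batch() is what decodes them into the float arrays the placeholders expect.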

@Auth0rM0rgan
Author

Hey @dgurkaynak,

Sorry for taking your time. I am trying to understand your code line by line, and I have some questions; I would appreciate it if you could help me find the answers.

  1. For the multi_scale flag, should I give a number, or does it have a default of 2?
    tf.app.flags.DEFINE_string('multi_scale', '??', 'As preprocessing; scale the image randomly between 2 numbers and crop randomly at network\'s input size')

  2. Are you calculating the accuracy as an average per epoch? For example, say I have 3K images in my val_data and the batch size is 128. According to this line
    val_batches_per_epoch = np.floor(len(val_preprocessor.labels) / FLAGS.batch_size).astype(np.int16)
    my val_batches_per_epoch is 23, which means that to complete one epoch you feed the network 23 times, calculating the accuracy each time, and at the end of the epoch you average all the accuracies. Am I right?

  3. If I want to know the training accuracy, should I do it as follows:
    opt, tr_acc = sess.run([train_op, accuracy], feed_dict={x: batch_xs, y: batch_ys, is_training: True})
    and then print tr_acc? Is that the correct way to get the training accuracy?

  4. If I want to feed the network without batching, what should the variables be?
    feed_dict={x: batch_xs, y: batch_ys, is_training: False})
    I mean, what should x and y be without next_batch? I tried val_preprocessor.images for x and val_preprocessor.labels for y, but I got an error because these two are strings and must be floats. You read the images from a text file and convert the images and labels to numpy arrays; how can I get those numpy arrays in finetune.py?

  5. Could you please help me get the accuracy for each category? Say the network has 10 categories and I want to know the accuracy of each one and plot per-category curves.

I know it's a lot...

thanks in advance.

@dgurkaynak
Owner

  1. You should try different values; every dataset is different. In my experience, I got the best scores with 224,256.
  2. Yes.
  3. Yes.
  4. If you want to do online training, you can think of it as a batch of size 1.
  5. You should search for and implement that yourself. You already asked about the confusion matrix in Confusion Matrix #5.
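On point 5, per-category accuracy can be computed from the predicted and true label vectors without any TensorFlow machinery. A minimal numpy sketch (the helper name is hypothetical, not part of this repo):

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, num_classes):
    """Fraction of correctly classified samples within each true class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    accs = []
    for c in range(num_classes):
        mask = y_true == c  # samples whose true label is class c
        if mask.any():
            accs.append(float((y_pred[mask] == c).mean()))
        else:
            accs.append(float('nan'))  # class absent from this evaluation set
    return accs
```

Collect y_pred over the validation batches (e.g. with tf.argmax on the logits), then plot accs per class; this is effectively the normalized diagonal of the confusion matrix discussed in #5.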

@Auth0rM0rgan
Author

Auth0rM0rgan commented May 18, 2018

Hey @dgurkaynak,

I really appreciate your help.

If I want to remove the weights of the final layer and "re-train" the net with the new dataset, should I do the following?

In ResNet, you load the pre-trained weights as follows:

model.load_original_weights(sess, skip_layers=train_layers), where train_layers is the FC layer in ResNet.

Should I just remove skip_layers and change the line as below?
model.load_original_weights(sess)

Is that going to re-train the net with my new dataset, or do I have to change another part as well?

Thanks in advance.

@dgurkaynak
Owner

dgurkaynak commented May 18, 2018

If the number of your classes is not 1000, the weights of the final layer are initialised from scratch. You don't have to do anything.

@Auth0rM0rgan
Author

Hey @dgurkaynak,

Great, I have 10 classes in my dataset.

What will happen to the network if I remove skip_layers as explained?
Where do you specify that if the number of classes is not 1000, the weights of the final layer are initialised from scratch?

Thanks in advance.

@dgurkaynak
Owner

dgurkaynak commented May 18, 2018

In fact, skip_layers is not used at all. Look at the load_original_weights method:

def load_original_weights(self, session, skip_layers=[]):
    weights_path = 'ResNet-L{}.npy'.format(self.depth)
    weights_dict = np.load(weights_path, encoding='bytes').item()

    for op_name in weights_dict:
        parts = op_name.split('/')

        # if contains(op_name, skip_layers):
        #     continue

        if parts[0] == 'fc' and self.num_classes != 1000:
            continue

        full_name = "{}:0".format(op_name)
        var = [v for v in tf.global_variables() if v.name == full_name][0]
        session.run(var.assign(weights_dict[op_name]))

Initially it was used to skip the weight transfer for those layers. Later I commented out that check because I wanted to transfer weights for all layers except the last one; starting those layers from transferred weights performs better than random initialisation. If you want to initialise them randomly, you can uncomment those 2 lines.
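For completeness, the commented-out check relies on a small contains helper. A plausible implementation, matching how the commented lines use it (this is a sketch, not necessarily the repo's exact code):

```python
def contains(op_name, skip_layers):
    # True if any skip-layer name appears in the variable's op name,
    # e.g. an op named 'fc/weights' matches skip_layers=['fc'].
    return any(layer in op_name for layer in skip_layers)
```

Uncommenting the check with skip_layers=['fc'] would then leave every fc variable at its random initialisation instead of transferring the pre-trained values.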
