Evaluating test data #472
On the classification dataset creation page there is a field where you can specify a percentage of images to hold out for the test dataset. If you specify anything greater than 0, a test list file is created with the dataset, though there is currently no built-in way to evaluate the model against it. As an interim solution you may use the REST API to programmatically perform inference on your test images.
This will output a large JSON array that you can parse to compute the accuracy yourself.
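As a rough sketch of that workflow: the snippet below POSTs a test list to a running DIGITS server and tallies top-1 accuracy from the JSON response. The server address, job ID, file name, and the assumption that the list's labels match the model's class names are all placeholders, and the exact response layout may differ between DIGITS versions, so treat this as a starting point rather than a drop-in script.

```python
# Sketch: evaluate a held-out test list against a DIGITS classification
# model via the REST API. Server address, job ID, and file names are
# placeholders -- adjust for your installation.
import requests

DIGITS_URL = "http://localhost:5000"   # assumed DIGITS server address
JOB_ID = "20160101-123456-abcd"        # your *model* job ID (placeholder)
TEST_LIST = "test.txt"                 # lines of "image_path label"

# Read the held-out test list (path + ground-truth label per line).
# Assumes the labels are the same class names the model reports.
paths, labels = [], {}
with open(TEST_LIST) as f:
    for line in f:
        path, label = line.rsplit(None, 1)
        paths.append(path)
        labels[path] = label

# POST the image paths to the classify_many endpoint
resp = requests.post(
    DIGITS_URL + "/models/images/classification/classify_many.json",
    data={"job_id": JOB_ID},
    files={"image_list": ("image_list.txt", "\n".join(paths))},
)
resp.raise_for_status()
predictions = resp.json()["classifications"]  # path -> [[label, prob], ...]

# Tally top-1 accuracy against the ground truth
correct = sum(1 for p in paths if predictions[p][0][0] == labels[p])
print("top-1 accuracy: %.2f%%" % (100.0 * correct / len(paths)))
```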
Closing as a duplicate of #17. Please continue the discussion there if you have anything to add. Please continue the discussion here if you have any questions about Greg's [fantastic] answer.
@gheinrich Yes, that is exactly what I am asking about. But your REST API suggestion is fantastic. I will give it a try.
@gheinrich I'm trying to implement your REST API solution but I'm running into the following error:
I've taken a look at these lines in the app but I still can't find where it breaks. Any thoughts?
Hi @andrubrown, the error you get suggests that you might have accidentally specified the job ID of your dataset. You need to specify the job ID of your model. If you click on your model on the DIGITS homepage you will be taken to a URL that includes the model's job ID.
@gheinrich you are totally right, my mistake. Works like a charm. Thanks!
Hi @gheinrich, is there a way to use this method (the REST API) for DetectNet? I checked and it only seemed to support classification and regression.
The REST API applies to DetectNet in the same way as it does for regression networks. You should just have to replace the job ID and the paths to your files. Let us know if you're having difficulties with this.
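Presumably the only change from the classification sketch above is the endpoint and the job ID; DIGITS exposes an analogous inference route for generic (including DetectNet) models. A hedged sketch, where the endpoint path and response layout are assumptions to verify against your DIGITS version:

```python
# Sketch: the same REST pattern against a DetectNet (generic) model.
# The endpoint path and the response layout are assumptions -- check
# your DIGITS version's API docs if this returns a 404.
import requests

with open("test_images.txt") as f:  # one image path per line (placeholder)
    resp = requests.post(
        "http://localhost:5000/models/images/generic/infer_many.json",
        data={"job_id": "20170101-654321-wxyz"},  # DetectNet model job ID
        files={"image_list": f},
    )
resp.raise_for_status()
# For DetectNet, the outputs contain bounding-box coordinates per image.
print(resp.json())
```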
@gheinrich, hello again and thank you. I tested it and it worked on DetectNet, giving a bunch of bounding box coordinates. Now, regarding the inference time: how can I get the pure inference time used to process the image, without the overhead of calling localhost? When I use "Infer Many" in the GUI I get 13 seconds for 46 images (282 ms each), but with this I get "Time Spent: 0:00:04" for one image (which I think means 4 seconds!).
You could try to test several image lists, each with a different number of images, and do a linear regression to figure out the slope of the inference time as a function of the number of images. There are online tools that will do the linear regression for you.
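As an illustration of that approach, you can also fit the line with NumPy; the slope is the per-image inference cost and the intercept is the fixed overhead. The timing numbers below are made up for the example:

```python
# Sketch: separate per-image inference time from fixed overhead by
# fitting total_time = overhead + per_image * n_images.
import numpy as np

n_images = np.array([1, 5, 10, 20, 46])            # batch sizes you timed
total_time = np.array([4.1, 5.2, 6.6, 9.3, 16.8])  # seconds (example numbers)

per_image, overhead = np.polyfit(n_images, total_time, 1)
print("per-image time: %.0f ms" % (per_image * 1000))
print("fixed overhead: %.1f s" % overhead)
```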
Hello again. Isn't there a way to run DetectNet outside DIGITS without using curl? Suppose we want to use it for a real-time or near real-time application; then curl would be really impractical (it takes ages to infer a single image). For example, in Caffe there are .cpp files which can be called from the terminal to perform classification or feature extraction. I've seen this example, which uses a Python script to perform classification, and have checked the code, but I'm not sure how I can change it to run DetectNet.
Hi, I was able to change the classification example.py (from here) to perform detection, and it works quite well, with much better speed. However, the speed pattern is strange! I changed the code to load the model once and then loop over images from an input directory. The first image takes about 200 ms to process, the second one takes 6 (!) seconds, and the rest about 50 ms each, no matter which images come first or second. I really can't understand the reason.
Another thing: as mentioned in the classification example, the provided code only performs mean pixel subtraction. Can anyone tell me how I can change this? In my actual training I used the image mean (at inference time there's no option to choose the mean type, and I don't know what it does, but there I get slightly better results than with this Python script). I'm not providing any mean file to the script because I don't understand what it means by npy format; the mean file created when making the LMDB dataset is .binaryproto, so what is this npy? And maybe I should ask this first: what is the difference between mean pixel and image mean? I've seen #169 but I don't understand it very much.
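A minimal sketch of that load-once-and-loop timing, assuming plain pycaffe; the file names are placeholders and preprocessing (scaling, mean subtraction) is elided to keep the timing loop visible. Printing the shape alongside the time can show whether differently sized images are involved in the slow iterations:

```python
# Sketch: load a Caffe model once, then time each forward pass.
# Paths are placeholders; preprocessing is intentionally elided.
import time

import caffe
import numpy as np

caffe.set_mode_gpu()
net = caffe.Net("deploy.prototxt", "snapshot.caffemodel", caffe.TEST)

for path in ["img1.png", "img2.png", "img3.png"]:
    img = caffe.io.load_image(path)                 # HxWxC float in [0,1]
    data = img.transpose(2, 0, 1)[np.newaxis, ...]  # to 1xCxHxW
    net.blobs["data"].reshape(*data.shape)          # reallocates if size changed
    net.blobs["data"].data[...] = data
    start = time.time()
    net.forward()
    print("%s %s: %.1f ms" % (path, data.shape, (time.time() - start) * 1000))
```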
Hello, I would suggest you search in the DIGITS users list for answers to this question.
Thank you, I will, but can you kindly answer my other questions as well?
Hello, if you are using DetectNet we recommend that you don't enable mean subtraction during pre-processing - see the walk-through. Mean subtraction in DetectNet is done within the network itself. Regarding the question on speed: are all your images the same size? You can check the shape of each image as it is loaded.
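On the earlier .npy question: pycaffe scripts typically expect the mean as a NumPy array, while the LMDB creation step produces a .binaryproto. A conversion sketch (file names are placeholders), which also shows how a "mean pixel" relates to the full image mean:

```python
# Sketch: convert a Caffe .binaryproto mean file (produced when the
# LMDB dataset was created) to the .npy format pycaffe scripts expect.
import caffe
import numpy as np

blob = caffe.proto.caffe_pb2.BlobProto()
with open("mean.binaryproto", "rb") as f:
    blob.ParseFromString(f.read())

mean = caffe.io.blobproto_to_array(blob)[0]  # CxHxW array (the image mean)
np.save("mean.npy", mean)

# A "mean pixel" is just this image mean averaged over height and width:
print("mean pixel (per channel):", mean.mean(axis=(1, 2)))
```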
@szm2015 Hello, could you please kindly provide the classification example.py that you changed to perform detection?
@ShervinAr: please see #1404.
This is the code I used (which is the same code provided in DIGITS-master/examples/classification with some little tweaks to get the DetectNet output).
While training a deep network, the framework displays the training and validation loss, which is fine. After the training ends, I can test a single image or many images at once.
I am wondering how I can evaluate the overall loss over the test dataset. Currently, I have to export the model and test it outside DIGITS to compute the average loss on the test data.
The only option available in DIGITS is to upload images and see the classification results one by one in the browser. Am I missing something?