
How to do real-time testing with a camera? #17

Open
ahmetgunduz opened this issue May 12, 2019 · 8 comments
Labels
enhancement New feature or request question Further information is requested

Comments

@ahmetgunduz
Owner

No description provided.

@ahmetgunduz ahmetgunduz added enhancement New feature or request question Further information is requested labels May 12, 2019
@sanaz97

sanaz97 commented May 13, 2019

Hi Ahmet,
I ran online_test.py and it completed without errors, creating two JSON files:

Opt_clf.json
Opt_det.json

Can you explain what these JSON files are?

Also, is it possible to run your model in real time using a webcam? If so, could you explain the procedure?

I'm testing with the model trained on the EgoGesture dataset, and I want to run it on my laptop's webcam.

@ahmetgunduz
Owner Author

Hi @sanaz97 ,

Those JSON files simply save the parameters you specified for the classifier and detector architectures. They are not used by the code, so you don't need to worry about them, unless you want to rerun the test with exactly the same settings, in which case you can use them as a reference.

It is possible to run these models with a webcam, but that requires a different data loading procedure. Please check OpenCV, specifically the cv2.VideoCapture() method. With a quick search I found a Medium blog post that explains how to run a Keras model on webcam video. As long as you capture frames from the webcam and cast them into a torch.FloatTensor, you can run your model as in online_test.py, just with different data loading.
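To make the suggestion concrete, here is a minimal sketch of that idea: capture frames with cv2.VideoCapture(), convert each one to a torch.FloatTensor, and feed it to the model. The `preprocess` and `run_webcam` helpers, the 112x112 input size, and the model interface are my own assumptions for illustration, not code from this repository:

```python
import numpy as np
import torch

def preprocess(frame, size=112):
    """Convert one HxWx3 uint8 frame to a (3, size, size) float tensor in [0, 1]."""
    h, w = frame.shape[:2]
    # Nearest-neighbour resize with plain NumPy (cv2.resize would also work).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = frame[rows][:, cols]
    tensor = torch.from_numpy(resized).float().div(255.0)  # scale to [0, 1]
    return tensor.permute(2, 0, 1)  # HWC -> CHW

def run_webcam(model, size=112):
    """Capture frames from the default camera and classify each one."""
    import cv2  # imported here so the preprocessing above has no cv2 dependency
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        clip = preprocess(frame, size).unsqueeze(0)  # add batch dimension
        outputs = model(clip)  # same idea as the loop in online_test.py
    cap.release()
```

Note that a 3D CNN expects a clip of several frames rather than a single image, so in practice you would buffer consecutive preprocessed frames into one tensor before calling the model.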

@sathiez

sathiez commented May 16, 2019

Hi,
How do I feed input to the classifier in online_test.py as a float tensor? I tried:

frame = np.reshape(frame, (1, 1, 1, 512, 512))
frame = cv2.normalize(frame, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
inputs_clf = torch.from_numpy(frame).float()
outputs_det = classifier(inputs_clf)

I get the following error,

RuntimeError: invalid argument 2: input image (T: 1 H: 32 W: 16) smaller than kernel size (kT: 2 kH: 3 kW: 3) at /pytorch/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:57
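For context, this error usually means the clip tensor fed to the 3D network has too few frames: its temporal dimension (T) is smaller than the pooling kernel's kT. A small sketch reproducing the mismatch and the fix, with shapes chosen for illustration rather than taken from this repository's architecture:

```python
import torch
import torch.nn as nn

# The pooling layer from the error message: kT=2, kH=3, kW=3.
pool = nn.AvgPool3d(kernel_size=(2, 3, 3))

# A single-frame input fails: its temporal dimension (T=1) is smaller than kT=2.
single_frame = torch.randn(1, 3, 1, 112, 112)  # (N, C, T, H, W)
try:
    pool(single_frame)
except RuntimeError as err:
    print("T=1 fails:", err)

# Buffering several consecutive frames into one clip satisfies the kernel.
clip = torch.randn(1, 3, 8, 112, 112)  # T=8 frames
out = pool(clip)
print(tuple(out.shape))  # (1, 3, 4, 37, 37)
```

So rather than reshaping one frame to (1, 1, 1, 512, 512), you need to stack a window of recent frames along the T axis before calling the model.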

@ahmetgunduz
Owner Author

@sathiez this is not related to this issue. Please open a new one for it!

@ghost

ghost commented Oct 30, 2019

Is the homepage simulator code available?

@ghost

ghost commented Oct 30, 2019

@ahmetgunduz Is the home simulator code available?

@ahmetgunduz
Owner Author

Unfortunately it is not available. What I did back then was use a matplotlib plot to animate the class probabilities on a sample video from EgoGesture.
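For anyone who wants to recreate that kind of visualization, here is a minimal sketch of per-frame class-probability curves. The class count, the probabilities, and the output file name are made up for illustration; in practice you would plot the softmax outputs your model produces per frame:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt
import numpy as np

# Made-up per-frame class probabilities: (num_frames, num_classes).
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax

fig, ax = plt.subplots()
for c in range(probs.shape[1]):
    ax.plot(probs[:, c], label=f"class {c}")
ax.set_xlabel("frame")
ax.set_ylabel("class probability")
ax.legend()
fig.savefig("class_probabilities.png")
```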

@ChunJyeBehBeh

Hello, may I know how to run the recognition model with an RGB camera? Which code file should I run, and where should I put the pretrained model?
