any interest in supporting oak-d/oak-d-lite camera? #32

Closed
silverhikari opened this issue Oct 16, 2021 · 9 comments

Comments

@silverhikari

As stated in the title, do you have any interest in supporting the OAK-D line of spatial 3D cameras? With the Kickstarter that is currently running, they are now at the same price as the Leap Motion controller, though that price will rise. The products use an open Python SDK called DepthAI.

@emilianavt
Owner

Those cameras look interesting, but OpenSeeFace is mainly concerned with inferring landmarks from RGB images, and putting together the final training dataset for the models was a lot of work. I'm unlikely to find the time or resources to put together anything remotely similar for depth cameras.

@TheMasterofBlubb

@silverhikari Theoretically, you can take the ONNX model and convert it to OpenVINO so it runs on the OAK-D.

I got an OAK-D Lite and will try to make it work with VSeeFace. If I don't forget, I can let you know whether my experiment works out.

@emilianavt The OAK-Ds have RGB cameras too, so technically you only need to convert the model (there is a Python script for that) and interface with the camera instead of calling the CNN yourself.

Using the position data from the depth camera is more of a bonus (or, in my case, for hand tracking instead of a Leap Motion).
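
For reference, the conversion described here can be done with Luxonis's blobconverter package, which compiles an ONNX model into a MyriadX blob via OpenVINO. A minimal sketch; the model file name, FP16 precision and SHAVE count below are assumptions for illustration, not part of OpenSeeFace:

import blobconverter

# Compile an ONNX model to a MyriadX .blob for the OAK-D.
# "lm_model3.onnx" is a placeholder for whichever model gets exported.
blob_path = blobconverter.from_onnx(
    model="lm_model3.onnx",
    data_type="FP16",   # the on-device accelerator runs FP16
    shaves=6,           # number of SHAVE cores to compile for
)
print("Compiled blob:", blob_path)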

@emilianavt
Owner

emilianavt commented Jan 16, 2022

I see, if they have RGB too, it should work!

@emilianavt emilianavt reopened this Jan 16, 2022
@TheMasterofBlubb

Yep, there is a 4K RGB camera (usually the middle one), but the stereo cameras are also accessible individually as black-and-white cams (480p, IIRC).
The more interesting thing, though, is to run the NN on the camera itself, as it has an AI chip onboard, and then just grab the output data, hence the conversion to OpenVINO.
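
To illustrate the on-device approach: a rough sketch of a DepthAI pipeline that feeds the RGB preview into a NeuralNetwork node running the compiled blob and pulls the raw output back on the host. The blob path, the 224x224 input size and the stream name are assumptions for illustration:

import depthai as dai
import numpy as np

pipeline = dai.Pipeline()

# RGB camera, scaled to the (assumed) model input resolution
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(224, 224)
cam.setInterleaved(False)

# Neural network node running the converted blob on the camera's AI chip
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("openseeface_lm.blob")  # placeholder path
cam.preview.link(nn.input)

# Stream the raw inference results back to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue("nn", maxSize=4, blocking=False)
    while True:
        msg = queue.get()                         # waits for the next inference
        data = np.array(msg.getFirstLayerFp16())  # flat FP16 output buffer
        # decode landmarks on the host from here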

@TheMasterofBlubb

TheMasterofBlubb commented Jan 16, 2022

By the way, do you by any chance have a layout of the OSC/VMC protocol that VSeeFace uses (the message names, so to speak)?
I'm not very good with Japanese, and it seems that not having to create landmarks (if they aren't specifically needed) would help a lot.

@emilianavt
Owner

The VMC protocol only transmits blendshapes and bones. OpenSeeFace's face tracking data is transmitted using custom UDP packets. It's probably easiest to understand from the parser: https://github.com/emilianavt/OpenSeeFace/blob/master/Unity/OpenSee.cs#L137

There is also some English language documentation on the VMC protocol here: https://protocol.vmc.info/english.html
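
For the VMC side, sending data into a receiver boils down to OSC messages such as /VMC/Ext/Blend/Val and /VMC/Ext/Blend/Apply from the linked specification. A minimal sketch using the third-party python-osc package; port 39539 is the usual VMC default, and the blendshape name "Blink" is only an example:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 39539)

# Set one or more blendshape values, then tell the receiver to apply them.
client.send_message("/VMC/Ext/Blend/Val", ["Blink", 0.5])
client.send_message("/VMC/Ext/Blend/Apply", [])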

@TheMasterofBlubb

TheMasterofBlubb commented Jan 16, 2022 via email

@emilianavt
Owner

emilianavt commented Jan 16, 2022

If you are not familiar with python, the trickier part might be figuring out the decoding for the model's output. The current code for that is a bit dense and optimized:

OpenSeeFace/model.py

Lines 168 to 178 in baff2c0

t_main = x[:, 0:66].reshape((-1, 66, 28*28))
t_m = t_main.argmax(dim=2)
indices = t_m.unsqueeze(2)
t_conf = t_main.gather(2, indices).squeeze(2)
t_off_x = x[:, 66:132].reshape((-1, 66, 28*28)).gather(2, indices).squeeze(2)
t_off_y = x[:, 132:198].reshape((-1, 66, 28*28)).gather(2, indices).squeeze(2)
t_off_x = (223. * logit_arr(t_off_x) + 0.5).floor()
t_off_y = (223. * logit_arr(t_off_y) + 0.5).floor()
t_x = 223. * (t_m / 28.).floor() / 27. + t_off_x
t_y = 223. * t_m.remainder(28.).float() / 27. + t_off_y
x = (t_conf.mean(1), torch.stack([t_x, t_y, t_conf], 2))

In some very early versions, there should be a more readable function for decoding landmarks in tracker.py though.

Edit: I found it:

OpenSeeFace/tracker.py

Lines 105 to 111 in 0690bdd

def logit(p, factor=16.0):
    if p >= 1.0:
        p = 0.9999999
    if p <= 0.0:
        p = 0.0000001
    p = p/(1-p)
    return float(np.log(p)) / float(factor)

OpenSeeFace/tracker.py

Lines 641 to 660 in 0690bdd

def landmarks(self, tensor, crop_info):
    crop_x1, crop_y1, scale_x, scale_y, _ = crop_info
    avg_conf = 0
    lms = []
    res = self.res - 1
    for i in range(0, 66):
        m = int(tensor[i].argmax())
        x = m // 28
        y = m % 28
        conf = float(tensor[i][x,y])
        avg_conf = avg_conf + conf
        off_x = res * ((1. * logit(tensor[66 + i][x, y])) - 0.0)
        off_y = res * ((1. * logit(tensor[66 * 2 + i][x, y])) - 0.0)
        off_x = math.floor(off_x + 0.5)
        off_y = math.floor(off_y + 0.5)
        lm_x = crop_y1 + scale_y * (res * (float(x) / 27.) + off_x)
        lm_y = crop_x1 + scale_x * (res * (float(y) / 27.) + off_y)
        lms.append((lm_x,lm_y,conf))
    avg_conf = avg_conf / 66.
    return (avg_conf, np.array(lms))
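
Putting the two threads together, here is a hedged NumPy-only sketch of the same decoding applied to a flat output buffer as it might come back from the OAK-D, assuming the output reshapes to (198, 28, 28), i.e. 66 heatmaps plus 66 x-offset and 66 y-offset maps as in the code above. The random input is just a stand-in for the real NN output:

import math
import numpy as np

def logit(p, factor=16.0):
    # same clamped inverse sigmoid as logit() in tracker.py above
    p = min(max(p, 0.0000001), 0.9999999)
    return math.log(p / (1.0 - p)) / factor

def decode_landmark(tensor, i, res=223.0):
    # pick the hottest cell of heatmap i, then refine with the offset maps
    m = int(tensor[i].argmax())
    x, y = m // 28, m % 28
    conf = float(tensor[i][x, y])
    off_x = math.floor(res * logit(tensor[66 + i][x, y]) + 0.5)
    off_y = math.floor(res * logit(tensor[132 + i][x, y]) + 0.5)
    lm_x = res * (x / 27.0) + off_x
    lm_y = res * (y / 27.0) + off_y
    return lm_x, lm_y, conf  # coordinates in the 224x224 crop

heatmaps = np.random.rand(198, 28, 28).astype(np.float32)  # stand-in for NN output
print(decode_landmark(heatmaps, 30))                        # e.g. landmark index 30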

@PheebeUK

@TheMasterofBlubb How did you get on with converting the model to OpenVINO and generating OpenSeeFace-compatible packets?
