Hi, I was trying to use this model in a segregation pipeline for emotion classification on new data, but I am unable to get the desired results:
image = io.imread('imagedata.jpg')
bounding_box = [landmarks.min(axis=0)[0], landmarks.min(axis=0)[1],
                landmarks.max(axis=0)[0], landmarks.max(axis=0)[1]]
image, landmarks = transform_image_shape_no_flip(image, bb=bounding_box)
image = np.ascontiguousarray(image)
image = image.reshape((1, 3, 256, 256))
image = torch.Tensor(image)
#image = transform_image(image)
with torch.no_grad():
    out = net(image)
I am getting the landmarks from another model. This gives me negative expression values, and the emotion classification seems wrong. Am I missing some normalization step? I would really appreciate any support.
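For context, one thing I suspect is the `reshape((1, 3, 256, 256))` call: `io.imread` returns an HWC array, and `reshape` only reinterprets the flat buffer instead of moving the channel axis. A minimal sketch of what I believe the layout change should look like (assuming the network expects CHW float input scaled to [0, 1]; the dummy array stands in for the cropped image):

```python
import numpy as np

# Dummy HWC image, as io.imread would return: height x width x channels
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256, 3)).astype(np.float32)

# reshape((1, 3, 256, 256)) does NOT move the channel axis -- it only
# reinterprets the flat buffer, scrambling pixels across channels.
wrong = image.reshape((1, 3, 256, 256))

# transpose moves the channel axis to the front (HWC -> CHW), then add a
# batch dimension and scale to [0, 1], which many models expect.
right = np.ascontiguousarray(image.transpose(2, 0, 1))[None] / 255.0

print(right.shape)                        # (1, 3, 256, 256)
print(np.allclose(wrong / 255.0, right))  # False: different pixel layout
```

If that assumption about the expected input format is wrong, please let me know what preprocessing the model actually requires.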
@antoinetlc