pcom

Inconsistently Multi-Modal Few-Shot Learning

In many tasks, we have numerous input modalities (joint skeleton, RGB, accelerometer), but few datasets contain all of these modalities, and even fewer contain them for the task we'd like. In this experiment, we build a multi-input network that learns a representation across many modalities and is capable of training on datasets that have only one or a few of the input modalities. We can feed dummy data into the network branches for the missing modalities. We believe a generative network could also "hallucinate" the missing data for the activity.
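
As a minimal sketch of the dummy-data idea, assuming a PyTorch setup with one encoder branch per modality (the class, modality names, and dimensions below are illustrative, not the lab's actual implementation):

```python
import torch
import torch.nn as nn

class MultiModalEncoder(nn.Module):
    """Encodes whichever modalities a batch provides; missing modalities
    are replaced with zero ("dummy") inputs so the fusion layer always
    sees a fixed-size vector."""

    def __init__(self, modality_dims, embed_dim=128):
        super().__init__()
        # One small encoder branch per modality (skeleton, RGB, accelerometer, ...).
        self.branches = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, embed_dim), nn.ReLU())
            for name, dim in modality_dims.items()
        })
        self.embed_dim = embed_dim
        self.fusion = nn.Linear(embed_dim * len(modality_dims), embed_dim)

    def forward(self, inputs):
        # `inputs` maps modality name -> tensor; absent modalities get
        # zero tensors as dummy data.
        batch_size = next(iter(inputs.values())).shape[0]
        embeddings = [
            branch(inputs[name]) if name in inputs
            else torch.zeros(batch_size, self.embed_dim)
            for name, branch in self.branches.items()
        ]
        return self.fusion(torch.cat(embeddings, dim=1))

# Example: a dataset with skeleton and accelerometer data but no RGB.
dims = {"skeleton": 75, "rgb": 2048, "accel": 3}
model = MultiModalEncoder(dims)
batch = {"skeleton": torch.randn(4, 75), "accel": torch.randn(4, 3)}
features = model(batch)  # shape: (4, 128)
```

Zero tensors are only a stand-in; a generative model could instead synthesize the missing branch's input, as suggested above.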

About

Inconsistently Multi-Modal Few-Shot Learning Research in the Computational Behavioral Analysis Lab @ GT
