
PIAL-Net

  1. 📎 Paper Link
  2. 💡 Abstract
  3. 📖 Method
  4. 📂 Dataset
  5. 📊 Experimental Results
  6. ✉️ Statement
  7. ✨ Other Relevant Works
  8. 🔍 Citation

📎 Paper Link

  • Leverage Interactive Affinity for Affordance Learning [PDF]

Authors: Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, Dacheng Tao

💡 Abstract

Perceiving potential "action possibilities" (i.e., affordance) regions of images and learning interactive functionalities of objects from human demonstration is a challenging task due to the diversity of human-object interactions. Prevailing affordance learning algorithms often adopt the label assignment paradigm and presume that there is a unique relationship between functional region and affordance label, yielding poor performance when adapting to unseen environments with large appearance variations. In this paper, we propose to leverage interactive affinity for affordance learning, i.e., extracting interactive affinity from human-object interaction and transferring it to non-interactive objects. Interactive affinity, which represents the contacts between different parts of the human body and local regions of the target object, can provide inherent cues of interconnectivity between humans and objects, thereby reducing the ambiguity of the perceived action possibilities. Specifically, we propose a pose-aided interactive affinity learning framework that exploits human pose to guide the network to learn the interactive affinity from human-object interactions. Particularly, a keypoint heuristic perception (KHP) scheme is devised to exploit the keypoint association of human pose to alleviate the uncertainties due to interaction diversities and contact occlusions. Besides, a contact-driven affordance learning (CAL) dataset is constructed by collecting and labeling over 5,000 images. Experimental results demonstrate that our method outperforms the representative models regarding objective metrics and visual quality.


Interactive affinity. (a) This paper explores the associations of interactable regions between diverse images by considering the context of contact regions with different body parts. (b) This paper considers leveraging the connection of human pose keypoints to alleviate the uncertainties due to interaction diversities and contact occlusions.


📖 Method


Overview of the proposed pose-aided interactive affinity learning framework. Our model mainly consists of an interactive feature enhancement (IFE) module and a keypoint heuristic perception (KHP) scheme.

📂 Dataset


You can download the CAL dataset from [Google Drive | Baidu Pan (extraction code: ap83)].

Some examples and properties of the Contact-driven Affordance Learning (CAL) dataset. (a) Statistics on the quantity of interactive and non-interactive images in each affordance category. (b) Confusion matrix for each affordance category interacting with body parts. (c) Some examples of interactive and non-interactive images and annotations in the dataset.

📊 Experimental Results


The results of different methods on the CAL dataset.


Visualization of prediction results. We show the results of our model alongside the few-shot segmentation model HSNet [33], the human pose estimation model HRFormer [68], and the segmentation model SegFormer [61].

✉️ Statement

For any other questions, please contact lhc12@mail.ustc.edu.cn or wzhai056@ustc.edu.cn.

✨ Other Relevant Works

1. The paper "One-Shot Affordance Detection" was accepted at IJCAI 2021; the paper and code are available from the [link].

2. The paper "Phrase-Based Affordance Detection via Cyclic Bilateral Interaction" was accepted by IEEE Transactions on Artificial Intelligence (T-AI); the paper and code are available from the [link].

3. The paper "Learning Affordance Grounding from Exocentric Images" was accepted at CVPR 2022; the paper and code are available from the [link].

4. The paper "Grounding 3D Object Affordance from 2D Interactions in Images" and its code are available from the [link].

🔍 Citation

@InProceedings{Luo_2023_CVPR,
    author    = {Luo, Hongchen and Zhai, Wei and Zhang, Jing and Cao, Yang and Tao, Dacheng},
    title     = {Leverage Interactive Affinity for Affordance Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {6809-6819}
}
