Causality-Induced Positional Encoding for Transformer-Based Representation Learning of Non-Sequential Features

In this study, we propose the Causality-Aware Position Encoder (CAPE), a novel method for generating causality-aware positional encodings that extend the transformer architecture to data with non-sequential yet causally related features.
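As a rough illustration only (not the repository's actual code or API), one way to realize such an encoding is to derive each feature's position from its pairwise distances on a causal graph and add the result to the feature embeddings before a standard transformer encoder. The module name, shapes, and the linear projection below are all hypothetical assumptions.

```python
# Minimal sketch (hypothetical, not CAPE's implementation): positional
# encodings computed from a causal-distance matrix instead of sequence order.
import torch
import torch.nn as nn


class CausalPositionalEncoding(nn.Module):
    """Hypothetical module: embeds each feature's position from its row of
    pairwise causal-graph distances rather than from a sequence index."""

    def __init__(self, num_features: int, d_model: int):
        super().__init__()
        # Project each feature's causal-distance profile to a d_model vector.
        self.proj = nn.Linear(num_features, d_model)

    def forward(self, x: torch.Tensor, causal_dist: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features, d_model) feature embeddings
        # causal_dist: (num_features, num_features) distances on the causal graph
        pos = self.proj(causal_dist)   # (num_features, d_model)
        return x + pos.unsqueeze(0)    # broadcast positions over the batch


# Toy usage: 5 causally related (non-sequential) features.
num_features, d_model = 5, 16
enc = CausalPositionalEncoding(num_features, d_model)
x = torch.randn(2, num_features, d_model)
dist = torch.randint(0, 4, (num_features, num_features)).float()
out = enc(x, dist)  # can then be fed into an nn.TransformerEncoder
```

The design choice sketched here, encoding each feature by its distance profile to every other feature, is just one plausible reading of "causality-induced" positions; the paper and repository should be consulted for the actual formulation.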
