Causality-Induced Positional Encoding for Transformer-Based Representation Learning of Non-Sequential Features
In this study, we propose the Causality-Aware Position Encoder (CAPE), a novel method for generating causality-aware positional encodings that extends the transformer architecture to data with non-sequential yet causally related features.