Spatially Separable Attention Transformer Network implemented in Pytorch

sharanmourya/Pytorch_STNet


Pytorch code for "A Spatially Separable Attention Mechanism For Massive MIMO CSI Feedback"

(c) Sharan Mourya, email: sharanmourya7@gmail.com

Introduction

This repository holds the PyTorch implementation of the original models described in the paper

Sharan Mourya, Sai Dhiraj Amuru, "A Spatially Separable Attention Mechanism For Massive MIMO CSI Feedback"

Requirements

Steps to follow

1) Download Dataset

For simulation purposes, the channel matrices are generated with the COST2100 channel model. The group of Chao-Kai Wen and Shi Jin provides a ready-made version of the COST2100 dataset on Dropbox.

2) Organize Dataset

Once the dataset is downloaded, we recommend organizing the folders as follows (a loading sketch is given after the layout):

├── STNet  # The cloned STNet repository
│   ├── stnet.py
├── data  # The data folder
│   ├── DATA_Htestin.mat
│   ├── ...
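With this layout in place, the .mat files can be read directly. The sketch below is only an illustration under assumptions: it presumes the CsiNet data convention, i.e., each file stores the truncated angular-delay domain CSI under the key 'HT' as 2048 real values per sample, and that the data folder sits next to the cloned repository. The actual loading code in stnet.py may differ.

```python
import os
import numpy as np
import scipy.io as sio

# Assumed layout: the STNet repo and the data folder are siblings (see tree above).
data_dir = os.path.join('..', 'data')

# Assumed key 'HT' (CsiNet convention): real-valued CSI of shape (num_samples, 2048)
mat = sio.loadmat(os.path.join(data_dir, 'DATA_Htestin.mat'))
x_test = mat['HT'].astype(np.float32)

# Reshape to (N, 2, 32, 32): real/imaginary channels of the 32x32 angular-delay CSI
x_test = x_test.reshape(-1, 2, 32, 32)
```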

3) Training STNet

First, choose the compression ratio (1/4, 1/8, 1/16, 1/32, or 1/64) by setting the variable encoded_dim to 512, 256, 128, 64, or 32, respectively.

Next, choose the scenario by assigning the variable envir to "indoor" or "outdoor".

Finally, run stnet.py to begin training; a minimal configuration sketch is given below.
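The following is only a sketch of the two settings described above: the variable names encoded_dim and envir come from this README, while the lookup dictionary and its placement are illustrative and may not match the actual structure of stnet.py.

```python
# Scenario: "indoor" or "outdoor"
envir = 'indoor'

# Compression ratio -> encoded_dim: the 2 x 32 x 32 = 2048-element CSI matrix
# is compressed into encoded_dim real values by the encoder.
encoded_dim_for_ratio = {4: 512, 8: 256, 16: 128, 32: 64, 64: 32}
encoded_dim = encoded_dim_for_ratio[4]   # 1/4 compression ratio
```

With both variables set, training starts by running the file mentioned above.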

Results

The normalized mean square error (NMSE) and the number of floating-point operations (FLOPs) achieved by STNet for different compression ratios and scenarios are tabulated below.
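For reference, the NMSE is assumed here to follow the standard definition used in CSI feedback work, reported in dB:

$$\mathrm{NMSE} = \mathbb{E}\left\{ \frac{\lVert \mathbf{H} - \hat{\mathbf{H}} \rVert_2^2}{\lVert \mathbf{H} \rVert_2^2} \right\}, \qquad \mathrm{NMSE}_{\mathrm{dB}} = 10 \log_{10}\left(\mathrm{NMSE}\right),$$

where $\mathbf{H}$ is the original angular-delay domain channel matrix and $\hat{\mathbf{H}}$ is its reconstruction at the decoder.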

| S.No | Compression Ratio | Indoor NMSE (dB) | Outdoor NMSE (dB) | FLOPs |
|------|-------------------|------------------|-------------------|-------|
| 1    | 1/4               | -31.81           | -12.91            | 5.22M |
| 2    | 1/8               | -21.28           | -8.53             | 4.38M |
| 3    | 1/16              | -15.43           | -5.72             | 3.96M |
| 4    | 1/32              | -9.42            | -3.51             | 3.75M |
| 5    | 1/64              | -7.81            | -2.46             | 3.65M |
