Releases: Zuxier/xformers
TORCH2 xformers-0.0.14.dev0 CU118 FAST_MATH
Works for training.
TORCH2 xformers-0.0.14.dev0 CU118
The safest option.
xformers-0.0.15.dev0+1924b19.d20230111-cp310-cp310-win_amd64
1924b19.d20230111
a.k.a. the no-NaN edition
Fast math; supported architectures up to Lovelace.
CUDA 11.8
Torch 2 cu118 fast_math flash_attn_on -DXFORMERS_MEM_EFF_ATTENTION_DISABLE_BACKWARD
xformers build for Windows with Flash-Attention v0.2.4.
Built for extra speed, hopefully without the batch-size limitation.
NVCC_FLAGS=--use_fast_math -DXFORMERS_MEM_EFF_ATTENTION_DISABLE_BACKWARD
For compute capability: 6.0;6.1;6.2;7.0;7.2;8.0;8.6;8.9 (Pascal, Volta, Ampere, Lovelace)
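The semicolon-separated list above is the set of GPU compute capabilities the wheel's kernels were compiled for. As a minimal sketch (the helper below is hypothetical, not part of xformers), a GPU's capability can be checked against such a list; on a machine with torch installed, the live value comes from `torch.cuda.get_device_capability()`:

```python
# Hypothetical helper: check a GPU's compute capability against the
# semicolon-separated arch list these wheels were built with.
SUPPORTED_ARCHS = "6.0;6.1;6.2;7.0;7.2;8.0;8.6;8.9"

def is_supported(major: int, minor: int, arch_list: str = SUPPORTED_ARCHS) -> bool:
    """True if compute capability major.minor appears in arch_list."""
    return f"{major}.{minor}" in arch_list.split(";")

# An RTX 3060 (Ampere) reports compute capability 8.6:
print(is_supported(8, 6))  # True
# 7.5 (Turing) is not in this particular list:
print(is_supported(7, 5))  # False
```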
Torch 2 cu118 fast_math no_flash_attn
xformers build for Windows with Flash-Attention v0.2.4.
Built for extra speed; has a batch-size limitation in txt2img.
NVCC_FLAGS=--use_fast_math
XFORMERS_DISABLE_FLASH_ATTN=1
For compute capability: 6.0;6.1;6.2;7.0;7.2;8.0;8.6;8.9 (Pascal, Volta, Ampere, Lovelace)
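The build-time settings for this release can be collected into one environment sketch. This is written in POSIX-shell form for readability (the wheels themselves were built on Windows); it assumes a working CUDA 11.8 + torch 2 toolchain, and the flag names are taken verbatim from these notes:

```shell
# Sketch of the build environment implied by this release's notes
# (assumption: CUDA 11.8 toolkit and a torch 2 cu118 install are present).
export NVCC_FLAGS="--use_fast_math"
export XFORMERS_DISABLE_FLASH_ATTN=1
export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.2;8.0;8.6;8.9"
```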
Torch 2 cu118 fast_math
xformers build for Windows with Flash-Attention v0.2.4.
Built for extra speed.
NVCC_FLAGS=--use_fast_math
For compute capability: 6.0;6.1;6.2;7.0;7.2;8.0;8.6;8.9 (Pascal, Volta, Ampere, Lovelace)
Torch 2 cu118
xformers build for Windows with Flash-Attention v0.2.4.
For compute capability: 6.0;6.1;6.2;7.0;7.2;8.0;8.6;8.9 (Pascal, Volta, Ampere, Lovelace)
TORCH1.13 xformers-0.0.14.dev0 CU116
Built by d8ahazard on Python 3.9 with torch 1.13.1+cu116 for Linux.
xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl
Torch version: 1.13.1+cu116
Built by haqthat on Ubuntu 18.04 (GLIBC 2.27-3ubuntu1.6)
Python 3.10.9
TORCH_CUDA_ARCH_LIST="5.0;5.2;5.3;6.0;6.1;6.2;7.0;7.2;7.5;8.0;8.6"
NVCC_FLAGS="--use_fast_math"