The failing line:

```python
_ = xops.memory_efficient_attention(q, q, q)
```
It seems that in the latest xformers, the `memory_efficient_attention` function lives under `xformers.ops`, not at the top level of `xformers`:
```python
>>> xops.__version__
'0.0.32.post2'
```
Maybe `import xformers as xops` should be `import xformers.ops as xops`?
For reference:
https://github.com/facebookresearch/xformers/blob/c159edc05ae5a0192ab0558e834b946155790371/xformers/ops/fmha/__init__.py#L186