Qrange-group


Repositories

Showing 6 of 6 repositories
  • SUR-adapter Public

    ACM MM'23 (oral). SUR-adapter enables pre-trained diffusion models to acquire the semantic understanding and reasoning capabilities of large language models, building a high-quality textual semantic representation for text-to-image generation.

    Python · 106 stars · MIT license · 2 forks · 7 issues · 0 pull requests · Updated Apr 24, 2024
  • Mirror-Gradient Public

    WWW'24. Mirror Gradient (MG) helps multimodal recommendation models reach flat local minima more easily than standard training.

    Python · 8 stars · MIT license · 1 fork · 0 issues · 0 pull requests · Updated Feb 22, 2024
  • SEM Public

    SEM automatically selects and integrates attention operators to compute attention maps.

    Python · 8 stars · MIT license · 2 forks · 0 issues · 0 pull requests · Updated Jun 16, 2023
  • SPEM Public

    SPEM adopts a self-adaptive pooling strategy that combines global max-pooling and global min-pooling with a lightweight module to produce the attention map.

    Python · 1 star · MIT license · 0 forks · 0 issues · 0 pull requests · Updated Jun 16, 2023
  • LSAS Public

    The lightweight sub-attention strategy (LSAS) uses high-order sub-attention modules to improve the original self-attention modules.

    Python · 3 stars · MIT license · 0 forks · 0 issues · 0 pull requests · Updated Jun 16, 2023
  • CEM Public

    EMNLP'22. CEM improves machine-human chatting handoff (MHCH) performance by correcting prediction bias and training an auxiliary cost simulator based on a causal graph of user state and labor cost, without requiring complex model crafting.

    Python · 11 stars · MIT license · 0 forks · 0 issues · 0 pull requests · Updated Oct 9, 2022
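The SEM entry above describes selecting and integrating several attention operators into one attention map. As a rough illustration of that idea (a hypothetical sketch in NumPy; the operator choices, gate parameters, and function names here are assumptions, not SEM's actual implementation), one can blend candidate operators with learned gate weights:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sem_mix(x, gate_logits):
    """Hypothetical SEM-style sketch: compute several candidate attention
    maps over a (N, D) feature set and integrate them with a learned
    softmax gate (gate_logits is an assumed trainable parameter)."""
    n, d = x.shape
    dot = softmax(x @ x.T / np.sqrt(d))     # scaled dot-product attention
    avg = np.full((n, n), 1.0 / n)          # uniform (average-pooling) attention
    ident = np.eye(n)                       # identity: no token mixing
    ops = np.stack([dot, avg, ident])       # (3, N, N) candidate operators
    w = softmax(gate_logits)                # (3,) mixture weights, sum to 1
    attn = np.tensordot(w, ops, axes=1)     # integrated (N, N) attention map
    return attn @ x, attn

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))
out, attn = sem_mix(x, np.array([0.2, -1.0, 0.5]))
```

Because each candidate map is row-stochastic and the gate weights form a convex combination, the integrated map is row-stochastic as well.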
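The SPEM entry above combines global max-pooling, global min-pooling, and a lightweight module to produce an attention map. A minimal sketch of that pooling idea (assumed shapes, weights, and function names; not the authors' implementation):

```python
import numpy as np

def spem_attention(x, w1, w2):
    """Hypothetical SPEM-style channel attention sketch.

    x: feature map of shape (C, H, W). Global max-pooling and global
    min-pooling give two per-channel descriptors; a lightweight
    two-layer module (w1, w2, assumed trainable) maps them to a
    per-channel attention value in (0, 1) that rescales x.
    """
    c = x.shape[0]
    flat = x.reshape(c, -1)
    max_pool = flat.max(axis=1)                      # global max-pooling, (C,)
    min_pool = flat.min(axis=1)                      # global min-pooling, (C,)
    desc = np.concatenate([max_pool, min_pool])      # pooled descriptor, (2C,)
    hidden = np.maximum(desc @ w1, 0.0)              # lightweight ReLU layer
    attn = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid -> (C,) attention
    return x * attn[:, None, None]                   # rescale each channel

# usage with random weights
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((16, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1
out = spem_attention(x, w1, w2)
```

Since the sigmoid keeps every attention value in (0, 1), the output never exceeds the input in magnitude, channel by channel.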

People

This organization has no public members.
