Impact of RoPE modifications on pretrained model behavior #6

@ChiWeiHsiao
Hi, this is really interesting work! I have a question about the impact of modifying the RoPE of the pretrained Flux-Kontext.

Since you modified the RoPE to incorporate depth-aware 3D coordinates and hierarchical resolutions, does this significantly alter the model's behavior compared to the pretrained model? I'm wondering whether accommodating the difference in RoPE design requires extensive retraining. Could you share training details, such as GPU hours and the number of iterations?
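For reference, my mental model of the change is an axis-split 3D RoPE, where the head dimension is partitioned across (depth, y, x) and each slice is rotated by its own coordinate. This is only my own sketch of the general technique; the axis order, `axes_dim` split, and function names below are my assumptions, not your actual implementation:

```python
import numpy as np

def rope_1d(pos, dim, theta=10000.0):
    """Rotary angles for one coordinate axis: returns cos/sin of shape (n, dim//2)."""
    freqs = 1.0 / (theta ** (np.arange(0, dim, 2) / dim))  # (dim//2,) inverse frequencies
    angles = np.outer(pos, freqs)                          # (n, dim//2)
    return np.cos(angles), np.sin(angles)

def apply_rope(x, cos, sin):
    """Rotate consecutive channel pairs of x (n, dim) by the given angles."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def rope_3d(x, coords, axes_dim=(16, 24, 24)):
    """Split the head dim across (depth, y, x) axes; axes_dim is a hypothetical split."""
    out, start = [], 0
    for axis, d in enumerate(axes_dim):
        cos, sin = rope_1d(coords[:, axis], d)
        out.append(apply_rope(x[:, start:start + d], cos, sin))
        start += d
    return np.concatenate(out, axis=1)
```

If the pretrained model already uses this kind of factorized rotation for 2D positions, then adding a depth axis only changes the coordinates fed in, not the rotation math itself (each rotation is norm-preserving), which is what makes me curious how much retraining the new coordinate distribution actually requires.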

Thank you for your time!
