LTXVid text2vid pipeline#208

Merged
entrpn merged 77 commits into main from vae-pipeline-cleaned
Jul 28, 2025

Conversation

@Serenagu525
Contributor

@Serenagu525 Serenagu525 commented Jul 23, 2025

Running instructions:
Create a new virtual environment with conda create, then run `bash setup.sh MODE=stable DEVICE=tpu`.

  1. In the folder src/maxdiffusion/models/ltx_video/utils, run:
    `python convert_torch_weights_to_jax.py --ckpt_path [LOCAL DIRECTORY FOR WEIGHTS] --transformer_config_path ../xora_v1.2-13B-balanced-128.json`
  2. In the repo folder, run:
    `python src/maxdiffusion/generate_ltx_video.py src/maxdiffusion/configs/ltx_video.yml output_dir="[SAME DIRECTORY]" config_path="src/maxdiffusion/models/ltx_video/xora_v1.2-13B-balanced-128.json"`
    Note: the quotes are required.
    Other generation parameters can be set in the ltx_video.yml file.

@entrpn entrpn self-requested a review July 24, 2025 13:11
```python
if not enable_single_replica_ckpt_restoring:
  item = {checkpoint_item: orbax.checkpoint.args.PyTreeRestore(item=abstract_unboxed_pre_state)}
  return checkpoint_manager.restore(latest_step, args=orbax.checkpoint.args.Composite(**item))
if checkpoint_item == " ":
```
Collaborator


should this be if checkpoint_item is None?

Contributor Author


If checkpoint_item is set to None, it cannot pass the check `if checkpoint_manager and checkpoint_item:` in max_utils.py, so I set it to an empty string to get around this.
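The truthiness behavior under discussion can be sketched in plain Python. `should_restore` and `mgr` below are illustrative stand-ins, not code from max_utils.py:

```python
# Illustrative sketch of the guard `if checkpoint_manager and checkpoint_item:`
# from max_utils.py. None and "" are falsy, so they fail the check; a single
# space " " is truthy and passes, which is why the code compares against " ".
def should_restore(checkpoint_manager, checkpoint_item):
    return bool(checkpoint_manager and checkpoint_item)

mgr = object()  # stand-in for a real checkpoint manager
print(should_restore(mgr, None))   # False
print(should_restore(mgr, ""))     # False
print(should_restore(mgr, " "))    # True
```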

Comment thread src/maxdiffusion/generate_ltx_video.py Outdated
```python
pipeline = LTXVideoPipeline.from_pretrained(config, enhance_prompt=enhance_prompt)
if config.pipeline_type == "multi-scale":
  pipeline = LTXMultiScalePipeline(pipeline)
# s0 = time.perf_counter()
```
Collaborator


remove commented out lines.

Contributor Author


Do we want to keep the time benchmarking in the code?
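One hedged option for keeping the benchmarking without commented-out lines is a flag-gated timing wrapper; `run_with_timing` is a hypothetical helper, not existing project code:

```python
import logging
import time

logger = logging.getLogger(__name__)

def run_with_timing(fn, *args, enable_timing=False, **kwargs):
    """Call fn, optionally logging its wall-clock duration, instead of
    leaving `s0 = time.perf_counter()` lines commented out in the source."""
    if not enable_timing:
        return fn(*args, **kwargs)
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    logger.info("%s took %.3fs", getattr(fn, "__name__", "fn"), time.perf_counter() - start)
    return result

total = run_with_timing(sum, [1, 2, 3], enable_timing=True)
print(total)  # 6
```

This keeps the measurement available behind a config flag while the default path pays no cost.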

Comment thread src/maxdiffusion/max_utils.py Outdated
```python
)
if state:
  state = state[checkpoint_item]
if checkpoint_item == " ":
```
Collaborator


why is checkpoint_item checking against " " instead of None?

```python
  skip_block_list=config.first_pass["skip_block_list"],
)
latents = result
print("first pass done")
```
Collaborator


use max_logger for print statements

@entrpn
Collaborator

entrpn commented Jul 24, 2025

python src/maxdiffusion/generate_ltx_video.py src/maxdiffusion/configs/ltx_video.yml output_dir=LOCAL_DIR

when I run these instructions, I get an error:

```
Traceback (most recent call last):
  File "/mnt/disks/external_disk/maxdiffusion/src/maxdiffusion/generate_ltx_video.py", line 20, in <module>
    from maxdiffusion.pipelines.ltx_video.ltx_video_pipeline import LTXVideoPipeline
  File "/mnt/disks/external_disk/maxdiffusion/src/maxdiffusion/pipelines/ltx_video/ltx_video_pipeline.py", line 20, in <module>
    from maxdiffusion.models.ltx_video.autoencoders.vae_torchax import TorchaxCausalVideoAutoencoder
  File "/mnt/disks/external_disk/maxdiffusion/src/maxdiffusion/models/ltx_video/autoencoders/vae_torchax.py", line 17, in <module>
    from maxdiffusion.models.ltx_video.autoencoders.causal_video_autoencoder import CausalVideoAutoencoder
  File "/mnt/disks/external_disk/maxdiffusion/src/maxdiffusion/models/ltx_video/autoencoders/causal_video_autoencoder.py", line 28, in <module>
    from diffusers.utils import logging
ModuleNotFoundError: No module named 'diffusers'
```

@entrpn
Collaborator

entrpn commented Jul 24, 2025

Do you need to update requirements.txt for new dependencies?

@Serenagu525
Contributor Author

Serenagu525 commented Jul 24, 2025

> when I run these instructions, I get an error: `ModuleNotFoundError: No module named 'diffusers'`

Will need to install diffusers. Is this okay? I can add this in the requirements.txt
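Beyond updating requirements.txt, a small dependency check can surface missing packages early with a clearer message than the raw import traceback above; `missing_deps` is a hypothetical helper, not part of the repo:

```python
import importlib.util

def missing_deps(packages):
    """Return the subset of top-level packages that cannot be imported,
    e.g. to report them before the pipeline starts loading weights."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# "json" is always present; the second name is deliberately bogus.
missing = missing_deps(["json", "some_nonexistent_package_xyz"])
print(missing)  # ['some_nonexistent_package_xyz']
```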

@Serenagu525 Serenagu525 reopened this Jul 24, 2025
@entrpn
Collaborator

entrpn commented Jul 24, 2025

> when I run these instructions, I get an error: `ModuleNotFoundError: No module named 'diffusers'`

> Will need to install diffusers. Is this okay? I can add this in the requirements.txt

Yes just update the requirements accordingly.

@entrpn entrpn merged commit 80771b1 into main Jul 28, 2025
3 of 5 checks passed
Perseus14 added a commit that referenced this pull request Apr 11, 2026
An empty file with a single-space filename (' ') was accidentally
committed in PR #208. This is an invalid path on Windows/NTFS,
causing git checkout, git stash, and other operations to fail with:

  error: invalid path ' '

This removes the empty file to fix Windows compatibility.
entrpn pushed a commit that referenced this pull request Apr 13, 2026
…375)

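As a hedged sketch (not tooling from this repo), a pre-commit style check could have flagged the single-space filename before it broke Windows checkouts:

```python
def invalid_windows_paths(paths):
    """Flag paths whose final component is invalid on Windows/NTFS:
    all-whitespace names, reserved device names, or a trailing space/dot."""
    reserved = {"CON", "PRN", "AUX", "NUL", "COM1", "LPT1"}
    bad = []
    for path in paths:
        name = path.rsplit("/", 1)[-1]
        if not name.strip() or name.upper() in reserved or name[-1] in " .":
            bad.append(path)
    return bad

print(invalid_windows_paths([" ", "src/maxdiffusion/max_utils.py"]))  # [' ']
```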
