
Commit

v 0.0.7 (#84)
* Add parameter to control rank of decomposition (#28)

* ENH: allow controlling rank of approximation

* Training script accepts lora_rank

* feat : statefully monkeypatch different loras + example ipynb + readme

* Fix lora inject, added weight self apply lora (#39)

* Develop (#34)

Co-authored-by: brian6091 <brian6091@gmail.com>

* release : version 0.0.4, now able to tune rank and add loras dynamically

* readme : add brian6091's discussions

* fix:inject lora in to_out module list

* feat: added weight self apply lora

* chore: add import copy

* fix: readded r

Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: SimoRyu <cloneofsimo@korea.ac.kr>

* Revert "Fix lora inject, added weight self apply lora (#39)" (#40)

This reverts commit fececf3.

* fix : rank bug in monkeypatch

* fix cli fix

* visualization of the effect of LR

* Fix save_steps, max_train_steps, and logging (#45)

* v 0.0.5 (#42)

* Fix save_steps, max_train_steps, and logging

Corrected indenting so that the save_steps and max_train_steps checks and the log updates run every step instead of only at the end of an epoch (see the sketch after this entry).

Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>
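
For clarity, here is a minimal, self-contained sketch of the loop structure this fix restores: the save/log/stop checks sit inside the per-batch loop and run every step, not once per epoch. All names and numbers are illustrative, not the training script's actual variables.

```python
# Illustrative sketch only: stand-in loop showing per-step checks.
save_steps = 4
max_train_steps = 10

global_step = 0
done = False
for epoch in range(100):
    for batch in range(3):  # stand-in for iterating the real dataloader
        # ... forward pass, loss.backward(), optimizer.step() would go here ...
        global_step += 1

        if global_step % save_steps == 0:   # checked every step, not per epoch
            print(f"saving checkpoint at step {global_step}")
        print(f"log: epoch={epoch} global_step={global_step}")

        if global_step >= max_train_steps:  # also checked every step
            done = True
            break
    if done:
        break
```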

* Enable resuming (#52)

* v 0.0.5 (#42)

* Enable resume training unet/text encoder (#48)

* Enable resume training unet/text encoder

New flags --resume_text_encoder and --resume_unet accept paths to .pt files to resume from.
Make sure to change the output directory from the previous training session, or else the .pt files will be overwritten, since training does not resume from the previous global step.

* Load weights from .pt with inject_trainable_lora

Adds a new loras argument to the inject_trainable_lora function, which accepts a path to a .pt file containing previously trained weights (see the sketch after this entry).

Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>
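
For clarity, a self-contained sketch of the resume mechanics described in #48: LoRA up/down weights saved to a .pt file by one run are loaded back into fresh LoRA layers by the next. In the real scripts this is driven by the --resume_unet / --resume_text_encoder flags and the new loras argument of inject_trainable_lora; the shapes and file name below are illustrative stand-ins, not the repo's internals.

```python
# Illustrative sketch only: save LoRA tensors, then "resume" by loading them
# back into freshly created LoRA layers.
import torch

# Run 1: pretend these are the trained LoRA weights saved at the end of training.
trained = [torch.randn(4, 320), torch.randn(320, 4)]
torch.save(trained, "lora_weight.pt")

# Run 2: fresh LoRA layers, then resume by copying the saved tensors into them.
lora_down = torch.nn.Linear(320, 4, bias=False)
lora_up = torch.nn.Linear(4, 320, bias=False)

loaded = torch.load("lora_weight.pt")
with torch.no_grad():
    lora_down.weight.copy_(loaded[0])  # (4, 320)
    lora_up.weight.copy_(loaded[1])    # (320, 4)
```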

* feat : low-rank pivotal tuning

* feat :  pivotal tuning

* v 0.0.6

* Learning rate switching & fix indent (#57)

* Learning rate switching & fix indent

Make the learning rate switch from training the textual inversion embedding to the unet/text encoder after unfreeze_lora_step (see the sketch after this entry).
I think this is how it was explained in the linked paper(?)

Either way, it might be useful to add another parameter that activates unet/text encoder training at a chosen step instead of at unfreeze_lora_step.
This would give the user more control.

Also fix indenting so that save_steps and logging work properly.

* Fix indent

Fix the accelerator.wait_for_everyone() indent to match the original DreamBooth training script.
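
For clarity, a self-contained sketch of the learning-rate switch described in #57: the textual-inversion embedding trains first, then at unfreeze_lora_step the optimizer moves to the unet/text-encoder LoRA parameters with their own rate. Shapes, step counts, and rates below are illustrative, not the script's defaults.

```python
# Illustrative sketch only: switch what is being optimized (and its learning
# rate) once global_step reaches unfreeze_lora_step.
import torch

ti_params = [torch.nn.Parameter(torch.zeros(1, 768))]    # stand-in placeholder-token embedding
lora_params = [torch.nn.Parameter(torch.zeros(4, 320))]  # stand-in LoRA weights

learning_rate_ti = 5e-4
learning_rate_unet = 1e-4
unfreeze_lora_step = 500

optimizer = torch.optim.AdamW(ti_params, lr=learning_rate_ti)

for global_step in range(1000):
    if global_step == unfreeze_lora_step:
        # Switch both the trained parameters and the learning rate.
        optimizer = torch.optim.AdamW(lora_params, lr=learning_rate_unet)
    # ... loss.backward(); optimizer.step(); optimizer.zero_grad() ...
```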

* Re:Fix indent (#58)

Fix indenting of accelerator.wait_for_everyone() to match the original DreamBooth training script.

* ff now training default

* feat : dataset

* feat : utils to back training

* readme : more content (citations, etc.)

* fix : weight init

* Feature/monkeypatch improvements (#73)

* Refactor module replacement to work with nested Linears

* Make monkeypatch_remove_lora remove all LoraInjectedLinear instances
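
For clarity, a minimal sketch of what the nested-Linear refactor and monkeypatch_remove_lora are doing: walk the module tree recursively, swap every nn.Linear for a LoRA-wrapped layer, and later undo the swap everywhere. The LoraInjectedLinear below is a simplified stand-in for the class in lora_diffusion, not the repo's exact implementation.

```python
# Simplified stand-ins, for illustration only.
import torch.nn as nn

class LoraInjectedLinear(nn.Module):
    """Minimal stand-in for lora_diffusion's LoraInjectedLinear."""
    def __init__(self, in_features, out_features, bias=False, r=4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias)
        self.lora_down = nn.Linear(in_features, r, bias=False)
        self.lora_up = nn.Linear(r, out_features, bias=False)
        nn.init.normal_(self.lora_down.weight, std=1 / r)
        nn.init.zeros_(self.lora_up.weight)

    def forward(self, x):
        return self.linear(x) + self.lora_up(self.lora_down(x))

def replace_nested_linears(module, r=4):
    """Recursively swap every nn.Linear, however deeply nested, for a LoRA layer."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            lora = LoraInjectedLinear(
                child.in_features, child.out_features, child.bias is not None, r=r
            )
            lora.linear.weight.data = child.weight.data
            if child.bias is not None:
                lora.linear.bias.data = child.bias.data
            setattr(module, name, lora)
        else:
            replace_nested_linears(child, r=r)

def remove_all_loras(module):
    """Undo the injection everywhere, restoring plain nn.Linear modules."""
    for name, child in module.named_children():
        if isinstance(child, LoraInjectedLinear):
            setattr(module, name, child.linear)
        else:
            remove_all_loras(child)

# Works on arbitrarily nested containers:
model = nn.Sequential(nn.Linear(8, 8), nn.Sequential(nn.Linear(8, 4)))
replace_nested_linears(model, r=2)
remove_all_loras(model)
```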

* Turn off resizing images with --resize=False (#71)

* Make image resize optional with --resize

Toggle off image resizing using --resize=False. The default is True to maintain consistent operation.

* Revert "Turn off resizing images with --resize=False (#71)" (#77)

This reverts commit 39affb7.

* Use safetensors to store Loras (#74)

* Add safetensors support

* Add some documentation for the safetensors load and save methods
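
For clarity, a generic sketch of storing LoRA tensors in the safetensors format. It uses the safetensors.torch API directly rather than the new save/load helpers this PR adds to the repo, and the tensor names and shapes are illustrative.

```python
# Illustrative sketch only: safetensors stores a flat {name: tensor} dict.
import torch
from safetensors.torch import load_file, save_file

lora_up = torch.randn(320, 4)
lora_down = torch.randn(4, 320)

save_file(
    {"unet.block_0.lora_up": lora_up, "unet.block_0.lora_down": lora_down},
    "lora_weight.safetensors",
)

tensors = load_file("lora_weight.safetensors")
print(sorted(tensors.keys()))
```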

* Fix typing-related syntax errors in Python < 3.10 introduced in recent refactor (#79)

* Fix the --resize=False option (#81)

* Make image resize optional with --resize

Toggle off image resizing using --resize=False. The default is True to maintain consistent operation.

* Fix resize==False functionality

* Update train_lora_pt_caption.py

* Update train_lora_w_ti.py

* Pivotal Tuning with hackable training code for CLI (#83)

* feat : save utils on lora

* fix : stochastic attribute

* feat : cleaner training code

* fix : bit of bugs on inspect and trainer

* fix : moved pti training to cli

* feat : patch now accepts target arg

* fix : gelu in target

* fix : gradient being way too large : autocast was the problem

* fix : hflip

* fix : example running well!

* merge master

Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>
Co-authored-by: hdeezy <82070413+hdeezy@users.noreply.github.com>
Co-authored-by: Hamish Friedlander <hafriedlander@gmail.com>
5 people authored Dec 25, 2022
1 parent 8d9d47e commit 66c18d3
Showing 14 changed files with 1,155 additions and 288 deletions.
482 changes: 482 additions & 0 deletions lora_diffusion/cli_lora_pti.py

Large diffs are not rendered by default.

59 changes: 41 additions & 18 deletions lora_diffusion/dataset.py
@@ -185,22 +185,29 @@ def __getitem__(self, index):


class PivotalTuningDatasetCapation(Dataset):
"""
A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
It pre-processes the images and tokenizes the prompts.
"""

def __init__(
self,
instance_data_root,
learnable_property,
placeholder_token,
stochastic_attribute,
tokenizer,
class_data_root=None,
class_prompt=None,
size=512,
h_flip=True,
center_crop=False,
color_jitter=False,
resize=True,
):
self.size = size
self.center_crop = center_crop
self.tokenizer = tokenizer
self.resize = resize

self.instance_data_root = Path(instance_data_root)
if not self.instance_data_root.exists():
@@ -210,7 +217,6 @@ def __init__(
self.num_instance_images = len(self.instance_images_path)

self.placeholder_token = placeholder_token
self.stochastic_attribute = stochastic_attribute.split(",")

self._length = self.num_instance_images

@@ -224,22 +230,38 @@ def __init__(
else:
self.class_data_root = None

self.image_transforms = transforms.Compose(
[
transforms.Resize(
size, interpolation=transforms.InterpolationMode.BILINEAR
),
transforms.CenterCrop(size)
if center_crop
else transforms.RandomCrop(size),
transforms.ColorJitter(0.2, 0.1)
if color_jitter
else transforms.Lambda(lambda x: x),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5]),
]
)
if resize:
self.image_transforms = transforms.Compose(
[
transforms.Resize(
size, interpolation=transforms.InterpolationMode.BILINEAR
),
transforms.ColorJitter(0.2, 0.1)
if color_jitter
else transforms.Lambda(lambda x: x),
transforms.RandomHorizontalFlip()
if h_flip
else transforms.Lambda(lambda x: x),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5]),
]
)
else:
self.image_transforms = transforms.Compose(
[
transforms.CenterCrop(size)
if center_crop
else transforms.Lambda(lambda x: x),
transforms.ColorJitter(0.2, 0.1)
if color_jitter
else transforms.Lambda(lambda x: x),
transforms.RandomHorizontalFlip()
if h_flip
else transforms.Lambda(lambda x: x),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5]),
]
)

def __len__(self):
return self._length
@@ -255,6 +277,7 @@ def __getitem__(self, index):

text = self.instance_images_path[index % self.num_instance_images].stem

# print(text)
example["instance_prompt_ids"] = self.tokenizer(
text,
padding="do_not_pad",
(Remaining changes in this file and the other changed files are not shown.)
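
For context, a hedged usage sketch of the dataset class changed above, exercising the new h_flip and resize arguments. Only the parameter names come from the diff; the tokenizer checkpoint, paths, and values are illustrative assumptions.

```python
# Hedged usage sketch -- paths, tokenizer checkpoint, and values are
# illustrative; only the parameter names are taken from the diff above.
from transformers import CLIPTokenizer
from lora_diffusion.dataset import PivotalTuningDatasetCapation

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

dataset = PivotalTuningDatasetCapation(
    instance_data_root="./instance_images",  # folder of training images
    learnable_property="object",
    placeholder_token="<s1>",
    stochastic_attribute="",                 # no longer split internally after this commit
    tokenizer=tokenizer,
    size=512,
    h_flip=True,       # horizontal flip can now be toggled
    center_crop=False,
    color_jitter=False,
    resize=True,       # set False to keep the original image resolution
)
print(len(dataset))
```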
