Api cleanup #648

Merged: 4 commits merged into dev from api-cleanup on Jul 2, 2024
Conversation

constantinpape (Contributor)

No description provided.

@constantinpape constantinpape changed the base branch from master to dev July 1, 2024 15:19
@constantinpape (Contributor, Author)

I went ahead and cleaned up the function signatures related to both the LoRA and the 3D models.
This will require some changes in your code (I will hold off on merging so that you can check them; I pinged you in the most important places in the code).
The interface is now more unified, especially for the 3D models, which use the same function signatures as the original SAM model, making it easier to keep the training code compatible.

cc @anwai98 @lufre1 @caroteu

@constantinpape (Contributor, Author)

Also, I created a new submodule, models, moved everything related to custom models there, and added some unit tests for them.
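
A minimal sketch of the kind of unit test this could refer to; the micro_sam.models import path and the constructor arguments are assumptions for illustration, not the actual tests from this PR:

```python
import unittest

import torch


class TestSam3dModel(unittest.TestCase):
    def test_get_sam_3d_model(self):
        # Assumed import path and constructor arguments -- illustrative only.
        from micro_sam.models import get_sam_3d_model

        model = get_sam_3d_model(device="cpu", n_classes=2, image_size=256)
        self.assertIsInstance(model, torch.nn.Module)


if __name__ == "__main__":
    unittest.main()
```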



```diff
-def get_3d_sam_model(
+def get_sam_3d_model(
```
constantinpape (Contributor, Author)

I changed these names to match the sam_3d name order we use everywhere else, cc @lufre1
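
For callers this is purely a rename; a minimal sketch, assuming the micro_sam.models import path and illustrative arguments:

```python
# Renamed in this PR: get_3d_sam_model -> get_sam_3d_model.
from micro_sam.models import get_sam_3d_model  # assumed import path

model = get_sam_3d_model(device="cpu", n_classes=2, image_size=512)  # assumed arguments
```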

```diff
-# unclear how batches are handled
-def forward(self, batched_input, multimask_output, image_size) -> torch.Tensor:
-    return self._forward_train(batched_input, multimask_output, image_size)
+def forward(
```
constantinpape (Contributor, Author)

This now matches the input and output signature of the original SAM model.
cc @lufre1 @anwai98
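
For reference, the original SAM forward contract that the 3D model now mirrors (as in segment_anything's Sam.forward); the stub body below is a schematic illustration of the input/output structure, not the implementation from this PR:

```python
from typing import Any, Dict, List

import torch
import torch.nn as nn


class Sam3dSketch(nn.Module):  # hypothetical class name for illustration
    def forward(
        self,
        batched_input: List[Dict[str, Any]],
        multimask_output: bool,
    ) -> List[Dict[str, torch.Tensor]]:
        # Each input dict carries an "image" tensor plus optional prompts
        # ("point_coords", "point_labels", "boxes", "mask_inputs"); each output
        # dict carries "masks", "iou_predictions" and "low_res_logits".
        outputs = []
        for record in batched_input:
            h, w = record["original_size"]
            n = 3 if multimask_output else 1
            outputs.append({
                "masks": torch.zeros(1, n, h, w, dtype=torch.bool),
                "iou_predictions": torch.zeros(1, n),
                "low_res_logits": torch.zeros(1, n, 256, 256),
            })
        return outputs
```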

```diff
-use_lora: bool = False,
-rank: Optional[int] = None,
+lora_rank: Optional[int] = None,
+lora_kwargs: Optional[Dict] = None,
```
constantinpape (Contributor, Author)

I removed use_lora, since it is redundant with rank=None, and renamed rank -> lora_rank to have a more explicit name.
I also added lora_kwargs, so that the PEFT SAM layers can be customized from here.
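
What this means at a call site; the function name get_sam_model is taken from the review discussion, while the import path and the lora_kwargs key are assumptions for illustration:

```python
from micro_sam.util import get_sam_model  # assumed import path

# Before this PR (hypothetical old call): get_sam_model(..., use_lora=True, rank=4)
# Now lora_rank=None (the default) disables LoRA and an integer enables it:
model = get_sam_model(model_type="vit_b", lora_rank=4)

# lora_kwargs is forwarded to the PEFT SAM layers; the key shown is hypothetical:
model = get_sam_model(model_type="vit_b", lora_rank=4, lora_kwargs={"alpha": 8})
```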

Contributor

Looks good to me. Thanks!

Contributor

For me as well 🤝

anwai98 (Contributor) left a comment

Thanks @constantinpape for the refactoring. This is GTG from my side.

Let's merge this once you think this is ready.

(PS: I was wondering whether TrainableSAM is adapted to PEFT_Sam now, since the test case seems to build it in two stages, get_sam_model + PEFT_Sam, where it could be done in one stage with lora_rank. But that's really minor and I can also take care of it later, once we look at the LoRA experiments; see the sketch below.)
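
The two construction paths being contrasted, as a sketch; the names follow the comment above, while the import paths and exact signatures are assumptions:

```python
from micro_sam.models.peft_sam import PEFT_Sam  # assumed import path
from micro_sam.util import get_sam_model        # assumed import path

# Two-stage, as in the current test case: build the base model, then wrap it.
base_model = get_sam_model(model_type="vit_b")
peft_model = PEFT_Sam(base_model, rank=4)  # assumed wrapper signature

# One-stage, using the lora_rank parameter introduced in this PR:
peft_model = get_sam_model(model_type="vit_b", lora_rank=4)
```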

@constantinpape (Contributor, Author)

Thanks for checking everyone, I will go ahead and merge!

@constantinpape constantinpape merged commit 6f7db9d into dev Jul 2, 2024
3 checks passed
@constantinpape constantinpape deleted the api-cleanup branch July 2, 2024 08:25
lufre1 pushed a commit that referenced this pull request Jul 4, 2024
Clean up interfaces related to 3d models and PEFT