Description
I would like to extend the multimodal library and the code in the ConvLoRA example.
When fine-tuning SAM with ConvLoRA, I want to run inference through SAM's own prompt encoder, feeding it points and labels, to check whether the model actually acquired the ability for the target task during fine-tuning rather than merely learning a good prompt encoding. This distinction is important. I hope the code can be extended to provide a promptable segmentation interface like SAM's, not just an end-to-end model.
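For reference, here is a minimal sketch of what point-prompt inference looks like with SAM's own prompt encoder, using Meta's segment_anything package. The checkpoint and image paths are placeholders, and loading the ConvLoRA-adapted weights into the image encoder is an assumption about how the fine-tuned model would be exported; it is not the library's current API.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Hypothetical paths: replace with the checkpoint produced by ConvLoRA fine-tuning
# (assumed to be exported with the adapted weights merged into the SAM backbone).
CHECKPOINT = "sam_vit_b_convlora_finetuned.pth"
IMAGE_PATH = "example.png"

# Build SAM and load the (assumed) adapted weights.
sam = sam_model_registry["vit_b"](checkpoint=CHECKPOINT)
sam.eval()

predictor = SamPredictor(sam)

# Compute the image embedding once for this image.
image = cv2.cvtColor(cv2.imread(IMAGE_PATH), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Point prompts: (x, y) coordinates with labels (1 = foreground, 0 = background).
point_coords = np.array([[320, 240]])
point_labels = np.array([1])

# Run SAM's prompt encoder + mask decoder on the point prompt.
masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
print("mask shapes:", masks.shape, "scores:", scores)
```

Exposing something like this predict-with-points path on the fine-tuned model is what I mean by a promptable segmentation function.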