FedCLIPOT
We suggest using the following package versions:
clip==1.0
loraclip==0.1.0
numpy==1.22.0
opencv-python==4.9.0.80
openpyxl==3.1.2
Pillow==9.3.0
scikit-image==0.21.0
scikit-learn==1.1.3
scipy==1.10.0
tqdm==4.66.1
torch==1.13.1+cu117
torchvision==0.14.1+cu117
python pFedCLIP++.py
Run DN-1.py to simulate the FedCLIPOT algorithm. For convenience, we split the methodologies into 11 files; each file implements exactly one method.
parser.add_argument('--test_envs', type=int, nargs='+', default=[3]) # sets the global test set; with 4 clients, the default 3 (0-indexed) treats Client 4 as the global test client and the remaining clients as training clients.
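A minimal sketch of how the --test_envs flag partitions clients into training and global-test sets, assuming 4 clients indexed 0 to 3; the helper name split_clients is hypothetical, not from the repo.

```python
# Leave-one-client-out split, assuming 4 clients indexed 0-3.
ALL_CLIENTS = [0, 1, 2, 3]

def split_clients(test_envs):
    """Return (train_clients, test_clients) given the --test_envs indices."""
    held_out = set(test_envs)
    train_clients = [c for c in ALL_CLIENTS if c not in held_out]
    test_clients = [c for c in ALL_CLIENTS if c in held_out]
    return train_clients, test_clients

# With the default --test_envs [3], Client 4 (index 3) is the global test set:
train, test = split_clients([3])
print(train, test)  # [0, 1, 2] [3]
```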
models.py is the model backbone file.
clip_util.py contains the utilities CLIP uses. For FedAVG, MOON, and FedProx, set the parameters trainable as follows:
def freeze_param(model):
    for name, param in model.named_parameters():
        param.requires_grad = True
For FedCLIPOT, FedCLIP, PromptFL, CoCoOp, and LP++, set it to False instead.
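The two modes above differ only in the value assigned to requires_grad. Here is a runnable sketch of that switch, using a stand-in model object so the snippet does not require torch; FakeModel and set_param_grad are illustrative names, not from the repo.

```python
# Stand-in objects mimicking torch's named_parameters() interface.
class FakeParam:
    def __init__(self):
        self.requires_grad = True

class FakeModel:
    def __init__(self):
        self._params = {"visual.proj": FakeParam(), "text.proj": FakeParam()}

    def named_parameters(self):
        return self._params.items()

def set_param_grad(model, trainable):
    """trainable=True for FedAVG/MOON/FedProx; False for FedCLIPOT, FedCLIP, etc."""
    for name, param in model.named_parameters():
        param.requires_grad = trainable

model = FakeModel()
set_param_grad(model, False)  # freeze the backbone
print(all(not p.requires_grad for _, p in model.named_parameters()))  # True
```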
prepare_data_dg_clip.py is the dataloader CLIP will use. You can define the percentages for training, validation, and testing via:
l1, l2, l3 = int(l*0.6), int(l*0.2), int(l*0.2)
training.py contains the training function shared by all methods.
As a case study, the dataset directory is structured as follows (ModernOffice31 as an example):
./data/ModernOffice31/
    a/
        bike/
            frame_0001.jpg
            ...
        back_pack/
        bottle/
        .../
    d/
    s/
    w/
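The layout above is domain/class/image. A minimal sketch of walking such a tree with pathlib, building a {domain: {class: [image paths]}} index; index_dataset is a hypothetical helper, and the snippet creates a tiny throwaway tree so it runs standalone.

```python
import tempfile
from pathlib import Path

def index_dataset(root):
    """Map each domain directory to {class_name: sorted list of .jpg paths}."""
    index = {}
    for domain in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        index[domain.name] = {
            cls.name: sorted(str(f) for f in cls.glob("*.jpg"))
            for cls in sorted(domain.iterdir()) if cls.is_dir()
        }
    return index

# Build a miniature tree mirroring the ModernOffice31 example and index it.
with tempfile.TemporaryDirectory() as root:
    img = Path(root) / "a" / "bike" / "frame_0001.jpg"
    img.parent.mkdir(parents=True)
    img.touch()
    idx = index_dataset(root)
    print(idx["a"]["bike"][0])  # path ending in frame_0001.jpg
```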