Difference between the two versions of signflipping? #78

Closed

imamtom opened this issue Jan 8, 2024 · 0 comments

imamtom commented Jan 8, 2024

When I ran the Mean aggregator under the previous version of the signflipping attack, the attack was very effective: the accuracy of the global model aggregated by Mean was only 10%. Here is the code for the previous signflipping attack.

import torch

class SignflippingClient(ByzantineClient):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def local_training(self, data_batches):
        for data, target in data_batches:
            data, target = data.to(self.device), target.to(self.device)
            data, target = self.on_train_batch_begin(data=data, target=target)
            self.optimizer.zero_grad()

            output = self.model(data)
            loss = torch.clamp(self.loss_func(output, target), 0, 1e5)
            loss.backward()
            # Flip the sign of every gradient before the optimizer step,
            # so each local step ascends the loss instead of descending it.
            for name, p in self.model.named_parameters():
                p.grad.data = -p.grad.data
            self.optimizer.step()
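
For context, a minimal plain-PyTorch sketch (not blades code) of what this sign flip does: with the gradient negated before the step, SGD climbs the loss instead of descending it.

import torch

w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w ** 2).sum()  # minimized at w = 0
loss.backward()        # honest gradient: w.grad == 2.0
w.grad = -w.grad       # the sign flip
opt.step()             # w becomes 1.2 instead of 0.8

print(w)  # tensor([1.2000], requires_grad=True): moved away from the minimum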

But when I run the signflipping attack on the current version, Mean converges, and I'm not sure what the problem is.

class SignFlipAdversary(Adversary):
    def on_algorithm_start(self, algorithm: Algorithm):
        class SignFlipCallback(ClientCallback):
            def on_backward_end(self, task):
                # Flip the sign of every gradient after backward() runs.
                model = task.model
                for _, para in model.named_parameters():
                    para.grad.data = -para.grad.data

        # Attach the callback to every adversary-controlled client.
        for client in self.clients:
            client.to_malicious(callbacks_cls=SignFlipCallback, local_training=True)
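
A plain-PyTorch sanity check, independent of blades, suggests the flip itself is not the issue: negating the gradient via a backward hook is identical to negating it inline between backward() and step(). So if the two attack versions behave differently, the likely cause is when (or whether) the callback actually fires in the training loop.

import torch

def train_step(use_hook: bool):
    torch.manual_seed(0)
    w = torch.randn(3, requires_grad=True)
    if use_hook:
        # Flip the gradient inside the autograd pass, callback-style.
        w.register_hook(lambda g: -g)
    opt = torch.optim.SGD([w], lr=0.1, momentum=0.9)
    loss = (w ** 2).sum()
    loss.backward()
    if not use_hook:
        # Flip the gradient inline, old-style.
        w.grad = -w.grad
    opt.step()
    return w.detach()

print(torch.allclose(train_step(True), train_step(False)))  # True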

My config is:

fedavg_blades:
  run: FEDAVG
  stop:
    training_iteration: 200
    # train_loss: 100000

  config:
    random_seed:
        # grid_search: [122, 123]
      grid_search: [111]
      # grid_search: [111, 112, 123, 124, 125]
    dataset_config:
      type: MNIST
      num_clients: 20
      train_batch_size: 128

    evaluation_interval: 5


    num_remote_workers: 0
    num_gpus_per_worker: 0.6
    num_cpus_per_worker: 0
    num_cpus_for_driver: 8
    num_gpus_for_driver: 0.3

    global_model: mlp

    client_config:
      lr: 0.1
      momentum:
        grid_search: [0.9]

    server_config:
      aggregator:
        grid_search:
          - type: Mean

      optimizer:
        type: SGD
        lr: 1
        # lr_schedule: [[0, 0.1], [1500, 0.1], [1501, 0.01], [2000, 0.01]]
        momentum:
          grid_search: [0.0]
          # grid_search: [0.0, 0.5, 0.9]

    num_malicious_clients:
      grid_search: [8]

    adversary_config:
      grid_search:
        - type: blades.adversaries.SignFlipAdversary
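
As an aside on the config: if blades resolves these YAML files through Ray Tune (the run/stop/config layout follows the Tune/RLlib tuned-example format), each grid_search list expands into one trial per listed value. A rough hand-written Python equivalent, assuming Tune semantics, so with these single-element lists exactly one trial runs:

from ray import tune

search_space = {
    "random_seed": tune.grid_search([111]),
    "num_malicious_clients": tune.grid_search([8]),
    "adversary_config": tune.grid_search(
        [{"type": "blades.adversaries.SignFlipAdversary"}]
    ),
}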

My guess is that the current version of signflipping is written incorrectly, and that it should instead invert the sign of the gradient for all malicious clients in on_local_round_end().
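
A minimal sketch of that proposed fix, assuming a blades-style Adversary with an on_local_round_end hook; get_update/save_update here are hypothetical accessors used for illustration, not necessarily the real client API:

class SignFlipUpdateAdversary(Adversary):
    def on_local_round_end(self, algorithm: Algorithm):
        # Negate the whole local update of every malicious client after
        # local training finishes, instead of flipping per-batch gradients.
        for client in self.clients:
            update = client.get_update()   # hypothetical accessor
            client.save_update(-update)    # hypothetical setter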

imamtom closed this as completed Jan 19, 2024