
QFedAvg Loss Uninitialized Error #659

Closed
smoser82 opened this issue Mar 3, 2021 · 3 comments · Fixed by #802

Comments


smoser82 commented Mar 3, 2021

Hello,
I am trying to use the qffedavg strategy and I am getting "NameError: free variable 'loss' referenced before assignment in enclosing scope". I think the problem is that I did not specify an evaluation function as a parameter, so the loss variable is never initialized before it is used later in the program. If I understand correctly, and the loss really is necessary for q-FedAvg, then the evaluation-function parameter should not be optional. Otherwise, the for loop after the line where loss is supposed to be initialized (line 185) should first check that loss is not None.
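The mechanism behind the error can be reproduced in isolation. The sketch below is a simplified, hypothetical stand-in for the strategy's aggregation logic (`make_weighting`, `eval_fn`, and `q_param` are illustrative names, not Flower's actual API); it initializes `loss` to `None` and adds the is-not-None guard suggested above, instead of leaving `loss` unbound:

```python
def make_weighting(eval_fn=None, q_param=0.2):
    # Mirrors the bug: `loss` is only bound when an evaluation function is
    # given. Initializing it to None lets the inner function guard explicitly.
    loss = eval_fn() if eval_fn is not None else None

    def weight(grad):
        if loss is None:
            # Without this guard, referencing an unbound `loss` free variable
            # raises the NameError shown in the traceback below.
            raise ValueError("q-FedAvg needs an evaluation function to obtain the loss")
        # Same weighting as qffedavg.py line 193, with plain floats
        # standing in for numpy arrays
        return (loss + 1e-10) ** q_param * grad

    return weight
```

With an evaluation function supplied, `make_weighting(eval_fn=lambda: 0.5)(1.0)` returns the weighted gradient; without one, the guard raises a clear `ValueError` instead of the opaque `NameError`.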

Also, from the paper, I think the correct name of the algorithm is qFedAvg, not qFFedAvg as the strategy is named here.

Is there an example for the evaluation function?

Server code:

import flwr as fl
from flwr.server import Server, SimpleClientManager

def main():
    strategy = fl.server.strategy.QffedAvg(min_available_clients=2)

    myServer = Server(client_manager = SimpleClientManager(), strategy=strategy)

    fl.server.start_server(server_address= "localhost:8080", server = myServer, config={"num_rounds": 5})

if __name__ == "__main__":
    main()

Error message produced:

Traceback (most recent call last):
  File "C:\Users\wishi\OneDrive - University of Kentucky\FLCode\Organized_Pytorch\DataDistributions\centralizedDist\trimmedServer.py", line 12, in <module>
    main()
  File "C:\Users\wishi\OneDrive - University of Kentucky\FLCode\Organized_Pytorch\DataDistributions\centralizedDist\trimmedServer.py", line 9, in main
    fl.server.start_server(server_address= "localhost:8080", server = myServer, config={"num_rounds": 5})
  File "C:\Users\wishi\AppData\Local\Programs\Python\Python37\lib\site-packages\flwr\server\app.py", line 79, in start_server
    _fl(server=initialized_server, config=initialized_config)
  File "C:\Users\wishi\AppData\Local\Programs\Python\Python37\lib\site-packages\flwr\server\app.py", line 108, in _fl
    hist = server.fit(num_rounds=config["num_rounds"])
  File "C:\Users\wishi\AppData\Local\Programs\Python\Python37\lib\site-packages\flwr\server\server.py", line 92, in fit
    weights_prime = self.fit_round(rnd=current_round)
  File "C:\Users\wishi\AppData\Local\Programs\Python\Python37\lib\site-packages\flwr\server\server.py", line 181, in fit_round
    return self.strategy.aggregate_fit(rnd, results, failures)
  File "C:\Users\wishi\AppData\Local\Programs\Python\Python37\lib\site-packages\flwr\server\strategy\qffedavg.py", line 193, in aggregate_fit
    [np.float_power(loss + 1e-10, self.q_param) * grad for grad in grads]
  File "C:\Users\wishi\AppData\Local\Programs\Python\Python37\lib\site-packages\flwr\server\strategy\qffedavg.py", line 193, in <listcomp>
    [np.float_power(loss + 1e-10, self.q_param) * grad for grad in grads]
NameError: free variable 'loss' referenced before assignment in enclosing scope


@danieljanes
Member

Hi @smoser82 , thanks a lot for the detailed report!

We do have an example for the evaluation function in the Advanced TensorFlow Example (https://github.com/adap/flower/tree/main/examples/advanced_tensorflow), and a few more in deprecated parts of the codebase under src/py/flwr_example and src/py/flwr_experimental (those will be removed eventually).

The gist is to hand the strategy a function that takes model parameters and returns the evaluation result:

from typing import Optional, Tuple

import flwr as fl
import tensorflow as tf


def main() -> None:
    # Create strategy
    strategy = fl.server.strategy.FedAvg(
        fraction_fit=0.3,
        fraction_eval=0.2,
        min_fit_clients=3,
        min_eval_clients=2,
        min_available_clients=10,
        eval_fn=get_eval_fn(),
        on_fit_config_fn=fit_config,
        on_evaluate_config_fn=evaluate_config,
    )
    # Start Flower server for four rounds of federated learning
    fl.server.start_server("[::]:8080", config={"num_rounds": 4}, strategy=strategy)


def get_eval_fn():
    """Return an evaluation function for server-side evaluation."""

    # Load data and model here to avoid the overhead of doing it in `evaluate` itself
    (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()

    # Use the last 5k training examples as a validation set
    x_val, y_val = x_train[45000:50000], y_train[45000:50000]

    # Load and compile model
    model = tf.keras.applications.EfficientNetB0(
        input_shape=(32, 32, 3), weights=None, classes=10
    )
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

    # The `evaluate` function will be called after every round
    def evaluate(weights: fl.common.Weights) -> Optional[Tuple[float, float]]:
        model.set_weights(weights)  # Update model with the latest parameters
        loss, accuracy = model.evaluate(x_val, y_val)
        return loss, accuracy

    return evaluate
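The snippet above references `fit_config` and `evaluate_config` without defining them. The versions below are hypothetical placeholders (the keys and values are illustrative, not required by Flower) showing the expected shape: a callable that takes the round number and returns a config dict sent to clients:

```python
def fit_config(rnd: int) -> dict:
    """Hypothetical per-round training config sent to each client."""
    return {
        "batch_size": 32,
        # e.g. train longer in later rounds
        "local_epochs": 1 if rnd < 2 else 2,
    }


def evaluate_config(rnd: int) -> dict:
    """Hypothetical per-round evaluation config sent to each client."""
    return {"val_steps": 4 if rnd < 4 else 10}
```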

Does this solve your issue?

I'll take a look at the paper and talk to the original author of this strategy to learn more about the naming and the issue with the optional nature of the evaluation function.


smoser82 commented Mar 6, 2021

That's a very helpful example, thank you. I think it's working now!

@danieljanes
Member

The renaming just got merged into main and will become available in tomorrow's Flower nightly release (and the full 0.17 a little later) - thanks again for pointing this out @smoser82 !
