
How can I use my saved model? #762

Closed
junzlovegood opened this issue Mar 11, 2024 · 3 comments

@junzlovegood
junzlovegood commented Mar 11, 2024

I want to use my saved model, but I get this error:

Traceback (most recent call last):
  File "ause_model.py", line 82, in <module>
    model = load_model('./client_1_main_global_model.pth')
  File "ause_model.py", line 78, in load_model
    model.load_state_dict(torch.load(model_path))
  File "//site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for MyLSTM:
	Missing key(s) in state_dict: "rnn.weight_ih_l0", "rnn.weight_hh_l0", "rnn.bias_ih_l0", "rnn.bias_hh_l0", "rnn.weight_ih_l0_reverse", "rnn.weight_hh_l0_reverse", "rnn.bias_ih_l0_reverse", "rnn.bias_hh_l0_reverse", "output_layer.weight", "output_layer.bias".
	Unexpected key(s) in state_dict: "cur_round", "model".

# Model definition
class MyLSTM(nn.Module):
    def __init__(self,
                 in_channels,
                 hidden,
                 out_channels,
                 n_layers=1,
                 embed_size=8,
                 dropout=.0):
        super(MyLSTM, self).__init__()
        self.in_channels = in_channels
        self.hidden = hidden
        self.embed_size = embed_size
        self.out_channels = out_channels
        self.n_layers = n_layers

        # self.encoder = nn.Embedding(in_channels, embed_size)

        self.rnn = nn.LSTM(
            input_size=in_channels,
            hidden_size=hidden,
            num_layers=n_layers,
            batch_first=True,
            dropout=dropout,
            bidirectional=True
        )

        # A bidirectional LSTM outputs hidden_size * 2 features
        self.output_layer = nn.Linear(hidden * 2, out_channels)

    def forward(self, input_):
        input_ = input_.unsqueeze(1)
        output, _ = self.rnn(input_)
        lstm_out_last = output[:, -1, :]

        # Pass through the output layer to get the final prediction
        # Output shape: (batch_size, output_size)
        output = self.output_layer(lstm_out_last)

        return output


# Load the model
# model = torch.load('final_main_global_model.pth')


def load_model(model_path):
    # Create an instance with the same architecture as the saved model
    model = MyLSTM(in_channels=23, hidden=128, out_channels=1)
    # Load the saved model parameters
    model.load_state_dict(torch.load(model_path))
    return model


model = load_model('./client_1_main_global_model.pth')

i = 0
model.eval()
with torch.no_grad():
    for data, target in eval_loader:
        if i == 100:
            break
        # data = data.unsqueeze(1)
        output = model(data)
        print(output)
        i += 1
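The "Unexpected key(s) in state_dict: 'cur_round', 'model'" line suggests the checkpoint is a wrapper dict that stores a round counter alongside the actual state_dict, so it likely needs unwrapping before `load_state_dict`. A minimal sketch, assuming that layout (the helper name `unwrap_checkpoint` is illustrative, not part of any library):

```python
def unwrap_checkpoint(ckpt):
    """Return the inner state_dict when the checkpoint wraps it.

    Assumes the layout implied by the error message:
    {'cur_round': <round number>, 'model': <state_dict>}.
    """
    if isinstance(ckpt, dict) and "model" in ckpt:
        return ckpt["model"]
    # Already a plain state_dict
    return ckpt


# Possible usage (names taken from the snippet above):
# ckpt = torch.load('./client_1_main_global_model.pth', map_location='cpu')
# model = MyLSTM(in_channels=23, hidden=128, out_channels=1)
# model.load_state_dict(unwrap_checkpoint(ckpt))
```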
@rayrayraykk
Collaborator

You can use the following args in the cfg file to load from a checkpoint:

federate.restore_from (string, default ''): the checkpoint file to restore the model from.
federate.save_to (string, default ''): the path to save the model to.
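These dotted keys map to nested fields in the YAML config. A hedged sketch of how they might look (the paths are placeholders, not from this thread):

```yaml
federate:
  save_to: 'exp/client_1_main_global_model.pth'       # where training writes the checkpoint
  restore_from: 'exp/client_1_main_global_model.pth'  # checkpoint to load at startup
```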

@junzlovegood
Author

Thanks. I want to obtain the model's predicted output results for each input data (eval result). How should I modify the code?

@rayrayraykk
Collaborator

Duplicate of #764
