
When converting my trained model to a .pte file, do I need to pre-load the model's weights? #3883

Closed
tayloryoung-o opened this issue Jun 6, 2024 · 0 comments

Comments

@tayloryoung-o

    # (module-level imports)
    import torch
    from torch.export import export
    from executorch.exir import to_edge

    # Grab an example input shape from the training data loader.
    for i, (input_data, labels) in enumerate(self.train_loader):
        example_shape = input_data.shape
    example_args = (torch.randn(example_shape),)

    self.model.eval()

    # 1. torch.export: Defines the program with the ATen operator set.
    aten_dialect = export(self.model, example_args)

    # 2. to_edge: Make optimizations for Edge devices.
    edge_program = to_edge(aten_dialect)

    # 3. to_executorch: Convert the graph to an ExecuTorch program.
    executorch_program = edge_program.to_executorch()

    # 4. Save the compiled .pte program.
    with open("Transformer.pte", "wb") as file:
        file.write(executorch_program.buffer)

When I use the above code, the .pte file is generated successfully, but after the following output the runner does not continue:

I 00:00:00.062442 executorch:executor_runner.cpp:73] Model file ../Transformer.pte is loaded.
I 00:00:00.062491 executorch:executor_runner.cpp:82] Using method forward
I 00:00:00.062504 executorch:executor_runner.cpp:129] Setting up planned buffer 0, size 1204903600.
I 00:00:00.916517 executorch:executor_runner.cpp:152] Method loaded.
I 00:00:00.920503 executorch:executor_runner.cpp:162] Inputs prepared.
^C
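For reference, "pre-loading the weights" here means restoring the trained state_dict onto the module before calling export(); torch.export captures whatever parameter values the module holds at export time. Below is a minimal, self-contained sketch of that pattern. The placeholder model class, input shape, and commented-out checkpoint path are illustrative assumptions, not part of the original code.

    import torch
    import torch.nn as nn
    from torch.export import export
    from executorch.exir import to_edge

    # Placeholder module standing in for the trained Transformer; the class,
    # checkpoint path, and input shape below are illustrative assumptions.
    class TinyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(16, 4)

        def forward(self, x):
            return self.linear(x)

    model = TinyModel()

    # "Pre-loading the weights" = restoring the trained state_dict onto the
    # module instance before export; export() bakes in whatever parameter
    # values the module holds at this point.
    # state_dict = torch.load("transformer_checkpoint.pt", map_location="cpu")
    # model.load_state_dict(state_dict)
    model.eval()

    example_args = (torch.randn(2, 16),)

    # Same pipeline as in the issue: ATen dialect -> Edge dialect -> ExecuTorch.
    aten_dialect = export(model, example_args)
    executorch_program = to_edge(aten_dialect).to_executorch()

    with open("TinyModel.pte", "wb") as f:
        f.write(executorch_program.buffer)

If the trained weights are not restored first, the export still succeeds and the .pte file still loads, but it contains the module's current (e.g. randomly initialized) parameter values rather than the trained ones.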
