Fix torch models on cpu #782

Merged: 1 commit into microsoft:main on Dec 28, 2021

Conversation

cning112 (Contributor)

Description

Make torch-based models runnable when CUDA is unavailable.

Motivation and Context

Some of the benchmark examples (such as GATs) don't run in CPU-only environments because the map_location argument is missing from the torch.load() call.
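
For reference, here is a minimal sketch of the map_location pattern this fix applies; TinyModel and the checkpoint path are hypothetical stand-ins for qlib's torch-based models, not the actual code changed in this PR:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a qlib torch-based model such as GATs.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, x):
        return self.linear(x)

# Pick whichever device is actually available at load time.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = TinyModel().to(device)
torch.save(model.state_dict(), "tiny_model.pt")  # hypothetical checkpoint path

# Without map_location, a checkpoint saved on a GPU machine fails to load
# on a CPU-only machine; map_location remaps the stored tensors to `device`.
state_dict = torch.load("tiny_model.pt", map_location=device)
model.load_state_dict(state_dict)
```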

How Has This Been Tested?

  • Pass the tests by running pytest qlib/tests/test_all_pipeline.py from the qlib repository root.
  • If you are adding a new feature, test it with your own test scripts.

Screenshots of Test Results (if appropriate):

  1. Pipeline test:
  2. Your own tests:

Types of changes

  • Fix bugs
  • Add new feature
  • Update documentation

@you-n-g (Collaborator)

you-n-g commented Dec 28, 2021

Thanks! It looks great!

@you-n-g you-n-g merged commit 622303b into microsoft:main Dec 28, 2021
@you-n-g you-n-g added the bug Something isn't working label Jan 12, 2022
qianyun210603 pushed a commit to qianyun210603/qlib that referenced this pull request Mar 23, 2023