@zhi-bao zhi-bao commented Dec 10, 2024

What does this PR do?

This PR optimizes memory usage during hstack when stacking nodes saved in CSC format. Previously, the output format was required to be CSR to keep prediction fast. However, this did not match hstack's fast-path condition (the input and output formats must be consistent), costing roughly twice the model size in additional memory. This was caused by the intermediate conversion:

  1. The nodes were first converted to a COO sparse matrix.
  2. The COO matrix was then converted to CSR format.

However, if the output is instead required in CSC format, this intermediate conversion becomes unnecessary: hstack can convert the nodes directly to a CSC sparse matrix.

This PR addresses these inefficiencies and reduces the memory overhead by outputting a CSC sparse matrix during the stacking process.
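A minimal sketch of the difference, assuming the nodes' weight blocks are `scipy.sparse` matrices (the shapes and data below are illustrative, not the project's actual model):

```python
import numpy as np
from scipy import sparse

# Two blocks already stored in CSC format, standing in for nodes saved
# in CSC in this PR's setting.
a = sparse.random(4, 3, density=0.5, format="csc", random_state=0)
b = sparse.random(4, 2, density=0.5, format="csc", random_state=1)

# Requesting CSR output from CSC inputs takes the general path: the
# blocks are gathered into an intermediate COO matrix, which is then
# converted to CSR, so peak memory holds the intermediate and the
# result at once.
out_csr = sparse.hstack([a, b], format="csr")

# Requesting CSC output matches the input format, so hstack can take
# its fast path and concatenate the blocks' column structure directly,
# skipping the COO intermediate.
out_csc = sparse.hstack([a, b], format="csc")

# Both calls produce the same values; only the storage layout differs.
assert out_csc.format == "csc"
assert np.allclose(out_csr.toarray(), out_csc.toarray())
```

In short, choosing an output format that matches the inputs is what lets hstack avoid the intermediate copy that this PR eliminates.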

Test CLI & API (bash tests/autotest.sh)

Test APIs used by main.py.

  • Test Pass
    • (Copy and paste the last outputted line here.)
  • Not Applicable (i.e., the PR does not include API changes.)

Check API Document

If any new APIs are added, please check that their descriptions are added to the API document.

  • API document is updated (linear, nn)
  • Not Applicable (i.e., the PR does not include API changes.)

Test quickstart & API (bash tests/docs/test_changed_document.sh)

If any APIs in quickstarts or tutorials are modified, please run this test to check if the current examples can run correctly after the modified APIs are released.

@Eleven1Liu Eleven1Liu requested a review from a team December 10, 2024 14:09
@zhi-bao zhi-bao requested a review from will945945945 January 3, 2025 16:14
@will945945945 will945945945 merged commit 6d955f2 into ntumlgroup:master Jan 4, 2025
1 check passed