Block_Info::read_block_costs(): use more realistic RAM estimate #81

Closed · vasdommes (Collaborator) opened this issue Jun 30, 2023 · 0 comments
Currently the number of elements in the Schur complement block, #(Schur), is used as the block cost to balance RAM during a timing run.
See https://github.com/davidsd/sdpb/blob/6ad4c404e5a208a44c606598fdd5f3c7fea089b4/src/sdp_solve/Block_Info/read_block_costs.cxx#L49C1-L59C6:

      // If no information, assign a cost proportional to the matrix
      // size.  This should balance out memory use when doing a timing
      // run.
      auto schur_sizes(schur_block_sizes());
      for(size_t block = 0; block < schur_sizes.size(); ++block)
        {
          result.emplace_back(schur_sizes[block] * schur_sizes[block], block);
        }

In reality, RAM usage is roughly 2#(B) + 5#(PSD) + 2#(Schur) + 2#(Bilinear pairing), where #(X) denotes the number of elements of matrix X. It is usually dominated by the B matrix (free_var_matrix in the code). We should use this estimate instead of the currently used #(Schur).
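As a rough illustration (not the actual patch merged for this issue), the per-block cost could be computed along the following lines. The helper signature, the flat indexing of the PSD and bilinear-pairing blocks, and the assumption that each B block has schur_size rows and N columns (N = dual objective size) are all simplifications for illustration, not the actual Block_Info interface:

    // Hypothetical sketch: estimate per-block RAM as
    //   2#(B) + 5#(PSD) + 2#(Schur) + 2#(Bilinear pairing),
    // where #(X) is the number of elements of matrix X.
    // The function name, flat PSD/pairing indexing, and B-block dimensions
    // are assumptions for illustration, not the actual Block_Info interface.
    #include <cstddef>
    #include <utility>
    #include <vector>

    std::vector<std::pair<size_t, size_t>>
    estimate_block_costs(const std::vector<size_t> &schur_sizes,
                         const std::vector<size_t> &psd_sizes,
                         const std::vector<size_t> &pairing_sizes,
                         const size_t dual_objective_size)
    {
      std::vector<std::pair<size_t, size_t>> result;
      result.reserve(schur_sizes.size());
      for(size_t block = 0; block < schur_sizes.size(); ++block)
        {
          // #(Schur): dense schur_size x schur_size block
          const size_t schur = schur_sizes[block] * schur_sizes[block];
          // #(B): assume the free_var_matrix block is schur_size x N,
          // where N is the dual objective size
          const size_t B = schur_sizes[block] * dual_objective_size;
          // #(PSD) and #(Bilinear pairing): treated as dense square blocks
          const size_t psd = psd_sizes[block] * psd_sizes[block];
          const size_t pairing = pairing_sizes[block] * pairing_sizes[block];

          const size_t cost = 2 * B + 5 * psd + 2 * schur + 2 * pairing;
          result.emplace_back(cost, block);
        }
      return result;
    }

In this estimate the 2#(B) term is typically the largest contribution, consistent with the observation above that RAM usage is usually dominated by free_var_matrix.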

vasdommes added a commit to vasdommes/sdpb that referenced this issue Jul 5, 2023
vasdommes added a commit that referenced this issue Jul 11, 2023: Fix #81 Block_Info::read_block_costs(): use more realistic RAM estimate
vasdommes added this to the 2.6.0 milestone Nov 14, 2023