
Add arena_matrix specialization for sparse matrices #2971

Merged · 18 commits merged into develop from fix/sparse-vari-impl on Mar 27, 2024

Conversation

SteveBronder (Collaborator)

Summary

While working on a branch to let Stan math use ctest and cmake, I found a memory leak in the sparse matrix impl for vari. It was a confusing bug having to do with how Eigen's implementation manages dynamic memory for sparse matrices. This PR removes the use of chainable_alloc to directly hold sparse matrices in the var_value<SparseMatrix> class and instead uses a new specialization of arena_matrix for sparse matrices.
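To give a rough idea of the mechanics, the sketch below shows the core arena idea (illustrative only, not the code in this PR; the helper name to_arena_sparse is made up): copy the three buffers of a compressed sparse matrix onto the autodiff memory arena and view them through an Eigen::Map, so the resulting object is trivially destructible and the arena frees everything in bulk after the reverse pass.

#include <stan/math/rev/core.hpp>
#include <Eigen/SparseCore>
#include <algorithm>

// Hypothetical sketch: copy a compressed sparse matrix's outer-index,
// inner-index, and value buffers onto the AD arena and return a map over
// them. The map owns nothing, so its destructor is trivial.
template <typename Scalar>
inline Eigen::Map<Eigen::SparseMatrix<Scalar>> to_arena_sparse(
    const Eigen::SparseMatrix<Scalar>& m) {  // assumes m.isCompressed()
  auto& arena = stan::math::ChainableStack::instance_->memalloc_;
  auto* vals = arena.alloc_array<Scalar>(m.nonZeros());
  auto* inner = arena.alloc_array<int>(m.nonZeros());
  auto* outer = arena.alloc_array<int>(m.outerSize() + 1);
  std::copy(m.valuePtr(), m.valuePtr() + m.nonZeros(), vals);
  std::copy(m.innerIndexPtr(), m.innerIndexPtr() + m.nonZeros(), inner);
  std::copy(m.outerIndexPtr(), m.outerIndexPtr() + m.outerSize() + 1, outer);
  return Eigen::Map<Eigen::SparseMatrix<Scalar>>(
      m.rows(), m.cols(), m.nonZeros(), outer, inner, vals);
}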

Tests

Tests were added for the new arena_matrix's constructor, assignment operators, and in-place operators, and can be run with:

python ./runTests.py ./test/unit/math/rev/core/arena_matrix_test.cpp

Side Effects

Yes! A big design decision here is to add in-place operators += and -= to the sparse specialization of arena_matrix. Normally += and -= do not really make sense for sparse matrices, since adding a scalar to all elements of a sparse matrix would just make it dense. But in Stan math, when we do the adjoint accumulation in reverse mode, we only want to accumulate over the nonzero values of the adjoints. Talking to @bob-carpenter about this, we can think of a sparse matrix as holding a mix of double and var scalars: everywhere we see a 0 there is a double, and everywhere else there is a var. The little vector below shows this interpretation, with doubles for zero values and vars for nonzero values:

A = [double(0), double(0), var(x), double(0), double(0), var(y)]

If we do cos(A), then since cos(0) = 1 we end up with a dense matrix of vars like the following:

B = cos(A);
print(B);
B = [var(1.0), var(1.0), cos(var(x)), var(1.0), var(1.0), cos(var(y))]

We want a dense output of vars because later expressions may use any of the scalar values of the output matrix.

When we call grad for reverse mode, we take the adjoint (gradient) of the output with respect to the input. For the zero values of the sparse matrix, the adjoint is with respect to a constant, which means we don't want to propagate the associated values in our return matrix upwards. So we want our operator+=(Sparse this, Dense rhs) to ignore any zero values when we do the adjoint accumulation. This means that when we use arena_matrix<SparseMatrix> in our vari implementation, we end up with code for the reverse pass that looks just like the rest of our code:

template <typename SparseMat>
auto cos(const SparseMat& sparse_mat) {
  // Store the input on the arena so the reverse-pass callback can reuse it.
  arena_t<SparseMat> mat_arena = sparse_mat;
  // Forward pass: the result is dense, since cos(0) = 1.
  var_value<Eigen::Matrix<double, -1, -1>> ret = cos(value_of(mat_arena));
  reverse_pass_callback([mat_arena, ret]() {
    // Sparse += dense: accumulate only over the nonzeros of mat_arena's adjoints.
    mat_arena.adj() += -sin(value_of(mat_arena)).cwiseProduct(ret.adj());
  });
  return ret;
}

So while it's nonstandard for a sparse matrix to have += and -= defined on it, for our use case it actually makes a lot of sense; the sketch below illustrates the intended semantics.
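For a standalone picture of that += behavior, here is a minimal sketch in plain Eigen (illustrative, not the actual arena_matrix code; the function name is made up): accumulating a dense right-hand side into a sparse left-hand side visits only the coefficients already stored in the sparsity pattern.

#include <Eigen/Dense>
#include <Eigen/SparseCore>

// Sketch: add dense `rhs` into sparse `adj`, visiting only the stored
// nonzeros of `adj`. Structural zeros are never touched, which matches the
// "adjoints with respect to constants don't propagate" behavior above.
inline void accumulate_nonzeros(Eigen::SparseMatrix<double>& adj,
                                const Eigen::MatrixXd& rhs) {
  for (int k = 0; k < adj.outerSize(); ++k) {
    for (Eigen::SparseMatrix<double>::InnerIterator it(adj, k); it; ++it) {
      it.valueRef() += rhs(it.row(), it.col());
    }
  }
}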

Release notes

Adds a sparse matrix implementation for arena_matrix

Checklist

  • Copyright holder: Simons Foundation

    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: ./runTests.py test/unit)
    • header checks pass (make test-headers)
    • dependencies checks pass (make test-math-dependencies)
    • docs build (make doxygen)
    • code passes the built-in C++ standards checks (make cpplint)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

@bob-carpenter (Contributor)

What is the "sparse matrix impl for vari"? Is this something that's exposed to users anywhere or used elsewhere in our code base?

Is this work toward adding sparse matrices to Stan? If so, is there a design doc somewhere about the plans?

@SteveBronder (Collaborator, Author)

The sparse matrix impl for vari is the SoA (struct-of-arrays) pattern for autodiff on sparse matrices.

There is a design doc below, but it needs updating and does not contain much info on the low-level implementation:
https://github.com/SteveBronder/design-docs/blob/spec/sparse-matrices/designs/0004-sparse-matrices.md
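Roughly speaking, SoA here means the values and adjoints live in two parallel sparse matrices that share one sparsity pattern, rather than a single sparse matrix of individual vars (AoS). A sketch of the layout (the struct name is made up; this is not the actual vari_value implementation):

#include <Eigen/SparseCore>

// Illustrative struct-of-arrays layout for sparse autodiff: one sparse
// matrix of values and one of adjoints with the same sparsity pattern,
// instead of a sparse matrix whose scalars are individual vars.
struct sparse_vari_soa_sketch {
  Eigen::SparseMatrix<double> val_;  // nonzero values
  Eigen::SparseMatrix<double> adj_;  // adjoints, same pattern as val_
};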

@bob-carpenter (Contributor)

Why are you forking the design doc rather than updating it? We don't want to have to review two design docs. We've traditionally only reviewed functional specs.

My overall suggestion is to work through what some example models would look like, such as a time series model with a sparse Cholesky precision matrix.

If you add a pull request for an updated design doc, I'd be happy to comment. Some quick comments:

  • All of this is going to be tricky with parameters, because of autodiff. We can't just take zeroes and drop them out of a matrix of autodiff variables.

  • You don't need to mention that we want gradients---we always want gradients with our functions.

  • By "a common optimization technique is to reorder the rows and columns such that the structure is easier for algorithms to traverse," do you mean something like CSR notation?

  • I'd put bounds on the data so that the indexes can't go below 1 or above the number of rows (or columns).

  • @dpsimpson and @avehtari have a point---we have to be very strict with sparse matrices defined as parameters, and most of the sparseness in regression comes through covariates; having said that, I don't see how it's fundamentally different from our other types, which are all sized at declaration time---we're just adding sparsity, and I would very strongly prefer our design to remain uniform so that the same types can be used in all of the blocks.

  • The triple extraction you talk about is buggy because the indexes are integers, but a matrix[N, 3] has real entries. Instead, you can make this an N-array of 3-tuples (see the triplet sketch after this list).

  • I didn't understand the discussion of the Hard Way vs. the Simple Way---the simple way looked like one of the hard-way cases.
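For reference, Eigen's own triplet interface is the C++ analogue of that N-array of 3-tuples; a minimal sketch (the helper name is just for illustration):

#include <Eigen/SparseCore>
#include <vector>

// Build a sparse matrix from (row, col, value) triplets, keeping the
// integer indexes as integers instead of storing them in a real matrix.
inline Eigen::SparseMatrix<double> from_triplets(
    int rows, int cols,
    const std::vector<Eigen::Triplet<double>>& triplets) {
  Eigen::SparseMatrix<double> m(rows, cols);
  m.setFromTriplets(triplets.begin(), triplets.end());
  return m;
}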

@SteveBronder (Collaborator, Author)

> Why are you forking the design doc rather than updating it? We don't want to have to review two design docs. We've traditionally only reviewed functional specs.

This is not a fork; this PR just changes the underlying vari implementation to use a new arena_matrix specialized for sparse matrices.

> My overall suggestion is to work through what some example models would look like, such as a time series model with a sparse Cholesky precision matrix.

Do you mean in the design doc?

> If you add a pull request for an updated design doc, I'd be happy to comment. Some quick comments:

Sure. Do you mind if I copy and paste your comments about the design doc into the discussion page afterward?

That design doc needs heavy updating IMO, but either way this PR should go through, since this fixes a current bug and I'm 99% sure this is how the impl would need to look for whatever design we end up approving.

@SteveBronder requested review from andrjohns and WardBrian and removed the request for andrjohns on January 2, 2024 at 17:46
@syclik (Member) commented Jan 2, 2024

@SteveBronder, I'm guessing this is still an issue. Do you remember what prevented the tests from passing?

@stan-buildbot (Contributor)


| Name | Old Result | New Result | Ratio | Performance change (1 - new/old) |
|------|-----------:|-----------:|------:|----------------------------------|
| arma/arma.stan | 0.21 | 0.19 | 1.12 | 10.5% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.08 | 7.27% faster |
| gp_regr/gen_gp_data.stan | 0.02 | 0.02 | 1.01 | 1.1% faster |
| gp_regr/gp_regr.stan | 0.11 | 0.11 | 1.04 | 3.48% faster |
| sir/sir.stan | 82.43 | 79.55 | 1.04 | 3.49% faster |
| irt_2pl/irt_2pl.stan | 4.07 | 4.11 | 0.99 | -1.18% slower |
| eight_schools/eight_schools.stan | 0.06 | 0.05 | 1.08 | 7.39% faster |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.27 | 0.26 | 1.05 | 4.77% faster |
| pkpd/one_comp_mm_elim_abs.stan | 18.95 | 19.03 | 1.0 | -0.45% slower |
| garch/garch.stan | 0.51 | 0.48 | 1.08 | 7.0% faster |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.99 | 2.92 | 1.02 | 2.11% faster |
| arK/arK.stan | 1.74 | 1.69 | 1.03 | 2.73% faster |
| gp_pois_regr/gp_pois_regr.stan | 2.68 | 2.68 | 1.0 | -0.18% slower |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 9.58 | 9.65 | 0.99 | -0.81% slower |
| performance.compilation | 189.97 | 193.76 | 0.98 | -2.0% slower |

Mean result: 1.0325540932328534

Jenkins Console Log
Blue Ocean
Commit hash: f70fb84db501edcbf822691c02664c43b76287b2


Machine information (no LSB modules available): Distributor ID: Ubuntu; Description: Ubuntu 20.04.3 LTS; Release: 20.04; Codename: focal

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping: 4
CPU MHz: 2400.000
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 40 MiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear flush_l1d arch_capabilities

G++:
g++ (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Clang:
clang version 10.0.0-4ubuntu1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

@andrjohns (Collaborator) left a comment

A few q's and suggestions; otherwise looks good.

* @param other Eigen Sparse Matrix class
*/
template <typename T, require_same_t<T, PlainObject>* = nullptr>
arena_matrix(T&& other) // NOLINT
andrjohns (Collaborator):

It doesn't look like any of these constructors uses the innerNonZerosPtr member from the inputs; is that intentional?

SteveBronder (Collaborator, Author):

Oh! I think I left them out because the Eigen docs say it's just for compatibility with other packages. But I'll put them in just in case; no reason not to.

Comment on lines +298 to +300
for (; static_cast<bool>(it) && static_cast<bool>(iz); ++it, ++iz) {
  f(it.valueRef(), iz.value());
}
andrjohns (Collaborator):

Should this (and the other overloads) just loop over (*this).innerSize()? The compiler might be able to optimise better if it knows the length.

Otherwise, I think this is more readable as a while loop:

while (static_cast<bool>(it) && static_cast<bool>(iz)) {
  f(it.valueRef(), iz.value());
  ++it;
  ++iz;
}

SteveBronder (Collaborator, Author):

Yes! Since we know the values pointer for the sparse arena matrix is contiguous in our memory arena, we can actually just loop over the value pointer itself with the number of nonzero entries, at least for the scalar and arena-matrix in-place ops. For general sparse and dense inputs I don't think we have that guarantee, so we should still do something like the while loop you have above.

Comment on lines +319 to +324
for (int k = 0; k < (*this).outerSize(); ++k) {
  typename Base::InnerIterator it(*this, k);
  for (; static_cast<bool>(it); ++it) {
    f(it.valueRef(), x(it.row(), it.col()));
  }
}
andrjohns (Collaborator):

Are these operations something that could be implemented using Eigen's NullaryExpr framework? That could allow more room for Eigen to optimise/simplify ops

SteveBronder (Collaborator, Author):

I think NullaryExpr is only available for dense types :/

If we wanted to optimize this harder, we could write a specific operator+= for each of the sparse, dense, and scalar ops that, under the hood, makes an Eigen map to a dense Eigen vector of the values. The current in-place op for two sparse arena matrices is

  template <typename F, typename Expr,
            require_convertible_t<Expr&, MatrixType>* = nullptr,
            require_same_t<Expr, arena_matrix<MatrixType>>* = nullptr>
  inline void inplace_ops_impl(F&& f, Expr&& other) {
    auto&& x = to_ref(other);
    auto* val_ptr = (*this).valuePtr();
    auto* x_val_ptr = x.valuePtr();
    const auto non_zeros = (*this).nonZeros();
    for (Eigen::Index i = 0; i < non_zeros; ++i) {
      f(val_ptr[i], x_val_ptr[i]);
    }
  }

But for += we could instead do

  template <typename Expr,
            require_convertible_t<Expr&, MatrixType>* = nullptr,
            require_same_t<Expr, arena_matrix<MatrixType>>* = nullptr>
  inline arena_matrix& operator+=(Expr&& other) {
    auto&& x = to_ref(other);
    auto* val_ptr = (*this).valuePtr();
    auto* x_val_ptr = x.valuePtr();
    const auto non_zeros = (*this).nonZeros();
    // Map both nonzero-value arrays as dense vectors so Eigen can vectorize.
    using map_t = Eigen::Map<Eigen::Matrix<typename MatrixType::Scalar, -1, 1>>;
    map_t(val_ptr, non_zeros) += map_t(x_val_ptr, non_zeros);
    return *this;
  }

That would allow Eigen to optimize a lot more, but at the expense of losing a bit of the generalization from just having inplace_ops_impl. If you'd like the above, then I'm fine with the extra code, imo.

Comment on lines 351 to 356
* @note Caution! Inplace operators assume that either
* 1. The right hand side sparse matrix has the same sparcity pattern
* 2. You only intend to add a scalar or dense matrix coefficients to the
* nonzero values of `this` This is intended to be used within the reverse
* pass for accumulation of the adjoint and is built as such. Any other use
* case should be be sure the above assumptions are satisfied.
andrjohns (Collaborator):

Suggested change
* @note Caution! Inplace operators assume that either
* 1. The right hand side sparse matrix has the same sparcity pattern
* 2. You only intend to add a scalar or dense matrix coefficients to the
* nonzero values of `this` This is intended to be used within the reverse
* pass for accumulation of the adjoint and is built as such. Any other use
* case should be be sure the above assumptions are satisfied.
* @note Caution! Inplace operators assume that either
* 1. The right hand side sparse matrix has the same sparsity pattern
* 2. You only intend to add a scalar or dense matrix coefficients to the
* nonzero values of `this`. This is intended to be used within the reverse
* pass for accumulation of the adjoint and is built as such. Any other use
* case should be sure that the above assumptions are satisfied.

Comment on lines 374 to 379
* @note Caution!! Inplace operators assume that either
* 1. The right hand side sparse matrix has the same sparcity pattern
* 2. You only intend to add a scalar or dense matrix coefficients to the
* nonzero values of `this` This is intended to be used within the reverse
* pass for accumulation of the adjoint and is built as such. Any other use
* case should be be sure the above assumptions are satisfied.
andrjohns (Collaborator):

Suggested change
* @note Caution!! Inplace operators assume that either
* 1. The right hand side sparse matrix has the same sparcity pattern
* 2. You only intend to add a scalar or dense matrix coefficients to the
* nonzero values of `this` This is intended to be used within the reverse
* pass for accumulation of the adjoint and is built as such. Any other use
* case should be be sure the above assumptions are satisfied.
* @note Caution! Inplace operators assume that either
* 1. The right hand side sparse matrix has the same sparsity pattern
* 2. You only intend to add a scalar or dense matrix coefficients to the
* nonzero values of `this`. This is intended to be used within the reverse
* pass for accumulation of the adjoint and is built as such. Any other use
* case should be sure that the above assumptions are satisfied.


/**
* Equivalent to `Eigen::Matrix`, except that the data is stored on AD stack.
* That makes these objects triviali destructible and usable in `vari`s.
andrjohns (Collaborator):

Suggested change
* That makes these objects triviali destructible and usable in `vari`s.
* That makes these objects trivially destructible and usable in `vari`s.

Comment on lines 14 to 15
* @tparam MatrixType Eigen matrix type this works as (`MatrixXd`, `VectorXd`
* ...)
andrjohns (Collaborator):

Suggested change
* @tparam MatrixType Eigen matrix type this works as (`MatrixXd`, `VectorXd`
* ...)
* @tparam MatrixType Eigen matrix type this works as (`MatrixXd`, `VectorXd`,
* ...)

@stan-buildbot (Contributor)


| Name | Old Result | New Result | Ratio | Performance change (1 - new/old) |
|------|-----------:|-----------:|------:|----------------------------------|
| arma/arma.stan | 0.25 | 0.19 | 1.31 | 23.82% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.01 | 1.26% faster |
| gp_regr/gen_gp_data.stan | 0.02 | 0.02 | 1.09 | 8.5% faster |
| gp_regr/gp_regr.stan | 0.11 | 0.11 | 0.99 | -0.61% slower |
| sir/sir.stan | 77.86 | 77.81 | 1.0 | 0.06% faster |
| irt_2pl/irt_2pl.stan | 4.2 | 3.92 | 1.07 | 6.57% faster |
| eight_schools/eight_schools.stan | 0.06 | 0.05 | 1.01 | 1.16% faster |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.26 | 0.25 | 1.01 | 1.27% faster |
| pkpd/one_comp_mm_elim_abs.stan | 18.4 | 18.3 | 1.01 | 0.55% faster |
| garch/garch.stan | 0.48 | 0.46 | 1.04 | 4.06% faster |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.81 | 2.85 | 0.99 | -1.42% slower |
| arK/arK.stan | 1.66 | 1.65 | 1.01 | 1.09% faster |
| gp_pois_regr/gp_pois_regr.stan | 2.61 | 2.58 | 1.01 | 0.82% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 9.18 | 9.23 | 0.99 | -0.61% slower |
| performance.compilation | 177.41 | 182.27 | 0.97 | -2.74% slower |

Mean result: 1.0352222085741138

Jenkins Console Log
Blue Ocean
Commit hash: 14dfa9321f82f4102f9b572c68f1e7440177abf4


Machine information, CPU, G++, and Clang details identical to the previous @stan-buildbot report above.

@andrjohns (Collaborator) left a comment

LGTM thanks!

@andrjohns merged commit 35d6d53 into develop on Mar 27, 2024
8 checks passed
@syclik deleted the fix/sparse-vari-impl branch on April 2, 2024 at 02:02