
Framework: Mysterious 6 failures in package Zoltan2Sphynx with no data on CDash going back to at least 2022-05-18 #10836

Closed
bartlettroscoe opened this issue Aug 5, 2022 · 31 comments
Labels
PA: Framework Issues that fall under the Trilinos Framework Product Area type: bug The primary issue is a bug in Trilinos code or tests Waiting Waiting for some external team to do something before this can be completed

Comments

@bartlettroscoe
Member

bartlettroscoe commented Aug 5, 2022

Bug Report

@trilinos/framework, @csiefer2, @rppawlo, @e10harvey

Next Action Status

This is due to a defect in CTest introduced in CMake 3.18. The fix for this is in CMake 3.24.3 (released 2022-11-01); see SNL Kitware #209. Next: Install CMake 3.24.3 everywhere and use it with Trilinos PR builds ...

Internal Issues:

Description

As shown in this query going back to at least 2022-05-18, there have been many PR builds that failed showing "6" failures in the package Zoltan2Sphynx, but when you click on the "6", no errors are shown. Also, zero "Not Run", "Fail", and "Pass" tests are reported for these builds. That is very strange because, generally, if no test results are submitted, empty rows are reported for the "Not Run", "Fail", and "Pass" tests, not zeros.

As shown in that query above, all of these failures are coming from either 'ascic' or 'ascicgpu' machines, and they span several different builds including intel-17.0.1, intel-19.0.5, cuda-11.4.2, gnu-7.2.0, and gnu-8.3.0, and have so far impacted 88 different builds (as of 8/5/2022).

So far, this has impacted 28 PRs, including #10537, #10552, #10606, #10614, #10628, #10644, #10653, #10662, #10675, #10677, #10682, #10697, #10706, #10720, #10749, #10751, #10767, #10775, #10777, #10779, #10783, #10784, #10796, #10799, #10801, #10802, #10817, and #10834

NOTE: For some reason (that we should investigate), when a global target not associated with a TriBITS package fails, CTest seems to assign the failure to the last subproject (sorted alphanumerically), which is Zoltan2Sphynx. We have seen this many times. Therefore, I don't believe this failure has anything to do with Zoltan2.
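The "last subproject" attribution is easy to see with a quick sort; a minimal sketch (the package names below are a hypothetical subset for illustration, not the full TriBITS list):

```python
# Hypothetical subset of Trilinos package (subproject) names; the real list
# comes from the TriBITS package definitions, but the sort behavior is the same.
subprojects = ["Amesos2", "Belos", "MueLu", "Tpetra", "Zoltan2", "Zoltan2Sphynx"]

# CTest appears to charge errors from global (non-package) targets to
# whichever subproject sorts last alphanumerically:
last_subproject = sorted(subprojects)[-1]
print(last_subproject)  # Zoltan2Sphynx
```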

Steps to Reproduce

Unknown. Seems to randomly occur in PR testing.

@bartlettroscoe bartlettroscoe added type: bug The primary issue is a bug in Trilinos code or tests PA: Framework Issues that fall under the Trilinos Framework Product Area resolved: duplicate Issue is really a duplicate of some other issue where the efforts will be focused and removed resolved: duplicate Issue is really a duplicate of some other issue where the efforts will be focused labels Aug 5, 2022
@bartlettroscoe bartlettroscoe added this to ToDo in Trilinos TriBITS Refactor via automation Aug 5, 2022
@bartlettroscoe bartlettroscoe moved this from ToDo to In Progress in Trilinos TriBITS Refactor Aug 5, 2022
@bartlettroscoe
Member Author

CC: @jhux2, @csiefer2, @e10harvey

There is an important clue that I missed about these mysterious "6" build errors. If you look at all of the PR builds that have "6" build errors since 2022-07-01 shown here, in almost all of these cases you see there is a single "Not Run" test, which is the test:

  • ThyraEpetraAdapters_EpetraOperatorWrapper_UnitTests_MPI_4

that is built from the source file EpetraOperatorWrapper_UnitTests.cpp (see #10823).

However, note that I can't seem to find any of these build errors in any of the recent "Master Merge" builds shown on CDash.

Someone needs to get onto these machines where the PR builds are actually running and try to reproduce these errors in those build dirs. There is really not much more I can do without being given the access.

@jhux2
Member

jhux2 commented Aug 11, 2022

Potentially linked to #10842.

@jhux2
Member

jhux2 commented Aug 12, 2022

@bartlettroscoe #10813 has merged, which is a fix for #10842. PR #10775 started after that merge, and it still shows the same 6 failures in Zoltan2Sphynx that don't appear when you click on the link, as you can see here.

@bartlettroscoe
Member Author

I will see if we can't get Kitware's help on debugging this (it has been a challenge for Zack to see things on both the clients where the XML files are getting generated and on the CDash server where they are being consumed).

@jhux2
Member

jhux2 commented Aug 12, 2022

@bartlettroscoe Sounds good. From the internal ticket, there was an indication that Zack saw real compile errors in the logs.

@bartlettroscoe
Member Author

bartlettroscoe commented Aug 12, 2022

@bartlettroscoe Sounds good. From the internal ticket, there was an indication that Zack saw real compile errors in the logs.

@jhux2, if that is the case, we should be able to see those at:

I think @e10harvey gave all Trilinos developers read-only access to that Jenkins site.

I will pull the build log file (which is 89M) and see if I can see what is failing.

@bartlettroscoe
Member Author

bartlettroscoe commented Aug 12, 2022

@jhux2, so for whatever reason, the build log file for the PR #10775 for the build Trilinos_PR_gcc-7.2.0-debug is not shown at:

But we see the build log file for other builds like, for example:

Therefore, we can't see the build errors for that build for the PR #10775 that is reporting "6" build errors.

Looking at the configure output for the build Trilinos_PR_gcc-7.2.0-debug with ID 850 for PR #10775 at:

shows:

Trilinos repos versions:
--------------------------------------------------------------------------------
*** Base Git Repo: Trilinos
45c3324 [Fri Aug 12 09:16:38 2022 -0600] <trilinos@sandia.gov>
Merge remote-tracking branch 'source_remote/fix-crusher-modules' into develop
 --------------------------------------------------------------------------------

Well, that is the correct branch fix-crusher-modules. (NOTE: The merge commit 45c3324 is generated locally so we can't see its ancestors).

Looking at the console file at:

we see:

11:18:25  +----------------------------------------------------------+
11:18:25  + START build step
11:18:25  +----------------------------------------------------------+
11:18:25  Build project
11:18:25     Each symbol represents 1024 bytes of output.
11:19:00      ..................................................  Size: 49K
...
12:08:54      ..................................................  Size: 4400K
12:12:18      ......................... Size of output: 4425K
12:12:18  Error(s) when building project
12:12:18     1 Compiler errors
12:12:18     0 Compiler warnings

So there are definitely build errors. We just can't see what they are.

Have we tried to reproduce that build locally and see what happens?

I will post a new Trilinos HelpDesk issue to see if we can get some help for why the build log is not being archived on Jenkins.

@jhux2
Member

jhux2 commented Aug 12, 2022

@bartlettroscoe I tried a few times, but my guess is that we'd need the exact configure line. Is there a way to generate the cmake configure line from the file generatedPRFragment.cmake?

@bartlettroscoe
Member Author

This screenshot shows the missing build log file at:

[screenshot]

@jhux2
Member

jhux2 commented Aug 12, 2022

@bartlettroscoe Results for PR #10802 look clean so far, and earlier iterations had the missing 6 tests.

@bartlettroscoe
Member Author

@bartlettroscoe I tried a few times, but my guess is that we'd need the exact configure line. Is there a way to generate the cmake configure line from the file generatedPRFragment.cmake?

@jhux2, yup, they actually give you the exact configure command in the Jenkins job. For example, for this build it is shown at:

which is:

/projects/sems/install/rhel7-x86_64/sems/utility/cmake/3.19.1/bin/cmake -C "/scratch/trilinos/jenkins/ascic142/workspace/Trilinos_PR_gcc-7.2.0-debug/generatedPRFragment.cmake" -C "/scratch/trilinos/jenkins/ascic142/workspace/Trilinos_PR_gcc-7.2.0-debug/packageEnables.cmake" -G "Ninja" /scratch/trilinos/jenkins/ascic142/workspace/Trilinos_PR_gcc-7.2.0-debug/Trilinos.

And, actually, you can see that on CDash too with the uploaded build files under:

which shows:

[screenshot]

When you click on configure_command.txt it takes you to:

@bartlettroscoe
Member Author

FYI: I submitted TRILINOSHD-166 to see if we can get some help in archiving the build log file and the XML files (which Zack Galbreath will need to debug submit problems to CDash).

@jhux2
Member

jhux2 commented Aug 12, 2022

#10808 has the missing 6, as well. How do I find the logs on Jenkins?

@jhux2
Member

jhux2 commented Aug 12, 2022

@bartlettroscoe I tried to reproduce the missing 6 error in #10775 by following the instructions in #10836 (comment), but I was unsuccessful, i.e., the build finished without error.

There are a few caveats:

I built on ascicgpu031. I needed to disable a few TPLs and packages: HDF5, NetCDF, Scotch, and Moertel. I also removed -Wall from the cxx flags.

Here are the loaded modules:

  1) sparc-tools/python/3.7.9
  2) sparc-tools/exodus/2021.11.26
  3) sparc-tools/tools/main
  4) sparc-tools/taos/2020.09.23
  5) sparc-cmake/3.23.2
  6) sparc-git/2.19.1
  7) sparc-dev/gcc-7.2.0_openmpi-4.0.3
  8) sems-env
  9) cde/v2/gcc/7.2.0/metis/5.1.0
 10) cde/v2/gcc/7.2.0/parmetis/4.0.3
 11) sems-superlu/5.2.1/base

[EDIT]

I loaded the environment with

source ${TRILINOS_DIR}/cmake/std/atdm/load-env.sh cee-rhel7-gnu-opt-serial
module load sems-env
module load cde/v2/gcc/7.2.0/parmetis/4.0.3
module unload cde/v2/gcc/7.2.0/openmpi/4.0.5
module load sems-superlu

@bartlettroscoe
Member Author

I tried to reproduce the missing 6 error in #10775 by following the instructions in #10836 (comment), but I was unsuccessful, i.e., the build finished without error.

@jhux2, how did you load the env?

@bartlettroscoe
Member Author

#10808 has the missing 6, as well. How do I find the logs on Jenkins?

@jhux2, we will have to wait until the Jenkins job:

finishes and then see if that shows the LastBuild_<date>-<time>.log file or not.
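Scanning a build's artifact list for that log file is easy to script; a minimal sketch (the artifact names below are made up, and the regex assumes CTest's LastBuild_<YYYYMMDD>-<HHMM>.log naming):

```python
import re

def find_lastbuild_logs(artifact_names):
    """Return the artifact names matching CTest's LastBuild_<date>-<time>.log pattern."""
    pat = re.compile(r"LastBuild_\d{8}-\d{4}\.log$")
    return [name for name in artifact_names if pat.search(name)]

# Made-up artifact lists: one healthy build, one where the log was never archived.
print(find_lastbuild_logs(["configure_command.txt", "LastBuild_20220812-1018.log"]))  # one match
print(find_lastbuild_logs(["configure_command.txt"]))  # [] -> the log is missing
```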

@jhux2
Member

jhux2 commented Aug 12, 2022

how did you load the env?

@bartlettroscoe See my edited #10836 (comment).

@bartlettroscoe
Member Author

@jhux2, the correct way to load the env is with the gen-config.sh script, using it to generate the CMake fragment file on the machine at the same time. I will create a wiki page that describes how to reproduce builds using GenConfig that should be easier to follow and should be able to correctly reproduce PR builds.

@bartlettroscoe
Member Author

And once again, the Jenkins job artifacts for the build with "6" build failures PR-10808-test-rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial_debug_shared_no-kokkos-arch_no-asan_no-complex_no-fpic_mpi_no-pt_no-rdc_no-uvm_deprecated-on_no-package-enables-852 at:

does NOT list the LastBuild_<date>-<num>.log file showing only:

[screenshot]

@bartlettroscoe
Member Author

@jhux2, given that the same intel-17.0.1 build for the same PR #10829 showing the "6" build failures has different numbers of "Not Run" tests, as shown here:

[screenshot]

makes me think this might be an out-of-memory issue as well.

Also, given that there is no LastBuild_<date>-<time>.log file archived in the Jenkins builds, as can be seen (by its absence), for example, here:

showing:

[screenshot]

it makes sense that an out-of-memory condition might prevent that file from even being generated by the ctest -S process.

@bartlettroscoe
Member Author

@zackgalbreath, could an out-of-memory state cause ctest -S to not generate the LastBuild_<date>-<time>.log file?

@bartlettroscoe
Member Author

FYI: @srbdev cleared out /tmp/ on 'ascic142' this morning (see #10875) so these errors might go away now (at least on 'ascic142').

@bartlettroscoe
Member Author

Below documents an attempt to reproduce the build errors associated with the "6" build failures for PR #10808 for the build:

I used a throw-away integration test branch to also test a few other small PRs at the same time (details below), and the reproduction process was simply:

$ cd <some-build-dir>/

$ source /fgs/rabartl/Trilinos.base/Trilinos/packages/framework/load-gen-config-env.sh \
rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial_debug_shared_no-kokkos-arch_no-asan_no-complex_no-fpic_mpi_no-pt_no-rdc_no-uvm_deprecated-on_no-package-enables

$ rm -rf CMake*

$ cmake \
-G Ninja \
-C GenConfigSettings.cmake \
-D Trilinos_ENABLE_TESTS=ON \
-D Trilinos_TRACE_ADD_TEST=ON \
-D CTEST_BUILD_FLAGS=-j16 -D CTEST_PARALLEL_LEVEL=16 \
-D Trilinos_ENABLE_KokkosKernels=ON -D Trilinos_ENABLE_ALL_FORWARD_DEP_PACKAGES=ON \
$TRILINOS_DIR

$ make dashboard

This submitted to CDash at:

and it showed all passing builds and tests.

So I was not able to reproduce any failures (but I was able to test changes to the file Trilinos/CTestConfig.cmake and a new helper script Trilinos/packages/framework/load-gen-config-env.sh).

(NOTE: I also got a rude reminder that you can't use the new SEMS modules on the new 'hpws' machines, which I forgot I reported in TRILINOSHD-59)

Attempt to reproduce build errors associated with "6" build failures for build 'rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial_debug' for PR #10808: (click to expand)

Trying to reproduce build errors possibly associated with the "6" build errors reported for the build:

Using a throw-away integration test branch 10807-kokkos-kernels-cublas-titb on 'hpws055':

$ ssh hpws055

$ cd /fgs/rabartl/Trilinos.base/BUILDS/PR/rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial_debug/

$ cat load-env-and-cmake-frag-file.sh
. /fgs/rabartl/Trilinos.base/Trilinos/packages/framework/load-gen-config-env.sh \
rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial_debug_shared_no-kokkos-arch_no-asan_no-complex_no-fpic_mpi_no-pt_no-rdc_no-uvm_deprecated-on_no-package-enables

$ cat do-configure 
rm -rf CMake*
cmake \
-G Ninja \
-C GenConfigSettings.cmake \
-D Trilinos_ENABLE_ALL_FORWARD_DEP_PACKAGES=OFF \
-D Trilinos_ENABLE_TESTS=ON \
-D Trilinos_TRACE_ADD_TEST=ON \
"$@" \
$TRILINOS_DIR

$ . load-env-and-cmake-frag-file.sh

$ time ./do-configure \
  -DCTEST_BUILD_FLAGS=-j16 -DCTEST_PARALLEL_LEVEL=16 \
  -DTrilinos_ENABLE_KokkosKernels=ON -DTrilinos_ENABLE_ALL_FORWARD_DEP_PACKAGES=ON \
  &> configure.out

real    2m49.904s
user    0m48.330s
sys     1m4.549s

$ make dashboard &> make.dashboard.out

That build was successful as shown by:

$ grep "Build PASSED" make.dashboard.out 
Build PASSED!

but it showed a lot of test failures:

$ grep "failed out of" make.dashboard.out 
10% tests passed, 2459 tests failed out of 2743

This submitted to CDash at:

and

As shown in this query, all 2456 out of the 2459 test failures are showing "undefined symbol: ompi_common_verbs_usnic_register_fake_drivers" errors like:

/fgs/rabartl/Trilinos.base/BUILDS/PR/rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial_debug/packages/adelus/test/vector_random/Adelus_vector_random.exe: symbol lookup error: /projects/sems/install/rhel7-x86_64/sems/compiler/gcc/7.2.0/openmpi/1.10.1/lib/libmca_common_verbs.so.7: undefined symbol: ompi_common_verbs_usnic_register_fake_drivers

The remaining 3 failing tests shown here:

  • Compadre_GMLS_Manifold_Refinement_Study_LU
  • Compadre_GMLS_Manifold_Refinement_Study_QR
  • Compadre_GMLS_Staggered_Manifold_Refinement_Study

don't give any clue why they are failing because they are just calling a Python script and showing output.

So you can't reproduce Trilinos PR builds for the rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial env on an 'hpws' machine :-(

Well, shoot, I already reported this problem way back on 2021-12-14 in TRILINOSHD-59 and it is still not fixed. (Can't believe I got bit by that.)

Wow, so you can only reproduce these builds on actual ascic and ascicgpu machines?

Logging onto the machine 'ascicgpu17' with the same build directory in place, I ran the dashboard again with the pre-configured software with:

$ ssh ascicgpu17

$ cd /fgs/rabartl/Trilinos.base/BUILDS/PR/rhel7_sems-gnu-7.2.0-openmpi-1.10.1-serial_debug/

$ . load-env-and-cmake-frag-file.sh

$ time make dashboard &> make.dashboard.out

real    45m54.466s
user    478m37.078s
sys     41m59.991s

$ grep "failed out of" make.dashboard.out 
100% tests passed, 0 tests failed out of 2743

which posted to:

and

So I was not able to reproduce any build errors :-(

@bartlettroscoe
Member Author

As (not) shown in this query, there have not been any of these mysterious "6" build errors with no output since 2022-08-15, so I think it is safe to say this issue is resolved.

Things that may have fixed this that were done:

  • Increasing the memory limit for CDash on trilinos-cdash.sandia.gov
  • Clearing out /tmp/ on 'ascic142'

Closing as fixed.

@bartlettroscoe
Member Author

BTW, we never did figure out how CDash got into this state that it shows "6" build errors with no error output. I just know the errors went away after they cleaned up disk space on the Jenkins clients.

@bartlettroscoe
Member Author

CC: @zackgalbreath, @e10harvey

Sadly, this is not actually fixed. An experimental build the Framework team is doing for the C++17 transition (see TRILFRAME-411), driven by the Jenkins job:

which submitted to CDash here and is showing these "6" failures with no build details:

[screenshot]

The good news this time is that we have the Build.xml file archived in that Jenkins job. I will send the file to @zackgalbreath offline for him to inspect.

@bartlettroscoe
Member Author

CC: @e10harvey, @zackgalbreath, @csiefer2, @jhux2

So it turns out that the defect causing these mysterious "6" failures and the single global-level error for the outer cmake --build . ... command reported in #10823 are caused by the same problem on the client side running the ctest -S driver. The build errors are reported as the single outer command cmake --build . .... When the resulting Build.xml file is not too big, this is reported as a single build error for the command cmake --build . ..., which gets assigned to the last subproject, which is Zoltan2Sphynx. But when the Build.xml file is too large, there is a submit failure and CDash only gets a partial version of that file. And when CDash is set up to parse the file as it is being submitted (called synchronous processing, which has not been used for Trilinos for over 10 years) and the file is not submitted all the way, it seems to produce this strange state on CDash showing "6" build errors. (That would seem to be a defect on the CDash side.)

As evidence of this for the recent case described above, the Build.xml file is 251M and contains just a single build command and error:

<?xml version="1.0" encoding="UTF-8"?>
<Site BuildName="PR-__UNKNOWN__-test-rhel7_sems-clang-9.0.0-openmpi-1.10.1-serial_release-debug_shared_no-kokkos-arch_no-asan_no-complex_no-fpic_mpi_no-pt_no-rdc_no-uvm_deprecated-on_no-package-enables-197"
	BuildStamp="20220829-1353-Experimental"
	Name="ascic143"
	Generator="ctest-3.19.1"
	CompilerName=""
	CompilerVersion=""
	OSName="Linux"
	Hostname="ascic143"
	OSRelease="3.10.0-1127.18.2.el7.x86_64"
	OSVersion="#1 SMP Mon Jul 20 22:32:16 UTC 2020"
	OSPlatform="x86_64"
	Is64Bits="1"
	VendorString="GenuineIntel"
	VendorID="Intel Corporation"
	FamilyID="6"
	ModelID="63"
	ProcessorCacheSize="46080"
	NumberOfLogicalCPU="72"
	NumberOfPhysicalCPU="36"
	TotalVirtualMemory="4095"
	TotalPhysicalMemory="64160"
	LogicalProcessorsPerPhysical="2"
	ProcessorClockFrequency="2300"
	>
               <Build>
                              <StartDateTime>Aug 29 07:55 MDT</StartDateTime>
                              <StartBuildTime>1661781330</StartBuildTime>
                              <BuildCommand>/projects/sems/install/rhel7-x86_64/sems/utility/cmake/3.19.1/bin/cmake --build . --config "Debug" -- -j20 -k 0</BuildCommand>
                              <Failure type="Error">
                                             <!-- Meta-information about the build action -->
                                             <Action/>
                                             <!-- Details of command -->
                                             <Command>
                                                            <WorkingDirectory>/scratch/trilinos/jenkins/ascic143/workspace/Trilinos_PR_clang-9.0.0/pull_request_test</WorkingDirectory>
                                                            <Argument>/projects/sems/install/rhel7-x86_64/sems/utility/cmake/3.19.1/bin/cmake</Argument>
                                                            <Argument>--build</Argument>
                                                            <Argument>.</Argument>
                                                            <Argument>--config</Argument>
                                                            <Argument>Debug</Argument>
                                                            <Argument>--</Argument>
                                                            <Argument>-j20</Argument>
                                                            <Argument>-k</Argument>
                                                            <Argument>0</Argument>
                                             </Command>
                                             <!-- Result of command -->
                                             <Result>
                                                            <StdOut> 
                                                            … thousands of lines of output …
                                                            </StdOut>
                                                            <ExitCondition>1</ExitCondition>
                                             </Result>
                              </Failure>
                              <Log Encoding="base64" Compression="bin/gzip"/>
                              <EndDateTime>Aug 29 08:15 MDT</EndDateTime>
                              <EndBuildTime>1661782529</EndBuildTime>
                              <ElapsedMinutes>19</ElapsedMinutes>
               </Build>
</Site>                 

You can see this by downloading that Build.xml file and running:

$ du -sh Build.xml 
251M    Build.xml

$ grep -nH "<BuildCommand>" Build.xml 
Build.xml:29:           <BuildCommand>/projects/sems/install/rhel7-x86_64/sems/utility/cmake/3.19.1/bin/cmake --build . --config "Debug" -- -j20 -k 0</BuildCommand>

$ grep -nH "<StdOut>" Build.xml 
Build.xml:48:                           <StdOut>[1/27620] Building CXX object packages/kokkos/core/src/CMakeFiles/kokkoscore.dir/impl/Kokkos_NumericTraits.cpp.o

$ grep -nH "</StdOut>" Build.xml 
Build.xml:1176665:ninja: build stopped: cannot make progress due to previous errors.</StdOut>
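Since a partially submitted Build.xml is no longer well-formed XML, the suspected truncation can also be checked directly. Here is a minimal sketch (this is not CDash's actual parsing logic, just a local sanity check, and the tiny sample document is made up):

```python
import xml.etree.ElementTree as ET

def inspect_build_xml(path):
    """Return ('truncated', 0) if the file is not well-formed XML, otherwise
    ('ok', <number of <Failure> elements>)."""
    try:
        root = ET.parse(path).getroot()
    except ET.ParseError:
        return ("truncated", 0)
    return ("ok", len(root.findall(".//Failure")))

# Simulate a complete submit and a partial (failed) submit of a tiny Build.xml.
doc = '<Site><Build><Failure type="Error"/></Build></Site>'
with open("Build_ok.xml", "w") as f:
    f.write(doc)
with open("Build_cut.xml", "w") as f:
    f.write(doc[:len(doc) // 2])  # cut off mid-document

print(inspect_build_xml("Build_ok.xml"))   # ('ok', 1)
print(inspect_build_xml("Build_cut.xml"))  # ('truncated', 0)
```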

@bartlettroscoe bartlettroscoe added this to ToDo in Trilinos TriBITS Refactor via automation Aug 29, 2022
@bartlettroscoe bartlettroscoe moved this from ToDo to In Progress in Trilinos TriBITS Refactor Aug 29, 2022
@bartlettroscoe
Member Author

bartlettroscoe commented Aug 29, 2022

In summary, there are 3 independent defects that look to be working together to result in showing these mysterious "6" build errors:

If any one of those things did not happen, you would not see these mysterious "6" build errors. The first two look to be user errors on the SNL side. The third looks to be a possible CDash defect.

@bartlettroscoe
Member Author

bartlettroscoe commented Sep 22, 2022

If any one of those things did not happen, you would not see these mysterious "6" build errors. The first two look to be user errors on the SNL side. The third looks to be a possible CDash defect.

Actually, the second one ("ctest -S driver reporting the one single build error for the cmake --build . ... command") appears to be caused by a defect in CTest (see #10823 (comment)).

@bartlettroscoe bartlettroscoe added the Waiting Waiting for some external team to do something before this can be completed label Oct 19, 2022
@bartlettroscoe
Member Author

FYI: The fix for this is in CMake 3.24.3 (released 2022-11-01); see SNL Kitware #209. Next: Install CMake 3.24.3 everywhere and use it with Trilinos PR builds ...

@bartlettroscoe
Member Author

This is now resolved since all of the PR builds are using CMake 3.24.3 (see #10823 (comment)). We should never see this again.

Closing as complete.

Trilinos TriBITS Refactor automation moved this from In Progress to Done Dec 1, 2022