C++ modules project directory structure #1
OldDev78 wants to merge 18 commits into Redot-Engine:master
📝 Walkthrough
Adds CMake build infrastructure, CMake helper modules, a C++20 module-based engine core (math, definitions, version), SIMD/platform CPU detection, unit-test integration with Boost.UT, and related source/header files and .gitignore entries.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (1 warning)
Actionable comments posted: 13
🧹 Nitpick comments (15)
core/definitions/version.cppm (1)

2-3: Re-exporting the entire `std` module may be unintended.

`export import std;` re-exports all of the standard library through this module. If only specific types are needed (such as `uint16_t` or `format_context`), consider importing without re-exporting, or importing only the necessary facilities via `std.compat`.

Suggested fix if full re-export is not intended:

```diff
 export module core.version;
-export import std;
-import std.compat;
+import std;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/definitions/version.cppm` around lines 2-3: the file currently uses `export import std;`, which re-exports the entire standard library. If that was not intended, remove the `export` (making it a plain import) or import only the needed facilities (e.g. `uint16_t`, `format_context`) rather than exporting `std`, and ensure any code referencing standard symbols still compiles after the change.

core/memory/allocator.cpp (1)
4-7: Allocation failure silently sets size to 0.

If `std::calloc` fails, the allocator is left in a valid but empty state (`_size = 0`). Consider logging or providing a way to check initialization status, especially for the 256MB default allocation, which could realistically fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/memory/allocator.cpp` around lines 4-7: the RawAllocator constructor silently treats calloc failure by setting `_size` to 0. Modify the constructor to detect allocation failure and surface it (e.g. set an explicit `_initialized` flag, provide an `isInitialized()` method, and/or log the error via the project's logging facility), or throw `std::bad_alloc` if that policy is acceptable, and update uses of `_size`/`_memory_block` to check the new status so callers can detect the failed 256MB allocation.

CMakePresets.json (1)
21-26: Experimental `import std` UUID and disabled FORTIFY_SOURCE require attention.

`CMAKE_EXPERIMENTAL_CXX_IMPORT_STD` uses a hardcoded UUID that is CMake version-specific and will need updating when CMake releases stable `import std` support.

Disabling `FORTIFY_SOURCE` (`-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0`) removes buffer-overflow runtime checks. If this is intentional for module compatibility, consider adding a comment explaining why, and potentially re-enabling it for Release builds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CMakePresets.json` around lines 21-26: the preset uses a hardcoded CMake experimental UUID and globally disables FORTIFY_SOURCE. Remove or guard the `CMAKE_EXPERIMENTAL_CXX_IMPORT_STD` entry and conditionalize `import std` usage on a CMake version check or feature test so it won't break on other CMake releases. For the FORTIFY change, replace the global `CMAKE_CXX_FLAGS_INIT` entry containing `-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0` with a scoped/conditional flag (only for the specific target/module that needs it), or add a comment and a Release-specific override that re-enables FORTIFY_SOURCE, documenting why it was disabled.

cmake/modules.cmake (2)
80-93: Unused `DIR` variable and inconsistent target name derivation.

The `DIR` variable is set but never used in both loops. Additionally, `target_link_modules` uses `string(REPLACE "/" "_")` while `add_modules_library` uses `string(MAKE_C_IDENTIFIER)` — these produce different results for edge cases (e.g. paths with dashes).

♻️ Proposed fix — remove the unused variable and use consistent naming:

```diff
 if (MOD_LINK_PUBLIC)
     foreach(NAME ${MOD_LINK_PUBLIC})
-        set(DIR "${CMAKE_CURRENT_SOURCE_DIR}/${NAME}")
-        string(REPLACE "/" "_" TARGET ${NAME})
+        string(MAKE_C_IDENTIFIER ${NAME} TARGET)
         target_link_libraries(${ARGV0} PUBLIC ${TARGET})
     endforeach()
 endif()
 if (MOD_LINK_PRIVATE)
     foreach(NAME ${MOD_LINK_PRIVATE})
-        set(DIR "${CMAKE_CURRENT_SOURCE_DIR}/${NAME}")
-        string(REPLACE "/" "_" TARGET ${NAME})
+        string(MAKE_C_IDENTIFIER ${NAME} TARGET)
         target_link_libraries(${ARGV0} PRIVATE ${TARGET})
     endforeach()
 endif()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmake/modules.cmake` around lines 80-93: remove the unused `DIR` variable and make target name derivation consistent with `add_modules_library`. In the blocks handling MOD_LINK_PUBLIC and MOD_LINK_PRIVATE, remove the `set(DIR ...)` lines, replace `string(REPLACE "/" "_" TARGET ${NAME})` with `string(MAKE_C_IDENTIFIER ${NAME} TARGET)` (the same identifier-generation call used by `add_modules_library`), and continue calling `target_link_libraries(${ARGV0} PUBLIC|PRIVATE ${TARGET})` so both loops produce the same target names.
30-33: `file(GLOB)` does not trigger CMake re-configuration on new file additions.

Using `file(GLOB ...)` for source collection is convenient, but CMake won't automatically detect new `.cppm`, `.cpp`, or `.test.cpp` files without a manual reconfigure. Consider adding `CONFIGURE_DEPENDS` (CMake 3.12+) to mitigate this:

♻️ Proposed fix:

```diff
-    file(GLOB CPP_MODULE_FILES ${LIB_PATH}/*.cppm)
+    file(GLOB CPP_MODULE_FILES CONFIGURE_DEPENDS ${LIB_PATH}/*.cppm)
-    file(GLOB CPP_UNIT_TESTS ${LIB_PATH}/*.test.cpp)
-    file(GLOB CPP_SRC_FILES ${LIB_PATH}/*.cpp)
+    file(GLOB CPP_UNIT_TESTS CONFIGURE_DEPENDS ${LIB_PATH}/*.test.cpp)
+    file(GLOB CPP_SRC_FILES CONFIGURE_DEPENDS ${LIB_PATH}/*.cpp)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmake/modules.cmake` around lines 30-33: the `file(GLOB ...)` calls that populate CPP_MODULE_FILES, CPP_UNIT_TESTS, and CPP_SRC_FILES won't make CMake reconfigure when new files are added; add the `CONFIGURE_DEPENDS` option to each invocation so CMake monitors the directory and reruns configure when new `.cppm`/`.cpp`/`.test.cpp` files appear.

core/math/vector4.cppm (2)
35: Pointer arithmetic on struct members is technically undefined behavior.

`(&x)[i]` assumes contiguous memory layout of struct members, which is not guaranteed by the C++ standard. While `alignas(16)` and the static asserts make this work in practice on major compilers, it's technically UB. A safer alternative would be to use a union with an array, or `std::bit_cast` in C++20.

♻️ Alternative using array storage:

```cpp
struct alignas(16) Vector4 {
    union {
        struct { float x, y, z, w; };
        float data[4];
    };

    constexpr float& operator[](const int i) noexcept {
        if consteval {
            switch (i) { /* ... */ }
        } else {
            return data[i];
        }
    }
};
```

Note: anonymous structs in unions are a common extension but also non-standard. The current approach is pragmatic given the static asserts.

Also applies to: 47-47

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/math/vector4.cppm` at line 35: the operator[] implementation in Vector4 uses pointer arithmetic on a member via `(&x)[i]`, which is undefined behavior. Replace it with well-defined storage access by adding an explicit array-backed representation (e.g. a `float data[4]` member or a union with the x, y, z, w members) and return `data[i]` for the non-consteval path, or alternatively use `std::bit_cast` to map the struct to an array; update both the non-const and const overloads of operator[] to read from that array instead of `(&x)[i]`.
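For illustration, here is a runnable sketch of the array-backed idea that sidesteps both the `(&x)[i]` UB and the non-standard anonymous-struct-in-union extension. The `Vector4` below is a hypothetical stand-in, not the engine's actual type; the trade-off is that `x`/`y` become accessor functions.

```cpp
#include <cassert>

// Hypothetical array-backed Vector4: operator[] indexes a real float[4],
// so no pointer arithmetic across distinct members is required.
struct alignas(16) Vector4 {
    float data[4]{};

    // Named accessors stand in for the original x/y/z/w data members.
    constexpr float& x() noexcept { return data[0]; }
    constexpr float& y() noexcept { return data[1]; }

    constexpr float& operator[](const int i) noexcept { return data[i]; }
    constexpr float operator[](const int i) const noexcept { return data[i]; }
};
```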
84-91: Division by zero produces infinity or NaN without warning.

`operator/=` computes `1.0f / s` without checking whether `s == 0.0f`. This will silently produce `inf` or `NaN` values. Consider whether this is acceptable or whether a debug assertion is warranted.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/math/vector4.cppm` around lines 84-91: `operator/=` currently computes `inv = 1.0f / s` with no zero check. Guard against division by zero by adding a debug assertion (e.g. `assert(s != 0.0f)` or `assert(std::fabs(s) > std::numeric_limits<float>::epsilon())`) before computing `inv` in `constexpr Vector4& operator/=(const float s) noexcept`, and decide on release behavior (leave unchanged, return *this, or define a clear fallback) so the code doesn't silently produce inf/NaN.

core/memory/allocator.cppm (2)
58-60: `deallocate` calls `std::terminate()` on failure — document this behavior.

Calling `std::terminate()` when `pop()` fails is severe. This is appropriate for a strict LIFO arena but should be clearly documented, since it differs from standard allocator semantics, where deallocation order is typically not enforced.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/memory/allocator.cppm` around lines 58-60: deallocate currently calls `std::terminate()` when `_raw_allocator.pop(n * sizeof(T))` fails, which is strong, non-standard behavior. Update the comment/docs for deallocate (and/or the allocator class) to state explicitly that deallocation must follow strict LIFO order, that a failed pop will call `std::terminate()`, and that this allocator does not follow typical allocator semantics, referencing the deallocate method and the `_raw_allocator.pop` call so users understand the enforced behavior and its consequences.
29: `get_default_raw_allocator()` may have thread-safety concerns.

If this returns a reference to a global/static `RawAllocator`, concurrent calls from multiple threads could cause data races on the internal `_stack_pos`. Consider documenting thread-safety guarantees or providing thread-local allocators.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/memory/allocator.cppm` at line 29: get_default_raw_allocator returns a reference to a shared RawAllocator, which can lead to data races on its internal `_stack_pos` when used from multiple threads. Either make the default allocator instance `thread_local`, add synchronization inside RawAllocator to protect `_stack_pos` (e.g. a mutex around allocation/stack operations), or change the API to return a new allocator by value, and document the thread-safety contract for get_default_raw_allocator and `_stack_pos` so callers know whether concurrent access is allowed.

core/definitions/definitions.cppm (1)
17-24: `is_zero_constructible` semantic may be misleading.

The comment states this trait identifies types where "the default value is just all-0 bytes" and can use `memset`. However, `std::is_trivially_constructible` only guarantees the constructor is trivial — it does not guarantee the default value is zero bytes. For example, a struct with uninitialized primitive members is trivially constructible but not zero-initialized. Consider renaming to `is_trivially_default_constructible` or documenting the actual semantic more precisely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/definitions/definitions.cppm` around lines 17-24: the trait `is_zero_constructible` and `is_zero_constructible_v` are misleading because they use `std::is_trivially_constructible`, which does not guarantee zero-byte initialization. Either (A) rename the trait to `is_trivially_default_constructible` (and `is_trivially_default_constructible_v`) and adjust the comment to describe trivial default-construction semantics, or (B) if you truly need zero-byte-initializable types, change the implementation and comment to reflect that (e.g. only accept types known to be zero-initializable); in either case update all references to the old names.

boost/ut.cppm (1)
14-22: Inconsistent template argument usage for `reporter_junit` instantiations.

Lines 14-17 instantiate `reporter_junit<boost::ut::printer>` explicitly, while lines 20-22 use `reporter_junit<>` (default template argument). If the default is `printer`, these should be equivalent, but mixing explicit and default arguments can cause ODR issues if they resolve to different types.

♻️ Proposed fix — use consistent template arguments:

```diff
 template class boost::ut::reporter_junit<boost::ut::printer>;
 template void boost::ut::reporter_junit<boost::ut::printer>::on<bool>(boost::ut::events::log<bool>);
 template void boost::ut::reporter_junit<boost::ut::printer>::on<bool>(boost::ut::events::assertion_pass<bool>);
 template void boost::ut::reporter_junit<boost::ut::printer>::on<bool>(boost::ut::events::assertion_fail<bool>);
 template auto boost::ut::detail::test::operator=<>(test_location<void (*)()> _test);
 template auto boost::ut::expect<bool>(const bool& expr, const reflection::source_location&);
-template void boost::ut::reporter_junit<>::on<boost::ut::detail::fatal_<bool>>(events::assertion_fail<boost::ut::detail::fatal_<bool>>);
-template void boost::ut::reporter_junit<>::on<boost::ut::detail::fatal_<bool>>(events::assertion_pass<boost::ut::detail::fatal_<bool>>);
-template void boost::ut::reporter_junit<>::on<boost::ut::detail::fatal_<bool>>(events::log<boost::ut::detail::fatal_<bool>>);
+template void boost::ut::reporter_junit<boost::ut::printer>::on<boost::ut::detail::fatal_<bool>>(events::assertion_fail<boost::ut::detail::fatal_<bool>>);
+template void boost::ut::reporter_junit<boost::ut::printer>::on<boost::ut::detail::fatal_<bool>>(events::assertion_pass<boost::ut::detail::fatal_<bool>>);
+template void boost::ut::reporter_junit<boost::ut::printer>::on<boost::ut::detail::fatal_<bool>>(events::log<boost::ut::detail::fatal_<bool>>);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@boost/ut.cppm` around lines 14-22: the instantiations mix `reporter_junit<boost::ut::printer>` and `reporter_junit<>`, which can cause ODR/type mismatch. Pick one form and make all explicit instantiations consistent (e.g. change the `reporter_junit<>` lines to `reporter_junit<boost::ut::printer>` or vice versa), updating the `on<bool>` and `on<boost::ut::detail::fatal_<bool>>` overloads (log/assertion variants) and any related instantiations (such as `operator=<>` and `expect<bool>`) so they all use the same template argument form across the file.

platform/simd.h (1)
25-36: SIMD_AVX2 is unconditionally defined but not validated at runtime.

The header unconditionally defines `SIMD_AVX2` for all x64 builds, but older x64 CPUs may not support AVX2 instructions. If code paths later use AVX2-specific intrinsics (e.g. `_mm256_*`), they will crash on CPUs without AVX2. The current `vector4.cppm` only uses SSE intrinsics, which are safe, but defining `SIMD_AVX2` as a compile-time constant could mislead future code into assuming AVX2 is available.

Consider either:

- Renaming to `SIMD_SSE` for clarity on the baseline guarantee
- Adding a runtime `validate_cpu()` check before any AVX2 code paths execute

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform/simd.h` around lines 25-36: the header currently defines SIMD_AVX2 unconditionally for ARCH_X64, which can mislead code (e.g. vector4.cppm or future files) into using AVX2 intrinsics on CPUs without AVX2. Either revert the compile-time claim by renaming the macro to a safe baseline like SIMD_SSE (updating all uses of SIMD_AVX2 and leaving SIMD_AVX512 guarded by ENABLE_AVX512), or keep SIMD_AVX2 but add a runtime CPU feature validation function (e.g. validate_cpu() called before any AVX2-specific paths) that checks CPUID feature flags and disables AVX2 code paths or falls back to SSE if unavailable, so no AVX2 intrinsics execute unless validated.

platform/cpu/cpu_info_x64.cpp (1)
90-100: Redundant architecture guard inside `validate_cpu()`.

The `#if defined(__x86_64__) || defined(_M_X64)` check on lines 91-99 is redundant because lines 14-16 already emit `#error` for non-x86_64 platforms. If compilation reaches line 90, the architecture is guaranteed to be x86-64.

♻️ Simplified implementation:

```diff
 void validate_cpu() noexcept {
-#if defined(__x86_64__) || defined(_M_X64)
-    const CpuFeature level = detect_cpu_feature();
+    const CpuFeature level = detect_cpu_feature();

-    if(level == CpuFeature::NONE) {
-        std::abort();
-    }
-#else
-#error Unsupported architecture.
-#endif
+    if (level == CpuFeature::NONE) {
+        std::abort();
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform/cpu/cpu_info_x64.cpp` around lines 90-100: remove the redundant architecture preprocessor guard inside validate_cpu(). Since earlier file-level checks already emit `#error` for non-x86_64, simplify validate_cpu() to always call detect_cpu_feature(), abort if the returned CpuFeature is CpuFeature::NONE, and delete the inner `#if`/`#else`/`#endif` block with its Unsupported architecture error to keep validate_cpu() concise and guaranteed-x86_64.

core/math/constants.cppm (1)
18-19: Naming may cause confusion: these are reciprocals, not max values.

`UINT32_MAX_D` and `UINT32_MAX_F` store `1.0 / UINT32_MAX`, not the max value itself. This naming could mislead readers into thinking they represent the maximum value. Consider renaming to clarify intent (e.g. `UINT32_MAX_RECIP_D` or `INV_UINT32_MAX_D`).

♻️ Suggested renaming:

```diff
-    constexpr double UINT32_MAX_D = 1. / static_cast<double>(std::numeric_limits<std::uint32_t>::max());
-    constexpr float UINT32_MAX_F = 1.f / static_cast<float>(std::numeric_limits<std::uint32_t>::max());
+    constexpr double INV_UINT32_MAX_D = 1. / static_cast<double>(std::numeric_limits<std::uint32_t>::max());
+    constexpr float INV_UINT32_MAX_F = 1.f / static_cast<float>(std::numeric_limits<std::uint32_t>::max());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/math/constants.cppm` around lines 18-19: the constants UINT32_MAX_D and UINT32_MAX_F are actually reciprocals (1/UINT32_MAX) and the names are misleading. Rename them to a clearer identifier such as INV_UINT32_MAX_D and INV_UINT32_MAX_F (or UINT32_MAX_RECIP_D / _F) and update all references accordingly (search for UINT32_MAX_D and UINT32_MAX_F in functions, headers, and tests) so the intent — the inverse of UINT32_MAX — is explicit across the codebase.

core/math/math.cppm (1)
39-47: Minor: missing `noexcept` on `rad_to_deg`.

`deg_to_rad` is marked `noexcept` but `rad_to_deg` is not, despite having identical exception-safety characteristics (both perform only arithmetic operations).

♻️ Suggested fix for consistency:

```diff
 template <std::floating_point T>
-constexpr T rad_to_deg(T y) {
+constexpr T rad_to_deg(T y) noexcept {
     return y * (T{180.} / std::numbers::pi_v<T>);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/math/math.cppm` around lines 39-47: rad_to_deg has the same exception-safety as deg_to_rad but lacks noexcept; add noexcept to the rad_to_deg template declaration so its signature matches deg_to_rad's exception specifier, while keeping it constexpr and templated over std::floating_point for consistent behavior and compile-time guarantees between the two functions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmake/compiler.cmake`:
- Around line 17-18: The CMake file currently injects hardcoded GCC/Clang SIMD
flags via add_compile_options(-mavx2 -mfma), which is not portable and can break
MSVC or run on unsupported CPUs; change this to conditionally add these flags
only for compatible compilers and make them configurable: introduce a CMake
option (e.g., ENABLE_AVX2) and wrap the add_compile_options call in a compiler
check that only applies the flags when CMAKE_CXX_COMPILER_ID indicates GCC or
Clang (and/or when target_compile_options for specific targets is appropriate),
and document/disable the option by default so users on MSVC or older CPUs are
not forced to build with -mavx2 -mfma.
- Around line 6-15: The current logic ignores the USE_LTO preset and only checks
CMAKE_BUILD_TYPE; update the condition to honor the USE_LTO cache/option (check
the variable USE_LTO alongside CMAKE_BUILD_TYPE) and only enable LTO when
USE_LTO is ON and IPO_SUPPORTED is true, otherwise emit a clear message; replace
the compiler-specific add_link_options(-flto) usage with the portable CMake
property by setting the target or directory property
CMAKE_INTERPROCEDURAL_OPTIMIZATION (or set the global policy via
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION ON) when appropriate) and keep the
existing IPO_SUPPORTED/else message flow (or alternatively remove the unused
USE_LTO from presets if you prefer not to support it).
In `@CMakePresets.json`:
- Line 33: Update the misleading "description" fields in the CMakePresets.json
presets that currently read "Release configuration, including C# support" and
the analogous Debug description; remove the phrase "including C# support" (or
replace it with an accurate short phrase like "with standard C/C++ build
settings") so the descriptions reflect the actual preset purpose; locate and
edit the two "description" properties for the Release and Debug presets to
update their text.
In `@core/math/math.cppm`:
- Around line 112-113: final_range is declared as type T but should be a boolean
condition; change its declaration to bool final_range and assign it the result
of (to_t == T{0}) so the ternary that sets final_scale uses a boolean condition
(i.e., use bool final_range = to_t == T{0}; then compute T final_scale =
final_range ? T{.5} : t / to_t;). Ensure you update the declarations around
final_range/final_scale to reference these exact symbols.
- Around line 74-80: The compile-time (consteval) branch multiplies only the
first term by 0.5 due to missing parentheses, producing wrong Catmull‑Rom
results; change the consteval return so the outer T{0.5} multiplies the entire
sum — i.e. multiply 0.5 by the full expression (T{2} * from) + (a * weight) + (b
* w2) + (c * w3) — ensuring the expression using from, a, b, c, weight, w2, w3
is fully parenthesized just like the runtime std::fma path.
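To make the parenthesization point concrete, here is a sketch contrasting the correct and buggy forms. The coefficient names (`from`, `a`, `b`, `c`, `weight`, `w2`, `w3`) are taken from the prompt above; this is not the engine's actual function.

```cpp
#include <cassert>

// Correct: the 0.5 factor scales the ENTIRE sum.
template <typename T>
constexpr T catmull_rom_combine(T from, T a, T b, T c, T weight, T w2, T w3) {
    return T{0.5} * ((T{2} * from) + (a * weight) + (b * w2) + (c * w3));
}

// The reported bug, for comparison: 0.5 applies only to the first term,
// so the result differs whenever a, b, or c contributes.
template <typename T>
constexpr T catmull_rom_buggy(T from, T a, T b, T c, T weight, T w2, T w3) {
    return (T{0.5} * T{2} * from) + (a * weight) + (b * w2) + (c * w3);
}
```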
In `@core/math/math.test.cpp`:
- Around line 7-11: The test "pow"_test currently compares draco::math::pow(2.,
.5) to std::numbers::sqrt2_v<double> with operator== which is fragile for
floating-point; change the assertion to an epsilon/approximate comparison
instead (e.g., use Boost.UT's approximate matcher or compare fabs(result -
expected) <= tol). Update the expectation in the "pow"_test to use a small
tolerance (e.g., 1e-12 or appropriate for double) when comparing result from
draco::math::pow to std::numbers::sqrt2_v<double>.
In `@core/math/vector4.cppm`:
- Around line 142-144: The ARCH_ARM64 preprocessor branch in
core/math/vector4.cppm lacks a return, causing compilation/UB; implement the
ARM64 branch to return the same scalar fallback as the default `#else` path (copy
the return expression or call used there) so the function (e.g., the Vector4
routine in this diff) always returns a value; if NEON implementation isn't
ready, mirror the `#else` logic or forward to the generic helper used elsewhere in
this file to ensure correct behavior.
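The pattern the prompt asks for can be sketched with a stand-in routine (a 4-wide dot product here, hypothetical, not the actual Vector4 function): every preprocessor branch must return, and an unimplemented ARM64 path simply mirrors the scalar fallback.

```cpp
#include <cassert>

// Scalar fallback shared by all architecture branches.
inline float dot4_scalar(const float* a, const float* b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}

// Every branch returns a value; until a NEON implementation exists, the
// ARM64 branch reuses the generic fallback instead of falling off the end.
inline float dot4(const float* a, const float* b) {
#if defined(__aarch64__) || defined(_M_ARM64)
    return dot4_scalar(a, b);  // TODO: replace with NEON intrinsics
#else
    return dot4_scalar(a, b);
#endif
}
```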
In `@core/memory/allocator.cpp`:
- Around line 15-19: RawAllocator::push has an off-by-one bug: it increments
_stack_pos before computing the return pointer so callers get the address after
the allocated region; change the logic to compute the return pointer from
_memory_block + _stack_pos first, then advance _stack_pos by bytes (while
keeping the same bounds check against _size and the nullptr check for
_memory_block) so the returned pointer points to the start of the newly
allocated slice.
- Around line 15-19: RawAllocator::push currently advances by bytes without
honoring alignment, which can misalign allocations; modify RawAllocator::push to
accept an optional alignment parameter (defaulting to alignof(std::max_align_t))
or use a module-wide default, compute an aligned offset from _stack_pos (e.g.,
align up to alignment boundary), check that aligned_offset + bytes <= _size, set
_stack_pos = aligned_offset + bytes, and return _memory_block + aligned_offset;
ensure you update the corresponding module interface (cppm) signature to match
the new parameter/default.
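The two allocator.cpp fixes above can be sketched together. The names mirror the review (`_memory_block`, `_stack_pos`, `_size`), but this is an assumed stand-in, not the real class: the pointer is computed before advancing the cursor, and the offset is aligned up first.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

struct RawAllocatorSketch {
    std::byte* _memory_block = nullptr;
    std::size_t _size = 0;
    std::size_t _stack_pos = 0;

    // Returns the START of the newly allocated slice, aligned to `alignment`
    // (assumed to be a power of two), or nullptr on exhaustion.
    void* push(std::size_t bytes,
               std::size_t alignment = alignof(std::max_align_t)) noexcept {
        if (_memory_block == nullptr) return nullptr;
        // Align the current offset up to the requested boundary.
        const std::size_t aligned = (_stack_pos + alignment - 1) & ~(alignment - 1);
        if (aligned + bytes > _size) return nullptr;
        void* p = _memory_block + aligned;  // start of the new slice
        _stack_pos = aligned + bytes;       // advance AFTER computing p
        return p;
    }
};
```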
In `@core/memory/allocator.cppm`:
- Around line 48-56: Remove the misleading constexpr specifier from the
allocation methods: the functions allocate and allocate_at_least call
_raw_allocator.push(), which is not usable in constant evaluation, so delete
constexpr from both allocate(const std::size_t n) and allocate_at_least(const
std::size_t n) (allocate_at_least already forwards to this->allocate), leaving
their behavior unchanged otherwise.
- Around line 33-67: Add allocator equality operators so STL containers can
compare allocators: implement a templated non-member operator== and operator!=
for ElementsAllocator that accept ElementsAllocator<T> and ElementsAllocator<U>
and return true when they refer to the same RawAllocator (compare
&a._raw_allocator == &b._raw_allocator), mark them noexcept, and place them
alongside the ElementsAllocator class (use the private member _raw_allocator to
perform the comparison); ensure operator!= is the negation of operator== so
deallocation compatibility across different T specializations is correctly
represented.
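A minimal sketch of those equality operators (names assumed from the review; the real class holds a reference to its RawAllocator, a pointer is used here for brevity): two specializations compare equal exactly when they share the same underlying arena.

```cpp
#include <cassert>

struct RawAllocator {};

template <typename T>
struct ElementsAllocator {
    RawAllocator* raw;  // stand-in for the real _raw_allocator reference
};

// Allocators are interchangeable iff they refer to the same arena, so memory
// obtained through one specialization may be deallocated through another.
template <typename T, typename U>
bool operator==(const ElementsAllocator<T>& a,
                const ElementsAllocator<U>& b) noexcept {
    return a.raw == b.raw;
}

template <typename T, typename U>
bool operator!=(const ElementsAllocator<T>& a,
                const ElementsAllocator<U>& b) noexcept {
    return !(a == b);
}
```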
In `@platform/cpu/cpu_info_x64.cpp`:
- Around line 35-38: Fix the typo in the comment above the os_has_zmm()
function: change "full ZMM statae is required for AVX512." to "full ZMM state is
required for AVX512." The function to locate is os_has_zmm() which calls
get_xcr0(); update only the comment text to correct the spelling.
In `@platform/simd.h`:
- Around line 60-68: The preprocessor checks use an undefined DEBUG symbol so
the debug branches for ASSUME and UNREACHABLE never run; update the conditionals
to use standard debug macros (e.g., change occurrences of `#if` DEBUG to something
like `#if` defined(DEBUG) || defined(_DEBUG) || !defined(NDEBUG)) so both MSVC and
non‑MSVC paths pick up actual debug builds, and/or add a CMake change to define
DEBUG for debug configurations; ensure the change affects the blocks that define
ASSUME and UNREACHABLE and keep USING_COMPILER_MSVC logic intact.
leave unchanged, return *this, or define a clear fallback) so you don't silently
produce inf/NaN; reference the constexpr Vector4& operator/=(const float s)
noexcept implementation and add the assertion/check before computing inv.
In `@core/memory/allocator.cpp`:
- Around line 4-7: The RawAllocator constructor silently treats calloc failure
by setting _size to 0; modify RawAllocator::RawAllocator to detect allocation
failure and surface it (e.g., set an explicit initialization flag like
_initialized or provide a method isInitialized(), and/or log the error via the
project's logging facility) instead of silently leaving the object empty; update
uses of _size/_memory_block to check the new status (or throw a std::bad_alloc
in the constructor if that policy is acceptable) so callers can detect the
failed 256MB allocation.
In `@core/memory/allocator.cppm`:
- Around line 58-60: The deallocate method currently calls std::terminate() when
_raw_allocator.pop(n * sizeof(T)) fails, which is a strong, non-standard
behavior; update the comment/docs for the deallocate function (and/or the
allocator class) to explicitly state that deallocation must follow strict LIFO
order, that failure to pop will call std::terminate(), and that this allocator
does not follow typical allocator semantics; reference the deallocate method and
the _raw_allocator.pop call in your comment so users understand the enforced
behavior and its consequences.
- Line 29: get_default_raw_allocator returns a reference to a shared
RawAllocator which can lead to data races on its internal _stack_pos when used
from multiple threads; either make the default allocator instance thread-local
(so get_default_raw_allocator returns a reference to a thread_local
RawAllocator), or add synchronization inside RawAllocator to protect _stack_pos
(e.g., a mutex around allocation/stack operations), or alternatively change the
API to return a new allocator by value and document the thread-safety contract
for get_default_raw_allocator and RawAllocator::_stack_pos so callers know
whether concurrent access is allowed.
In `@platform/cpu/cpu_info_x64.cpp`:
- Around line 90-100: Remove the redundant architecture preprocessor guard
inside validate_cpu(): since earlier file-level checks already emit `#error` for
non-x86_64, simplify validate_cpu() to always call detect_cpu_feature(), check
if the returned CpuFeature (from detect_cpu_feature()) is CpuFeature::NONE and
abort if so; remove the inner `#if/`#else/#endif block and the Unsupported
architecture error to keep validate_cpu() concise and guaranteed-x86_64.
In `@platform/simd.h`:
- Around line 25-36: The header currently defines SIMD_AVX2 unconditionally for
ARCH_X64 which can mislead code (e.g., vector4.cppm or future files) into using
AVX2 intrinsics on CPUs without AVX2; either revert the compile-time claim by
renaming the macro to a safe baseline like SIMD_SSE (update all uses of
SIMD_AVX2 to SIMD_SSE and leave SIMD_AVX512 guarded by ENABLE_AVX512), or keep
the SIMD_AVX2 macro but add a runtime CPU feature validation function (e.g.,
validate_cpu() called before any AVX2-specific paths) that checks CPUID/feature
flags and disables AVX2 code paths or falls back to SSE if not present; update
references to SIMD_AVX2 in code to consult the runtime check (or the new
SIMD_SSE macro) so no AVX2 intrinsics are executed unless validated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7d5f351b-6839-4bac-a0d3-a9c0226845ac
📒 Files selected for processing (23)
- .gitignore
- CMakeGraphVizOptions.cmake
- CMakeLists.txt
- CMakePresets.json
- boost/ut.cppm
- boost/ut.hpp
- boost/ut_main.cpp
- cmake/compiler.cmake
- cmake/modules.cmake
- core/CMakeLists.txt
- core/core.cppm
- core/definitions/definitions.cppm
- core/definitions/version.cppm
- core/math/constants.cppm
- core/math/math.cppm
- core/math/math.test.cpp
- core/math/vector4.cppm
- core/memory/allocator.cpp
- core/memory/allocator.cppm
- platform/cpu/cpu_info.h
- platform/cpu/cpu_info_neon.cpp
- platform/cpu/cpu_info_x64.cpp
- platform/simd.h
Actionable comments posted: 7
♻️ Duplicate comments (2)
core/math/vector4.cppm (1)
142-144: ⚠️ Potential issue | 🔴 Critical — ARM64 branch missing return statement; will fail to compile.
The `ARCH_ARM64` preprocessor branch has only a TODO comment but no return statement. This will cause a compilation error on ARM64 builds. Based on learnings, the NEON implementation is planned for a future PR, but a scalar fallback is needed now to prevent build failures.
🐛 Proposed fix — add scalar fallback
```diff
 #elif ARCH_ARM64
-    // TODO: setup NEON SIMD.
+    // TODO: setup NEON SIMD. Using scalar fallback for now.
+    return a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
 #else
```
Verify each finding against the current code and only fix it if needed. In `@core/math/vector4.cppm` around lines 142 - 144, The ARCH_ARM64 preprocessor branch in vector4.cppm currently contains only a TODO and no return, causing ARM64 builds to fail; replace the TODO with a scalar-fallback implementation that mirrors the non-SIMD path (perform the same element-wise calculations and return the resulting Vector4), ensuring the branch returns the correct Vector4 value; specifically update the code guarded by `#elif` ARCH_ARM64 to call the same scalar logic used in the `#else` branch (or delegate to the scalar helper function if one exists) so the function compiles and behaves identically until a NEON SIMD implementation is added.core/math/math.test.cpp (1)
7-11:⚠️ Potential issue | 🟡 MinorFloating-point equality comparison may be fragile.
Comparing `pow(2., .5)` with `std::numbers::sqrt2_v<double>` using `==` can fail depending on the implementation. Consider using an epsilon-based comparison.
🛠️ Suggested fix
```diff
 "pow"_test = [] {
     double result = draco::math::pow(2., .5);
     constexpr double expected = std::numbers::sqrt2_v<double>;
-    expect(result == expected);
+    expect(std::abs(result - expected) < 1e-15);
 };
```
Verify each finding against the current code and only fix it if needed. In `@core/math/math.test.cpp` around lines 7 - 11, The test "pow"_test currently uses exact equality (expect(result == expected)) to compare draco::math::pow(2., .5) against std::numbers::sqrt2_v<double>, which is fragile for floating-point; change the assertion to an epsilon-based comparison (e.g., compute std::abs(result - expected) and assert it is less than a small epsilon like 1e-12) to tolerate rounding differences, updating the assertion in the "pow"_test to use this approximate check for draco::math::pow and expected.
🧹 Nitpick comments (7)
platform/cpu/cpu_info_x64.cpp (1)
90-100: Redundant architecture check inside `validate_cpu()`.
The `#if defined(__x86_64__) || defined(_M_X64)` guard on lines 91 and 97-99 is redundant — the file already has a compile-time guard at lines 14-16 that will `#error` if compiled on a non-x86-64 platform. This inner check can never reach the `#else` branch.
♻️ Simplify by removing redundant guard
```diff
 void validate_cpu() noexcept {
-#if defined(__x86_64__) || defined(_M_X64)
-    const CpuFeature level = detect_cpu_feature();
+    const CpuFeature level = detect_cpu_feature();
 
-    if(level == CpuFeature::NONE) {
-        std::abort();
-    }
-#else
-#error Unsupported architecture.
-#endif
+    if (level == CpuFeature::NONE) {
+        std::abort();
+    }
 }
```
Verify each finding against the current code and only fix it if needed. In `@platform/cpu/cpu_info_x64.cpp` around lines 90 - 100, The inner architecture preprocessor guard inside validate_cpu() is redundant; remove the `#if/`#else/#endif block so validate_cpu() simply calls detect_cpu_feature(), checks if the returned level equals CpuFeature::NONE and calls std::abort() on failure. Update validate_cpu() to be unconditional (no architecture checks) and leave the existing compile-time guard at the top of the file intact; use the symbols validate_cpu(), detect_cpu_feature(), and CpuFeature::NONE to locate and modify the code.platform/cpu/cpu_info.h (1)
1-13: Consider exposing a getter for the detected CPU feature level.
The `CpuFeature` enum is declared in the public API, but based on the implementations in `cpu_info_x64.cpp` and `cpu_info_neon.cpp`, there's no public function to query the detected feature. The `validate_cpu()` function only aborts on failure and returns `void`. This makes the enum effectively unusable by external callers. If callers need to query runtime SIMD capabilities (e.g., to select code paths), consider adding:

```cpp
CpuFeature get_cpu_feature() noexcept;
```

Otherwise, if the enum is purely internal, it could be moved into an implementation-only header or anonymous namespace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform/cpu/cpu_info.h` around lines 1 - 13, The public API exposes enum CpuFeature but offers no way to read the detected feature; add a noexcept getter declaration CpuFeature get_cpu_feature() noexcept to the header and implement it in the platform-specific sources (cpu_info_x64.cpp / cpu_info_neon.cpp) to return the runtime-detected feature used by validate_cpu(), or if you intend CpuFeature to be internal, move the enum into the implementation files (or an impl-only header) and keep validate_cpu() as the only public function—choose one approach and update the header accordingly so external callers can query CPU capabilities via get_cpu_feature() or the enum is hidden entirely.core/math/constants.cppm (2)
7-8: Constant names may be misleading.
`SQRT12` and `SQRT13` suggest √12 and √13, but they're actually 1/√2 and 1/√3 (reciprocal square roots). Consider more explicit names like `INV_SQRT2`/`INV_SQRT3` or `RSQRT2`/`RSQRT3`.
Verify each finding against the current code and only fix it if needed. In `@core/math/constants.cppm` around lines 7 - 8, The constants SQRT12 and SQRT13 are misnamed (they imply √12/√13) but actually hold 1/√2 and 1/√3; rename them to clear identifiers such as INV_SQRT2 and INV_SQRT3 (or RSQRT2/RSQRT3) in core/math/constants.cppm and update every usage site to the new names (search for SQRT12 and SQRT13 and replace references), and ensure any corresponding documentation or tests are updated to reflect the new names.
18-19: Constant names suggest maximum values but hold reciprocals.
`UINT32_MAX_D` and `UINT32_MAX_F` suggest they contain the max value, but they actually hold `1.0 / UINT32_MAX`. Consider names like `INV_UINT32_MAX_D` or `UINT32_MAX_RECIPROCAL` to avoid confusion.
Verify each finding against the current code and only fix it if needed. In `@core/math/constants.cppm` around lines 18 - 19, The constants UINT32_MAX_D and UINT32_MAX_F currently store the reciprocal of std::numeric_limits<std::uint32_t>::max(); rename them to a clearer identifier (e.g., INV_UINT32_MAX_D and INV_UINT32_MAX_F or UINT32_MAX_RECIPROCAL_D/F) and update all references/usages to the new names (search for UINT32_MAX_D and UINT32_MAX_F), and adjust any accompanying comments or documentation to state they are the reciprocal (1.0 / UINT32_MAX) rather than the max value itself.core/definitions/version.cppm (1)
1-3: Re-exporting entire `std` module is heavyweight.
`export import std;` exposes the entire standard library to all consumers of `core.version`. This increases compile times and pollutes the namespace. Consider importing `std` without exporting, and only exporting the specific formatter specialization.
♻️ Suggested refinement
```diff
 export module core.version;
-export import std;
-import std.compat;
+import std;
```
namespace stdand consumers will have their ownimport std;.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/definitions/version.cppm` around lines 1 - 3, Replace the heavyweight re-export of the entire standard library by removing "export import std;" from module core.version and instead use "import std;" locally; then explicitly export only the formatter specialization (the std::formatter specialization for your Version type, e.g., std::formatter<core::Version> or formatter<Version>) so consumers get the formatting customization but do not inherit the full std namespace; update the module to keep the formatter specialization declaration/definition in namespace std while ensuring only that specialization is exported from core.version.core/math/math.cppm (1)
44-47: Missing `noexcept` for consistency.
`rad_to_deg` lacks `noexcept` while `deg_to_rad` (line 40) has it. Both perform the same constexpr arithmetic operations.
♻️ Proposed fix
```diff
 template <std::floating_point T>
-constexpr T rad_to_deg(T y) {
+constexpr T rad_to_deg(T y) noexcept {
     return y * (T{180.} / std::numbers::pi_v<T>);
 }
```
Verify each finding against the current code and only fix it if needed. In `@core/math/math.cppm` around lines 44 - 47, The function rad_to_deg lacks noexcept while its twin deg_to_rad has it; update the rad_to_deg declaration (template <std::floating_point T> constexpr T rad_to_deg(T y)) to add noexcept so its signature and exception-safety are consistent with deg_to_rad, keeping the same constexpr arithmetic body that multiplies by (T{180.} / std::numbers::pi_v<T>).CMakePresets.json (1)
26-26: Disabling `_FORTIFY_SOURCE` weakens runtime buffer overflow protection.
The `-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0` flags disable glibc's buffer overflow detection. If this is required for C++ module compatibility, consider documenting the reason.
Verify each finding against the current code and only fix it if needed. In `@CMakePresets.json` at line 26, The preset currently sets CMAKE_CXX_FLAGS_INIT to "-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0", which disables glibc's _FORTIFY_SOURCE protections; either remove these flags from CMAKE_CXX_FLAGS_INIT to restore runtime buffer overflow checks or, if they are absolutely required for C++ module compatibility, make the change explicit and safe by: (a) gating the flags behind a clearly named option/variable (e.g., ENABLE_MODULE_COMPAT) and only appending them when that option is enabled, and (b) adding a short documented rationale in the preset (or adjacent project docs) explaining why _FORTIFY_SOURCE must be unset; reference the CMAKE_CXX_FLAGS_INIT setting and the _FORTIFY_SOURCE symbols when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmake/compiler.cmake`:
- Around line 18-21: The current CMake snippet adds x86-specific flags (-mavx2
-mfma) for any GNU/Clang compiler which breaks ARM64; update the guard to only
add those flags when the target architecture is x86 (e.g. check
CMAKE_SYSTEM_PROCESSOR or a similar CMake architecture variable for
"x86_64|amd64|i386|i686") in addition to CMAKE_CXX_COMPILER_ID, so
add_compile_options(-mavx2 -mfma) only runs on x86 targets; reference the
existing ARCH_X64 / ARCH_ARM64 distinction from platform/simd.h to keep behavior
consistent across build and source-level architecture detection.
In `@cmake/modules.cmake`:
- Around line 5-10: The CMake file uses CMAKE_SOURCE_DIR which breaks when this
project is included as a subdirectory; update all occurrences to
PROJECT_SOURCE_DIR so paths are relative to this project: change the add_library
invocation that currently references CMAKE_SOURCE_DIR for boost/ut_main.cpp, the
target_sources call (BASE_DIRS and FILES entries referencing
CMAKE_SOURCE_DIR/boost and ut.cppm), and the target_include_directories call
further down to use PROJECT_SOURCE_DIR instead of CMAKE_SOURCE_DIR.
- Around line 30-33: Update the three file(GLOB ...) invocations that populate
CPP_MODULE_FILES, CPP_UNIT_TESTS, and CPP_SRC_FILES to include the
CONFIGURE_DEPENDS option and use the absolute variable ${LIB_DIR} instead of
${LIB_PATH}; specifically change the file(GLOB calls that reference
${LIB_PATH}/*.cppm, ${LIB_PATH}/*.test.cpp, and ${LIB_PATH}/*.cpp) so they use
${LIB_DIR} and add CONFIGURE_DEPENDS to ensure CMake reconfigures when new
source files are added.
- Around line 83-84: Replace the ad-hoc string(REPLACE "/" "_") normalization in
target_link_modules with the same MAKE_C_IDENTIFIER invocation used by
add_modules_library so module names with '-' or '.' become valid C identifiers
(i.e., compute TARGET by calling MAKE_C_IDENTIFIER on NAME rather than only
replacing "/"); after computing TARGET, check for the existence of the target
(e.g., using if(TARGET ... or if(TARGET_EXISTS ...) and emit a clear
message/fail early) before calling target_link_libraries(${ARGV0} PUBLIC
${TARGET}); apply the same change to the second occurrence at the other block
(lines analogous to 91-92).
In `@CMakeLists.txt`:
- Line 6: Replace the unconditional call to enable_testing() with include(CTest)
so CMake defines the standard BUILD_TESTING option, and update any checks using
the undefined CMAKE_TESTING_ENABLED in cmake/modules.cmake to use BUILD_TESTING
instead (search for enable_testing(), include(CTest), CMAKE_TESTING_ENABLED,
BUILD_TESTING and change logic that gates test target creation to
if(BUILD_TESTING) so tests are enabled/disabled via -DBUILD_TESTING).
In `@CMakePresets.json`:
- Line 21: The cmakeMinimumRequired in CMakePresets.json is set to 4.0.2 but the
UUID "d0edc3af-4c50-42ea-a356-e2862fe7a444" requires CMake ≥4.0.3; update the
cmakeMinimumRequired value in CMakePresets.json to 4.0.3 (or to 4.2 if you
prefer to match CMakeLists.txt) so the preset UUID requirement and declared
minimum CMake version are consistent.
In `@core/math/vector4.cppm`:
- Line 35: Add a static_assert that the Vector4 type is standard-layout before
using pointer arithmetic (&x)[i]; specifically, include <type_traits> if
necessary and add something like
static_assert(std::is_standard_layout<Vector4>::value, "Vector4 must be
standard-layout for (&x)[i] pointer arithmetic") adjacent to the existing
trivial/constexpr assertions (the one around line 97) so the runtime branch
using (&x)[i] is well-defined for members x, y, z, w.
---
Duplicate comments:
In `@core/math/math.test.cpp`:
- Around line 7-11: The test "pow"_test currently uses exact equality
(expect(result == expected)) to compare draco::math::pow(2., .5) against
std::numbers::sqrt2_v<double>, which is fragile for floating-point; change the
assertion to an epsilon-based comparison (e.g., compute std::abs(result -
expected) and assert it is less than a small epsilon like 1e-12) to tolerate
rounding differences, updating the assertion in the "pow"_test to use this
approximate check for draco::math::pow and expected.
In `@core/math/vector4.cppm`:
- Around line 142-144: The ARCH_ARM64 preprocessor branch in vector4.cppm
currently contains only a TODO and no return, causing ARM64 builds to fail;
replace the TODO with a scalar-fallback implementation that mirrors the non-SIMD
path (perform the same element-wise calculations and return the resulting
Vector4), ensuring the branch returns the correct Vector4 value; specifically
update the code guarded by `#elif` ARCH_ARM64 to call the same scalar logic used
in the `#else` branch (or delegate to the scalar helper function if one exists) so
the function compiles and behaves identically until a NEON SIMD implementation
is added.
---
Nitpick comments:
In `@CMakePresets.json`:
- Line 26: The preset currently sets CMAKE_CXX_FLAGS_INIT to "-U_FORTIFY_SOURCE
-D_FORTIFY_SOURCE=0", which disables glibc's _FORTIFY_SOURCE protections; either
remove these flags from CMAKE_CXX_FLAGS_INIT to restore runtime buffer overflow
checks or, if they are absolutely required for C++ module compatibility, make
the change explicit and safe by: (a) gating the flags behind a clearly named
option/variable (e.g., ENABLE_MODULE_COMPAT) and only appending them when that
option is enabled, and (b) adding a short documented rationale in the preset (or
adjacent project docs) explaining why _FORTIFY_SOURCE must be unset; reference
the CMAKE_CXX_FLAGS_INIT setting and the _FORTIFY_SOURCE symbols when making the
change.
In `@core/definitions/version.cppm`:
- Around line 1-3: Replace the heavyweight re-export of the entire standard
library by removing "export import std;" from module core.version and instead
use "import std;" locally; then explicitly export only the formatter
specialization (the std::formatter specialization for your Version type, e.g.,
std::formatter<core::Version> or formatter<Version>) so consumers get the
formatting customization but do not inherit the full std namespace; update the
module to keep the formatter specialization declaration/definition in namespace
std while ensuring only that specialization is exported from core.version.
In `@core/math/constants.cppm`:
- Around line 7-8: The constants SQRT12 and SQRT13 are misnamed (they imply
√12/√13) but actually hold 1/√2 and 1/√3; rename them to clear identifiers such
as INV_SQRT2 and INV_SQRT3 (or RSQRT2/RSQRT3) in core/math/constants.cppm and
update every usage site to the new names (search for SQRT12 and SQRT13 and
replace references), and ensure any corresponding documentation or tests are
updated to reflect the new names.
- Around line 18-19: The constants UINT32_MAX_D and UINT32_MAX_F currently store
the reciprocal of std::numeric_limits<std::uint32_t>::max(); rename them to a
clearer identifier (e.g., INV_UINT32_MAX_D and INV_UINT32_MAX_F or
UINT32_MAX_RECIPROCAL_D/F) and update all references/usages to the new names
(search for UINT32_MAX_D and UINT32_MAX_F), and adjust any accompanying comments
or documentation to state they are the reciprocal (1.0 / UINT32_MAX) rather than
the max value itself.
In `@core/math/math.cppm`:
- Around line 44-47: The function rad_to_deg lacks noexcept while its twin
deg_to_rad has it; update the rad_to_deg declaration (template
<std::floating_point T> constexpr T rad_to_deg(T y)) to add noexcept so its
signature and exception-safety are consistent with deg_to_rad, keeping the same
constexpr arithmetic body that multiplies by (T{180.} / std::numbers::pi_v<T>).
In `@platform/cpu/cpu_info_x64.cpp`:
- Around line 90-100: The inner architecture preprocessor guard inside
validate_cpu() is redundant; remove the `#if/`#else/#endif block so validate_cpu()
simply calls detect_cpu_feature(), checks if the returned level equals
CpuFeature::NONE and calls std::abort() on failure. Update validate_cpu() to be
unconditional (no architecture checks) and leave the existing compile-time guard
at the top of the file intact; use the symbols validate_cpu(),
detect_cpu_feature(), and CpuFeature::NONE to locate and modify the code.
In `@platform/cpu/cpu_info.h`:
- Around line 1-13: The public API exposes enum CpuFeature but offers no way to
read the detected feature; add a noexcept getter declaration CpuFeature
get_cpu_feature() noexcept to the header and implement it in the
platform-specific sources (cpu_info_x64.cpp / cpu_info_neon.cpp) to return the
runtime-detected feature used by validate_cpu(), or if you intend CpuFeature to
be internal, move the enum into the implementation files (or an impl-only
header) and keep validate_cpu() as the only public function—choose one approach
and update the header accordingly so external callers can query CPU capabilities
via get_cpu_feature() or the enum is hidden entirely.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 29dc764c-4ab5-4444-a973-e3a94ea1976b
📒 Files selected for processing (21)
- .gitignore
- CMakeGraphVizOptions.cmake
- CMakeLists.txt
- CMakePresets.json
- boost/ut.cppm
- boost/ut.hpp
- boost/ut_main.cpp
- cmake/compiler.cmake
- cmake/modules.cmake
- core/CMakeLists.txt
- core/core.cppm
- core/definitions/definitions.cppm
- core/definitions/version.cppm
- core/math/constants.cppm
- core/math/math.cppm
- core/math/math.test.cpp
- core/math/vector4.cppm
- platform/cpu/cpu_info.h
- platform/cpu/cpu_info_neon.cpp
- platform/cpu/cpu_info_x64.cpp
- platform/simd.h
🚧 Files skipped from review as they are similar to previous changes (7)
- core/CMakeLists.txt
- core/definitions/definitions.cppm
- boost/ut_main.cpp
- platform/simd.h
- .gitignore
- boost/ut.cppm
- core/core.cppm
Actionable comments posted: 2
♻️ Duplicate comments (1)
core/math/vector4.cppm (1)
26-35: ⚠️ Potential issue | 🔴 Critical — Replace runtime pointer arithmetic with safe index access.
Lines 35 and 47 use `(&x)[i]` to access array elements, which performs pointer arithmetic over a non-array struct member. While Vector4 is standard layout and this works in practice, it's not guaranteed safe by C++ semantics. Additionally, the `if consteval` branch is a runtime condition that does not prevent the undefined behavior path.
The proposed fix uses a unified switch statement that works both at compile-time and runtime, eliminating the inconsistency where invalid indices silently map to `w` in compile-time evaluation but invoke undefined behavior at runtime.
Proposed fix

```diff
 constexpr float& operator[](const int i) noexcept {
-    if consteval {
-        switch (i) {
-        case 0: return x;
-        case 1: return y;
-        case 2: return z;
-        default:
-        case 3: return w;
-        }
-    } else { return (&x)[i]; }
+    switch (i) {
+    case 0: return x;
+    case 1: return y;
+    case 2: return z;
+    case 3: return w;
+    default: std::unreachable();
+    }
 }

 constexpr const float& operator[](const int i) const noexcept {
-    if consteval {
-        switch (i) {
-        case 0: return x;
-        case 1: return y;
-        case 2: return z;
-        default:
-        case 3: return w;
-        }
-    } else { return (&x)[i]; }
+    switch (i) {
+    case 0: return x;
+    case 1: return y;
+    case 2: return z;
+    case 3: return w;
+    default: std::unreachable();
+    }
 }
```

Also applies to: 38-47
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/math/vector4.cppm` around lines 26 - 35, The operator[] implementation in Vector4 (constexpr float& operator[](const int i) noexcept and the const overload) must avoid the unsafe pointer arithmetic (&x)[i]; replace both the consteval branch and the runtime branch with a single switch on i that returns x/y/z/w for cases 0..3 and handles the default case consistently (e.g., return w or assert/throw if you prefer) so the same safe indexing is used at compile- and run-time; update both operator[](int) and the const operator[](int) overload in vector4.cppm to use this unified switch-based access.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/math/math.cppm`:
- Around line 27-33: The abs template can invoke undefined behavior for signed
integral minimum values; fix by restricting the template so signed integral
types are excluded: update the check in the constexpr branch for
template<arithmetic T> constexpr T abs(T value) noexcept to only handle
std::floating_point<T> || std::unsigned_integral<T>, and add a clear fallback
(e.g., static_assert or overload) for std::signed_integral<T> that directs
callers to use std::abs or a safe alternative; reference std::numeric_limits and
std::make_unsigned_t in comments if you keep a note about handling min values.
In `@core/math/vector4.cppm`:
- Around line 136-140: The dot implementation in vector4.cppm uses the SSE3
intrinsic _mm_movehdup_ps inside the ARCH_X64 block without verifying SSE3
support; update the dot code path (the function using __m128
shuf/_mm_movehdup_ps and sum calculations) to either guard the SSE3-specific
sequence with a compile-time check like `#if` defined(__SSE3__) (and fall back to
an SSE2-only sequence) or replace the sequence with an SSE2-compatible
shuffle/add implementation; ensure the fallback computes the same horizontal sum
result so targets without SSE3 still use a correct SSE2 path.
---
Duplicate comments:
In `@core/math/vector4.cppm`:
- Around line 26-35: The operator[] implementation in Vector4 (constexpr float&
operator[](const int i) noexcept and the const overload) must avoid the unsafe
pointer arithmetic (&x)[i]; replace both the consteval branch and the runtime
branch with a single switch on i that returns x/y/z/w for cases 0..3 and handles
the default case consistently (e.g., return w or assert/throw if you prefer) so
the same safe indexing is used at compile- and run-time; update both
operator[](int) and the const operator[](int) overload in vector4.cppm to use
this unified switch-based access.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 28d906ce-37a0-43c3-be25-9a37b81cef62
📒 Files selected for processing (4)
- CMakeLists.txt
- cmake/modules.cmake
- core/math/math.cppm
- core/math/vector4.cppm
🚧 Files skipped from review as they are similar to previous changes (2)
- cmake/modules.cmake
- CMakeLists.txt
```cpp
__m128 shuf = _mm_movehdup_ps(m);
__m128 sum  = _mm_add_ps(m, shuf);

shuf = _mm_movehl_ps(shuf, sum);
sum  = _mm_add_ss(sum, shuf);
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Confirm SSE3-only intrinsic usage.
rg -n -C2 '_mm_movehdup_ps|_mm_hadd_ps|_mm_dp_ps' -g '**/*.cppm' -g '**/*.cpp'
# 2) Check compile flags/toolchain settings for explicit SSE3+ enablement.
rg -n -C2 'target_compile_options|target_compile_features|CMAKE_CXX_FLAGS|/arch:|-msse3|-march=' \
-g '**/CMakeLists.txt' -g '**/*.cmake' -g 'CMakePresets.json'Repository: Redot-Engine/DraconicEngine
Length of output: 1596
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Read the vector4.cppm file around the problem area to see preprocessor guards
head -n 160 core/math/vector4.cppm | tail -n 40Repository: Redot-Engine/DraconicEngine
Length of output: 1468
x64 dot path relies on ARCH_X64 guard but lacks explicit SSE3 feature verification.
Line 136 uses _mm_movehdup_ps (SSE3) within the #if ARCH_X64 block. While the code is architecturally gated to x64, there's no explicit verification that SSE3 is available on the target system. SSE3 is not guaranteed on all x64 targets—only SSE2 is part of the baseline x64 ABI.
To ensure portability, either (a) enforce SSE3 support via compiler flags (-msse3 or equivalent), or (b) replace with SSE2-only sequences for baseline x64 compatibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@core/math/vector4.cppm` around lines 136 - 140, The dot implementation in
vector4.cppm uses the SSE3 intrinsic _mm_movehdup_ps inside the ARCH_X64 block
without verifying SSE3 support; update the dot code path (the function using
__m128 shuf/_mm_movehdup_ps and sum calculations) to either guard the
SSE3-specific sequence with a compile-time check like `#if defined(__SSE3__)` (and
fall back to an SSE2-only sequence) or replace the sequence with an
SSE2-compatible shuffle/add implementation; ensure the fallback computes the
same horizontal sum result so targets without SSE3 still use a correct SSE2
path.
We will not need a fallback here. Draconic presupposes x64 support for AVX2 and higher.
We won't need to support -msse3 as coderabbitai suggested, because the ISA is backwards compatible and supports the older instructions: if we have -mavx2 then we also have -msse{4.2,4.1,3,2, ...}. That said, we will need to make sure FMA and BMI are enabled, because they are not guaranteed with AVX2 (for some reason).
@mcdubhghlas, understood — you're correct. Since the project presupposes AVX2+, SSE3 is guaranteed via ISA backwards compatibility, so the fallback suggestion was unnecessary. I'll retract that.
Regarding FMA and BMI: looking at the cmake/compiler.cmake summary, -mavx2 -mfma are already unconditionally added, so FMA is covered. BMI1/BMI2, however, are indeed separate CPUID feature bits not implied by AVX2. If any future code paths use BMI intrinsics (e.g., _blsr_u32, _bextr_u32, _pext_u32, etc.), you'd want to add -mbmi and -mbmi2 to the compile options in cmake/compiler.cmake explicitly.
(*˘︶˘*)
✏️ Learnings added
Learnt from: mcdubhghlas
Repo: Redot-Engine/DraconicEngine PR: 1
File: core/math/vector4.cppm:136-140
Timestamp: 2026-03-17T14:51:36.674Z
Learning: In the Draconic Engine (Redot-Engine/DraconicEngine), the x64 build presupposes AVX2 and higher SIMD support. SSE3 (and all older SSE extensions) are guaranteed on any AVX2 target via ISA backwards compatibility, so no SSE3 fallback is needed. The cmake/compiler.cmake unconditionally adds `-mavx2 -mfma` compile flags, covering FMA. However, BMI1/BMI2 are NOT guaranteed with AVX2 and must be enabled separately with `-mbmi` / `-mbmi2` if BMI intrinsics are used.
Learnt from: mcdubhghlas
Repo: Redot-Engine/DraconicEngine PR: 1
File: core/math/vector4.cppm:136-140
Timestamp: 2026-03-17T14:51:36.674Z
Learning: In core/math/vector4.cppm (Redot-Engine/DraconicEngine), the `dot(Vector4, Vector4)` SSE path using `_mm_movehdup_ps` (SSE3) inside the `#if ARCH_X64` block does NOT need an SSE3 fallback. The project requires AVX2 as a baseline, which implies all SSE extensions (SSE through SSE4.2) through ISA backwards compatibility.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: Arctis-Fireblight
Repo: Redot-Engine/DraconicEngine PR: 1
File: core/math/vector4.cppm:142-144
Timestamp: 2026-03-16T02:08:35.440Z
Learning: In `core/math/vector4.cppm`, the `ARCH_ARM64` branch of the `dot(Vector4, Vector4)` function is an intentional placeholder. ARM64 NEON SIMD implementation is planned for a future PR. The current TODO comment is acknowledged by the maintainer (Arctis-Fireblight).
Actionable comments posted: 2
♻️ Duplicate comments (2)
engine/native/core/math/math.test.cpp (1)
7-10: ⚠️ Potential issue | 🟡 Minor — Use a tolerance in the `pow` test.

Line 10 compares a `std::pow` result to `std::numbers::sqrt2_v<double>` with `==`, which is implementation-sensitive and can fail on a correct build. Use an epsilon or Boost.UT's approximate matcher instead.

Suggested adjustment

- expect(result == expected);
+ expect(std::abs(result - expected) <= 1e-12);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/native/core/math/math.test.cpp` around lines 7 - 10, The test "pow"_test currently uses exact equality comparing draco::math::pow(2., .5) to std::numbers::sqrt2_v<double>; change the assertion to use a tolerance/approximate comparison (e.g., Boost.UT's expect( result == approx(expected) ) or compare std::abs(result - expected) < epsilon) so floating-point rounding differences don't cause spurious failures—update the assertion inside the "pow"_test to use either Boost.UT's approximate matcher or a small constexpr epsilon when comparing result and std::numbers::sqrt2_v<double>.

engine/native/core/math/math.cppm (1)
27-37:⚠️ Potential issue | 🔴 Critical
`abs()` still invokes UB for signed minimum values.

Line 32 negates `value` in place. For `std::numeric_limits<T>::min()`, that overflows and is undefined behavior; the floating-point branch also preserves `-0.0`. Split the overloads so signed integrals have explicit safe semantics instead of sharing this branch.

One safe pattern if signed-integral return types can change

- template <arithmetic T>
- constexpr T abs(T value) noexcept {
-     if constexpr (std::floating_point<T> || std::signed_integral<T>) {
-         // Manually compute abs for signed types.
-         // Also avoids potential int8_t -> int issues.
-         return value < T{0} ? -value : value;
-     } else {
-         // unsigned is always positive! :^)
-         return value;
-     }
- }
+ template <std::floating_point T>
+ constexpr T abs(T value) noexcept {
+     return value == T{0} ? T{0} : (value < T{0} ? -value : value);
+ }
+
+ template <std::unsigned_integral T>
+ constexpr T abs(T value) noexcept {
+     return value;
+ }
+
+ template <std::signed_integral T>
+ constexpr std::make_unsigned_t<T> abs(T value) noexcept {
+     using U = std::make_unsigned_t<T>;
+     return value < T{0} ? U(-(value + T{1})) + U{1} : U(value);
+ }

If the current signed return type must stay, it still needs a `min()` guard and documented behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/native/core/math/math.cppm` around lines 27 - 37, The abs template currently negates signed-minimum values causing UB and mixes floating and signed-integral semantics; split into three overloads (std::floating_point, std::signed_integral, std::unsigned_integral) and implement safe behavior: for floating_point provide the standard floating-point absolute (preserving -0.0 behavior if desired) in a separate overload; for unsigned_integral simply return the value; for signed_integral guard against std::numeric_limits<T>::min() before negation—e.g., check if value == std::numeric_limits<T>::min() and return a documented result (either return value as-is or handle via wider/unsigned conversion if you can change return type), otherwise return value < T{0} ? -value : value; reference the template function abs and the signed_integral branch when making these changes and add <limits> include and a short comment documenting the min() behavior.
🧹 Nitpick comments (1)
engine/native/core/math/constants.cppm (1)
18-19: Rename these reciprocal constants.

Lines 18-19 store 1 / max(uint32_t), but the exported names read like the raw limit values. A name like INV_UINT32_MAX_D/F would make this API much harder to misuse.

Suggested adjustment

- constexpr double UINT32_MAX_D = 1. / static_cast<double>(std::numeric_limits<std::uint32_t>::max());
- constexpr float UINT32_MAX_F = 1.f / static_cast<float>(std::numeric_limits<std::uint32_t>::max());
+ constexpr double INV_UINT32_MAX_D = 1. / static_cast<double>(std::numeric_limits<std::uint32_t>::max());
+ constexpr float INV_UINT32_MAX_F = 1.f / static_cast<float>(std::numeric_limits<std::uint32_t>::max());
Verify each finding against the current code and only fix it if needed. In `@engine/native/core/math/constants.cppm` around lines 18 - 19, The constants UINT32_MAX_D and UINT32_MAX_F are actually the reciprocals (1 / max(uint32_t)) but named as if they were the raw limits; rename these symbols to INV_UINT32_MAX_D and INV_UINT32_MAX_F (and update any corresponding exports/usages) so the API clearly indicates they are inverse values, and ensure all references to UINT32_MAX_D/UINT32_MAX_F in the codebase are updated to the new names to prevent misuse.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@engine/native/core/definitions/definitions.cppm`:
- Around line 17-27: The current is_zero_constructible inherits
std::is_trivially_constructible which does not guarantee an all-zero byte
pattern is a valid object; change the API to require an explicit opt-in trait
for zero-byte safety: add a new trait template is_all_zero_constructible<T>
(default false) and provide only safe specializations (e.g., for
std::is_integral<T>, std::is_pointer<T>, and safe enums or other types you
explicitly permit), then either rename or re-alias is_zero_constructible to
refer to is_all_zero_constructible (or deprecate the old name); update
is_zero_constructible_v and the zero_constructible concept to use the new
is_all_zero_constructible trait instead of std::is_trivially_constructible to
avoid misclassifying pointers/enums as safe to memset(0).
In `@engine/native/platform/simd.h`:
- Around line 62-76: The preprocessor conditionals in platform/simd.h use `#if
DEBUG` which breaks when DEBUG is defined without a value; change those checks
to `#ifdef DEBUG` or `#if defined(DEBUG)` so the ASSUME(...) and UNREACHABLE()
macro definitions (both debug and non-debug branches) are selected correctly;
update the two occurrences of `#if DEBUG` in the file surrounding the ASSUME and
UNREACHABLE definitions to `#ifdef DEBUG` (or `#if defined(DEBUG)`) to fix the
compile-time behavior.
---
Duplicate comments:
In `@engine/native/core/math/math.cppm`:
- Around line 27-37: The abs template currently negates signed-minimum values
causing UB and mixes floating and signed-integral semantics; split into three
overloads (std::floating_point, std::signed_integral, std::unsigned_integral)
and implement safe behavior: for floating_point provide the standard
floating-point absolute (preserving -0.0 behavior if desired) in a separate
overload; for unsigned_integral simply return the value; for signed_integral
guard against std::numeric_limits<T>::min() before negation—e.g., check if value
== std::numeric_limits<T>::min() and return a documented result (either return
value as-is or handle via wider/unsigned conversion if you can change return
type), otherwise return value < T{0} ? -value : value; reference the template
function abs and the signed_integral branch when making these changes and add
<limits> include and a short comment documenting the min() behavior.
In `@engine/native/core/math/math.test.cpp`:
- Around line 7-10: The test "pow"_test currently uses exact equality comparing
draco::math::pow(2., .5) to std::numbers::sqrt2_v<double>; change the assertion
to use a tolerance/approximate comparison (e.g., Boost.UT's expect( result ==
approx(expected) ) or compare std::abs(result - expected) < epsilon) so
floating-point rounding differences don't cause spurious failures—update the
assertion inside the "pow"_test to use either Boost.UT's approximate matcher or
a small constexpr epsilon when comparing result and
std::numbers::sqrt2_v<double>.
---
Nitpick comments:
In `@engine/native/core/math/constants.cppm`:
- Around line 18-19: The constants UINT32_MAX_D and UINT32_MAX_F are actually
the reciprocals (1 / max(uint32_t)) but named as if they were the raw limits;
rename these symbols to INV_UINT32_MAX_D and INV_UINT32_MAX_F (and update any
corresponding exports/usages) so the API clearly indicates they are inverse
values, and ensure all references to UINT32_MAX_D/UINT32_MAX_F in the codebase
are updated to the new names to prevent misuse.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9a733c83-5833-4f1f-a32d-8228beee7ba0
📒 Files selected for processing (22)
- .gitignore
- CMakeLists.txt
- CMakePresets.json
- cmake/Compiler.cmake
- cmake/Modules.cmake
- docs/index.md
- engine/native/CMakeLists.txt
- engine/native/core/CMakeLists.txt
- engine/native/core/core.cppm
- engine/native/core/definitions/definitions.cppm
- engine/native/core/definitions/version.cppm
- engine/native/core/math/constants.cppm
- engine/native/core/math/math.cppm
- engine/native/core/math/math.test.cpp
- engine/native/core/math/vector4.cppm
- engine/native/platform/cpu/cpu_info.h
- engine/native/platform/cpu/cpu_info_neon.cpp
- engine/native/platform/cpu/cpu_info_x64.cpp
- engine/native/platform/simd.h
- engine/native/thirdparty/boost/ut.cppm
- engine/native/thirdparty/boost/ut.hpp
- engine/native/thirdparty/boost/ut_main.cpp
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (2)
- CMakePresets.json
- CMakeLists.txt
0e50109 to 6800ede (Compare)
@OldDev78 I reverted the thirdparty changes, but it looks like you rebased at some point. You may need to remove your copy of the
Rebased to |
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Proposed C++ modules structure using CMake 4.22, clang-22 / gcc-15 and ninja-build.
Native libraries are declared inside directories of C++ modules, with translation units providing their implementation (i.e., C++ module interfaces are used instead of headers). C/C++ sources and headers can still be used within a module implementation, but headers are not intended for declaring any exposed library interfaces, with an exception for the upcoming C ABI.
Directories containing modules can be added as a library using the CMake
`add_modules_library` function. The CMakeLists.txt can refer to subdirectories recursively, and the sub-directories become the library targets.
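A hypothetical usage sketch (the exact signature of `add_modules_library` is defined by the project's CMake helper modules and may differ):

```cmake
# In a parent CMakeLists.txt: each module directory becomes a library target.
add_modules_library(core)        # assumed: scans core/ for .cppm interfaces
add_modules_library(platform)

# Linker dependencies are still expressed explicitly:
target_link_libraries(platform PRIVATE core)
```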
Linker dependencies still need to be expressed in CMake with the
`target_link_libraries` command.

Summary by CodeRabbit
New Features
Infrastructure
Chores