
Update C++ Standard from 14 to 17 #8041

Merged
snnn merged 42 commits into master from snnn/cxx2 on Jun 25, 2021

Conversation

@snnn
Contributor

@snnn snnn commented Jun 11, 2021

Description:

Switched the codebase to C++17. To build ONNX Runtime on old distros like CentOS 7, you need to install a newer GCC from additional repos. If you build onnxruntime with that newer GCC, the resulting binary typically can't be distributed to other machines, because it depends on the new GCC's runtime libraries, which the stock OS doesn't have. On RHEL/CentOS the situation is better: we build our code with Red Hat devtoolset 8/9/10 on CentOS 7, and the new library features (like std::filesystem) that don't exist in the old C++ runtime are statically linked into the applications, with some restrictions:

  1. GCC has a dual ABI, but we can only use the old one. This means std::string is still copy-on-write and std::list::size() is still O(n). Also, if you build onnxruntime on CentOS 7 and link it with binaries that were built on CentOS 8 or Ubuntu with the new ABI, and C++ symbols are exported directly (instead of going through a C API), it won't work.

  2. We still can't use std::optional. This is a limitation coming from macOS; we will resolve it when we get macOS 11 build machines, which shouldn't take long.

  3. Because CUDA 10.2 doesn't support C++17 and we still need to generate CUDA 10.2 packages, please avoid using C++17 features in CUDA files (*.cu) and in the *.h files they include (like core/framework/float16.h). You are welcome to use the new features in any *.cc files.

Also, this change updates the minimum supported macOS version to 10.13 and fixes some compile warnings.


@snnn snnn requested a review from a team as a code owner June 11, 2021 22:41

target_compile_definitions(${target_name} PUBLIC -DNSYNC_ATOMIC_CPP11)
if(APPLE)
target_compile_definitions(${target_name} PRIVATE -Doptional_CONFIG_SELECT_OPTIONAL=1)
Contributor

@edgchen1 edgchen1 Jun 11, 2021


we can use the value optional_OPTIONAL_NONSTD, according to the docs: https://github.com/martinmoene/optional-lite#select-stdoptional-or-nonstdoptional

also, is ORT's optional.h a public header? include/onnxruntime/core/common/optional.h
if it may be included by external code, that code would need to configure it the same way. should it be a PUBLIC compile definition? #Closed

Contributor


should it be PUBLIC instead of PRIVATE?

Contributor Author


Done.
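The resolved suggestion can be sketched in CMake. This is illustrative, not the exact diff: since include/onnxruntime/core/common/optional.h ships to consumers, the optional-lite selection macro must propagate to them, and `optional_OPTIONAL_NONSTD` is the named value from the optional-lite docs linked above:

```cmake
# Sketch: PUBLIC (not PRIVATE) so external code that includes optional.h
# configures optional-lite the same way as the library itself.
target_compile_definitions(${target_name}
    PUBLIC -Doptional_CONFIG_SELECT_OPTIONAL=optional_OPTIONAL_NONSTD)
```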

char* hostname = nullptr;
if (-1 == _dupenv_s(&hostname, &hostname_len, "COMPUTERNAME"))
hostname = "?";
hostname = const_cast<char*>("?");
Contributor

@edgchen1 edgchen1 Jun 12, 2021


can we avoid const_cast? #Closed

Contributor Author


I would prefer to leave it as a TODO, since it is not related to the C++14-to-17 upgrade and this PR is already large.

Contributor


ok. TODO comment?

@snnn snnn requested a review from edgchen1 June 25, 2021 03:44
Changming Sun added 2 commits June 24, 2021 20:48
#if ((__cplusplus >= 201703L) || (defined(_MSVC_LANG) && (_MSVC_LANG >= 201703L)))
#define IF_CONSTEXPR if constexpr
#else
#define IF_CONSTEXPR if
Contributor

@edgchen1 edgchen1 Jun 25, 2021


IF_CONSTEXPR

why do we need a macro for this? #Closed

Contributor Author

@snnn snnn Jun 25, 2021


Because CUDA 10.2 doesn't support C++17 and we still need to generate CUDA 10.2 packages, we have to avoid C++17 in CUDA files (*.cu) and in the *.h files they include (like core/framework/float16.h). In those *.h files we need such a macro until we retire the CUDA 10.2 pipelines.

Contributor


ok, can we name it ORT_IF_CONSTEXPR?

Contributor Author


Done.


// otherwise, not contiguous
return nullopt;
return std::nullopt;
Contributor

@edgchen1 edgchen1 Jun 25, 2021


std::nullopt;

using std::nullopt with onnxruntime::optional here, can we use std or onnxruntime consistently? #Closed

Contributor Author


Done.


const auto type_constraints = BuildKernelDefConstraintsFromTypeList<DataTypes>();
const auto enabled_type_constraints = BuildKernelDefConstraintsFromTypeList<EnabledDataTypes>();
#define type_constraints (BuildKernelDefConstraintsFromTypeList<DataTypes>())
Contributor

@edgchen1 edgchen1 Jun 25, 2021


type_constraints

replace at call site instead of macro #Closed

Contributor Author


Done.

ApplicableMatrixReduction expected_reduction,
const optional<int>& expected_m = nullopt,
const optional<int>& expected_n = nullopt) {
const optional<int>& expected_m = std::nullopt,
Contributor

@edgchen1 edgchen1 Jun 25, 2021


std::nullopt

would prefer to be consistent, e.g., nullopt
alternatively, this can be {} and it won't need to change when we move to std::optional #Closed

Contributor Author


Done.

template <typename T>
static std::vector<std::vector<T>> GetRandomValuesForMaxPool(const std::vector<TensorInfo>& infos) {
std::vector<std::vector<T>> datas(infos.size());
std::random_device rd;
Contributor

@edgchen1 edgchen1 Jun 25, 2021


std::random_device rd;

can we use generator initialized with GetTestRandomSeed()? then it's easier to reproduce failures #Closed

Contributor Author


Done.

}

static Status AllocateSparseTensor(MLValue& mlvalue, const DataTypeImpl& ml_type, AllocatorPtr allocator,
static Status AllocateSparseTensor(OrtValue& mlvalue, const DataTypeImpl& ml_type, AllocatorPtr allocator,
Contributor

@pranavsharma pranavsharma Jun 25, 2021


Since you've made this change in a number of places, should we also rename the file ml_value.h to ort_value.h? The filename doesn't match the class it houses. Can be done in a separate PR.

Contributor Author


Sure. I will do it in another PR.

inline std::string ToWideString(const std::string& s) { return s; }
#endif

#if ((__cplusplus >= 201703L) || (defined(_MSVC_LANG) && (_MSVC_LANG >= 201703L)))
Contributor


If we're moving to C++17 for good, why do we need this condition?

Contributor Author


Because CUDA 10.2 doesn't support C++17 and we still need to generate CUDA 10.2 packages, we must avoid C++17 in CUDA files (*.cu) and in the *.h files they include (like core/framework/float16.h).

Contributor


Ok. thanks. Would be good to document this somewhere.

Contributor

@edgchen1 edgchen1 left a comment


:shipit:

@snnn snnn merged commit c716b56 into master Jun 25, 2021
@snnn snnn deleted the snnn/cxx2 branch June 25, 2021 21:08
@snnn snnn mentioned this pull request Jun 27, 2021
@snnn snnn mentioned this pull request Aug 23, 2021