[flang] Fix compile error : on MacOS 11.5.1 (20G80), cmake version 3.21.1 #333

Status: Closed (wants to merge 1 commit)
6 changes: 5 additions & 1 deletion flang/runtime/time-intrinsic.cpp
@@ -26,6 +26,10 @@
// overload will have a dummy parameter whose type indicates whether or not it
// should be preferred. Any other parameters required for SFINAE should have
// default values provided.

// #define invalid_clock_t (-1)
Contributor:
Nit: Remove commented code.

#define INVALID_CLOCK_T (0xFF)
Contributor:
Will 0xFF work on machines that have std::clock_t as a signed integer? Will a static cast of -1 to std::clock_t work better?

Collaborator:
You definitely don't want 0xFF, as that is a "valid" clock_t value (although it would be very close to January 1st, 1970 :) ). My local patch was to use static_cast<clock_t>(-1) instead.

Collaborator:
Btw, on FreeBSD the situation is 'interesting' in the sense that clock_t is unsigned only on 32-bit architectures (probably for historical reasons), and signed on everything else. On macOS it seems that __darwin_clock_t is always just unsigned long.

Collaborator:
you might consider std::numeric_limits<std::clock_t>::max(), which will be platform independent, and pretty clear what is happening.

Collaborator:
> you might consider std::numeric_limits<std::clock_t>::max(), which will be platform independent, and pretty clear what is happening.

No, you should really use -1. The clock(3) man pages on both macOS and FreeBSD say:

RETURN VALUES
The clock() function returns the amount of time used unless an error occurs, in which case the return value is -1.

On Linux:

RETURN VALUE
The value returned is the CPU time used so far as a clock_t; to get the number of seconds used, divide by CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t) -1.

I think the Linux man page makes it a bit clearer that you have to cast -1 to clock_t.

Collaborator:

(Having said all that, it's very unlikely that you cannot retrieve the current time.... but still, better safe than sorry :) )


namespace {
// Types for the dummy parameter indicating the priority of a given overload.
// We will invoke our helper with an integer literal argument, so the overload
@@ -36,7 +40,7 @@ using preferred_implementation = int;
// This is the fallback implementation, which should work everywhere.
template <typename Unused = void> double GetCpuTime(fallback_implementation) {
std::clock_t timestamp{std::clock()};
- if (timestamp != std::clock_t{-1}) {
+ if (timestamp != std::clock_t{INVALID_CLOCK_T}) {
return static_cast<double>(timestamp) / CLOCKS_PER_SEC;
}
