speed up util::normalize_angle for stupidly-large values #3337
Conversation
include/mapnik/util/math.hpp
namespace mapnik { namespace util {

constexpr double pi = 3.14159265358979323846;
M_PI can be used from mapnik/global.hpp
`M_PI` is not part of standard C++, so yes, defining it in include/mapnik/util/math.hpp and removing the MSVC-related #if/#def in global.hpp makes sense to me. One thing to be aware of: third-party libs use `M_PI` from math.h, and AGG has its own #define ... We just have to be consistent.
This partially reverts commit 5ab6db2. Changing the argument --threads N doesn't just change the output, it changes the mode of operation. With --threads 0, the benchmark runs wholly in the main thread. With --threads 1, it starts 1 worker thread, which does the work, and the main thread measures how long it takes to complete.
Make it clear in the benchmark output which mode it ran in:
- "main thread" only
- "1 worker" thread
- "N workers"
Because it simply computes the remainder after division by a full turn, it shouldn't take time proportional to the magnitude of its operand.
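The constant-time idea can be sketched as follows (a hypothetical `normalize_angle_remainder`, not mapnik's exact implementation): `std::remainder(x, 2*pi)` reduces any angle into [-pi, +pi] in one step, so shifting into [0, 2*pi) takes only one conditional addition, regardless of how large the input is.

```cpp
#include <cmath>

// Sketch: reduce an arbitrary angle into [0, 2*pi) in constant time.
// std::remainder(x, 2*pi) yields a value in [-pi, +pi] no matter how
// large x is, so a single conditional adjustment finishes the job.
double normalize_angle_remainder(double angle)
{
    constexpr double two_pi = 2.0 * 3.14159265358979323846;
    double r = std::remainder(angle, two_pi);
    if (r < 0.0)
        r += two_pi;
    return r;
}
```

Contrast this with the loop-based approach, which subtracts 2*pi once per turn and therefore takes time proportional to the angle's magnitude.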
Codecov Report
@@            Coverage Diff            @@
##           master   #3337     +/-   ##
=========================================
- Coverage    68.9%   68.9%   -0.01%
=========================================
  Files         442     442
  Lines       23259   23263       +4
=========================================
+ Hits        16027   16029       +2
- Misses       7232    7234       +2
Continue to review the full report at Codecov.
speed up util::normalize_angle for large values
It's pretty unlikely that real code calls this with stupidly large angles (like 100 radians), but I think these should be handled gracefully, too.
In my benchmarks the new function is
The point at which std::remainder is faster than subtracting 2 * pi in a loop is of course highly architecture-dependent. I haven't done any real investigation -- just ran a simple benchmark; std::remainder started winning around 110 radians on my laptop. So I picked 100 as the point where the extra precision might be worth it -- despite the fact that my function is still slower below ~130, because it does a few subtraction loops that are thrown away if the angle is too large.
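The hybrid strategy described above might look roughly like this (a sketch under my own assumptions -- the name, the cutoff placement, and the exact interleaving of loop and remainder differ from the actual PR, which runs a few subtraction iterations before falling back):

```cpp
#include <cmath>

// Sketch of the hybrid: for moderate angles, loop-subtracting 2*pi is
// cheap; past an arbitrary ~100 rad cutoff, std::remainder wins, so
// reduce in constant time first and let the loops do the final shift.
double normalize_angle_hybrid(double angle)
{
    constexpr double pi = 3.14159265358979323846;
    constexpr double two_pi = 2.0 * pi;
    constexpr double cutoff = 100.0;  // crossover point from the benchmark

    if (angle > cutoff || angle < -cutoff)
        angle = std::remainder(angle, two_pi);  // now in [-pi, +pi]

    while (angle >= two_pi)
        angle -= two_pi;
    while (angle < 0.0)
        angle += two_pi;
    return angle;  // in [0, 2*pi)
}
```

The trade-off matches the description: below the cutoff nothing changes, and above it the cost is constant instead of growing with the angle.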