
speed up util::normalize_angle for stupidly-large values #3337

Merged
merged 6 commits into mapnik:master from lightmare:benchanting on Jul 12, 2018

Conversation

lightmare (Contributor)

It's pretty unlikely that real code calls this with stupidly large angles (like 100 radians), but I think these should be handled gracefully, too.

In my benchmarks the new function is

  • slightly faster on positive angles (can explain this...)
  • slightly slower on small negative angles > -15 (... but not this)
  • starts using std::remainder above 32 pi (~100 radians), where it's slower
  • no contest on angles of magnitude > 130

The point at which std::remainder is faster than subtracting 2 * pi in a loop is of course highly architecture dependent. I haven't done any real investigation -- I just ran a simple benchmark, and std::remainder started winning around 110 radians on my laptop. So I picked 100 as the point where the extra precision might be worth it -- despite the fact that my function is still slower below ~130, because it does a few subtraction loops that are thrown away if the angle is too large.
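
For reference, a minimal sketch of the hybrid strategy described above -- not the exact patch; the cutoff of 100 and the target range [0, 2*pi) are assumptions for illustration:

#include <cmath>

// sketch only: loop-subtract for moderate angles, std::remainder for huge ones
constexpr double tau = 2.0 * 3.14159265358979323846;

double normalize_angle_sketch(double angle)
{
    if (std::fabs(angle) > 100.0)
    {
        // constant-time regardless of magnitude; result lands in [-tau/2, tau/2]
        angle = std::remainder(angle, tau);
    }
    while (angle >= tau) angle -= tau;
    while (angle < 0.0) angle += tau;
    return angle;
}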

namespace mapnik { namespace util {

constexpr double pi = 3.14159265358979323846;

Member
M_PI can be used from mapnik/global.hpp

Member
M_PI is not part of standard C++, so yes, defining it in include/mapnik/util/math.hpp and removing the MSVC-related #if/def in global.hpp makes sense to me. One thing to be aware of: third-party libs use M_PI from math.h, and AGG has its own #define ... We just have to be consistent.
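
To illustrate the point about M_PI (a hedged sketch, not the actual header from the patch): M_PI is a POSIX/MSVC extension rather than standard C++, and on MSVC it typically only appears when _USE_MATH_DEFINES is defined before including <cmath> -- presumably what the #if/def in global.hpp deals with -- whereas a project constant needs no such dance.

#if defined(_MSC_VER)
#define _USE_MATH_DEFINES   // non-standard switch needed to get M_PI on MSVC
#endif
#include <cmath>

namespace mapnik { namespace util {

// self-contained constant, independent of non-standard <cmath> macros
constexpr double pi = 3.14159265358979323846;

}} // namespace mapnik::util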

This partially reverts commit 5ab6db2.

The change of argument --threads N doesn't just change output, it
changes the mode of operation.
With --threads 0, the benchmark runs wholly in the main thread.
With --threads 1, it starts 1 worker thread, which does the work, and
the main thread measures how long it takes to complete.
Make it clear in the benchmark output which mode it ran in (see the sketch after this list):
- "main thread" only
- "1 worker" thread
- "N workers"

Because it simply calculates the remainder after division by a full turn, normalize_angle shouldn't take time proportional to the magnitude of its operand.

codecov bot commented Jul 11, 2018

Codecov Report

Merging #3337 into master will decrease coverage by <.01%.
The diff coverage is 75%.

Impacted file tree graph

@@            Coverage Diff            @@
##           master   #3337      +/-   ##
=========================================
- Coverage    68.9%   68.9%   -0.01%     
=========================================
  Files         442     442              
  Lines       23259   23263       +4     
=========================================
+ Hits        16027   16029       +2     
- Misses       7232    7234       +2
Impacted Files                 Coverage Δ
include/mapnik/util/math.hpp   0% <ø> (ø) ⬆️
src/math.cpp                   77.77% <75%> (-22.23%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 25f0ef0...7c4ccf0.

lightmare merged commit 187c1df into mapnik:master on Jul 12, 2018
lightmare deleted the benchanting branch on Jul 12, 2018, 17:30
lightmare added this to the v3.1.0 milestone on Jul 12, 2018
leedejun pushed a commit to leedejun/mapnik that referenced this pull request Dec 26, 2023
speed up util::normalize_angle for large values