SIMD Implementation #18
Another potential solution: https://github.com/ospray/tsimd
There is the Bullet Physics math library, which has SIMD extensions: https://github.com/bulletphysics/bullet3/tree/master/src/LinearMath Older versions of Bullet Physics used the "Sony Vector Math Library", but it looks like Bullet consolidated it into what is above... Similar goals though; I see lots of SSE. Sony Vector Math lib here: Support for PPC (...)
Thanks for the resources! I will take a look.
The Sony one is very similar to the ones I've used on Xbox 360 and PlayStation 3 in AAA titles. We are fortunate that Sony open-sourced it. It's pretty rare to find a production-quality (cross-platform) math lib with native vector intrinsics / SoA. Here is one post from when Bullet Physics dropped the Sony library around 2015. I haven't compared the two math libs, but maybe the Bullet team had a reason, or maybe the Bullet math is more tailored to Bullet's needs...
Often much of the SIMD speedup doesn't come from the vector operations themselves. True, a vector maps to a SIMD register nicely, but the operations often do not: cross and dot products are not terribly SIMD-friendly. The really good speedups come from processing, say, 4-8 particles at a time, and in those cases an SoA data layout helps a lot. I would expect particles to map quite well to that. If you guys know more, I would love to hear it. M
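As a rough illustration of the 4-at-a-time / SoA idea (a sketch only; `ParticleSoA` and `integrate` are made-up names, not part of this codebase):

```cpp
// Hypothetical sketch: advancing particle positions 4 at a time with SSE
// intrinsics over an SoA (structure-of-arrays) layout.
#include <xmmintrin.h>
#include <cstddef>

struct ParticleSoA {
    float* px; float* py; float* pz;   // positions, each array 16-byte aligned
    float* vx; float* vy; float* vz;   // velocities, same layout
    size_t count;                      // assumed to be a multiple of 4 here
};

void integrate(ParticleSoA& p, float dt) {
    __m128 vdt = _mm_set1_ps(dt);
    for (size_t i = 0; i < p.count; i += 4) {
        // x += vx * dt for 4 particles per iteration; same for y and z.
        __m128 x  = _mm_load_ps(p.px + i);
        __m128 vx = _mm_load_ps(p.vx + i);
        _mm_store_ps(p.px + i, _mm_add_ps(x, _mm_mul_ps(vx, vdt)));

        __m128 y  = _mm_load_ps(p.py + i);
        __m128 vy = _mm_load_ps(p.vy + i);
        _mm_store_ps(p.py + i, _mm_add_ps(y, _mm_mul_ps(vy, vdt)));

        __m128 z  = _mm_load_ps(p.pz + i);
        __m128 vz = _mm_load_ps(p.vz + i);
        _mm_store_ps(p.pz + i, _mm_add_ps(z, _mm_mul_ps(vz, vdt)));
    }
}
```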
He's right, you definitely have to be careful. Think of floating point and vector math as running on separate units inside the CPU (the FPU and the vector unit); if you use results from one in the other, you suffer a penalty (called a load-hit-store): some execution latency (a CPU pipeline stall) while the CPU marshals the data over to the other unit. The strategy with SIMD is to keep the work on the vector unit; when a scalar is needed in a calculation, use a SIMD "scalar" type, where you basically only use the X component of the 4-vector.

There are certainly cases where you can do clever things to process 4 particles at once in the X/Y/Z/W components and get 4x, as giordi91 says. Executing math functions sequentially over large, memory-coherent arrays of vectors is best.

Another speedup tip is to avoid conditionals. Sure, branch prediction is fast, but even faster is no conditional. Often you can have a SIMD "boolean" (just a floating-point 0 or 1) that you multiply into a very basic equation:
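A minimal sketch of that branchless idea (not from the original comment): the 0/1 "boolean" multiply is shown as an equation in the comments, and the function shows the compare-mask select that SSE intrinsics code typically uses for the same purpose.

```cpp
// Hypothetical sketch: replacing a branch with a blend.
// The idea above, as an equation with a 0/1 "boolean" b:
//     out = b * x + (1 - b) * y      // b == 1 picks x, b == 0 picks y
// With SSE intrinsics the same selection is usually done with the all-ones
// compare mask produced by _mm_cmpgt_ps and bitwise and/andnot:
#include <xmmintrin.h>

inline __m128 select_gt(__m128 a, __m128 b, __m128 x, __m128 y) {
    __m128 mask = _mm_cmpgt_ps(a, b);            // all-ones where a > b, else zero
    return _mm_or_ps(_mm_and_ps(mask, x),        // keep x where the mask is set
                     _mm_andnot_ps(mask, y));    // keep y everywhere else
}
```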
And my final piece of advice is to take advantage of the CPU's parallel pipelining. Interleave non-dependent operations so that while one of the hidden pipelines in your CPU is working on one operation, you keep the other pipelines full:
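A rough sketch of what that interleaving can look like (assuming a simple summation kernel, not code from this project): the loop keeps two independent accumulators so consecutive adds don't have to wait on each other.

```cpp
// Hypothetical sketch: two independent accumulators break the serial dependency
// chain, so the CPU can keep more than one pipeline busy per iteration.
#include <xmmintrin.h>
#include <cstddef>

float sumArray(const float* data, size_t n) {    // assumes n is a multiple of 8
    __m128 acc0 = _mm_setzero_ps();
    __m128 acc1 = _mm_setzero_ps();
    for (size_t i = 0; i < n; i += 8) {
        // These two adds do not depend on each other, so they can be in
        // flight at the same time instead of stalling back-to-back.
        acc0 = _mm_add_ps(acc0, _mm_loadu_ps(data + i));
        acc1 = _mm_add_ps(acc1, _mm_loadu_ps(data + i + 4));
    }
    __m128 acc = _mm_add_ps(acc0, acc1);
    float lanes[4];
    _mm_storeu_ps(lanes, acc);                   // horizontal reduction of 4 lanes
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}
```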
You'd think the compiler would be smart enough, and sometimes it is. But sometimes it isn't... I've seen gains just by rearranging the order of my code, so that tells me this is worth knowing about.
Thanks for the great input, @giordi91 and @subatomicglue! I haven't spent much time on this topic lately, but as @giordi91 mentioned, I also think batch processing of particles (4~8 per bundle) would be nicer. That becomes a little bit tricky when dealing with SPH operators, which are essentially ... Grid-based/hybrid simulations could be a bit more straightforward compared to the SPH solvers. The main perf bottleneck is in the pressure Poisson solver, which is basically a combination of BLAS function calls (mat x vec, axpy, and something very similar). So I think vectorizing ...
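For what it's worth, a minimal sketch of what a vectorized axpy kernel could look like with SSE intrinsics (illustrative only, not this project's actual BLAS path; the function name and the unaligned-load assumption are mine):

```cpp
// Hypothetical sketch of a vectorized axpy (y = a*x + y), the kind of BLAS-style
// kernel a pressure Poisson solver spends most of its time in.
#include <xmmintrin.h>
#include <cstddef>

void axpy(float a, const float* x, float* y, size_t n) {
    __m128 va = _mm_set1_ps(a);
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
    for (; i < n; ++i) {               // scalar tail for lengths not divisible by 4
        y[i] = a * x[i] + y[i];
    }
}
```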
Better performance using a SIMD implementation, either by directly writing SIMD operations or by utilizing Intel ISPC.