SIMD Implementation #18

Open
doyubkim opened this issue Jun 12, 2016 · 7 comments

@doyubkim
Owner

Achieve better performance with a SIMD implementation, either by directly implementing SIMD operations or by utilizing Intel ISPC.

@doyubkim doyubkim added this to the Jet v1.x milestone Jun 12, 2016
@doyubkim doyubkim self-assigned this Jun 12, 2016
@doyubkim doyubkim removed this from the Jet v.Next milestone Oct 10, 2017
@doyubkim
Owner Author

doyubkim commented Jan 2, 2018

Another potential solution: https://github.com/ospray/tsimd

@subatomicglue

subatomicglue commented Jan 22, 2018

There is the Bullet Physics math library, which has SIMD extensions: https://github.com/bulletphysics/bullet3/tree/master/src/LinearMath

Older versions of Bullet Physics used the "Sony Vector Math Library", but it looks like Bullet consolidated it into what is linked above. Similar goals, though; I see lots of SSE.

The Sony Vector Math lib is here:
https://github.com/erwincoumans/sce_vectormath

It supports PPC (ppu/ directory), Intel SSE (SSE/ directory), the SPU chip in the PS3 (spu/ directory), and a CPU-only scalar version (scalar/cpp).

@doyubkim
Owner Author

Thanks for the resources! I will take a look.

@subatomicglue

subatomicglue commented Jan 23, 2018

The Sony one is very similar to the ones I've used on Xbox 360 and PlayStation 3 in AAA titles. We are fortunate that Sony open-sourced it. It's pretty rare to find a production-quality, cross-platform math lib with native vector intrinsics / SoA.

There was a post around 2015 when Bullet Physics dropped the Sony library. I haven't compared the two math libs, but maybe the Bullet team had a reason, or maybe the Bullet math is more tailored to Bullet's needs.

@giordi91

giordi91 commented Mar 5, 2018

Often much of the speedup from SIMD doesn't come from vectorizing individual vector operations. It's true that a 3D/4D vector maps to a SIMD register nicely, but the operations often don't: cross and dot products aren't especially SIMD friendly. The real speedups come from processing 4-8 particles at a time, and an SoA data layout helps a lot there. I would expect particles to map quite well to that approach. If you know more about this, I'd love to hear it.

M
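
For concreteness, here is a minimal sketch of the 4-at-a-time SoA idea using SSE intrinsics. The particle layout, the field names, and the integration step are illustrative assumptions, not code from Jet:

    #include <immintrin.h>   // SSE intrinsics
    #include <cstddef>

    // Structure-of-arrays particle storage: each component is contiguous in
    // memory, so four particles load straight into one SSE register.
    struct ParticlesSoA {
        float* x; float* y; float* z;      // positions
        float* vx; float* vy; float* vz;   // velocities
        std::size_t count;                 // assumed multiple of 4 for brevity
    };

    // Advance positions by velocity * dt, four particles per iteration.
    void integratePositions(ParticlesSoA& p, float dt) {
        const __m128 vdt = _mm_set1_ps(dt);
        for (std::size_t i = 0; i < p.count; i += 4) {
            __m128 px = _mm_add_ps(_mm_loadu_ps(p.x + i),
                                   _mm_mul_ps(_mm_loadu_ps(p.vx + i), vdt));
            __m128 py = _mm_add_ps(_mm_loadu_ps(p.y + i),
                                   _mm_mul_ps(_mm_loadu_ps(p.vy + i), vdt));
            __m128 pz = _mm_add_ps(_mm_loadu_ps(p.z + i),
                                   _mm_mul_ps(_mm_loadu_ps(p.vz + i), vdt));
            _mm_storeu_ps(p.x + i, px);
            _mm_storeu_ps(p.y + i, py);
            _mm_storeu_ps(p.z + i, pz);
        }
    }

A real kernel would also handle the remainder when the particle count isn't a multiple of 4, but the point is that the SoA layout makes the loads and stores trivial.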

@subatomicglue

subatomicglue commented Mar 5, 2018

He's right; you definitely have to be careful. Think of scalar floating point and vector math as running on separate units inside the CPU (the FPU and the vector unit): if you use results from one in the other, you suffer a penalty (called a load-hit-store), which is basically some execution latency from a CPU pipeline stall while the data is marshaled over to the other unit.

The strategy with SIMD is to keep the work on the vector unit: when a scalar is needed in a calculation, use a SIMD "scalar" type, where you're basically only using the X component of the 4-vector. There are certainly cases where you can do clever things to process 4 particles at once, one in each of the X/Y/Z/W components, to get a 4x speedup as giordi91 says.
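
A small sketch of the "keep the scalar on the vector unit" idea with SSE intrinsics; the helper name and the broadcast-then-multiply pattern are just illustrative:

    #include <immintrin.h>

    // The "scalar" stays in an SSE register: broadcast lane 0 across all four
    // lanes, then multiply, so the value never has to leave the vector unit.
    static inline __m128 scaleByLane0(__m128 v, __m128 scalarInLane0) {
        __m128 s = _mm_shuffle_ps(scalarInLane0, scalarInLane0,
                                  _MM_SHUFFLE(0, 0, 0, 0));
        return _mm_mul_ps(v, s);
    }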

Executing math functions sequentially over large, memory-coherent arrays of vectors is the best case, as giordi91 says.

Another speedup tip is to avoid conditionals. Sure, branch prediction is fast, but no conditional at all is even faster. Often you can use a SIMD "boolean" (just a floating-point 0 or 1) that you multiply into a very basic equation:

    result = pickMe * myBoolean + orPickMe * (1 - myBoolean);

When myBoolean is 1, you get pickMe, and when myBoolean is 0, you get orPickMe...
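
In intrinsics, the same select usually shows up as a compare that produces a mask plus a blend. A minimal sketch, assuming SSE4.1 for _mm_blendv_ps (the comparison used to build the mask is just an example):

    #include <immintrin.h>   // SSE4.1 for _mm_blendv_ps

    // Branchless per-lane select: result = (a < b) ? pickMe : orPickMe.
    static inline __m128 selectLess(__m128 a, __m128 b,
                                    __m128 pickMe, __m128 orPickMe) {
        __m128 mask = _mm_cmplt_ps(a, b);              // all-ones where a < b
        return _mm_blendv_ps(orPickMe, pickMe, mask);  // takes pickMe where mask is set
    }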

And my final piece of advice is to take advantage of the CPU's parallel pipelining. Interleave non-dependent operations so that while one of the execution pipelines in your CPU is working on one operation, you keep the other pipelines full:

    Vec4 someResult1 = a.someMath();
    Vec4 someResult2 = b.someMath();   // can issue into the CPU 'while' the 1st one is running
    Vec4 someResult3 = c.someMath();   // can issue into the CPU 'while' the first two are running
    /* use someResult1 */
    /* use someResult2 */
    /* use someResult3 */

You'd think the compiler would be smart enough, and sometimes it is. But sometimes it isn't... I've seen gains just by rearranging the order of my code, so that tells me this is worth knowing about.

@doyubkim
Owner Author

doyubkim commented Mar 6, 2018

Thanks for the great input, @giordi91 and @subatomicglue!

I haven't spent much time on this topic lately, but as @giordi91 mentioned, I also think batch processing of particles (4~8 in a bundle) would be better. That becomes a bit tricky for the SPH operators, though, since they are essentially for each neighbor { ... } loops and are likely to involve unordered, random neighbor access.

Grid-based/hybrid simulations could be a bit more straightforward than the SPH solvers. The main perf bottleneck there is the pressure Poisson solver, which is basically a combination of BLAS-like calls (mat x vec, axpy, and similar). So I think vectorizing the Fdm* solvers could bring a meaningful perf improvement. Actually, it would be great to see some contributions in this area, since I'm mostly focusing on GPGPU at the moment.
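
For reference, a minimal SSE sketch of the axpy-style kernel mentioned above (y = a*x + y); the function name and the loop structure are illustrative, not the actual Fdm* code:

    #include <immintrin.h>
    #include <cstddef>

    // y[i] += a * x[i]; n is assumed to be a multiple of 4 to keep the sketch
    // short (a real kernel would handle the tail and alignment).
    void axpy(float a, const float* x, float* y, std::size_t n) {
        const __m128 va = _mm_set1_ps(a);
        for (std::size_t i = 0; i < n; i += 4) {
            __m128 vy = _mm_loadu_ps(y + i);
            vy = _mm_add_ps(vy, _mm_mul_ps(va, _mm_loadu_ps(x + i)));
            _mm_storeu_ps(y + i, vy);
        }
    }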
