Various examples and short exercises on bridging Python and compiled code
mylib.cpp and mylib.h - this is where the count-3D and hypersphere exercises are actually implemented. Compiling these files produces mylib.soc, which is the shared library you need in order to work with the ctypes code. My original attempt is left in mycpplib.cpp and mycpplib.h, but those files aren't used here. To get everything working in the Jupyter notebook, open the .soc file, not the .so file.
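A minimal sketch of how the library gets loaded with ctypes; the function name count_inside_hypersphere and its signature are illustrative assumptions, not the actual exports in mylib.cpp:

```python
import ctypes

# Load the compiled shared library (note: the .soc file, not .so).
lib = ctypes.CDLL("./mylib.soc")

# Hypothetical export for illustration. Declare argument/return types
# before calling, or ctypes defaults everything to int.
lib.count_inside_hypersphere.argtypes = [ctypes.c_long]
lib.count_inside_hypersphere.restype = ctypes.c_long

inside = lib.count_inside_hypersphere(1_000_000)
print(inside)
```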
numba.ipynb - Python+numba is significantly faster than vanilla Python, and the time ratio increases until about n = 1000, where it flattens out. The notebook took 18 minutes to run. Initially I tried extrapolation to produce the plot faster, but the extrapolated curve flattened out and looked wrong, so I dropped it. I used ChatGPT to clean up the code, and a leftover remark about extrapolation is still in the plot title; I don't want to rerun the notebook just to fix it, but I wanted to note why it's there.
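A minimal sketch of the kind of vanilla-vs-numba comparison the notebook runs; the Monte Carlo pi estimator here is an illustrative stand-in, not necessarily the exact function benchmarked:

```python
import random
import time
from numba import njit

def pi_python(n):
    # Plain-Python Monte Carlo estimate of pi.
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

@njit
def pi_numba(n):
    # Same loop, JIT-compiled by numba.
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

pi_numba(10)  # warm-up call so JIT compilation isn't timed

n = 1000
t0 = time.perf_counter()
pi_python(n)
t1 = time.perf_counter()
pi_numba(n)
t2 = time.perf_counter()
print("python/numba time ratio:", (t1 - t0) / (t2 - t1))
```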
C-to-Python.ipynb - I got my pi estimation with numpy down to around 1.7 seconds. Originally I generated x and y separately, then built the inside condition x*x + y*y <= 1. That method creates four temporary arrays (x*x, y*y, their sum, and the boolean mask), each the size of the sample, which is costly at 100 million samples; that original calculation took around 2.6 seconds. The change that bought the speed was switching to in-place operations, which reuse existing memory instead of allocating a temporary. The new method has only one temporary, so that's three fewer temporary arrays, or roughly 300 million fewer temporary values overall. This change saved almost 1 second on a simple calculation; in-place operations can shave a significant amount of time off more complex computational work. For part b, my hypersphere calculation took about 2 seconds. I did the previous assignment entirely in C++, so I didn't get to see the speedup ctypes gives over vanilla Python.
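A minimal sketch of the temporary-array difference described above (function names are illustrative):

```python
import numpy as np

def pi_with_temporaries(n):
    # x*x, y*y, their sum, and the boolean mask are four temporaries.
    x = np.random.random(n)
    y = np.random.random(n)
    return 4.0 * np.count_nonzero(x * x + y * y <= 1.0) / n

def pi_in_place(n):
    # Square and accumulate in place, so only the boolean mask
    # remains as a temporary array.
    x = np.random.random(n)
    y = np.random.random(n)
    x *= x
    y *= y
    x += y
    return 4.0 * np.count_nonzero(x <= 1.0) / n

# The notebook uses 100 million samples; a smaller n keeps this
# sketch light on memory.
print(pi_in_place(10_000_000))
```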
cppyy.ipynb - For part a, the vanilla-Python-to-ctypes time ratio grows nonlinearly with n (it looks exponential), so for large n a calculation can take on the order of 100,000 times longer in vanilla Python than with the C++ library. For part b, numba performs comparably to ctypes: their ratio is roughly constant apart from a spike I saw around n = 700. There is a comment in the notebook about how these fluctuations can happen and don't matter much, so I didn't smooth out the spike; for large n the behavior clearly shows a roughly constant ratio of about 1.5, so numba is a little faster.
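A minimal sketch of calling inline C++ from Python via cppyy, the bridging technique this notebook uses; the function body here is an illustrative assumption, not the notebook's actual code:

```python
import cppyy

# JIT-compile a small C++ function into the current process.
cppyy.cppdef("""
#include <cstdlib>

long count_inside(long n) {
    // Count random points inside the unit quarter-circle.
    long inside = 0;
    for (long i = 0; i < n; ++i) {
        double x = (double)std::rand() / RAND_MAX;
        double y = (double)std::rand() / RAND_MAX;
        if (x * x + y * y <= 1.0) ++inside;
    }
    return inside;
}
""")

n = 1_000_000
# The compiled function appears under cppyy.gbl.
print(4.0 * cppyy.gbl.count_inside(n) / n)
```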