
2D #21

Open
Permafacture opened this issue Nov 27, 2014 · 22 comments

Comments

@Permafacture

I couldn't find any way to communicate other than filing a bug.

I've been programming in Python as a hobby for 5 years. I contributed heavily to a now abandoned non-sequential ray tracer using Cython. My interest in ray tracing is for optics, but I know it can be applied to many fields and I'd like there to be a go-to Python library for scientists.

I'm very interested in contributing to a fast Python geometry library. Specifically, in computing intersections and normals of rays and 2D surfaces (even more specifically, rational B-splines) for 2D ray tracing. Bivariate B-splines are more than a bit over my head, but I'd be interested in that 3D possibility. In 3D, I'd be willing to go as far as a triangular mesh.

Is there any interest here in 2d geometry or computing normals (which I'd use for raytracing)?

@adamlwgriffiths
Owner

Definitely! As long as it's to do with rendering or maths, it can go in Pyrr =). And I'm more than happy to accept help.

Feel free to do some development and do a pull request.
Would be great if any functions could have tests added too =)

Let me know if you need any help in navigating the code base.

@Permafacture
Author

My comment about Pyrr not being vectorized didn't make it to GitHub, I guess. I saw that your roadmap actually does include working on arrays of geometries, and not just one at a time. I would be happy to take this on. I'll fork this project and start soon. In case you're curious, I'd replace line, which handles a single line, with lines, a manager of an array of lines, which one would use to add, delete and filter lines. One could feed this any iterable, and it wouldn't be compiled into an array until 1) one calls .compile() on the object or 2) a method of the object that requires it to be compiled is called.

Eventually, I'd like to use a GDAL-like style where one would call lines1.intersects(aabb1) and some magic calls lines_intersects_aabb(lines1, aabb1) based on the datatypes passed. I think there could also be an iter_intersects method so that if lines1 and lines2 are each 1000000 lines long, lines1.intersects(lines2) would return a (1000000,1000000,2) array that would overflow memory, while lines1.iter_intersects(lines2, steps=(100,0)) would return a generator that yields (100,1000000,2) arrays.
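A rough sketch of what I mean by iter_intersects (names are made up, and pairwise_midpoints just stands in for a real vectorized test):

import numpy as np

def pairwise_midpoints(chunk, lines_b):
    # stand-in for a real vectorized intersection test: returns a
    # (len(chunk), len(lines_b), 2) grid of placeholder points
    return np.zeros((len(chunk), len(lines_b), 2))

def iter_intersects(lines_a, lines_b, step=100):
    # yield the intersection grid in row blocks of `step` so the full
    # (N, M, 2) result never has to materialize in memory
    for start in range(0, len(lines_a), step):
        yield pairwise_midpoints(lines_a[start:start + step], lines_b)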

I'd also like to see a focus on speed, providing a Data Oriented Design ORM to the objects as much as possible. Using numpy arrays pretty much does this, but we can lay out the lines (for example) in the array however is fastest and the user doesn't need to know. So, a (6,N) array may be slightly faster than an (N,2,3) array.
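For example, the two layouts for the same N segments (just an illustration):

import numpy as np

N = 1000
aos = np.random.rand(N, 2, 3)      # (N, 2, 3): N segments of (begin, end) points
soa = aos.reshape(N, 6).T.copy()   # (6, N): one contiguous row per coordinate
begins_aos = aos[:, 0, :]          # (N, 3) view of the begin points
begins_soa = soa[:3, :]            # (3, N) view of the same information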

I will write tests also. Looking forward to this.

-Elliot

@adamlwgriffiths
Owner

One of the major design considerations of Pyrr was to keep the 'numpy way' of working on batches of data.
Alas, my numpy-fu isn't the best, so some functions had to forgo this (i.e., one should be able to normalize (normalise?) batches of vectors, multiply batches of matrices, etc).
So if you see any code that you know how to make work on batches, go for it.

I've tried to keep all functionality in both procedural and OO APIs.
The procedural tends to be more for vectorization (large batches of data) and the OO for singular blocks.
The OO API accepts slices of np arrays, so you could split a batch of arrays into separate objects, pass those around, and then do a mass transformation on the single array (this is how I was thinking of doing camera transforms for the scene; a sketch follows below).
If you can think of a way to do DOD for the OO API, that keeps it simple to use for singular instances of the data (1 Vector vs 100 Vectors), then I'm all ears =).
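A minimal numpy sketch of the slice-sharing idea (not Pyrr's actual classes, just the principle):

import numpy as np

positions = np.zeros((100, 3), dtype=np.float32)  # one contiguous batch
player = positions[3]        # a view; the "object" state lives in the batch
player[:] = (1.0, 2.0, 0.0)  # mutate through the view

# a mass transform over the whole batch updates every "object" at once
scale = np.diag((2.0, 2.0, 2.0)).astype(np.float32)
positions[:] = positions.dot(scale)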

Any reason to stage the conversion to np.array (i.e., the .compile method)?
Sounds like a compile step could be done at a higher level and the Lines functions would just do an immediate np conversion. This would be in keeping with the existing style, and with np itself.
I'm happy for Pyrr to include as many layers as makes sense. But I don't want to obscure basic functions either. Modularity FTW.

I'm happy for the data to be laid out however makes sense.
If it's a continuous line (A -> B -> C) then a flat list makes sense.
If it's a list of line segments (A -> B, B -> C) then (N,2,3,) seems more intuitive.
Does it make much of a difference performance wise? Surely the data is contiguous and wouldn't incur an overhead?
In the end, as long as the API is consistent, it's not a big deal. Where it's not obvious I try to go with convention if there is one.
There can always be functions to convert from one format to another using indices or what not.
I haven't had to deal with lines in my code so I'll leave the decision to you. Happy to bounce ideas around though =).

Be aware that the only OO implementations I've created so far are for Quaternions, Matrices and Vectors.
So the geometric primitives have no OO implementation at the moment.

If the API breaks we'll have to increment the major version.
I honestly don't know about usage so I'm not sure if API breakage will upset anyone =/.
PyPi stats seem pretty healthy, but I've never had any rude emails when I've broken the API.
Can always provide a backward compatible wrapper I guess that redirects line calls to lines.

Cheers,
Adam

@adamlwgriffiths
Owner

Let me know if you need any help =)

@Permafacture
Author

I'm in the process of writing an example of the API I am thinking of. When I have something in the next couple of weeks, I will post it on GitHub and you can see if it's a direction you want Pyrr to go, and if so, we can talk about how to adjust it to fit what you are envisioning also.

@Permafacture
Author

Heyoh. I created an example of the API I am thinking of.

https://github.com/Permafacture/python-computational-geometry

Regarding some of the previous points:

One of the major design considerations of Pyrr was to keep the 'numpy way' of working on batches of data.

I don't believe you've done this. Everything in geometric_tests.py is written for single geometries at a time. You could use numpy.vectorize, but this is still really just a for loop, not the real vectorization numpy provides through ufuncs.

If you can think of a way to do DOD for the OO API, that keeps it simple to use for singular instances of the data (1 Vector vs 100 Vectors), then I'm all ears =).

In my repo, geometric tests are added to geometric objects through metaprogramming. So, one writes a function for intersection between a line and a box once (as you have), and then both line and box objects have a method for intersecting the other. These methods have meaningful docstrings too, and help() works well on the geometry objects.

The objects are not single instances of geometries, but sets, basically a wrapper around an array, giving it methods appropriate to the data in that array. If you want the intersection between two lines, you just add one line to each Segments2d object. The result would be a 1x1 grid, so you'd have to get pts[0][0] instead of just getting a single point returned, which seems like a mild hassle for having the same API work for single and sets of vectors in an efficient way.
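Hypothetical usage (Segments2d is from my repo; the intersections method name here is illustrative):

seg_a = Segments2d([((0., 0.), (2., 2.))])  # a "collection" of one segment
seg_b = Segments2d([((0., 2.), (2., 0.))])
pts = seg_a.intersections(seg_b)            # a (1, 1, 2) grid of points
point = pts[0][0]                           # the single intersection, (1., 1.)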

Any reason to stage the conversion to np.array (ie the .compile method)?
Sounds like a compile step could be done in a higher level and the Lines functions would just do an immediate np conversion. This would be in keeping with the existing style, and np itself.

Well, convert is a method in the Geometry superclass, so it is "higher level". But my thinking is to not do unnecessary array creations. Just add geometries to the object by whatever means you want and they are compiled to an array right when it is needed.

Also, this allows the computation of properties such as normals, determinants, or what have you only when/if they are asked for. The results are "cached" as properties of the objects so they won't be recomputed. When the data is recompiled (lines are added), the cache is invalidated.
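A minimal sketch of the scheme (illustrative only, not the actual code in my repo):

import numpy as np

class LazyLines(object):
    def __init__(self):
        self._pending = []   # geometries added but not yet compiled
        self._array = None
        self._cache = {}

    def add(self, begin, end):
        self._pending.append((begin, end))
        self._array = None
        self._cache.clear()  # new data invalidates cached properties

    def compile(self):
        if self._array is None:
            self._array = np.asarray(self._pending, dtype=np.float64)
        return self._array

    @property
    def lengths(self):
        # computed on first access, then served from the cache
        if 'lengths' not in self._cache:
            arr = self.compile()  # (N, 2, 2) begin/end points
            self._cache['lengths'] = np.linalg.norm(arr[:, 1] - arr[:, 0], axis=1)
        return self._cache['lengths']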

I'm happy for Pyrr to include as many layers as makes sense. But I don't want to obscure basic functions either. Modularity FTW.

Sure. I don't think any basic functions are being obscured here. It's not many layers. Specific geometries inherit from the Geometry superclass, and geometric tests are added at run time.

By the way, if you are at all interested in this direction, another step I would consider is taking CGAL or GEOS functions and turning them into numpy ufuncs, which would add a lot of fast, tested, geometric tests to the project right away.

@adamlwgriffiths
Owner

One of the major design considerations of Pyrr was to keep the 'numpy way' of working on batches of data.

I don't believe you've done this. Everything in geometric_tests.py is written for single geometries at a time. You could use numpy.vectorize, but this is still really just a for loop, not the real vectorization numpy provides through ufuncs.

Totally true. I've wanted Pyrr to do this but I haven't had the time / experience with np to figure out the best way to do some of these. I also haven't had to use many of these functions in this way so I haven't had the motivation.
But it's definitely a goal I'd like to work toward.
Any help getting there would be appreciated, Pyrr could use more attention than I have the time to give it.

Essentially what you've written is a way of managing collections of object types, and that sounds fine to me.

I'll number things to make it easier to track points.

1. The Geometry class you've provided seems ok, although it's more of a collection (perhaps geometry is the correct term anyway? I'm not a mathematician =P).

2. I'd probably rename clear_properties to _clear_cache.
I prefer to prefix anything not intended for external use with _. But it's trivial.

3. Also, set the default dtype to None to be consistent with the rest of the API.
For my use case, np.float32 is my default for OpenGL, and I'd say np.float64 is yours. So I've left it as None to follow np's behaviour.

In my repo, geometric tests are added to geometric objects through metaprogramming. So, one writes a function for intersection between a line and a box once (as you have), and then both line and box objects have a method for intersecting the other.

4. Ok, so the existing geometric tests could be improved to be vectorised; if that's too much effort then just build the higher level content on top of it, and it can be refactored later. As long as the API is appropriate then the internal logic is trivial.

5. I planned to do automatic selection of geometric tests using the same methods I did with the Vector / Quaternion / Matrix objects.
This uses the multipledispatch lib / pattern. It's not neat, but it works.

# (these live on the Quaternion class)
@dispatch(BaseMatrix)
def __mul__(self, other):
    return self * Quaternion(other)

@dispatch(BaseVector)
def __mul__(self, other):
    return type(other)(quaternion.apply_to_vector(self, other))

This would work for a Line, Sphere, etc.
The only issue is when you're passed a standard np.array: because so many types share the same shape (sphere.shape = (4,), vector.shape = (4,), etc) it is ambiguous. So it needs to be objects or collections.

6. So if you create a *Collection class that then uses dispatch methods to compare to other collections, then that should work.

spheres.intersect(lines)
aabbs.intersect(aabbs)
rays.intersect(spheres)

7. You could also hijack some operators to make it easier.
I did this for vector / quaternion / matrix.
dot = v1|v2
cross = v1^v2
inverse = ~v1

I think & works well for intersection. We don't have to use any operators, though, where it could be ambiguous.

spheres & lines
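e.g. something like this sketch (class names assumed):

class ObjectCollection(object):
    def intersect(self, other):
        raise NotImplementedError  # dispatched per collection type

    def __and__(self, other):
        # so `spheres & lines` reads as set intersection
        return self.intersect(other)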

By the way, if you are at all interested in this direction, another step I would consider is taking CGAL or GEOS functions and turning them into numpy ufuncs, which would add a lot of fast, tested, geometric tests to the project right away.

If something can be scavenged / utilised to our benefit, I'm all for it.
I've been very interested in constructive solid geometry libs, but again haven't had time to look into any.

Thanks for your input =) It's good to get some fresh eyes on this.

Cheers,
Adam

@Permafacture
Author

1. The Geometry class you've provided seems ok, although it's more of a collection (perhaps geometry is the correct term anyway? I'm not a mathematician =P).

I agree. I should change it to GeometryCollection or something.

2. I'd probably rename clear_properties to _clear_cache. I prefer to prefix anything not intended for external use with _. But it's trivial.

I agree totally. Thought of it myself only after posting.

3. Also, set the default dtype to None to be consistent with the rest of the API.

Okay

5&6

I saw the multiple dispatch stuff and realized it was similar to what I was doing. I like that API style better. I'm not sure how to combine that with the other decorators I already have, which make it so one only has to write the geometric test once and that's it. Then it is a method of both geometries it applies to. The meta-programming is getting deep: more research required.

7. You could also hijack some operators to make it easier.

I don't like this. It possibly complicates/obfuscates things with little/no practical gain.

What is your use case, btw? Are there any geometric tests you'd like to be using in the batched format soon?

@adamlwgriffiths
Owner

Tbh I haven't used many of the functions in Pyrr yet.
I will be using them for a simple 3d engine / framework, but I've only been using vector / matrix / quaternion so far.
The library so far is basically me transcribing the C++ code from 3D Maths for Game Programmers.
Hence there are a number of gaps (lerp, slerp, various geometric tests).
Some I grabbed from the internet as I felt the need.

I had a think about the dtype, and I think you're right. I left them as None because I was trying to match np interfaces where it made sense. But we're creating 'objects' not generic buffers. So a good default makes sense. It should be np.float.
Objects like Rectangle, etc where an int type can make sense can just over-ride the dtype.
I myself use np.float32 due to OpenGL usage, but that's a special case, I'd rather target Pyrr at 3D maths in general, and not at a 3D API specifically.
Don't make this change yet, leave it as None for now, later on we can change the defaults throughout the codebase to np.float.

You're right about the operators, it's too much magic. I think it's convenient with vectors, but applying it haphazardly would be an attempt to corrupt python itself.
Just use collection.intersect(collection2) or what not.

In some places we'd need to return different types.
Rays intersecting rays would return points. Lines intersecting rectangles would return line segments.

Also, spheres need more than just intersections; you want normals and penetration distances (derivable from the intersection).

i, n, d = spheres.intersect(spheres2)

or

i = spheres.intersect(spheres2)
n = spheres.normals(spheres2)
d = spheres.distance(spheres2)

Perhaps the intersection returns an intersection object?

i = spheres.intersect(spheres2)
i.normals
i.distance

Unsure on what is best. Do you have any preference or experience with other APIs that you found worked well?
The existing geometric_tests match the second one, so perhaps just stick with that, which should simplify things greatly.

If you used multipledispatch, the function would have to be hand coded for each type.
Meta programming here wouldn't work so well.
It works well for vector / matrix / etc. That said, I'm happy for a new solution as I'm not attached to it.
But for intersections it may be annoying, especially as adding new collection types would require touching all the existing collections.
But perhaps new collections wouldn't be added often?
I'd rather clearer code than magic, but if the burden of maintaining is annoying then I'm happy for some easy to read magic.

The function names try to follow a standard, a_test_b, where a and b are types and test is the test type used (intersect, etc).
Using your method of storing a name of the collection type and then doing a getattr on the module will work for this.
As long as the API is consistent, internals can always be changed if we don't like them.

so something like this sounds about right

class ObjectCollection(...):
    def intersect(self, other):
        # meta check here
        func = getattr(geometric_tests, '{}_intersect_{}'.format(self._type, other._type))
        ...

class LineCollection(ObjectCollection):
    _type = 'line'

class LineSegmentCollection(ObjectCollection):
    _type = 'line_segment'

I'm not sure how you'd manage the return types.
You could provide more data in the class description

class LineCollection(...):
    _intersect_return = {'line': PointCollection}

I think this would prove cumbersome in the long run.

Perhaps the return type isn't important and just returning straight np arrays is ok, but I have a feeling you'd want to pass that to another collection for further checks. But this doesn't work so well for some operations on things like spheres (distance, normal, etc).

I see you're using results with a mask; this may be ok for now to get the initial API up.

Perhaps multipledispatch is a simpler solution for now. Just manually code the functions and worry about adding awesome magic later? I'll leave that up to you.

If you don't want to manually write the caching, I've had success with this library
https://pypi.python.org/pypi/memoize/
https://github.com/sionide21/memoize
To clear the cache, iterate through self._memo_* and del the attributes.
Would remove a few functions and save maintenance =).
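e.g. a sketch of the cache clear, assuming the decorator stashes results in _memo_* attributes as described:

def _clear_cache(self):
    for name in [a for a in vars(self) if a.startswith('_memo_')]:
        delattr(self, name)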

These are just some ideas, I don't want to paralyse you with a second voice conflicting with your goals. I know how that feels.
Write the API you think works best and is simplest to use and maintain.
Provide some simple example scripts and we'll go from there.

Cheers

@Permafacture
Author

I'll address some of those questions by writing some actual code, but there's a couple of things worth talking about more here.

In some places we'd need to return different types.
Rays intersecting rays would return points. Lines intersecting rectangles would return line segments.

Unfortunately, this isn't really true. I have used GDAL, and in that library a line segment intersecting a line segment could be a point OR a line segment. A line intersecting a rectangle could be a line segment OR a point. A polygon intersecting a polygon could be a point, line, OR polygon! These are annoying subtleties.

Perhaps the intersection returns an intersection object?

i = spheres.intersect(spheres2)
i.normals
i.distance

Unsure on what is best. Do you have any preference or experience with other APIs that you found worked well?

Like I said, I've only worked with GDAL. It was nice, but not vectorized, which is why I want to work on a new project. I think the results should be geometry collections also. Maybe in some cases, one would want a special geometry collection for the result. But for spheres, could we not just use a line segment? It has a normal and distance, and would contain information about the portions of the spheres that intersected.

I think we should stick to cases we know we care about to really get things working in a usable way, but it is good to keep an eye out for annoying generalities/specifics that could come up later and bite us if we aren't paying attention.

I see you're using results with a mask, this may be ok for now to get the initial API up.

I think the mask is important for the results to make sense. If you find the intersection between 10000 rays and 1000 rectangles, getting back a flat array of 3461 points doesn't do anything for you. You (probably) need to know what rays and rectangles intersected to get those points. The mask lets us keep the meaningful structure of the result and operate on a flat array of the result when needed. Numpy has masked arrays also, which would reduce the return value to a single object rather than an array and a mask. I need to research this a bit more to make sure there are no gotchas.
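A small numpy sketch of the masked-result idea (shapes shrunk, hit test faked):

import numpy as np

n_rays, n_rects = 1000, 100
points = np.zeros((n_rays, n_rects, 2))       # candidate intersection points
hit = np.random.rand(n_rays, n_rects) < 0.01  # stand-in for the real test

# keep the (rays, rects) structure, masking out the misses...
grid = np.ma.masked_array(points, mask=np.repeat(~hit[..., None], 2, axis=-1))

# ...or collapse to a flat (n_hits, 2) array when that's what's needed
flat = points[hit]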

A problem/question I see here is that a user defined geometry collection is a flat list of geometries, but a result is a grid. How do I keep track of the result's correspondence to the original collections through multiple queries? Like, find all the intersections between two sets of lines [N lines intersects M lines gives (N,M) points]. Find what rectangles contain those intersection points [(N,M) points within P rectangles gives (N,M,P) bools?]. What lines have points of intersection that are within a given rectangle? I guess we could have results just always become one more dimension than the arguments, and the user answers their questions with slices. So, results[:,:,3] would be an (N,M) array of points of intersections that are within the 4th rectangle...

So, here's a good example of a first use case: ray tracing rays and triangles. First you'd test for intersection between the rays and the bounding boxes of the triangles (to save expensive computations where there is clearly no intersection). Then, where they pass that test, get the point of intersection if it exists. Then, query all those triangles for their properties at that point of intersection (normal, color, reflective/refractive qualities). For POV raytracing, those colors, etc., need to correlate to the original rays, because the rays correspond to pixels on the screen.

Just off the top of my head, this could look like:

#triangles is made for POV ray tracing and subclassed from a geometry collection
passed_test = rays.intersects(triangles.bboxes)  #intersects just returns bools
intersections, intersections_mask = rays.intersections(triangles, mask=passed_test)
#Trouble on the next line: how does triangles know which axis corresponds to
# the triangles?
colors = triangles.query_colors(intersections, mask=intersections_mask, axis=1)
image = raycolors2pixels(colors)

Hmm, so maybe each axis needs to be named after the geometry collection instance it represents? Using a dictionary lookup or recarray?

Happy new year,
Elliot

@adamlwgriffiths
Owner

I believe your example above could be solved with an extra return value, which is an index of what an object collided with.

a = rays()
b = triangles()
c = a.intersection(b)

for intersection in c:
    geometry, ray, triangle = intersection

So the intersection could instead be a tuple of indices with an intersection value.
Which also allows you to do

_, ray_indices, triangle_indices = c
intersecting_rays = rays[ray_indices]
triangles = triangles[triangle_indices]

Whether the tuple is indices, a masked array of booleans, or what-not isn't important.
But the extra return value should solve your problem?

As an object wrapper, it sounds like there needs to be the ability to get at:
a. the source object
b. the collided object
c. the intersection object (line segment, point, ray, etc)
d. further derivations of intersections, such as ray reflections, sphere distances, etc.

I'm not sure how you'd wrap that as objects.
It seems like you'd want a generic collection that can contain all types, which can perform checks against other collections.
It could then return a collision result, which is a superset of a geometry collection. Which lets you extract ray collisions, etc. It could even return pointers to the original geometry from those collections.
Then if you wanted to do further manipulation such as flipping normals or applying colours, you take the geometry type you want, run a function over them (flip, colour, etc), then re-insert them in a new collection. Would that work?

Curious to see what you're thinking.

@Permafacture
Author

Hey,

I've got some messy code (embarrassed to show it at this point) that is starting to do what I want. Here's how a user uses it (it only does line intersections right now):

if __name__ == '__main__':
    import numpy as np
    import matplotlib.pyplot as plt
    # LineLineIntersect is assumed to live alongside Lines2d
    from composite_geometry import Lines2d, LineLineIntersect
    from base_geometry import Vec2d

    n = 5
    begins1 = np.random.randint(-25, 25, (n, 2))
    ends1 = np.random.randint(-25, 25, (n, 2))
    segs1 = Lines2d(begin=begins1, end=ends1)

    m = 7
    begins2 = np.random.randint(-25, 25, (m, m, 2))
    ends2 = np.random.randint(-25, 25, (m, m, 2))
    segs2 = Lines2d(begin=begins2, end=ends2)

    # plot segments
    xs, ys = export2mpl(segs1)
    plt.plot(xs, ys, 'b')    # blue solid
    xs, ys = export2mpl(segs2)
    plt.plot(xs, ys, 'g--')  # dashed green

    # plot the beginning of all segments in segs2
    xs, ys = export2mpl(segs2['begin'])
    plt.plot(xs, ys, 'ko')   # black dot

    # set up result object (not calculated until results are accessed)
    result = LineLineIntersect(segs1, segs2)

    ### plot segs2 from beginning to first intersection ###

    # get the parameter of point intersections in the frame of segs2
    ub = result.points[segs2]  # actually triggers calculation

    # result.lines[segs2], the line segments representing co-linear overlap,
    # has not been calculated, but would be if accessed

    axis = 2  # manually specify the axis representing segs2
    shortest = np.nanmin(ub.arr, axis=axis)  # smallest parameter is first intersection
    pts = segs2.eval_param(shortest)

    # new line segments representing segs2 from beginning to first intersection
    short_lines = Lines2d(begin=segs2['begin'].arr, end=pts.arr)
    xs, ys = export2mpl(short_lines)  # was batched_mpl, presumably a typo
    plt.plot(xs, ys, 'g')    # solid green
    plt.show()

Result:

[attached image: simple_trace]

I'm a bit stuck mucking through making better tools for using named axes (i.e., shortest = min(ub.arr, axis=segs1)). I thought a good next step is to see if I can make this code do some of what you are actually using it for. If it is more performant (which I strongly suspect) and more useful to you, I'll know to keep going.

I'd prefer to keep as much communication public as possible, but if this one bug report is not the best forum for communication, we could use email (listserv?). My email address is my user name at gmail.

@adamlwgriffiths
Owner

I agree regarding public communication. Github is fine by me =)

There may be a number of things that could possibly be done to make the code simpler. But the internals may be more complex than I realise.

Regarding matching functionality to use.
As long as the underlying algorithms are accessible and usable without too much effort, then it will be fine.
Also don't stress about code quality. API is important, but only once it's officially in the code base.
If you want to put the code up somewhere, I'm happy to play around with it.

Cheers

@Permafacture
Author

Hey, FYI, don't play with it too much. So far I've changed the base geometries to subclasses of ndarray, added cacheable properties to base geometries, and changed the static and cacheable properties to be properties rather than dictionary items, so normalize(segs1['begin'].arr) is now segs1.begin.normal. Subclassing ndarray is kind of a pain, but it's really the way to do it I think, and I'm just trying to figure out how slices should behave before pushing my changes.

I think slices should be invalid. Otherwise, you spend more time in python land creating a sliced object that has sliced cached properties (if they've been cached yet) without knowing that you are causing this overhead. This would be bad for calculating results and such, where you just want to operate on raw arrays. There will likely be a downcast method (raw()) that returns a plain ndarray of the data (for when you're slicing to perform some operations on the data, like in a result calculation), and if you want a Line2d object that is a slice of another you can easily create a new one from the ndarray slice and will know that you've invalidated the cache. E.g.: newline2d = Line2d(begin=oldline2d.begin.raw()[mask], normal=oldline2d.normal.raw()[mask]).
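A tiny sketch of the pattern (illustrative; the real classes carry more state):

import numpy as np

class Vec2d(np.ndarray):
    def __new__(cls, data):
        return np.asarray(data, dtype=np.float64).view(cls)

    def raw(self):
        # hand back a plain ndarray copy (not a memory view), so slicing
        # and masking happen outside the wrapper and its cache
        return np.array(self, subok=False)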

You're still creating a new array when you call raw() to get ready to do slices (instead of getting a memory view), but in performance critical sections, one can probably reuse these arrays in calculating the results anyway.

@adamlwgriffiths
Owner

Sorry, I intended to play around and even brainstorm independently (without looking too much at yours, to see how an independent solution compares) but I haven't had a chance.

The changes you mention above are things I was thinking of adding to what you'd suggested anyway, so great work with that.

I would avoid removing functionality where it makes sense. I'd rather not be an encumbrance for the sake of performance.
That being said, I'd plaster warnings all over the place with example code on how to avoid any potential bottlenecks. I think it's better to have good docs to deal with an issue like performance.

How are you handling the edge-case intersection results? E.g. line intersections where the result is another line?

I'm foreseeing myself being busy for the immediate future. I'll try to find some time to take a look.

@Permafacture
Author

I get being busy. No rush. I'd expect to get my library up to the same functionality as yours before you even consider working with it. This would include a performance comparison for real world problems (problems you already deal with). It's not a fork, but a different approach to the same question. Once I have the API about right, I might try adding transformations and see if I can get my abandoned pyglet project to go faster. Transformations (not using numpy) are like 80% of the run time: https://vimeo.com/66736654 and https://vimeo.com/65989831

The segment-segment intersect object has points and lines attributes, which are cached (not calculated unless needed). I would later add a result-level mask so that the intersect returning lines doesn't check the combinations that are already known to have a point intersection.

Letting slices/masks/etc not work is not actually removing functionality, but just not implementing it. But, users would expect numpy array like objects to support slicing so I get what you mean. Maybe a flag could enable/disable the overhead, and users who want errors when using the objects inefficiently could have them.

@chrsbats
Collaborator

"Transformations (not using numpy) are like 80% of the run time"
Are those demos in pure python?

@Permafacture
Author

Physics with pymunk (wraps a C library), graphics with OpenGL, and marching squares with OpenCV. So, no, Python is the glue. Transformations are the only heavy lifting in Python, and I pay the price. Still, 60 fps is not awful.

@chrsbats
Collaborator

That kind of thing (and some of the stuff in PyRR) could be accelerated very easily via the Numba JIT. Numba has a lot of nice hooks for defining ufuncs without resorting to C code as well. Unfortunately Numba is a pretty heavy install (it's not a simple pip install and would require LLVM to be around). The easiest way to set it up is the Anaconda distro. I haven't mentioned Numba earlier as it would break interop with a lot of pure python projects, but if you are already using something like OpenCV it probably doesn't matter, as that lib isn't very venv friendly either.
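For example, a minimal Numba ufunc sketch (assumes a working Numba install; the function itself is just illustrative):

import numpy as np
from numba import vectorize

@vectorize(['float64(float64, float64, float64, float64)'])
def cross2d(ax, ay, bx, by):
    # 2D cross product, a building block of segment intersection tests
    return ax * by - ay * bx

a = np.random.rand(1000000, 2)
b = np.random.rand(1000000, 2)
c = cross2d(a[:, 0], a[:, 1], b[:, 0], b[:, 1])  # compiled loop, no Python overhead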

Could be worth considering a Numba-based fork? The more compatible option would be Cython acceleration, but it's still a bit of a pain.

@Permafacture
Author

I found OpenCV to be as user friendly as numpy. What have you found unfriendly about it?

I don't really care about that demo. It would just be a test at using pyrr for performance, and inform the design. Numba sounds cool, but if I did care about that demo, it would be as an easily distributable game engine with user friendly installation. Once pyrr is vectorized, I think Cython or wrapping GDAL into numpy would be good steps. Cython is a bit of a pain to start using, but distributes more easily, which is important.

@chrsbats
Collaborator

From memory I found OpenCV hard to install via pip in a virtualenv. I ended up installing it via conda so I could keep the venv and python version isolated, which in turn required hunting down a 3rd party package on binstar. One of the nice things about numpy is that it can compile in a virtualenv.

Using OpenCV itself isn't too bad but the underlying C bindings do show through on occasion.

