Mesh Visualization #896

Closed
SimonDanisch opened this issue May 9, 2015 · 29 comments

@SimonDanisch

Hi,
the following example isn't working:

[screenshot]

This definitely looks like wrong normals.
Is this a similar problem to #892?

# -*- coding: utf-8 -*-
# Copyright (c) 2014, Vispy Development Team.
# Distributed under the (new) BSD License. See LICENSE.txt for more info.

"""
Simple demonstration of Mesh visual.
"""
import sys

from vispy import scene, io

canvas = scene.SceneCanvas(keys='interactive')
canvas.view = canvas.central_widget.add_view()

# read_mesh returns (vertices, faces, normals, texture coordinates)
verts, faces, normals, _ = io.read_mesh("Rider.obj")
mesh = scene.visuals.Mesh(vertices=verts, faces=faces, shading='smooth')

canvas.view.add(mesh)
canvas.view.camera = scene.TurntableCamera()
canvas.view.camera.set_range((-20, 20), (-20, 20), (-20, 20))
canvas.show()

if __name__ == '__main__':
    if sys.flags.interactive != 1:
        canvas.app.run()

Best,
Simon

@larsoner
Member

Did this work at some point previously, e.g. in an earlier version of vispy? Can you reproduce the same problem with our triceratops object? If not, could you upload the model somewhere?

@SimonDanisch
Author

That's my second try with vispy, so I don't know about previous states ;)
Do you mean the tutorial\visuals\05_viewer_location.py example?
It doesn't seem to use the newest API. When I use the mesh in my code snippet, it looks different but still definitely weird.
[screenshot]

I got the model from turbosquid:
http://www.turbosquid.com/3d-models/printable-thoat-rider-sculpture-obj-free/735300

@larsoner
Member

Yeah, and in case we need a simpler example:

>>> from vispy import io, plot
>>> fname = io.load_data_file('orig/triceratops.obj.gz')
>>> fig = plot.Fig()
>>> fig[0, 0].mesh(*io.read_mesh(fname)[:2])

I get the same behavior there. @campagnola any idea what might be causing this?

@campagnola
Member

@SimonDanisch, thanks for pointing out that the tutorials are broken! The screenshot you posted looks to me like the mesh is being drawn with face culling enabled. Make sure you call gloo.set_state(cull_face=False).
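
For reference, a minimal sketch of the two places culling can be toggled (mesh being the visual from the scripts above):

from vispy import gloo

# Globally, before drawing:
gloo.set_state(cull_face=False)

# Or on the visual itself:
mesh.set_gl_state(cull_face=False)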

@Eric89GXL, your example looks correct on my machine. Can you verify that culling is being disabled in your MeshVisual? (This code has been touched recently, so check with latest master.)

@larsoner
Member

I tried on latest master on my machine, and I get:

[screenshot: screen shot 2015-05-10 at 6 55 23 am]

@SimonDanisch
Author

That looks like mine...
I tried inserting gloo.set_state(cull_face=False) at various places, but it doesn't change anything.

While we're at it, a Python beginner question: why are my vispy scripts only working in the examples folder?
I couldn't really find a solution for that; I guess it has something to do with the load path. It says: ImportError: cannot import name app.
I'm on Anaconda, and installed vispy either via conda install or with python setup.py install... not sure which one worked.

@campagnola
Member

Ok, I can enable culling and it does not generate the effect you see (so the normal vectors are correct for this mesh). However, if I disable the depth test (search for 'depth_test' in visuals/mesh.py), then I see the same artifact you have. So you are correct that this is a depth testing issue. The question then is why it is only broken on some platforms.
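
To reproduce this without editing mesh.py, a minimal sketch using the per-visual GL state API (assuming mesh is the visual from the scripts above):

# Deliberately disable depth testing on the visual to trigger the artifact
mesh.set_gl_state(depth_test=False)
canvas.update()  # force a redraw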

@larsoner
Member

@SimonDanisch Python has a fixed order in which it searches directories for imports. Say you run import vispy. The first place it looks is the current directory, for a vispy.py or a directory named vispy with an __init__.py inside it. If either exists, it imports vispy from there. If not, it looks in the user's Python directory, then the system-wide one. Somewhere in there it also consults PYTHONPATH.

When you do setup.py install, it installs vispy to the system Python directory (in your case, the root Anaconda directory). If you installed vispy via conda install, that is effectively doing something like setup.py install with whatever version of vispy the conda folks have packaged, which is probably out of date; you should try updating to latest master.

You can always check to see where something is being imported from by doing:

>>> import vispy
>>> vispy
<module 'vispy' from '/Users/larsoner/custombuilds/vispy/vispy/__init__.pyc'>
>>> 
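
You can also inspect the full search order directly (the entries and their order will differ per machine):

>>> import sys
>>> sys.path  # directories Python searches for imports, in order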

The short answer to your question is that you should be able to run the examples from anywhere, since from vispy import app should work regardless of where you open the Python terminal.

@larsoner
Member

@campagnola I'm not sure why it would be platform dependent. I'll take a look on my Linux machine and see if it's fixed there tomorrow.

@SimonDanisch
Author

Thanks a lot for the explanation!
Indeed, from the console it works from anywhere... so it must be a problem with Sublime! I'll see if I can pin this down further.

@larsoner added this to the version 0.5 milestone on May 28, 2015
@larsoner
Member

I can confirm this works on Linux, so it does appear to be platform-dependent somehow. Argh...

@julienr
Contributor

julienr commented Oct 28, 2015

I think I am running into the same problem, and I have a hypothesis:
In vispy/visuals/mesh.py, in the shading_vertex_template code, the $visual2scene transform is applied as follows:

    vec4 pos_scene = $visual2scene($to_vec4($position));
    vec4 normal_scene = $visual2scene(vec4($normal, 1.0));
    vec4 origin_scene = $visual2scene(vec4(0.0, 0.0, 0.0, 1.0));

As I understand it, $visual2scene is what is usually called the modelview matrix. I think that to transform normals with a given modelview matrix, you should apply transpose(inverse(modelview)) rather than the modelview itself. This is detailed here:
http://www.lighthouse3d.com/tutorials/glsl-12-tutorial/the-normal-matrix/

I haven't been able to test this quickly, because I haven't found how to compute an inverse transpose using the vispy transforms pipeline.
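
For what it's worth, the normal matrix is easy to check with plain numpy outside the transform pipeline. A sketch, where modelview is a toy matrix standing in for whatever $visual2scene resolves to:

import numpy as np

# Toy modelview with non-uniform scale, a case where transforming normals
# by the modelview itself would bend them off the surface
modelview = np.diag([2.0, 1.0, 1.0, 1.0])

# Normal matrix: inverse-transpose of the upper-left 3x3 block
normal_matrix = np.linalg.inv(modelview[:3, :3]).T

n = np.array([1.0, 0.0, 0.0])
n_scene = normal_matrix.dot(n)
n_scene /= np.linalg.norm(n_scene)  # re-normalize after transforming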

@rougier
Contributor

rougier commented Oct 28, 2015

Maybe the normal computation is simply wrong. Could you check with glumpy's normal computation code instead? It's available at https://github.com/glumpy/glumpy/blob/master/glumpy/geometry/normals.py
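
(For reference, the usual algorithm such code implements, as a generic numpy sketch rather than glumpy's actual implementation: accumulate each face normal onto its three vertices, then normalize.)

import numpy as np

def vertex_normals(verts, faces):
    # Unnormalized face normals from the cross product of two edge vectors
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    fn = np.cross(v1 - v0, v2 - v0)
    # Sum each face normal onto its three vertices
    vn = np.zeros_like(verts)
    for i in range(3):
        np.add.at(vn, faces[:, i], fn)
    # Normalize, guarding vertices that belong to no face
    lens = np.linalg.norm(vn, axis=1, keepdims=True)
    lens[lens == 0] = 1.0
    return vn / lens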

@julienr
Contributor

julienr commented Oct 28, 2015

@rougier The normals computed by MeshData.get_vertex_normals look OK. I load my mesh from a .obj file (exported from Blender) which contains normals, and they are similar to the ones I get using get_vertex_normals.

@rougier
Contributor

rougier commented Oct 29, 2015

Are the normals inside the .obj file, or are they computed by vispy?

@julienr
Contributor

julienr commented Oct 29, 2015

I am comparing the normals in the .obj to the ones computed by vispy, and they are the same.

I've made a gist with the code I'm using.

This is what I get without the mesh.set_gl_state('translucent', depth_test=True, cull_face=True) line:

[screenshot: smooth_shading]

If I show the normals in the fragment shader (gl_FragColor = vec4(v_normal_vec, 1.0)), I get the following:

[screenshot: normals]

Now, if I enable cull_face for the mesh (it defaults to False in MeshVisual.__init__), I get the following:

[screenshot: cull_face_true]

This looks much better, but there are still some artifacts depending on the viewpoint:

[screenshots: screen shot 2015-10-29 at 10 08 17, screen shot 2015-10-29 at 10 08 22]

Finally, here is the smooth rendering I get with cull_face=True. For some reason, this doesn't look like per-fragment lighting:

[screenshot: screen shot 2015-10-29 at 10 11 07]

I am not sure why enabling culling reduces the number of artifacts. Is it that depth testing isn't working correctly, and culling removes some of the faces that should have been removed by the depth test?

Regarding the lighting issue, it looks like some of the varyings passed to the fragment shader are not interpolated correctly.

@julienr
Contributor

julienr commented Oct 29, 2015

Ok, I found out what was causing the lighting issue: since the .obj file contains both vertices and normals, a single vertex can have different normals depending on the face it belongs to, and is therefore duplicated at loading time. Loading from a PLY file and computing the normals using vispy solves this.

I still need to use cull_face=True to get a correct rendering, and I still see some small artifacts.

cull_face=True vs. cull_face=False (default):
[screenshots: screen shot 2015-10-29 at 10 40 45, screen shot 2015-10-29 at 10 44 38]
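
A sketch of that workaround: let vispy recompute shared-vertex normals via MeshData instead of using the duplicated per-face normals from the .obj loader (the filename is a placeholder):

from vispy import io, scene
from vispy.geometry import MeshData

verts, faces, _, _ = io.read_mesh('model.ply')  # placeholder filename
mdata = MeshData(vertices=verts, faces=faces)   # normals recomputed on demand
mesh = scene.visuals.Mesh(meshdata=mdata, shading='smooth')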

@julienr
Contributor

julienr commented Oct 29, 2015

Forcing view.camera.depth_value = 10 seems to fix the rendering issues, and I don't have to enable cull_face=True. So this looks like a depth testing issue.

@julienr
Contributor

julienr commented Oct 29, 2015

I fixed it for my case by doing the following (combined into a single script below):

  • Force a 24-bit depth buffer (on OSX with the Qt backend, it picks a 16-bit one by default):

    canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True,
                               config={'depth_size': 24})

  • Change view.camera.depth_value to 10 instead of the default of 1000000.0:

    view.camera.depth_value = 10
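
Combined into a single minimal script (a sketch; the mesh filename is a placeholder):

from vispy import io, scene

# Request a 24-bit depth buffer (on OSX the Qt backend otherwise picks 16 bits)
canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True,
                           config={'depth_size': 24})
view = canvas.central_widget.add_view()

verts, faces, _, _ = io.read_mesh('model.ply')  # placeholder filename
view.add(scene.visuals.Mesh(vertices=verts, faces=faces, shading='smooth'))

view.camera = scene.TurntableCamera()
view.camera.depth_value = 10  # far plane much nearer than the 1e6 default

if __name__ == '__main__':
    canvas.app.run()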

@larsoner
Member

@julienr so the normal calculation seems correct, and it was a depth buffer + cull_face issue?

@julienr
Contributor

julienr commented Oct 29, 2015

@Eric89GXL Yes. Mostly a depth buffer issue (cull_face is somehow a workaround for the depth buffer issue). Maybe vispy should default to a 24-bit depth buffer on OSX and use a slightly smaller default than 1000000.0 for far? Or figure out the far value somewhat automatically from the scene?

@larsoner
Member

It seems like we could try to infer the right camera depth value from the depths of the objects in the scene, but I wonder if it would be fragile, or perhaps slow, e.g. if objects are moved via transforms often.

@julienr
Contributor

julienr commented Nov 1, 2015

I definitely think this is something that should just work, but you're right that it's more complicated than it looks.
Does anybody know how other tools handle this? VTK? Or even Blender?

@larsoner
Member

larsoner commented Nov 2, 2015

@almarklein any ideas for how to improve the depth buffer?

@almarklein
Member

Better defaults, perhaps. We should certainly aim for a default depth buffer of at least 24 bits. And if the current camera.depth_value is too large, we might need to make it smaller, though 10 seems much too small. Also see this comment: https://github.com/vispy/vispy/blob/master/vispy/scene/cameras/base_camera.py#L82

@astrofrog
Contributor

I ran into this too (on Mac OS X 10.8), and it turns out two issues I opened (#1174 and #1175) are both fixed by setting config={'depth_size': 24}. Would it make sense to make this the default?

@rougier
Contributor

rougier commented Feb 23, 2016

@astrofrog Yes, definitely.

@almarklein
Member

@astrofrog PR? :)

@liubenyuan
Contributor

I have seen that many issues can be fixed by setting config={'depth_size': 24} on the canvas, so why don't we make this value the default?
