
Add an example for Magnum LibOvrIntegration #10

Merged
merged 10 commits into mosra:master from Squareys:ovr-example on Jun 21, 2015

@Squareys
Contributor

Squareys commented Jun 3, 2015

This is an early pull request and should not be merged.

Hello @mosra, hello everybody,

I am currently working on an example which shows how to use Magnum for rendering to the Oculus Rift. I will share my progress and ask questions here.

Current state and plans:

  • connect the Rift
  • apply Rift head orientation and position to the camera
  • render to the Rift via the SDK compositor
  • find a neat design for everything
    • use the LibOVR integration library
  • small interesting demo scene (Decided against, since this isn't supposed to be a Rift demo, but a simple example. And I should get back to studies soon.)
  • improve the FindOVR CMake module
  • clean up code and code style

All code is subject to change; fixup! commits will be autosquashed after everything is done.

To compile and run the code, you need the latest Oculus SDK and runtime. For rendering, a debug HMD is created automatically in case no hardware is connected. Head tracking obviously requires an actual headset.

All suggestions, inspiration and critique are welcome ;)

Greetings,
Squareys

@Squareys Squareys force-pushed the Squareys:ovr-example branch from c638343 to e7a78e8 Jun 5, 2015

@Squareys


Contributor

Squareys commented Jun 5, 2015

@mosra I might need a little help with some framebuffer stuff. In case you have the time, that is.

My previous two commits should add the distortion rendering, but all I get is a black screen. I believe the problem lies in my probably incorrect use of Framebuffer.

General Idea:

  • HMDCamera renders to one of a set of textures which are attached to a framebuffer shortly before rendering.
  • Via the Oculus SDK compositor, these textures then go through distortion rendering, and a copy of the result is drawn onto a mirror texture (and the HMD, if connected).
  • The mirror texture is blitted to the defaultFramebuffer to be shown in the window (see the sketch after this list).
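
To make the intended flow concrete, here is a rough sketch of the per-frame render step in Magnum terms; this is not the actual code from the branch, and eyeTexture, textureSize, camera and drawables are placeholder names:

    /* attach the texture currently handed out by the SDK and render into it */
    Framebuffer framebuffer{Range2Di{{}, textureSize}};
    framebuffer.attachTexture(Framebuffer::ColorAttachment{0}, eyeTexture, 0);
    framebuffer.bind(FramebufferTarget::Draw);
    framebuffer.clear(FramebufferClear::Color|FramebufferClear::Depth);
    camera.draw(drawables);
    /* the SDK compositor then distorts the result and fills the mirror
       texture, which finally gets blitted to defaultFramebuffer */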

I will list the relevant portions of the code below; I would be glad if you could have a quick look at them and tell me if you find something fishy:

Meanwhile I will implement the conversions you mentioned above, so no hurry.

@mosra


Owner

mosra commented Jun 5, 2015

This line looks suspicious to me (you are blitting a rectangle with zero height):

Range2Di::fromSize({0, h}, {w, 0}),

Other than that it looks good to me (apart from some unnecessary stuff here: reading uninitialized data into an image, but I assume that's a leftover from debugging).
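
To spell out the suspicion: fromSize() takes a corner and a size, so the line above describes a w x 0 rectangle. If the intent was a vertically flipped source region (running from (0, h) down to (w, 0)), that would be the plain min/max constructor instead; a sketch:

    Range2Di broken = Range2Di::fromSize({0, h}, {w, 0}); /* corner (0, h), size w x 0 */
    Range2Di flipped{{0, h}, {w, 0}};                      /* from (0, h) down to (w, 0) */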

Some GL code I'm not sure how to translate to Magnum

I didn't implement detaching textures from a framebuffer yet ('cause I thought it wouldn't be needed... apparently it is). Will do that later today (it's doable using pure GL calls, but you don't want to do that :)).

Meanwhile I will implement the conversions you mentioned above, so no hurry.

The Quaternion class doesn't have the bits needed to do this, so it's impossible at this point :)

@Squareys


Contributor

Squareys commented Jun 5, 2015

Oh, man, thank you! It actually works now :D

[screenshot: Ovr]

I started coding the integration library anyway; I can always add the converters for the quaternions later on.
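
Until the converters exist, the conversion can be done by hand; a minimal sketch, assuming an ovrOrientation variable of the SDK's plain ovrQuatf type (four floats x, y, z, w):

    /* hand-rolled until proper converters land in the integration library */
    Quaternion orientation{{ovrOrientation.x, ovrOrientation.y, ovrOrientation.z},
                           ovrOrientation.w};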

@mosra


Owner

mosra commented Jun 5, 2015

Great! :)

Framebuffer detaching is done in mosra/magnum@233a15b.
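
From the example's side that presumably reduces to a single call per texture; a rough sketch, with the attachment point being illustrative:

    /* release the SDK-owned texture from the framebuffer again */
    framebuffer.detach(Framebuffer::ColorAttachment{0});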

@Squareys


Contributor

Squareys commented Jun 6, 2015

@mosra Thanks, that was extremely quick.

My progress on the libovr integration is here: https://github.com/Squareys/magnum-integration/tree/libovr. For now, I am aiming to wrap the most important functions of the libOVR CAPI in a way that in the end will require no libOVR-specific code in the example.

I'm almost done, but I am a little unsure whether this might not be overdoing it, since LibOVRIntegration needs to be maintained afterwards and so on (I will need this for at least the next year anyway, so I may as well do that).

I will probably split up the code of LibOVRIntegration into more files; it's getting kinda cluttered.

@mosra


Owner

mosra commented Jun 6, 2015

Wow, that's an outstanding amount of first-class work, seriously. You should give the copyright to yourself, though ;)

I also don't know where exactly to draw the line between essential functionality and an "overdone" framework. For example, in the Bullet integration the only thing is a "feature" that integrates a Bullet rigid body into the Magnum scene graph. I could wrap nearly everything from Bullet (and there's a lot of stuff), but the decision was that it wouldn't help anyone, as people who knew Bullet would then need to learn a completely new API and people who knew Magnum would need to do that anyway. Then there's the burden of adapting the wrapper to each new Bullet release, etc.

However, as I imagine it, the OVR lib is just this thing that provides position/orientation and passes textures around for distortion; people don't need to interact with it directly after the setup is done, and there won't be much stuff added/changed over time.

In my opinion this doesn't need to be split into more files; everything still needs to be used at the same time, am I right? I'd make OvrTextureUtil::wrap() a free function in the LibOvrIntegration namespace and put it into Hmd.h too, so there are even fewer files.
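
A sketch of what such a free function could look like, assuming the caller extracts the GL texture id from the SDK struct and that Texture2D::wrap() is used for the non-owning wrap (names are illustrative, not the actual LibOvrIntegration API):

    namespace Magnum { namespace LibOvrIntegration {

    /* non-owning Magnum view onto a GL texture created by the Oculus SDK */
    inline Texture2D wrap(GLuint textureId) {
        return Texture2D::wrap(textureId);
    }

    }}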

@mosra


Owner

mosra commented Jun 6, 2015

... damn, replied late :D

@Squareys


Contributor

Squareys commented Jun 6, 2015

Wow, that's outstanding amount of first-class work, seriously.

Thank you!

there won't be much stuff added/changed over time.

The Oculus SDK went through some rather significant API breaks in the last year (it is still in beta). I believe they kinda settled on an idea now, though.

In my opinion this doesn't need to be split into more files, everything still needs to be used at the same time, am I right?

Ah, well, splitting already happened. I might merge some files together again after I'm done. You are right with the exception that the HMDCamera in the example doesn't need LibOVRContext, but that is rather insignificant, right?* I will definitely move OVRTextureUtil::wrap() into a free function as you suggested. It is only needed internally now, btw.

I started moving the example code over to LibOVRIntegration where possible, and everything looks much simpler now! The compositor and its layers still need to be wrapped; then the example should be able to run without direct use of libOVR.

* [Edit] And the future Compositor with its Layers for that matter, so maybe there is some potential for keeping things split after all.

@Squareys


Contributor

Squareys commented Jun 7, 2015

You should give the copyright to yourself, though ;)

Do you mean assigning it "completely" to myself, or adding it as in BulletIntegration?

* [EDIT] Sorry, stupid question, just read https://github.com/mosra/magnum-integration/blob/master/CONTRIBUTING.md.

@Squareys Squareys referenced this pull request Jun 7, 2015

Merged

LibOVRIntegration #3

18 of 18 tasks complete
@Squareys


Contributor

Squareys commented Jun 18, 2015

I cleaned up some code and changed the comments etc.

I currently don't see the cube(s) I added to the scene, so I will fix that and then try to test this on my 8-year-old PC (laptops are not supported by the LibOVR SDK 0.6.0.0-beta anymore and won't be in the future; debug HMDs work, though, the problems are only with real devices). I kinda doubt that its graphics card will support the required OpenGL features.

I will tell you how that went later on ;)

@Squareys Squareys force-pushed the Squareys:ovr-example branch from 9d12fdd to b784b9b Jun 18, 2015

@Squareys


Contributor

Squareys commented Jun 18, 2015

I'm still stuck at the scene not being rendered correctly. I will have another go at this tomorrow.

@Squareys


Contributor

Squareys commented Jun 19, 2015

Resetting all context states with Context::current()->resetState() after basically all remotely problematic SDK calls did not change anything.

Also, rendering to a direct layer (undistorted) results merely in a grey scene.

I am guessing my scene is not set up correctly; I will let you know if I find something.

@Squareys


Contributor

Squareys commented Jun 19, 2015

Did it! :D I hadn't set up my scene correctly, and the perspective matrix for the camera required some revised math.

@mosra


Owner

mosra commented Jun 19, 2015

Screenshot, pretty please :)

@Squareys


Contributor

Squareys commented Jun 19, 2015

Alright, will update the one above, give me 10 minutes ;)

@Squareys


Contributor

Squareys commented Jun 19, 2015

Omg. Windows -_- There are two files in the git index now: OvrExample.cpp and OVRExample.cpp.

I will fix this on Linux later. Until then, I will finish up LibOvrIntegration and the documentation and fix the naming of the example executable, and then both projects should be ready for merge from my side. I will notify you when I get there.

@mosra


Owner

mosra commented Jun 19, 2015

Cool. :)

So, in the end, no state resetting was needed from Magnum side, am I right? That's great.

@Squareys


Contributor

Squareys commented Jun 19, 2015

So, in the end, no state resetting was needed from Magnum side, am I right?

100% 👍

@Squareys Squareys force-pushed the Squareys:ovr-example branch 3 times, most recently from 77be139 to f3255be Jun 19, 2015

@Squareys Squareys changed the title from [WIP] Add an example for magnum + libOVR to Add an example for Magnum LibOvrIntegration Jun 19, 2015

@Squareys


Contributor

Squareys commented Jun 19, 2015

@mosra This is ready for review from my side.

@Squareys


Contributor

Squareys commented Jun 20, 2015

@mosra Don't merge yet!

My old PC supports GL 2.0 :P But I kinda got the Rift to work on my laptop, so I was able to test this. There is still a problem with the orientation/position. I believe it might be some left-handed/right-handed coordinate stuff, but it's almost too subtle for that. I will tell you once I've figured it out. From experience I can tell this will take quite a while, though.

* [EDIT] I think I got it. Rotation/translation might be applied in the wrong order. Flipping the arguments of the DualQuaternion multiplication in the conversion should do the job, right? Will try later.
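
For reference, a sketch of the two orders, with rotation and translation assumed to be already converted from the ovrPosef members to Magnum types:

    /* rotate first, then move to the tracked position (the usual pose convention) */
    DualQuaternion pose = DualQuaternion::translation(translation)*DualQuaternion{rotation};
    /* equivalent to DualQuaternion::from(rotation, translation) */

    /* flipped order: the translation is applied in the rotated (local) frame instead */
    DualQuaternion flipped = DualQuaternion{rotation}*DualQuaternion::translation(translation);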

@mosra


Owner

mosra commented Jun 20, 2015

It might, yeah (sorry again about my quaternion incompetence) :)

@Squareys


Contributor

Squareys commented Jun 20, 2015

@mosra Kinda embarrassed, since you actually had it correct, but I switched it while I was trying to get the test to work and forgot to switch it back.

I want to do one last thing for LibOvrIntegration (it seems the Oculus SDK creates some slightly special projection matrices). Is there a way to set the projection matrix of a camera directly? As far as I know there is only setPerspective() and setOrthographic().

@mosra


Owner

mosra commented Jun 20, 2015

Is there a way to set the projection matrix of a camera directly? As far as I know there is only setPerspective() and setOrthographic().

This has been on my TODO list for ages. I wanted to add a function that would allow you to set the projection matrix directly, but that would then break the near() and far() getters; it looks like there is no single way to obtain them from a 4x4 matrix that would work for both perspective and orthographic projections.

Researching more on this topic...

@Squareys


Contributor

Squareys commented Jun 20, 2015

@mosra I might be totally wrong, but an idea could be the following:

A projection matrix gives you normalized device coordinates, right? So applying the projection matrix to points on the far clipping plane should result in z values of +1 (and points on the near plane in -1). Applying the inverse of the projection matrix to those z values should give the far and near distances back.

This should also work for orthographic projection, except if the z value is just completely dropped, in which case there weren't any clipping planes to begin with.

Of course this is not efficient at all, though one could optimize it to only transform the z coordinate; also, this only needs to be done once when setting the projection matrix, right?
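
A quick numeric sanity check of that idea, using a perspective matrix with known values (the same unprojection also works for a standard orthographic matrix):

    /* some arbitrary projection with known near/far values */
    Matrix4 projection = Matrix4::perspectiveProjection(Deg(60.0f), 4.0f/3.0f, 0.01f, 100.0f);
    Matrix4 inverted = projection.inverted();

    /* NDC z = -1 is the near plane, z = +1 the far plane; unproject and divide by w */
    Vector4 nearPoint = inverted*Vector4{0.0f, 0.0f, -1.0f, 1.0f};
    Vector4 farPoint  = inverted*Vector4{0.0f, 0.0f,  1.0f, 1.0f};
    Float nearPlane = -nearPoint.z()/nearPoint.w();  /* ~0.01 */
    Float farPlane  = -farPoint.z()/farPoint.w();    /* ~100.0 */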

@mosra


Owner

mosra commented Jun 20, 2015

That's a clever idea, but it sadly breaks when the projection matrix is something more than just a simple projection: projection combined with rotation, oblique near-plane clipping, etc. A more general solution would be to provide getters for the clipping planes instead of near/far distances (some kind of Frustum class), but that's rather large stuff to do as an afternoon fixup :)

I instead decided to completely remove the near()/far() getters and provide just a general setProjectionMatrix() function. If the users need the properties, they can save them on their side, as they are the ones who are providing the projection matrix in the first place.

I need to do some remaining bureaucracy (API/header deprecation) and will push the change soon after.
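
Once that lands, the example side presumably boils down to something like this, with projection being the SDK matrix already converted to a Magnum Matrix4 and the near/far values kept by the application if it still needs them:

    Float nearPlane = 0.01f, farPlane = 100.0f; /* stored by the application now */
    camera.setProjectionMatrix(projection);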

@Squareys


Contributor

Squareys commented Jun 20, 2015

Okay, awesome :)

@mosra


Owner

mosra commented Jun 20, 2015

Huh. Done in mosra/magnum@73419be; in the end the changes were far larger than I expected. You also need the latest Corrade. Some code may now produce deprecation warnings (sorry!); see the commit message for details (or 2f07a25 to see the implications).

@Squareys


Contributor

Squareys commented Jun 20, 2015

far larger than I expected

Wow, indeed! Pretty neat, though, I'd say :) I will apply the changes on my side later this evening, which should conclude LibOvrIntegration and the example.

@Squareys Squareys force-pushed the Squareys:ovr-example branch from 7599767 to 86ea919 Jun 21, 2015

@Squareys Squareys force-pushed the Squareys:ovr-example branch from 86ea919 to f32d50e Jun 21, 2015

@mosra mosra merged commit f32d50e into mosra:master Jun 21, 2015

@mosra mosra added this to the 2018.02 milestone Feb 15, 2018
