Offline render back end : Learning route #32

Closed
nyue opened this issue May 12, 2022 · 3 comments

Comments

nyue commented May 12, 2022

Hi,

I'd like to learn more about developing a back end for ANARI, with a specific interest in targeting offline CPU renderers.

I have built ANARI and toyed around with the viewer.

My main development environment will be Ubuntu.

What example code should I start focusing on? I have watched the OSPRay back end development video from the ANARI Webinar 2022.

Cheers

@jeffamstutz
Contributor

I think the best option is to start with the sink device and build up an implementation from there. The sink device is a test implementation that robustly accepts any API call stream but intentionally does nothing useful. This gets some basics "off the ground" -- being able to load your library/device, mapping frame output data, etc. From there I would take a very simple ANARI application (such as anariTutorial) and progress through the API in roughly the following order (a condensed sketch of the resulting call stream follows the list):

  • base object lifetime (anariNew*(), anariRetain(), anariRelease())
  • object parameters/commits
  • ANARIFrame interface (parameters, map/unmap, correct pixel formats, etc.)
  • ANARICamera: basic perspective parameters
  • a hard-coded scene inside ANARIWorld (ignoring parameters) to get anariRenderFrame() set up
  • ANARIArray1D: get each array ownership model correct
  • arrays of objects: lock down the extra object lifetime concerns for arrays of ANARIObject
  • ANARISurface without instancing, with placeholder objects where possible (e.g. ANARIMaterial)
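
For reference, here is a condensed sketch of the kind of call stream a tiny application like anariTutorial produces, roughly matching the list above. Treat it as an outline rather than copy-paste code: it uses the public ANARI C API, and details such as the "sink" library name, anariCommitParameters() vs. the older anariCommit(), and the exact anariMapFrame() signature can vary between SDK versions.

```c++
#include <anari/anari.h>
#include <cstdint>
#include <cstdio>

int main()
{
  // Load the implementation library and create a device (start with "sink").
  ANARILibrary lib = anariLoadLibrary("sink", nullptr, nullptr);
  ANARIDevice dev = anariNewDevice(lib, "default");
  anariCommitParameters(dev, dev);

  // Basic perspective camera parameters.
  ANARICamera camera = anariNewCamera(dev, "perspective");
  float pos[3] = {0.f, 0.f, -2.f}, dir[3] = {0.f, 0.f, 1.f}, up[3] = {0.f, 1.f, 0.f};
  anariSetParameter(dev, camera, "position", ANARI_FLOAT32_VEC3, pos);
  anariSetParameter(dev, camera, "direction", ANARI_FLOAT32_VEC3, dir);
  anariSetParameter(dev, camera, "up", ANARI_FLOAT32_VEC3, up);
  anariCommitParameters(dev, camera);

  // A world whose contents can start out hard coded inside the device.
  ANARIWorld world = anariNewWorld(dev);
  anariCommitParameters(dev, world);

  ANARIRenderer renderer = anariNewRenderer(dev, "default");
  anariCommitParameters(dev, renderer);

  // Frame: size, color channel format, and the objects it ties together.
  ANARIFrame frame = anariNewFrame(dev);
  uint32_t size[2] = {1024, 768};
  ANARIDataType colorFormat = ANARI_UFIXED8_RGBA_SRGB;
  anariSetParameter(dev, frame, "size", ANARI_UINT32_VEC2, size);
  anariSetParameter(dev, frame, "channel.color", ANARI_DATA_TYPE, &colorFormat);
  anariSetParameter(dev, frame, "camera", ANARI_CAMERA, &camera);
  anariSetParameter(dev, frame, "world", ANARI_WORLD, &world);
  anariSetParameter(dev, frame, "renderer", ANARI_RENDERER, &renderer);
  anariCommitParameters(dev, frame);

  // Render and map the result.
  anariRenderFrame(dev, frame);
  anariFrameReady(dev, frame, ANARI_WAIT);
  uint32_t w = 0, h = 0;
  ANARIDataType pixelType = ANARI_UNKNOWN;
  const void *pixels = anariMapFrame(dev, frame, "channel.color", &w, &h, &pixelType);
  if (pixels)
    std::printf("mapped %u x %u pixels (type %d)\n", w, h, (int)pixelType);
  anariUnmapFrame(dev, frame, "channel.color");

  // Release everything (object lifetime is one of the first things to get right).
  anariRelease(dev, camera);
  anariRelease(dev, world);
  anariRelease(dev, renderer);
  anariRelease(dev, frame);
  anariRelease(dev, dev);
  anariUnloadLibrary(lib);
  return 0;
}
```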

That should get you to a basic image fairly quickly and give you an idea of what the object model expressed by the ANARI API looks like. From there it's a matter of building out the rest of the object types, and then the object subtypes where relevant. This will also give you an idea of how to read through other implementations to see what they do, such as the example device or a vendor implementation like VisRTX.
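
To make the "object model" point a bit more concrete, here is a hypothetical sketch of how a device might represent objects internally. It is not the SDK's actual helper code, just the general shape: every ANARI handle maps to an object that stages parameters from anariSetParameter() and bakes them into internal state on commit.

```c++
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Hypothetical device-internal base class; real devices differ, but the
// shape is usually similar.
struct Object
{
  virtual ~Object() = default;

  // anariSetParameter() lands here: stash the raw bytes until commit.
  // (In a real device the byte size is derived from the ANARIDataType.)
  void setParameter(const std::string &name, const void *mem, size_t size)
  {
    auto &bytes = m_params[name];
    bytes.assign(static_cast<const char *>(mem),
                 static_cast<const char *>(mem) + size);
  }

  // anariCommitParameters() lands here: derived types read the staged
  // parameters and update their internal state.
  virtual void commit() {}

protected:
  std::map<std::string, std::vector<char>> m_params;
};

// Example subtype: a perspective camera that bakes its parameters on commit.
struct PerspectiveCamera : Object
{
  void commit() override
  {
    auto get3f = [&](const char *name, float out[3]) {
      auto it = m_params.find(name);
      if (it != m_params.end() && it->second.size() >= 3 * sizeof(float))
        std::memcpy(out, it->second.data(), 3 * sizeof(float));
    };
    get3f("position", position);
    get3f("direction", direction);
    get3f("up", up);
  }

  float position[3] = {0.f, 0.f, 0.f};
  float direction[3] = {0.f, 0.f, 1.f};
  float up[3] = {0.f, 1.f, 0.f};
};
```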

In any case, I recommend dealing with some of the core problems first (object lifetimes, parameters/properties, arrays, etc.), then start working on the world object hierarchy one object at a time. Each of these problems is relatively "bite-sized", though there are quite a few things to go through in the end. I'm hoping to one day either write up blog posts or make YouTube videos walking through implementing ANARI, but alas those don't exist yet! 😅
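
For the object-lifetime piece specifically, a minimal sketch (again hypothetical, not the SDK's helpers) of the reference counting behind anariRetain()/anariRelease() could be as small as:

```c++
#include <atomic>

// Hypothetical intrusive reference count backing anariRetain()/anariRelease().
// Objects start with one reference owned by the anariNew*() caller; the device
// takes its own references (e.g. while an object is set as a parameter on
// another object or held in an array) so handles stay valid even after the
// application releases them.
struct RefCounted
{
  virtual ~RefCounted() = default;

  void retain() { m_refCount.fetch_add(1, std::memory_order_relaxed); }

  void release()
  {
    if (m_refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
      delete this;
  }

private:
  std::atomic<int> m_refCount{1};
};
```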

@jeffamstutz
Contributor

I'll also note that you can take the approach of connecting ANARI to another existing rendering system; this is the current approach used for the RadeonProRender and OSPRay devices. Getting the basics of objects, parameters, lifetimes, etc. right, as outlined above, is still important, but it's very reasonable to forward the "heavy lifting" of actual rendering to an existing rendering engine.
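
As a purely hypothetical illustration of that translation-layer idea (the "engine_*" API below is made up, standing in for whatever engine you target), each ANARI object just owns a backing object in the existing renderer and forwards its committed parameters to it:

```c++
// Hypothetical handles/calls of an existing rendering engine; replace these
// declarations with your engine's real API (OSPRay, RadeonProRender, ...).
struct EngineCamera;
EngineCamera *engine_create_perspective_camera();
void engine_set_vec3(EngineCamera *cam, const char *name, const float value[3]);

// ANARI-side camera object: it stores committed ANARI parameters and forwards
// them to the backing engine object, which does the actual rendering work.
struct ForwardedCamera
{
  ForwardedCamera() : m_backing(engine_create_perspective_camera()) {}

  // Called from the device's anariCommitParameters() path.
  void commit()
  {
    engine_set_vec3(m_backing, "position", position);
    engine_set_vec3(m_backing, "direction", direction);
    engine_set_vec3(m_backing, "up", up);
  }

  float position[3] = {0.f, 0.f, -2.f};
  float direction[3] = {0.f, 0.f, 1.f};
  float up[3] = {0.f, 1.f, 0.f};

private:
  EngineCamera *m_backing = nullptr;
};
```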


nyue commented May 30, 2022

Thank you @jeffamstutz

@nyue nyue closed this as completed May 30, 2022