Added backburner and mental ray to previous work
Jhonnyg committed May 10, 2011
1 parent e017473 commit 884aeaa
Showing 2 changed files with 26 additions and 2 deletions.
26 changes: 25 additions & 1 deletion Lightning/report/tex/chapters/chapter1.5_previous.tex
@@ -6,7 +6,7 @@ \section{BOINC}

Differences with BOINC and the proposed project:
\begin{itemize}
\item In BOINC, the scientists are the end user. They provide the BOINC network
a problem to solve, and they get the results. In the proposed project,
the user acts like both worker and job initiator. It’s more of a
give-and-take philosophy.
@@ -22,3 +22,27 @@ \section{BOINC}
that there is no direct interaction between two clients.
\end{itemize}

\section{Backburner}
% http://images.autodesk.com/adsk/files/backburner20100.2_user_guide.pdf

Autodesk Backburner is a job management system for distributed rendering that works with several products in the Autodesk suite, such as 3D Studio Max and Maya. The Backburner architecture comprises two main components, Backburner Manager and Backburner Server. A user with a creative application that supports the Backburner interface sends jobs to the Backburner Manager, which distributes these jobs as a set of tasks to its connected Backburner Servers. The manager keeps track of the network topology and stores a database of the current job states on its servers. A Backburner Server is used together with adapters and processing engines; what type of job a server is capable of executing depends on which adapters and processing engines are installed on it.
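The manager/server split described above can be sketched as a minimal dispatcher that hands tasks only to servers with a matching processing engine. This is a hypothetical illustration of the idea, not Autodesk's actual protocol; all names (`RenderServer`, `dispatch`) are invented for this sketch.

```python
# Sketch of a Backburner-style manager: distribute tasks round-robin
# among servers that have the required processing engine installed.
# All names here are hypothetical, not part of any Autodesk API.

class RenderServer:
    def __init__(self, name, engines):
        self.name = name
        self.engines = set(engines)  # processing engines installed on this server

    def can_run(self, job_type):
        return job_type in self.engines

def dispatch(jobs, servers):
    """Assign each (job_type, task) pair to the next capable server."""
    assignments = []
    i = 0
    for job_type, task in jobs:
        capable = [s for s in servers if s.can_run(job_type)]
        if not capable:
            continue  # no server has the required adapter/engine
        assignments.append((capable[i % len(capable)].name, task))
        i += 1
    return assignments

servers = [RenderServer("node-a", {"maya"}),
           RenderServer("node-b", {"maya", "3dsmax"})]
jobs = [("maya", "frame-1"), ("3dsmax", "frame-2"), ("maya", "frame-3")]
assignments = dispatch(jobs, servers)
```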

Backburner is similar to Lightning in several ways:
\begin{itemize}
\item Both use an interface solution through which creative applications submit jobs.
\item Both support different processing engines.
\item Both support batch rendering / group rendering.
\end{itemize}

The main differences are:
\begin{itemize}
\item Backburner requires dedicated network administration.
\item Backburner targets render farms, which not every user has access to.
\item Backburner jobs are controlled through a monitor application that can pause, stop, and otherwise manage jobs.
\item A dedicated network is more reliable than an ad-hoc one.
\item Backburner can schedule clients to only work during certain hours or dates.
\end{itemize}

\section{Mental Ray}

%http://www.kxcad.net/autodesk/3ds_max/Autodesk_3ds_Max_9_Reference/distributed_bucket_rendering_rollout_mental_ray_renderer.html

Mental Ray is equipped with distributed rendering capabilities similar to Lightning: a frame can be divided into buckets and rendered on remote machines using Autodesk Backburner. However, Mental Ray requires the user to know the intended hosts on which the server software is running. This demands substantial network administration and is best suited for dedicated render farms, something a hobbyist or student often does not have access to.
2 changes: 1 addition & 1 deletion Lightning/report/tex/chapters/chapter1_introduction.tex
@@ -7,7 +7,7 @@ \chapter{Description}
% - Generic approach, but with specific implementation
% - Academic study of relationship between network distribution cost vs. running on one computer?

The idea of this project is to build a two-part prototype, capable of distributing computational data over a peer-to-peer (P2P) network, and retrieving the results within a feasible time frame. The first part deals with communication between remote clients in a decentralized network, with focus on fast data transfer rather than handling large amounts of data. The protocol will primarily be used to synchronize clients in the network to collectively share the computational workload of a task.

The second part of the project consists of a naïve ray tracer that can subdivide a scene into smaller jobs (for example screen regions / blocks), and then process them with little to no dependence on each other or the rest of the scene. Each job is automatically distributed over the network, and the results should be sent back to the job initiator as soon as each job finishes.
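The subdivision of a frame into independent block jobs described above can be sketched as follows. This is a minimal illustration under assumed parameters (frame size, bucket size, and the `(x, y, w, h)` job format are inventions of this sketch, not the project's actual implementation):

```python
# Hypothetical sketch: split a frame into independent rectangular
# bucket jobs that can be rendered on different clients.

def make_buckets(width, height, bucket):
    """Return (x, y, w, h) regions covering a width x height frame."""
    jobs = []
    for y in range(0, height, bucket):
        for x in range(0, width, bucket):
            # Edge buckets are clipped to the frame boundary.
            jobs.append((x, y, min(bucket, width - x), min(bucket, height - y)))
    return jobs

buckets = make_buckets(640, 480, 256)
```

Each resulting region depends on nothing but the shared scene description, so the jobs can be dispatched to peers in any order.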

