Blender NLE NextGen

Troy James Sobotka edited this page Dec 30, 2018 · 21 revisions

A collection of thoughts regarding the current state of the VSE and musings regarding potential directions for the future. Concrete design direction thoughts and justifications are located here.

Pros of Current VSE Implementation

It Works

  • Limps along in a somewhat manageable fashion in a small production environment. Arguably best in breed among open source editing options. By "works":
    • One can perform an assembly, rough cut, and work to a final cut all on proxies. With a flip, masters can be generated in applicable formats such as 32 bit EXRs, and post production work can commence.
    • Decent stability assuming codec woes are avoided.
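The proxy-to-master "flip" amounts to a path remap from lightweight work prints to full-quality frames. A minimal sketch, assuming a hypothetical naming scheme (`proxies/shot010_0001.jpg` alongside `masters/shot010_0001.exr`), which is illustrative and not the VSE's actual proxy layout:

```python
import re

def flip_to_master(proxy_path, master_dir="masters", master_ext=".exr"):
    # Hypothetical naming scheme: 'proxies/shot010_0001.jpg' maps to
    # 'masters/shot010_0001.exr'. Purely illustrative.
    m = re.match(r".*/(?P<stem>\w+_\d{4})\.\w+$", proxy_path)
    if m is None:
        raise ValueError(f"unrecognised proxy path: {proxy_path}")
    return f"{master_dir}/{m.group('stem')}{master_ext}"

print(flip_to_master("proxies/shot010_0001.jpg"))  # masters/shot010_0001.exr
```

The point is that the edit itself never touches the heavy frames; only the final conform does.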

Industrial File Format Support

  • Has roughly 75% support for EXR, and roughly 40% for DPX, which is mostly broken. Missing details include metadata support and similar minor items. DPX handling is hard coded and requires a fully OCIO enabled loading path; EXRs ignore fundamentally mandatory colour transforms, etc.

Treats Frames as First Class Citizens

  • Series of still frames are treated as first class citizens, allowing for scrubbing of various still image formats.
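A rough sketch of what treating frames as first class citizens entails: recognising numbered stills as a single scrubbable clip. The filename pattern and extension list here are assumptions for illustration:

```python
import re
from collections import defaultdict

FRAME_RE = re.compile(r"^(?P<base>.*?)(?P<frame>\d+)\.(?P<ext>exr|dpx|png|tif)$")

def group_sequences(filenames):
    # Map (base name, extension) to the sorted frame numbers found, so
    # 'a_0001.exr', 'a_0002.exr', ... become one scrubbable sequence.
    seqs = defaultdict(list)
    for name in filenames:
        m = FRAME_RE.match(name)
        if m:
            seqs[(m.group("base"), m.group("ext"))].append(int(m.group("frame")))
    return {key: sorted(frames) for key, frames in seqs.items()}

print(group_sequences(["a_0001.exr", "a_0002.exr", "b_0010.png"]))
# {('a_', 'exr'): [1, 2], ('b_', 'png'): [10]}
```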

Splines and Animation

  • Solid animation integration with FCurves and like details for blocking in effects, transitions, and other elements.
  • Many values are exposed for animation, with a potentially complex approach for layering modifications to the animation curves.
  • FCurves are copy / pasteable which helps for transferring a blocked in effect for reassembly.

Creative Design

  • Decent drag and drop shuffling of strips with multi layer / channel paradigm for creative manipulations.
  • Very creative for experimenting with edits.
  • Rapid table-top styled creativity for shuffling strips around and experimenting.

Cons of Current VSE Implementation

Broken Pixel Handling

  • Horrible eight bit code paths.
  • Broken nonlinear reference hack.
  • Disconnect between high quality “online” in compositor results versus lower quality “offline” results in VSE.
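The cost of the byte code paths can be shown in a two-line quantisation example: nearby float values collapse to the same byte, and anything above 1.0, i.e. scene-referred data, is clipped outright.

```python
def to_uint8(value):
    # Quantise a float to an unsigned byte, clipping to [0, 255] as
    # any byte-based pipeline must.
    return max(0, min(255, round(value * 255)))

def from_uint8(byte):
    return byte / 255.0

# Two distinct float values collapse to the same byte...
assert to_uint8(0.5001) == to_uint8(0.5013) == 128
# ...and scene-referred values above 1.0 are clipped outright.
assert to_uint8(4.2) == 255
```

No amount of cleverness downstream can recover what the quantisation discarded, which is the root of the online/offline quality divide discussed below.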

Historical Conventions

  • Conventional editing paradigms are missing, such as in/out point selection and ripple, roll, etc. edits. Tied deeply to the architecture of a singular playhead.
  • May clash with above table-top style of design pattern. Research needed here.
  • Some interface elements can be obfuscated or behave in unexpected ways. These would seem to be minor changes.

Offline or Online?

  • Suffering from an identity crisis. Is it an online editor? Is it an offline editor?
  • Have there been any successful online editors? Avid and Final Cut Pro only form a small portion of a larger ecosystem that is rounded out by software such as Nuke, Maya, Hiero, Houdini, ProTools, etc. In a disproportionate majority of efforts that deal with cinema / animated productions, nothing more than text files are utilized from the NLEs.
  • Have there been any grassroots success stories from less commercial companies, with fewer developers dedicated to the effort?
  • Is the inherent strength of an online system, the ability to edit swiftly and make creative decisions based on timing, dissolved when such a tool attempts to reach for larger influence? Have-cake-and-eat-it-too naivety.
  • The idea of an always-online NLE is rooted in the misguided assumption that computing power will eventually catch up. However, Blinn's Law holds fast. Frequently, always-online NLEs exist as entry points for other ecosystem components for the companies that offer them, as opposed to precise tools for existing within a larger environment.
  • "But these feature films, even the low budget ones, all used a traditional offline workflow that involved a handoff from the editors to a separate finishing team at another facility." https://blog.frame.io/2018/03/05/oscar-2018-workflows/

Forked Code Paths

  • Lack of reuse of quality imaging components that are largely found within the compositor. The division between the need for performance for real-time preview using unsigned bytes versus the need for quality via thirty two bit float is a deep divide.
  • Code for the VSE needs to be custom coded for the lesser quality unsigned char representation in many cases. Stems from identity crisis above.

Lack of Interchange

  • Lack of ability to get edited blueprints out of the VSE and into the compositor for audio work, grading, finishing, titling, tracking, visual effects work, etc. See interchange subject in adjacent wiki space.
  • Likely stems from identity crisis.
  • Interchange with other applications. Currently cumbersome at the frame level, and impossible at an abstraction above that, for example, per shot grading in an external application.

Lack of SMPTE Timecode

  • The ability to sync off-camera audio for cutting is made more cumbersome without timecode, even as a standalone editor.
  • Huge need for timecode if interchange is considered. Not to be confused with timestamps related to codec decoding / encoding.
  • Timecode is not complex, but merely a minor bump of effort in the greater picture.
  • libltc appears to be robust enough to handle this.
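Non-drop-frame timecode really is a minor bump of effort; the part that needs care is drop-frame 29.97, with its ';' separator and per-minute frame skipping. A minimal non-drop-frame sketch:

```python
def frames_to_timecode(total_frames, fps=24):
    # Non-drop-frame HH:MM:SS:FF. Drop-frame (29.97) needs extra logic
    # and a ';' separator; this sketch omits it.
    frames = total_frames % fps
    seconds = (total_frames // fps) % 60
    minutes = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def timecode_to_frames(tc, fps=24):
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

print(frames_to_timecode(86501))  # 01:00:04:05 at 24 fps
assert timecode_to_frames(frames_to_timecode(86501)) == 86501
```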

Mixed Bag of Codec Issues (FFMPEG)

  • Slippage and sync with certain codecs. Nothing to do with Blender, but more the larger picture of complexity of codecs and FFMPEG.
  • Again tied to the above identity crisis. A baseline codec could be enforced to avoid sync and slippage, at the cost of an 'import' phase.
  • Despite having a File Browser editor space, interaction with clips is limited. Suggestions of media bins and like discussions pose deeper bodies of work.

What is the VSE?

It would be a dire crime to force an application to be something that it is not. So what is the VSE in relation to the projects it has contributed to?

  • Big Buck Bunny and Sintel are examples of a traditional 3D animation project, complete with labyrinthine complexity of effects and post work.
  • Tears of Steel is an example of a motion picture visual effects project, also complete with labyrinthine complexity per shot. There are common needs within these scenarios:
    • Per shot grading at the finishing phase.
    • High quality image manipulations.
    • Audio syncing and development in external applications.
    • Rapid evaluation of edits via a traditional offline proxied system.

The Great Debate: Offline versus Online

It is this author's firmest belief that Blender's current design scope is targeting sophisticated animation, motion, and graphics work. Where extended effort and quality are desired, offline systems provide the greatest level of fluidity for interaction. Edit on work prints, and perform the heavy lifting on pristine, deep bit depth archival frames.

An entirely online system is also not without merits, but it is likely beyond the design direction Blender is evolving in. Newsrooms and in-the-field news types, where speed to output is a critical factor, might find an offline system cumbersome. Further, the needs of such creators are vastly more limited with regard to deep images and complex visual needs.

  • Hardware never catches up. Just when hardware becomes acceptable for GenerationA needs, GenerationB needs have evolved. A concrete example in visual effects might be deep compositing. Just as 32 bit float processing has become a solid reality, talks of deeper bit depths or additional channels have entered into view.
  • GPUs have improved significantly, and yet there has been zero uptake in many pipelines. Why?
  • Often bit depths are limited.
  • Often chipsets are implemented differently. If you render frames on a render farm, even the slightest change between frames is detectable, let alone concurrent frame rendering across machines / chips.
  • The bleeding edge of quality such as tetrahedral interpolation in OpenColorIO only currently runs on CPU.
  • OpenCL is an interesting point here as compared against traditional GPU renderings, as it permits the ability to run code on top of the iron. That said, low level rounding differences or like architectural decisions would need to be tested.
  • Canned effects. The all-in-one systems rely on plugins and other higher level approaches. Not only does this lead to a deluge of repeated looks and effects; it is also limiting to a creator.
  • Nodal compositing swept in and won't likely be leaving soon. It allows the artist to control, with great granularity and quality, the nature of effects and post production work.
  • To expand an all-in-one online system to meet such a need often requires a vast reworking of architecture.
  • Scene-referred linear models versus display-referred and display-linear models.
  • Maintaining a scene referred linear model is of huge value for quality, including proper color mixing which is a vital aspect of dissolves and fades, in addition to all visual effects work. To balance speed for editing versus quality, how are online systems handling this transition? Is your data a no-operation between the states?
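The dissolve question can be made concrete with a toy transfer function, here a bare 2.2 power curve standing in for a real display transform: mixing display-encoded values directly gives a different, darker result than mixing in scene-linear and encoding afterwards, so the two states are emphatically not a no-operation apart.

```python
def encode(x):
    # Simplistic 2.2 power curve as a stand-in for a display transform.
    return x ** (1 / 2.2)

# A 50% dissolve between black (0.0) and full white (1.0):
a_lin, b_lin = 0.0, 1.0

# Mix in scene-linear, then encode for display (radiometrically correct).
linear_mix = encode((a_lin + b_lin) / 2)           # ~0.730

# Mix the display-encoded values directly, as byte pipelines tend to.
display_mix = (encode(a_lin) + encode(b_lin)) / 2  # 0.5

print(round(linear_mix, 3), display_mix)
```

The discrepancy applies to every fade, dissolve, and blur, not just this contrived pair of values.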

How does locating identity help the VSE?

By defining the role clearly, certain design decisions are answered without any sort of flame war or diatribe on mailing lists or forums.

  • Visual effects in the VSE would be limited to the quality sufficient to pass along editorial needs and, in terms of data, to block in some loose curves from which to start post production work, effectively rendering finished effects in the VSE moot.
  • Support for one light grades.
  • Support for fades, cuts, and dissolves.
  • Support for rough and quick text overlays indicating missing slugs, actions, or descriptive text at offline quality levels.
  • Support for interactive and swift blocking of polygons, shapes, or images to communicate visual effects, animation points, or other needs briskly.
  • Support for the basic canned types in CMX3600 or FCPXML.
  • Limits the scope of VSE input codec issues. Insist on an internal and stable codec for ingestion. The community could extend the input path via simple scripts harnessing FFMPEG, LibAV, or customized needs.
  • Limits the scope of VSE output codec issues. Insist on a few clearly defined and historically accepted formats. While DPX is a likely choice, the ability to output to standard historical codecs can focus limited developer resources on fleshing out these formats with solid granularity. Targets might include DNxHD and ProRes.
  • Focus on key needs with regards to interchanges such as timecode and basic format support such as CMX3600 for input and output.
  • Clear design goals dismiss flame wars that tap energy.
  • Speeds up artist workflow by clearly outlining the role of the VSE in a larger pipeline view.
  • Clearly divides between process needs. VSE NextGen could expand to encompass some of the grading needs, as the per shot format works exceptionally well for such a need, or interface with a custom nodal grading view that maps onto the nodal view. In this way, the VSE would evolve from an isolated element toward a “strip view” of a shot list.
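Even the interchange side of this role is mostly plain text. A sketch of one CMX3600-style cut event; the real format has strict column widths, transition codes, and comment conventions well beyond this toy formatter:

```python
def edl_event(num, reel, src_in, src_out, rec_in, rec_out, track="V"):
    # One CMX3600-style cut ('C') event line. Column spacing here is
    # approximate, for illustration only.
    return (f"{num:03d}  {reel:<8s} {track:<4s} C        "
            f"{src_in} {src_out} {rec_in} {rec_out}")

print("TITLE: DEMO CUT")
print(edl_event(1, "REEL01", "01:00:00:00", "01:00:04:05",
                "00:00:00:00", "00:00:04:05"))
```

This is precisely the "nothing more than text files" handoff described above: the NLE's deliverable is a blueprint, not finished pixels.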

How does committing to an offline system aid an artist?

A gathering of tidbits that relate to some general rules of thumb in post production.

  • Avoid tampering with archival frames unless absolutely required. Offline systems preserve archival renders or frames at maximum quality.
  • Does a codec mangle your data? How? Offline systems prevent codec mangling.
  • Does a file format mangle your data? Offline systems prevent file mangling.
  • Is control and integrity of the data maintained at all times? Knowing that a file is untouched asserts integrity.
  • When touching archival frames, try to do so in a fashion that is as least invasive as possible. An offline system preserves this tenet in that an artist is acutely aware when they are working on the raw frames.
  • Avoid heavy lifting until required. An offline system avoids wasted effort in that the heavy data archival frames are only rendered or ingested after commitment from the editorial.
  • Considering that a typical feature may have 1300-1500 shots, the original volume of footage shot is much greater. Processing every single piece of footage with heavy lifting tools is redundant effort, given that the vast majority will end up on the cutting room floor. Offline systems observe this and avoid waste.
  • To this end, an offline system allows an editor to rapidly get to cutting and skip dealing with raw formats, bit depth, etc. until picture lock and pushing along the pipeline. Minor deviations may occur of course, but this doesn't impact the great savings of time and processing.

Links

Many of the more solid links to post production pipelines have apparently been lost to bitrot. If anyone has some, please contribute them.
