
Ongoing design goals #6

Open
5 of 6 tasks
eliemichel opened this issue Oct 15, 2020 · 18 comments
Labels
discussion Discussion about design or Q/A rather than an actual issue

Comments

@eliemichel
Owner

eliemichel commented Oct 15, 2020

This started as a follow-up to issue #27 of the Blender host, from which a lot of relevant insights and ideas emerged.

Already integrated:

  • (2B) Add a kOfxMeshPropNoEdge boolean property turned on by default, that one must turn off when building a mesh with loose edges.
  • (2C) Add a kOfxMeshPropConstantFaceCount property type to meshes to skip the use of the kOfxMeshAttribFaceCounts buffer when it is not -1.
  • (2D) Add a kOfxMeshEffectPropIsDeformation boolean property turned off by default that one can turn on to speed up processing when the effect does not affect connectivity.
  • (B) Add a way for effects to request attributes from the host
  • Add semantics (kOfxMeshAttribPropSemantic) to attributes in order to flag their usage (UV, weight group, etc).

Summary of the proposals:

Unsolved issues:

  • How to list the available attributes in a mesh

Rejected:

  • (2E) Introduce arbitrary edge attributes (kOfxMeshAttribEdge) with no such attribute by default

A solution for (B) would indeed take place during the Describe action. And this is needed beyond just connectivity: an effect must be able to tell whether it needs UVs or not, or this or that specially named attribute, etc. It would enable sending only the relevant attributes, and also let the Host's UI tell the user about important missing attributes. I imagine this as a set of kOfxInputProp* maybe.


@tkarabela Regarding your issue (A), there's this little-advertised fact: it is already possible to point the output at the same buffer as the input, for unchanged attributes, and in particular for connectivity information when the effect is only a modifier.

About the "knowing the exact edge count in mixed cases" problem, do you believe that iterating once over the faceCounts attribute prior to allocating the mesh introduces too much overhead? I think that's how many plugin writers would fill the loose edge count property anyway.
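The extra traversal in question can be sketched as a single loop. This is an illustrative helper, not part of the spec; it assumes loose edges are encoded as 2-vertex faces in the faceCounts buffer, as discussed in this thread:

```cpp
#include <cassert>
#include <vector>

// One pass over the faceCounts buffer, where a loose edge is a "face"
// with a count of 2. The total vertex count comes out as a by-product,
// so the extra traversal is a single O(faceCount) loop.
struct MeshCounts {
    int looseEdgeCount;
    int vertexCount;   // sum of all face counts
    int edgeCount;     // face edges + loose edges
};

MeshCounts countEdges(const std::vector<int>& faceCounts) {
    MeshCounts c{0, 0, 0};
    for (int n : faceCounts) {
        c.vertexCount += n;
        if (n == 2) ++c.looseEdgeCount;  // 2-vertex "face" = loose edge
    }
    // An n-gon contributes n edges, while a loose edge contributes
    // 2 vertices but only 1 edge, hence the subtraction.
    c.edgeCount = c.vertexCount - c.looseEdgeCount;
    return c;
}
```

For a triangle, a quad and one loose edge, this yields 9 vertices, 1 loose edge, and 8 edges total, matching the kOfxMeshPropEdgeCount == vertexCount - loose_edge_count identity proposed below.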

@eliemichel eliemichel added the discussion Discussion about design or Q/A rather than an actual issue label Oct 15, 2020
@tkarabela
Contributor

[From previous discussion] (A) How do I tell the host that it should copy connectivity from input mesh (kOfxMeshMainInput?) to output (kOfxMeshMainOutput). [...] In terms of API, it looks kinda like kOfxMeshEffectActionIsIdentity to me (kOfxMeshEffectActionIsDeformation?), even specifying which input should be routed to output.

Regarding your issue (A), there's this little-advertised fact: it is already possible to point the output at the same buffer as the input, for unchanged attributes, and in particular for connectivity information when the effect is only a modifier.

Oh, kind of "attribute forwarding", I can see that being useful! :) In terms of API, it might be a minor issue that by the time the output mesh is inputReleaseMesh()-ed, the mesh with the forwarded attribute may have already been inputReleaseMesh()-ed before, as the order of releasing is undefined. For kOfxMeshAttribPropIsOwner == true, that sounds like trouble, but the inputs should ideally be proxied anyway, and I assume most of the time all host input buffers outlive cooking time. Still, I think it should be addressed in the spec in some way (while mentioning that "attribute forwarding" is a thing and that it's encouraged).
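The ownership concern above can be illustrated with a minimal model (the `Attribute`, `forward` and `release` names are made up for this sketch, not actual API): forwarding shares the buffer pointer without taking ownership, so release order stops mattering as long as the host keeps input buffers alive through cooking.

```cpp
#include <cassert>
#include <vector>

// Minimal model of "attribute forwarding": an attribute either owns its
// buffer or merely points into a buffer owned by another mesh.
struct Attribute {
    std::vector<float>* data = nullptr;
    bool isOwner = false;
};

// Forwarding an input attribute to the output: share the pointer, do NOT
// take ownership, so releasing the output never frees the input's buffer.
Attribute forward(const Attribute& input) {
    Attribute out;
    out.data = input.data;
    out.isOwner = false;
    return out;
}

void release(Attribute& attr) {
    if (attr.isOwner) delete attr.data;  // only the owner frees
    attr.data = nullptr;
}
```

If the forwarded attribute had isOwner == true on the output side instead, releasing input and output in either order would free the same buffer twice, which is exactly the trouble case mentioned above.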

About the "knowing the exact edge count in mixed cases" problem, do you believe that iterating once over the faceCounts attribute prior to allocating the mesh introduces too much overhead? I think that's how many plugin writers would fill the loose edge count property anyway.

Since we want (2E), I feel that counting only loose edges would be confusing. Let me propose (2F): add an integer property kOfxMeshPropEdgeCount, the number of edges in a mesh (both face edges and loose edges).

This is equivalent to stating the number of loose edges, since kOfxMeshPropEdgeCount == kOfxMeshPropVertexCount - loose_edge_count, but seems more meaningful as it's also the number of elements in kOfxMeshAttribEdge attributes (mirroring what we have for point/vertex/face attributes). Having (2F) would supersede (2B).

Without (2F), the host has to overallocate kOfxMeshAttribEdge attributes, unless face counts are "forwarded". In other cases (allocating native mesh on host or effect side which does not use 2-vertex faces, eg. Blender or VTK), it can be solved with a simple extra traversal of face counts which likely isn't too expensive.

I imagine it could work this way:

  • host, preparing input mesh - either pays the price of extra traversal or gets edge count from host (for Blender, I think we can)
  • effect, receiving input mesh - if edge count is needed, it's available, no extra traversal
  • effect, requesting output mesh - either pays the price of extra traversal or it "knows" (for pure pointcloud/wireframe/poly meshes, it always "knows", other common case may be "the same number of loose edges as input")
  • host, receiving output mesh - if edge count is needed, it's available, no extra traversal

In the end, it's not that big of a deal; for me the reasons to have (2F) are mostly:

  • kOfxMeshPropEdgeCount fits well with kOfxMeshAttribEdge in the API, while kOfxMeshPropNoEdge looks like a one-off flag
  • if the host can get edge count cheaply, it should pass it to the effect

We could make setting kOfxMeshPropEdgeCount before calling output meshAlloc() optional, meaning the same as kOfxMeshPropNoEdge == true (ie. assuming kOfxMeshPropEdgeCount == kOfxMeshPropVertexCount). For optimization, the crucial thing is knowing the kOfxMeshPropNoEdge property - it may not be too much more difficult to ask the effect "zero or how many?" instead of "zero or some?" loose edges.

@tkarabela
Contributor

I tried looking at how Blender's native mesh representation looks, and it's a different story than the edge proposals for Open Mesh Effects so far; it looks like faces have distinct vertices but shared edges. This makes "face" edges harder to handle, and to have edge attributes we would need to define the order of edges and which vertices they connect. This makes me wary of (2F) and of bringing edge attributes into the spec at all, until the use case is clearer.

Or I might be missing something obvious, I'm not well versed in 3D graphics implementations...

(Screenshot: Blender's native mesh data structures, 2020-10-16)

@eliemichel
Owner Author

In terms of API, it might be a minor issue that by the time the output mesh is inputReleaseMesh()-ed, the mesh with the forwarded attribute may have already been inputReleaseMesh()-ed before, as order of releasing is undefined.

Indeed, we'll have to remember to sort this out.

Still, I think it should be adressed in the spec in some way (while mentioning that "attribute forwarding" is a thing and that it's encouraged).

Added a note in the doc about attribute forwarding: https://github.com/eliemichel/OpenMeshEffect/commit/44c3fc623832c2f347198d1c3f03d8a5a38c7753

This makes "face" edges harder to handle and to have edge attributes, we would need to define the order of edges and what vertices do they connect.

Annoying indeed. :/ Maybe we need to check out how other software handles this, like Maya's API (its C++ API is quite good iirc).

@tkarabela
Contributor

Maybe we need to check out how other software handles this, like Maya's API (its C++ API is quite good iirc).

I looked into some of the APIs, here's what I gathered:

Maya

3DS Max

Cinema4D

Universal Scene Description format

It seems to me that edge representation is motivated by efficient mesh traversal, not just storing per-edge data. Personally I quite like the current point/vertex/face representation (à la Houdini or USD), I think it's easy to work with and agnostic w.r.t. plugin/host implementation details. Committing to a particular edge representation seems to add complexity and increase impedance mismatch when converting to/from OFX mesh. The Maya model doesn't look bad, but I'd rather just generate faces/vertices and not have to worry about edges and any deduplication that may be required.

Regarding edge attributes themselves, it would be handy if "an effect that preserves topology also preserves any edge attributes" (this can be a host implementation detail); however, I don't know of a particular use case for them in an effect which couldn't be solved with vertex/face attributes. (In the special case of loose edges, we already have "loose edge attributes" in the form of face attributes.)


As for the other proposals, can we add

  • (2B) Add a kOfxMeshPropNoEdge (or perhaps kOfxMeshPropNoLooseEdge to be crystal clear?) boolean property turned on by default, that one must turn off when building a mesh with loose edges.
  • (2C) Add a kOfxMeshPropConstantFaceCount property type to meshes to skip the use of the kOfxMeshAttribFaceCounts buffer when it is not -1.

to the specification? :) I'm preparing a patch for the Blender host to fix loose edge issues I've encountered with MfxVTK (eg. Blender->OFX doesn't pass loose edges currently) and this would make sense to implement at the same time. I think it's independent of the edge representation issue above (ie. (2B), (2C) make sense in the current spec as well as in a hypothetical future spec with explicit edges separate from faces).

@eliemichel
Owner Author

I have officially added (2B) and (2C) in bef660d
Thanks once again for your involvement! For edges, let's stick for now to "if one wants edge attributes, they explicitly create 2-vertex faces on top of existing faces to store them, even though it means also allocating memory for existing faces".

@eliemichel
Owner Author

eliemichel commented Oct 25, 2020

Regarding (2D), it must not be an action. It is an action to check whether an effect with a given valuation of its parameters is the identity, but for deformers we need to know whether the effect is intrinsically a deformer, whatever values its parameters have (deformers for only some parameter valuations are handled by attribute forwarding). So this must actually be a property of the effect. I suggest kOfxMeshEffectPropIsDeformation (set to false by default).

(edited the initial message accordingly)

@tkarabela
Contributor

Great! :) With (2B) and (2C) in the spec, I will prepare a patch for the Blender host to support them, as well as address any remaining issues with loose edge meshes, if that's okay? (I've tried the latest binaries of OFX Blender and MfxVTK and it seems that it "just works" now, but I want to check in a debug build as well.)

Ah, I see. Yes, the deformation should be declared early, at describe time. (2D) kOfxMeshEffectPropIsDeformation with default false sounds just right :)

After these optimization opportunities and edge handling, another big one is (B) attributes. These are some of my thoughts:

  • From work, I have experience with scientific/engineering visualization - having some surface/volumetric data with per-point or per-element attributes (scalar/vector/tensor) and processing them. The software used for that doesn't really care what the data is supposed to be, beyond the fundamental data type (eg. you can integrate a path through a vector field, but not through a scalar field).
  • However, when looking at the APIs of Blender and other traditional 3D apps, it seems that attribute handling is much more tailored to specific use cases (ie. you have "vertex colors" and "vertex weights" and "UV maps" which all have specific restrictions, and they are not interchangeable/convertible, though at the end of the day they are all int/float attributes of the same thing).
  • This leads me to the idea of tagging attributes with "intent" (like OpenFX can hint that a float parameter is distance/angle/etc.), which would make it easier for the host to present suitable UI, especially since the different attributes can be stuffed into different namespaces in the host, like Blender does it. The effect would hint that it expects a "UV map" attribute on input, etc.
  • Perhaps it's not entirely necessary, since 2 x float is likely a UV map, 3 x int8 is likely RGB color, etc.
  • Like with non-polygonal meshes, MfxVTK looks like a good testing vehicle for this :D For example, the "Sample points (volume)" effect computes "depth" of the points with respect to mesh surface. It would be cool to use this with instancing, using this "depth" in shader. However, as far as Blender:
    • It seems that most useful/accessible stuff (colors, UVs) is defined for vertices, not points
    • Even if I had eg. vertex colors, native object instancing still wouldn't pass the data to the instances...
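The "guess from shape" idea in the list above could be sketched as a fallback heuristic (illustrative function, not spec API; `BaseType` and the returned labels are made up for this sketch):

```cpp
#include <cassert>
#include <string>

// Hypothetical heuristic: guess an attribute's "intent" from its
// component count and base type alone. Only a fallback; an explicit
// semantic tag is more reliable.
enum class BaseType { Float, Int8 };

std::string guessSemantic(int componentCount, BaseType type) {
    if (type == BaseType::Float && componentCount == 2) return "uv";
    if (type == BaseType::Float && componentCount == 3) return "vector";  // normal? color? position?
    if (type == BaseType::Int8  && componentCount == 3) return "color";
    if (type == BaseType::Float && componentCount == 1) return "weight";
    return "unknown";
}
```

The ambiguous 3-float case (normal vs. color vs. position) is exactly why an explicit "intent" tag, as proposed above, is the safer design.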

@eliemichel
Owner Author

I was thinking about making some attribute names conventional. This is inspired by Houdini, which tries to be as agnostic about attributes as your dataviz tools are (P is the attribute for position, N for normals, Cd for color, psize for point size, etc.). This is already how it works for point position, vertex-point association and face counts: kOfxMeshAttribPointPosition, kOfxMeshAttribVertexPoint and kOfxMeshAttribFaceCounts are standardized attribute names.

In Blender, these are called "layers" but it is quite similar. Layers that have a specific meaning, like UVs, are accessible twice iirc: once as agnostic float2 layers, and once flagged as UVs. General purpose layers are not advertised anywhere in the UI but are accessible, for instance, in Python through the BMesh module. So far the Blender implementation does not use them and only looks at UVs specifically, because I actually only discovered this recently. I've started working on an add-on to visualize them in the viewport, by the way.

In the end I think I like your idea of tagging attributes, so adding a kOfxMeshAttribPropSubType; it is quite consistent with the way parameters' subtypes are specified. I just wonder whether we should also specify some naming convention, like "if it starts or ends with 'uv' then it must be a UV", or just say that "uv0, uv1, ..." are the standard texture coordinate attributes. Also, should such tags be forwarded by default to outputs? Is it the responsibility of the hosts or of the plugins?

@eliemichel
Owner Author

I've added kOfxMeshEffectPropIsDeformation in 40cb5eb (and also fixed a typo in kOfxMeshPropNoEdge, the 'k' has made its way to the content of the define).

@tkarabela
Contributor

tkarabela commented Oct 29, 2020

Yes, having conventional attributes like kOfxMeshAttribVertexUV would make sense in the API. I'm still wondering how working with attributes should work overall, it feels like multiple things are mixed up together:

  • input mesh on its own has whatever attributes it has (in the host application)
  • an effect may use an attribute to drive what it does - it has a slot for attribute with given location/datatype, which should be pointed to one of the available attributes (more precisely, the slot belongs to one of the input/output meshes)
  • finally, an effect may produce an attribute on output, either modifying one of its input attributes (see above) or creating an unrelated new attribute

There's a question of what the effect is supposed to "see":

  • (A) Should it only see the basic geometry attributes + any extra attributes it requests at describe time (eg. the plugin asks for kOfxMeshAttribVertexUV, host presents UI to select one of existing UV maps, then passes only that UV map to the effect with the generic conventional name)?
  • (B) Should it see "all" extra attributes the input mesh has (insofar as they are representable in OFX), with a way to retrieve attributes it has specifically asked for during describe (and which the user selected with UI)?

So far, I feel that the design leads more into (A). The use case for (B) is non-deformer effects - for deformers, attribute forwarding may be done automatically by the host (with the plugin optionally adding/overriding non-geometry attributes), but for non-deformers, creating output attributes has to be done by the effect. Some examples:

  • decimation/tessellation
  • instancing

In these cases, the effect can copy/interpolate attributes (eg. Blender Decimate does it), but it would be tedious/impossible to declare all these attributes explicitly (I believe they should only be declared when they are changing the way the effect works, not when you're essentially passing them through). We probably need the tagging/"attribute subtypes" idea for this to be feasible (eg. for decimation, you can average RGB values but not material IDs).
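The "you can average RGB values but not material IDs" rule could be captured by a policy lookup on the attribute subtype (a sketch with made-up names, not spec API):

```cpp
#include <cassert>
#include <string>

// Illustrative policy table: whether an attribute subtype may be
// linearly interpolated when an effect resamples the mesh, or must be
// copied from a single source element instead.
enum class ResamplePolicy { Interpolate, CopyNearest };

ResamplePolicy policyFor(const std::string& subtype) {
    // Continuous quantities can be averaged...
    if (subtype == "uv" || subtype == "color"
        || subtype == "weight" || subtype == "normal")
        return ResamplePolicy::Interpolate;
    // ...but discrete identifiers (material index, group id, unknown
    // custom data) must not be, so fall back to copying.
    return ResamplePolicy::CopyNearest;
}
```

A host or the C++ helper layer could apply this per attribute when passing data through decimation/tessellation/instancing effects.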

C++ API sketch, variant (A) with deformer effect (Blender's Cast modifier), all non-geometry attributes of kOfxMeshMainInput forwarded automatically:

CastEffect::Describe() {
    auto input_mesh_main = AddInput(kOfxMeshMainInput);
    auto input_mesh_control = AddInput("ControlObject").Label("Control object");
    auto output_mesh = AddInput(kOfxMeshMainOutput);

    input_mesh_main.AddVertexAttribute(kOfxMeshAttribVertexWeight,  // name
                                       1,                           // num of components
                                       MfxAttributeType::Float,
                                       MfxAttributeSubtype::Weight) // 0.0 <= x <= 1.0
                   .Label("Influence");

    // possibly "sweetened" even more:
    // input_mesh_main.AddVertexWeightAttribute(kOfxMeshAttribVertexWeight).Label("Influence");

    // regular parameters
    AddParam("Factor", 1.0).Range(0.1, 100.0);

    IsDeformer(true); // declare the deformer property
}

CastEffect::Cook() {
    auto input_mesh_main = GetInput(kOfxMeshMainInput).GetMesh();
    auto input_mesh_control = GetInput("ControlObject").GetMesh();
    
    auto attrib_influence = input_mesh_main.GetVertexAttribute(kOfxMeshAttribVertexWeight);

    // ...
}   

C++ API sketch, variant (B) with decimate effect, no automatic forwarding by host:

DecimateEffect::Describe() {
    AddInput(kOfxMeshMainInput);
    AddInput(kOfxMeshMainOutput);
    AddParam("TargetReduction", 0.5).Range(0.0, 1.0);
}

DecimateEffect::Cook() {
    auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh();
    auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh();

    // ... decimate ...

    // Allocate all attributes for custom passthrough.
    // Possibly the following block could be a pre-defined method in the C++ helper,
    // eg. output_mesh.AddAttributes(input_mesh)

    for (int i = 1; i < input_mesh.GetPointAttributeCount(); i++) {
        //       ^ to skip geometry attributes, we could define that they are always the first
        
        MfxAttributeProps attribute_props;
        auto input_attribute = input_mesh.GetPointAttribute(i);  // new functionality
        input_attribute.FetchProperties(attribute_props);
        attribute_props.isOwned = true;
        output_mesh.AddAttribute(attribute_props);  // just sugar
    }
    // repeat for vertex/face/mesh attributes

    output_mesh.Allocate(...)

    // fill attributes
    
    output_mesh.Release();
    input_mesh.Release();
}

Looking at the code, it seems that (B) does not conflict with (A) at all; the effect is allowed to access its defined attributes by names it used to define them (input_mesh.GetPointAttribute("MyInputAttribute")), and all attributes by index (input_mesh.GetPointAttribute(0)), with no guarantees about presence or names of the other (indexed) attributes.

A drawback to (B) is that it forces the host to convert more stuff into the OFX world, so that the effect can see it. With reusing host buffers, the cost of doing so could be effectively zero, though, and for deformer effects we could omit the extra attributes and leave them on auto-forward.


With regards to Blender implementation, I was looking through it and also saw the useful generic attributes which are nowhere to be seen in the GUI. Add-on to be able to show them would be useful :)

The implementation details are not clear to me, but I get the impression that attributes/layers are supposed to be consistent for all meshes of a given object, ie. it's the object that defines the layers (or the number of layer slots)? See the "Generate Data Layers" button of the Data Transfer modifier. It has also happened to me that the CustomData buffer was NULL, even though the layer was supposed to be there based on the layer count.

@eliemichel
Owner Author

eliemichel commented Nov 1, 2020

an effect may use an attribute to drive what it does - it has a slot for attribute with given location/datatype, which should be pointed to one of the available attributes (more precisely, the slot belongs to one of the input/output meshes)

For instance, an "inflate" effect (moving vertices along their normal) would typically use the standard attribute for normals by default ('N' in Houdini) but still offer the possibility, in (advanced) parameters, of reading from a different attribute.

The string field to input the custom attribute name will typically feature a helper dropdown listing the currently available attributes, but it would not fail if the attribute does not exist (just use a default value, even though the result may be irrelevant).

We could have "rename attribute" effects to handle special cases (Houdini does as well).

Anyway, I like your "slot" idea, it feels user friendly, but this could be up to the (blender) implementation.

There's a question of what the effect is supposed to "see"

We are again facing an ease-of-use versus efficiency question. To be consistent, I believe we should adopt a solution similar to the one for constant face counts, for instance: easy use by default, and flags to turn on extra optimizations.

Should it only see the basic geometry attributes + any extra attributes it requests at describe time

In the "inflate" effect scenario this does not work because the user may change the name of the attribute to query. So I suggest the following (breaking) change: (F1) we can add an action in between Describe and Cook, for which the instance parameters are set up and the available input attributes are listed, and the effect must tell which of the attributes it will need. If this action is not implemented, all attributes are exposed to the cook action.


for non-deformers, creating output attributes has to be done by the effect.

Yes, we are reaching something very interesting here.

I think that saying "interpolate like this all remaining point attributes" is a job for the C++ plugin support API; the base C API only needs to provide access to the list of available attributes and their data. And I guess you think the same, since you suggested sketches in C++. ^^ I mean, I think there is no need to flag these "pass-through" attributes differently at the C level, since in the end the effect needs to read them in the same way.

For your example (A), I think it should be called RequestVertexAttribute rather than AddVertexAttribute; it seems weird to be able to modify an input. Example (B) is good. I wonder whether default attribute interpolation should be handled in a separate virtual method. We could imagine providing a default if the plugin writer is able to implement at least something like:

std::vector<std::tuple<int, int, float>> DecimateEffect::GetPointInfluencers(int pointIndex, int outputIndex) {
	std::vector<std::tuple<int, int, float>> influencers;
	influencers.push_back(std::make_tuple(42, 0, 0.3f));  // (input point, input mesh, weight)
	influencers.push_back(std::make_tuple(43, 0, 0.6f));
	influencers.push_back(std::make_tuple(58, 0, 0.1f));
	return influencers;
}

I don't like the term "influencer" at all, but the idea is to tell that point pointIndex from the outputIndex-th output mesh is a kind of barycenter (wrt. attributes that don't have a special meaning to the effect), where input point 42 from input mesh 0 has a weight of 0.3. I think this could handle many cases, and we could still do it "manually" as in (B) in more advanced cases.
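A host-side helper consuming this influencer data could then interpolate pass-through attributes with the same weights. A minimal sketch (the `Influencer` alias and `interpolateScalar` helper are illustrative, not part of the spec):

```cpp
#include <cassert>
#include <cmath>
#include <tuple>
#include <vector>

// (input point index, input mesh index, weight)
using Influencer = std::tuple<int, int, float>;

// Each output element is a weighted combination of input elements;
// any attribute without special meaning to the effect is interpolated
// with the same weights the effect reported for geometry.
float interpolateScalar(const std::vector<float>& inputAttr,
                        const std::vector<Influencer>& influencers) {
    float value = 0.f;
    for (const auto& [point, mesh, weight] : influencers) {
        (void)mesh;  // single-input sketch: ignore the mesh index
        value += weight * inputAttr[point];
    }
    return value;
}
```

Vector attributes would apply the same combination per component; discrete attributes (material IDs, etc.) would instead copy from the highest-weight influencer.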

(PS: I edited your previous message, but don't worry, it was only to add C++ syntax highlighting to the code blocks)

@tkarabela
Contributor

Should it only see the basic geometry attributes + any extra attributes it requests at describe time

In the "inflate" effect scenario this does not work because the user may change the name of the attribute to query.

The way I imagine it, it would work even in this case -- there is a layer of abstraction between attributes in the host application and attributes the effect receives (the "slot" idea):

InflateEffect::Describe() {
    auto input_mesh = AddInput(kOfxMeshMainInput);   // perhaps: AddMainInput();
    auto output_mesh = AddInput(kOfxMeshMainOutput); // perhaps: AddMainOutput();

    input_mesh.RequestVertexNormalsAttribute();
    // ie:
    // input_mesh.RequestAttribute(kOfxMeshAttribVertex,              // location
    //                             3,                                 // components
    //                             kOfxMeshAttribVertexNormals,       // name
    //                             MfxAttributeType::Float,           // type
    //                             MfxAttributeSubtype::Normals);     // subtype

    // regular parameters
    AddParam("Offset", 1.0).Range(-100.0, 100.0);

    IsDeformer(true); // declare the deformer property
}

InflateEffect::Cook() {
    auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh();    // perhaps: GetMainInput().GetMesh();
    auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh();  // perhaps: GetMainOutput().GetMesh();

    auto vertex_normals = input_mesh.GetVertexAttribute(kOfxMeshAttribVertexNormals);
    // perhaps: input_mesh.GetVertexNormalsAttribute();

    // ...
}

Depending on host capabilities, this would show a drop-down menu in effect UI, to select which vertex vector attribute to pass as "OfxMeshAttribVertexNormals" to the effect, with usual mesh normals selected as default. For hosts that don't support multiple normals/vertex vector attributes, this would just pass mesh normals without giving user the option to override.

An alternative approach would be to pass native host attribute names as ordinary string parameters and let the effect look them up at cooking time -- while possible in the current API, I would consider this an anti-pattern (it requires (F1) to avoid passing in all attributes; it leaks host attribute names; it makes it difficult to tell if the effect is ready to be cooked, whether the attribute names are mandatory, refer to correct attribute type, etc.).

So I suggest the following (breaking) change: (F1) we can add an action in between Describe and Cook, for which the instance parameters are set up and the available input attributes are listed, and the effects must tell which of the attributes it will need. If this action is not implemented, all attributes are exposed to the cook action.

Ideally, I would prefer to shield the effects from native host names of things, as that could encourage relying on host implementation details and making plugins less portable between hosts. With more conventional attributes (UVs, normals, vertex colors, vertex weights) and ability to have custom attributes, I think the API would be expressive enough. The only use-case I can think of for listing not-explicitly-declared attributes is for passing them through, as with non-deformer effects. (In this case, I would let the effect see attribute attachment and type, but erase its name, so that the feature doesn't get used to get around attribute declaration.)

In this spirit, I would default to passing the basic implicit attributes (PointPosition, VertexPoint, FaceCounts) and any explicitly requested, "named" attributes to the effect, with a non-default option to request all attributes as anonymous, "indexed" attributes (for non-deformer passing through). Eg.:

DeformEffect::Describe() {
    auto input_mesh = AddInput(kOfxMeshMainInput);
    auto output_mesh = AddInput(kOfxMeshMainOutput);

    input_mesh.RequestAllAttributes(); // note: this is a special use-case, it doesn't replace RequestAttribute()

    // regular parameters
    AddParam("Ratio", 0.5).Range(0.0, 1.0);
}

DeformEffect::Cook() {
    auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh();
    auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh();

    // ... deform ...

    for (int i = 1; i < input_mesh.GetPointAttributeCount(); i++) {
        auto input_attribute = input_mesh.GetPointAttribute(i);
        // ...
    }

    // ...
}

In terms of optimization, there is one use-case supported by (F1) and not this, namely excluding some of the basic attributes (PointPosition, VertexPoint, FaceCounts), but that seems pretty niche; it could be good for:

The usual case is optimized by the effect itself only asking for attributes it needs, and the case of non-deformer passthrough doesn't seem to benefit from (F1): either the effect needs all attributes to transform it itself, or it somehow communicates the transformation to the host, in which case it can skip input_mesh.RequestAllAttributes() and let the host do it.


There is one thing about attributes (and mesh inputs, too) that we didn't mention so far, which is whether they are mandatory or optional. For parameters, it doesn't really matter since they all must provide defaults, but with attributes/mesh inputs, things are different. I can think of:

  • attributes with no alternative (PointPosition, VertexPoint, FaceCounts)
  • attributes with alternatives and suitable defaults (VertexUV, VertexNormals)
  • attributes with alternatives and no obvious default (vertex weights, anything custom)
  • additional mesh input which is mandatory (eg. for boolean or instancing effect)
  • additional mesh input which is optional (eg. for Blender's Cast modifier or my "Poke" effect I'm currently developing in MfxVTK)

If there was a way to declare which inputs are mandatory or not, hosts could present better UI for the user and skip attempting to cook an effect that is not ready (needlessly converting stuff from host environment to OFX environment).


Regarding conventional attributes, I would propose to add the following (F2):

  • kOfxMeshAttribVertexUV (2 x float, vertex attachment, subtype kOfxMeshAttribPropSubTypeUV)
  • kOfxMeshAttribVertexNormals (3 x float, vertex attachment, subtype kOfxMeshAttribPropSubTypeNormals)
  • kOfxMeshAttribVertexWeight (1 x float, vertex attachment, subtype kOfxMeshAttribPropSubTypeWeight)
  • kOfxMeshAttribVertexColor (3 x uint8, vertex attachment, subtype kOfxMeshAttribPropSubTypeColor)

None of these would be "requested" by default, unlike PointPosition et al. If the effect requires more than one of the same kind for given mesh input, it can request an attribute with different name but the same attachment, component count, type and subtype. The subtype value should be used by the host to present suitable UI (pick from available UV maps, vertex colors, etc.).

  • Not sure if they should end with -s or not :)
  • Not sure about RGB vs. RGBA color, but I would define what the individual components mean (that it's indeed RGB and not BGR or something)
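Defining what the components mean also pins down the conversion between the two candidate encodings. A sketch (plain linear scaling; a real host may additionally need sRGB handling, and channel order is assumed to be R, G, B):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// uint8 color channel in [0, 255] <-> float channel in [0.0, 1.0].
float u8ToFloat(std::uint8_t c) {
    return c / 255.f;
}

std::uint8_t floatToU8(float c) {
    if (c < 0.f) c = 0.f;          // clamp: float colors may exceed [0, 1]
    if (c > 1.f) c = 1.f;
    return static_cast<std::uint8_t>(std::lround(c * 255.f));
}
```

Note the clamp in floatToU8: float colors can hold out-of-range (eg. HDR) values that uint8 simply cannot represent, which is one of the arguments for float above.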

I wonder whether default attribute interpolation should be handled in a separate virtual method. We could imagine providing default if the plugin writer is able to implement at least something like: [ ... ]

Yes, this looks useful :) I can imagine that any output point/vertex/face attribute would typically be a linear combination of at most a few input elements. For an instancing effect, this could even be just copying from input attributes (though these attributes may not be from main input mesh).

@eliemichel
Owner Author

Ideally, I would prefer to shield the effects from native host names of things, as that could encourage relying on host implementation details

If I refer to the way I (and others) use attributes in Houdini, besides some standardized ones the attribute name is a convention with myself (as its creator) because the attribute was generated at a previous stage. I am not assuming that the host will expose its internal naming: available attributes are either standard ones or user-created ones from previous effects in the tree.

the case of non-deformer passthrough doesn't seem to benefit from (F1): either the effect needs all attributes to transform it itself, or it somehow communicates the transformation to the host

Actually I just started to think about an issue we may encounter: let's say we have three sequential effects A-->B-->C. C requires an attribute foo that A produces. B is not a simple deformer but is able to forward arbitrary attributes, so we want B to be aware of the fact that later on in the effect tree something needs foo to let the host know that it needs to take care of it. So the host must be able to ask an effect "please handle this attribute that you don't care about but that I'll need".


Good points about mandatory or optional.

  • attributes with no alternative
  • attributes with alternatives and suitable defaults

This should be a property of the mechanism to request inputs, related to inputs.

  • attributes with alternatives and no obvious default

What should be done in this case?

  • additional mesh input which is mandatory
  • additional mesh input which is optional

A property of mesh inputs. What is the difference between "optional" and "not mandatory"?


kOfxMeshAttribVertexWeight and kOfxMeshAttribVertexColor could also be attached to points (especially for point cloud effects)

I am really not a fan of forcing the color to be a uint8. There are so many good reasons to use 32-bit floating point colors! But sometimes it will be uint8, so should we find a way to let the host give whatever it has? If we were to keep only one as standard (which does not prevent the plugin from requesting something else) I'd rather have it be a float, I think. And there is also the question of RGBA vs. RGB.

If the effect requires more than one of the same kind for given mesh input, it can request an attribute with different name but the same attachment, component count, type and subtype. The subtype value should be used by the host to present suitable UI

But then isn't the subtype enough actually?

@tkarabela
Contributor

tkarabela commented Nov 5, 2020

Good points :)

If I refer to the way I (and others) use attributes in Houdini, besides some standardized ones the attribute name is a convention with myself (as its creator) because the attribute was generated at a previous stage. I am not assuming that the host will expose its internal naming: available attributes are either standard ones or user-created ones from previous effects in the tree.

I understand that, it's very much how one works with VTK/Paraview ^^ My point was that it would be beneficial if the "function signature" of an effect is decoupled from the "runtime" attribute names, like in the C++ language:

// effect side
struct MyEffectInputMesh { float *OfxMeshAttribVertexUV; };
struct MyEffectOutputMesh { float *OfxMeshAttribVertexUV; };
void my_effect(MyEffectInputMesh mainInput, MyEffectOutputMesh mainOutput) { ... }

// host side
float *uv0, *uv1;
float *my_custom_uv_map;

my_effect( { uv0 } , { uv0 } );
my_effect( { my_custom_uv_map } , { uv1 } );
// everything works despite no variables actually being named "OfxMeshAttribVertexUV"

For input attributes, it has to work that way since the effect has to name them at describe time, before it gets the actual mesh. With output attributes, there is some flexibility since they are not declared at describe time, but only created later at cooking time. The host can presumably keep the attribute names from the effect (if they are custom) or rename them according to host application convention (if they use the conventional OFX names). I think this area is where we'll run into host limitations, so I wouldn't specify this too strictly, something like:

  • A host will allow the user to choose which attributes should be used as inputs for the effect (from attributes available at that point in the effect tree)
  • A host will allow a "downstream" OFX effect to use output attribute from an "upstream" OFX effect as its input

Actually I just started to think about an issue we may encounter: let's say we have three sequential effects A-->B-->C. C requires an attribute foo that A produces. B is not a simple deformer but is able to forward arbitrary attributes, so we want B to be aware of the fact that later on in the effect tree something needs foo to let the host know that it needs to take care of it. So the host must be able to ask an effect "please handle this attribute that you don't care about but that I'll need".

I agree, it should be driven by the host, I would propose the following:

  • For deformer effects, unused attributes are passed through by the host automatically. They are not put as extra attributes on the input mesh (that would be unnecessary).
  • For non-deformer effects, unused attributes should be passed through by the effect. That is: host will put them as attributes on the main input mesh and the effect is supposed to create matching attributes on main output mesh, copying/interpolating the values according to what the effect does.
  • The host may choose to omit some unused attributes from the pass-through mechanism above, based on state of the host application and/or user input (eg. the host knows from dependency graph that the attribute will not be used later, user disables pass-through of the attribute, etc.)

It can be easy for the host to know that an attribute will be needed later (when the consumer is another OFX effect, as in your example), but pretty difficult to prove an attribute will not be needed (its consumer may be other non-OFX effect, a shader, or even file export). I imagine a more practical way to skip unnecessary processing would be to have a switch in the host UI where the user can override which attributes should/should not be passed to the effect.
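The three rules above can be condensed into a tiny decision function. This is just my restating of the proposal in code form, not anything from the API — the enum and function names are made up:

```cpp
// Who forwards an attribute that the effect did not request? (sketch of the proposal)
enum class PassThrough {
    Host,    // deformer effect: the host forwards the attribute automatically
    Effect,  // non-deformer: host puts it on the input mesh, effect must copy/interpolate it
    Skipped  // host knows it is unused downstream, or the user disabled pass-through
};

PassThrough passThroughResponsibility(bool isDeformer, bool neededDownstream) {
    if (!neededDownstream) return PassThrough::Skipped;
    return isDeformer ? PassThrough::Host : PassThrough::Effect;
}
```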

attributes with no alternative (basic geometry attributes)
attributes with alternatives and suitable defaults
This should be a property of the mechanism to request inputs, related to inputs.
attributes with alternatives and no obvious default
What should be done in this case?

At the API level, I would only provide a way to specify whether a requested attribute is mandatory or not. The rest is up to the host - whether the attribute can be made user-selectable or not, and whether it should have a pre-selected value or not. (In a hypothetical host, all attributes could be user-selectable, even normals, point coordinates etc. - I understand that this is the case in Houdini, but other hosts may be more limited.)

addional mesh input which is mandatory
addional mesh input which is optional
A property of mesh inputs. What is the difference between "optional" and "not mandatory"?

No difference? ^^ Either the (mesh / attribute) is "optional" ie. it's okay to cook without it, or it's "mandatory" and the host should skip cooking until user selects the input (the cooking would presumably fail with kOfxStatFailed if the host attempted it).

kOfxMeshAttribVertexWeight and kOfxMeshAttribVertexColor could also be attached to points (especially for point cloud effects)
I am really not a fan of forcing the color to be a uint8. There are so many good reasons to use 32-bit floating point colors! But sometimes it will be uint8, so should we find a way to let the host give whatever it has? If we were to keep only one as standard (which does not prevent the plugin from requesting something else) I'd rather have it be a float, I think. And there is also the question of RGBA vs. RGB.

This ties into the larger question of what to do when the attribute location/datatype/components don't match, which seems important for usability, and I'd love to support this as well. As you note, it seems that it's really the subtype that says "what" the attribute is, and the rest could be left up to the effect, with the host doing the conversion behind the scenes. I.e., for attribute kOfxMeshAttribColor (or really any attribute with subtype kOfxMeshAttribPropSubTypeColor), you could:

  • request it with 3 or 4 components
  • request it as uint8 or float
  • request it with vertex or point attachment

Some combinations may be disallowed by the host, but many things should "just work", though only a few combinations will hit the fast path in the host (match the underlying buffer). I'm not sure how it should behave with output attributes (if you output float color, but the host uses uint8 - should it narrow the float down to uint8, or keep the float color in a "hidden" attribute that may be otherwise inaccessible?).
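To illustrate the kind of behind-the-scenes conversion the host would do, here is a minimal sketch (a hypothetical host-side helper, not part of any API) widening a uint8 RGB attribute buffer into the float RGBA layout an effect might request:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical host-side helper: convert a uint8 RGB attribute buffer into
// float RGBA, normalizing channels to [0, 1] and defaulting alpha to opaque.
std::vector<float> convertRgb8ToRgbaFloat(const std::vector<uint8_t>& rgb) {
    std::vector<float> rgba;
    rgba.reserve(rgb.size() / 3 * 4);
    for (size_t i = 0; i + 2 < rgb.size(); i += 3) {
        rgba.push_back(rgb[i + 0] / 255.0f); // R
        rgba.push_back(rgb[i + 1] / 255.0f); // G
        rgba.push_back(rgb[i + 2] / 255.0f); // B
        rgba.push_back(1.0f);                // A, opaque by default
    }
    return rgba;
}
```

The reverse direction (narrowing float to uint8) is lossy, which is exactly the open question above about output attributes.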

With that being said, instead of (F2) I would propose the following (F3):

| Subtype | Conventional name | Datatypes | Dimensions | Attachments | Note |
| --- | --- | --- | --- | --- | --- |
| ? | kOfxMeshAttribPointPosition | float | 3 | point | |
| ? | kOfxMeshAttribVertexPoint | int | 1 | vertex | |
| ? | kOfxMeshAttribFaceCounts | int | 1 | face | |
| kOfxMeshAttribPropSubTypeUV | kOfxMeshAttribUV | float | 2 | vertex | 0 <= u,v <= 1 |
| kOfxMeshAttribPropSubTypeNormals | kOfxMeshAttribNormals | float | 3 | point/vertex/face | \|x\| = 1 |
| kOfxMeshAttribPropSubTypeWeight | kOfxMeshAttribWeight | float | 1 | point/vertex/face | 0 <= w <= 1 |
| kOfxMeshAttribPropSubTypeColor | kOfxMeshAttribColor | uint8/float | 3/4 | point/vertex/face | RGB(A) |

From the C++ helper point of view, it could be quite ergonomic (not sure about the names, or if it should look like template parameters, but you get the idea):

InflateEffect::Describe() {
    auto input_mesh = AddInput(kOfxMeshMainInput);
    auto output_mesh = AddInput(kOfxMeshMainOutput);

    input_mesh.RequestVertexNormalsAttribute();
    input_mesh.RequestFloatVertexColorsAttribute();
}

InflateEffect::Cook() {
    auto input_mesh = GetInput(kOfxMeshMainInput).GetMesh();
    auto output_mesh = GetInput(kOfxMeshMainOutput).GetMesh();

    auto input_normals = input_mesh.GetVertexNormalsAttribute();
    auto input_colors = input_mesh.GetFloatVertexColorsAttribute();

    output_mesh.AddPointColorAttribute();
}

@eliemichel
Owner Author

eliemichel commented Nov 10, 2020

I am working on integrating these ideas to the C API. In the design of the original Image Effect API, clipGetHandle was meant to be called only at cook time, not at describe time (see here), and so did I specify for our inputGetHandle.

In some cases, we can still imagine calling this in the Describe action

// Request the kOfxMeshPropTransformMatrix property to be filled by the host
propSetInt(inputProperties, kOfxInputPropRequireTransformMatrix, 0, 1);

where inputProperties was populated by inputDefine. A problem arises regarding attributes: attributes cannot be requested using predefined kOfxInputPropRequire... entries in a property set, because property sets are by design fixed, while an effect can request arbitrary attributes.

(BTW how does your API handle the case where e.g. several normal attributes are needed, should one call input_mesh.RequestVertexNormalsAttribute() multiple times?)

A logical thing to do would be to introduce a new function to the meshEffectSuite, something like inputRequestAttribute(OfxInputHandle input, char *name, OfxAttributeType type, int count) where the name is used later on to find this attribute (like parameters or inputs are identified by string -- actually this could be named inputAttributeDefine to be more consistent though less legible).

My concern is that this requires the OfxInputHandle object that was supposed not to be available at describe time. After thinking a bit about it, it does not seem to break anything nor any important concept to switch the design to say that the OfxInputHandle can be accessed at describe time. Because compared to the Image Effect API, we have an extra indirection, namely the "mesh" is different from the "input slot" whereas in image effects there is only a notion of "clip" that is both the slot and the data flowing into it.

The only drawback in my eyes is that it makes it weird that inputDefine does not return the OfxInputHandle directly. What are your thoughts about this? Do you see any reason to prevent the use of OfxInputHandle at describe time?

edit: My bad, there is the same indirection in Image Effects: clip <-> input and image <-> mesh. So we must figure out what good reason they had to make OfxClipHandle not accessible at describe time. The difference is that they represent images as property sets while we have a proper OfxMeshHandle type.

edit2: The more I think about "subtype" the more I think it should have a different name because e.g. "UV" is not a subtype of "float" (the "type" in our current naming convention) but rather a subtype of "float2" (the combination of type+componentCount). I am thinking about calling it semantics, like in HLSL.

@eliemichel
Owner Author

eliemichel commented Nov 11, 2020

A lot of news with 89d428f

Semantics

It introduces Semantics: kOfxMeshAttribSemanticTextureCoordinate, kOfxMeshAttribSemanticNormal, kOfxMeshAttribSemanticColor and kOfxMeshAttribSemanticWeight (I don't like the cryptic "UV" even though "TextureCoordinate" is a bit long)

Attribute Request

It also introduces inputRequestAttribute():

OfxStatus (*inputRequestAttribute)(OfxMeshInputHandle input,
                                   const char *attachment,
                                   const char *name,
                                   int componentCount,
                                   const char *type,
                                   const char *semantic,
                                   int mandatory);
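For concreteness, a describe-time call against this signature could look like the following sketch (the attribute name "myColor" is just an example, and I'm writing the suite pointer as meshEffectSuite):

```c
/* In the describe action, after inputDefine() gave us `input`: request a
 * mandatory 3-component float color attribute attached to vertices. */
meshEffectSuite->inputRequestAttribute(input,
                                       kOfxMeshAttribVertex,        /* attachment */
                                       "myColor",                   /* name (example) */
                                       3,                           /* componentCount */
                                       kOfxMeshAttribTypeFloat,     /* type */
                                       kOfxMeshAttribSemanticColor, /* semantic */
                                       1);                          /* mandatory */
```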

Breaking changes

As well as two breaking changes:

  1. Add a semantic argument to attributeDefine (that may be null). I thought about letting the semantics be defined only through propSetString on the attribute property set, but it made attributeDefine behave a bit differently from inputRequestAttribute, plus this is a way to foster the use of semantics that people would otherwise easily neglect.

  2. Add an OfxMeshInputHandle *input return argument to inputDefine, as mentioned earlier. This also means that the OfxMeshInputHandle object is now accessible at describe time. I wonder whether there is still a reason to keep MfxInputDef separate from MfxInput in the C++ API then.

All of this has been ported to the examples, host code and plugin support, as well as to the Blender implementation in a6445ef (only a first base, it does not actually ensure that requested attributes will be present).

The CppPluginSupport implementation of attribute request is the bare minimum so far but we'll add nicer utility functions on top of this.

edit: FYI the CppPluginSupport is now documented here: https://openmesheffect.org/Sdk/reference.html I called it "SDK" as it seemed more appropriate.

@tkarabela
Contributor

Exciting news! inputRequestAttribute() for OfxMeshInputHandle looks just like what's needed, and calling subtypes "semantics" makes a lot of sense. Good thing there's so many standards to get inspiration from ^^

I wonder whether there is still a reason to keep MfxInputDef separate from MfxInput in the C++ API then.

From the C++ SDK point of view, it prevents the user from shooting themselves in the foot (calling AddAttribute(), Allocate() etc. at describe time), which doesn't sound like a bad thing.

I thought about letting the semantics be defined only through propSetString on the attribute property set, but it made attributeDefine behave a bit differently from inputRequestAttribute, plus this is a way to foster the use of semantics that people would otherwise easily neglect.

Good call. The OpenFX API has a lot of properties IMO, and semantics should be easy to find and use.

(BTW how does your API handle the case where e.g. several normal attributes are needed, should one call input_mesh.RequestVertexNormalsAttribute() multiple times?)

I was thinking about having the name as a default parameter, with the option to override it if you wanted multiple attributes:

RequestVertexNormalsAttribute(const char *name=kOfxMeshAttribNormals, bool mandatory=true) { // the "conventional" name for normals attribute
    return RequestAttribute(kOfxMeshAttribVertex, name, 3, kOfxMeshAttribTypeFloat, kOfxMeshAttribSemanticNormal, mandatory); // normals have 3 components
}

C++ API documentation is a good thing to have, I'll keep that it mind and write docstrings if I do something there.


I've updated MfxVTK to use the new API:

https://github.com/tkarabela/MfxVTK/blob/70e5d7d3ff8e52c8adc3c68860af541b13490a14/src/mfx_vtk_plugin/effects/VtkSurfaceDistanceEffect.cpp#L35-L37

...and I can report that it works with the latest nightly branch and C++ SDK. As the semantic parameter defaults to NULL in the C++ API, nothing was broken for me, I just added attribute requests and semantic to AddAttribute().

I have at least three effects that will test the attribute system and host interaction:

  • VtkSurfaceDistanceEffect (computes shortest distance from starting vertices to all others, useful for animating "wave" propagating along surface)
    • is deformer effect (just computes new attribute)
    • input: 1xvertex color (or vertex weight) mandatory attribute
    • output: 1xvertex UV (where U is the distance) - for usability, it should protect existing UV attributes and not overwrite them, but add a new one (or at least let user choose which UV attribute will be overwritten)
  • VtkPokeEffect (softbody-like deformation, but static)
    • is deformer effect (just moves points)
    • input can be one of:
      • 1 mesh with 1 vertex color mandatory attribute (self-collision, define which part is rigid and which part deforms)
      • 2 meshes with 1 vertex color / vertex weight optional attribute each (separate collider mesh)
    • output is just the deformed mesh with no extra attributes
  • VtkSamplePointsVolumeEffect (make point cloud inside mesh)
    • not deformer
    • input has no requested attributes
    • output: 1xfloat point attribute (distance to mesh surface) - for usability, it should be "convertible" to something per-vertex, like a UV map, once the topology is constructed using later effects - so that it can be used in shading

In terms of the attribute interface, there isn't yet a way to iterate over attributes using kOfxMeshPropAttributeCount, which the non-deformer effects would need.
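For the record, here is roughly the shape of what I'd need, sketched against names that do not exist yet (meshGetAttributeByIndex in particular is entirely made up):

```c
/* Hypothetical sketch of iterating over all attributes of an input mesh. */
int attributeCount;
propertySuite->propGetInt(meshProperties, kOfxMeshPropAttributeCount, 0, &attributeCount);
for (int i = 0; i < attributeCount; ++i) {
    OfxPropertySetHandle attribute;
    /* This by-index accessor does not exist yet -- it is the missing piece: */
    meshEffectSuite->meshGetAttributeByIndex(mesh, i, &attribute);
    /* ...inspect its name/type/semantic, then define a matching attribute
       on the output mesh and copy/interpolate the values. */
}
```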

@eliemichel
Owner Author

From the C++ SDK point of view, it prevents the user from shooting themselves in the foot

Correct, I realized this when reading it again for the documentation. It could even make sense to split some MfxEffectDef from MfxEffect, but we'd lose the ability to define an effect just by subclassing one thing, so I'd rather keep it this way for the sake of ease of use, and check for misuses with additional code when building in debug mode.

I was thinking about having the name as default parameter with the option to override it if you wanted multiple attributes

Ok, makes sense!

In terms of the attribute interface, there isn't yet a way to iterate over attributes using kOfxMeshPropAttributeCount, which the non-deformer effects would need.

Yes, it's a todo.

Good examples indeed, I've seen you made nice documentation on Read the Docs btw :)
