Experimenting with Geometry2 #4386

Closed
mrdoob opened this Issue Feb 7, 2014 · 78 comments

Comments

Owner

mrdoob commented Feb 7, 2014

Seems like I'm finally starting to experiment with something that I've been thinking about for a long time.

The problem

The current Geometry structure is, albeit user-friendly, convoluted and unperformant. That structure was designed before WebGL even existed and it has been patched along the way.

BufferGeometry was created as a response to this. However, BufferGeometry has the opposite problem, it's not user-friendly. I don't think a "normal" user needs to know what attributes are.

Every time I see a three.js-based project running on WebGL on mobile, I hope (fear) the people behind it didn't use Geometry.

The solution

I spent a long while trying to think of how to make BufferGeometry more user-friendly, but I couldn't see a way of doing that without sacrificing its flexibility, so eventually I decided to keep BufferGeometry as it is. I spent some time during the last cycle making Projector (CanvasRenderer, SoftwareRenderer, SVGRenderer, ...) support the structure instead.

https://github.com/mrdoob/three.js/blob/dev/src/core/Projector.js#L307-L342

The intention now is to go with Geometry2, which eventually would replace Geometry and break backwards compatibility, although I wonder if people modify geometry much or not...

The idea of Geometry2 is to have everything in flat arrays: vertices, normals, uvs and colors ready to be submitted to the GPU (on the WebGL side).

https://github.com/mrdoob/three.js/blob/dev/src/core/Geometry2.js
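As a sketch of what "everything in flat arrays" means (hypothetical names, not the actual Geometry2 code): every vertex of every triangle is stored expanded in a typed array, ready to hand to gl.bufferData() as-is.

```javascript
// Minimal sketch of the flat-array idea (made-up names for illustration).
// Each triangle stores all three of its vertices expanded, so no index
// buffer is needed and the arrays upload to the GPU directly.
function FlatGeometry( triangleCount ) {

    this.vertices = new Float32Array( triangleCount * 3 * 3 ); // 3 verts * xyz
    this.normals  = new Float32Array( triangleCount * 3 * 3 ); // 3 verts * xyz
    this.uvs      = new Float32Array( triangleCount * 3 * 2 ); // 3 verts * uv

}

var geometry = new FlatGeometry( 2 ); // two triangles
geometry.vertices.set( [ 0, 0, 0,  1, 0, 0,  0, 1, 0 ], 0 ); // first triangle
```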

However, that's not very user-friendly... I experimented with a similar class some months ago that had some "interfaces" to help the user:

https://github.com/mrdoob/three.js/blob/dev/examples/js/core/TypedGeometry.js

All that getter/setter black magic basically allows the user to modify the geometry without having to know that they're just modifying a blob of numbers. The API looked like this:

geometry.face( 10 ).vertex( 1 ).x = 10;
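That accessor idea can be approximated in plain JavaScript (a sketch with made-up names; the real code is in the TypedGeometry link above): the objects returned by face() and vertex() read and write the underlying flat array directly.

```javascript
// Sketch of the getter/setter "black magic": face()/vertex() return tiny
// proxy objects whose x/y/z accessors map straight onto a flat
// Float32Array. Names are hypothetical, for illustration only.
function ProxyGeometry( faceCount ) {

    this.array = new Float32Array( faceCount * 3 * 3 ); // faces * 3 verts * xyz

}

ProxyGeometry.prototype.face = function ( f ) {

    var array = this.array;

    return {
        vertex: function ( v ) {

            var offset = ( f * 3 + v ) * 3;

            return {
                get x() { return array[ offset ]; },
                set x( value ) { array[ offset ] = value; },
                get y() { return array[ offset + 1 ]; },
                set y( value ) { array[ offset + 1 ] = value; },
                get z() { return array[ offset + 2 ]; },
                set z( value ) { array[ offset + 2 ] = value; }
            };

        }
    };

};

var geometry = new ProxyGeometry( 16 );
geometry.face( 10 ).vertex( 1 ).x = 10; // writes array[ ( 10 * 3 + 1 ) * 3 ]
```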

However, this would only be for the user. Generators and loaders would fill the arrays directly. Here's what PlaneGeometry2 looks like:

https://github.com/mrdoob/three.js/blob/dev/src/extras/geometries/PlaneGeometry2.js

As a side note, performance-wise, I've noticed that where PlaneGeometry was created in ~6ms, PlaneGeometry2 generates in ~0.3ms.

On the Projector side, the code is looking pretty simple so far:

https://github.com/mrdoob/three.js/blob/dev/src/core/Projector.js#L343-L358

To index or not to index

This is another topic that has troubled me for years, and I think I want to bet on non-indexed geometries this time around. Part of the current overcomplexity in WebGLRenderer is the fact that WebGL limits ELEMENT_ARRAY_BUFFER (indices) to Uint16Array (without extensions). This means that, internally, WebGLRenderer has to split your geometry into chunks and, well... I'd rather not explain how painful this is.

So, even if indexed geometries save on GPU bandwidth, I don't think the complexity they create is worth it. Geometry2 won't support indices (let me know if I'm missing something here).
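For reference, de-indexing a geometry into the flat form is a straightforward expansion (a sketch, not three.js code):

```javascript
// Sketch: expanding an indexed geometry into the non-indexed form
// Geometry2 would use. Each index is replaced by a full copy of its
// vertex, so the result needs no ELEMENT_ARRAY_BUFFER at all.
function deindex( positions, indices ) { // positions: Float32Array of xyz triplets

    var out = new Float32Array( indices.length * 3 );

    for ( var i = 0; i < indices.length; i ++ ) {

        out[ i * 3     ] = positions[ indices[ i ] * 3     ];
        out[ i * 3 + 1 ] = positions[ indices[ i ] * 3 + 1 ];
        out[ i * 3 + 2 ] = positions[ indices[ i ] * 3 + 2 ];

    }

    return out;

}

// A quad as two indexed triangles -> six expanded vertices.
var positions = new Float32Array( [ 0, 0, 0,  1, 0, 0,  1, 1, 0,  0, 1, 0 ] );
var expanded = deindex( positions, [ 0, 1, 2,  0, 2, 3 ] );
```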

However, BufferGeometry will still be in place, so if someone wants to construct their custom indexed geometry they will still have a way to do it.

Outcome

In theory, generators and loaders will speed up ~10x. Projects will consume less memory (single Float32Arrays vs tons of Vector2/Vector3s). And, both Projector and WebGLRenderer will get much simpler.

mrdoob added the Enhancement label Feb 7, 2014

Couldn't you call it Shape? I'm lazy as hell when I type and Geometry2 feels like getElementsByClass. And that would prevent the backwards compatibility break.

Contributor

empaempa commented Feb 7, 2014

Hey!
I think this is great. One idea would be to allow indices as long as the referenced arrays (vertices, uvs etc.) are within the Uint16 range - simply check it and throw an error if any of the arrays surpasses the limit.
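A minimal version of that guard might look like this (hypothetical helper, not actual three.js code):

```javascript
// Sketch of the suggested check: Uint16 indices can address at most
// 65536 vertices (highest index is 65535), so throw before a geometry
// silently exceeds that limit.
function assertUint16Indexable( vertexCount ) {

    if ( vertexCount > 65536 ) {

        throw new Error( 'IndexedGeometry: ' + vertexCount +
            ' vertices cannot be addressed by Uint16 indices' );

    }

}

assertUint16Indexable( 65536 ); // fine: max index is 65535
// assertUint16Indexable( 65537 ); // would throw
```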

Even though I haven't seen any major performance impact running drawArrays on desktops, you do lift some work off the vertex shader using indices. And that's probably a very good thing on mobiles.

Owner

mrdoob commented Feb 7, 2014

@rafaelcastrocouto Geometry2 is a temporary name. It will eventually replace Geometry.

@empaempa True that! Maybe an IndexedGeometry could be created and have such a limit check.

Contributor

empaempa commented Feb 7, 2014

Yes, that's a good idea to split them up. I guess a usability question would be what devs think of when thinking of "geometry" - does it use indices or not? If so, maybe UnindexGeometry ;)

Contributor

dlabz commented Feb 7, 2014

This is great!
One thing though: I work with IFC files, which come with their own UUIDs, so the possibility of setting the UUID via the constructor would come in handy.
Thanks

Owner

mrdoob commented Feb 7, 2014

One thing though: I work with IFC files, which come with their own UUIDs, so the possibility of setting the UUID via the constructor would come in handy.

Can't you change the property directly?

geometry.uuid = new_uuid;

Contributor

dlabz commented Feb 7, 2014

@mrdoob: Sure I can. As I said, it would come in handy. But since I'm not using the generated UUIDs and my files have thousands of objects, there are a lot of unnecessary calculations done. I can also use a custom geometry.js for my needs. It was just my two cents...

Contributor

bhouston commented Feb 7, 2014

One thing we may want to do is always use 32-bit ints for the indices. I did some checks, and nearly every device and PC I could find supports the 32-bit index WebGL extension. Having to split meshes into 16-bit chunks is brutal and generally unnecessary.
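Sketched out, the choice reduces to the following (in a real renderer the boolean would come from gl.getExtension( 'OES_element_index_uint' ); here it is a plain parameter so the logic stays self-contained):

```javascript
// Sketch: pick the index array type from the vertex count. Past 65536
// vertices, Uint16 indices can't address everything, so either the
// uint extension is available and we use Uint32, or the geometry would
// have to be split into chunks.
function makeIndexArray( indexCount, vertexCount, hasUintExtension ) {

    if ( vertexCount > 65536 ) {

        if ( ! hasUintExtension ) {

            throw new Error( 'geometry too large for Uint16 indices; would need splitting into chunks' );

        }

        return new Uint32Array( indexCount );

    }

    return new Uint16Array( indexCount );

}
```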

Contributor

bhouston commented Feb 7, 2014

BTW I think this change rocks. :) Just be sure to allow for multiple UV sets, tangents, binormals, etc...

Owner

mrdoob commented Feb 7, 2014

One thing we may want to do is always use 32-bit ints for the indices. I did some checks, and nearly every device and PC I could find supports the 32-bit index WebGL extension.

Interesting...

BTW I think this change rocks. :) Just be sure to allow for multiple UV sets, tangents, binormals, etc...

Will do! ^^

Contributor

crobi commented Feb 10, 2014

This will be the most important and awesome change to three.js in a long time, in my opinion. I've actually been thinking about replacing three.js with some other library because the overhead of all those Vector3s was getting too big.

How will the new Geometry handle MeshFaceMaterials? Those cause overhead as well since the engine has to split faces by the face materials. Are you dropping MeshFaceMaterial and only allowing one material per Geometry? Or will the geometry consist of multiple parts, each having one material and one range of vertices/indices that belong to it?

Dropping indices will increase the file size of models (not sure by how much on average), or require loaders to de-index the data. I was hoping three.js would eventually adopt a glTF-like file format that does not require any preprocessing at all during loading (glTF loads binary blobs into typed arrays and passes the data directly to WebGL buffers).

Owner

mrdoob commented Feb 10, 2014

Are you dropping MeshFaceMaterial and only allowing one material per Geometry?

I think so. Yes.

I was hoping three.js would eventually adopt a glTF-like file format that does not require any preprocessing at all during loading (glTF loads binary blobs into typed arrays and passes the data directly to WebGL buffers).

I think the new structure should be perfect for loading glTF. Shouldn't it?

Contributor

crobi commented Feb 10, 2014

I think so. Yes.

This is only indirectly related to the new Geometry design, but if you're removing MeshFaceMaterial (which I think is a good idea), you might have to adapt other interfaces to simplify the usage of multi-material objects. I assume multi-material Mesh objects won't be supported at all and will have to be implemented via multiple Mesh objects.

  • Some people might expect a loader to support multi-material objects, so loaders might have to output a list of meshes instead of a single mesh.
  • A single skinned object might contain multiple materials (see also issue #3510), so you might have to separate the skeleton and the mesh/geometry (this might simplify the Mesh / SkinnedMesh / MorphAnimMesh hierarchy).

I think the new structure should be perfect for loading glTF. Shouldn't it?

It should. But if you don't support indices at all, vertices have to be duplicated before saving the file, which might increase the file size. It's just a potential disadvantage of not supporting indexed geometry.

Contributor

zz85 commented Feb 18, 2014

i think helper methods in TypedGeometry are a great idea. Would the usual Vector3s be deprecated or merged into it?

Right now I assume that every time you do a geometry.vertex(1) it would return you a new TypedVector3. I think the overhead shouldn't be much, but if it was run in a loop, would it be better to allow copying? eg.

var typeVector = new THREE.TypeVector3();

for ( var i = 0; i < 10000; i ++ ) {
    geometry.vertex( i, typeVector );
    typeVector.set( x, y, z );
}

Right now it seems that the typed arrays in Geometry2 are pre-allocated too. I'm thinking about applications which would require changing the number of vertices, e.g. a modelling app. Would it be better to pre-allocate a large array first, re-create the arrays, or expand them? (SubdivisionModifier might also find "producing" a new geometry easier than "modifying" an existing one.)

Also, any thoughts on how Geometry2 may support a half-edge mesh representation?

Lastly, any planned timeline for supporting Geometry2 across the renderers esp. WebGLRenderer? That may affect how soon we should start thinking geometries in the new format :D

Contributor

bhouston commented Feb 18, 2014

Would the usual Vector3s be deprecated or merged into it?

I really like the math library of ThreeJS and I would suggest keeping it. I would suggest not using it in Geometry2 though. Geometry2 should use flat arrays that are easily uploaded to WebGL. But keeping around Vector3 is useful for doing the math that drives the engine -- all game engines have a 3D math library similar to ThreeJS even if they have flat arrays for their renderable items.

I'm thinking about applications which would require changing the amount of vertices, eg a modelling app for example.

A flat array is by far the best representation, even for modelers.

(SubdivisionModifier might also find "producing" a new geometry easier than "modifying" a existing one).

While this is a useful function for modelling (we use it in http://clara.io), it isn't intended for real-time engines, as that type of modifier is very slow. No game engine that I know of uses SubD in real time, except on the GPU as real-time tessellation (which is slightly different). I do not think one should optimize a WebGL-oriented rendering engine to support SubdivisionModifier. Rather, one should focus on real-time tessellation methods via a geometry shader (but alas, this is not yet supported in WebGL).

Also, any thoughts on how Geometry2 may support a half-edge mesh representation?

No game engines support such a structure. Half-Edge mesh representations are for editing. Usually one translates these structures to flat arrays for rendering. Even game engines with dynamic geometry (such as fracturing engines) do not use half-edge mesh representations.

Contributor

zz85 commented Feb 18, 2014

@bhouston i'm not against the idea of flat arrays but rather curious to know how you or others would support dynamic geometry with flat arrays (Geometry2) - over-allocate, reallocate, or expand the flat arrays?

As for the half-edge representation referenced in the ryg blog, it seems like he converts that data structure into flat arrays too, but considering that he probably uses that for real-time demos, it's possible someone may write half-edge meshes that translate to Geometry2. If so, I think it would be a bonus for more efficient subdivisions, or modelling applications.

Contributor

crobi commented Feb 18, 2014

i'm not against the idea of flat arrays but rather curious to know how you or others would support dynamic geometry with flat arrays

  • If you don't modify the topology of the geometry: Use a vertex shader. If this is not possible, modify vertex data in-place, either using the API mrdoob described (geometry.face( 10 ).vertex( 1 ).x=10;), or accessing the raw data.
  • If you modify the topology (e.g., insert new vertices): create a new geometry object from scratch and delete the old one. If you had some kind of dynamic mesh structure, this is what you would have to do anyway before sending the new vertex data to the GPU.
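The second bullet (rebuild on topology change) is cheap with typed arrays, since copying old data into the new allocation is a bulk .set() (a sketch):

```javascript
// Sketch of the "create a new geometry from scratch" path: on a
// topology change, allocate a fresh flat array, bulk-copy the old
// data, and append the new vertices. The old array (and its GPU
// buffer) would then be discarded.
function appendVertices( oldPositions, newVertices ) {

    var out = new Float32Array( oldPositions.length + newVertices.length );
    out.set( oldPositions, 0 );                    // bulk copy of old data
    out.set( newVertices, oldPositions.length );   // append new vertices
    return out;

}

var grown = appendVertices( new Float32Array( [ 0, 0, 0 ] ), [ 1, 2, 3 ] );
```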

Also, any thoughts on how Geometry2 may support a half-edge mesh representation?

You could build a separate half-edge mesh structure that outputs corresponding flat arrays when desired. A common use case for a javascript render engine is to load some geometry and display it in real time, without any modifications (game engines, 3d product previews). If you want this to be performant on mobile devices, having flat arrays that are sent 1:1 to the GPU is pretty much the only option. Anything with lots of small objects will have a negative impact on the memory performance of the engine.

Contributor

bhouston commented Feb 18, 2014

If so, I think it would be a bonus for more efficient subdivisions, or modelling applications.

The structures one wants for a modeling application (and I know, as we created http://clara.io) are not really like Geometry2. They are quite a bit different. And they actually cannot be displayed directly; rather, we translate our modeling structure into THREE.Geometry (and hopefully soon THREE.Geometry2) for display.

Real-time display of geometry is sort of a different problem than editing/modeling of geometry. I think it is best to have different structures for them so that each structure is efficient at its task, rather than finding an intermediate structure that is suboptimal for both tasks. (Or excessively complex.)

Contributor

zz85 commented Feb 18, 2014

i see, Geometry2 now sounds more like a wrapper of elements/buffers for the GPU.

Owner

mrdoob commented Feb 18, 2014

The structures one wants for a modeling application (and I know, as we created http://clara.io) are not really like Geometry2. They are quite a bit different. And they actually cannot be displayed directly; rather, we translate our modeling structure into THREE.Geometry (and hopefully soon THREE.Geometry2) for display.

Why aren't you translating to THREE.BufferGeometry?

Owner

mrdoob commented Feb 18, 2014

Lastly, any planned timeline for supporting Geometry2 across the renderers esp. WebGLRenderer? That may affect how soon we should start thinking geometries in the new format :D

It should already work... f1630e9 :)

Owner

mrdoob commented Feb 18, 2014

I'm thinking, however, that maybe Geometry2 should just extend BufferGeometry...

Owner

mrdoob commented Feb 18, 2014

I'm thinking, however, that maybe Geometry2 should just extend BufferGeometry...

Hmmm, it may not be a bad idea... 8adc048

Contributor

bhouston commented Feb 19, 2014

Why aren't you translating to THREE.BufferGeometry?

Our dynamic topology means we can change vertex counts, and we can easily exceed the int16 limit on the indices. We do not want, at this point, to manage multiple THREE.BufferGeometry's, so we are letting ThreeJS's automatic splitting of Geometry do it this way. But it is really slow when this happens. Thus we would prefer for ThreeJS to add support for int32 index buffers (and possibly drop int16 index buffer support, but I am not ideological on this, I just think it isn't useful) so splitting is not necessary and we can use a single THREE.BufferGeometry per object in our scene.

The other thing that would help is the ability to share buffers between THREE.BufferGeometries. One thing we do is draw both wireframes and solid meshes for the same object. The wireframe for a polymesh and its solid representation share the same vertices, UVs and normals, although the index buffers for the line set and the triangulated surface differ.
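That sharing could look something like this (plain objects standing in for THREE.BufferGeometry; a sketch of the idea, not an existing API):

```javascript
// Sketch of shared buffers: a solid mesh and its wireframe reference
// the *same* vertex array, each with its own index buffer. Deforming
// a vertex once updates both representations.
var positions = new Float32Array( [ 0, 0, 0,  1, 0, 0,  1, 1, 0,  0, 1, 0 ] );

var solid = {
    attributes: { position: { array: positions, itemSize: 3 } },
    index: new Uint16Array( [ 0, 1, 2,  0, 2, 3 ] )           // two triangles
};

var wireframe = {
    attributes: { position: { array: positions, itemSize: 3 } }, // shared!
    index: new Uint16Array( [ 0, 1,  1, 2,  2, 3,  3, 0 ] )   // line segments
};

// One write, visible through both geometries.
positions[ 0 ] = 5;
```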

In part this sharing is important to us for maximum speed, but I understand we are relatively unique in this area so I am not pushing on it. That prototype direct renderer I wrote supports this type of structure -- sharing vertices and UV between a lineset and a triangulated surface.

Contributor

bhouston commented Feb 19, 2014

In part this sharing is important to us for maximum speed, but I understand we are relatively unique in this area so I am not pushing on it. That prototype direct renderer I wrote supports this type of structure -- sharing vertices and UV between a lineset and a triangulated surface.

I think in the end we will likely have to move towards something like that direct renderer approach I wrote (allowing one to skip the intermediate ThreeJS scene representation), because it can give us gains by being nearly perfectly optimal (both for speed and for GPU/CPU memory usage) for our case, at the cost of being a bit difficult to use. It is just a question of getting the time to finish the direct renderer and unifying the shaders as much as possible.

Owner

mrdoob commented Feb 19, 2014

Thus we would prefer for ThreeJS to add support for int32 index buffers (and possibly drop int16 index buffer support, but I am not ideological on this, I just think it isn't useful) so splitting is not necessary and we can use a single THREE.BufferGeometry per object in our scene.

You mean using the OES_element_index_uint extension?

Contributor

zz85 commented Feb 19, 2014

Would interleaved attributes be of consideration? See http://i-am-glow.com/?p=165

While on this topic of geometry performance, it maybe also worth taking a look at the rendering approach done in @empaempa's GLOW library.

Contributor

bhouston commented Feb 19, 2014

You mean using the OES_element_index_uint extension?

Yes. :)

Would interleaved attributes be of consideration?

While they can offer a bit of improved performance (or so they say), they are horrible to update in response to changes, as you need to update the whole interleaved buffer rather than just one attribute-specific flat array. Also, the interleaved structure changes based on which attributes are enabled, which can make things a bit complex.
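To illustrate the update pain, here is a sketch assuming a hypothetical layout of [ position, normal, uv ] per vertex with a stride of 8 floats:

```javascript
// Sketch of an interleaved vertex layout: [ x y z  nx ny nz  u v ]
// per vertex, stride 8 floats. Updating just the normals means
// strided writes scattered across the whole buffer; with one flat
// array per attribute it would be a single contiguous .set().
var VERTEX_STRIDE = 8; // floats per vertex
var vertexCount = 3;
var interleaved = new Float32Array( vertexCount * VERTEX_STRIDE );

function setNormal( i, x, y, z ) {

    var o = i * VERTEX_STRIDE + 3; // normals start at float offset 3
    interleaved[ o ] = x;
    interleaved[ o + 1 ] = y;
    interleaved[ o + 2 ] = z;

}

for ( var i = 0; i < vertexCount; i ++ ) setNormal( i, 0, 0, 1 );
```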

Contributor

empaempa commented Feb 19, 2014

@zz85 Interleaving is cool for (the bus') performance but really painful to implement. I'd leave it on the roadmap for now ;)

Owner

mrdoob commented Feb 19, 2014

Contributor

bhouston commented Feb 19, 2014

@gero3 @mrdoob Whoa, sweet! Thank you!

mrdoob referenced this issue in jamieowen/threeflow.js Feb 19, 2014

Closed

BufferGeometryExporter #2

@crobi & @mrdoob. About leaving one material per Geometry: 3d artists use multi-materials ("MM" hereafter) in practice rather often. Their exclusion from the engine can cause problems.

Artificially splitting a MM object into separate parts by material can be a bad idea. For example, say we deform a MM surface: it can sometimes be hard to preserve its continuity after splitting. Or, in some cases a user needs to know that "this set of objects" is really a single "logical" object, so they must maintain additional data and code to support this. Such things look unnatural.

Right now you are providing these excellent things (continuity and others) in Three.js (thank you!). After MM exclusion, some users will have to reproduce these tricky things each time in their own code.

p.s. I hope that I understood you incorrectly and you are not going to remove MM support.

Contributor

empaempa commented Feb 20, 2014

@a-plotnik This is true, but in the realtime world MM isn't widely supported - at least not on weak platforms such as mobiles and the web. I'd rather see some way to easily combine shaders/materials into one shader to simulate MM :)

Contributor

bhouston commented Feb 20, 2014

@a-plotnik It is true that DirectX (both the old-style v9 and the new-style v10/v11) supports multi-material meshes, but it does so in a simpler and more restricted way than ThreeJS's current support. The way that DirectX does multiple materials is to allow one to render subsets of the index buffer -- for example, you can render 0-1100 and then later render 1101-2500. One usually applies a different material to each of these contiguous subsets of the index buffer. This means that you do not have a per-face material index as ThreeJS has now. It also means that the renderer doesn't have to do much at all to support multi-materials in this fashion -- it just has to render a subset of the indices.

If ThreeJS wanted to support multi-materials in this fashion, the way that it would do it would be to have a list of subset ranges. And one would be able to apply materials to each subset as an alternative to setting on the mesh as a whole. This would be very fast, the only cost would be switching materials as all the buffers would remain loaded and unchanged.
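A sketch of how such subset ranges could be derived from per-face material ids (hypothetical helper, not an existing three.js function):

```javascript
// Sketch of the DirectX-style approach: instead of a material index
// per face, store contiguous ranges of the index buffer, each drawn
// with one material. This helper groups consecutive per-face material
// ids into { start, count, materialIndex } ranges.
function computeMaterialRanges( faceMaterials ) { // one material id per face

    var ranges = [];

    for ( var f = 0; f < faceMaterials.length; f ++ ) {

        var last = ranges[ ranges.length - 1 ];

        if ( last && last.materialIndex === faceMaterials[ f ] ) {

            last.count += 3; // three indices per face

        } else {

            ranges.push( { start: f * 3, count: 3, materialIndex: faceMaterials[ f ] } );

        }

    }

    return ranges;

}

// Faces 0-1 use material 0, faces 2-3 use material 1:
var ranges = computeMaterialRanges( [ 0, 0, 1, 1 ] );
```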

I am not sure this is a priority for us, I'm just sharing this so one can learn from how DirectX does it.

Owner

mrdoob commented Feb 20, 2014

Hmmm... BufferGeometry kind of already supports this with its offsets array... ^^

@empaempa My current target is browsers on desktops. Mobile devices with new surprises are expected later, so thank you for the warnings.

@bhouston Thanks a lot, the situation becomes more clear.

I was worried that multi-materials might disappear from the high-level API, and maybe the integrity of the corresponding objects with them. I didn't try to touch the internal Geometry representation.

@mrdoob
Thank you for this whole topic. I have only used Geometry before, and only looked at it.

Contributor

Usnul commented Mar 7, 2014

@mrdoob
have a look at https://developer.mozilla.org/en-US/docs/Web/API/ArrayBufferView
This is probably what could make the whole typed array thing work in the context of geometry. A view can specify an offset, and since it's part of the language, you can probably expect higher performance. That, and the fact that you get a pointer to a section of a buffer, which is really what you seem to be trying to do.

On another note, I'm not quite sure what the fundamental issue with the current Geometry is, or how far the implications of replacing it would go - could you clarify? Generally there will always be trade-offs in implementation; it would be cool if the design could be driven primarily by ease of use, while mostly keeping the API consistent.

Owner

mrdoob commented Mar 9, 2014

@Usnul your comment made me try this:

THREE.Geometry3 = function ( size ) {

    THREE.BufferGeometry.call( this );

    var verticesBuffer = new ArrayBuffer( size * 3 * 4 );
    var normalsBuffer = new ArrayBuffer( size * 3 * 4 );
    var uvsBuffer = new ArrayBuffer( size * 2 * 4 );

    this.vertices = [];
    this.normals = [];
    this.uvs = [];

    for ( var i = 0; i < size; i ++ ) {

        this.vertices.push( new Float32Array( verticesBuffer, i * 3 * 4, 3 ) );
        this.normals.push( new Float32Array( normalsBuffer, i * 3 * 4, 3 ) );
        this.uvs.push( new Float32Array( uvsBuffer, i * 2 * 4, 2 ) );

    }

    this.attributes[ 'position' ] = { array: new Float32Array( verticesBuffer, 0, size * 3 ), itemSize: 3 };
    this.attributes[ 'normal' ] = { array: new Float32Array( normalsBuffer, 0, size * 3 ), itemSize: 3 };
    this.attributes[ 'uv' ] = { array: new Float32Array( uvsBuffer, 0, size * 2 ), itemSize: 2 };

};

THREE.Geometry3.prototype = Object.create( THREE.BufferGeometry.prototype );

This results in a nice-ish API...

geometry.vertices[ 0 ][ 0 /*x*/ ] = 1;

Sadly, creating all these ArrayBufferViews is slow.

Contributor

jbaicoianu commented Mar 9, 2014

I was discussing this briefly with jetienne and merpnderp on irc earlier and we had the idea of using accessors and proxy objects to help with this sort of thing. Here's my implementation of that. This allows you to do any of the normal operations you'd expect to be able to do on normal vertices, and is identical to how you would work with the old THREE.Geometry:

var geom = new THREE.Geometry4(10);
geom.vertices[0].set(1, 2, 3);
geom.vertices[1].copy(someOtherVec);
geom.vertices[2].addVectors(geom.vertices[0], geom.vertices[1]);
geom.vertices[3].y = 10;
console.log(geom.vertices[2]);

Here's what the code looks like:

THREE.Geometry4 = function ( size ) {

        THREE.BufferGeometry.call( this );

        var verticesBuffer = new ArrayBuffer( size * 3 * 4 );
        var normalsBuffer = new ArrayBuffer( size * 3 * 4 );
        var uvsBuffer = new ArrayBuffer( size * 2 * 4 );

        this.attributes[ 'position' ] = { array: new Float32Array( verticesBuffer, 0, size * 3 ), itemSize: 3 };
        this.attributes[ 'normal' ] = { array: new Float32Array( normalsBuffer, 0, size * 3 ), itemSize: 3 };
        this.attributes[ 'uv' ] = { array: new Float32Array( uvsBuffer, 0, size * 2 ), itemSize: 2 };

        this.vertices = new THREE.VectorArrayProxy( this.attributes[ 'position' ] );
        this.normals = new THREE.VectorArrayProxy( this.attributes[ 'normal' ] );
        this.uvs = new THREE.VectorArrayProxy( this.attributes[ 'uv' ] );

};
THREE.Geometry4.prototype = Object.create( THREE.BufferGeometry.prototype );

THREE.VectorArrayProxy = function(attribute) {

        // Acts as a proxy for an array of vectors, by setting up accessors which return THREE.Vector*Proxy objects

        this.attribute = attribute;

        for (var i = 0, l = this.attribute.array.length / this.attribute.itemSize; i < l; i++)  {

                Object.defineProperty(this, i, {
                        get: (function(i) { return function() { return this.getValue(i); }})(i),
                        set: (function(i) { return function(v) { return this.setValue(i, v); }})(i),
                });

        }

}

THREE.VectorArrayProxy.prototype.getValue = function(i) {

        // Allocates a new THREE.Vector2Proxy or THREE.Vector3Proxy depending on the itemSize of our attribute

        var subarray = this.attribute.array.subarray(i * this.attribute.itemSize, (i + 1) * this.attribute.itemSize);

        switch (this.attribute.itemSize) {

                case 2:
                        return new THREE.Vector2Proxy(subarray);

                case 3:
                        return new THREE.Vector3Proxy(subarray);

        }

}
THREE.VectorArrayProxy.prototype.setValue = function(i, v) {

        var vec = this[i];
        vec.copy(v);

}

// Vector Proxy Objects

THREE.Vector2Proxy = function(subarray) {

        this.subarray = subarray;

}
THREE.Vector2Proxy.prototype = Object.create( THREE.Vector2.prototype );
Object.defineProperty(THREE.Vector2Proxy.prototype, 'x', { get: function() { return this.subarray[0]; }, set: function(v) { this.subarray[0] = v; } });
Object.defineProperty(THREE.Vector2Proxy.prototype, 'y', { get: function() { return this.subarray[1]; }, set: function(v) { this.subarray[1] = v; } });


THREE.Vector3Proxy = function(subarray) {

        this.subarray = subarray;

}
THREE.Vector3Proxy.prototype = Object.create( THREE.Vector3.prototype );

Object.defineProperty(THREE.Vector3Proxy.prototype, 'x', { get: function() { return this.subarray[0]; }, set: function(v) { this.subarray[0] = v; } });
Object.defineProperty(THREE.Vector3Proxy.prototype, 'y', { get: function() { return this.subarray[1]; }, set: function(v) { this.subarray[1] = v; } });
Object.defineProperty(THREE.Vector3Proxy.prototype, 'z', { get: function() { return this.subarray[2]; }, set: function(v) { this.subarray[2] = v; } });

I haven't benchmarked this, so I don't know how this would perform with highly-dynamic geometry. If the idea of using accessor functions so heavily isn't too much of a turn-off, it would be worth benchmarking to see what sort of performance impact this has in various use cases.

Owner

mrdoob commented Mar 9, 2014

Nice! It's similar to the approach in #3672, but I hadn't come up with the idea of creating all the getters/setters for i :)

Owner

mrdoob commented Mar 9, 2014

Also the idea of extending Vector3 and patching its x, y and z...

Contributor

bhouston commented Mar 9, 2014

I also think that creating properties per element in the large array is highly problematic in terms of memory usage -- now you have two pointers per item of the original array and remember that pointers are at least 32 bits, thus you have probably doubled memory consumption, and it is probably worse than that. I'd recommend against being too fancy.

Owner

mrdoob commented Mar 9, 2014

Sadly...

new THREE.Geometry3( 200 ); // 1.378ms
new THREE.Geometry4( 200 ); // 8.273ms

I also think that creating properties per element in the large array is highly problematic in terms of memory usage -- now you have two pointers per item of the original array and remember that pointers are at least 32 bits, thus you have probably doubled memory consumption, and it is probably worse than that. I'd recommend against being too fancy.

Yep! There has to be a way though... :)

Owner

mrdoob commented Mar 9, 2014

@jbaicoianu here's a more efficient approach:

THREE.Geometry5 = function ( size ) {

    THREE.BufferGeometry.call( this );

    var verticesBuffer = new Float32Array( size * 3 );
    var normalsBuffer = new Float32Array( size * 3 );
    var uvsBuffer = new Float32Array( size * 2 );

    this.vertices = [];
    this.normals = [];
    this.uvs = [];

    for ( var i = 0; i < size; i ++ ) {

        this.vertices.push( new THREE.TypedVector3( verticesBuffer, i * 3 ) );
        this.normals.push( new THREE.TypedVector3( normalsBuffer, i * 3 ) );
        this.uvs.push( new THREE.TypedVector2( uvsBuffer, i * 2 ) );

    }

    this.attributes[ 'position' ] = { array: verticesBuffer, itemSize: 3 };
    this.attributes[ 'normal' ] = { array: normalsBuffer, itemSize: 3 };
    this.attributes[ 'uv' ] = { array: uvsBuffer, itemSize: 2 };

};

THREE.Geometry5.prototype = Object.create( THREE.BufferGeometry.prototype );

THREE.TypedVector2 = function ( array, offset ) {

    this.array = array;
    this.offset = offset;

};

THREE.TypedVector2.prototype = Object.create( THREE.Vector2.prototype );

Object.defineProperties( THREE.TypedVector2.prototype, {
    'x': {
        get: function () { return this.array[ this.offset ]; },
        set: function ( v ) { this.array[ this.offset ] = v; }
    },
    'y': {
        get: function () { return this.array[ this.offset + 1 ]; },
        set: function ( v ) { this.array[ this.offset + 1 ] = v; }
    }
} );

THREE.TypedVector3 = function ( array, offset ) {

    this.array = array;
    this.offset = offset;

};

THREE.TypedVector3.prototype = Object.create( THREE.Vector3.prototype );

Object.defineProperties( THREE.TypedVector3.prototype, {
    'x': {
        get: function () { return this.array[ this.offset ]; },
        set: function ( v ) { this.array[ this.offset ] = v; }
    },
    'y': {
        get: function () { return this.array[ this.offset + 1 ]; },
        set: function ( v ) { this.array[ this.offset + 1 ] = v; }
    },
    'z': {
        get: function () { return this.array[ this.offset + 2 ]; },
        set: function ( v ) { this.array[ this.offset + 2 ] = v; }
    }
} );
Owner

mrdoob commented Mar 9, 2014

I think I'm going to proceed with this one :)

The benchmark is not very fair at the moment because PlaneGeometry is indexed and PlaneGeometry5 is not, but these are the results:

new THREE.PlaneGeometry( 200, 200, 200, 200 ); // ~370ms, 54Mb
new THREE.PlaneGeometry5( 200, 200, 200, 200 ); // ~100ms, 20Mb

I'll work on THREE.IndexedPlaneGeometry5 next.

Owner

mrdoob commented Mar 9, 2014

@bhouston By the way, this is basically an improvement to Geometry. I'm aiming to find something fairly user-friendly that sits on top of BufferGeometry. For efficiency, one should probably deal with BufferGeometry directly.

Also, it would be nice to be able to do something like new THREE.Geometry( BufferGeometry ) to be able to manipulate a BufferGeometry in an easy (albeit non-performant) way.

Owner

mrdoob commented Mar 9, 2014

Ok. Here's PlaneGeometry vs IndexedPlaneGeometry5... :)

new THREE.PlaneGeometry( 200, 200, 200, 200 ); // ~370ms, 54Mb
new THREE.IndexedPlaneGeometry5( 200, 200, 200, 200 ); // ~24ms, 7Mb

For reference, these are the results using plain BufferGeometry:

new THREE.PlaneGeometryB( 200, 200, 200, 200 ); // ~8ms, 3Mb
Collaborator

WestLangley commented Mar 9, 2014

@mrdoob Hehe. That's what I meant by "clever" the other day!

Owner

mrdoob commented Mar 9, 2014

@WestLangley kudos to @jbaicoianu for the nudge ^^

Owner

mrdoob commented Mar 9, 2014

Now the question is... should the primitive generators, and loaders use this interface? or leave this just for examples/tutorials and keep the internals as fast as possible?

Contributor

jbaicoianu commented Mar 9, 2014

Sounds like ideally, all of the primitive generators and loaders should be converted to raw BufferGeometry - it always surprised me how long it took to allocate a simple PlaneGeometry with high subdivision, so a 15x speedup is nice but a 46x improvement is AWESOME. As long as there's an easy way to convert a BufferGeometry to one of these new easier-to-use geometry wrappers, then it seems most people would appreciate the efficiency at init time.

I wonder how well some of the more complex geometries (like ExtrudeGeometry) will work by just swapping out the base Geometry for this new implementation. It would be awesome if the compatibility layer was so close to the real thing that we could use these with all the BufferGeometry efficiencies, without having to throw the code out and start over.

Owner

mrdoob commented Mar 10, 2014

It would be awesome if the compatibility layer was so close to the real thing that we could use these with all the BufferGeometry efficiencies, without having to throw the code out and start over.

Sadly we'll have to re-code quite a few things... geometry.vertices structure is backwards compatible, but all the rest won't be...

Contributor

zz85 commented Mar 10, 2014

👍 THREE.Geometry5 😉

Owner

mrdoob commented Mar 10, 2014

This is what PlaneGeometry looks like using raw BufferGeometry, btw.

/**
 * @author mrdoob / http://mrdoob.com/
 * based on http://papervision3d.googlecode.com/svn/trunk/as3/trunk/src/org/papervision3d/objects/primitives/Plane.as
 */

THREE.PlaneGeometryB = function ( width, height, widthSegments, heightSegments ) {

    THREE.BufferGeometry.call( this );

    this.width = width;
    this.height = height;

    this.widthSegments = widthSegments || 1;
    this.heightSegments = heightSegments || 1;

    var width_half = width / 2;
    var height_half = height / 2;

    var gridX = this.widthSegments;
    var gridY = this.heightSegments;

    var gridX1 = gridX + 1;
    var gridY1 = gridY + 1;

    var segment_width = this.width / gridX;
    var segment_height = this.height / gridY;

    var vertices = new Float32Array( gridX1 * gridY1 * 3 );
    var normals = new Float32Array( gridX1 * gridY1 * 3 );
    var uvs = new Float32Array( gridX1 * gridY1 * 2 );

    var offset = 0;
    var offset2 = 0;

    for ( var iy = 0; iy < gridY1; iy ++ ) {

        var y = iy * segment_height - height_half;

        for ( var ix = 0; ix < gridX1; ix ++ ) {

            var x = ix * segment_width - width_half;

            vertices[ offset     ] = x;
            vertices[ offset + 1 ] = - y;

            normals[ offset + 2 ] = 1;

            uvs[ offset2 ] = ix / gridX;
            uvs[ offset2 + 1 ] = iy / gridY;

            offset += 3;
            offset2 += 2;

        }

    }

    var indices = new Uint16Array( gridX * gridY * 6 );

    var offset = 0;

    for ( var iy = 0; iy < gridY; iy ++ ) {

        for ( var ix = 0; ix < gridX; ix ++ ) {

            var a = ix + gridX1 * iy;
            var b = ix + gridX1 * ( iy + 1 );
            var c = ( ix + 1 ) + gridX1 * ( iy + 1 );
            var d = ( ix + 1 ) + gridX1 * iy;

            indices[ offset     ] = a;
            indices[ offset + 1 ] = b;
            indices[ offset + 2 ] = d;

            indices[ offset + 3 ] = b;
            indices[ offset + 4 ] = c;
            indices[ offset + 5 ] = d;

            offset += 6;

        }

    }

    this.attributes[ 'index' ] = { array: indices, itemSize: 1 };
    this.attributes[ 'position' ] = { array: vertices, itemSize: 3 };
    this.attributes[ 'normal' ] = { array: normals, itemSize: 3 };
    this.attributes[ 'uv' ] = { array: uvs, itemSize: 2 };

};

THREE.PlaneGeometryB.prototype = Object.create( THREE.BufferGeometry.prototype );

Not very user friendly, but efficient.
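For a sense of what "not very user friendly" means in practice, here is a minimal standalone sketch of the bookkeeping a user takes on when editing flat attributes like these directly (plain arrays only, no three.js types; `setVertexZ` is a hypothetical helper):

```javascript
// With flat attributes, vertex i of an itemSize-3 array lives at
// indices [ i * 3, i * 3 + 1, i * 3 + 2 ] - the user does this math by hand.
var gridX1 = 5, gridY1 = 5; // a 4x4-segment plane has 5x5 vertices
var positions = new Float32Array( gridX1 * gridY1 * 3 );

// Raise the z component of the vertex at grid cell ( ix, iy ).
function setVertexZ( positions, ix, iy, z ) {

    var i = ix + gridX1 * iy;    // same vertex indexing as the index loop above
    positions[ i * 3 + 2 ] = z;

}

setVertexZ( positions, 2, 1, 25 );
console.log( positions[ ( 2 + gridX1 * 1 ) * 3 + 2 ] ); // 25
```

Every read or write requires knowing the grid layout, the item size, and the component offset; that is exactly the knowledge the wrapper approaches above try to hide.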

Contributor

Usnul commented Mar 10, 2014

@mrdoob I would caution against trading readability for performance; a user-friendly API is usually more important in the long run, just like code that's easier to maintain. But these snippets look quite promising!

IMHO you should go deep with performance and leave "easy coding" for tquery (http://jeromeetienne.github.io/tquery/)

Contributor

Usnul commented Mar 10, 2014

@rafaelcastrocouto
jQuery has a use, because of how css works, and because of how html works. tQuery is a cool idea, without the solid basis of jQuery. jQuery exists as an extension to DOM api, this is because that api is lacking. Three.js has the freedom to work on its API and improve it, it doesn't have the kind of momentum DOM API does, which requires a very long time to change anything. Looking at tQuery i see a layer of indirection which reduces amount of code I write in small quantities and gives me nothing to help with the kind of code that I write in large quantities. Because of interest invested into tQuery - it may bring cool API ideas for 3d in general, but those same ideas can be merged back into threejs. "easy coding" is smacking keyboard with a fish :)

Contributor

bhouston commented Mar 10, 2014

My two cents is I prefer the direct use of BufferGeometry rather than being tricky. What @mrdoob outlines in this comment:

#4386 (comment)

It is just easier for me to understand, it is incredibly performant, and it still has the easy-to-use interface provided by functions.

If one wants to make it easy to use elsewhere for custom geometry creation, just have a CustomGeometry node that wraps a BufferGeometry and has useful functions like this.setVertices( array of Vector3, optionalOffset, optionalLength ), this.setFaces( array of Faces, optionalOffset, optionalLength ), etc.
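A standalone sketch of what such a setVertices() could look like (the CustomGeometry name and signature come from the comment above and are hypothetical; plain { x, y, z } objects stand in for THREE.Vector3):

```javascript
// A wrapper around a BufferGeometry-style attribute map with a
// convenience setter that copies vector objects into the flat buffer.
function CustomGeometry( vertexCount ) {

    this.attributes = {
        'position': { array: new Float32Array( vertexCount * 3 ), itemSize: 3 }
    };

}

CustomGeometry.prototype.setVertices = function ( vectors, optionalOffset ) {

    var array = this.attributes[ 'position' ].array;
    var offset = ( optionalOffset || 0 ) * 3;

    for ( var i = 0; i < vectors.length; i ++ ) {

        array[ offset ++ ] = vectors[ i ].x;
        array[ offset ++ ] = vectors[ i ].y;
        array[ offset ++ ] = vectors[ i ].z;

    }

};

var geometry = new CustomGeometry( 2 );
geometry.setVertices( [ { x: 1, y: 2, z: 3 }, { x: 4, y: 5, z: 6 } ] );
console.log( geometry.attributes[ 'position' ].array[ 4 ] ); // 5
```

The user hands over familiar vector objects once, and the class does the flattening, rather than hiding the buffer behind per-component getters.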

Contributor

crobi commented Mar 10, 2014

I agree with bhouston. I care about performance and value it higher than user-friendliness of on-the-fly mesh modifications. I also feel comfortable enough directly manipulating the plain buffers (but maybe that is just my experience with graphics programming).

I would prefer the core of three.js to use a plain BufferGeometry-like structure (including geometry loaders, geometry generators, tutorials and examples) and provide a wrapper for mesh manipulations.

Contributor

bhouston commented Mar 11, 2014

@crobi I'd actually argue that the approach outlined by @mrdoob here #4386 (comment) is actually more user friendly than the trickiness of the other approaches. The reason it is more user friendly is that it is understandable. It sucks trying to explain trickiness because it requires communicating both the underlying aspects as well as the trickiness on top. I'd rather just explain the underlying aspects in a more direct manner.

Owner

mrdoob commented Mar 12, 2014

So here's the latest approach... 9858c27

Collaborator

WestLangley commented Mar 12, 2014

How about calling it BufferGeometryHelper?

Owner

mrdoob commented Mar 12, 2014

I considered that. But the API has sort of established that *Helper is a visual helper.

But yes, naming suggestions welcomed.

Owner

mrdoob commented Mar 12, 2014

Maybe just GeometryManipulator, or GeometryHandler...

Contributor

zz85 commented Mar 12, 2014

GeometryWrapper.

btw @mrdoob's new picture is taking a little time getting used to 😝

Owner

mrdoob commented Mar 12, 2014

btw @mrdoob's new picture is taking a little time getting used to 😝

haha! yeah, taking me some time too ;)

Contributor

gero3 commented Mar 13, 2014

What is the plan around releasing this?

Contributor

bhouston commented Mar 13, 2014

I like the most recent version, because it keeps the core simple but it is easy to use the helper when needed. When I've used this pattern before, I've called it Mesh and MeshEditor. If one mapped that naming scheme to this situation, one would end up with GeometryEditor or BufferGeometryEditor.

Collaborator

WestLangley commented Mar 13, 2014

+1 What @bhouston said.

Owner

mrdoob commented Mar 13, 2014

What is the plan around releasing this?

I may hold off on releasing this month. I would like to feel good about this, and there are a few things broken right now.

Ideally, I would like to have all the geometry generators and loaders converted for the next release, but I suspect that's not realistic.

An approach I went through was having PlaneGeometry and PlaneBufferGeometry, where PlaneGeometry basically adds all the helpers to what PlaneBufferGeometry produces. This is the most backwards-compatible way. It's definitely faster than the current PlaneGeometry, but not as fast/optimised as PlaneBufferGeometry.

That's what I'm still thinking about... Go with BufferGeometry based PlaneGeometry + GeometryEditor, or PlaneGeometry + PlaneBufferGeometry...

Contributor

jbaicoianu commented Mar 14, 2014

@mrdoob, if the primary goal is to clean up all the Geometry specific code from WebGLRenderer, isn't the majority of that code essentially the same as the code to convert a Geometry to a BufferGeometry? What if Mesh automatically turned a legacy Geometry into one or more BufferGeometry objects on init, then the Geometry specific stuff in WebGLRenderer and all the stuff with geometry.geometryGroups could be stripped out, and as far as WebGLRenderer is concerned everything's a BufferGeometry.

Something like this:

THREE.Mesh = function ( geometry, material ) {
    THREE.Object3D.call( this );
    if (geometry === undefined) {
        this.geometry = new THREE.BufferGeometry();
    } else if (geometry instanceof THREE.Geometry) {
        this.geometry = this.convertGeometry(geometry, material);
    } else {
        this.geometry = geometry;
    }
}

THREE.Mesh.prototype = Object.create( THREE.Object3D.prototype );

THREE.Mesh.prototype.convertGeometry = function(geometry, material) {
    if (material instanceof THREE.MeshFaceMaterial) {
        // Multi-material mesh, move the geometryGroups splitting code here
        var splitgeometry = ...;

        for (var i = 0; i < splitgeometry.length; i++) {
            this.add(new THREE.Mesh(splitgeometry[i], material.materials[i]));
        }
        return new THREE.BufferGeometry();
    } else {
        return THREE.BufferGeometryUtils.fromGeometry(geometry);
    }
}

If this suggestion is getting a bit off of the original topic I can file this as a new issue, just trying to consider another possible option for how to enable the WebGLRenderer clean-up without having to ask everyone to sacrifice their old code. This suggestion would allow Geometry to live alongside BufferGeometry without having to keep a separate codepath in WebGLRenderer's internals, and people could update their code to take full advantage of BufferGeometry and its helpers as needed.
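For illustration, here is a minimal standalone sketch of the kind of conversion a fromGeometry() utility would perform in the common case (plain objects stand in for THREE.Vector3 and THREE.Face3; non-indexed output only, with normals, uvs, and material groups left out):

```javascript
// Expand indexed faces into a flat, non-indexed position buffer:
// each triangle contributes 3 vertices of 3 components each.
function fromGeometry( vertices, faces ) {

    var positions = new Float32Array( faces.length * 3 * 3 );
    var offset = 0;

    for ( var i = 0; i < faces.length; i ++ ) {

        var face = faces[ i ];

        [ face.a, face.b, face.c ].forEach( function ( index ) {

            positions[ offset ++ ] = vertices[ index ].x;
            positions[ offset ++ ] = vertices[ index ].y;
            positions[ offset ++ ] = vertices[ index ].z;

        } );

    }

    return { 'position': { array: positions, itemSize: 3 } };

}

var verts = [ { x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 }, { x: 0, y: 1, z: 0 } ];
var attrs = fromGeometry( verts, [ { a: 0, b: 1, c: 2 } ] );
console.log( attrs[ 'position' ].array.length ); // 9
```

The real converter also has to carry over normals, uvs, colors, and material indices, which is where most of the Geometry-specific code in WebGLRenderer currently lives.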

Contributor

zz85 commented Mar 14, 2014

I think another reason why mrdoob wants to get rid of the old Geometry (apart from renderer complexity) is that Geometry objects are potentially a performance hog for large amounts of vertices.

In my humble opinion, everyone has a level of abstraction they are comfortable with. The old Geometry feels like a good abstraction for building your own geometries, while BufferGeometry feels like a good abstraction of how the buffers are represented.

Personally, the "rawer" BufferGeometry feels more suited to exporters, machines, or humans who want to optimize their stuff. I think the users of BufferGeometry would have no problem writing or creating their own Geometry primitives easily. The rest of the users (or maybe just me) might still prefer to think of meshes with vertices and faces, which is probably where GeometryManipulator (or what I see as a wrapper) comes in.

Owner

mrdoob commented Mar 14, 2014

@mrdoob, if the primary goal is to clean up all the Geometry specific code from WebGLRenderer, isn't the majority of that code essentially the same as the code to convert a Geometry to a BufferGeometry? What if Mesh automatically turned a legacy Geometry into one or more BufferGeometry objects on init, then the Geometry specific stuff in WebGLRenderer and all the stuff with geometry.geometryGroups could be stripped out, and as far as WebGLRenderer is concerned everything's a BufferGeometry.

Yes, I thought of that. And it would indeed help to have a better understanding of WebGLRenderer. But then it would pollute Mesh, Line and ParticleSystem... I thought it would be a better approach to produce BufferGeometry directly rather than converting.

Contributor

jbaicoianu commented Mar 14, 2014

I agree that generating the included PlaneGeometry, SphereGeometry, etc. should be done with BufferGeometry (or at least that there should be a PlaneBufferGeometry option - I can see benefits to both approaches), and that people should be encouraged to use BufferGeometry or a wrapper for their own cases, but I also understand how frustrating it is for users of the library when major API changes like this force them to rewrite large parts of their code. This sort of change often leaves people stuck between a rock and a hard place, when they want/need some cool new feature that has been added but upgrading means a couple days of going through and refactoring their codebase, updating libraries, etc, and the library earns a reputation for having an unstable API.

I would consider my suggestion to be a transitional thing, a way that we could give the nice "deprecated" message but still work with existing scenes, and then at some future point when everyone has had plenty of time to convert their code and the existing Three.js code, it could be removed and BufferGeometry with its accompanying wrappers would be the only option that needs to be supported. This would be in line with how deprecation currently works in Three.js - normally the deprecated function complains but still works as it did before, until it's stripped out in the next release.

Another option rather than doing the conversion directly in the Mesh code would be to throw a deprecated warning in there if THREE.Geometry is passed, and provide the above Mesh.convertGeometry() code as part of BufferGeometryUtils or GeometryUtils. Developers would then be forced to go through and change all of those instances rather than it seamlessly working, but the change is a lot less daunting if the deprecated warning tells you that it's a one-line change in your app code. This avoids polluting the Mesh, Line, etc. classes with compatibility code, at the cost of a little developer inconvenience.

Contributor

Usnul commented Mar 17, 2014

I agree with @zz85; having only a low-level abstraction like BufferGeometry (without anything above it) diminishes the value of three.js as an abstraction on top of WebGL. Just my personal feeling.

Owner

mrdoob commented Mar 17, 2014

Thanks a lot guys! Always good to see other perspectives :)

To your point @jbaicoianu, what if there was a BufferGeometry wrapper that replicated the current Geometry structure? That way stuff will break for people that are modifying geometry in their code, but they will only need to add a line of code and things should work again.

var geometry = new THREE.GeometryEditor( new THREE.PlaneGeometry( 100, 100, 10, 10 ) );

However, people that are creating Geometry from scratch by hand will not have it that easy...
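A standalone sketch of that wrapper idea, using the same getter/setter trick as the TypedVector experiments (names hypothetical; three.js types replaced with plain objects so it runs on its own):

```javascript
// A live 3-component view into a flat array, as in the TypedVector3 experiment.
function TypedVector3( array, offset ) {

    this.array = array;
    this.offset = offset;

}

Object.defineProperties( TypedVector3.prototype, {
    'x': { get: function () { return this.array[ this.offset ]; },
           set: function ( v ) { this.array[ this.offset ] = v; } },
    'y': { get: function () { return this.array[ this.offset + 1 ]; },
           set: function ( v ) { this.array[ this.offset + 1 ] = v; } },
    'z': { get: function () { return this.array[ this.offset + 2 ]; },
           set: function ( v ) { this.array[ this.offset + 2 ] = v; } }
} );

// Wraps a BufferGeometry-like object and exposes a Geometry-like
// vertices array whose elements write through to the position buffer.
function GeometryEditor( geometry ) {

    var array = geometry.attributes[ 'position' ].array;
    this.vertices = [];

    for ( var i = 0; i < array.length; i += 3 ) {

        this.vertices.push( new TypedVector3( array, i ) );

    }

}

// Stand-in for a BufferGeometry-based PlaneGeometry with two vertices.
var geometry = { attributes: { 'position': { array: new Float32Array( 6 ), itemSize: 3 } } };
var editor = new GeometryEditor( geometry );

editor.vertices[ 1 ].y = 7; // familiar Geometry-style API...
console.log( geometry.attributes[ 'position' ].array[ 4 ] ); // 7 - ...backed by the flat buffer
```

Existing code that loops over geometry.vertices keeps working after wrapping, while the renderer only ever sees the flat attribute arrays.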

Contributor

jbaicoianu commented Mar 18, 2014

@mrdoob yes, if the BufferGeometry wrapper exposes all of the properties in the same way and is sufficiently compatible, then that's a good way of providing an upgrade path too. #4588 is my attempt at that, I've implemented the same TypedVertex approach used in your experiments above to provide geometry.faces, geometry.faceVertexUvs, etc.

It also uses getters to only allocate the compatibility objects when accessed, which means generators which use the underlying BufferGeometry attributes should init at the same speed as raw BufferGeometry, and only users who make heavy use of geometry.vertices, geometry.faces, etc. will incur the performance penalty.
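A sketch of that lazy-allocation pattern (hypothetical shape, not the actual #4588 code; the compatibility objects here are plain snapshots rather than the live TypedVector views the PR uses):

```javascript
// The Geometry-style vertices array is only built the first time
// .vertices is read; generators that never touch it pay nothing.
function CompatGeometry( positionArray ) {

    this._positions = positionArray;
    this._vertices = null;

}

Object.defineProperty( CompatGeometry.prototype, 'vertices', {
    get: function () {

        if ( this._vertices === null ) {

            this._vertices = [];

            for ( var i = 0; i < this._positions.length; i += 3 ) {

                this._vertices.push( {
                    x: this._positions[ i ],
                    y: this._positions[ i + 1 ],
                    z: this._positions[ i + 2 ]
                } );

            }

        }

        return this._vertices;

    }
} );

var geometry = new CompatGeometry( new Float32Array( [ 1, 2, 3 ] ) );
console.log( geometry._vertices );       // null - nothing allocated yet
console.log( geometry.vertices[ 0 ].y ); // 2 - built (and cached) on first access
```

This is what lets a BufferGeometry-backed generator init at raw speed while heavy users of geometry.vertices pay the conversion cost only when they actually use it.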

Akxe commented Mar 19, 2014

Is there an actual summary of functions for Geometry2? For example, how is custom mesh creation done, etc.? Thanks

mrdoob closed this May 12, 2014
