
up/oversampling point cloud with vedo for balancing feature distribution #338

Closed · ttsesm opened this issue Mar 12, 2021 · 45 comments

@ttsesm (Author) commented Mar 12, 2021

Hi @marcomusy,

I was wondering if vedo has any upsampling method for points and feature vectors on point clouds, like SMOTE or something similar. For example, I have the following point cloud:

[image: rendering of the point cloud]

where for each point (x, y, z) I have a corresponding feature vector, e.g. light intensity, among others (normals, reflectance factor, area, etc.). Now if I check the distribution of the values in this feature vector (grouped in 9 clusters) I notice that it is heavily imbalanced:

[image: histogram of the feature values grouped in 9 clusters]

and this is how the clusters correspond to the point cloud:

[image: the clusters mapped onto the point cloud]

you will notice that values in the high range (or the really low range) are only a few (the bright area in the first image). Now what I would like to do is to create "fake" points around these areas, based on the lux values that are missing from the distribution, so as to bring the distribution into a balanced form, while at the same time populating the other feature vectors of each new "fake" point with relevant values (as far as possible).

Any idea whether this could be achieved?

I am attaching a .csv file in case it helps, where each column corresponds to the following attributes for each point in the point cloud: [x, y, z, light_intensity, reflectance_red, reflectance_green, reflectance_blue, normal_x, normal_y, normal_z, area, lux_value, cluster_id]. So I would like to create new points, and values for the other feature vectors, so that the "lux_value" distribution gets balanced.
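To illustrate, a minimal sketch of the kind of SMOTE-style interpolation I have in mind (plain numpy/scipy, not vedo; the helper name and its parameters are made up for illustration):

import numpy as np
from scipy.spatial import cKDTree

def oversample_cluster(rows, n_new, k=5, rng=None):
    # rows: (n, d) array of [x, y, z, feature, ...] for one minority cluster;
    # assumes the cluster has more than k points
    rng = rng or np.random.default_rng()
    tree = cKDTree(rows[:, :3])                    # neighbor search in xyz only
    base = rng.integers(0, len(rows), n_new)       # random base samples
    _, nbrs = tree.query(rows[base, :3], k=k + 1)  # nearest neighbors (col 0 = self)
    pick = nbrs[np.arange(n_new), rng.integers(1, k + 1, n_new)]
    t = rng.random((n_new, 1))                     # random interpolation factors
    return rows[base] + t * (rows[pick] - rows[base])  # interpolate all columns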

Thanks.
data.zip

@marcomusy (Owner):

Hi @ttsesm - although this could in principle be doable with pcloud.densify(), I think it's generally a bad idea to create additional points, except in very special cases.
You should rather assess the statistical significance of your data. In any case you can play around with this example:
pip install -U git+https://github.com/marcomusy/vedo.git
https://github.com/marcomusy/vedo/blob/master/examples/volumetric/densifycloud.py

Another useful method might be: newpcloud.interpolateDataFrom(oldpcloud)
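For example, something along these lines (a rough sketch; the data and array names are placeholders):

import numpy as np
from vedo import Points

xyz = np.random.rand(1000, 3)                  # stand-in for your x, y, z columns
pcloud = Points(xyz)
pcloud.addPointArray(np.random.rand(1000), "lux_value")  # attach one feature

densecloud = pcloud.clone().densify()          # create additional points
densecloud.interpolateDataFrom(pcloud, N=3)    # transfer "lux_value" onto them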

@ttsesm (Author) commented May 10, 2021

@marcomusy, as a continuation of the above question: does vedo have any mesh face decimation function which I could similarly use to create more faces/points? So let's say that I now have a mesh; given some specific faces (not the whole mesh), I would like to subdivide/decimate only these specific ones while the rest of the faces remain as they are.

@marcomusy (Owner):

"decimate" means reducing the nr of points/faces.
"subdivide" increase them.

If you have a subset of a mesh that you want to subdivide you can try:
m1 = mysubmesh.subdivide(method=1)
then
mfinal = merge(m1, whole_mesh_without_submesh).clean()
where clean removes the duplicate vertices

@ttsesm (Author) commented May 11, 2021

Perfect!!!
It worked nicely, thanks a lot.

And for the record:

import numpy as np
import pandas as pd
import vedo as vd

data = pd.read_csv("ls_room_999.csv", header=None, delimiter=',', low_memory=False).to_numpy(dtype='float')

verts = data[:, 16:28].reshape(-1, 3)         # vertex coordinates, stored flattened
faces = np.arange(len(verts)).reshape(-1, 3)  # faces as consecutive vertex-index triplets

face_normals = data[:, 8:11]

luminaire_face_idxs = np.array(np.where(data[:, 7] > 0)).flatten()

# the whole mesh without the luminaire faces, and the luminaire submesh
m = vd.Mesh([verts, np.delete(faces, luminaire_face_idxs, axis=0)]).alpha(0.2).lw(0.1)
m1 = vd.Mesh([verts, faces[luminaire_face_idxs]]).alpha(0.2).lw(0.1)

# keep subdividing the submesh until it has at least as many faces as the rest
while len(m1.faces()) < len(m.faces()):
    m1.subdivide(method=1)

mfinal = vd.merge(m1, m).clean().alpha(0.2).lw(0.1)

vd.show(m, axes=8)
vd.show(m1, axes=8)
vd.show(mfinal, axes=8)

Now I need to find a way to give each new face the attributes of the original face it was derived from (the values in the other columns of the data[] structure).

p.s. Indeed I meant "subdivide"; decimation, as you said, is the opposite.

ls_room_999.csv

@marcomusy (Owner):

Now I need to find a way to give each new face the attributes of the original face it was derived from (the values in the other columns of the data[] structure).

you do that with mesh.addPointArray(myarr, "name_of_arr")

@ttsesm (Author) commented May 11, 2021

Btw, is it possible that any of the other methods (loop, adaptive, or butterfly) can give me a fixed number of points? What I mean is: initially the mesh without the submesh has 16640 faces and the submesh 8 faces. Once I start applying the linear subdivision (method=1), I end up with the submesh having 32768 faces when the comparison operator is satisfied, but this is again too big. Ideally I would like to subdivide the submesh up to 16640 faces, or somewhere close to this number. One idea would be to apply decimation afterwards, which I see has this kind of functionality (fraction=, N=), but if I could do it in the first place it would be preferable.

Now I need to find a way to give each new face the attributes of the original face it was derived from (the values in the other columns of the data[] structure).

you do that with mesh.addPointArray(myarr, "name_of_arr")

👍 Yes I remember that from here. Let me see how I can adapt it to this use case.

@marcomusy (Owner):

is it possible that any of the other methods (loop, adaptive, or butterfly) can give me a fixed number of points?

I don't think you can control that in vtk, I'm afraid.

@ttsesm (Author) commented May 11, 2021

I see. I've managed to bring it down to the exact face count with:

m1.decimate(len(m.faces()) / len(m1.faces()), method='pro', boundaries=True)

though the decimation becomes a bit ugly for some faces:

[image: decimated submesh with some distorted triangles]

In any case though it seems to be serving its purpose.

@marcomusy (Owner):

OK! maybe .smoothLaplacian() can improve the triangle quality, but it may mess up the boundaries (?)

@ttsesm (Author) commented May 11, 2021

OK! maybe .smoothLaplacian() can improve the triangle quality, but it may mess up the boundaries (?)

uhm, the output seems a bit unstable. I played a bit with the parameters but there doesn't seem to be a generic combination I can use. Anyway, for now I think I can rely on the initial decimation output. If down the road I see that this causes problems, I will reconsider possible workarounds.

@ttsesm (Author) commented May 12, 2021

@marcomusy is it possible to add custom data arrays per face, instead of per point as with mesh.addPointArray()?

I've tried to insert the intensity values into my submesh faces with m1.addPointArray(data[8880:8888, 7], "intensity") but this didn't work, of course, since it expects a data array sized to the number of points, not the number of faces. The idea is that, once this is done, the newly created faces should inherit the same custom data array properties.

@marcomusy (Owner):

yes obviously!
m1.addCellArray(data[8880:8888, 7], "intensity")
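The only difference is the expected array length (a quick sketch; the array names are arbitrary):

import numpy as np
from vedo import Sphere

m = Sphere()
m.addPointArray(np.zeros(m.N()), "per_point")      # length must equal m.N()
m.addCellArray(np.zeros(m.NCells()), "per_cell")   # length must equal m.NCells()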

@ttsesm (Author) commented May 13, 2021

Uhm, is there a way to retain the cell array after decimation though? It seems that decimation removes any connection with the existing cell arrays :-(.

Also, reading about the different filter methods, I see that vtk.vtkAdaptiveSubdivisionFilter() might do what I wanted in the first place (to subdivide up to a specific number of faces: https://github.com/Kitware/VTK/blob/f2115cd8228b134d7b840f70b3a5b467cb784f4f/Filters/Modeling/vtkAdaptiveSubdivisionFilter.cxx#L37 https://github.com/Kitware/VTK/blob/f2115cd8228b134d7b840f70b3a5b467cb784f4f/Filters/Modeling/vtkAdaptiveSubdivisionFilter.cxx#L384), but using it with vedo doesn't seem to do anything.

@marcomusy (Owner):

  1. About the problem with decimate: you are right, this is very weird; VTK doesn't port the cell and point data to the decimated mesh. Not sure if it's intentional or a genuine bug, so it must be done manually. Consider this example:
from vedo import *

sph = Sphere().clean().lineWidth(1)
nc = sph.NCells()
sph.addCellArray(range(nc), "mycellscalars")  # attach a per-cell scalar array

sph2 = sph.clone().mapCellsToPoints()         # cell data -> point data
sph2d = sph.clone().decimate(0.5)             # decimation loses the arrays
sph2d.interpolateDataFrom(sph2, N=3).mapPointsToCells()  # interpolate, map back

# sph2d.subdivide()

show(sph, sph2d, N=2, axes=1)

[screenshot: original sphere vs decimated copy with cell scalars preserved]

I also realized that in vtk the interpolation only works on points; that's why we need to map the cell data to point data and finally back-map it. So the method would be better called interpolatePointDataFrom().

  2. The vtkAdaptiveSubdivisionFilter indeed has a bug in vedo - sorry about that - it will be fixed in the next round.

@marcomusy (Owner) commented May 14, 2021

PS: I thought of adding a keyword "on" instead of changing the name, so in the next release this will become:

from vedo import *

sph = Sphere().lw(1)

arr = sph.cellCenters()
sph.celldata["mycellscalars"] = arr[:,0]        # per-cell scalars, new syntax
sph.cmap('jet', "mycellscalars", on="cells")

sphd = sph.clone().decimate(0.5)
sphd.interpolateDataFrom(sph, N=3, on='cells')  # interpolate cell data directly

# sphd.subdivide(method=2, mel=0.1)

show(sph, sphd, N=2, axes=1)

[screenshot: decimated sphere with interpolated cell scalars]
so there is no more need to make an extra copy.
I will push a new version next week. Thanks a lot for finding this problem.

@ttsesm (Author) commented May 14, 2021

Thanks a lot Marco, appreciated.

No worries, that's why we are here: to spot them and report them :-). In the new version will you have the vtk.vtkAdaptiveSubdivisionFilter() bug fixed as well? It seems that setting the SetMaximumNumberOfTriangles() parameter in the first place would eliminate the extra decimation step and the back-and-forth mapping from cells to points.

sdf = vtk.vtkAdaptiveSubdivisionFilter()
sdf.SetMaximumNumberOfTriangles(1000)
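For reference, this is how I would drive the filter directly on the mesh's underlying polydata, bypassing vedo (a sketch; I'm not sure it avoids the vedo issue):

import vtk

sdf = vtk.vtkAdaptiveSubdivisionFilter()
sdf.SetInputData(m1.polydata())        # m1 is the vedo Mesh from above
sdf.SetMaximumNumberOfTriangles(1000)
sdf.Update()
subdivided = sdf.GetOutput()           # a vtkPolyData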

@marcomusy (Owner):

You will have the keyword mel (max edge length) to control the nr of triangles. SetMaximumNumberOfTriangles would not control this, as it's only an upper bound. Indeed, for a too small MaximumNumberOfTriangles and a too small mel one gets funny things:

[screenshot: artifacts from a too small MaximumNumberOfTriangles and mel]

@ttsesm (Author) commented May 14, 2021

Hmm, but edge length and number of triangles should be connected somehow, shouldn't they? Meaning that setting one of the two flags should disallow setting the other. For example, for a given number of triangles there is a corresponding edge length (a large number of triangles implies short edges, and vice versa) which could be computed internally. Thus, I believe both flags should be available, but it should not be possible to set both at the same time, since that might cause the problem you exemplify above. Moreover, setting the number of desired triangles is, I think, easier, while calculating which edge length will give the desired number of triangles might be more cumbersome (though sometimes useful; that's why I believe both flags should be accessible and settable).

Also, since it is a subdivide filter, your example above looks to me more like decimation. Thus, you could add a check that raises an error if the number of triangles is less than in the original mesh.
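Something like this hypothetical guard is what I have in mind (a sketch, not vedo's actual API):

def checked_subdivide(mesh, **kwargs):
    # hypothetical sanity check: fail loudly if "subdivision" reduced the faces
    n_before = len(mesh.faces())
    mesh.subdivide(**kwargs)                 # e.g. method=2, mel=...
    if len(mesh.faces()) < n_before:
        raise RuntimeError("subdivision produced fewer faces than the input")
    return mesh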

@marcomusy (Owner):

From playing a bit with these params, it seems to me they have different meanings.
If one sets a very high MaximumNumberOfTriangles, nothing happens (so this is intended not as a "desired nr of triangles" but as an upper limit on their generation, and then it's not even clear to me why this variable exists at all!).
On the contrary, MaximumEdgeLength really controls the density of triangles.

Also, since it is a subdivide filter, your example above looks to me more like decimation. Thus, you could add a check that raises an error if the number of triangles is less than in the original mesh.

good point, but basically if mel is too large, again nothing happens, so subdivision won't yield fewer triangles.

I hope I'm not making a blunder :)

@ttsesm (Author) commented May 15, 2021

From playing a bit with these params, it seems to me they have different meanings.
If one sets a very high MaximumNumberOfTriangles, nothing happens (so this is intended not as a "desired nr of triangles" but as an upper limit on their generation, and then it's not even clear to me why this variable exists at all!).

Well, maybe nothing happens because internally they estimate the edge length, and if that does not fulfill some internal criteria (though from a quick look in the source code I did not see anything related) then nothing is applied.

On the contrary, MaximumEdgeLength really controls the density of triangles.

Also, since it is a subdivide filter, your example above looks to me more like decimation. Thus, you could add a check that raises an error if the number of triangles is less than in the original mesh.

good point, but basically if mel is too large, again nothing happens, so subdivision won't yield fewer triangles.

I see; well, setting the mel should be fine (you will need an extra step to compute it from the current face sizes and the desired number of faces), something like the sketch below.
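Assuming roughly equilateral triangles (the same back-of-the-envelope assumption as below), a sketch:

import numpy as np

n_desired = 16640                 # target face count, from the rest of the mesh
A = m1.area()                     # total area of the submesh to subdivide
mel = np.sqrt(4*A / (np.sqrt(3)*n_desired))   # from A = n * sqrt(3)/4 * mel**2
m1.subdivide(method=2, mel=mel)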

I hope I'm not making a blunder :)

Well, if something is wrong, it will be revealed at some point :-p

@ttsesm (Author) commented May 15, 2021

Marco, I was thinking today: how will you handle a mesh with varying edge lengths? The same mesh can have faces of different sizes at the same time.

I am asking because, in order to compute the mel that yields the desired number of faces, I need to know the current edge length; but if the mesh has both large and small faces, and thus long and short edges, how do I do that?

@marcomusy (Owner):

The same mesh can have faces of different sizes at the same time.

Yes, sure.

You will not be able to define the number of final triangles exactly; you can compute mel only from the average distance of points.
As I just pushed to master, you should be able to run the above examples.

@ttsesm (Author) commented May 16, 2021

...only from the average distance of points

Is there a method in the vedo.Mesh structure that returns this info, or the edge lengths (so as not to reinvent the wheel)? From a quick search I found a method for the line width, lineWidth/lw, but nothing about edge lengths or point distances.

@marcomusy (Owner):

No.
But since you can retrieve the total area A, I would compute it as:
length = sqrt(A/N*2*sqrt(3))
with N the number of points.

@ttsesm (Author) commented May 17, 2021

No.
But since you can retrieve the total area A, I would compute it as:
length = sqrt(A/N*2*sqrt(3))
with N the number of points.

Hhmm, are you sure about the formula? Also, are A and N the area and points of the mesh or of the submesh? I've tried both, actually, but it doesn't seem to give the correct result. The default mel seems to give a better result.

@marcomusy (Owner) commented May 17, 2021

Indeed it seems that sqrt(s.area()/s.N()) is closer to the actual mean... not sure why... the formula looks correct:

from vedo import *
from vedo.pyplot import histogram
import numpy as np

s = Mesh(dataurl+"bunny.obj").lw(1)
s.scale(100).subdivide(2).smooth()

points = s.points()
ds = []
for f in s.faces():  # collect the three edge lengths of every face
    p0, p1, p2 = points[f]
    ds += [mag(p1-p0), mag(p2-p1), mag(p0-p2)]
printc(s.area(), s.N(), sqrt(s.area()/s.N()), np.mean(ds))

h = histogram(ds, xtitle="edge length")

show(s, h, N=2, sharecam=False)

[screenshot: bunny mesh and its edge-length histogram]

@ttsesm (Author) commented May 17, 2021

Indeed, it gets a bit interesting. For example, in the following case the formulation works fine (even without specifying mel, i.e. using the default value):

from vedo import *
from vedo.pyplot import histogram
import numpy as np


sph = Sphere().lw(1)

verts = sph.points()
faces1 = sph.faces()[0:390]          # first 390 faces -> submesh to subdivide
faces2 = sph.faces()[390:]           # the remaining faces

sph1 = Mesh([verts, faces1]).alpha(0.2).lw(0.1)
sph2 = Mesh([verts, faces2]).alpha(0.2).lw(0.1)

print("Sph1: {}".format(len(sph1.faces())))
print("Sph2: {}\n".format(len(sph2.faces())))

length = sqrt(sph1.area()/sph1.N())
sph1.subdivide(method=2, mel=length)  # or using the default length

print("Sph1_subdivided: {}".format(len(sph1.faces())))

mfinal = merge(sph2, sph1).clean().alpha(0.2).lw(0.1)

show(sph1, axes=8)
show(sph2, axes=8)
show(mfinal, axes=8)

Output:

Sph1: 390
Sph2: 1722

Sph1_subdivided: 1722

However, on my custom data:

import numpy as np
import pandas as pd
from vedo import *

data = pd.read_csv("ls_room_999.csv", header=None, delimiter=',', low_memory=False).to_numpy(dtype='float')

verts = data[:, 16:28].reshape(-1, 3)         # vertex coordinates, stored flattened
faces = np.arange(len(verts)).reshape(-1, 3)  # faces as consecutive vertex-index triplets

face_normals = data[:, 8:11]

luminaire_face_idxs = np.array(np.where(data[:, 7] > 0)).flatten()

m = Mesh([verts, np.delete(faces, luminaire_face_idxs, axis=0)]).alpha(0.2).lw(0.1)
m1 = Mesh([verts, faces[luminaire_face_idxs]]).alpha(0.2).lw(0.1)

print("m: {}".format(len(m.faces())))
print("m1: {}\n".format(len(m1.faces())))

length = np.sqrt(m1.area()/m1.N())
m1.subdivide(method=2, mel=length)
# m1.subdivide(method=2)  # default mel

print("m1_subdivided: {}".format(len(m1.faces())))

mfinal = merge(m1, m).clean().alpha(0.2).lw(0.1)

show(m, axes=8)
show(m1, axes=8)
show(mfinal, axes=8)

Output:

m: 16640
m1: 8

m1_subdivided: 262144

If, though, I use the default mel (i.e. I don't provide the length), the output is:

m: 16640
m1: 8

m1_subdivided: 16384

Which of course is closer to the correct number of faces. Any idea why this could be happening? I see that inside subdivide you use diagonalSize() instead of area(), but I am not sure I understand the difference.

@marcomusy (Owner):

To be honest I don't quite understand it. It's possible that the vtk MaximumEdgeLength variable is doing something more complicated that is not documented :(

@ttsesm (Author) commented May 17, 2021

Hhm, I do not know either. I have the feeling that the issue comes from what is considered the mesh area each time. Also, reading the description does not make me any wiser:

vtkAdaptiveSubdivisionFilter - subdivide triangles based on edge and/or area metrics
Superclass: vtkPolyDataAlgorithm
vtkAdaptiveSubdivisionFilter is a filter that subdivides triangles based on maximum edge length and/or triangle area. It uses a simple case-based, multi-pass approach to repeatedly subdivide the input triangle mesh to meet the area and/or edge length criteria. New points may be inserted only on edges; depending on the number of edges to be subdivided a different number of triangles are inserted ranging from two (i.e., two triangles replace the original one) to four.
Triangle subdivision is controlled by specifying a maximum edge length and/or triangle area that any given triangle may have. Subdivision proceeds until their criteria are satisfied. Note that using excessively small criteria values can produce enormous meshes with the possibility of exhausting system memory. Also, if you want to ignore a particular criterion value (e.g., triangle area) then simply set the criterion value to a very large value (e.g., VTK_DOUBLE_MAX).
An incremental point locator is used because as new points are created, a search is made to ensure that a point has not already been created. This ensures that the mesh remains compatible (watertight) as long as certain criteria are not used (triangle area limit, and number of triangles limit).
To prevent overly large triangle meshes from being created, it is possible to set a limit on the number of triangles created. By default this number is a very large number (i.e., no limit). Further, a limit on the number of passes can also be set, this is mostly useful to generated animations of the algorithm.
Finally, the attribute data (point and cell data) is treated as follows. The cell data from a parent triangle is assigned to its subdivided children. Point data is interpolated along edges as the edges are subdivided.
@warning The subdivision is linear along edges. Thus do not expect smoothing or blending effects to occur. If you need to smooth the resulting mesh use an algorithm like vtkWindowedSincPolyDataFilter or vtkSmoothPolyDataFilter.
The filter retains mesh compatibility (watertightness) if the mesh was originally compatible; and the area, max triangles criteria are not used.
@warning The filter requires a triangle mesh. Use vtkTriangleFilter to tessellate the mesh if necessary.

@marcomusy (Owner):

Yep... not sure what to test at this point... :(

@ttsesm (Author) commented May 18, 2021

Hi Marco,

I was checking different parameters (e.g. number of faces, number of points, area) of the two meshes that I am trying to process and finally merge. m is the initial mesh and m1 is a submesh of m which I want to subdivide so that its face count is approximately the same as m's.

Looking at the info for each below:

m_faces: 16640
m_points: 49944
m_area: 178.82696989147524

m1_faces: 8
m1_points: 49944
m1_area: 1.440205667851934

# after subdividing m1 with the default mel
m1_subdivided_faces: 16384
m1_subdivided_points: 58380
m1_subdivided_area: 1.4402056678519877

# merging both m and m1_subdivided
mfinal_subdivided_faces: 33024
mfinal_subdivided_points: 16831
mfinal_subdivided_area: 180.2671755593273

I see that both m and m1 have the same number of points (this is because I use all the initial points of the original mesh from the data structure); do you think this might be causing an issue?

As I understand it, m1 should have at most 24 points (8×3), and considering that some points are shared between faces the number should be even smaller. Also the number of points in m should be m_points - m1_points. What do you think?

@marcomusy (Owner):

Hi, I don't understand this: if you say m1 is a submesh of m, how come they have the same number of points but only 8 faces?

@ttsesm (Author) commented May 18, 2021

Hi, I don't understand this: if you say m1 is a submesh of m, how come they have the same number of points but only 8 faces?

This is because, as you can see in my code above (snippet below):

verts = data[:, 16:28].reshape(-1, 3)
faces = np.arange(len(verts)).reshape(-1, 3)  # faces as consecutive vertex-index triplets

luminaire_face_idxs = np.array(np.where(data[:, 7] > 0)).flatten()

m = Mesh([verts, np.delete(faces, luminaire_face_idxs, axis=0)]).alpha(0.2).lw(0.1)
m1 = Mesh([verts, faces[luminaire_face_idxs]]).alpha(0.2).lw(0.1)

I use all the loaded verts in each mesh, and then I specify which faces to use based on all the verts. Is there a way to keep only the corresponding vertices each time?

@marcomusy (Owner):

uhm, not sure that can work... you need to define your meshes properly, without faceless points.
You may check out examples/basic/deleteMeshPoints.py, it may help.

@ttsesm (Author) commented May 18, 2021

uhm, not sure that can work... you need to define your meshes properly, without faceless points.

Yup, I concur that this should be the case.

You may check out examples/basic/deleteMeshPoints.py, it may help.

If it is possible to retrieve the list of points corresponding to the given faces, then this might be redundant, since I can form a new mesh from these directly. Do you have an example of how to split an initial mesh into smaller ones given the faces, or something similar? I think this would be more helpful.

@marcomusy (Owner):

If it is possible to retrieve the list of points corresponding to the given faces

yes - that's trivial:
pts = mesh.points()[one_specific_face]

@ttsesm (Author) commented May 18, 2021

If it is possible to retrieve the list of points corresponding to the given faces

yes - that's trivial:
pts = mesh.points()[one_specific_face]

perfect, that does the trick for the points; then I am missing a way to adjust the indexing in the new faces list, because the faces would still point to the old vertex indices.

I guess I can do it in a similar way as:

faces = np.arange(len(new_verts)).reshape(-1, 3)

if there is any other more direct way, let me know.

@marcomusy (Owner):

It seems ok to me!
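A slightly more direct variant, if you prefer, is np.unique with return_inverse, which renumbers the faces and keeps only the vertices they actually use (a sketch, reusing the names from your snippet):

import numpy as np
from vedo import Mesh

sub = faces[luminaire_face_idxs]               # (n, 3) faces, old vertex indices
used, inv = np.unique(sub, return_inverse=True)
new_faces = inv.reshape(sub.shape)             # same faces, renumbered from 0
new_verts = verts[used]                        # keep only the referenced vertices
m1 = Mesh([new_verts, new_faces]).alpha(0.2).lw(0.1)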

@ttsesm (Author) commented May 19, 2021

Hi Marco, ok the following seems to be correct now:

m_faces: 16640
m_points: 49920
m_area: 178.82696989147524

m1_faces: 8
m1_points: 24
m1_area: 1.440205667851934

However, from these I need to get a mel in the range [0.0188-0.0265], which the formula doesn't seem to return :-(.

Btw, do you have a link for the formula? How did you come up with it?

@marcomusy (Owner):

Btw, do you have a link for the formula? How did you come up with it?

it's just my back-of-the-envelope calculation, that's probably why it doesn't work :)
I divided the total area by the area of an equilateral triangle and solved for mel.

@ttsesm (Author) commented May 19, 2021

Hhmm, I see. There should be some way, though, to extract the correct ratio. For example, for the sphere example I have the following data:

Sph1_faces: 390
Sph1_points: 1170
Sph1_area: 1.9251713393584104

Sph2_faces: 1722
Sph2_points: 5166
Sph2_area: 10.59400389764954

and my mel should be between [0.089-0.09] (found by trial and error: this is what brings the new number of faces of Sph1 closest to Sph2).

@marcomusy (Owner) commented May 19, 2021

If A is the total area, then the area of one equilateral triangle is a ~ (sqrt(3)/4)*d^2,
so the number of triangles is n = A/a = 4*A/(sqrt(3)*d^2), and with n = N/2 it follows that d is proportional to sqrt(A/N).
From your numbers it seems this factor of proportionality must be ~2? So the formula would become:
mel = 2*sqrt(A/N)

@ttsesm (Author) commented May 20, 2021

Well, it still doesn't seem consistent; if you check the output numbers, they differ from the numbers I should actually get. For example, in the sphere example:

mel = 2*sqrt(1.9251713393584104 / 1170) = 0.0811281847251, while it should be between [0.089-0.09]

and in the case of the custom mesh:

mel = 2*sqrt(1.440205667851934 / 24) = 0.489932932119, way too big compared to the range [0.0188-0.0265]

:-(

@marcomusy (Owner):

Maybe(!) this number is proportional to the bounding-box diagonal size? Otherwise I would have a look at the vtk class implementation to see how it's done.

@ttsesm (Author) commented May 21, 2021

My guess is that you cannot make the equilateral-triangle assumption, since at some point the triangles will differ depending on how many of them you want to fit in a specific area. I've tried different things; they did not work. I've also tried to play with the other parameter, i.e. MaximumTriangleArea instead of MaximumEdgeLength; no luck either. For now I think I will give up (I need to move on since I am under time pressure) and go with the solution of subdivide(method=1), decimate() and interpolateDataFrom(on='cells'); this seems to do the job (at least in the examples I've tested so far). It might not be ideal, but it is a good workaround for what I wanted (sketched below).
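In code, the workaround boils down to this (a sketch, reusing m and m1 from the snippets above):

from vedo import merge

m1_orig = m1.clone()                 # keep a copy holding the original cell data
target = len(m.faces())

while len(m1.faces()) < target:      # overshoot the target by linear subdivision
    m1.subdivide(method=1)

m1.decimate(target/len(m1.faces()), method='pro', boundaries=True)
m1.interpolateDataFrom(m1_orig, N=3, on='cells')   # re-attach the cell arrays
mfinal = merge(m1, m).clean()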

If you have the time and willingness to investigate it a bit further, feel free ;-). I've also opened a thread upstream on the vtk forum in case someone knows more about how the vtk class implementation works (you can check it here). So far, though, no responses.

In any case, I am more than grateful for your time :-).
