up/oversampling point cloud with vedo for balancing feature distribution #338
Comments
Hi @ttsesm - although this could in principle be doable with… Another useful method might be:
@marcomusy, as a continuation of the above question, does…
"decimate" means reducing the number of points/faces. If you have a subset of a mesh that you want to subdivide, you can try:
Perfect!!! And for the record:
Now I need to find a way to give each new face the attributes of the original one (values in the other columns of the… P.S. Indeed I meant "subdivide"; decimation, as you said, is the opposite.
you do that with…
Btw, is it possible somehow that any of the other methods…
👍 Yes, I remember that from here. Let me see how I can adapt it to this use case.
I don't think you can control that in vtk, I'm afraid.
OK! Maybe…
Uhm, the output seems a bit unstable. I played a bit with the parameters, but there doesn't seem to be a generic combination I can use. Anyway, for now I think I can rely on the initial decimation output. If further down the road I see that this is causing problems, I will reconsider possible workarounds.
@marcomusy is it possible to add custom data arrays per face instead of per point with… I've tried to insert the intensity values in my submesh faces with…
yes obviously! |
Uhm, is there a way to retain the cell array after decimation, though? It seems that decimation removes any connection with the existing cell arrays :-(. Also, reading about the different filter methods, I see that the…
```python
from vedo import *

sph = Sphere().clean().lineWidth(1)
nc = sph.NCells()
sph.addCellArray(range(nc), "mycellscalars")

sph2 = sph.clone().mapCellsToPoints()
sph2d = sph.clone().decimate(0.5)
sph2d.interpolateDataFrom(sph2, N=3).mapPointsToCells()
# sph2d.subdivide()
show(sph, sph2d, N=2, axes=1)
```

I also realized that in vtk the interpolation only works on points; that's why we need to map the cell data to point data and finally map it back. So the method should better be called…
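The cell→point→cell mapping involved here can be sketched in plain NumPy (a toy illustration of the averaging only, not vedo's or vtk's actual implementation; the mesh and scalar values are made up):

```python
import numpy as np

# Toy mesh: 4 points, 2 triangular faces sharing an edge.
faces = np.array([[0, 1, 2], [1, 2, 3]])
cell_scalars = np.array([10.0, 20.0])

# Cells -> points: each point averages the scalars of the cells that use it.
n_points = 4
point_sum = np.zeros(n_points)
point_cnt = np.zeros(n_points)
for f, val in zip(faces, cell_scalars):
    point_sum[f] += val
    point_cnt[f] += 1
point_scalars = point_sum / point_cnt   # [10., 15., 15., 20.]

# Points -> cells: each cell averages its three vertex values.
back = point_scalars[faces].mean(axis=1)
```

Note that the round trip smooths the data (10, 20 comes back as roughly 13.3, 16.7 here), which is one reason per-cell values are not preserved exactly through decimation plus interpolation.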
Thanks a lot Marco, appreciated. No worries, that's why we are here, to spot them and report them :-). In the new version will you have the…
Hmm, but edge length and number of triangles should be connected somehow, shouldn't they? Setting one of the two flags should disallow setting the other. For example, a given number of triangles corresponds to a certain edge length (a large number of triangles means short edges, and vice versa), which could be computed internally. Thus I believe both flags should be available, but it should not be possible to set both at the same time, since that might cause the problem you exemplify above. Moreover, setting the number of desired triangles is, I think, easier, while calculating which edge length will give the desired number of triangles might be more cumbersome (though sometimes useful, which is why I believe both flags should be accessible and settable). Also, since it is a subdivide filter, your example above looks more like a decimation to me. Thus, you could add a check that raises an error if the number of triangles is less than in the original mesh.
From playing a bit with these params, it seems to me they have different meanings.
Good point, but basically if… I hope I'm not making a blunder :)
Well, maybe nothing happens because internally they estimate the edge length, and if it does not fulfill some internal criterion (though from a quick look at the source code I did not see anything related), nothing is applied.
I see, well setting the…
Well, if something is wrong, it will be revealed at some point :-p
Marco, I was thinking today: how will you handle a mesh with edges of different lengths? The same mesh can have faces of different sizes at the same time. I am asking because, in order to compute the…
Yes, sure. You will not be able to define exactly the number of final triangles; you can compute…
Is there a method in the…
No. |
Hmm, are you sure about the formula? Also, are A and N the area and number of points of the mesh, or of the submesh? I've actually tried both, but neither seems to give the correct result. The default…
Indeed, it seems that…

```python
from vedo import *
from vedo.pyplot import histogram
import numpy as np

s = Mesh(dataurl + "bunny.obj").lw(1)
s.scale(100).subdivide(2).smooth()

points = s.points()
ds = []
for f in s.faces():
    p0, p1, p2 = points[f]
    ds += [mag(p1-p0), mag(p2-p1), mag(p0-p2)]

printc(s.area(), s.N(), sqrt(s.area()/s.N()), np.mean(ds))
h = histogram(ds, xtitle="edge length")
show(s, h, N=2, sharecam=False)
```
Indeed, it gets a bit interesting. In the following example the formulation works fine (even without specifying the…
Output:
However, on my custom data:
Output:
If, though, I use the default…
Which of course is closer to the correct number of faces. Any idea why this could be happening? I see that inside the subdivide you use the…
To be honest I don't quite understand it. It's possible that the vtk…
Hmm, I do not know either. I have the feeling that the issue comes from what is considered the mesh area each time. Reading the description does not make me any wiser either:
Yep... not sure what to test at this point... :( |
Hi Marco, I was checking different parameters (e.g. number of faces, number of points, area) of the two meshes that I am trying to process and finally merge. Looking at the info of each below:
I see that both… As I understand it…
Hi, I don't understand this: if you say…
This is because, if you look at my code above with the example (snippet below):
I use all the loaded vertices in each mesh, but then I specify which faces to use based on all the vertices. Is there a way to keep only the corresponding vertices each time?
Uhm, not sure that can work... you need to define your meshes properly, without faceless points.
Yup, I concur that this should be the case.
If it is possible to retrieve the list of points corresponding to the given faces, then this might be redundant, since I can directly form a new mesh from them. Do you have any example of how to split an initial mesh into smaller ones given the faces, or something similar? I think that would be more helpful.
yes - that's trivial:
Perfect, that does the trick for the points. Then I am missing a way to adjust the indexing in the new faces list, because they would still point to the old indices of the points. I guess I can do it in a similar way as:
If there is a more direct way, let me know.
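For the record, a more direct NumPy idiom for this reindexing (a sketch assuming `points` is the full vertex array and `faces` an integer array of the submesh faces in the old indexing) is `np.unique` with `return_inverse=True`:

```python
import numpy as np

points = np.arange(300.0).reshape(100, 3)   # all loaded vertices (dummy data)
faces = np.array([[5, 7, 42], [7, 42, 9]])  # submesh faces, old indexing

# Unique old indices actually used, plus the map of each entry into 0..n-1.
used, inverse = np.unique(faces, return_inverse=True)
new_points = points[used]                   # keep only the vertices the faces need
new_faces = inverse.reshape(faces.shape)    # same faces, compact indexing
print(new_faces)  # [[0 1 3], [1 3 2]]
```

`used` gives the old index of every kept vertex, so any per-point attribute arrays can be sliced with it in the same step.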
It seems ok to me!
Hi Marco, ok the following seems to be correct now:
However, from these I need to get a… Btw, do you have a link for the formula? How did you come up with it?
It's just my back-of-the-envelope calculation, that's probably why it doesn't work :)
Hmm, I see. There should be some way, though, to extract the correct ratio. For example, for the sphere I have the following data:
and my…
If A = total area, then the area of one triangle is…
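Writing the equilateral-triangle version of this estimate out explicitly (a back-of-the-envelope sketch under the idealized assumption that all N triangles are equilateral and tile the area A exactly): one triangle covers about A/N, and an equilateral triangle of side L has area (√3/4)·L², giving L = 2·sqrt(A/N)/3^(1/4) ≈ 1.52·sqrt(A/N), slightly smaller than the 2·sqrt(A/N) guess above. Real meshes are not equilateral, so either version is only a rough estimate:

```python
import math

def edge_from_count(A, N):
    # One triangle covers ~A/N; an equilateral triangle of side L has
    # area (sqrt(3)/4) * L**2, hence L = sqrt(4*A / (sqrt(3)*N)).
    return math.sqrt(4.0 * A / (math.sqrt(3) * N))

def count_from_edge(A, L):
    # Inverse estimate: how many equilateral triangles of side L tile area A.
    return 4.0 * A / (math.sqrt(3) * L**2)

# Sphere numbers from the thread: A ~ 1.925, N = 1170 cells.
L = edge_from_count(1.9251713393584104, 1170)
print(L)  # ~0.0616, vs ~0.0811 from 2*sqrt(A/N)
```

The two functions are exact inverses of each other by construction, so whichever flag is exposed, the other quantity can be derived from it.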
Well, it still doesn't seem consistent; if you check the output numbers, they differ from the numbers I should actually get. For example, with the sphere: mel = 2*sqrt(1.9251713393584104 / 1170) = 0.0811281847251, while it should be between… And in the case of the custom mesh: mel = 2*sqrt(1.440205667851934 / 24) = 0.489932932119, way too big for the range :-(
Maybe(!) this number is proportional to the bounding-box diagonal size? Otherwise I would have a look at the vtk class implementation to see how it's done.
My guess is that you cannot assume equilateral triangles, since at some point the triangles will differ depending on how many you want to fit into the specific area. I've tried different things; they did not work. I've also tried to play with the other parameter, i.e.… If you have the time and the willingness to investigate it a bit further, feel free ;-). I've also opened a thread upstream on the vtk forum in case someone knows more about how the vtk class implementation works (you can check it here). So far, though, no responses. In any case, I am more than grateful for your time :-).
Hi @marcomusy,
I was wondering if vedo has any upsampling method for points and feature vectors on point clouds, such as SMOTE or similar. For example, I have the following point cloud:
where for each point (x, y, z) I have a corresponding feature vector, e.g. light intensity, among others (normals, reflectance factor, area, etc.). Now, if I check the distribution of the values in this feature vector (grouped into 9 clusters), I notice that it is heavily imbalanced:
and this is how the clusters correspond to the point cloud:
You will notice that values with a high range (or a really low range) are only a few (the bright area in the first image). Now, what I would like to do is create "fake" points (around these areas, based on the lux values that are lacking in the distribution) so as to bring the distribution into a balanced form, while at the same time populating the other feature vectors of each new "fake" point with relevant values (as much as possible).
Any idea whether this could be achieved?
I am attaching a .csv file in case it helps, where each column corresponds to the following attributes for each point in the point cloud: [x, y, z, light_intensity, reflectance_red, reflectance_green, reflectance_blue, normal_x, normal_y, normal_z, area, lux_value, cluster_id]. So I would like to create new points, and values for the other feature vectors, such that the "lux_value" distribution gets balanced.
Thanks.
data.zip
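For reference, the core of a SMOTE-style oversampling step is small enough to sketch in plain NumPy (this is not a vedo feature; `smote_like` is a hypothetical helper, and it assumes `X` already contains only the minority rows, with all 13 columns stacked together). Each synthetic row is a random interpolation between an existing row and one of its k nearest neighbours:

```python
import numpy as np

def smote_like(X, n_new, k=3, rng=None):
    """Generate n_new synthetic rows by interpolating each picked row
    with one of its k nearest neighbours (all columns together)."""
    rng = np.random.default_rng(rng)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self from neighbours
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per row
    rows = rng.integers(0, len(X), n_new)       # which rows to oversample
    nbrs = nn[rows, rng.integers(0, k, n_new)]  # a random neighbour of each
    t = rng.random((n_new, 1))                  # interpolation factor in [0, 1)
    return X[rows] + t * (X[nbrs] - X[rows])

X = np.random.default_rng(0).random((20, 13))   # 13 columns as in the csv
synth = smote_like(X, 5, rng=1)
print(synth.shape)  # (5, 13)
```

Since every synthetic row lies on a segment between two existing rows, the new xyz positions land near the under-represented region and the other feature columns stay within plausible bounds; whether linear interpolation is meaningful for every column (normals, for instance, would need renormalizing) has to be judged per attribute.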