Bogus XY coordinates, radii in inventory #28

Open · opened Oct 6, 2020 · 2 comments

timwh commented Oct 6, 2020

Hi there,
What could be causing bogus XY coordinates in the inventory? It also produces an incredibly large radius. See the attached screenshot, trees 10, 1153 and 1154.
XY values are UTM. Occurs regardless of data set (TLS or drone LS). All data sets are filtered for outliers and are clipped to a 1 ha plot.
Cheers, Tim
[screenshot: console printout of the inventory table showing trees 10, 1153 and 1154]

tiagodc (Owner) commented Oct 24, 2020

Hi there,

It's hard to evaluate your issue without more context. To understand it I'd need to see the complete workflow leading up to the tlsInventory call, but I'll elaborate on a few key points below that might help you:

1 ha plots are quite large, and if the point cloud is not thoroughly treated before the inventory step, the results can vary widely across the point cloud. Preprocessing is highly dependent on the 3D scanner, the survey type and the algorithms you choose further on. For instance, TLS and MLS point clouds usually have very heterogeneous point densities, so it's good practice to standardize density so that all regions of the point cloud have a similar number of pts/m³, and/or to split the dataset into tiles and process them separately, which also makes visual inspection easier.
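
As a rough illustration of that preprocessing idea (not part of the original reply), the sketch below thins the cloud with voxel sampling and then loops over rectangular tiles. The file name, voxel spacing and tile size are hypothetical placeholders, and it assumes lidR/TreeLS versions in which clip_rectangle and npoints are available:

# Minimal sketch: even out point density, then process a large plot in tiles.
# The file name, spacing and tile size below are illustrative assumptions.
library(lidR)
library(TreeLS)

las <- readLAS("plot_normalized.laz")

# Voxel sampling keeps roughly one point per voxel, reducing the very
# heterogeneous densities typical of TLS/MLS scans.
las_thin <- tlsSample(las, smp.voxelize(spacing = 0.05))

# Split the plot into tiles so each piece can be processed and inspected
# visually on its own.
tile <- 25  # tile edge length in metres
xr <- range(las_thin@data$X)
yr <- range(las_thin@data$Y)
for (x0 in seq(xr[1], xr[2], by = tile)) {
  for (y0 in seq(yr[1], yr[2], by = tile)) {
    piece <- clip_rectangle(las_thin, x0, y0, x0 + tile, y0 + tile)
    if (npoints(piece) == 0) next
    # ...run treeMap / treePoints / stemPoints / tlsInventory on `piece` here
  }
}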

All steps matter for good inventory/segmentation results, but the most critical ones are probably treeMap and stemPoints. Their parameterization can be tricky, so fine tuning on sample datasets through trial and error is advisable before running those algorithms on the entire point cloud; even then, visual inspection followed by manual correction is often necessary afterwards.

If you set parameters that are too strict at those steps you might omit many trees from the results, but if you pass parameters that are too flexible (low density criteria, wide expected angle/diameter intervals, large pixel/voxel sizes) you might get too many false positives, so the trial and error phase should focus on getting a good balance between the two.
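
By way of illustration only (not from the original reply), such a trial-and-error pass could look like the sketch below. It assumes a small normalized sample called las_sample and an arbitrary grid of map.hough parameters, so all values are placeholders to adjust:

# Minimal sketch: grid-search treeMap parameters on a small clipped sample
# before touching the full plot. The grid values are illustrative only.
library(TreeLS)

grid <- expand.grid(min_density = c(0.05, 0.1, 0.2),
                    max_d       = c(0.3, 0.5, 0.8))

for (i in seq_len(nrow(grid))) {
  p <- grid[i, ]
  map_i <- try(treeMap(las_sample,
                       method = map.hough(min_h = 1, max_h = 5, h_step = 0.5,
                                          pixel_size = 0.025,
                                          max_d = p$max_d,
                                          min_density = p$min_density)),
               silent = TRUE)
  if (inherits(map_i, "try-error")) {
    cat(sprintf("min_density=%.2f max_d=%.1f -> no trees found\n",
                p$min_density, p$max_d))
    next
  }
  pos_i <- treeMap.positions(map_i, plot = FALSE)
  cat(sprintf("min_density=%.2f max_d=%.1f -> %d tree positions\n",
              p$min_density, p$max_d, nrow(pos_i)))
  # Follow up with a visual check, e.g. x <- plot(las_sample); add_treeMap(x, map_i)
}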

Just looking at the printout from your console I'd say the results are weird for a few reasons:

  • the tree heights are really low, which might indicate poor tuning at the tree mapping step that matched false point clusters as trees
  • NaN values, extremely large radii and bogus coordinates are usually a symptom of too few stem points in the diameter layer for a particular tree, which yields gross overestimates. Going back through the workflow and double checking the parameterization at the treeMap and stemPoints steps might help, but a possible a posteriori fix is simply removing estimates from point clusters with too few points. You can check the point count per dbh segment using the data.table syntax: lasnorm@data[Stem & Z > 1.05 & Z < 1.55, .N, by=TreeID] and then remove rows whose dbh estimates were made on fewer than 10 points (or any value you deem reasonable) - see the sketch after this list.
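
As a rough sketch of that a posteriori fix (my wording, not the original reply): assuming lasnorm is the LAS object after stemPoints() and inv is the tlsInventory() output, something along these lines drops the suspect rows; the 10-point threshold is just the example value mentioned above:

# Minimal sketch: drop inventory rows whose dbh was fitted on too few stem points.
# Assumes `lasnorm` has been through stemPoints() and `inv` came from tlsInventory();
# the 10-point threshold is illustrative.
library(data.table)

counts <- lasnorm@data[Stem & Z > 1.05 & Z < 1.55, .N, by = TreeID]
inv_checked <- merge(inv, counts, by = "TreeID", all.x = TRUE)
keep <- !is.na(inv_checked$N) & inv_checked$N >= 10
inv_clean <- inv_checked[keep, ]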

TL;DR
a trial and error phase for fine tuning the tree mapping and stem points algorithms is paramount for good performance, and visual inspection of the point cloud after each step is very important for assessing quality throughout the whole processing chain.

And finally, if you provide a more complete code snippet (from reading the point cloud up to the inventory step) and a data sample, I can make a better assessment and help you with the specifics.

Cheers!

timwh (Author) commented Nov 2, 2020

Thanks Tiago for your response.
I took a systematic approach to parameter selection for treeMap and found that, regardless of the parameters, the Hough transform misses many trees, while the knn eigen decomposition tends to call too many. The area is savanna woodland, so trees are of variable height and some have multiple stems.
A subset of the data is here

Code snippet is below:

#Load TreeLS and lidR (which TreeLS builds on)
library(lidR)
library(TreeLS)
#Load normalised plot point cloud (las_str holds the file path)
las <- readLAS(las_str)
#Plot point cloud, keeping the handle used by the add_* functions below
x <- plot(las)
#Voxelise to homogenise point cloud density
lasnorm_vox <- tlsSample(las, smp.voxelize(spacing = 0.02))
#Set treeMap variables
minh <- 1
mindens <- 0.05
maxh <- 5
maxd <- 0.5
maxcurv <- 0.2
maxvert <- 10
pix_size <- 0.02
hstep <- 0.5
spacing <- 0.1
#Create tree map using eigen decomp or Hough transform method in TreeLS
map <- treeMap(lasnorm_vox, method = map.eigen.knn(max_curvature = maxcurv, max_verticality = maxvert, max_mean_dist = spacing, max_d = maxd, min_h = minh, max_h = maxh))
#map <- treeMap(lasnorm_vox, method = map.hough(min_h = minh, max_h = maxh, h_step = hstep, pixel_size = pix_size, max_d = maxd, min_density = mindens))
add_treeMap(x, map, color = 'yellow', size = 3)
xymap <- treeMap.positions(map, plot = TRUE)
head(xymap)
#Classify tree regions
lasnorm_vox <- treePoints(lasnorm_vox, map, trp.crop(1.5, TRUE))
add_treePoints(x, lasnorm_vox, size = 3)
add_treeIDs(x, lasnorm_vox, cex = 2, col = 'yellow')
#Classify stem points
lasnorm_vox <- stemPoints(lasnorm_vox, stm.hough(h_step = 0.5, max_d = maxd, h_base = c(1, 2.5), pixel_size = 0.03, min_votes = 3))
add_stemPoints(x, lasnorm_vox, color = 'yellow', size = 2)
#Create tree inventory (circle fit; swap in the commented cylinder fit to compare)
inv_vox <- tlsInventory(lasnorm_vox, dh = 1.3, dw = 0.5, hp = 0.95, d_method = shapeFit(shape = 'circle', algorithm = 'ransac', n = 10))
#inv_vox <- tlsInventory(lasnorm_vox, dh = 1.3, dw = 0.5, hp = 0.95, d_method = shapeFit(shape = 'cylinder', algorithm = 'ransac', n = 10))
add_tlsInventory(x, inv_vox)
