
Standardization of common extrabytes #37

Open · esilvia opened this issue Sep 21, 2017 · 85 comments

@esilvia commented Sep 21, 2017

We've discussed listing some standardized extrabytes either in the specification itself or a supplementary document. This would encourage their adoption by the community and formalize common extrabytes as a guideline to future implementations.

We need to figure out the following:

  1. Which extrabytes merit standardization?
  2. Which fields should be formalized? e.g., optional fields like min, max, and nodata might not make sense.
  3. Should data_type be formalized?
  4. Where will this list live? Will it formally be included in the specification itself (thereby requiring ASPRS approval every time one gets added), or perhaps as a wiki page on GitHub with a link from the specification? I propose the latter.
  5. What will be the approval process for new additions? (I propose people submit new Issues and then LWG votes yes/no).
  6. Should units be formalized? For example, will we have to have separate entries for "refracted depth" in meters and feet?

Below is a link to what I think is a decent start to the standardized extrabytes. Once we get some agreement on a few of these I can start building a wiki page or contribute to Martin's pull request. Which one we do depends on the answer to the 4th question.

Standard ExtraBytes v1.docx

@esilvia esilvia self-assigned this Sep 21, 2017
@rapidlasso commented Sep 21, 2017

3.) I think for data_type we can make "best" and "worst" practice recommendations that show which scales and offsets use the fewest bytes possible while keeping reasonable resolution.
BEST:
For a return echo width stored at 0.1 ns resolution, use a 0.1-scaled unsigned char to cover the range 0.0 ns to 25.5 ns, or a 0.1-scaled unsigned short to cover the range 0.0 ns to 6553.5 ns.
For a return echo width stored at 0.01 ns resolution, use a 0.01-scaled unsigned short to cover the range 0.00 ns to 655.35 ns.
WORST:
Avoid using floats or doubles, and email Martin if you really want to start a heated discussion about storing linear measurements in a floating-point format. (-;
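The scaled-integer storage described above can be sketched in a few lines (the helper names are illustrative, not from any LAS library):

```python
# Sketch of storing a physical value as a fixed-scale unsigned integer,
# as recommended in this comment. Helper names are made up for the demo.

def encode_scaled(value, scale, bits=8):
    """Store a physical value as an unsigned integer with a fixed scale."""
    raw = round(value / scale)
    if not 0 <= raw < 2 ** bits:
        raise ValueError(f"{value} out of range for {bits}-bit storage at scale {scale}")
    return raw

def decode_scaled(raw, scale):
    return raw * scale

# An echo width of 7.3 ns at 0.1 ns resolution fits in an unsigned char
# (range 0.0 ns to 25.5 ns):
raw8 = encode_scaled(7.3, 0.1, bits=8)
# The same width at 0.01 ns resolution needs an unsigned short
# (range 0.00 ns to 655.35 ns):
raw16 = encode_scaled(7.3, 0.01, bits=16)
```

A width of 30 ns at 0.1 ns scale would raise, since it exceeds the 8-bit range; that is exactly the range/resolution trade-off being discussed.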

@rapidlasso rapidlasso closed this Sep 21, 2017
@rapidlasso rapidlasso reopened this Sep 21, 2017
@lgraham-geocue commented Sep 23, 2017

Other useful stuff under the "common" category. Not fleshed out but just introduced as ideas:

Group ID - this is a 32-bit (64-bit?) unsigned int that is used as a group index (or Object ID, OID). For example, all points that "belong" to a specific building roof identified as Object ID 246 will have this tag set to 246. Relationship maps can be built in a VLR or EVLR.

Sigma X, Y, Z --> Standard deviation of the point expressed in the units of the projection. For geographic data, the units are meters. The values could be doubles, or simply longs that follow the point scaling.

Normal - 3 tuple that defines the normal to the surface at the point of incidence. Direction is opposite the ray direction (toward the laser scanner).

@esilvia esilvia added this to the v1.4 R14 milestone Sep 27, 2017
@rapidlasso commented Oct 3, 2017

Two issues with the Normal that @lgraham-geocue suggests.

(1) Directions are always troublesome because they are difficult to re-project correctly when going from one CRS to another. In the PulseWaves format we've solved this by expressing direction vectors as two points. Re-projecting both is always going to be correct (even if we go to non-Euclidean space). How about a "trajectory" index instead that references "trajectory points" stored in the same LAS file (but marked synthetic) that are on the trajectory? These "trajectory points" are then given the same index so they can be paired up with the actual returns.

(2) Triplets have been deprecated.
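Point (1) can be illustrated with a toy non-linear coordinate change: transforming a direction vector as if it were a point goes wrong, while transforming the two endpoints and re-deriving the direction stays correct. (The transform below is made up purely for the demo.)

```python
# Toy demo: why re-projecting the two endpoints of a direction is safe,
# while re-projecting the direction vector itself is not.

def transform(p):
    x, y = p
    return (x + 0.001 * y * y, y)   # some non-linear coordinate change

def direction(a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    n = (dx * dx + dy * dy) ** 0.5
    return (dx / n, dy / n)

a, b = (0.0, 100.0), (0.0, 101.0)
naive = transform(direction(a, b))                 # treats the vector as a point
correct = direction(transform(a), transform(b))    # re-derive from two points
# The naive result barely changes, but the true transformed direction
# picks up a substantial x component from the non-linear term:
assert abs(naive[0] - correct[0]) > 0.01
```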

@esilvia commented Oct 3, 2017

@rapidlasso What is there to be gained by allowing the data_type to vary for a given ExtraByte definition? I've noticed that you allow it to vary for the "height from ground" extrabyte in your tools, but that's caused my implementations a little trouble when some files have it defined one way while others have it defined another way.

I guess this begs the question of why we're standardizing. I believe it's to encourage implementation by more software vendors, which means simplification is key. In my opinion that means guaranteeing a 1-to-1 relationship of the key attributes with a certain EB code. At a minimum I think data_type, name, and nodata should be fixed, while description, scale, offset, and validity of min/max are recommended.

What do you think about releasing a series instead? e.g., "height from ground [cm]" with data_type int16 gets one Standard value (e.g., 200) while "height from ground [mm]" with data_type int32 gets the next value (e.g., 201)?

@esilvia commented Oct 3, 2017

@rapidlasso Good point about the difficulties with reprojection. I've had this struggle with the Origin vector of FWF data, and I've often wondered whether those vectors are getting modified correctly.

Unfortunately, if the points get shifted (e.g., from calibration) I doubt whether any software would also update the point coordinates. That's the advantage of the vector. As you point out, though, the disadvantage is that they're only valid for a given projection.

@lgraham-geocue commented Oct 3, 2017

@rapidlasso commented Oct 3, 2017

@esilvia, not feeling strongly about the data type issue. Your suggestion is also good as it would prohibit folks from storing "height above ground" or "echo width" as floating point numbers. Now that is something that I really do feel strongly about. How do I allow different data types in LASlib? I have a "get attribute as float value" function to use "extra bytes" for processing so the actual storage format of the extra attribute does not matter in my implementation.
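A generic accessor along the lines Martin describes could look like this. The data_type codes follow the LAS 1.4 table (1 = unsigned char through 10 = double), but treat this sketch as illustrative rather than LASlib's actual API:

```python
import struct

# Rough analogue of a "get attribute as float value" helper: decode an
# extra-bytes attribute from its raw bytes regardless of storage type,
# then apply scale and offset. The data_type-to-format mapping follows
# the LAS 1.4 table; the function itself is a sketch.

_FORMATS = {1: "<B", 2: "<b", 3: "<H", 4: "<h", 5: "<I",
            6: "<i", 7: "<Q", 8: "<q", 9: "<f", 10: "<d"}

def attribute_as_double(raw_bytes, data_type, scale=1.0, offset=0.0):
    value, = struct.unpack(_FORMATS[data_type], raw_bytes)
    return value * scale + offset

# A "height above ground" of 12.34 m stored as a 0.01-scaled unsigned short:
h = attribute_as_double(struct.pack("<H", 1234), 3, scale=0.01)
```

With such an accessor, downstream processing never sees the storage format, which is how differing data types across files stop being a burden for the reader.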

@rapidlasso commented Oct 7, 2017

@esilvia and @lgraham-geocue my suggestion is to start this standardization with very few (two or three) additional attributes that are likely to be used or that are already used. "Height above ground" is an obvious candidate for derived (i.e. not new) information. "Echo width" is an obvious candidate for additional (i.e. new) information. I would recommend starting with just those two and seeing how it works out before adding a larger number ...

@lgraham-geocue commented Oct 7, 2017

Yes, I agree. The more complex, the lower the adoption rate. I would like to see Group added in this initial change. It is just an unsigned long (4 bytes) or unsigned long long (8 bytes). In the initial version, there would be no restrictions on its use other than initializing to zero (meaning no group membership). We could write a short "best practices" on using Group but it would only be a guideline, not a requirement.

@esilvia commented Apr 13, 2018

@lgraham-geocue I like the idea of a GroupID/ObjectID attribute. Should the NULL value be 0 or INTMAX? Not sure which is more intuitive.

What if there are two different kinds of groups that a point could belong to? Should we include recommendations for supporting multiple attributes of the same kind? e.g., GroupID[1], GroupID[2], etc?

@esilvia commented Apr 13, 2018

Any preference on how to differentiate between the 32-bit and 64-bit ObjectID definitions? LongObjectID for 64bit?

@lgraham-geocue commented Apr 13, 2018

@esilvia commented Sep 5, 2018

Here's an update to the proposed standard extrabytes.
Standard ExtraBytes v2.docx

@esilvia commented Sep 5, 2018

And here's another update including some of the feedback I got this summer at JALBTCX, adding the horizontal and vertical uncertainty fields.
Standard ExtraBytes v3.docx

@rapidlasso commented Sep 13, 2018

The "Range", which is "defined as the three-dimensional distance from the sensor to the point, the range is useful for multiple computations such as intensity attenuation and measurement bias.", is suggested to be of data type float. I vehemently oppose that. The data type should be an unsigned integer (or even just an unsigned short) with a scale similar to that of the LAS points (or less precise) and an offset of zero.
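One way to see the argument for a scaled unsigned integer over float: a fixed scale gives the same resolution everywhere, while float32 resolution degrades as the magnitude grows. A small sketch (the helper is mine, just measuring the gap between adjacent float32 values):

```python
import struct

# float32_step(x): the gap between x (rounded to float32) and the next
# representable float32 above it. A 0.01-scaled unsigned integer would
# resolve 1 cm at any range; float32 resolution depends on magnitude.

def float32_step(x):
    packed = struct.pack("<f", x)                  # round x to float32
    bits, = struct.unpack("<I", packed)
    nxt, = struct.unpack("<f", struct.pack("<I", bits + 1))
    cur, = struct.unpack("<f", packed)
    return nxt - cur

# Near 1 m a float32 resolves ~0.12 um; near 5000 m only ~0.5 mm, and it
# keeps degrading as ranges grow. The step size is 4096x coarser at 5000 m:
assert float32_step(5000.0) == 4096 * float32_step(1.0)
```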

@rapidlasso commented Sep 13, 2018

Are the tuples and triples finally deprecated? I'd like to completely remove them from LASlib. They never were properly supported and I've never seen them used anywhere.

@rapidlasso commented Sep 13, 2018

I suggest we start with one, two, or three standardizations that are reasonably simple. My votes go to:

  • echo width (widely used by RIEGL exports)
  • height above ground (needed by many, already implemented and used for a long time in LAStools)
  • group ID (requested by Lewis; a use case exists for Terrasolid's new group functionality)

@lgraham-geocue commented Sep 13, 2018

@lgraham-geocue commented Sep 13, 2018

@hobu commented Sep 13, 2018

> In addition, all distance units in the file should be (we would say “must” in the spec) in the vertical units of the Spatial Reference System of the file. I say vertical units because, in the USA, there are still some “official” SRS with horizontal in feet (INT, Survey?) and vertical in meters.

LAS abdicates responsibility for the coordinate system by handing it off to WKT. I disagree that the specification should get involved here, because the spec and the SRS are inevitably going to get into conflict.

LAS should investigate requiring OGC WKT2 in a future revision. WKT2 handles more situations and is more complete. See https://gdalbarn.com/ for some discussion related to the GDAL project on the topic (thanks for the contribution @lgraham-geocue!)

> (tuples and triples) Maybe Howard (Butler) is using them for something?

Triplets are common in graphics scenarios, and I proposed them thinking they would be well aligned with LAS. They aren't, and they introduce as many problems as they might solve. Few software packages produce or consume them. They should be dropped. No one will miss their removal.

@esilvia commented Sep 21, 2018

@rapidlasso Tuples and triples will be officially dropped with the next revision (#1).

I agree that range could be confusing because of potential desynchronization with the SRS units, but I believe that fixing it at meters and leaving it with the points prevents its loss when the trajectory files inevitably get lost. We hard-code units for the angles (at degrees), so I don't see why we can't do this with Range. Software can easily change units displayed while leaving the units stored untouched.

You've persuaded me that starting with a small handful is a good idea, and I like Martin's list. I'm tempted to add the topobathy-related ones, but perhaps that's better left in the LDP?

@lgraham-geocue commented Sep 21, 2018

@rapidlasso commented Sep 21, 2018

@esilvia "Tuples and triples will be officially dropped with the next revision". Happy to hear that. I just kicked them out of LASlib last week ... (-:

@rapidlasso commented Sep 21, 2018

@lgraham-geocue I disagree. A range is - similar to the scan angle - something measured by a scientific measurement instrument and should follow international standards. I could see how your argument could apply to "height above ground", but even here I'm leaning toward always making the measurement unit part of the standardized "extra bytes" because (1) the CRS often gets stripped, (2) reprojecting coordinates from a feet-based CRS to a meter-based CRS (or vice versa) without rescaling the "extra bytes" leads to wrong ranges / heights above ground, and (3) the best choices of scale and offset change when we go from meters to feet. A scale factor of 0.01 may be good when measuring the range or the height above ground for an airborne scan in meters, but it is overly precise for feet. If we let the vertical unit of the CRS decide this, we will open a whole can of worms of "extra bytes" that do not have the correct unit or whose correct unit is unknown.

@esilvia commented Oct 1, 2018

@rapidlasso You make a strong point regarding the scale/offset also being unit-dependent. I think that's also a strong argument in favor of fixing the units.

@lgraham-geocue observed that the horizontal and vertical units can be different in LAS files, which is something I've also observed to my chagrin. Since Range is a 3D measurement, it could get very, very weird if the vertical units are meters and horizontal units are feet. I think this is another argument in favor of fixing the units at meters.

I can be persuaded that the height-from-ground will match the vertical units of the LAS file. Simple, and I think it's what people would expect when they receive data.

So here's the plan: I'm going to publish the following "Standard" extrabytes as a GitHub wiki page (https://github.com/ASPRSorg/LAS/wiki/Standard-ExtraByte-Definitions):

  • Pulse Width
  • Height Above Ground
  • Group ID (aka Object ID)
  • Horizontal Uncertainty
  • Vertical Uncertainty
  • Bathymetric Flags
  • Submerged/Refracted Vector Length (aka Refracted Depth)
  • Range

All of these Standard ExtraBytes will be assigned an integer value (ID) that can be assigned to the first two bytes of the ExtraByte definition structure (currently Reserved). It's a little longer than Martin's list but I think it captures the ones I've seen in the wild. I didn't get any feedback on incorporating the ExtraByte definitions from the topobathy LDP, so I decided to include the ones that I've seen most often.

Rather than include these definitions in the specification itself, I'll update the ExtraByte VLR description in the specification with a link to the wiki page and claim the two Reserved bytes for the ID field, which must be 0 unless it adheres to one of the definitions on the wiki page.
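For implementers, the 192-byte extra-bytes descriptor can be parsed with a straightforward struct unpack. The field layout below follows the LAS 1.4 specification; reading the 2-byte reserved field as a standard-attribute ID is the proposal above, not the published spec, so treat this as a sketch:

```python
import struct

# Parse a 192-byte LAS 1.4 extra-bytes descriptor. Interpreting "reserved"
# as a standard-attribute ID (0 = non-standard) follows the proposal in
# this thread, not the released specification.

def parse_extra_bytes_descriptor(record):
    assert len(record) == 192
    (reserved, data_type, options, name, _unused,
     _no_data, _minimum, _maximum,
     sx, sy, sz, ox, oy, oz,
     description) = struct.unpack("<H B B 32s 4s 24s 24s 24s 6d 32s", record)
    return {
        "standard_id": reserved,   # proposed ID; 0 means non-standard
        "data_type": data_type,
        "options": options,
        "name": name.rstrip(b"\0").decode("ascii", "replace"),
        "scale": (sx, sy, sz),
        "offset": (ox, oy, oz),
        "description": description.rstrip(b"\0").decode("ascii", "replace"),
    }

# Build and parse a descriptor for a hypothetical standard attribute 200:
rec = struct.pack("<H B B 32s 4s 24s 24s 24s 6d 32s",
                  200, 3, 0, b"height above ground", b"", b"", b"", b"",
                  0.01, 0.0, 0.0, 0.0, 0.0, 0.0, b"height above ground in cm")
descriptor = parse_extra_bytes_descriptor(rec)
```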

All of these changes will be included with the R14 revision, which I plan to submit to ASPRS in the next week or two. Last chance to comment. @rapidlasso @lgraham-geocue @csevcik01 @hobu @anayegandhi @jdnimetz

@esilvia commented Mar 25, 2019

LAS in TLS is less common but it happens because e57 and similar formats aren't widely supported, nor is there a standard way to store the setup location (something I'd like to fix). If we did a SF of 0.02 then we could have a range of 0-5.10 meters, with of course a slight decrease in precision. IMHO any more than 5ish meters starts to lose usefulness, but maybe satellite LiDAR hits that range?

1 byte per point isn't a huge issue, although remember that storing Hz Precision also means storing Vt Precision, so it's actually 2 bytes vs 4 bytes per point. Again, not a huge issue because storage is relatively cheap, but is there really a need for it? Maybe there is. @gimahori might know.

@rapidlasso commented Mar 25, 2019

Indeed, storage is cheap, and if the range is unused then the upper byte (or the upper bits of the upper byte) is mostly zero, meaning it disappears when compressing the LAS file with LASzip (or any other redundancy-removing compression scheme).

@esilvia esilvia removed this from the v1.4 R15 milestone Jun 4, 2019
@esilvia esilvia added this to the v1.4 R16 milestone Jun 4, 2019
@manfred-brands commented Dec 13, 2019

LAS is no longer used only for laser data. We use LAS for multibeam echo sounder data. A typical system has 400 beams over a 150 degree arc. This gives a 0.375 degree beam separation, which is only part of the uncertainty and increases further away from nadir. Depending on range (water depth) that value can get big: 100 m water depth results in a 9.5 m beam width at the edge vs 0.65 m at nadir. At 1000 m they get 10x worse.
At the same time we use underwater laser at short range where uncertainty is sub-millimetre.
A single prescribed field will not have enough range in one case and not enough resolution in the other.
What is the purpose of the standardization?
Knowing which field contains the uncertainty (number 113) so we don't have to check all kinds of different names? In that case the type and scale factor can be different depending on the data at hand. All LAS readers should deal with that gracefully as it is defined in the ExtraBytesDefinition.
If the purpose is that we can combine LAS files from different data sources, we need a larger field. An unsigned short would allow mm resolution to 65 m. Any data worse could be encoded as MaxValue.
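The unsigned-short suggestion might look like the following sketch. The names and the clamp-to-max rule are my own reading of the comment, not an agreed definition:

```python
# Sketch: millimetre-scaled unsigned short uncertainty, with the maximum
# raw value doubling as a "worse than representable" code. Constant names
# and the clamping rule are illustrative.

UNCERT_SCALE = 0.001     # metres per step (1 mm resolution)
UNCERT_WORSE = 65535     # max unsigned short; means "worse than 65.534 m"

def encode_uncertainty(metres):
    return min(round(metres / UNCERT_SCALE), UNCERT_WORSE)

def decode_uncertainty(raw):
    return None if raw == UNCERT_WORSE else raw * UNCERT_SCALE

# The 9.5 m multibeam edge-of-swath case still fits; a 100 m uncertainty
# clamps to the sentinel:
assert encode_uncertainty(9.5) == 9500
assert encode_uncertainty(100.0) == UNCERT_WORSE
```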

@rapidlasso commented Dec 13, 2019

I agree with @manfred-brands. The standardization document should recommend suitable data types and strongly discourage the use of floats or doubles, but allow the data producer to populate scale and offset values that are suitable for their data, just like the LAS standard does for x/y/z coordinates. But it is really important that the standardization document contains concrete usage examples so we don't end up with attributes that are stored as 64-bit integers, or at picometer scales, or as double-precision floating-point. In the LASlib API I include a convenience function that can read any of the additional attributes from any scaled and offset representation and present it as a double-precision floating-point value for processing.

@esilvia commented Jan 20, 2020

@manfred-brands @rapidlasso

From today's conference call: The purpose of standardization is fourfold:

  1. to prevent multiple different names for the same attribute
  2. to protect standardized names from acquiring multiple different meanings
  3. to provide a centralized location to learn more about an ExtraByte that's discovered in one's dataset
  4. to provide a method to publicize and therefore increase the value (i.e., usability) of ExtraBytes that users have produced

In that light, your points make sense to me, and imo also make the answer about units obvious. The standard ExtraBytes can recommend a standard unit, offset, and scale, but allow for deviations when the underlying technology, site, or application require greater range and/or precision.

If we don't do this, then we'll end up with multiple versions of the same "standard" ExtraByte for different levels of precision, and I believe that would be counterproductive to the stated goals. Thanks for providing some clarity on this issue. I believe that we can move forward with this information.

@rapidlasso commented Jan 22, 2020

I recommend we start (quickly) with one or two "standardized additional attributes" and see what we learn in the process of adding them as addenda (?) to the specification and implementing them in a few software packages. My number one pick would be "echo width" in tenths of a nanosecond. My number two pick would be "height above ground" in centimeters or millimeters.

@lgraham-geocue commented Jan 22, 2020

@rapidlasso commented Jan 23, 2020

@lgraham-geocue how do you currently encode this "emission point to impact point unit vector" into extra bytes? I assume you use three different additional attributes, one for each vector component? What data type, scale, and offset are you using?

@lgraham-geocue commented Jan 23, 2020

@rapidlasso commented Jan 24, 2020

@lgraham-geocue that is exactly what I was afraid of. (-: You are hereby excused from designing the storage details for standardization of "additional attributes" via extra bytes ... (-;

But seriously. For all near-nadir shots the ux and uy components will be close to zero and lead to very inefficient (aka over-precise) storage. We had this discussion before: it originally started when a fully flexible 2.0 version of the LAS specification was first proposed, which (fortunately) died. That was about storing xyz in floating-point, but the same argument holds for the three components of a unit vector. If we need to store unit vectors it may be worthwhile using a concise coding such as [Deering 1995]. The full discussion against floating-point is still available here, and a screenshot of the opening argument is attached:

USGS_CLICK_LiDARBB_LAS2 0_floating_point_boycott

@dpev commented Jan 24, 2020

@lgraham-geocue commented Jan 24, 2020

@rapidlasso commented Jan 24, 2020

Don't jump to conclusions too quickly about me having hidden LAZ intentions. Three fluffy floats will LAZ-compress at a higher compression rate than more compact unit vector representations. A recent survey of efficient representations for unit vectors (here with applications as shading normals) provides an accessible explanation of why three floats are überfluffy, alongside a number of better alternatives. I think the "oct32" mapping looks promising:

https://www.researchgate.net/publication/301612007_A_Survey_of_Efficient_Representations_for_Independent_Unit_Vectors

Lewis' emotional response suggests that surface normals are not a suitable starting candidate for the first standardized additional attribute. (-; Maybe the beam or beamlet ID needed for Velodyne, Ouster, SPL100 and upcoming scanners is a less contentious candidate?
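For reference, the octahedral ("oct") mapping from the cited survey is compact to implement. A sketch (quantizing the two components to 16 bits each, i.e. oct32, is mentioned but not implemented here):

```python
import math

# Sketch of the octahedral unit-vector mapping from the cited survey:
# project onto the octahedron |x|+|y|+|z|=1 and fold the lower hemisphere
# over the diagonals, leaving two components in [-1, 1] that could be
# stored as two 16-bit scaled integers instead of three 32-bit floats.

def oct_encode(x, y, z):
    s = abs(x) + abs(y) + abs(z)
    px, py = x / s, y / s
    if z < 0:  # fold the lower hemisphere
        px, py = ((1 - abs(py)) * math.copysign(1, px),
                  (1 - abs(px)) * math.copysign(1, py))
    return px, py

def oct_decode(px, py):
    x, y, z = px, py, 1 - abs(px) - abs(py)
    if z < 0:  # unfold
        x = (1 - abs(py)) * math.copysign(1, px)
        y = (1 - abs(px)) * math.copysign(1, py)
    n = math.sqrt(x * x + y * y + z * z)
    return x / n, y / n, z / n

# Round-trip a lower-hemisphere normal:
v = (1 / 3, 2 / 3, -2 / 3)
assert all(abs(a - b) < 1e-9 for a, b in zip(v, oct_decode(*oct_encode(*v))))
```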

@abellgithub commented Jan 24, 2020

This all seems to have gotten very confusing and confused. Can someone summarize the basic proposal and goal?

@gsmercier commented Jan 24, 2020

@lgraham-geocue commented Jan 25, 2020

@rapidlasso commented Jan 26, 2020

I would appreciate if @lgraham-geocue could stop suggesting my comments are driven by me only "knowing about ALS" or me only "caring about LAZ" or the like. This is getting old.

In the seminal paper "Geometry Compression" from SIGGRAPH 95 Deering kick-started research on better representations of surface normals or unit vectors noting that "Traditionally 96-bit normals (three 32-bit IEEE floating-point numbers) are used in calculations to determine 8-bit color intensities. 96 bits of information theoretically could be used to represent 2 to the power of 96 different normals spread evenly over the surface of a unit sphere. This is a normal every 2 to the power of -46 radians in any direction. Such angles are so exact that spreading out angles evenly in every direction from earth you could point out any rock on Mars with sub-centimeter accuracy."

The summary paper I cited earlier points out that "Consider a straight forward representation of points on the unit sphere. A structure comprising three 32-bit floating scalars (struct { float x, y, z; }) occupies 3 floats = 96 bits per unit vector. This representation spans the full 3D real space, R3, distributing precision approximately exponentially away from the origin until it jumps to infinity. Since almost all representable points in this representation are not on the unit sphere, almost all 96-bit patterns are useless for representing unit vectors. Thus, a huge number of patterns have been wasted for our purpose, and there is an opportunity to achieve the same set of representable vectors using fewer bits, or to increase effective precision at the same number of bits."

So I am just one of pretty much any other geometry storage researcher in the world that would say that it's time to move past storing three floats for unit vectors or surface normals.

For every "additional attribute" stored as extra bytes we specify these things in the VLR:

  1. starting byte
  2. data type
  3. no data value
  4. scale
  5. offset

@lgraham-geocue, are you suggesting that reading 1 to 3 is ok but using 4 and 5 is too complex?

@lgraham-geocue commented Jan 26, 2020

@rapidlasso commented Feb 24, 2020

I recently published a little blog post on how to map the information stored in these kinds of ASCII lines of LiDAR information to the LAS format:

1, 290.243, 28.663, -11.787, 0.060, -0.052, 0.997, 517.3170, -58.6934, 313.0817, 52
1, 290.208, 28.203, -11.825, 0.062, -0.056, 0.996, 517.3167, -58.6934, 313.0817, 49
1, 290.182, 27.739, -11.852, 0.063, -0.055, 0.997, 517.3164, -58.6935, 313.0817, 53
1, 290.165, 27.272, -11.866, 0.061, -0.058, 0.996, 517.3161, -58.6935, 313.0817, 53
1, 290.163, 26.800, -11.858, 0.061, -0.053, 0.997, 517.3157, -58.6935, 313.0817, 68
...

The first number is either a classification into ground, vegetation, or other surface, or represents an identifier for a planar shape that the return is part of. The next three numbers are the x, y, and z coordinate of the LiDAR point in some local coordinate system. The next three numbers are the x, y, and z coordinates of an estimated surface normal. The next three numbers are the x, y, and z coordinates of the sensor position in the same coordinate system. The last number is the intensity of the LiDAR return.
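One of those lines could be mapped to LAS-style scaled-integer attributes like this. The field meanings follow the description above; the 0.001 scale chosen for the normal components is my own pick for illustration, not something from the blog post:

```python
import struct

# Illustrative parse of one ASCII line into scaled-integer attributes.
# The 0.001 scale for the normal components is an assumption for the demo.

line = "1, 290.243, 28.663, -11.787, 0.060, -0.052, 0.997, 517.3170, -58.6934, 313.0817, 52"
fields = [float(f) for f in line.split(",")]

classification = int(fields[0])   # ground/vegetation/other, or a plane ID
x, y, z = fields[1:4]             # point in the local coordinate system
sensor = tuple(fields[7:10])      # sensor position, same coordinate system
intensity = int(fields[10])       # intensity of the return

# Store the normal as three 0.001-scaled signed shorts (6 bytes) rather
# than three floats (12 bytes):
NORMAL_SCALE = 0.001
normal_raw = struct.pack("<3h", *(round(c / NORMAL_SCALE) for c in fields[4:7]))
```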

@parrishOSU commented Mar 16, 2020

The cBLUE topo-bathy lidar TPU tool (https://github.com/noaa-rsd/cBLUE.github.io) is currently storing vertical uncertainty values in extra bytes as floats, rather than uchar, for increased precision. This differs from the LWG's DRAFT Standard ExtraByte Definitions but seems to be working for those groups using the tool. Input on this? @forkozi @esilvia ?

@rapidlasso commented Mar 16, 2020

Do the TPU values require error values in femtometer increments close to zero that drop exponentially to decimeter increments close to one million? Then a float32 representation is suitable.

If the increments with which the error is to be expressed should be a constant centimeter or millimeter throughout the entire error value range then an unsigned integer scaled by 0.01 or 0.001 is the correct approach.

@parrishOSU commented Mar 16, 2020

The range of plausible vertical uncertainty values, considering the range of possible data sources, is probably meters to millimeters. Adding one order of magnitude in either direction gives tens of meters to tenths of millimeters. If we use a scale factor, where is the scale factor stored? Is it the same as for the X Y Z coordinates?

@rapidlasso commented Mar 16, 2020

The "scale factor" is a core part of the "extra bytes" definition. I recently published a little blog post on how to use txt2las (which is open source) to map the information stored in these kinds of ASCII lines of LiDAR information to the LAS format, and you can see examples there with different numbers of decimal digits being used:

1, 290.243, 28.663, -11.787, 0.060, -0.052, 0.997, 517.3170, -58.6934, 313.0817, 52
1, 290.208, 28.203, -11.825, 0.062, -0.056, 0.996, 517.3167, -58.6934, 313.0817, 49
1, 290.182, 27.739, -11.852, 0.063, -0.055, 0.997, 517.3164, -58.6935, 313.0817, 53
1, 290.165, 27.272, -11.866, 0.061, -0.058, 0.996, 517.3161, -58.6935, 313.0817, 53
1, 290.163, 26.800, -11.858, 0.061, -0.053, 0.997, 517.3157, -58.6935, 313.0817, 68
...

@ASPRSorg ASPRSorg deleted a comment from rapidlasso Jan 15, 2021
@esilvia esilvia removed this from the v1.4 R16 milestone Mar 15, 2021
@rapidlasso commented Mar 18, 2021

The "beam ID" seems a rather easy first candidate for standardization. Clearly there is a need and clearly users already store this information to "extra bytes" like here as "Velodyne Rings". In this blog post I describe how to copy the beam ID from the "point source ID" field or from the "user data" field into a new "extra bytes" attribute with two calls to las2las, namely

las2las ^
-i Samara\Drone\00_raw_aligned\*.laz ^
-add_attribute 1 "laser beam ID" "which beam ranged this return" ^
-odir Samara\Drone\00_raw_temp -olaz

las2las ^
-i Samara\Drone\00_raw_temp\*.laz ^
-copy_user_data_into_attribute 0 ^
-set_user_data 0 ^
-set_point_source 0 ^
-odir Samara\Drone\00_raw_ready -olaz
