The help for MeasureTexture says this:
For example, if you choose a scale of 2, each pixel in the image (excluding some border pixels) will be compared against the one that is two pixels to the right.
but we think this ought to state "two pixels in every direction" for a total of 16 (the ring of pixels two away from any given pixel).
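To make the two readings concrete, here is a small numpy-only sketch of how a single scale/angle offset pairs pixels. The angle-to-offset convention used here (0 = right, 45 = up-right, etc.) is an assumption for illustration, not a transcription of CellProfiler's code:

```python
import numpy as np

def cooccurrence_pairs(image, scale, angle_deg):
    """Return (reference, neighbor) value pairs for one offset.

    For scale=2 and angle 0 each pixel is paired with the pixel two
    columns to the right; other angles pick a different single offset.
    No ring of 16 surrounding pixels is involved in this reading.
    """
    # Hypothetical angle convention: 0 = right, 45 = up-right,
    # 90 = up, 135 = up-left.
    dr, dc = {
        0: (0, scale),
        45: (-scale, scale),
        90: (-scale, 0),
        135: (-scale, -scale),
    }[angle_deg]
    rows, cols = image.shape
    pairs = []
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pairs.append((int(image[r, c]), int(image[r2, c2])))
    return pairs

img = np.arange(16).reshape(4, 4)
# At scale 2, angle 0, pixel (0, 0) is compared with (0, 2).
print(cooccurrence_pairs(img, 2, 0)[0])  # (0, 2)
```

Under this reading, "two pixels to the right" is accurate for the default angle of 0, and the other angles each add one more single offset rather than the full 16-pixel ring.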
Similarly, the help states this:
MeasureTexture quantizes the image into eight intensity levels.
and we are surprised it's only 8; we would have imagined quite a few more levels being needed for sensitivity. We also wonder how this quantization is done: is it scaled to the maximum intensity value within each image (we hope not)? That would make the measurement relative from one image to the next.
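The difference between the two quantization schemes can be shown in a few lines. This is a sketch of the two possible behaviours, not CellProfiler's actual code:

```python
import numpy as np

def quantize(image, levels=8, per_image=True):
    """Quantize an image into `levels` intensity bins.

    per_image=True scales by the image's own maximum (the behaviour
    the reporter worries about); per_image=False uses a fixed [0, 1]
    range, so the same absolute intensity always lands in the same bin.
    """
    top = image.max() if per_image else 1.0
    # Map [0, top] onto bins 0..levels-1.
    bins = np.floor(image / top * levels).astype(int)
    return np.clip(bins, 0, levels - 1)

dim = np.array([0.0, 0.1, 0.25])       # a dim image: max intensity 0.25
print(quantize(dim, per_image=True))   # relative:  [0 3 7]
print(quantize(dim, per_image=False))  # absolute:  [0 0 2]
```

With per-image scaling the same dim image spans the full 0-7 range, so texture values from images with different exposures are not directly comparable; with fixed-range scaling they are.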
Re: direction - I thought that the "Angle to measure" setting below specifies the direction. But you're right: the module could be updated to reflect that this setting was added after the help was written.
I would actually be much more impressed if we are able to produce more definitive descriptions of each of the features. I get this question a lot.
I remember trying to flesh the docs out in the past (you can see that some feature descriptions have more than others), but in general, my google-fu skills have failed me in this area.
12/2017 update - I think the "2 pixels to the right" language still needs to be addressed; my reading of the mahotas Haralick implementation makes me think the 8-level thing is almost certainly no longer true, but I would appreciate someone else's backup on that.
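For anyone who wants to check what the quantization does downstream, here is a minimal numpy-only sketch of the Haralick-style pipeline for one offset (build a normalized grey-level co-occurrence matrix, then compute the contrast feature). It illustrates the general technique only; it is not a transcription of the mahotas code:

```python
import numpy as np

def glcm_contrast(image, levels=8, offset=(0, 2)):
    """Contrast feature from a grey-level co-occurrence matrix.

    image: 2-D integer array already quantized to values 0..levels-1.
    offset: (row, col) displacement; (0, 2) pairs each pixel with the
    one two columns to the right, matching the "scale 2, angle 0" case.
    """
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[image[r, c], image[r2, c2]] += 1
    glcm /= glcm.sum()                         # normalize to probabilities
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * glcm).sum())  # Haralick contrast

# A constant image has zero contrast at any offset.
flat = np.ones((4, 4), dtype=int)
print(glcm_contrast(flat))  # 0.0
```

Note the size of the GLCM (and so the sensitivity of the features) is set entirely by `levels`: if the library builds the matrix from the integer values actually present in the input rather than from a fixed 8, the "eight intensity levels" sentence in the help would indeed be stale.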