KAM: nearest neighbour, second nearest? etc... #6
In my mind, a kernel involves all neighbours within the specified size. In addition, there should be a criterion for ignoring large misorientations, since such points belong to other grains or are misindexed. A more general convolution approach might be interesting, for example to weight contributions by the actual distance (points on a diagonal are slightly further away) or even to add a Gaussian weight distribution, but it depends on what you are trying to use the values for, I guess. I would have thought equal weights are probably fine for most cases. What do other people do?
João
On 11 Jan 2018, at 14:51, AllanHarte wrote:
Hi All
Am I correct in thinking that the current calcKam() function in ebsd.py takes the average of neighbouring pixels in the row, in the column, and then takes the average of those row and column values? Is this strictly a kernel? i.e. are the corner pixels of a 3x3 square kernel taken into account?
If we were to think about a 5x5 kernel or a 7x7, etc., would we have to do the calculation in a different way? There are scipy functions to convolve a 2D dataset with a kernel, such as scipy.signal.convolve2d().
@mikesmic do you think we can adapt the current calcKam() function, or should I create a new function using something like convolve2d that is flexible for an nxn kernel?
Thanks, Allan
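As an illustration of the convolve2d idea (and of the distance-weighting suggestion above), here is a minimal sketch on a plain 2D scalar map rather than the quaternion data handled in ebsd.py; the kernel_average function and its Gaussian option are hypothetical, not part of the existing code:

```python
import numpy as np
from scipy.signal import convolve2d

def kernel_average(scalar_map, kernel_size=3, gaussian_sigma=None):
    """Average a 2D scalar map over an NxN kernel.

    With gaussian_sigma set, neighbours are weighted by their distance
    from the kernel centre; otherwise every point gets equal weight.
    """
    n = kernel_size
    if gaussian_sigma is None:
        kernel = np.ones((n, n))
    else:
        half = n // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        kernel = np.exp(-(x**2 + y**2) / (2 * gaussian_sigma**2))
    kernel /= kernel.sum()  # normalise so the output stays an average
    return convolve2d(scalar_map, kernel, mode='same', boundary='symm')

# e.g. smooth a random test map with a 5x5 equal-weight kernel
test_map = np.abs(np.random.randn(100, 100))
smoothed = kernel_average(test_map, kernel_size=5)
```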
The current function has not been written in a particularly generic way (for execution speed it calculates along entire rows and columns at a time), but it could be adapted easily to calculate KAM with any size of square kernel. Currently it just takes the 4 nearest neighbours, so no diagonal points. You could write a separate function that takes in a 2D array of the kernel shape (which could also include weights) and calculates KAM based on this at each point. This would have to be done at each point individually, however, so it might be quite slow for large maps.
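A rough sketch of the separate, per-point function described above, assuming a misorientation routine already exists; misori_func is a hypothetical placeholder, and the explicit loops make the expected slowness on large maps clear:

```python
import numpy as np

def kam_with_kernel(quat_map, kernel, misori_func):
    """KAM at each point using an arbitrary 2D array of kernel weights.

    quat_map    : (H, W, 4) array of orientation quaternions
    kernel      : odd-sized 2D weight array; zero entries are skipped
    misori_func : placeholder callable giving the misorientation angle
                  (degrees) between two quaternions
    """
    h, w = quat_map.shape[:2]
    oy, ox = kernel.shape[0] // 2, kernel.shape[1] // 2
    kam = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            total, weight_sum = 0.0, 0.0
            for di in range(-oy, oy + 1):
                for dj in range(-ox, ox + 1):
                    wgt = kernel[di + oy, dj + ox]
                    ni, nj = i + di, j + dj
                    if wgt == 0 or (di == 0 and dj == 0):
                        continue  # zero weight, or the centre pixel itself
                    if not (0 <= ni < h and 0 <= nj < w):
                        continue  # off the edge of the map
                    total += wgt * misori_func(quat_map[i, j], quat_map[ni, nj])
                    weight_sum += wgt
            kam[i, j] = total / weight_sum if weight_sum > 0 else 0.0
    return kam
```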
Do you guys know the local gradient definition that Kamaya proposed?
http://www.sciencedirect.com/science/article/pii/S0304399111000726#f0025
I have uploaded a new notebook to my repository called calculateKamNxN.ipynb. There is a function in there to calculate KAM, ignoring large local misorientations and with a flexible NxN-sized kernel. I haven't included weights (other than a weight of zero for large misorientations) and I haven't looked at Kamaya's local gradient, but I will look into including both of these. The function works by finding the local misorientations, ignoring the large ones and then finding the average orientation of the remaining pixels. Then the misorientation between the pixel and the local average is obtained and stored in a KAM map array. @mikesmic would you check that I've done the misorientation calculation properly (i.e. the Einstein summations)? I've not worked with quaternions in this way before. Maybe we could have a chat about quaternion maths at some point? Example images of a 3x3 and a 9x9 kernel with overlaid GBs are attached here...
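For reference, the misorientation angle between two unit quaternions can be written as an Einstein summation over their components; this sketch ignores crystal symmetry, so it is only an approximation to what the notebook or calcKam() would actually need:

```python
import numpy as np

def misori_angle(quats_a, quats_b):
    """Misorientation angle (degrees) between two (..., 4) quaternion arrays.

    The scalar part of the misorientation quaternion between two unit
    quaternions equals their 4D dot product, written here with einsum.
    Crystal symmetry is not applied, so this is only approximate.
    """
    dots = np.abs(np.einsum('...i,...i->...', quats_a, quats_b))
    return np.degrees(2 * np.arccos(np.clip(dots, 0.0, 1.0)))

# e.g. the angle between a quaternion and a copy of itself is 0
q = np.array([1.0, 0.0, 0.0, 0.0])
print(misori_angle(q, q))  # 0.0
```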
I've had a look at the notebook and I'm a little confused. I can understand what you're trying to do with the first function (KAM without ignoring large values), but you can't convolve the quaternion components like you have. Addition of two quaternions gives the average of the two orientations. I'm lost with your second function, though; you seem to have done this in a different way, and the maps it has produced look reasonable. Are you in today to talk about this? I can run you through some quaternion maths as well; I have a crib sheet with the important stuff on it.
Thanks for taking a look, I'll come to see you this afternoon about quat maths. In the first part of the notebook I wanted to average the quat coefficients within a kernel. I did this by separating the quat data into four maps of the separate quat coefficients and then convolving each with a kernel that determines the local mean of that coefficient. Note that I find the local mean of the individual quat coefficients and not of the full orientation. I then recombine the coefficient data to get a local mean orientation map for a given kernel size. The kamMap is then generated by finding the misorientation between the original data and the local mean kernel data. It's all in arrays, so it's vectorised, which makes the calculation pretty quick.

In the second part of the notebook I had to do things differently to exclude large misorientations - I couldn't vectorise it. I can see how it might be a little confusing and will chat to you this afternoon about how to make it clearer. I will then update the notebook and highlight any significant changes in this thread. Allan
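A condensed sketch of the per-component convolution described above, assuming a (H, W, 4) quaternion array; note the caveat raised earlier that convolving raw components glosses over the q/-q sign ambiguity and crystal symmetry, so this is only a rough illustration of the notebook's approach:

```python
import numpy as np
from scipy.signal import convolve2d

def local_mean_orientation(quat_map, kernel_size=3):
    """Approximate local mean orientation by convolving each quaternion
    component map separately, then renormalising.

    Ignores the q/-q sign ambiguity and crystal symmetry, so it is only
    a rough illustration, not the method from the notebook verbatim.
    """
    kernel = np.ones((kernel_size, kernel_size)) / kernel_size**2
    mean = np.stack(
        [convolve2d(quat_map[..., c], kernel, mode='same', boundary='symm')
         for c in range(4)],
        axis=-1)
    # renormalise so every pixel is a unit quaternion again
    return mean / np.linalg.norm(mean, axis=-1, keepdims=True)
```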
After speaking with @mikesmic yesterday I realised that I was calculating the KAM in a strange way - finding local average orientations in a kernel and then the misorientation of a pixel with respect to that average. I have now changed the function so that it calculates the misorientation of a pixel with respect to its neighbours in a kernel and then simply takes the average of those misorientations. Large misorientations are given a value of NaN, as is the central pixel's misorientation with respect to itself. The mean misorientation is therefore calculated with np.nanmean(). The result is very similar to before, but it's quicker. When we find the KAM in this way we can easily get influences from neighbouring grains for large kernel sizes. To eliminate this effect, I will try this on grain objects as opposed to the full maps.
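A vectorised sketch of this updated scheme, assuming a (H, W, 4) quaternion map and neglecting crystal symmetry; np.roll wraps around the map edges, which a real implementation would need to handle properly:

```python
import numpy as np

def kam_nanmean(quat_map, kernel_size=3, max_misori=10.0):
    """KAM as the nanmean of misorientations to neighbours in an NxN kernel.

    Misorientations above max_misori (degrees) are set to NaN, so
    np.nanmean ignores them; the centre pixel is skipped entirely.
    """
    half = kernel_size // 2
    layers = []
    for di in range(-half, half + 1):
        for dj in range(-half, half + 1):
            if di == 0 and dj == 0:
                continue  # a pixel's misorientation with itself is excluded
            shifted = np.roll(np.roll(quat_map, di, axis=0), dj, axis=1)
            # scalar part of the misorientation quaternion = 4D dot product
            dots = np.abs(np.einsum('ijk,ijk->ij', quat_map, shifted))
            angles = np.degrees(2 * np.arccos(np.clip(dots, 0.0, 1.0)))
            angles[angles > max_misori] = np.nan  # likely a neighbouring grain
            layers.append(angles)
    return np.nanmean(np.stack(layers), axis=0)
```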
You shouldn't have to do it per grain if you are ignoring large misorientations, surely? As long as the threshold value is <= the one chosen during grain detection.
Ah yes, good point - as long as the maximum misorientation value for the KAM is less than the boundary definition used when loading the EBSD data, this won't be a problem, i.e. for data with maps[i].findBoundaries(boundDef = 10) the maximum misorientation value for the KAM has to be < 10.
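In code, the constraint is just that the KAM cut-off stays below the boundary definition; the variable names here are illustrative only:

```python
boundDef = 10      # degrees, as passed to maps[i].findBoundaries(boundDef=10)
kamMaxMisori = 5   # degrees; must be < boundDef so other grains never contribute
assert kamMaxMisori < boundDef
```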