How should we refine the interrogation grid each iteration?
For a starting grid with spacing 2^n px, halving the spacing each refinement requires n-1 refinements to reach a minimum window spacing of 2 px (or n-2 if we enforce 4 px as the minimum spacing).
For n = 7, i.e. 128 px, even 6 iterations is quite a lot. Furthermore, we have seen that we can get very close to the final solution in just 2 iterations.
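The refinement count above can be sanity-checked with a small sketch. `halvings_needed` is a hypothetical helper, not part of any existing codebase:

```python
import math

def halvings_needed(start_px: int, min_px: int) -> int:
    """Number of 50% refinements to take the grid spacing
    from start_px down to min_px (both powers of two)."""
    return int(math.log2(start_px // min_px))

# 2^7 = 128 px starting spacing:
print(halvings_needed(128, 2))  # 6  (n - 1 refinements for a 2 px minimum)
print(halvings_needed(128, 4))  # 5  (n - 2 refinements for a 4 px minimum)
```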
In this case, how do we get close to the minimum resolution as quickly as possible without 'oversampling'?
A standard 50% reduction would take the WS from 256 → 128 → 64 → 32 over four passes, which seems very gradual.
It would still be reasonable to go 256 → 96 → 32 → 16, a bit more than doubling the number of samples in x and y each time.
More aggressive would be 256 → 64 → 16, i.e. quadrupling the number of samples in both x and y each time.
This suggests it would be reasonable to apply two refinements (i.e. halve the spacing twice) per iteration if need be.
This will result in some 'oversampling' in certain areas, but I think this would be countered by the fact that those areas accelerate the analysis. Furthermore, some areas that might be 'unnecessarily' sampled would end up being sampled anyway, so it isn't a complete waste.
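The schedules above can be generated with a small greedy sketch. `refinement_schedule` is a hypothetical helper for illustration, assuming power-of-two spacings; it is not an existing API:

```python
def refinement_schedule(start_px: int, min_px: int,
                        max_halvings_per_iter: int = 2) -> list[int]:
    """Greedy schedule of window spacings: halve the spacing up to
    max_halvings_per_iter times per iteration until min_px is reached."""
    schedule = [start_px]
    spacing = start_px
    while spacing > min_px:
        for _ in range(max_halvings_per_iter):
            if spacing > min_px:
                spacing //= 2
        schedule.append(spacing)
    return schedule

print(refinement_schedule(256, 16, max_halvings_per_iter=1))  # [256, 128, 64, 32, 16]
print(refinement_schedule(256, 16, max_halvings_per_iter=2))  # [256, 64, 16]
```

With two halvings per iteration the 256 → 64 → 16 schedule falls out directly, reaching the minimum spacing in two refinement iterations instead of four.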