# Consistent ways to euclidify dissimilarities #179
jarioksa pushed commits referencing this issue:

- Jun 4, 2016: "now fitted & residuals are consistent in capscale and dbrda which was one of the concerns in issue #179 in github."
- Jun 4, 2016: "now stressplot of sqrt.dist and Lingoes/Cailliez adjusted are consistent, and stressplots for dbrda and capscale are consistent. This was one of the concerns in issue #179."
- Jun 4, 2016: (commit message not captured)
- Jun 5, 2016: "solves problems in github issue #179: 1) sqrt.dist= and add= adjustments are regarded as internal to the method and fitted etc. remove these in dbrda & capscale and also when showing observed statistics in stressplot etc. 2) functions dbrda, capscale, varpart, adonis2 and betadisper all have similar sqrt.dist= and add= arguments."

These issues were solved with 5b7fb1e.
We have dissimilarity-based methods that can handle negative eigenvalues in a correct and consistent way: `dbrda`, `capscale`, `wcmdscale`, `adonis`, `adonis2`, `betadisper` and `varpart`. Some of these functions also provide options to euclidify dissimilarities so that there are no negative eigenvalues. However, not all functions do this consistently. Another aspect is that even when functions have consistent euclidification, support functions handle the results inconsistently.

## Consistent Euclidification
We use two ways to euclidify dissimilarities: the Lingoes or Cailliez adjustment of the dissimilarities (argument `add`) and taking the square root of the dissimilarities (argument `sqrt.dist`). The Lingoes and Cailliez adjustments are guaranteed to produce Euclidean dissimilarities, whereas `sqrt.dist` has no such guarantee, but still seems to work for the most commonly used ecological dissimilarities. Currently the functions `dbrda`, `capscale` and `wcmdscale` have the Lingoes and Cailliez adjustments, but `adonis2` and `betadisper` do not. They should be made consistent. The internal `sqrt.dist` is available in `dbrda`, `capscale` and `varpart`, but not in the others. It should be added at least to `adonis2` and `betadisper`; I am not sure about `wcmdscale`. Moreover, I want to leave `adonis` as a legacy function and only have these new features in `adonis2`.

## Consistent Handling of Euclidified Dissimilarities
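To make the two euclidification devices described above concrete, here is a minimal numerical sketch. It is in Python/NumPy rather than R so that it is self-contained; the function names are mine, not vegan's. It shows that a dissimilarity matrix with negative Gower eigenvalues (i.e., a non-Euclidean one) becomes Euclidean after the Lingoes adjustment (vegan's `add = "lingoes"`), and in this case also after taking the square root (`sqrt.dist = TRUE`):

```python
import numpy as np

def gower_eigvals(d):
    """Eigenvalues of the Gower-centred matrix -(1/2) J (d^2) J used in PCoA.

    Negative eigenvalues signal that d is not Euclidean-embeddable."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    return np.linalg.eigvalsh(-0.5 * j @ (d ** 2) @ j)

def lingoes(d):
    """Lingoes adjustment: d'_ij = sqrt(d_ij^2 + 2c) for i != j,
    where c is minus the smallest Gower eigenvalue of d."""
    c = -gower_eigvals(d).min()
    d2 = d ** 2 + 2.0 * c
    np.fill_diagonal(d2, 0.0)                # keep zero self-dissimilarity
    return np.sqrt(d2)

# A 3-point dissimilarity matrix violating the triangle inequality,
# hence non-Euclidean: d(1,3) = 3 > d(1,2) + d(2,3) = 2.
d = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0],
              [3.0, 1.0, 0.0]])

print(gower_eigvals(d).min())                # negative: not Euclidean
print(gower_eigvals(lingoes(d)).min())       # ~0: euclidified
print(gower_eigvals(np.sqrt(d)).min())       # nonnegative here too
```

As the text above notes, only the Lingoes (and Cailliez) adjustment carries a guarantee; the square root merely happens to euclidify many common ecological dissimilarities, and it happens to work for this small example.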
The euclidified dissimilarities should be handled consistently when users access result objects. There are several inconsistencies. I have just started to search for these, and I do not know how much work there is ahead. We should first decide on a policy and then implement it. Here are some examples of inconsistencies:
The plot (`stressplot`) of a Lingoes-adjusted result uses the unadjusted input dissimilarities on the x-axis (and for the red line of perfect fit) and shows the adjusted values only for the ordination distances (points above the 1:1 red line), whereas with `sqrt.dist` the observed dissimilarities are shown after the adjustment. This is a policy issue, but I think both should regard the adjustment as an internal operation: the x-axis should show the observed dissimilarities as in the input, and the y-axis the ordination distances after the adjustment. The treatment is also inconsistent among methods:
In `capscale` (object `mlin`) we remove the Lingoes adjustment and return estimates of the observed dissimilarities, whereas with `dbrda` we return the Lingoes-adjusted values. This is a policy issue, and we can argue for either case. However, I argue for regarding these adjustments (additive constant, square root) as internal and for always returning data similar to the input. That is, with non-Euclidean dissimilarities we should get back non-Euclidean dissimilarities instead of euclidified ones.
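The policy argued for here (treat `add` and `sqrt.dist` as internal, and return fitted dissimilarities on the scale of the input) amounts to inverting the adjustments before results are returned. A hypothetical sketch in Python/NumPy follows; the helper name and arguments are mine and are not vegan's internals, and it assumes the transformations were applied in the order square root first, Lingoes constant second:

```python
import numpy as np

def fitted_to_input_scale(fitted_d, add_constant=0.0, sqrt_dist=False):
    """Back-transform fitted dissimilarities to the scale of the input data.

    Undoes the internal adjustments in reverse order:
    Lingoes   (d'_ij^2 = d_ij^2 + 2c for i != j) -> subtract 2c from squares;
    sqrt.dist (d' = sqrt(d))                     -> square the values.
    """
    d2 = fitted_d ** 2
    if add_constant:
        d2 = d2 - 2.0 * add_constant
        np.fill_diagonal(d2, 0.0)
    d2 = np.clip(d2, 0.0, None)   # guard against tiny negatives from rounding
    d = np.sqrt(d2)
    return d ** 2 if sqrt_dist else d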