I am confused about the meaning of the p-value in the software versus the paper.
According to the software documentation, a result is significant when p-value < 0.05 (the usual convention in statistics).
However, the footnote on page 17 of the paper "Power Law Distributions in Empirical Data" states that they use the p-value as a measure of the hypothesis they are trying to verify; hence high values, not low, are "good".
So, if I use distribution_compare(A, B) and get R > 0 with p-value > 0.1, does A fit the data better?
Thank you.
Read
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0085777#s3
Specifically, see the section on bootstrapping (what Clauset et al. recommend, where a high p indicates a power law) vs. comparing between distributions (what I recommend, where a low p indicates that one of the two distributions compared is a significantly better fit than the other).