
Binomial distribution for distribution_compare #48

Closed
schumannd opened this issue Jan 26, 2018 · 8 comments

Comments

@schumannd

schumannd commented Jan 26, 2018

My goal is to find the point where scale-free networks become indistinguishable from random (non-scale-free) networks.

I would expect something like the binomial distribution to be implemented for comparison using distribution_compare().
Is there a specific reason it wasn't implemented?

For example, I tried the following code to distinguish between an obviously scale-free network and an obviously non-scale-free network (both with similar numbers of nodes/edges):

import networkx as nx
import powerlaw

non_sf_graph = nx.gnp_random_graph(10000, 0.002)
sf_graph = nx.barabasi_albert_graph(10000, 10)

# degree() returns a view in networkx >= 2.0, so extract the degree values
fitpl = powerlaw.Fit([d for _, d in sf_graph.degree()])
fitnpl = powerlaw.Fit([d for _, d in non_sf_graph.degree()])

for dist in fitpl.supported_distributions.keys():
    print(dist)
    print(fitpl.distribution_compare('power_law', dist))
    print(fitnpl.distribution_compare('power_law', dist))

The output suggested that none of the implemented distributions provided a tool to discern between a preferential attachment model and a GNP random graph:

lognormal
(-0.23698971255249646, 0.089194415705275421)
(-20.320811335334504, 3.9097599268295484e-92)
exponential
(511.41420648854108, 7.3934851812182895e-23)
(24.215231521373582, 3.7251410948652104e-08)
truncated_power_law
(3.3213949937049847e-06, 0.99794356568650555)
(3.1510369047360598e-07, 0.99936659460444144)
stretched_exponential
(16.756797270053454, 1.6505119872120265e-05)
(8.7110005915424153, 8.7224098659112012e-05)
lognormal_positive
(30.428201968820289, 1.7275238929002278e-07)
(6.7992592335974233, 5.4945477823229749e-06)

I am asking as I am no statistics expert, and I might not see the significance of all the available distributions. But they seem to fail this basic example. I would be happy to help implement a distribution that successfully fits a random GNP network. Or are there some limitations which make this hard/impossible?

@jeffalstott
Owner

Thanks for using powerlaw, David!

Your intuitions are good. However, try plotting the two degree distributions so you see the data you're asking the statistical test to deal with. In particular, try plotting both the PDF and the CCDF, then overlaying powerlaw's best-fit lines (examples of how to do this are in the paper).
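For concreteness, a minimal sketch of that kind of overlay plot (matplotlib assumed; fitpl is the Fit object from your snippet):

import matplotlib.pyplot as plt

# Empirical PDF and CCDF of the degree data, with powerlaw's
# fitted power law overlaid on the same (log-log) axes.
fig, (ax_pdf, ax_ccdf) = plt.subplots(1, 2, figsize=(10, 4))

fitpl.plot_pdf(ax=ax_pdf, color='b')
fitpl.power_law.plot_pdf(ax=ax_pdf, color='b', linestyle='--')
ax_pdf.set_title('PDF')

fitpl.plot_ccdf(ax=ax_ccdf, color='b')
fitpl.power_law.plot_ccdf(ax=ax_ccdf, color='b', linestyle='--')
ax_ccdf.set_title('CCDF')

plt.show()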

@schumannd
Author

schumannd commented Jan 26, 2018

Here are the plotted CCDF (1) and PDF (2) of sf_graph (blue) and non_sf_graph (red).

They don't really seem that similar. Are you saying that detecting a binomial degree distribution is hard/impossible just from the PDF or CCDF?

(1) [image: CCDF of both degree distributions]
(2) [image: PDF of both degree distributions]

@schumannd
Author

Also, when looking at graphs I plotted myself, both the PDF and CCDF of the scale-free and random networks seem easily discernible:
ccdf_loglog.pdf
ccdf.pdf
degdist_loglog.pdf
degdist.pdf

@jeffalstott
Owner

Great! Now plot the fitted power law for fitnpl, i.e. fitnpl.power_law.plot_pdf(), IIRC (which I may not).

@schumannd
Author

schumannd commented Jan 26, 2018

I don't think this is necessary. I can see that a statistical test could fit a power law to this graph.

My question, however, is why we don't use a binomial distribution for comparison, which would fit the graph inarguably better than a power law; it would be close to a perfect fit on all data points. Look e.g. at ccdf_loglog.pdf: for a power law to fit that curve, it would have to choose a pretty high xmin and would still not hit the data points better than the actual original distribution (binomial) would.

Is it hard to implement? Is the binomial distribution not well defined for the CCDF? Does the exponential / stretched exponential distribution already cover it? Or was it simply not deemed useful to implement?

@jeffalstott
Owner

Visualizing the fitted power law for fitnpl should show that the power law is only actually fitted to the extreme tail of the distribution. This is the important insight: by default, powerlaw finds the optimal value of xmin at which to cut off the body of the distribution, where "optimal" means the resulting tail is the one best described by a power law. You will observe that fitnpl.xmin is different from fitpl.xmin (also accessible as fitnpl.power_law.xmin and fitpl.power_law.xmin).

So, essentially, you're taking the tail of an exponential distribution, chopping off the tail that's near vertical, fitting a power law to that, and then asking if that near-vertical tail is better described by a power law or an exponential (or the other functional forms you tested). This is not what you want. The ability to notice undesirable values for xmin is why printing xmin is in the Basic Usage example.

tl;dr: Set xmin=1 when you call Fit. Does that yield the behavior that was expected?
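For concreteness, something like the following (reusing the graphs from your snippet):

# Pin the cutoff at the smallest degree instead of letting powerlaw
# push it out into the extreme tail, then redo the comparison.
fitpl_full = powerlaw.Fit([d for _, d in sf_graph.degree()], xmin=1)
fitnpl_full = powerlaw.Fit([d for _, d in non_sf_graph.degree()], xmin=1)

print(fitpl.xmin, fitnpl.xmin)  # the cutoffs the earlier fits chose on their own
print(fitpl_full.distribution_compare('power_law', 'exponential'))
print(fitnpl_full.distribution_compare('power_law', 'exponential'))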

As for no binomial distribution being implemented: It would be very welcome! powerlaw has a happy history of accepting pull requests from the community that implemented other distributions, like the stretched exponential.
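As a starting point, here is a sketch (plain numpy/scipy, not powerlaw's distribution interface) that fits the binomial actually generating GNP degrees, Binomial(n-1, p), and overlays it on the empirical PDF:

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

degrees = np.array([d for _, d in non_sf_graph.degree()])
n_trials = non_sf_graph.number_of_nodes() - 1  # each node can link to n-1 others

# Maximum-likelihood estimate of the binomial success probability
p_hat = degrees.mean() / n_trials

# Overlay the fitted binomial PMF on the empirical degree PDF
fitnpl.plot_pdf(color='r')
k = np.arange(max(degrees.min(), 1), degrees.max() + 1)
plt.plot(k, stats.binom.pmf(k, n_trials, p_hat), 'k--')
plt.show()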

@schumannd
Author

schumannd commented Jan 26, 2018

Thanks for the elaborate response.

Unfortunately, setting xmin=1 does not solve the problem, as then the power-law tail of the scale-free network won't be detected.

But apparently the binomial distribution is not needed, as the exponential already fits very well for most small xmin.

I guess the issue can then be closed.


For the example I gave, I observe the following behavior when comparing 'power_law' vs. 'exponential' for different xmin (a sweep along the lines of the sketch below):

  • xmin 1 to 7: the scale-free (BA) graph is misclassified as exponential (p-val < 0.0001)
  • xmin 8 to 22: both graphs are classified correctly (p-val < 0.001) <= sweet spot!
  • xmin 22 to 26: the GNP graph's classification is ambiguous
  • xmin over 26: the GNP graph is misclassified as scale-free (p-val < 0.01)
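The sweep itself looks roughly like this (a sketch; graphs as in the original snippet):

import powerlaw

degrees_pl = [d for _, d in sf_graph.degree()]
degrees_npl = [d for _, d in non_sf_graph.degree()]

# Compare power law vs. exponential at a range of fixed cutoffs
for xmin in range(1, 31):
    r_pl, p_pl = powerlaw.Fit(degrees_pl, xmin=xmin).distribution_compare('power_law', 'exponential')
    r_npl, p_npl = powerlaw.Fit(degrees_npl, xmin=xmin).distribution_compare('power_law', 'exponential')
    print(xmin, (r_pl, p_pl), (r_npl, p_npl))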

So in the end it comes down to picking a good xmin. Are there best practices for picking xmin in this case? Surely there must be some measure of what constitutes a "power law distribution"? If any tail can be fitted to a power law, that defeats the purpose. But I guess this is not within the realm of the powerlaw package, which simply provides the tools.

Thanks for all the help!

@jeffalstott
Owner

Glad to help!

A few things to consider:

  • At xmin of 26, the GNP graph has virtually no data left.
  • Getting good sampling from a power law distribution is hard. You generated 10,000 nodes, but the scale-free network still has a very ragged PDF above k=20 or so.
  • If you consider the entirety of a distribution, the tail is by definition not much of the PDF, and so that data will not contribute as much to the likelihood function when fitting an equation to the full data. Thus, at small xmin we would expect the power-law tail not to contribute much. This is particularly true if we don't have good sampling of the power law, which will be particularly bad if the power law is steep (has a large negative alpha). BA networks in the limit yield a degree distribution with alpha = -3. The "scale free" properties of power laws only start at alpha = -3: if |alpha| < 3, the distribution's variance is undefined; if |alpha| < 2, the distribution's mean is undefined. That is to say, we only get the wacky "scale free" properties of power laws if |alpha| is small enough, which is when the tail is heavy enough, which is when we'll have good sampling of the tail, which is when we'll be most readily able to identify a power law. If |alpha| > 3, it's indeed hard to identify a power law, but there's also little point. Steep power laws aren't cool power laws.
  • Identifying tails is philosophically hard. Ideally, you have some semantic understanding of the system that generated the data, which will inform where xmin "ought" to be (or where an xmax ought to be, which is described in the paper). Outside of that, there are no right answers (or at least there weren't when I was in this game several years ago). The original Clauset et al. paper that came up with powerlaw's procedure for identifying an xmin was trying to give the strongest possible support for a power law in a given dataset, in part so that they could then show that even then power laws weren't supported in a bunch of empirical datasets. (A sketch of how to inspect that procedure follows below.)
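One way to inspect that procedure is to plot the KS distance of the power-law fit at each candidate cutoff; a sketch (fit.xmins and fit.Ds are populated when Fit searches for xmin itself; attribute names may vary across powerlaw versions):

import matplotlib.pyplot as plt
import powerlaw

fit = powerlaw.Fit([d for _, d in sf_graph.degree()])

# When xmin is not supplied, Fit stores each candidate cutoff (fit.xmins)
# and the KS distance of the power-law fit starting there (fit.Ds);
# the chosen fit.xmin minimizes that distance.
plt.plot(fit.xmins, fit.Ds)
plt.axvline(fit.xmin, linestyle='--')
plt.xlabel('candidate xmin')
plt.ylabel('KS distance D')
plt.show()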
