Package metrics & stats #17

Open
jpivarski opened this issue Feb 27, 2023 · 19 comments

jpivarski commented Feb 27, 2023

From the Feb 27 meeting: "How do we collect metrics and package stats?"

pllim commented Feb 27, 2023

Is this related to #12 or something different?

jpivarski (author) commented

Different, I think (unless DevStats would be nested within this one). I'm putting up new topics that were brought up in the meeting two hours ago; I haven't yet copied over all the points that were mentioned or the people who are interested.

lagru commented Apr 7, 2023

I would like to point out this blog post: Measuring API usage for popular numerical and scientific libraries. Perhaps the results could be updated or even improved during the summit.

Thanks to @jni for pointing it out to me. 🙏

(Edit: Fixed the link 😅)

jjerphan commented Apr 7, 2023

Thanks for pointing this out, @lagru. Did you want to share this link instead? 🙂

jpivarski (author) commented

Wow! This is exactly what I'm working on for a physics conference, and I was planning on following up on these techniques at the Scientific Python Summit. I just didn't know that Christopher Ostrouchov has already done it, talked about it at SciPy 2019, and provided a tool.

Christopher has already addressed this problem:

import numpy

def foobar(array):
    # the NumPy call happens inside user code, not at the call site
    return array.transpose()

a = numpy.array(...)  # placeholder contents, as in the original

a.transpose()  # direct method call on a NumPy array
foobar(a)      # the same API call, reached through a wrapper function

and I'll look at his code to see how he did it or use that code directly.
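
To make the static-analysis problem concrete, here is a minimal sketch (my own illustration, not python-api-inspect's actual code) that walks a file's AST and tallies method-style calls by name; mapping each receiver back to the library that defines it is the harder part that a real tool has to handle.

import ast

SOURCE = """
import numpy

def foobar(array):
    return array.transpose()

a = numpy.array([[1, 2], [3, 4]])
a.transpose()
foobar(a)
"""

class CallCounter(ast.NodeVisitor):
    def __init__(self):
        self.counts = {}

    def visit_Call(self, node):
        # Tally calls of the form <receiver>.<method>(...), e.g. a.transpose().
        if isinstance(node.func, ast.Attribute):
            self.counts[node.func.attr] = self.counts.get(node.func.attr, 0) + 1
        self.generic_visit(node)

counter = CallCounter()
counter.visit(ast.parse(SOURCE))
print(counter.counts)  # {'transpose': 2, 'array': 1}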

On the tool's GitHub page, he notes

NOTE: this dataset is currently extremely biased as we are parsing the top 4,000 repositories for few scientific libraries in data/whitelist. This is not a representative sample of the python ecosystem nor the entire scientific python ecosystem. Further work is needed to make this dataset less biased.

In my case, I've been asking these questions about a specific sub-community, nuclear and high-energy physicists, and I have a trick for that (PDF page 29 of this talk): one major experiment, CMS, requires its users to fork a particular GitHub repo. From that, I can get a set of GitHub users who are all CMS physicists, and (here is where I wave my hands) I assume that the CMS experiment is representative of the whole field. This is 2847 GitHub users (CMS members over a 10-year timespan) and 22961 non-fork repositories.
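
As a sketch of that fork trick with the GitHub REST API (the repository name and token below are placeholders, and a real run needs to respect rate limits):

import requests

TOKEN = "ghp_..."            # placeholder personal access token
HEADERS = {"Authorization": f"token {TOKEN}"}
REPO = "SOME_ORG/SOME_REPO"  # placeholder for the repo that CMS members must fork

def get_all(url):
    """Follow GitHub's page-numbered pagination until an empty page comes back."""
    page, results = 1, []
    while True:
        r = requests.get(url, headers=HEADERS, params={"per_page": 100, "page": page})
        r.raise_for_status()
        batch = r.json()
        if not batch:
            return results
        results.extend(batch)
        page += 1

# 1. Everyone who forked the experiment's repo is (approximately) a CMS member.
users = {fork["owner"]["login"] for fork in get_all(f"https://api.github.com/repos/{REPO}/forks")}

# 2. Collect each user's non-fork repositories for later source analysis.
non_fork_repos = [
    repo["full_name"]
    for login in users
    for repo in get_all(f"https://api.github.com/users/{login}/repos")
    if not repo["fork"]
]

print(len(users), "users,", len(non_fork_repos), "non-fork repositories")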

I also have another technique I've been trying out: using the GitHub archive in BigQuery to find a set of GitHub users who have ever commented on the ROOT project, which occupies a central place in our ecosystem. Then I would look up their non-fork repos in the same way.
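
Roughly, that BigQuery step would look like the sketch below, assuming the public githubarchive dataset's layout (type, repo.name, actor.login) and using root-project/root as the ROOT repository; both assumptions would need checking before trusting the numbers.

from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT DISTINCT actor.login AS login
FROM `githubarchive.year.2022`
WHERE type = 'IssueCommentEvent'
  AND repo.name = 'root-project/root'
"""

root_commenters = {row.login for row in client.query(query).result()}
print(len(root_commenters), "GitHub users commented on ROOT issues or PRs in 2022")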

But Christopher has solved a lot of the other issues, and I'm going to use as much of his work, with credit, as I can. Thanks for the pointer!

pllim commented Apr 7, 2023

Re: #17 (comment)

@lagru, being able to see updated stats for https://labs.quansight.org/blog/python-library-function-usage (thanks for the correct link, @jjerphan), and even compare different years, would be nice. 😸

I wonder if there are any big changes caused by, say, the pandemic. 💭

jpivarski (author) commented

Absolutely. Look at this:

It's a Google Trends search that I use to see how "data analysis" is associated with Java, R, and Python (Python overtook R's dominance) and "machine learning" (Python has always been dominant in the modern ML era). I've been making this plot for several years, starting before the pandemic, and look at that gap!

Interestingly, the pandemic affected Google searches for Python much more than for R. My hypothesis is that Python has a higher industry/academic ratio than R, and that industry data analysis jobs were more affected by the pandemic than academic ones. I don't have anything quantitative backing up that interpretation.

jpivarski (author) commented

Oh, but you were asking about it in the context of python-library-function-usage, not just any metric.

I'd be a little surprised if the pandemic changed how people use APIs. It would surely change absolute rates, such as the Google searches, but given that someone is using e.g. NumPy, their fraction of np.array versus np.matrix calls wouldn't change much, would it?

For my part, I usually make plots in the time domain. One of the specific questions I'll be asking about ROOT/physics usage is how often people use TLorentzVector (deprecated in 2005, but still widely used) versus PxPyPzEVector (and its other replacements). That will definitely be a time-based plot. I'd want to see if there's any trend away from the legacy class; if there isn't, I think it would be a lesson that deprecation without consequences (never actually removing the class) doesn't change user behavior.
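
Roughly, the plot I have in mind would be something like this, with made-up counts standing in for the real per-year tallies:

import matplotlib.pyplot as plt

years = [2016, 2018, 2020, 2022]
legacy = {2016: 900, 2018: 850, 2020: 800, 2022: 780}  # TLorentzVector uses (hypothetical)
modern = {2016: 50, 2018: 120, 2020: 210, 2022: 300}   # PxPyPzEVector etc. (hypothetical)

fraction_legacy = [legacy[y] / (legacy[y] + modern[y]) for y in years]

plt.plot(years, fraction_legacy, marker="o")
plt.xlabel("year")
plt.ylabel("fraction of four-vector uses that are TLorentzVector")
plt.ylim(0, 1)
plt.show()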

betatim commented Apr 20, 2023

(commenting because I can't assign myself to this issue)

lagru self-assigned this Apr 20, 2023
MridulS self-assigned this Apr 20, 2023
lwasser self-assigned this Apr 20, 2023

lwasser commented Apr 20, 2023

I am super interested in this. I started a small module (that needs a lot of work) to parse our packages and get some basic stats via the GitHub API. We of course have a very specific use case with reviews and such, but if there were some tooling around getting and storing other types of stats, I might pull that into our workflow rather than continue to develop that component myself! I was thinking it would be super cool to have a page of stats for each package in our ecosystem: think Snyk stats, but with a bit more depth, potentially?

Here is a quick snapshot of what I'm bringing down (statically) ... no time series right now (which would be super cool).
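
Roughly, the static snapshot part is just a handful of REST calls; this is an illustrative sketch (not the actual module), and the token and repo list are placeholders.

import requests

TOKEN = "ghp_..."                # placeholder personal access token
HEADERS = {"Authorization": f"token {TOKEN}"}
REPOS = ["someorg/somepackage"]  # placeholder list of reviewed packages

snapshot = {}
for full_name in REPOS:
    r = requests.get(f"https://api.github.com/repos/{full_name}", headers=HEADERS)
    r.raise_for_status()
    data = r.json()
    snapshot[full_name] = {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "last_push": data["pushed_at"],
    }

print(snapshot)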

tupui commented Apr 20, 2023

You might like this https://github.com/nschloe/github-trends

tacaswell commented

https://www.coiled.io/blog/how-popular-is-matplotlib seems on-topic for this as well. We are waiting to find out if we got an SDG to extend this work.

While it is looking at mpl specifically, I think it is a good proxy for general adoption.

lwasser commented Apr 21, 2023

Wow, that is a really great repo, @tupui!! Also look at the mpl growth over time, @tacaswell! We'd much rather adopt something that others are using vs. build something ourselves. Super excited for this discussion in May!

Carreau commented Apr 21, 2023

Re: https://www.coiled.io/blog/how-popular-is-matplotlib

FYI, napari is now, I believe, also including a watermark in the images it generates.

seberg self-assigned this May 2, 2023

stefanv commented May 8, 2023

@lwasser I was wondering how this tool differs from the data gathering done in the devstats. Is this something we can combine efforts on?

lwasser commented May 8, 2023

@stefanv I'd LOVE to combine efforts. I can show you what we have. Some of what I'm parsing is GitHub issues, to get package names, reviews, etc., but other stuff I'm parsing to get stars and other metrics that I bet you are parsing for as well. What can I create to make this potential collab more efficient? We got some people working on this for us during our last sprints as well, but at the end of the day it's really just me working on this by myself to support tracking reviews, packages, etc.

stefanv commented May 8, 2023

Great! There's a bit of machinery around GraphQL paging that is service-specific (crazy, but so it is), so perhaps we can aggregate that into a "package" (submodule) and then just feed the package with the queries we want, built from the GitHub GraphQL explorer. Later, we can add bells & whistles like caching, exporting in different formats, etc.
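
As a sketch of what that shared paging helper could look like (the repository in the example query and all names here are illustrative, not an existing package):

import requests

API = "https://api.github.com/graphql"
TOKEN = "ghp_..."  # placeholder personal access token

STARGAZERS_QUERY = """
query($cursor: String) {
  repository(owner: "someorg", name: "somepackage") {
    stargazers(first: 100, after: $cursor) {
      pageInfo { hasNextPage endCursor }
      nodes { login }
    }
  }
}
"""

def paginate(query, extract):
    """Run `query` repeatedly, letting `extract` pull (nodes, pageInfo) out of each response."""
    cursor, items = None, []
    while True:
        r = requests.post(API, json={"query": query, "variables": {"cursor": cursor}},
                          headers={"Authorization": f"bearer {TOKEN}"})
        r.raise_for_status()
        nodes, page_info = extract(r.json()["data"])
        items.extend(nodes)
        if not page_info["hasNextPage"]:
            return items
        cursor = page_info["endCursor"]

stargazers = paginate(
    STARGAZERS_QUERY,
    lambda data: (data["repository"]["stargazers"]["nodes"],
                  data["repository"]["stargazers"]["pageInfo"]),
)
print(len(stargazers), "stargazers")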

lwasser commented May 8, 2023

Cool. I'll spend a bit more time documenting what ours does and what we need. I need to do that anyway, as I should have created a design from the start 😆 and I didn't. I just started writing stuff that did what I needed 🙃

We output to YAML right now but have no long-term storage; I'd love to be able to look at trends over time.

I've just been making REST API calls and have hit rate limits, but that may have been fixed in our last sprint. I'm happy to wrap around / use devstats as it makes sense and contribute effort there.
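
For reference, a sketch of those two pieces, checking the remaining REST quota and dumping a snapshot to YAML (the stats content is a placeholder):

import requests
import yaml

TOKEN = "ghp_..."  # placeholder personal access token
HEADERS = {"Authorization": f"token {TOKEN}"}

# Authenticated requests get a much higher quota; check what's left before a big crawl.
rate = requests.get("https://api.github.com/rate_limit", headers=HEADERS).json()
print("REST calls remaining:", rate["resources"]["core"]["remaining"])

# Static snapshot written to YAML (placeholder content standing in for real stats).
stats = {"somepackage": {"stars": 123, "open_issues": 4}}
with open("package_stats.yaml", "w") as f:
    yaml.safe_dump(stats, f)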

juanis2112 commented

Hackmd for the summit: https://hackmd.io/UNwG2BjJSxOUJ0M1iWI-nQ
