JupyterLab community survey #104

Closed
lresende opened this issue Oct 26, 2020 · 38 comments · Fixed by #107
@lresende
Member

Based on the discussion initiated in the "JupyterLab vision for the next few years" thread, a few JupyterLab community members put together a survey asking questions based on items raised in that thread.

Note that this topic has already been discussed a few times during the weekly dev meetings and other parallel meetings.

We are at the point now that we would like to send this as a "JupyterLab community survey" and would like to seek community approval for doing so.

Please express your support with a proper comment.

I will leave this open for a week before trying to tally the votes.

@choldgraf

choldgraf commented Oct 26, 2020

I think it's a great idea 👍

The only thing that I would caution is to be clear about who the survey has gone to, who has responded, etc. There may be certain groups of people that are less likely to answer these questions than others, or a certain focus taken in the questions, and we may bias interpretation towards a particular perspective if that isn't recognized ahead of time. For example, the language here feels a bit "ML/industry-focused" as opposed to research and education (which I think are two smaller communities from a user perspective, but are key communities to consider from a "mission" perspective).

As an aside, I would not lump "professor/instructor/teacher" together in the same group; they are very different roles, and IMO it's important to know whether a professor is using Jupyter for research vs. teaching, or whether a teacher is at a university vs. a community college, etc.

Just a note that the survey is called 'Jupyter Notebooks Survey'...I am not sure if / how much you want to conflate "Jupyter Lab" with "Jupyter Notebooks", but just wanted to point that out.

@goanpeca
Member

goanpeca commented Oct 26, 2020

I think it's a great idea too 💯

Thanks for the comments @choldgraf! I think it is worth revisiting the survey based on your comments and making some small tweaks :) Otherwise, let's move forward!

@saulshanabrook
Member

This sounds great! Thank you to everyone who put in effort on this survey and I am sure we will get a lot of useful information out of the results.

@ellisonbg
Contributor

ellisonbg commented Oct 26, 2020 via email

@saulshanabrook
Member

> Given the importance of this, I believe this needs to be posted publicly as a PR (this repo is fine)

Could you clarify what you think should be posted as a PR? It looks like the content is in the linked survey monkey poll.

@isabela-pf
Contributor

I’m glad to see discussion around this happening again! Overall I’m in favor of running this survey (but it’s worth noting I’m a biased party since I spent time working on it). I do think it is possible to gain meaningful info about community perceptions and concerns based on where the survey is now, though I think some of the above comments are helpful feedback worth considering.

The main questions I still have revolve around what the next steps will be if the community supports it here. I'm open to different ideas; I just would like to request that these next steps be a transparent and publicly listed set of steps so that

  1. People who are interested in the survey know how they can easily track it or participate
  2. This work doesn’t get lost in a state of ambiguous decision-making
  3. It can serve as a reference for similar work in the future (including thoughts on how to navigate other non-code contributions around Jupyter)

@lresende
Member Author

Based on the last state I saw the survey in, more work was needed. A couple of us had met a few times and were making good progress on it, but things slowed down due to JupyterCon. Given the importance of this, I believe this needs to be posted publicly as a PR (this repo is fine) for review and approval by the core team.

Constructive feedback is always welcome, and something like @choldgraf above gives us specific areas to work on.

@ellisonbg could you please clarify what else is needed from your point of view?

@ghost

ghost commented Oct 26, 2020

Thank you, @choldgraf, this is the third time we've heard the feedback about the ML theme being over-represented, so I just deleted another one of the ML questions. There is now only one ML-specific question left in the survey, which asks about the type of analysis being performed, compared to 7 questions about usage patterns, 3 about collaboration, and 3 about data. Maybe people are mixing up ETL and big data with ML?

@choldgraf regarding education/research, what information would you like to know? Happy to add it in and delete things to make room for it. We cut out industry-related questions like finance vs. pharma vs. healthcare in order to focus on use cases, so we are less inclined to focus on distinctions like community college vs. university, and more inclined to focus on the use cases you would like to learn more about. Would you add to question 7? Can you list the roles you would like to see?

@ellisonbg
Contributor

ellisonbg commented Oct 26, 2020 via email

@ellisonbg
Contributor

ellisonbg commented Oct 26, 2020 via email

@lresende
Member Author

@ellisonbg For awareness of others who might not have participated in the Google Docs discussions, could you please share them on this issue?

@psychemedia

psychemedia commented Oct 27, 2020

Q5: "What are your go-to tools for performing data science, scientific computing, and machine learning on your laptop/ desktop (non-cloud) for data science? (pick up to 2)"

JupyterLab / Jupyter notebook are given as a single option: split these into two options (to capture those who actively use classic notebook & avoid JupyterLab) and raise the choices to up to 3.

Q6 "How do you run and/ or access Jupyter? (pick up to 4)"

I tend to run things locally in a Docker container, often built from a GitHub repo using repo2docker, or built and pushed to Docker Hub using a repo2docker GitHub Action.

BinderHub / MyBinder also appears to be missing?

Also, what about "institutionally provided JupyterHub / authenticated multiuser Jupyter notebook server" and BinderHub? E.g., try to draw out whether folk are using credentialed persistent servers, credentialed temporary servers, or public temporary servers. Then have another question to try to tease out whether the server is provided by a corporate, teaching, government, research, or "retail" provider.

Question 7 "What tasks do you need to perform and what tools do you use to accomplish them?" takes forever to complete. There are also different pathways / levels of meaningfulness along the rows. If I answer NEVER in column A, the extent to which the other columns make sense may also depend on how I answer the other questions.

Q8 Datasources: are SQLite databases SQL or file system files?

Would it be useful to know what sort of resource (CPU/GPU/memory/bandwidth) requirements people typically have, and who provides them (own desktop/laptop computer, own local server, retail cloud, institutionally provided)?

Q18 "What is your reason for sharing a notebook with someone else?" What about other sorts of use case, like training or teaching tutorials, tutor support, providing feedback on assessment, help desk?

@ghost

ghost commented Oct 27, 2020

@psychemedia thank you.

  • Q5: split the options and increased to 3.
  • Q6: added Docker back in and added BinderHub / MyBinder. That extra level of detail is best obtained by an independent assessment by the JupyterHub team.
  • Q7: reduced the number of columns to 3 by combining the "how frequently" columns. This is the heart of the survey. Yes, if we programmed the forms ourselves we could grey out the others.
  • Q8: added SQLite and drew a distinction between traditional SQL (mysql, psql) and embedded (SQLite). Changed the "File System" option to be more obvious. Kind of getting at the resource constraints in the scale challenges question.
  • Q18: Changed "Review" to "Feedback/ comments." Changed to "Share knowledge/ teach people."

@lresende
Member Author

I am really glad we are making quick progress here. It's unfortunate we missed the opportunity to do this during JupyterCon. We should try to have a deadline for reviews by the end of the week, and then give a couple of days for any extra voting/approval required; otherwise we might run into the holiday season, and I am not sure how that could affect participation.

@ellisonbg
Contributor

ellisonbg commented Oct 28, 2020 via email

@ellisonbg
Contributor

Yeah, I had a meeting canceled this morning so was able to submit the PR:

#106

I propose we start to provide feedback on this version through the PR.

@goanpeca
Member

goanpeca commented Oct 28, 2020

It seems this survey has been taking longer than most of the parties involved expected, and I want to share a few thoughts:

Expanding on @lresende:

The JupyterCon milestone was missed, and with the holidays approaching, the number of users/devs/people we may get feedback from will diminish.

> We are at the point now that we would like to send this as a "JupyterLab community survey" and would like to seek community approval for doing so.

I think at this point most of the community approves making the survey, but there are four things that have not yet been defined and need to be:

1. How to collaborate and make suggestions on the latest content of the survey that was created on Survey Monkey?

Part of what we found is that Survey Monkey is not optimized for collaborative work or feedback.

  • Completely agree! But a PR is not optimized for this either unless we are writing code. We are mostly developers here, but I really disagree with a PR being the right tool to provide any more feedback. A PR is slow, not very dynamic, and tied to an author making the changes. We want a Google Doc or a HackMD so there is no dependency on a single author, which makes the process easier for anyone reading these issues who wants to collaborate.
  • Moving the latest Survey Monkey content back to a GDoc should be straightforward.
  • Questions that do not reach consensus in the comments should then be presented with variants: Variant A and Variant B, with a vote on those once the deadline has arrived.

Please reconsider using a publicly open Google Doc.

2. What is the deadline to make any suggestions on the content of the survey?

Making surveys is an art and a science. That does not mean that this should take months. Some data is better than no data, and the perfect is the enemy of the good.

I propose some days before the Community Call on 11.17.20, so that the vote can happen on that date and the content be moved to Survey Monkey.

Please let's agree on a reasonable deadline for making comments.

3. What is the process to vote on the final content once the deadline has arrived, who is voting and how are votes resolved?

I propose we define a group with an odd number of members (5, 7, or 9) so that a vote on the final content for each question and its answers can be made. Each member gets the same weight and can vote Yes/No. A simple Yes majority means the question is approved and ready to be moved to Survey Monkey. A No majority means either removing the question or making very minor tweaks if that fixes it.

There should probably be another round of voting for questions that got a No majority, to review the minor tweaks, if any, to be resolved on the same call/meeting.

Please let's agree on a reasonable voting process to decide the questions that will go into the survey after the deadline.

4. What are the next steps after passing, in the context of this being accepted by JupyterLab?

Once approved and moved to SurveyMonkey then:

  • Post from the official Twitter account?
  • Post on the official Discourse?
  • Post on the official Google Groups?
  • Other?

Please let's agree on the steps, so that we can expand on this and document it moving forward for other similar initiatives.

@lresende
Member Author

As many, if not all, of the participants of the JupyterLab meeting today seem to agree, we have spent too much time on the current survey.

Here is a proposed timeline:

> 2. What is the deadline to make any suggestions on the content of the survey?

How about a tentative and slightly aggressive schedule, trying to match the Lab 3.0 release announcement?

  • Final content to be put to a vote by Nov/02
  • Approval vote to be closed by Nov/06
  • Nov/09 JupyterLab 3.0 release announcement with the link to the survey.
> 3. What is the process to vote on the final content once the deadline has arrived, who is voting, and how are votes resolved?

Unfortunately, I don't think we have much of a say here. The "binding" votes are probably scoped to the official JupyterLab committers, and it should be a majority vote (more than half of the votes cast) unless JupyterLab has different bylaws related to voting procedure.

What is the alternative if the approval vote fails? Well, we can always do a survey independent of the community, which is definitely not the goal, but a "last resort."

> 4. What are the next steps after passing in the context of this being accepted by JupyterLab?

I believe that the more amplification the better. So on top of @goanpeca's suggestions, I would say:

  • Update Gitter, GitHub, etc. statuses to link to the survey; see what PyPI is doing:
    [image]

  • Also note that I have asked some colleagues who work on data science advocacy and tech marketing teams to help amplify the survey, and they will help spread the word once this is available.

@psychemedia

Just a note about amplification: I assume a banner on MyBinder might be possible, but that then introduces a possible bias. You can defend against that a little by having a form entry asking where you found out about the survey, with MyBinder as one of the options, etc.

@choldgraf

choldgraf commented Oct 28, 2020

Yeah I think the Binder team would be +1 on including a survey from JupyterLab as a banner - we have done it for Binder before. Would need to run it by folks in a team meeting though.

@lresende
Member Author

Looks like we are making good progress on reporting survey updates/suggestions on this issue, and @LayneSadler has been applying these very quickly (@choldgraf there is still one open issue waiting for your feedback).

As we are fairly far along with the contents, I would suggest we keep it this way unless we really need to make some drastic changes to the survey, which I don't think is the case. Yes, I understand it's not perfect, but it's one less place where things need to be kept in sync.

If everything goes as planned, we will listen to feedback and quickly iterate through it over the weekend and start a vote on Monday (Nov/02).

@choldgraf

Thanks for those edits @goanpeca - regarding specific things to learn about research or education. I guess it may be hard to insert these kinds of questions into the questionnaire in a way that isn't obviously about research/education. E.g., I am interested to learn what people use for grading, whether they want/need an auto-grader, whether they want something integrated into an interface or something callable from an API, etc. But, perhaps that is better to run as a research/education-focused survey in the future. As I mentioned, I feel a bit weird pushing that issue since few people on the JupyterLab team are embedded within an academic context, so I think you all should go with the survey that makes the most sense for you.

One topic that I think is missing from a "research" standpoint is reproducibility and (scholarly/book) publishing. Questions about whether they want to be able to write documents in JupyterLab vs. just analyses and scripts, whether they care about having reproducibility features built into their interfaces, or things like versioning of notebooks etc.

I don't think that my first paragraph should be a blocker on this, though I think we should include some questions about writing documents/narratives/etc, as well as reproducibility, as those are both core parts of the Jupyter story.

@ghost

ghost commented Oct 30, 2020

@choldgraf there's an idea: modes for editing MyST markdown and Jupyter Books. Let me get through the workweek and I'll run some changes by you.

@ghost

ghost commented Oct 31, 2020

@ellisonbg raised important points about the nature of a collaboration. If we do not gather the information necessary to answer the question of "do users need realtime collaboration" then the survey has failed.

[image]

Added the above question to the existing collaboration section, which hopefully reconciles a major fork in the surveys by figuring out more of the who/what/when/where/why of collaboration.

@ghost

ghost commented Oct 31, 2020

@choldgraf

Regarding research/ markdown docs:
Q7 use cases: Split content creation into “Documenting research (scientific papers, reports)” and “Creating content (blogs).” To make room for it, dropping “deploying code to production” and merging “building data intensive dashboards” into viz item.
Q20 ui problems: Added “No modes for editing other Jupyter documents (MyST, Jupyter Book).”

Regarding roles:
Q4 roles: Split into "Teacher/ lecturer" and "Tutor/ Teacher's assistant." Leaving professor out of it. There are existing boxes for "researcher/ scientist."

Regarding versioning/ reproducibility:
Q19 collaboration problems: Tweaked “More robust version history of a notebook.” Tweaked “Don't know what dependencies (versions of language, packages, extensions) a notebook uses.” Added “Don't have the data that a notebook is supposed to use.”
Q15 scale problems: already asks about batch execution (parameterization) and saving notebook outputs.

@choldgraf

@LayneSadler these are great, thanks for these updates :-)

Ah, one other thing I think is missing from this survey: extensions and development. In my opinion, a huge reason that the notebook was so popular was because it was so hackable. Anybody could write an extension with (relatively) minimal background knowledge. I often wonder to what extent JupyterLab should put effort towards re-capturing this. One of the core value propositions of other UIs like VSCode is their extension mechanism, marketplace, and community. I think that "extensibility" is a core feature of JupyterLab just as much as any other part of the UI or functionality is.

So I'm curious how users feel about this - do they wish they could extend JupyterLab more easily? Do they find documentation / tutorials / APIs for extending JupyterLab to be straightforward or confusing? Do they find it easy to discover and extend their interface via extensions?

@lresende
Member Author

> So I'm curious how users feel about this - do they wish they could extend JupyterLab more easily? Do they find documentation / tutorials / APIs for extending JupyterLab to be straightforward or confusing? Do they find it easy to discover and extend their interface via extensions?

I believe the main target here is users, and I don't expect to drop the burden of fixing the deficiencies of a product on them by asking them to create extensions. To that extent, I would re-phrase this to "what makes it hard to use Jupyter notebooks/JupyterLab," and then we as developers would look into what improvements we should focus on.

@ghost

ghost commented Oct 31, 2020

If Jupyter is going to compete and win against corporate giants, then it needs to wholeheartedly embrace the non-linear benefits of a platform strategy. I highly recommend Platform Revolution and Exponential Organizations.

[image]

Long-tail distribution: the Jupyter core team cannot possibly develop all things for all users across science, data science, and software engineering - but these users will leave if their niches aren't fulfilled. The core team should focus on developing "the hits" to solve major problems and on enabling power users in different fields to develop the "niche content." For example, a genome browser should be developed by someone in genomics, not Jupyter.

Regarding the survey, I agree, the focus should be the core product and getting a better idea of what "the hits" are. However, considering the strategic importance of extensibility, I also agree that it should not be neglected. After all, wouldn't something like an "App Store" be provided by the core product?

My recommendation is that we (a) gauge awareness/ satisfaction with the extension system in general, and (b) try to tag people for recontact in a follow-up to learn more about extension development. This area feels better suited for qualitative research (user interviews) to identify the problem areas. @isabela-pf what are your thoughts here?


Changes:
[image]

[image]

@choldgraf

I think that there's a more complex community make-up for Jupyter(Lab) than just "developers" and "users". One of Jupyter's strong points has always been the way in which it builds components for the community to extend and modify, and I think it fits in with Jupyter's open source culture of building a "big tent" of community members with a variety of development expertise.

As I mentioned before, I think one of the key reasons for the success of the Notebook UI was that "users" were empowered to hack and extend it for many purposes. Those people are certainly not the majority of Jupyter users, but I think they are an important group of power users to remember. The people who were building the IPython notebook extension ecosystem were not always developers in the traditional sense; they often began as users and realized they had a need, and enough Python knowledge, to build tools that were useful for others. I think we want to encourage that kind of creation and sharing as a core pillar of our community.

re: @LayneSadler 's recommendation, I think you make a good point re: qualitative research. There will certainly be fewer folks with opinions and interest in developing extensions, but those are folks that might be worth a heavier-touch investigation.

@psychemedia

psychemedia commented Nov 2, 2020

> I think one of the key reasons for the success of the Notebook UI was because "users" were empowered to hack and extend it for many purposes. Those people are certainly not the majority of Jupyter users but I think they are an important group of power users to remember.

The ability to treat the classic notebook view as a relatively easily extensible document editing platform is important to me on two counts:

  1. the ability to customise the environment using community-developed extensions and share that environment with my "users" (students on a course). This has a multiplier effect: I have exposed third-party extensions, as an integral part of a provided environment, to 2k+ distance education students, many of them in employment, over the last few years.
  2. the ability for me to develop relatively simple extensions as an educator/have-a-go technologist, not as a developer, in order to support a teaching point or improve the presentation of notebooks as educational materials.

In both senses, I essentially act as a customiser and "reseller" of notebook environments, and I'm not sure what set of answers to the current questions would allow this sort of segment to be detected.

As well as using notebooks to deliver teaching, we hope that students may go on to adopt notebooks outside of their courses in various ways:

  1. Continuing to use the VM provided environment as is (eg with extensions enabled) and making use of those extensions;
  2. Continuing to use the VM provided environment as is (eg with extensions enabled) but without making use of those extensions, and/or disabling those extensions;
  3. Using other environments and installing some of the extensions we made students aware of into those environments;
  4. Looking for and installing other extensions from an awareness of the extension ecosystem;
  5. Using vanilla / unextended notebooks.

This also makes me wonder about the life history of Jupyter users:

  • how were they introduced / onboarded to Jupyter tools;
  • how has their use evolved, and how and why have they changed their practice, if at all?

@isabela-pf
Contributor

I don't know if this reply is too delayed with the voting starting today, but here it is anyway. I do agree, @LayneSadler; I don't think a survey can answer all our questions, so it may never feel as perfect as we want. I do know that I don't feel like I have a good grasp on what the community does and how they feel working with Jupyter tools, which makes my work (and others', if the push for this survey can be used as proof) much more difficult and probably hurts the project long term.

I think getting any info is the first step, and being able to ask more/better/specific questions will come later, probably with qualitative research like user interviews (as Layne mentioned, as we talked about in meetings when writing this survey, and as is mentioned at the top of the series of Google Docs that were open for review). A lot of the questions on this issue, and the questions I've heard in other conversations, aren't the kinds of questions that deserve only a single multiple-choice answer.

So once again, I agree: I think the goals for this survey can helpfully be paraphrased (from Layne again) as (a) gauge awareness/ satisfaction and (b) try to tag people for recontact for a follow-up.

@choldgraf

I will defer to you all on the right balance of "breadth of use-cases we ask about" vs. "depth on any one specific use-case"...I trust your expertise over my own when it comes to this. In general I'd be +1 on more questionnaires over time so that each one can be tailored for specific needs / users / etc.

@lresende
Member Author

lresende commented Nov 3, 2020

Thank you all for collaborating on this. Let's use this exercise to gain experience in creating and driving such surveys so that we can do these types of activities more often.

I will create a snapshot of the survey (a PDF) later today and start the vote as a PR as planned.

@ellisonbg
Contributor

Great to see the collaboration and work on this, thanks everyone. I got pulled into helping resolve a JupyterCon Code of Conduct situation over the last week, so I haven't been able to participate, but glad to see the survey moving forward.

A few meta comments though...

  • The current Jupyter governance model is based on consensus rather than majority voting (https://github.com/jupyter/governance/blob/master/governance.md). The governance model phrases this as "In general all Project decisions are made through consensus among the Core Developers with input from the Community."
  • We have been working on a new decision making model (Decision making guide jupyter/governance#87) that has an optional voting phase, but that is not yet approved. If approved, all subprojects will be required to establish and publicly document their voting bodies. There will be standard voting procedures (quorum, notifications, voting periods, etc.).
  • In our current governance model, we do occasionally use informal voting to gauge consensus, but until a new governance model is passed, we should not be taking formal, majority based votes to make decisions.
  • For something like a UX Survey that is widely distributed to Jupyter users as official Jupyter sanctioned material, a relatively high bar is needed. The time/attention of our users on a survey is a finite resource and we can't do many of them and expect to get representative participation.
  • The current governance model makes it clear that consensus can be broad ("input from the community") but that it has to include the "Core Developers". We haven't formalized exactly who the core developers of JupyterLab are, but I don't believe we can move forward without some combination of people like @blink1073 @afshin @jasongrout @ian-r-rose @jtpio @telamonian @saulshanabrook @vidartf @echarles et al. reviewing and approving the text of the survey. I am not saying that only these people can/should review this, only that consensus without the core isn't consensus.
  • It is appropriate (even encouraged) for people to iterate on the survey in a range of different tools (PRs, Google Docs, HackMD, Survey Monkey), but at the end of the day we need a transparent/versioned/historical place to comment on and approve the survey content. For the governance work, we have been working in Google Docs and eventually submitting PRs to the repo for final comments and approval, and I think that is appropriate here.
  • Lastly, it is important to remember that consensus takes time. Even for highly important things (like the release of JupyterLab 3.0) we tend to set only gentle, aspirational deadlines.
  • Regardless of what survey tool is used (Survey Monkey, Google Forms) a Jupyter account should be used so the core team can access it. We have a project wide Google Drive that can be used, or we can create a Survey Monkey account using the project email.

@lresende
Member Author

lresende commented Nov 3, 2020

> The current governance model makes it clear that consensus can be broad ("input from the community") but that it has to include the "Core Developers". We haven't formalized exactly who the core developers of JupyterLab are, but I don't believe we can move forward without some combination of people like @blink1073 @afshin @jasongrout @ian-r-rose @jtpio @telamonian @saulshanabrook @vidartf @echarles et al. reviewing and approving the text of the survey. I am not saying that only these people can/should review this, only that consensus without the core isn't consensus.

Currently, the only official list the project has is the "JupyterLab committers" GitHub team, which I believe is the one I suggested to use.

> Lastly, it is important to remember that consensus takes time.

While I agree with this, taking too much time means consensus is not really being reached. Having said that, in this particular case, most of the community that is actually trying to implement the survey is in consensus, and I think it's fair to them, who have put a lot of time into driving this, to try to get a resolution. Unfortunately, the only way we seem to have is to call a vote, which will give closure to the matter one way or another.

> Regardless of what survey tool is used (Survey Monkey, Google Forms) a Jupyter account should be used so the core team can access it. We have a project wide Google Drive that can be used, or we can create a Survey Monkey account using the project email.

When we look at the current URL https://www.surveymonkey.com/r/LCB7GBF and the contents of the survey, there is no identification of whose account is being used. At the moment, the account belongs to a member of the community, and he has confirmed multiple times here that he is going to make all data available to the community. Based on that, I don't think this is a stop-ship issue for the survey.

@meeseeksmachine

This issue has been mentioned on Jupyter Community Forum. There might be relevant details there:

https://discourse.jupyter.org/t/pyqtgraph-maintainers-seeking-input-regarding-user-survey/10303/4

@jmacagno

jmacagno commented Sep 1, 2021

Hello All -

Have the results of this survey been shared with the community? Can anyone share access and insights gathered?

thank you

@isabela-pf
Contributor

Hi @jmacagno! The results are at jupyter/surveys and @aiqc has been posting insights on various issues. I think one such set of insights can be found at #121, but maybe Layne has some others he'd like to point to as well?
