Process for experimental maintainer change #37

Open · 9 of 12 tasks
sdruskat opened this issue May 3, 2021 · 10 comments
sdruskat (Contributor) commented May 3, 2021:

  1. Do a release
  2. Change maintainer to @bbunzeck
    1. Record baseline & profile future maintainer, e.g. questionnaire about previous experiences (see #37 (comment)) ✔️
  3. @bbunzeck to read maintenance documentation
  4. @sdruskat or @thomaskrause (or both) to continue work on Hexatomic with different accounts
  5. @bbunzeck to maintain & document:
    1. Maintain 1 PR with documentation changes
    2. Maintain 1 PR with code changes
    3. Record differences between external (new account) and internal (existing account) PRs
    4. Maintain 1-2 feature releases
    5. Maintain 1 hotfix release
  6. Reporting & change back maintainer to @sdruskat or @thomaskrause:
    1. Analyse notes & document issues/solutions
    2. Classify issues
    3. Analyse GitHub activity logs
    4. Prepare paper
  7. Fixing:
    1. Fix documentation from issues reported in 5.
  8. Change maintainer to @bbunzeck
    1. Document improvements (improvement through experience vs. improvement through documentation)
      1. Why were interventions necessary before a feature release could be made? (metric = bad)
      2. Why were interventions necessary after a feature release was made? (metric = semi-bad)

Note: Any abnormal (i.e. not expected maintainer/contributor) interaction between @bbunzeck & @sdruskat or @thomaskrause is an intervention.
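The intervention tally needed for the reporting step could be derived from the activity logs. Below is a hypothetical Python sketch of that tally; the record layout (`author`, `addressed_to`, `kind`) is my own illustrative schema, not the GitHub API format, so a real log export would first need to be mapped into it.

```python
# Hypothetical sketch: tally candidate "interventions" from an exported
# activity log. The record schema (author/addressed_to/kind) is an
# illustrative assumption, not the GitHub API event format.

MAINTAINER = "bbunzeck"
PREVIOUS = {"sdruskat", "thomaskrause"}

def count_interventions(events):
    """Collect events where a previous maintainer interacts with the new
    maintainer, i.e. the 'abnormal' case defined in the note above."""
    return [
        ev for ev in events
        if ev["author"] in PREVIOUS and ev.get("addressed_to") == MAINTAINER
    ]

# Sample log with one expected contributor interaction and two interventions
events = [
    {"author": "externalcontrib", "addressed_to": "bbunzeck", "kind": "pr_comment"},
    {"author": "sdruskat", "addressed_to": "bbunzeck", "kind": "issue_comment"},
    {"author": "thomaskrause", "addressed_to": "bbunzeck", "kind": "review"},
]
print(len(count_interventions(events)))  # prints 2
```

In practice each hit would still need manual classification (step 6.2), since not every interaction between the accounts is necessarily abnormal.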

sdruskat (Contributor, Author) commented May 12, 2021:

Maintainer evaluation will be done via a questionnaire adapted from J. Feigenspan, C. Kastner, J. Liebig, S. Apel, and S. Hanenberg, “Measuring programming experience,” in 2012 20th IEEE International Conference on Program Comprehension (ICPC), Passau, Germany, 2012, pp. 73–82, doi: 10.1109/ICPC.2012.6240511 [Online]. Available: http://ieeexplore.ieee.org/document/6240511/. (preprint)

| Source | Question | Scale |
| --- | --- | --- |
| Self estimation | On a scale from 1 to 10, how do you estimate your programming experience? | 1: very inexperienced to 10: very experienced |
| | How do you estimate your programming experience compared to experts with 10 years of practical experience? | 1: very inexperienced to 5: very experienced |
| | How do you estimate your programming experience compared to your classmates? | 1: very inexperienced to 5: very experienced |
| | How experienced are you with the Java programming language? | 1: very inexperienced to 5: very experienced |
| | How many additional languages do you know (with medium experience or better)? | int |
| | How experienced are you with the following programming paradigms: functional/imperative/logical/object-oriented programming? | 1: very inexperienced to 5: very experienced |
| | On a scale from 1 to 10, how do you estimate your experience in maintaining software projects? | 1: very inexperienced to 10: very experienced |
| | How experienced are you with the following maintenance tasks: bug fixing; maintenance of pull requests; code review; working with issue trackers; preparing and performing software releases; writing software documentation; communicating with contributors; communicating with users; continuous integration? | 1: very inexperienced to 5: very experienced |
| Years | For how many years have you been programming? | int |
| | For how many years have you been programming for larger software projects, e.g., in a company or a research institution? | int |
| Number of projects | How many open source projects have you worked on as a core contributor or maintainer? | int |
| Education | What year did you enroll at university? | int |
| | How many courses did you take in which you had to implement source code? | int |
| Size | How large were the software projects you have worked on typically? | NA, <900, 900-40000, >40000 LOC |
| Other | How old are you? | int |

sdruskat (Contributor, Author) commented:

@thomaskrause Can you please have a look at this ☝️ and let me know what you think?

thomaskrause (Contributor) commented:

Here are my two cents about it:

  • We should try not to use too many different scales. I don't know what the original authors used as their scale. I think we should use a Likert scale with a sensible mid-point (so 1 to 10 would not work, but 1 to 7 or 1 to 5 would).
  • "How do you estimate your programming experience compared to your classmates?" doesn't work when someone is no longer studying. We should make this questionnaire generic, so that we can fill it out ourselves, as can any other possible maintainer. Letting possible maintainers fill it out even when they are not actually taking over maintainership of Hexatomic allows us to compare participants to the "population" of all possible maintainers and draw more informed speculations about why something happened or not.
  • "For how many years have you been programming for larger software projects, e.g., in a company or a research institution?" Maybe something more like "programming in a team" to distinguish it from "smaller" work?
  • "What year did you enroll at university?" We should use age here for comparison. That also makes it comparable with the age from the last question. Or I might misunderstand the reason we ask this. If we want to compare cohorts, we need to at least add the country or even the university. But maybe this is more about experience, so I would prefer age. Also, we should make clear that this is about any field, not just computer science, and that "NA" is a valid answer.

sdruskat (Contributor, Author) commented May 18, 2021:

Thanks!

As a general point, I think we should take out the programming paradigm question as it isn't of much interest in our context. I think I had it originally in strikethrough but it ended up in the table for some reason...

We should try to not use too many different scales. I don't know what the original authors used as their scale. I think we should allow a Likert scale with a sensible mean (so 1 to 10 would not work, 1 to 7 and 1 to 5 as well).

The original authors used 1-10 & 1-5. Pros and cons for even or odd scales have been highly debated, with some of the cons of having a mid-point being that it's unclear what the mid-point is supposed to represent, and that it is being used in controversial research to give people an easy way out. Seeing that we don't deal with controversial research here (IMHO), my original hunch was to stick with the 1-10 scale for the two broader questions (programming experience & maintenance experience) as they basically intrinsically ask for something like levels, where 10 is the perceived "perfection". What are your reasons for introducing a mid-point for these two? This is also something that we'd have to argue for when describing the setup of the questionnaire in the paper.

I think 1-5 is suitable for the more detailed questions, also because a mid-point makes sense there, as some are comparison questions.

"How do you estimate your programming experience compared to your classmates?" doesn't work when someone is no longer studying. We should make this questionnaire generic, so that we can fill it out ourselves, as can any other possible maintainer. Letting possible maintainers fill it out even when they are not actually taking over maintainership of Hexatomic allows us to compare participants to the "population" of all possible maintainers and draw more informed speculations about why something happened or not.

Agree. Should we change to something like "peers" (perhaps too general and hard to understand), or "people in the same role as you" (which can be interpreted to mean role within the project, career stage, or general experience)? Or do you think we should drop it altogether?

"For how many years have you been programming for larger software projects, e.g., in a company or a research institution?" Maybe something more like "programming in a team" to distinguish it from "smaller" work?

Yes, I agree. May make sense to define a team size. A quick search for "what is the size of a programming team" yielded results (e.g., 1, 2, 3) that suggest a common team size between 3-5 and 10. As I think that it's probably unusual in a research setting (at our end, the tail end of project sizes) to have teams as large as 10, I'd suggest to change to "programming in a team of 5 or more people", which to my mind already means "a large team".

"What year did you enroll at university?" We should use age here for comparison. That also makes it comparable with the age from the last question. Or I might misunderstand the reason we ask this. If we want to compare cohorts, we need to at least add the country or even the university. But maybe this is more about experience, so I would prefer age. Also, we should make clear that this is about any field, not just computer science, and that "NA" is a valid answer.

So you propose to change to "at what age did you enroll"? To circumvent scaring off non-academics, we could also ask "at what age did you start programming", which would also be comparable to age.

sdruskat (Contributor, Author) commented:

On another note: how do you think we should implement this questionnaire? I have been thinking Google Forms/LimeSurvey, but it may be good to have this as a special type of issue template?

thomaskrause (Contributor) commented:

As a general point, I think we should take out the programming paradigm question as it isn't of much interest in our context. I think I had it originally in strikethrough but it ended up in the table for some reason...

Yes, let's strike it. Most university education is very similar in its paradigms, and whether people have used Java is still included (and more relevant).

The original authors used 1-10 & 1-5. [...] What are your reasons for introducing a mid-point for these two?

I am fine with keeping the scales like the original paper to allow comparison. I just found it odd that we are forcing people to choose one side on some questions but not on others.

Agree. Should we change to something like "peers" (perhaps too general and hard to understand), or "people in the same role as you" (which can be interpreted to mean role within the project, career stage, or general experience)? Or do you think we should drop it altogether?

If we change it to "peers" we still can't compare it, since we don't know what the peers are (we would need to ask if someone is still studying). I'm for dropping it, since we already have the "compared with 10 years expert" question which should make it possible to compare results.

Yes, I agree. May make sense to define a team size. A quick search for "what is the size of a programming team" yielded results (e.g., 1, 2, 3) that suggest a common team size between 3-5 and 10. As I think that it's probably unusual in a research setting (at our end, the tail end of project sizes) to have teams as large as 10, I'd suggest to change to "programming in a team of 5 or more people", which to my mind already means "a large team".

I think we should aim for a "non-personal" size and vote for "programming in a team of 3 or more people (including non-programmers)". This includes student projects at universities that mimic professional software development, and also the typical situation in academia with PIs, colleagues as users, etc. Also, I think a lot of professional startups start with fewer than 5 people.

So you propose to change to "at what age did you enroll"? To circumvent scaring off non-academics, we could also ask "at what age did you start programming", which would also be comparable to age.

We already ask for "For how many years have you been programming?" which with the current age gives the start age. I think it would be nice to know the academic experience so I would still include the original question with the "does not apply" option added to it and also specifying that this is about professional computer science education: "How many years did you study in computer science or a related field?" This ignores when people start (so we can't distinguish people studying computer science as second study) but gives us an idea of the depth of the education (bachelor vs. master).

On another note: how do you think we should implement this questionnaire? I have been thinking Google Forms/LimeSurvey, but it may be good to have this as a special type of issue template?

We can use the HU LimeSurvey and download and archive the questionnaire definition backup files.

sdruskat (Contributor, Author) commented May 19, 2021:

@thomaskrause This then is the final version of the questionnaire, please review. Note: I have added a sub-question, seRCP, to the third question, asking for experience with the Eclipse RCP, because I think it is relevant.

If you can give me a 👍 (or further comment), I'll construct the Limesurvey questionnaire, ask you to pretest, and export the definition backup + publish on Zenodo.

| Source | Question | Scale | Abbreviation |
| --- | --- | --- | --- |
| Self estimation | On a scale from 1 to 10, how do you estimate your programming experience? | 1: very inexperienced to 10: very experienced | sePE |
| | How do you estimate your programming experience compared to experts with 10 years of practical experience? | 1: very inexperienced to 5: very experienced | seExperts |
| | How experienced are you with the Java programming language / Eclipse RCP framework? | 1: very inexperienced to 5: very experienced | seJava; seRCP |
| | How many additional languages do you know (with medium experience or better)? | int | deNumLanguages |
| | On a scale from 1 to 10, how do you estimate your experience in maintaining software projects? | 1: very inexperienced to 10: very experienced | seME |
| | How experienced are you with the following maintenance tasks: bug fixing; maintenance of pull requests; code review; working with issue trackers; preparing and performing software releases; writing software documentation; communicating with contributors; communicating with users; continuous integration? | 1: very inexperienced to 5: very experienced | seBugFixing; sePullRequests; seReview; seIssues; seReleases; seDocs; seCommContributors; seCommUsers; seCI |
| Years | For how many years have you been programming? | int | yProg |
| | For how many years have you been programming in a team of 3 or more people (including non-programmers)? | int | yProgTeam |
| Number of projects | How many open source projects have you worked on as a core contributor or maintainer? | int | pProjects |
| Education | How many years did you study computer science or a related field? | int, NA | eduCS |
| | How many courses did you take in which you had to implement source code? | int, NA | eduImpl |
| Size | How large were the software projects you have worked on typically? | NA, <900, 900-40000, >40000 LOC | zLOCSize |
| Other | How old are you? | int | oAge |
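For later analysis, responses could be machine-checked against the scales above. The following Python sketch encodes the ranges from the table; the dictionary layout and the `validate` helper are illustrative assumptions of mine, not part of the published LimeSurvey definition.

```python
# Illustrative sketch: validate questionnaire responses against the
# scales defined in the table above. The encoding below is an assumption,
# not the actual LimeSurvey export format.

SCALES = {
    # 1-10 self-estimation items
    "sePE": range(1, 11), "seME": range(1, 11),
    # 1-5 items
    "seExperts": range(1, 6), "seJava": range(1, 6), "seRCP": range(1, 6),
    "seBugFixing": range(1, 6), "sePullRequests": range(1, 6),
    "seReview": range(1, 6), "seIssues": range(1, 6), "seReleases": range(1, 6),
    "seDocs": range(1, 6), "seCommContributors": range(1, 6),
    "seCommUsers": range(1, 6), "seCI": range(1, 6),
}
INT_ITEMS = {"deNumLanguages", "yProg", "yProgTeam", "pProjects", "oAge"}
INT_OR_NA = {"eduCS", "eduImpl"}
LOC_ITEMS = {"zLOCSize": {"NA", "<900", "900-40000", ">40000"}}

def validate(item, value):
    """Return True if `value` is a valid answer for questionnaire item `item`."""
    if item in SCALES:
        return value in SCALES[item]
    if item in INT_ITEMS:
        return isinstance(value, int) and value >= 0
    if item in INT_OR_NA:
        return value == "NA" or (isinstance(value, int) and value >= 0)
    if item in LOC_ITEMS:
        return value in LOC_ITEMS[item]
    return False

print(validate("sePE", 7), validate("seJava", 6), validate("eduCS", "NA"))
# prints: True False True
```

A check like this could run over the archived response export before analysis, flagging out-of-range answers early.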

sdruskat (Contributor, Author) commented:

Published questionnaire at https://doi.org/10.5281/zenodo.4773927. Also deployed on the HU LimeSurvey instance.

sdruskat (Contributor, Author) commented May 19, 2021:

@thomaskrause We (i.e., you 😉) need to do a release before the actual maintainer change. And perhaps even fix hexatomic/hexatomic#316.

sdruskat (Contributor, Author) commented:

Options for solving the external contributor/CI/code analysis/security issue on PRs:

  • Make everything runnable locally through dedicated tools (e.g., SpotBugs)
  • Ask external contributors to set up SonarCloud and link to the results in updates (potential issues: setting the baseline correctly to calculate the correct set of metrics, etc.)
  • Use external contributor functionality of coding platform (GitHub) to put trust in contributors (issues: can still steal secrets, but that's a general trust issue in projects)

thomaskrause removed their assignment Mar 11, 2023