Add stubs for all exercises #238

Closed
iHiD opened this issue Apr 2, 2019 · 25 comments

@iHiD
Member

iHiD commented Apr 2, 2019

Some tracks currently have stub files for exercises. Other tracks only provide the test file. I propose making stubs a requirement on all exercises on all tracks. My reasons are:

  • Consistency: A key driver for me is to make Exercism more consistent. This gives a better experience to the user and reduces confusion.
  • Helping beginners: Creating a file from scratch is something that often requires a beginner to Google: they have to work out the correct file extension, the correct basic syntax, etc. For C#, a basic solution to Two Fer involves discovering and working out how to arrange all of the following: public, static, using, class and string. For Ruby it requires module or class and def, neither of which would necessarily be clear. Providing stubs reduces this hard learning curve (see the sketch after this list).
  • Reduction in support: We get emailed a lot by people having no idea what to do to get started and a stub file would dramatically reduce that barrier.
  • Adding progression: Each core exercise can have a reduced stub, until we just have a blank file with the correct filename. This gives a student a bit more work to do each time, but also means they have previous exercises to refer back to.
  • Auto-mentoring: We need to be able to predict the file that someone will upload to be able to run static analysis on it. Having stubs enforces the file name of the solution, which reduces complexity for the auto-mentoring maintainers.
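
For illustration, a stub for an exercise like Two Fer could be as small as a file with the right name, an export, and a signature, so the only remaining work is the body. This is my own sketch, not any track's actual stub; the file name, function name, and signature are assumptions, written in TypeScript purely as an example:

```typescript
// two-fer.ts — hypothetical stub; the file name and signature are
// illustrative assumptions, not taken from any specific track.
export function twoFer(name?: string): string {
  // Remove this line and implement the function.
  throw new Error("Please implement the twoFer function");
}
```

A later core exercise could ship a progressively smaller version of the same file, down to just an empty file with the correct name, which is the progression described above.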

The three negatives I have heard in response to this proposal are:

  • Not teaching error messages: If someone sees an error message of "file missing" or "require failed" and learns how to deal with it, then that is valuable.
  • Learning how to create files: Creating the file means you become familiar with the file extension etc.
  • Another job for maintainers: All tracks will need to do this bit of housekeeping, and if we have progressively smaller stubs, then some thought needs to go into that.

My strong opinion is that the positives outweigh the negatives. I would also like to suggest adding the progression through reduced stubs into the Track Anatomy Project.

Previous discussion was here.

Have I missed any pros/cons? Does anyone have anything to add that I've not considered?

If people could 👍 and 👎 on this issue with their preference, I would appreciate it.

@SleeplessByte
Member

Sidenote:

  • you've written two (2) negatives and are already listing three
  • can you please fix the bolding? Remove the last space

In general I agree with this.

Another negative:

  • you're going to need to make choices that may come down to preference rather than idiom (module over class, etc.).

enforces the file name of the solution, which reduces complexity for the auto-mentoring maintainers.

  • No, it makes it much more likely. I personally would go the way of the javascript-analyzer: it searches for exercise.js, but if that can't be found it will use whatever .js file it can find that is not a spec file.
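
As a rough sketch of that kind of fallback (my own illustration, not the javascript-analyzer's actual code; the helper name and the spec-file naming convention are assumptions):

```typescript
import { readdirSync } from "fs";
import { join } from "path";

// Prefer the canonical stub name; otherwise fall back to any .js file
// in the submission directory that is not a spec file.
function findSolutionFile(dir: string, canonical: string): string | undefined {
  const files = readdirSync(dir);
  if (files.includes(canonical)) {
    return join(dir, canonical);
  }
  const fallback = files.find((f) => f.endsWith(".js") && !f.endsWith(".spec.js"));
  return fallback ? join(dir, fallback) : undefined;
}

// e.g. findSolutionFile("./two-fer", "two-fer.js")
```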

@cmccandless

Python has had stubs for a while now.

I can attest to the benefit of knowing where to start. Whenever I start a new track in a language I haven't worked with before, I like to have something to point me in the right direction so I don't have to read the test suite or project files to figure out what to name my solution file.

@Smarticles101
Member

This is something the Java track has historically had issues with. We settled a while ago on providing stubs for all exercises with difficulty 4 or less.

On the first exercise with difficulty 5 we have a hint in the readme explaining how to add the stubs and why they suddenly disappeared. We also keep the solution file structures fairly similar between exercises, making it easy for the student to look back at previous solutions to see how to structure the file.

I don't think this affects the auto-mentoring for the Java track too much, as each solution file only has one name that will compile successfully with the tests.

I like the thought of removing stubs progressively, or removing them at a certain difficulty level like the java track does.

@pgaspar
Member

pgaspar commented Apr 2, 2019

Having stubs enforces the file name of the solution, which reduces complexity for the auto-mentoring maintainers.

Aren't students able to split their work in multiple files and submit them all, though? I'm not sure if the analyzers will be able to assume there's only one file 🤔

I agree with this change, especially for earlier exercises. I experienced it as a beginner on the Elixir track and I appreciated the confidence the stub gave me on that initial exercise.

@SleeplessByte
Member

I'm not sure if the analyzers will be able to assume there's only one file 🤔

They will probably just return refer_to_mentor for now.

@NobbZ
Copy link
Member

NobbZ commented Apr 2, 2019

Aren't students able to split their work in multiple files and submit them all, though?

Yeah, probably, but some tracks make this easier than others, and language tooling might also influence this ability.

In Erlang we have a strict relationship between module names and file names. In Elixir the name of the script under test is hard-coded in the test suite; the same is true for the Bash suite, and probably for many other languages as well. And so far I have not seen any exercise that would require splitting the implementation into multiple files (except in languages that have a header-file concept).

@petertseng
Member

Someone who has an abundance of time (wishful thinking on my part) would be able to read the linked issues from #114 and summarise any findings they get. Here is what I recall from that issue:

In general, we can see that there was a pattern that most tracks that had a discussion tended to decide in favour of having stub files. By nature of the survey method, there is no data for tracks that did not have a discussion on the record on GitHub.

From this past discussion, I retrieve one point in favour of stubs and contribute it to this discussion:

Reduced annoyance for students: We have a quote that creating the files is supremely annoying. This is especially true for tracks where the file needs to go in some deep directory structure.

Subnote on reduced annoyance, applicable to statically-typed languages when an exercise asks the student to implement more than one function: typically the entire test file must compile before it can be run, so choosing not to provide a stub file means the student has to write stubs for all tested functions up front, rather than working on one function at a time (see the sketch below).
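
To make that subnote concrete, here is a hedged sketch (TypeScript, with invented function names and signatures) of a stub for an exercise that tests several functions: with all signatures present, the test file compiles immediately and the student can implement one body at a time.

```typescript
// Hypothetical stub for a multi-function exercise; the names and
// signatures are invented for illustration only.
export function score(word: string): number {
  throw new Error("Not implemented");
}

export function bonusScore(word: string, multiplier: number): number {
  throw new Error("Not implemented");
}

export function totalScore(words: string[]): number {
  throw new Error("Not implemented");
}
```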

@iHiD
Member Author

iHiD commented Apr 2, 2019

Super helpful. Thank you, @petertseng. I'll add that link to my original post.

@kenliu

kenliu commented Apr 3, 2019

As a student, I find it annoying as well (I brought this up with @iHiD in Slack). Maybe not supremely annoying, but it seems like an unnecessary chore to have to do it on every single exercise.

While it doesn't take much time or effort to create a new file, if you consider that the Ruby track has 90+ exercises, that's a lot of wasted effort. Time is scarce for students, and we should find ways to help them use their time effectively.

@macta

macta commented Apr 3, 2019

Can I step in and give a counter-example? In Pharo I don't want to give a stub. The reason is that, coming from the source of TDD, the idea is that when you hit an error your environment helps you correct it. So the Smalltalk way is to run the first test and hit an error (the model your test references is not defined), then click the "correct/create" button: the class is defined and execution continues. The next error is that the method you called is not defined; again, click the create button and boom, a new method exists and the debugger stops on the error "implementation not defined".

I think the intent is still the same: the student needs to be productive and start writing code as quickly as they can, but specifying the implementation isn't always the correct way to get there. In many languages a template is the right approach, but not in all of them.

@SleeplessByte
Member

@iHiD cc @kytrinyx: a question regarding updating an exercise to the latest version.

When a student updates their exercise, how does it determine which files to overwrite and which files to leave alone?

@rpottsoh
Member

rpottsoh commented Apr 4, 2019

I'm not using stubs right now so this has never crossed my mind. I'm very interested to know the answer too.

@NobbZ
Member

NobbZ commented Apr 4, 2019

When a student updates their exercise, how does it determine which files to overwrite and which files to leave alone?

From my observations, user submitted files are kept as they have been submitted, everything else gets synced with upstream.

This is annoying when the name of the file you are supposed to implement changes for some reason...

@SleeplessByte
Member

From my observations, user submitted files are kept as they have been submitted, everything else gets synced with upstream.

Which means that if they have added tests, they will never get new tests.
Which also means that if we add stubs it will not overwrite their submissions.

@iHiD
Member Author

iHiD commented Apr 4, 2019

From my observations, user submitted files are kept as they have been submitted, everything else gets synced with upstream.
Which means that if they have added tests, they will never get new tests.
Which also means that if we add stubs it will not overwrite their submissions.

All these points are correct.

@macta

macta commented Apr 5, 2019

When students add tests, do you not ask them to create a separate TestCase? That’s what I want students to do in Pharo (but you’ve reminded me I need to make that more explicit in our track docs). The reasoning is that the separate test case then gets uploaded as part of their submission for review.

@NobbZ
Member

NobbZ commented Apr 5, 2019

Most of us are using file-dependent languages, and it's much simpler for students to just add a single function/test description to the existing file than to create a completely new file, which has to obey naming rules to be found by the test runner, while also including the boilerplate code to set up the testing library and/or the before/after hooks for the tests, and so on…

@NobbZ
Member

NobbZ commented Apr 5, 2019

Which means that if they have added tests, they will never get new tests.

This is true regardless of whether you have stubs or not.

@iHiD iHiD transferred this issue from exercism/exercism Jun 26, 2019
@wolf99

wolf99 commented Jun 26, 2019

The C track does something similar to what @Smarticles101 described for the Java track. Creating files is a key part of learning how to C good. The test file on the track makes it evident what file/filename is required to be implemented.

I understand that this might be different in different languages. The fact that this difference exists should show that a blanket mandate that all language tracks provide files in a given way could reduce the value of those tracks for which it is not needed.

An alternative might be to expose some metric of how much students struggle with this on any given track, and thus handle it per track?

@ErikSchierboom
Member

Creating files is a key part of learning how to C good. The test file on the track makes it evident what file/filename is required to be implemented.

I don't see why the C track is different from other tracks to be honest. Creating a file is a key part of using any programming language that uses source files (which most do). However, Exercism aims to teach fluency in a language, not how to use an IDE or build system. I understand that creating files is an important skill to have, but I don't think it is something Exercism should be teaching (or at least not for the vast majority of exercises, maybe only for the later ones). An alternative take on this is that we are teaching people who already know how to program to become fluent! That makes it extremely likely that our students already know how to create a file, and we are then forcing a repetitive action on them that doesn't teach them anything new.

@sshine

sshine commented Jun 27, 2019

I don't see why the C track is different from other tracks to be honest.

This.

I'm not a regular contributor to the Perl track, but we use it (and Exercism Teams) at work as part of training new hires, since most come without a Perl background. Some Perl track contributors prefer that there are no stubs for approximately the same reason that @wolf99 gives, and it is a valid reason. But I've witnessed how it affected our newcomers, and I don't think the gain is greater than the annoyance.

(Sample bias: I've tried this with 5 hires, and they were all well-versed in one or more other languages, but they generally had problems figuring out what to put in the file, since there are no header files in Perl. Maybe the C track doesn't have that problem. Edit: And maybe it's hard to know because of missing feedback mechanisms at that particular point.)

My strong opinion is that the positives outweigh the negatives.

So the issue is not as much "It's more useful on track X" as it is "The maintainers of track X have autonomy to decide". The Exercism Project generally gives a lot of freedom to track maintainers, and I believe this is a strong motivator when you don't get paid.

So if you can't convince everyone that the positives outweigh the negatives, should there be a kind of vote on global policies and be done with it? Or, in the case of being able to predict the expected filename, can we at least agree that even if the file is missing to begin with, a solution's files must be named predictably? Just as "solution_pattern": "example.*[.]hs" occurs in config.json, we could have another key that guides the exercism CLI towards warning students that the proper file(s) were not submitted (a sketch below illustrates one possible shape). Edit 2: Or some other way to see which files were touched since the exercise was downloaded.
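
As a sketch of what such a CLI-side warning could look like (entirely hypothetical: the function, the pattern argument, and the idea of a per-exercise config key are assumptions; only the solution_pattern key quoted above is an existing key, and this is not part of the actual exercism CLI):

```typescript
// Hypothetical check: warn when none of the files about to be submitted
// match an expected solution pattern (assumed to come from some
// per-exercise config key, in the spirit of solution_pattern above).
function warnIfSolutionMissing(submitted: string[], solutionPattern: string): void {
  const pattern = new RegExp(solutionPattern);
  if (!submitted.some((file) => pattern.test(file))) {
    console.warn(
      `None of the submitted files match ${solutionPattern}; ` +
        "did you forget to include your solution file?"
    );
  }
}

// e.g. warnIfSolutionMissing(["package.yaml"], "src/.*[.]hs")
```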

(A side note on auto-mentoring and file prediction: The Haskell track hasn't got an auto-mentoring tool yet, but one problem that will eventually occur is that iterations that use external libraries are rarely accompanied by a package.yaml with the proper library inclusion. So I foresee that the problem extends to guiding the student towards submitting all the files that are technically necessary. Maybe others have found a similar problem and a way to deal with it.)

@iHiD
Member Author

iHiD commented Jun 27, 2019

The Exercism Project generally gives a lot of freedom to track maintainers, and I believe this is a strong motivator when you don't get paid.
So if you can't convince everyone that the positives outweigh the negatives, should there be a kind of vote on global policies and be done with it?

If there's a question where there is no consensus (e.g. the name vs description debate on problem-specifications right now, and seemingly this one), and a decision needs making, then the leadership team will make a decision considering what everyone has said, along with our wider knowledge and opinions of Exercism. However, as both Katrina and I have had a really busy couple of months, that hasn't happened (making a decision where lots of people disagree needs time and thought to get right), so there are a couple of issues like this that are lagging.

With an issue like this, where I've specified a strong opinion up front and there is no overwhelming disagreement, it would probably take a new "con" that I've not already covered in the introduction for me to decide not to move the policy forward.

Fundamentally, it would take a lot for us to override a strong consensus by the maintainers, but if there's not one, then we'll just make the decision that we feel leads to the best experience for Exercism's users, with the least burden for the maintainers.

@petertseng
Member

The summary of the below comment:

  • I wanted to make a brief comment on the meta-issue of how decisions are made (maybe this is worth splitting off into its own discussion), so this comment adds no additional information regarding stubs
  • It reads to me like leadership would prefer to exercise their authority as little as possible.
  • I discuss situations where the benefits of a choice naturally encourage adoption without having to invoke authority, and some situations where we don't see that happen in this organisation.

Given the above observations that:

  • it takes time and thought to make decisions on issues that lack consensus
  • time and thought of the leadership team is finite by nature

I inferred that the leadership would prefer to invoke their authority only when necessary (of course, this is only an inference, worth what you paid for it).

One sort of situation where it is possible to minimise the burden on the leadership is when:

  • a certain choice has a natural benefit attached to it such as "If the track's maintainers make choice X, the maintainers' lives become concretely easier in Y way"
  • AND the alternatives don't cause a worse experience for students and mentors.

In these situations, no consensus is necessary; maintainers that choose not to make choice X presumably have judged that they are willing to accept not having Y benefit, and no harm is done.

A common pattern in recent choices that lie outside the above category is: A choice made by maintainers may affect students and/or mentors. In these situations the natural alignment of incentives doesn't exist. Since maintainers are not necessarily students nor mentors, they aren't personally feeling the disadvantages of not making a certain choice. This is when the aforementioned wider knowledge comes in.

I don't know if it's possible to create more natural alignment of incentives, but if it were then more decisions would be easier. So I suppose the best I can do for now is to encourage looking for situations where it is possible to better align the incentives.

@iHiD
Member Author

iHiD commented Jun 28, 2019

Thanks @petertseng. That's helpful. I have one point to add clarity to. There are roughly two different areas of Exercism:

  • Open source side: This includes tracks, problem-specs, analyzers, etc.
  • Product side: Website, and the interactions around that with mentors/students etc.

(These are badly named, as both are open-source, and both are really about product, but hopefully the distinction is clear enough.)

We have a very firm grip on the product side (e.g. we don't generally accept PRs for the website, we say "this is the direction we're taking this feature", etc.). That's because product work is best done by specialist individuals or tightly formed teams, and also because it often requires full-time effort.

In contrast, we aim to have a light touch on the "open source side". We are blessed with a variety of different people maintaining the majority of Exercism's code, and that variety of opinions tends to mean that everything self-regulates. In fact, the maintainers generally know better than the leadership team what is actually needed. When people cannot self-regulate, what we really need is someone to say "Katrina, Jeremy - please can you make a decision on this", and then we'll do that.

This issue is slightly different, as I wrote it with a firm opinion, asking for anything I might not have considered. It's also really a product question: "is this better/worse for students?". The main reason I put it up for discussion, rather than as an announcement of something we're doing, was that it impacts the maintainers' time, as they will need to create the stubs.

I think the issue here is that I should have made a decision a while back and closed the issue, but I've had a very intense two months and got behind on things, so that didn't happen. I will rectify this in my next comment.

@iHiD
Member Author

iHiD commented Jun 28, 2019

So thank you all for pitching in your thoughts 💙

To conclude the discussion, I don't believe anyone has suggested anything new that wasn't in the OP. I agree that there is definite value in learning to create files, but I don't feel that it is something Exercism should need to teach. The moment someone works on a project in the "real world" they'll learn how to create files if they don't already know, and it won't be hard for them to work out. If we've tooled them up with everything but that skill, while making exercises less annoying to solve as they learn, I'm fine with that trade-off.

So, I'm going to move forward with this proposal and ask all tracks to add stub files to their exercises. When working through the Track Anatomy project, tracks can decide whether they want to provide content within those stubs, or just the empty files.

I'll work out with Katrina how best to communicate this to everyone.
