Decide on track curriculum #1099

Closed

ilya-khadykin opened this issue Nov 10, 2017 · 11 comments

@ilya-khadykin (Contributor) commented Nov 10, 2017

We have to decide on the track curriculum and make changes in config.json accordingly (for example, reorder exercises by their difficulty, since they are served in order, as reported in #891 (comment)).

Moreover, it would be awesome to have a hints.md for each exercise to explain the concepts that are helpful for that exercise.

TODO:

  • decide on a list of core exercises
  • decide what unlocks what (set unlocked_by accordingly)
  • reorder exercises in config.json
  • add hints if necessary for core exercises

Hopefully I'll have some time to work on this over the next few weekends.
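
A minimal sketch of what the reordering step could look like, assuming config.json keeps an "exercises" list whose entries carry a numeric "difficulty" field (the field names are assumptions to check against the track's actual schema):

```python
# Minimal sketch of the reordering step; assumes config.json has an
# "exercises" list whose entries carry a numeric "difficulty" field.
import json

with open("config.json") as f:
    config = json.load(f)

# sorted() is stable, so exercises sharing a difficulty keep their relative order.
config["exercises"] = sorted(config["exercises"],
                             key=lambda exercise: exercise.get("difficulty", 0))

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
    f.write("\n")
```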

@cmccandless (Contributor)

> decide on a list of core exercises

I'm not sure what the original intent behind core exercises was, or how other tracks are using them. If someone can provide a good argument for designating a subset of our <100 exercises as core, I'm not opposed to it. I just don't see what it adds to the track.

> reorder exercises in config.json

I'd also like to add "set unlocked_by for related exercises" to this.

> add hints if necessary for core exercises

Not a bad idea, particularly for the first exercise in the track that introduces a new Python-specific concept (magic methods, context managers, etc.). It would be best to do this after the reorder.

In fact, intentional or not, the order in which these tasks were presented is probably the order in which they should be completed.

@N-Parsons (Contributor)

> I'm not sure what the original intent behind core exercises was, or how other tracks are using them.

I think that in Nextercism, there will be core exercises that unlock additional exercises. The aim of this is for the core to cover all of the key topics and concepts in the language, and then to have each core exercise unlock a set of exercises that are in the same area. For example, if isogram were core, it would unlock pangram, acronym, etc., since they are all about string manipulation.
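
In config.json terms, that presumably translates into the "core" and "unlocked_by" fields; a rough sketch of the isogram example (the grouping, and the assumption that entries carry "slug", "core" and "unlocked_by" keys, are illustrative rather than anything agreed):

```python
# Rough sketch of the isogram example expressed as "core"/"unlocked_by" fields;
# the grouping below is illustrative only, not an agreed curriculum.
import json

UNLOCKED_BY = {
    "pangram": "isogram",
    "acronym": "isogram",
}

with open("config.json") as f:
    config = json.load(f)

core_slugs = set(UNLOCKED_BY.values())
for exercise in config["exercises"]:
    slug = exercise["slug"]
    if slug in core_slugs:
        exercise["core"] = True
    elif slug in UNLOCKED_BY:
        # Non-core exercises point at the core exercise that unlocks them.
        exercise["unlocked_by"] = UNLOCKED_BY[slug]

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
    f.write("\n")
```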

@ilya-khadykin (Contributor, Author) commented Nov 12, 2017

@N-Parsons yes, you are absolutely right.
You can take a look at the prototype here: https://v2.exercism.io/

> I'd also like to add "set unlocked_by for related exercises" to this.

Added to the list of tasks.

Such an organization can also help us focus on the main exercises.

@cmccandless (Contributor)

Well, if it's used in Nextercism, then yes, we will want to determine core exercises. I had thought it was just a legacy attribute.

@AtelesPaniscus (Contributor)

I have been asked for input on this topic, as I was the one who complained about the parallel-letter-frequency exercise.

I am not a contributor (to this track), only a student. There are three non-trivial views I would like to express. Apologies in advance if they do not seem relevant to you.

Who Cares if a Level 5 exercise occurs between two Level 1 exercises?

Well, who cares if this intimidates beginners enough that they drop out?

Well, who cares if this encourages otherwise diligent students to find ways to skip an exercise? (Is there a 'proper' way to skip an exercise?)

Well, who cares if this encourages till-now-honest students to submit a blank solution (not an incomplete one, but one not even attempted) so they can plagiarise someone else's?

But most of all, who cares if students do this and base their solution on a broken solution, under the delusion that it must be good because it passes all the tests?

Hopefully, everyone on the project cares.

How to Detect if an Exercise is Out of Sequence?

I assume that exercises are supposed to get more difficult gradually. An easy exercise at the end of config.json is a waste of time.

Even if contributors followed the guidelines diligently, adjusted the 'canonical' difficulty level to something appropriate for their track, and configlet gave no warnings to suggest anything amiss with the order of entries in config.json, you would still have a mechanism that is only as good as the assigned level of difficulty. Changing the level of difficulty does not make an exercise easier or more difficult.

It's the GIGO principle.

The Python track has lots of students. Some statistical analysis of submissions might be meaningful.

A simple analysis, ordered by config.json, of how many students attempt each exercise each week should show a gradual decline due to natural wastage. A large drop between two consecutive exercises might be worth a closer look.

Likewise, an analysis of the average number of iterations per student might suggest exercises that many students don't take in their stride, as might a count of how many first (and only) submissions do not pass the tests.

If such analyses don't suggest that some exercises are giving students more trouble than others, then there is no good reason to change anything.

It's the don't-fix-it-until-you-know-it-is-broken (and can test the fix) principle.
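
A sketch of what the weekly-attempts check could look like; the exercise counts below are invented, and the data source and the 50% threshold are assumptions for illustration only:

```python
# Sketch of the "large drop between consecutive exercises" check.
# The counts are invented; real data would have to come from the Exercism
# maintainers, and the 50% threshold is an arbitrary starting point.
weekly_attempts = [
    ("hello-world", 500),
    ("two-fer", 430),
    ("hamming", 410),
    ("parallel-letter-frequency", 120),
]

DROP_THRESHOLD = 0.5  # flag a loss of more than half of the previous exercise's attempts

for (prev_slug, prev_count), (slug, count) in zip(weekly_attempts, weekly_attempts[1:]):
    if count < prev_count * DROP_THRESHOLD:
        print(f"Worth a closer look: {prev_slug} -> {slug} "
              f"({prev_count} -> {count} attempts)")
```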

Testing the Untestable?

The parallel-letter-frequency test does not test for parallel execution. I submitted a 'test-passing' solution that makes no attempt to execute anything in parallel. More than one of the solutions I examined did likewise.

I am not criticising the contributor. I am asking why this test case is in the Python track at all. What purpose does it serve?

I am not familiar with Python's concurrent processing modules, but I do know something of the issues involved. Of the other solutions I examined, I judged more than half to be broken.

They are not good examples, except perhaps of how not to do things, but how is the Level 1 student to know?

The pyunit test framework does not, as far as I know, have any support for testing concurrent execution. I thought there might be a framework that does, but I could not find one.

The tests do not run for long enough to find those broken solutions that update a single counter without any protection against 'lost updates'.

The tests do not measure execution times with and without parallel execution, so they don't find those broken solutions whose protection against 'lost updates' effectively serialises computation across multiple execution threads.

The tests do not even check that the solution imports an appropriate Python module.
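
For what it's worth, the 'lost update' failure mode itself is easy to sketch outside the test suite; a toy example, unrelated to any submitted solution, with two threads bumping an unprotected counter:

```python
# Toy illustration of a "lost update": two threads incrementing a shared
# counter without a lock. "count += 1" is a read-modify-write, not atomic,
# so increments can be lost; whether a given run shows the race depends on
# the interpreter version and timing, which is why short test runs rarely catch it.
import threading

count = 0

def bump(times):
    global count
    for _ in range(times):
        count += 1  # racy: load, add, store

threads = [threading.Thread(target=bump, args=(1_000_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # may be less than 2_000_000 when updates are lost
```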

@N-Parsons (Contributor)

Thanks for raising this @AtelesPaniscus, I've created #1106 for discussion of parallel-letter-frequency in particular.

@cmccandless (Contributor)

@AtelesPaniscus some good points there. I'll likely add more to my response later after giving it more thought, but for now I'd just like to point out that there is in fact a proper method for skipping exercises.

exercism skip <track> <exercise>

However, it goes without saying that if you're using this command often, then you might be at your challenge point in the track.

@AtelesPaniscus (Contributor)

@m-a-ge Good luck with the todo list this weekend.

I agree that isogram, pangram and anagram are similar, but I'm a little surprised they are considered 'string manipulation' exercises...

For me, they were about learning (so that I will never forget) that for ... in ... loops are not Pythonesque; that map and filter with lambdas are deprecated in favour of list comprehensions; that list comprehensions are generally not as efficient as generator expressions (but, alas, Exercism exercises do not like iterable results); all the while not forgetting that any and all are good for cutting a long sentence short.
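
A throwaway illustration of that contrast, using an arbitrary string rather than any exercise's input:

```python
# Throwaway illustration of the comprehension / generator / any-and-all point,
# not a solution to isogram, pangram or anagram.
text = "the quick brown fox jumps over the lazy dog"

# List comprehension: builds the whole list in memory before len() sees it.
vowels_list = [c for c in text if c in "aeiou"]
print(len(vowels_list))

# Generator expression: same logic, but values are produced lazily as sum() asks for them.
print(sum(1 for c in text if c in "aeiou"))

# any() short-circuits on the first hit, which is the "cutting a long sentence short" point.
print(any(c.isdigit() for c in text))
```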

So, my question is not which exercises are about comprehensions and generators, but whether these exercises will come before or after the string manipulation exercises.

Looking around the solutions provided by other students, one might be forgiven for thinking these exercises are about avoiding string manipulation, using a "two out of three ain't bad" combo of set, collections.Counter and re.

It seems these exercises are about (premature) optimisation. I am not impressed. My hints would be something along the lines of:

  • isogram: The alphabet is a 26-letter isogram. Which letter would you add next to get a 27-letter isogram?
  • pangram: Suppose the input were the complete works of Shakespeare with all the full stops removed. How would you describe an algorithm that went all the way to the end of the last scene of the final act of the last play to prove the input is a pangram?
  • anagram: Python does not have 'pure' functions, and it is an interpreted language. Do you remember reading somewhere that this means it is so good at loop-invariant code motion that you don't need to consider doing this yourself?

Overall hint: There is more to optimisation than big O notation.

Enjoy.

@ilya-khadykin (Contributor, Author) commented Dec 25, 2017

@AtelesPaniscus thanks a lot for your input, and sorry for the late reply.
I guess we will come up with something to give students additional information as a hint for each exercise.

@cmccandless (Contributor) commented Feb 26, 2018
