Task Efficiency
Ideas about how we can compute task efficiency.
The basic idea is that the time to solve a task consists of two portions: the efficient time (mostly thinking about how to solve it) and the inefficient time (building the solution you already know is correct; it just contains many blocks, so it takes a while). This is, of course, a simplified model.
In other words, for fixed difficulty, some tasks take more time to solve, and these are the less efficient ones. The high-level formula is

solving time = difficulty + inefficiency

and if we define efficiency = -inefficiency, we get:

efficiency = difficulty - solving time
By difficulty we mean the skill level which "matches the task", or in other words, the skill which maximizes the probability of being in the state of flow while solving the task (solving it after some slight struggle). We don't know the true difficulty, but we can approximate it using e.g. the ELO model and the self-reported perceived difficulty ("too easy", "just right", "too difficult"). By solving time we mean the time needed to solve the task, on a logarithmic scale. In a non-personalized model, we can estimate the solving time by the median of all (log) solving times of the given task.
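As a concrete illustration, here is a minimal sketch of the non-personalized time estimate: the median of the log-transformed solving times per task. The data layout (a mapping from task id to a list of solving times in seconds) and all names are assumptions for illustration, not the actual data model.

```python
import math
from statistics import median

def log_solving_time(times_sec):
    """Non-personalized time estimate for one task:
    the median of the log-transformed solving times."""
    return median(math.log(t) for t in times_sec)

# Hypothetical data: task id -> observed solving times in seconds.
solving_times = {
    "task-1": [30, 45, 60, 52, 38],
    "task-2": [120, 300, 180, 240],
}
time_estimates = {task: log_solving_time(ts)
                  for task, ts in solving_times.items()}
print(time_estimates)
```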
To make the high-level formula work, we need to bring the difficulty and the solving time to the same scale. The simplest approach for a non-personalized model is to normalize these quantities by subtracting their mean and dividing by their standard deviation:
efficiency = normalized difficulty - normalized time
This results in efficiency being a random variable with mean 0 and standard deviation √2 (assuming the normalized difficulty and time are uncorrelated, the variance of their difference is 1 + 1 = 2). For scoring, a sigmoid may be applied to bound the efficiency score to the (0, 1) interval.
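Putting the pieces together, a minimal sketch of the non-personalized efficiency score might look as follows. The per-task `difficulty` and `log_time` inputs are assumed to come from elsewhere (e.g. an ELO fit and the median estimate above); the concrete numbers are made up.

```python
import numpy as np

def efficiency_scores(difficulty, log_time):
    """Non-personalized efficiency: z-score both quantities,
    take their difference, and squash it to (0, 1) with a sigmoid.

    difficulty, log_time: 1-D arrays with one entry per task.
    """
    norm_difficulty = (difficulty - difficulty.mean()) / difficulty.std()
    norm_time = (log_time - log_time.mean()) / log_time.std()
    efficiency = norm_difficulty - norm_time  # mean 0, std sqrt(2) if uncorrelated
    return 1 / (1 + np.exp(-efficiency))      # sigmoid -> (0, 1)

difficulty = np.array([-1.2, 0.3, 0.8, 1.5])  # hypothetical ELO difficulties
log_time = np.array([3.2, 3.9, 4.1, 5.6])     # hypothetical median log-times
print(efficiency_scores(difficulty, log_time))
```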
This non-personalized model suffers from several problems. One of them, of course, is that it's non-personalized, while the efficiency may clearly differ between students. Another problem is its offline nature (computing the mean and standard deviation of all solving times). Also, such a simple estimate of time is not population invariant: in particular, the estimate will be lower than the true median for more difficult tasks, because only already skilled users are presented with these difficult tasks. For these reasons, we might want to go a step further (or rather two steps further) and use an online personalized estimate of the efficiency:
efficiency = personalized difficulty - personalized time
Personalized difficulty is captured by the ELO model by comparing the task difficulty and student skill vectors (it's the opposite of flow). Similarly, there are models for estimating solving times for a given student (see e.g. the papers about Problem Solving Tutor).
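A minimal sketch of the personalized variant, under strong simplifying assumptions: a scalar skill per student, a scalar difficulty per task, and an external time model supplying a predicted log-time for the student-task pair. The scaling choice (z-scoring the time against population statistics) and all parameter names are illustrative assumptions, not the models from the papers.

```python
import math

def personalized_efficiency(skill, difficulty, predicted_log_time,
                            time_mean, time_std):
    """Sketch: personalized efficiency for one student-task pair.

    skill, difficulty: scalar Elo-style estimates on the same scale.
    predicted_log_time: log-time predicted for this student by some
        time model (e.g. the discrimination model sketched below).
    time_mean, time_std: population statistics used to put the time
        on a scale comparable with the difficulty term.
    """
    personal_difficulty = difficulty - skill            # how hard for *this* student
    personal_time = (predicted_log_time - time_mean) / time_std
    efficiency = personal_difficulty - personal_time
    return 1 / (1 + math.exp(-efficiency))              # bound to (0, 1)
```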
- The logistic model for task solving times has a discrimination parameter (slope, "a") which possibly already captures a concept similar to the efficiency described in this article: small discrimination means that the task takes nearly the same time for low-skilled and high-skilled students, i.e., it's more clicking than thinking (see the sketch after this list).
- Another approach: using logs of clicks: more frequent clicking ~ less thinking. It is possible to explore this idea (and compare it with the discrimination approach) using data from Tutor (Robotanist). [Radek]
- Rather than being another criterion for recommendation, the efficiency itself might already be part of the flow, and the flow should be the single metric for "what is best for the user" (if I am in the state of flow, the task I am solving is surely efficient). In this view, flow should be (nearly) the only metric to optimize against ("nearly" because of the exploration vs. exploitation tradeoff: we might want to do not only what is good for the single user, but also what is "good for the system", which is in turn good for all users).
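To make the discrimination idea from the first bullet concrete, here is a minimal sketch: for one task, fit log(time) ≈ b + a·skill by least squares over the students who solved it. A slope near zero means the solving time barely depends on skill, i.e., the task is dominated by mechanical work. The skill estimates are assumed to come from elsewhere (e.g. the ELO model); the data below are made up.

```python
import numpy as np

def fit_time_model(skills, log_times):
    """Fit log(time) ~ b + a * skill for one task by least squares.

    skills: skill estimates of the students who solved the task.
    log_times: their log-transformed solving times.
    Returns (a, b); `a` is the discrimination (slope).
    """
    A = np.column_stack([skills, np.ones_like(skills)])
    (a, b), *_ = np.linalg.lstsq(A, log_times, rcond=None)
    return a, b

skills = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
# A discriminating task (thinking-heavy): skilled students are much faster...
log_times_thinking = np.array([5.0, 4.4, 4.0, 3.5, 3.1])
# ...versus a flat one (mostly clicking): similar times for everyone.
log_times_clicking = np.array([4.1, 4.0, 4.05, 3.95, 4.0])

print(fit_time_model(skills, log_times_thinking))  # slope clearly negative
print(fit_time_model(skills, log_times_clicking))  # slope near zero
```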