For background, see the README for all the theory.
**Current questions on the stats theory**

**Re. the model for realtime guesses:**

- What kind of model can it be? An HMM (hidden Markov model)? (See the sketch after this list.)
- When the realtime guess is wrong and the user fixes it, how long should we respect that fix? Can we use some indicators that the fix is no longer valid?
  - Simple approach: after some time, ask the user whether their Org clock is still pointing at the correct activity, and increase the wait time between each repeat of the question.
  - Most users who correct the clock will probably also use `org-clock-out` when done (whereupon we take over), so it's not a major problem.
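To make the HMM idea concrete, here is a minimal sketch of one forward-filtering step, assuming the hidden state is the current activity and the observation is the `buffer_kind` under focus. All names and probability tables below are invented placeholders; in practice they would be estimated from the collected data.

```python
import numpy as np

# Hypothetical state and observation spaces.
ACTIVITIES = ["work", "leisure", "chores"]     # hidden states
BUFFER_KINDS = ["code", "web", "mail", "org"]  # observed emissions

# Placeholder parameters (rows sum to 1).
TRANS = np.array([[0.90, 0.05, 0.05],          # P(next activity | activity)
                  [0.10, 0.85, 0.05],
                  [0.10, 0.10, 0.80]])
EMIT = np.array([[0.60, 0.20, 0.10, 0.10],     # P(buffer_kind | activity)
                 [0.05, 0.75, 0.10, 0.10],
                 [0.10, 0.30, 0.40, 0.20]])

def filter_step(belief, observed_kind):
    """One forward step: propagate the belief through the transition
    matrix, then reweight by the likelihood of the observed buffer_kind."""
    predicted = TRANS.T @ belief
    posterior = predicted * EMIT[:, BUFFER_KINDS.index(observed_kind)]
    return posterior / posterior.sum()

belief = np.full(len(ACTIVITIES), 1 / len(ACTIVITIES))  # uniform prior
belief = filter_step(belief, "web")  # e.g. a web buffer gains focus
print(dict(zip(ACTIVITIES, belief.round(3))))
```

The realtime guess would then be `ACTIVITIES[belief.argmax()]`, updated on every buffer change.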
**Re. the model for classifying the last 24-48 hours all at once:**

- Can the realtime guesses from the other model be used to inform this model?
  - Probably not directly, but if the user corrects it on its mistakes, this produces extra `activity_verified` datapoints. Including that data makes it a non-random sample with regard to time, but I don't think we'll use it in a way that needs it to be random.
- Can this model use the `activity_verified` data at all, since they're only about single instants ("I'm doing X right now, this nanosecond")?
  - Probably. Assume the verification is good at least until the next time `buffer_kind` changes.
  - Should we assume that the verification stays good past that point, with an exponentially decaying effect? Yes, if we use this data at all; see the issue of minimizing Org-clock lines (#issuecomment-903853120). I guess the parameter of this exponential function would be determined by how large we want the chunks in our agenda log to be. (A sketch follows this list.)
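A minimal sketch of that decay, as I currently picture it. The function name and the default half-life are made up; the half-life is the parameter mentioned above, to be tuned against how large we want agenda-log chunks to be:

```python
import math

def verification_weight(seconds_past_bufkind_change, half_life=1800):
    """How much to still trust an activity_verified datapoint.
    Full trust up to the next buffer_kind change, exponential decay
    afterwards. half_life (a made-up 30 min here) controls how large
    the resulting chunks in the agenda log tend to come out."""
    if seconds_past_bufkind_change <= 0:
        return 1.0
    return math.exp(-math.log(2) * seconds_past_bufkind_change / half_life)
```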
Let's say it's Day 2 and we run a model overnight that classifies all of Day 1 and Day 2. The user wakes up on Day 3 and sees the results in their agenda log (or whatever visualization they prefer). At this point, Days 1 and 2 are "locked" from the VA's perspective: it will never attempt to re-classify them, but they're still free for the user to modify.

- Should we consider the locked days as all "verified", like observed data? Or as still unverified (invisible to future runs of the model), letting the user verify blocks if they feel like it (signing off on them as "yes, this is definitely what happened during that time")?
- When it comes to verified time chunks from the past, how do we use them exactly? (We can insert the information as an `activity_verified` value attached to every buffer focus during these chunks; a sketch follows below.)
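A minimal sketch of that insertion, assuming focus events and verified chunks are plain tuples of timestamps; every name here is hypothetical:

```python
def attach_verified(focus_events, verified_chunks):
    """Annotate each buffer-focus event with an activity_verified value
    wherever it overlaps a chunk the user has signed off on.

    focus_events:    chronological (start, end, buffer) tuples
    verified_chunks: (start, end, activity) tuples verified by the user
    """
    annotated = []
    for start, end, buf in focus_events:
        verified = None
        for c_start, c_end, activity in verified_chunks:
            if start < c_end and end > c_start:  # intervals overlap
                verified = activity
                break
        annotated.append((start, end, buf, verified))
    return annotated
```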
**General questions:**

- Clarify the causal relation between `activity` and `time.since.bufkind.change`. (Use an exponential decay.)
- Is there any causal relation between `buffer_kind` and `time.since.bufkind.change`?
  - This is like asking whether there's a causal relation between `buffer` and another constructed variable `time.since.buf.change`. Tentatively, I think not.
- How should `activity_verified`'s missingness process `missingness_verification` be modeled?
  - Perhaps we could just eliminate it, together with the whole idea of asking at random times, as I believe we don't need the info to be gathered at random times.
Recent thought: the past-classification model could be implemented by carving up the last 24-48 hours into small time blocks. These blocks are most naturally delineated by the times of buffer change (not `buffer_kind` change, just buffer change!). So some blocks would last mere seconds, and a rare few could last hours. During such a block, all variables stay fixed; they only change when a buffer change occurs. You can assume these variables are correct the entire time within a block.

Of course, if you have a buffer under focus for 3 hours but are idle for 1h40 of that, then it was only under focus for 1h20. Perhaps we'll insert idle blocks into the dataset as a sort of pseudo-buffer, to reflect the fact that the user isn't focusing on any buffer.
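A minimal sketch of the carving under those assumptions: blocks delineated by buffer changes, with idle periods spliced in as an "idle" pseudo-buffer. Data shapes and names are hypothetical:

```python
def carve_blocks(buffer_changes, idle_spans, day_end):
    """buffer_changes: chronological (timestamp, buffer) focus-change events.
    idle_spans:       (start, end) periods with no user input.
    Returns (start, end, buffer) blocks covering the whole period, with
    idle time replaced by an "idle" pseudo-buffer."""
    events = sorted(buffer_changes)
    blocks = []
    for (t0, buf), (t1, _) in zip(events, events[1:] + [(day_end, None)]):
        blocks.append((t0, t1, buf))
    # Splice each idle span into whatever blocks it overlaps.
    for i0, i1 in sorted(idle_spans):
        spliced = []
        for b0, b1, buf in blocks:
            if buf != "idle" and i0 < b1 and i1 > b0:
                if b0 < i0:
                    spliced.append((b0, i0, buf))
                spliced.append((max(b0, i0), min(b1, i1), "idle"))
                if i1 < b1:
                    spliced.append((i1, b1, buf))
            else:
                spliced.append((b0, b1, buf))
        blocks = spliced
    return blocks
```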
In my experience, the user will oftentimes land in a situation where they are rapidly switching between buffers that belong to different `buffer_kind` categories. If we naively classify these without any regard to the fact of the rapid switching, we'll end up with Org clock lines looking something like this (an invented example):
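```org
* Write report
:LOGBOOK:
CLOCK: [2021-08-24 Tue 10:00]--[2021-08-24 Tue 10:02] =>  0:02
CLOCK: [2021-08-24 Tue 10:03]--[2021-08-24 Tue 10:04] =>  0:01
CLOCK: [2021-08-24 Tue 10:06]--[2021-08-24 Tue 10:07] =>  0:01
:END:
* Surf the web
:LOGBOOK:
CLOCK: [2021-08-24 Tue 10:02]--[2021-08-24 Tue 10:03] =>  0:01
CLOCK: [2021-08-24 Tue 10:04]--[2021-08-24 Tue 10:06] =>  0:02
:END:
```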
That is not nice, and it clutters up the agenda log too. If we don't have a model component that will handle rapid switching elegantly, I'm inclined to just carve up the day into 30-minute blocks and be content with classifying each as the average of what happened within it. Such a model has its own complexities: many variables are no longer fixed within time blocks, and we have to consider what's a good way of weighting the averages of each variable. (A sketch follows below.)
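As one hypothetical way to produce such weights, here is a sketch that computes the fraction of a fixed window spent in each `buffer_kind`, given blocks like those from `carve_blocks` above; nothing here is settled:

```python
from collections import defaultdict

def kind_weights(blocks, win_start, win_end):
    """Fraction of a fixed window (e.g. 30 minutes) spent in each
    buffer_kind, given (start, end, buffer_kind) blocks. The fractions
    can serve as weights when classifying the window as a whole."""
    seconds = defaultdict(float)
    for b0, b1, kind in blocks:
        overlap = min(b1, win_end) - max(b0, win_start)
        if overlap > 0:
            seconds[kind] += overlap
    total = sum(seconds.values())
    return {kind: t / total for kind, t in seconds.items()} if total else {}
```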