Thank you very much for your implementation. It's a well-organized and clear one.
I would like to clarify some doubts, if possible.
1 - From assist2009.py, in preprocess() you perform the following operation: df = df.drop_duplicates(subset=["order_id", "skill_name"])
Why do you drop these duplicates? We can have multiple interactions with repeated order_id and skill_name/skill_id attributes, right?
2 - From what I've seen in previous KT implementations, the input data (x) consists of a merge between the skill_id and the correct attributes (this would create a new synthetic feature). In turn, the data for prediction (y) will just be the correct labels. When analyzing your implementation, I can't understand how (or when) you make that merge (for both skill_id and correct) in the input data x. Can you clarify this for me?
3 - There is a specific part of the code that I am having some difficulty understanding. These are two lines from dkt.py:
```python
y = self(q.long(), r.long())
y = (y * one_hot(qshft.long(), self.num_q)).sum(-1)
```
Can you enlighten me on what is done here?
Thanks in advance!
Regards,
Bernardo
The preprocessing method originally comes from the DKVMN paper; you can check it in the reference list. Additionally, "order_id" is the id of a single data sample. That is why I dropped the duplicates.
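A minimal sketch of what that deduplication does, using a toy DataFrame with the same column names (the values are hypothetical, not from the real ASSISTments 2009 dataset):

```python
import pandas as pd

# Toy data mimicking the relevant ASSISTments 2009 columns.
df = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "skill_name": ["add", "add", "add", "sub"],
    "correct": [1, 0, 1, 0],
})

# Keep only the first row for each (order_id, skill_name) pair;
# the second (1, "add") row is dropped as a duplicate sample.
df = df.drop_duplicates(subset=["order_id", "skill_name"])
print(len(df))  # 3
```

So rows sharing the same order_id and skill_name are treated as one data sample, which is consistent with order_id identifying a single sample.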
The input data "x" is the sequence of the current student's previous interactions. You can check the definition of an interaction as "(question, response)" in the original DKT paper.
Also, this part is explained in the original DKT paper. The intent of those two lines is to train the DKT model using only the response to the next question: the model outputs a predicted correctness probability for every question, and the one-hot mask built from the shifted question ids (qshft) extracts the prediction for the question the student actually answers next.
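A small self-contained sketch of that masking step, with hypothetical model outputs standing in for `self(q.long(), r.long())`:

```python
import torch
from torch.nn.functional import one_hot

num_q = 4  # hypothetical number of questions

# y: predicted correctness probability per question at each step,
# shape (batch=1, seq_len=2, num_q); values chosen by hand.
y = torch.tensor([[[0.25, 0.75, 0.5, 0.125],
                   [0.25, 0.5, 0.875, 0.625]]])

# qshft: id of the *next* question at each step.
qshft = torch.tensor([[1, 2]])

# The one-hot mask zeroes every entry except the next question's;
# summing over the last dim leaves that single scalar per step.
picked = (y * one_hot(qshft.long(), num_q)).sum(-1)
print(picked.tolist())  # [[0.75, 0.875]]
```

The result has shape (batch, seq_len) and can be compared directly against the next-step response labels in the loss.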