Glossary

Nikola Luburić edited this page Nov 9, 2021 · 18 revisions

Here we maintain the ubiquitous language used in our research and Clean CaDET Tutor. We describe the general meaning of each concept and note the sources which introduce or extensively describe the term. We also note similar terms along with their significant source.

This page does not present a thorough overview of the terminology used in educational technologies (for that, see Pelánek (2021)) and instead focuses only on the terminology relevant to our work.

Intelligent Tutoring System

An intelligent tutoring system (ITS) is an AI-based computer system that provides personalized instruction and feedback to learners without requiring intervention from a human teacher (Sleeman and Brown, 1982). An ITS seeks to replicate the demonstrated benefits of one-to-one, personalized tutoring (as opposed to "one-size-fits-all" instruction) to provide access to high-quality education to every learner (VanLehn, 2011).

Similar terms:

  • Adaptive E-learning Environment (Shute and Towle, 2003).

Knowledge-Learning-Instruction Framework

The knowledge-learning-instruction (KLI) framework (Koedinger et al., 2012) specifies three taxonomies - kinds of knowledge, kinds of learning processes, and kinds of instructional choices. It also defines the dependencies between them - how kinds of knowledge constrain learning processes and how these processes constrain optimal instructional choices for producing robust learning.

The following image illustrates the main elements of the KLI framework.

Instructional Events (IEs)

IEs are observable variations in the learning environment that facilitate learning by introducing new knowledge or providing a different perspective on familiar knowledge. Examples of IEs include: a statement of fact, the definition of a rule, an example of a concept, a case study, an illustrative metaphor. IEs can be presented in many formats, like text, audio, video, image, or animation. They can vary in the time required to process them, from a brief paragraph of text to a comprehensive case study description.

Similar terms:

  • Non-practice learning object (Churchill, 2007).

Assessment Events (AEs)

AEs are observable variations in the learning environment that require a submission from the learner. The AE evaluates the Submission to infer the learner's mastery of the related knowledge components. Examples of AEs include: multiple-choice questions, multiple-response questions, math problems, refactoring challenges. AEs can be instructional when they provide feedback for submissions.

Similar terms:

  • Practice learning object (Churchill, 2007).

Submission

Each AE type defines the structure of the Submission the AE evaluates. The Submission contains all the information the AE requires to evaluate the learner's response. The Submission can be evaluated for correctness and fluency (i.e., how quickly the learner solved the task). For multiple-response questions, a Submission entails the set of marked responses (e.g., statements selected as true). For refactoring challenges, a Submission includes the source code.
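The per-AE-type Submission structures described above could be sketched as simple data classes. Note that these names and fields are illustrative assumptions, not the actual Clean CaDET Tutor types:

```python
from dataclasses import dataclass

# Hypothetical Submission structures for two AE types.
# Field names are assumptions for illustration only.

@dataclass
class MultiResponseSubmission:
    """Submission for a multiple-response question: the set of marked responses."""
    selected_response_ids: list[int]
    seconds_elapsed: float  # supports fluency evaluation alongside correctness

@dataclass
class RefactoringSubmission:
    """Submission for a refactoring challenge: the learner's source code."""
    source_code: str
    seconds_elapsed: float
```

Capturing the elapsed time in each Submission type is one way to let every AE evaluate fluency as well as correctness.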

Evaluation

Each AE type defines the structure of the Evaluation produced by processing a Submission. This includes the correctness of the Submission, any instructional feedback, and the correct solution. For multiple-response questions, this can include the set of correct responses and feedback for each response describing why it is or is not correct. For refactoring challenges, this can include a set of hints that highlight which areas of the code can be improved and a video lecture that showcases the expected refactoring, thereby transforming the AE into an IE worked example (Sweller, 2020).
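For a multiple-response question, producing an Evaluation from a Submission could be sketched as below. The function and field names are hypothetical, chosen only to mirror the elements listed above (correctness, correct solution, per-response feedback):

```python
from dataclasses import dataclass

# Hypothetical Evaluation structure for a multiple-response question AE.

@dataclass
class MultiResponseEvaluation:
    correct: bool
    correct_response_ids: list[int]       # the correct solution
    feedback: dict[int, str]              # per-response explanation of (in)correctness

def evaluate(selected: list[int], correct_ids: list[int],
             feedback: dict[int, str]) -> MultiResponseEvaluation:
    """Compare the marked responses against the correct set of responses."""
    return MultiResponseEvaluation(
        correct=sorted(selected) == sorted(correct_ids),
        correct_response_ids=correct_ids,
        feedback=feedback,
    )
```

Returning the feedback alongside the correctness verdict is what lets an AE act instructionally, as noted in the Assessment Events section.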

Learning Events (LEs)

LEs are unobservable processes that create changes in cognitive and brain states. The learner's existing knowledge influences LEs (e.g., learning an advanced rule is difficult or impossible when the foundational rules have not been mastered). LEs create or refine knowledge components.

Knowledge Components (KCs)

KCs define an acquired unit of cognitive function or mental structure inferred from performance on a set of related assessment events. The KLI framework defines the KC as a broad term for describing pieces of cognition or knowledge, including production rules, schemas, misconceptions, concepts, principles, facts, or skills.

KC Relationships

KCs vary in granularity and form a hierarchical structure (e.g., to master the broader skill of solving arithmetic equations, the learner needs to master the addition, subtraction, multiplication, and division KCs). Other relations among KCs exist, such as prerequisite relationships (Pelánek, 2021). Notably, the hierarchical relationship implies a prerequisite relationship - a top-level KC can only be mastered after its children have been mastered.

The image below illustrates a few KCs from the domain of clean code analysis and refactoring, where the yellow lines denote a hierarchical relationship and the blue line denotes a prerequisite. The presented refactoring skills assume that each learner achieved sufficient mastery in many KCs not presented in the sample (i.e., to learn how to create clean code, learners must know how to code). Within a hierarchy of KCs, a knowledge analysis for a particular course may focus only on a single level that lies just above the level at which novices have achieved mastery.
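The two kinds of KC relationships above can be sketched as a small graph. Since a hierarchical relationship implies a prerequisite relationship, both children and explicit prerequisites must be mastered before a KC itself is ready for mastery. The KC names below are illustrative, not taken from the actual Clean CaDET model:

```python
# Hypothetical KC relationship graph: hierarchy (parent -> children) and
# explicit prerequisites (kc -> prerequisite KCs).

children = {
    "arithmetic-equations": ["addition", "subtraction", "multiplication", "division"],
}
prerequisites = {
    "extract-method": ["long-method-detection"],  # assumed clean-code KCs
}

def ready_to_master(kc: str, mastered: set[str]) -> bool:
    """A KC can be mastered only once its children and prerequisites are mastered,
    reflecting that the hierarchical relation implies a prerequisite relation."""
    required = children.get(kc, []) + prerequisites.get(kc, [])
    return all(r in mastered for r in required)
```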

KC Learning Engineering

Learning engineers focus their effort on analyzing a target domain and decomposing it into its underlying KCs. The initial collection of KCs is likely incomplete and inefficient. To refine the set of KCs, learning engineers monitor the learners' mastery to identify KCs that are challenging to master and further refine them (e.g., by decomposing them into smaller KCs, discovering integrative KCs (Koedinger et al., 2012), or refining related IEs). Likewise, KCs that all learners quickly master might waste the learners' time due to their simplicity. Finally, learning engineers examine KCs and the related AEs to determine whether KC mastery can be correctly inferred or whether overspecialized KCs have developed (Koedinger et al., 2012).
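One simple heuristic implied by the refinement loop above is to flag KCs at both extremes of average learner mastery. This is a minimal sketch under assumed thresholds, not a method prescribed by the KLI framework:

```python
# Illustrative KC-refinement heuristic: flag KCs whose average mastery across
# learners stays low (candidates for decomposition or better IEs) and KCs that
# nearly all learners master (candidates for merging or removal).
# The 0.4 and 0.95 thresholds are assumptions for illustration.

def flag_kcs(mastery_by_kc: dict[str, list[float]],
             low: float = 0.4, high: float = 0.95) -> tuple[list[str], list[str]]:
    hard, trivial = [], []
    for kc, levels in mastery_by_kc.items():
        avg = sum(levels) / len(levels)
        if avg < low:
            hard.append(kc)        # challenging to master
        elif avg > high:
            trivial.append(kc)     # possibly wasting learners' time
    return hard, trivial
```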

KC Mastery

Given a set of KCs, each learner starts with a mastery level of 0 for each KC in the set. When a learner makes a submission for an AE, they are graded based on their performance (the correctness of the submission and possibly the time it took them to complete it). Based on the Evaluation, their current mastery level, and the difficulty of the AE, their mastery might increase. The maximum mastery level is 1 (i.e., 100%), and we consider a KC mastered when the learner's mastery level is at or above 0.9.
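A mastery update of the kind described above could be sketched minimally as follows. The specific update rule and the gain factor are assumptions for illustration; the actual model the tutor uses may differ entirely:

```python
# Minimal, illustrative mastery-update rule (NOT the tutor's actual model):
# a correct submission moves mastery toward 1 in proportion to AE difficulty,
# an incorrect submission decays it slightly. Mastery stays within [0, 1].

MASTERY_THRESHOLD = 0.9  # the 0.9 cutoff stated in this section

def update_mastery(mastery: float, correct: bool, difficulty: float) -> float:
    """difficulty in (0, 1]; harder AEs yield larger gains when solved correctly."""
    if correct:
        mastery += (1.0 - mastery) * 0.3 * difficulty  # 0.3 gain factor is assumed
    else:
        mastery *= 0.9
    return min(mastery, 1.0)

def is_mastered(mastery: float) -> bool:
    return mastery >= MASTERY_THRESHOLD
```

Because the gain shrinks as mastery approaches 1, repeated correct submissions yield diminishing increases, which is the behavior knowledge-tracing models typically exhibit near mastery.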

Tracing a learner's KC mastery (known as knowledge tracing) enables the ITS to reduce over-practice (i.e., when a learner that has mastered a KC is offered additional AEs that are redundant) and under-practice (i.e., when a learner has not mastered a KC and the system does not offer additional IEs and AEs to help them develop the mastery).

Robust learning outcomes

Learning is robust when it lasts over time (long-term retention), transfers to new situations that differ from the learning situation along various dimensions (e.g., material content, setting), or accelerates future learning in new situations. A general instructional goal is robust learning efficiency, where robust learning outcomes are achieved without requiring additional time.

References

  • Churchill, D., 2007. Towards a useful classification of learning objects. Educational Technology Research and Development, 55(5), pp.479-497.
  • Koedinger, K.R., Corbett, A.T. and Perfetti, C., 2012. The Knowledge‐Learning‐Instruction framework: Bridging the science‐practice chasm to enhance robust student learning. Cognitive science, 36(5), pp.757-798.
  • Pelánek, R., 2021. Adaptive, Intelligent, and Personalized: Navigating the Terminological Maze Behind Educational Technology. International Journal of Artificial Intelligence in Education, pp.1-23.
  • Shute, V. and Towle, B., 2003. Adaptive e-learning. Educational psychologist, 38(2), pp.105-114.
  • Sleeman, D. and Brown, J.S., 1982. Intelligent tutoring systems. London: Academic Press.
  • Sweller, J., 2020. Cognitive load theory and educational technology. Educational Technology Research and Development, 68(1), pp.1-16.
  • VanLehn, K., 2011. The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), pp.197-221.