Technology-Enhanced Learning: Best Practices and Data Sharing in Higher Education
Case Study
KLI: A Theoretical Framework for Improving Student Learning
Submitted by:
Ken Koedinger, Carnegie Mellon University
Intervention Types
Process
Related Recommendations
Culture Recommendation 3
Improvement Recommendation 1
Community Recommendation
Summary
The Knowledge Learning Instruction (KLI) Framework addresses a gap in education theories, which tend to focus on grain sizes either at the macro level or the micro level, constraining their applications. KLI promotes instructional principles that can be generalized, while explicitly identifying constraints and opportunities for detailed analysis of the knowledge students may acquire in courses. Drawing on research across the domains of science, math, and language learning, we illustrate the analyses of knowledge, learning, and instructional events that the KLI framework affords.
Evidence for the Design
The development of a shared framework has been a center-wide endeavor of LearnLab since its inception. This work has been embodied in collaborative publications, an open research wiki, and, ultimately, in the Knowledge Learning Instruction (KLI) Framework, a theoretical framework of broad application to learning and education, described in the July 2012 issue of Cognitive Science (Koedinger, Corbett, & Perfetti, 2012).
That journal article followed an earlier release on our web site (http://learnlab.org/documents/KLI-Framework.pdf). It introduced KLI, which identifies and interconnects three analyses (taxonomies) of the triangulated anchors of learning and instruction: kinds of knowledge, kinds of learning processes, and kinds of instructional methods.
Many subsequent projects and publications have built on the KLI Framework; these are described in the sections below. One of those publications appeared in the November 2013 issue of Science (Koedinger, Booth, & Klahr, 2013) and presented the challenge of “instructional complexity,” with recommendations for addressing it by employing the KLI Framework and the kind of socio-technical cyberinfrastructure that LearnLab has developed.
Context of Application
Learning theories at both the macro level of analysis and the micro level can be problematic.
At the macro level of analysis, education theories have tended to rely on units at large grain sizes. Take situated learning, which, following its origins as a general proposition about the social-contextual basis of learning (Lave & Wenger, 1991), has been extended to an educational hypothesis (e.g., Greeno, 1998). It tends to use a rather large grain size that treats groups and environmental features as causal factors in performance, and it focuses on rich descriptions of case studies. These features work against two goals that are important to us:
- identifying mechanisms of student learning that lead to instructional principles
- communicating instructional principles that are both general over contexts and unambiguous in the guidelines they provide to instructional designers
When learning theories rely on small grain sizes, there are also concerns. Consider micro-level theories such as variations on Hebb’s idea (Hebb, 1949) of neuronal plasticity, which refers to a basic mechanism of learning. It is often expressed simplistically as “neurons that fire together, wire together.” This idea has been captured by neural network models of learning mechanisms that use multi-level networks (e.g., O’Reilly & Munakata, 2000). Micro-level theories also give accounts of elementary causal events by using symbols, rules, and operations on basic entities expressed as computer code, as in the ACT-R theory of Anderson (1993). Although initially developed without attention to biology, ACT-R has been tested and extended using brain imaging data (e.g., Anderson et al., 2004). It is important for the learning sciences that such theories demonstrate the ability to predict and explain human cognition at a detailed level that can be subject to empirical testing. The small grain size of such theories and the studies that support them leave them mostly untested at the larger grain size of knowledge-rich academic learning. Thus, they tend to be insufficient to constrain instructional design choices.
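To make the grain-size contrast concrete, here is a minimal sketch of the Hebbian co-activation idea mentioned above (a weight update of the form Δw = η·x·y); the function and variable names and the toy example are illustrative only, not drawn from any of the cited models:

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.01):
    """One Hebbian step: connections strengthen where pre- and post-synaptic
    activity co-occur ("neurons that fire together, wire together").
    w   : (n_post, n_pre) weight matrix
    x   : (n_pre,) presynaptic activations
    y   : (n_post,) postsynaptic activations
    eta : learning rate
    """
    return w + eta * np.outer(y, x)

# Toy example: two input units repeatedly co-active with one output unit
w = np.zeros((1, 2))
for _ in range(100):
    x = np.array([1.0, 1.0])
    y = np.array([1.0])
    w = hebbian_update(w, x, y)
print(w)  # both weights strengthened by repeated co-activation
```

A rule at this level says nothing about, for instance, which geometry problems a student should practice next, which is why such micro-level mechanisms by themselves under-constrain instructional design.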
The level of explanation we target is intermediate to the larger and smaller grain sizes exemplified above. This level must contain propositions whose scope includes observable entities related to instructional environments and learner characteristics that affect learning. These propositions must be testable by experiments and allow translation both downward to micro-level mechanisms and upward to classroom practices.
The typical instructional experiment, in KLI terms, explores how variations in instructional events affect performance on subsequent assessment events. The interpretation of such experiments may involve inferences about mediating learning events and knowledge component (KC) changes. As an example, we can explain the results of Aleven and Koedinger (2002) in these terms. They found that adding prompts for self-explanation to tutored problem-solving practice (without adding extra instructional time) produced greater explanation ability and conceptual transfer to novel problems while maintaining performance on isomorphic problems. In KLI terms, changing instruction from pure practice to practice with self-explanation (kinds of instructional events) engaged more verbally-mediated explanation-based learning in addition to non-verbal induction (kinds of learning events) and thus produced more verbal declarative knowledge in addition to non-verbal procedures (kinds of KCs). KC differences were inferred from the observed contrast in student performance on isomorphic problem-solving test items compared with conceptual-transfer test items (kinds of assessment events). These results require an explanation that appeals to KCs: The groups perform the same when assessment tasks allow either verbal declarative or non-verbal procedural knowledge, but only the self-explainers do well on the transfer tasks that require just verbal declarative knowledge.
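To make that mapping explicit, the sketch below encodes the two conditions of the Aleven and Koedinger (2002) experiment in KLI terms; the class and field names are our own illustrative labels, not notation from the framework:

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """Illustrative encoding: each condition differs in instructional events,
    which engage different learning events and yield different kinds of KCs."""
    name: str
    instructional_events: list
    learning_events: list
    kcs_produced: list

practice_only = Condition(
    name="tutored problem-solving practice",
    instructional_events=["problem-solving practice with feedback"],
    learning_events=["non-verbal induction"],
    kcs_produced=["non-verbal procedures"],
)

practice_plus_self_explanation = Condition(
    name="tutored practice with self-explanation prompts",
    instructional_events=["problem-solving practice with feedback",
                          "prompted self-explanation"],
    learning_events=["non-verbal induction",
                     "verbally mediated explanation-based learning"],
    kcs_produced=["non-verbal procedures",
                  "verbal declarative knowledge"],
)

# Assessment events then discriminate the KC difference:
# isomorphic items -> either KC kind suffices (groups tie);
# conceptual-transfer items -> verbal declarative KCs required
# (advantage for the self-explanation group).
```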
In developing the KLI framework, we emphasized the importance of KCs rather than domains (e.g., Geometry, English). In contrast to Bloom’s well-known taxonomy (Bloom, 1956), which is expressed in terms of instructional objectives, our taxonomy focuses on the knowledge needed to achieve those objectives. It is expressed in cognitive-process terms, at a more abstract and coarser-grained level than the representations used in computational models of cognition (e.g., Anderson & Lebiere, 1998; McClelland & Cleeremans, 2009; Newell, 1990; Sun, 1994).
Knowledge, in our account, is decomposable into units that relate some input characteristics or features of the student’s perceived world or mental state (the conditions) to some output in the student’s changeable world or mental state (the response). Unlike production rules in theories of cognitive architecture (Anderson & Lebiere, 1998; Newell, 1990), which are implicit components outside a student’s awareness, the KCs in KLI include explicit, verbalizable knowledge. Given the prominence of comprehension, reasoning, dialogue, and argumentation in more complex forms of instruction (e.g., prompted self-explanation, accountable talk), the KLI knowledge taxonomy distinguishes between kinds of KCs that have accessible rationales, such that students can effectively reason and argue about them, and kinds that do not, for which explicit reasoning and argumentation may be of little value for learning.
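As an illustration, a KC’s condition-response structure and the explicit/implicit distinction might be represented as follows; the field names and the two example KCs are hypothetical illustrations, not the framework’s formal notation:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeComponent:
    """A unit relating perceived conditions to a response.
    Field names are illustrative, not KLI's formal notation."""
    condition: str       # features of the perceived world or mental state
    response: str        # output in the changeable world or mental state
    verbalizable: bool   # explicit (can be stated and argued about) vs implicit
    has_rationale: bool  # accessible rationale -> reasoning/argument may aid learning

# An explicit, rationale-bearing KC...
area_of_rectangle = KnowledgeComponent(
    condition="asked for the area of a rectangle with sides w and h",
    response="compute w * h",
    verbalizable=True,
    has_rationale=True,
)

# ...versus a KC typically acquired implicitly, with no accessible rationale
english_article_cue = KnowledgeComponent(
    condition="noun phrase refers to a specific, already-known referent",
    response="use 'the'",
    verbalizable=False,
    has_rationale=False,
)
```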
Datasets
Through ten years of operation, LearnLab has leveraged cognitive theory and computational modeling to identify the instructional conditions that cause robust student learning, work that culminated in the KLI Framework. As of January 2015, our research strategies had led to 1,934 publications, including 294 journal articles, seven books, 116 book chapters, and 231 invited talks. By that date we had also completed 156 lab studies, 161 data mining studies, and 361 controlled LearnLab experiments in real classes in elementary, middle, high school, and college environments. Collectively these studies amount to more than 400,000 student-hours of human learning data, all of which is now available in DataShop through more than 600 datasets.
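As an illustration of how such datasets are commonly analyzed, the sketch below loads a DataShop-style tab-delimited transaction export and computes error rate by practice opportunity for each KC, the raw material of a learning curve. This is a minimal sketch: the file name is a placeholder, and the column names and outcome labels are typical of DataShop exports but should be checked against the specific dataset.

```python
import pandas as pd

# Minimal sketch, assuming a DataShop-style tab-delimited transaction export.
# "transactions.txt" is a placeholder path; column names below are typical of
# such exports but must be verified against the actual file.
df = pd.read_csv("transactions.txt", sep="\t")

# Mark incorrect (or hint) transactions as errors.
df["error"] = (df["Outcome"] != "CORRECT").astype(int)

# Opportunity count: the nth time a student practices a given KC.
df["opportunity"] = (
    df.sort_values("Time")
      .groupby(["Anon Student Id", "KC (Default)"])
      .cumcount() + 1
)

# Error rate by opportunity for each KC (a simple empirical learning curve).
curve = (
    df.groupby(["KC (Default)", "opportunity"])["error"]
      .mean()
      .reset_index(name="error_rate")
)
print(curve.head())
```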
Results
The KLI theoretical framework links knowledge, learning, and instruction to organize the development of instructional theory at a grain size appropriate for guiding the design, development, and continual improvement of effective and efficient academic course materials, technologies, and instructor practices.
The need for this cumulative theory is illustrated both by the general lack of consensus around educational practices that work and by the limitations of large-scale randomized controlled trials, which provide strong tests of existing instructional practices but do not, by themselves, generate new theory or new practices.
Broader Applications
The KLI framework may be effectively used to analyze an instructional principle. This analysis:
- finds support for the KC Gap Theory as a viable alternative to Cognitive Load Theory
- suggests that “novice” is not a relation between a student and a domain, but between a student and a KC
- proposes boundary conditions between this principle and a related one, the testing effect, and
- illustrates how knowledge-level analysis can be combined with a general instructional principle to produce a student adaptive version of that principle that enhances robust learning.
Lessons and Considerations
Learning occurs as unobservable events that can be inferred from performance and, under circumstances of experimental control, appropriately attributed to instructional events. The processes of learning include both simple associations and more complex, reflective processes that result in KC changes of three broad types:
- memory and fluency building
- induction and refinement
- understanding and sense making
These learning processes can proceed more or less independently or in some synchrony.
