Learning online via prompts to explain
Related papers
Revising Learner Misconceptions Without Feedback: Prompting for Reflection on Anomalies
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016
The Internet has enabled learning at scale, from Massive Open Online Courses (MOOCs) to Wikipedia. But online learners may become passive, instead of actively constructing knowledge and revising their beliefs in light of new facts. Instructors cannot directly diagnose thousands of learners' misconceptions and provide remedial tutoring. This paper investigates how instructors can prompt learners to reflect on facts that are anomalies with respect to their existing misconceptions, and how to choose these anomalies and prompts to guide learners to revise incorrect beliefs without any feedback. We conducted two randomized experiments with online crowd workers learning statistics. Results show that prompts to explain why these anomalies are true drive revision towards correct beliefs. But prompts to simply articulate thoughts about anomalies have no effect on learning. Furthermore, we find that explaining multiple anomalies is more effective than explaining only one, but the anomalies should rule out multiple misconceptions simultaneously.
Self-Explanation Prompts on Problem-Solving Performance in an Interactive Learning Environment
This study examined the effects of self-explanation prompts on problem-solving performance. In total, 47 students were recruited and trained to debug web-program code in an online learning environment. Students in an open self-explanation group were asked to explain the problem cases to themselves, whereas a complete other-explanation group was provided with partial explanations and asked to complete them by choosing correct keywords. The results indicate that students in the open self-explanation condition (a) outperformed their peers on a debugging task, (b) reported higher confidence in their explanations, and (c) showed a strong positive relationship between the quality of their explanations and their performance. These results demonstrate the benefits of open self-explanation prompts. The cognitive load of self-explanation and the quality of explanations are discussed.
Enhancing learning through self-explanation
International Conference on Computers in Education, 2002. Proceedings., 2002
Self-explanation is an effective teaching/learning strategy that has been used in several intelligent tutoring systems in the domains of Mathematics and Physics to facilitate deep learning. Since all these domains are well structured, the instructional material to self-explain can be clearly defined. We are interested in investigating whether self-explanation can be used in an open-ended domain. For this purpose, we enhanced KERMIT, an intelligent tutoring system that teaches conceptual database design. The resulting system, KERMIT-SE, supports self-explanation by engaging students in tutorial dialogues when their solutions are erroneous. We plan to conduct an evaluation in July 2002, to test the hypothesis that students will learn better with KERMIT-SE than without self-explanation.
Tutoring Answer Explanation Fosters Learning with Understanding
1999
In a previous formative evaluation of the PACT Geometry tutor, we found significant learning gains, but also some evidence of shallow learning. We hypothesized that a cognitive tutor may be even more effective if it requires students to provide explanations for their solution steps, and that this would help students learn with deeper understanding, as reflected by an ability to explain answers, transfer to related but different test items, and deal with unfamiliar questions. We conducted two empirical experiments to test the effectiveness of learning at the explanation level. In each experiment, we compared two versions of the PACT Geometry Tutor. In the "answer-only" version of the tutor, students were required only to calculate unknown quantities in each geometry problem. In the "reason" version, students also needed to provide correct explanations for their solution steps, by citing geometry theorems and definitions. They did so by selecting from the tutor's Glossary of geometry knowledge, presented on the screen. The results provide evidence that explaining answers leads to greater understanding. Students who explained their solution steps had better post-test scores and were better at providing reasons for their answers. They also did better at a type of transfer problem in which they were asked to judge whether there was sufficient information to find unknown quantities. Also, students who explained solution steps were significantly better at steps where the quantities sought were difficult to guess (in other words, steps requiring deeper knowledge), while answer-only students did better on easy-to-guess items, indicating superficial understanding. Finally, students who explained answers did fewer problems during training, which suggests that the training on reason-giving transfers to answer-giving.
The design of self-explanation prompts: The fit hypothesis
2009
Cognitive science principles should have implications for the design of effective learning environments. The self-explanation principle was chosen for the current project because it has developed significantly over the past few years. Early formulations suggested that self-explanation facilitated inference generation to supply missing information about a concept or target skill, whereas later work suggested that self-explanation facilitated mental-model revision (Chi, 2000). To better understand the complex interaction between prior knowledge, cognitive processing, and changes to a learner's representation, three different types of self-explanation prompts were designed and tested in the domain of physics problem solving. The results suggest that prompts designed to focus on problem-solving steps led to a sustained level of engagement with the examples and a reduction in the number of hints needed to solve the physics problems.
Eliciting Self-Explanations Improves Understanding
Cognitive Science, 1994
Learning involves the integration of new information into existing knowledge. Generating explanations to oneself (self-explaining) facilitates that integration process. Previously, self-explanation has been shown to improve the acquisition of problem-solving skills when studying worked-out examples. This study extends that finding, showing that self-explanation can also be facilitative when it is explicitly promoted, in the context of learning declarative knowledge from an expository text. Without any extensive training, 14 eighth-grade students were merely asked to self-explain after reading each line of a passage on the human circulatory system. Ten students in the control group read the same text twice, but were not prompted to self-explain. All of the students were tested for their circulatory system knowledge before and after reading the text. The prompted group had a greater gain from the pretest to the posttest. Moreover, prompted students who generated a large number of self-explanations (the high explainers) learned with greater understanding than low explainers. Understanding was assessed by answering very complex questions and inducing the function of a component when it was only implicitly stated. Understanding was further captured by a mental model analysis of the self-explanation protocols. High explainers all achieved the correct mental model of the circulatory system, whereas many of the unprompted students as well as the low explainers did not. Three processing characteristics of self-explaining are considered as reasons for the gains in deeper understanding.
Adding self-explanation prompts to an educational computer game
Computers in Human Behavior, 2014
Proponents envision a role for computer games in improving student learning of academic material, including mathematics and science. Asking learners to engage in self-explanation during learning has been found to be an effective instructional method. In the present experiment, we examined the effects of adding a self-explanation prompt (asking players to answer one of three questions after completing each level of the game) within a children's math game on addition of fractions. Middle-school participants played either a base version of the game (n = 57) or the base version with a self-explanation instructional feature (n = 57). Participants' learning was measured by a fractions posttest, and their learning processes were measured via in-game measures of game progress and errors. When we separated the self-explanation condition into participants who used a focused self-explanation strategy versus those who did not, the focused participants had significantly fewer game-level deaths, fewer game-level resets, and significantly greater game progress relative to the control group than did participants not using a focused self-explanation strategy. The major new contribution of this study is that self-explanation can help the process of playing educational games in some situations and hurt in others. In particular, the most effective self-explanation prompts were aimed at helping learners make connections between game terminology and mathematics terminology, whereas the least effective self-explanation prompts asked very simple or very abstract questions.
Effects of visual cues and self-explanation prompts: Empirical evidence in a multimedia environment
The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load and intrinsic motivation in an interactive multimedia environment that was designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were randomly assigned in equal numbers (n = 21) to one of the six conditions in a 2 × 3 factorial design with visual cueing (cueing vs. no cueing) and type of self-explanation prompts (prediction prompts vs. reflection prompts vs. no prompts) as the between-subjects factors. The results revealed that (a) participants presented with cued animations had significantly higher learning outcome scores than their peers who viewed uncued animations, and (b) cognitive load and intrinsic motivation had different impacts on learning outcomes due to the moderation effect of cueing. The results suggest that cues may not only enhance learning directly, but also indirectly influence learning through cognitive load and intrinsic motivation.