Category: Papers

Cognitive Task Analysis Example


Cognitive Task Analysis

Jennifer Maddrell
Old Dominion University
IDT 873 Advanced Instructional Design Techniques
Dr. Gary Morrison
October 15, 2008

Traditional Task Analysis

A traditional procedural task analysis describes a task as a series of discrete actions (Jonassen, Tessmer, & Hannum, 1999). Figure 1 diagrams a procedural task analysis for the insurance underwriting submission review task. Within this triage task, the underwriter must evaluate various aspects of the new submission and decide whether to quote or decline the submission.

Figure 1. Procedural Analysis of Insurance Underwriting Submission Review Task. (Guide to symbols, not reproduced here: input and exit point; mental operation; decision point; direction of step.)

As depicted in Figure 1, in completing the submission review task, the underwriter must make a series of mental operations and decisions en route to a conclusion to either a) decline the submission or b) quote the submission. These mental operations and subsequent decisions include the following:

Assessing the viability of the opportunity. Upon receipt of the submission, the underwriter must make a quick review of the information provided to assess the viability of the opportunity. Given the information presented within the submission and discussions with the broker, the underwriter must judge the likelihood the account will actually leave the incumbent carrier. Critical cues to consider include prior service and claims handling problems with the incumbent carrier, time to transition the account, and completeness of the submission.
If the relationship with the prior carrier has been good, there is little time to transition the account, or the broker only provided enough information to provide a price (not service) quote, it is likely the insured is not serious about moving from the incumbent carrier and the broker is just seeking comparative price quotes. However, if the insured is dissatisfied with the incumbent carrier’s service, there is ample time to transition the servicing of the account, or the submission provides a comprehensive overview of both price and service requirements, it is likely the opportunity is viable. If the assessment of the information leads to a conclusion that the chances are slim the account will move, the underwriter makes the decision to decline the account. However, if the assessment leads to a conclusion that there is a good chance of writing the account, the underwriter makes the decision to continue working on the account.

Examining the employee concentrations. Given the potentially catastrophic exposure of providing casualty insurance at locations with high employee concentrations, the underwriter’s triage of the submission includes an examination of employee concentrations. If the insured has employee concentrations at any one location above company guidelines, the underwriter makes the decision to decline the account. Otherwise, the underwriter makes the decision to continue working on the account.

Comparing the account’s exposures to the company’s underwriting guidelines. Upon receipt of the submission, the underwriter must compare the prospective account’s exposures to the insurance company’s underwriting guidelines. Critical to this comparison is a review of the insured’s current and prior operations. If the company is involved in any operations which result in exposures that are against the underwriting guidelines, the underwriter makes the decision to decline the account.
Otherwise, the underwriter makes the decision to move forward with the quotation task (beyond the scope of this submission triage task analysis).

Cognitive Task Analysis

A cognitive task analysis (CTA) offers an alternative means of describing the cognitive elements of the evaluation and decision-making processes involved in the task. The following provides the results of an Applied Cognitive Task Analysis (ACTA) based on interviews conducted with an underwriting subject matter expert (SME) to gain information about the cognitive strategies used to complete the submission triage task (Militello & Hutton, 1998). The ACTA includes a task diagram, knowledge audit table, simulation interview, and cognitive demands table.

Task Diagram

Figure 2 is the task diagram generated after an initial interview with the underwriting SME. The task diagram offers a high-level overview of the submission triage task which focuses on the most difficult cognitive aspects. The SME was asked, “Think about what you do when you triage a new prospect. Can you break this task down into less than six, but more than three steps?” The SME mentioned five steps, but one was eliminated (financial approval) as it is not a task performed by the underwriter.

Figure 2. Task Diagram for New Account Prospect Triage.

Knowledge Audit Table

During interviews with the SME, the interviewer probed for concrete examples, cues and strategies, and reasons why the task is often difficult for novices. The interviewer asked the SME to focus on specific examples for each aspect of expertise. Table 1 summarizes the results of the knowledge audit for the submission triage task.

Simulation Interview

During a simulation interview with the SME, the interviewer asked the SME to focus on the challenging aspects of a specific representative scenario associated with new submission triage.
Table 2 summarizes the results of the simulation interview, including the actions, assessments, cues, and potential errors identified for each central event.

Cognitive Demands Table

Table 3 consolidates and synthesizes the data collected during the interview process. The cognitive demands table centers on the common themes that came from the interviews and identifies the difficult cognitive elements, common errors, and cues or strategies used by experts to overcome these challenges.

Table 1. Knowledge Audit Table.

Past and future
- Example: Call from a broker about an account where the incumbent carrier messed up on a claim and the insured’s legal department is insisting the account must move.
- Cues and strategies: High-level nature of the incumbent’s mess-up; level of people involved in the decision (low level versus high level).
- Why difficult? Novice may not recognize the significance of messed-up claim handling; novice may not link the level of the insured to the severity of the problem; novice may not link the severity of the problem to an increased chance of writing the account.

Big picture
- Example: Steps back from all the facts about the account presented by the broker to consider the “real” motivation behind looking for a quote. Is this prospect a true opportunity, or does the broker just need a competing price quote? If it is only a need to get competing price quotes, it is highly unlikely the account will move.
- Cues and strategies: Beyond price, are there service issues with the prior carrier; your personal history with that broker; time frame to release the quote; what other carriers are quoting.
- Why difficult? Novices may not consider other issues beyond price that influence the buying decision; novices do not have a relationship with the broker to know when they are getting the “straight” facts versus a “sales pitch”; novices may get into the minutiae of the account specifics and not step back to realize the timeframe is not feasible to actually move the account.

Noticing
- Example: Broker not soliciting TPA quotes for claim handling, which would be a #1 condition of actually moving the account.
- Why difficult? Novices are focused on details within the submission; novices are not familiar with “outside” considerations that affect the likelihood of writing the account.

Job smarts
- Example: Focus on what the broker said in conversation versus purely what is presented in the quote.
- Cues and strategies: Going beyond the underwriting information presented in the submission; considering conditional things that impact your quote; timeframes; other carriers being asked to quote; reasons for leaving.
- Why difficult? Novices tend to be preoccupied with verifying details within the submission; novices are not aware of situational issues that can be “deal breakers” or “deal makers.”

Opportunities
- Example: Our unit can’t work on this account, but other units in the company can.
- Cues and strategies: Understanding of the underwriting appetite of other units; knowing how to access those people.
- Why difficult? Novices don’t know the underwriting appetite of other units; novices don’t know people outside of the unit.

Anomalies
- Example: Broker doesn’t return phone calls; shows a lack of interest.
- Cues and strategies: Timing of returned phone calls; extent of response to questions (either lacking or detailed).
- Why difficult? Novices may not recognize they are “getting blown off” and continue working on the submission; novices don’t recognize the significance of “out of sight / out of mind,” which is a signal of whether you are alive or dead on the account.

Table 2. Simulation Interview.

Event: Discussion about prospect with broker
- Actions: Ask probing questions about the opportunity; sense from the broker’s tone the urgency and desire to have you quote.
- Assessment: Do the answers to questions make sense or not with what is in the submission? Does the broker want to work with you, or just want a quote for comparison purposes?
- Critical cues: Can you meet the issue date; there is disaffection with the incumbent; openness of the broker; willingness to provide additional information.
- Potential errors: Being overly optimistic about any opportunity; not probing deeply for hidden facts about the situation; not reading the verbal and nonverbal cues the broker is giving you.

Event: Deciding whether to quote
- Actions: Evaluate the time frame between the quote deadline and the effective date; assess whether the account meets underwriting guidelines.
- Assessment: How much time is there between now and the effective date? Are the exposures inherent in the risk acceptable under our underwriting guidelines?
- Critical cues: Too much time signals the broker is “shopping” for an early quote; too little time signals the broker just wants to keep the current carrier “honest”; “red flag” exposures that we cannot write; “go” classes of business that we are targeting.
- Potential errors: Being so excited about the opportunity that you rush to judgment; spinning wheels on accounts where there isn’t a true opportunity; not digging deeply enough into what the account really does, or did in the past, that could represent “hidden” exposures.

Table 3. Cognitive Demands Table.
Assessing whether the broker’s answers make sense or not with what is in the submission
- Why difficult: Novice underwriters tend to focus on basic facts in the submission versus what the broker is telling them; brokers are reluctant to voluntarily air dirty laundry about an account.
- Common errors: Not recognizing or probing for hidden “red flags”; focusing exclusively on information in the submission.
- Cues and strategies used: Consider whether you really know the story behind the story; get and keep the broker talking to elicit information beyond the submission.

Considering the “real” opportunity and exposures beyond the obvious information given in the submission
- Why difficult: Novice underwriters tend to focus on the information given versus the information needed to make a decision; it can be an uncomfortable situation for novice underwriters to probe for answers.
- Common errors: Taking the submission at “face value”; failing to engage in uncomfortable probing conversations with the broker.
- Cues and strategies used: Ask about reasons why the account would move; consider whether the timeframe to move the account is realistic.

Comparing the account’s exposure information with underwriting guidelines
- Why difficult: Companies often have many types of operations which cross several classes of business; novice underwriters tend to focus on the primary business operations; novice underwriters often have difficulty assigning an account to the appropriate business classification within the guidelines.
- Common errors: Failing to fully capture exposures; getting lost in the details; misinterpreting underwriting data; misinterpreting the underwriting guidelines.
- Cues and strategies used: Review the account with a senior underwriter; check multiple sources to evaluate exposures.

Considering and suggesting alternatives
- Why difficult: Novice underwriters tend to focus on what the broker is asking them to do; novice underwriters often fail to identify ways to adjust quotation options to meet guidelines.
- Common errors: Quoting only what is asked by the broker; failing to probe for alternate opportunities with the broker.
- Cues and strategies used: Consider ways to adjust quotation options to fit within underwriting guidelines; consider other coverages and limits that you or other departments could quote.

Comparison of Approaches

Analysis Comparison

In comparing the results of the traditional task analysis with the cognitive task analysis, significant differences emerge in the following areas: a) the identification and analysis of hidden cognitive processes, b) the relative level of elaboration regarding the central task elements, and c) the focus on expert and novice differences.

Overt behaviors versus cognitive processes. The key strength of the traditional task analysis is the ability to examine overt behaviors required to complete a task. However, as seen in this example, additional critical cognitive processes and actions were uncovered within the ACTA. Further, the ACTA offered a means of analyzing the relative significance and difficulty of the required task elements.

Level of elaboration. The traditional task analysis identified the relevant processes and decision points in the submission triage task.
However, by focusing on the difficult cognitive aspects of the task, the ACTA provided greater elaboration with regard to the knowledge and cognitive processes required to perform the task. As the cognitive demands table highlights, the ACTA focused attention on the difficult cognitive elements, common errors, and strategies to overcome those difficulties and errors. Unfortunately, these elements were not unearthed within the traditional task analysis.

Focus on expert and novice differences. Unlike the traditional task analysis, the ACTA analysis focused on the central differences between how an expert and a novice perform the submission triage task. The result is a comparison of the current state (novices) and the desired state (experts), as well as strategies to take the novice to an expert level.

Implications for Practice

Traditional task analysis allows practitioners to target the inputs, central operations, and decision points involved in carrying out a task. While this provides a good overview of what happens as the task is carried out, it does not provide the designer with an understanding of the nature of the cognitive processes required to complete the task. Further, following a traditional task analysis, the practitioner cannot gauge the relative importance of the various task elements or which aspect(s) of the task are harder for the novice. As seen in the results of the two analyses, the cognitive task analysis provides practitioners with a better understanding of the difficult and critical cognitive processes, as well as the cues and strategies, which are central to successful completion of the task.

When to Use Traditional Task Analysis versus Cognitive Task Analysis

Both a traditional task analysis and a cognitive task analysis highlight key aspects of the task. However, as seen in the two analyses above, each produces different results.
As noted, the cognitive task analysis offers a better analysis of the central knowledge and decision-making cognitive processes. Given that each task is different, the following provides a comparison of which analysis is more appropriate based on the degree of observable behaviors, the degree of required expertise, and the relative cognitive difficulty of the task.

Degree of observable behaviors. The difference in outcomes between the two approaches is likely less significant when the task involves primarily observable behaviors. However, if the task involves primarily mental actions that result in less observable behaviors, a cognitive task analysis is the more appropriate option.

Expert versus novice differences. When little task-related expertise is required to perform the task, the results of both analyses would likely be similar. However, if successful completion of the task requires knowledge that a novice would not possess, a cognitive task analysis allows the practitioner to uncover or drill down on the difficult cognitive elements. As noted, these cognitive elements are less likely to be adequately analyzed in a traditional task analysis.

Relative cognitive difficulty. While a traditional task analysis provides a comprehensive outline of the steps in the task, it does not offer a relative assessment of which steps are harder or more critical to successful completion. Instead, each step in the task is considered equally. However, as seen in the cognitive demands table, some tasks hinge on a smaller number of critical or difficult elements. Therefore, the ACTA is more appropriate when successful task outcomes depend upon cognitively difficult judgments or decisions.

References

Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.

Militello, L. G., & Hutton, R. J. B. (1998). Applied cognitive task analysis (ACTA): A practitioner’s toolkit for understanding cognitive task demands. Ergonomics, 41(11), 1618–1641.

IDT 873: Concept Attainment

IDT 873 Abstracts: Concepts
Jennifer Maddrell

Klausmeier, H. J., & Feldman, K. V. (1975). Effects of a definition and a varying number of examples and nonexamples on concept attainment. Journal of Educational Psychology, 67(2), 174-178.

Research

Purpose and focus. Klausmeier and Feldman (1975) focused their research on concept attainment, which they defined within their study as the ability to a) discriminate defining attributes, b) name the concept and each defining attribute, c) evaluate examples and nonexamples, and d) define the word representing the concept. In reviewing prior literature on concept attainment, they highlighted four categories of variables generally studied, including 1) a rational set of examples and nonexamples, 2) definitions of a concept (based on the relevant attributes of the concept), 3) emphasizers to facilitate discrimination, and 4) feedback. The purpose of this study was to evaluate the effect of presenting various combinations of concept definitions and rational sets. They predicted better attainment from those presented with both a rational set and a definition than those presented with either one or the other. Further, they predicted better attainment from those presented with the definition and additional different rational sets.

Methodology. 134 fourth-grade students from two Wisconsin (Go Badgers!) elementary schools participated in the study. The students were stratified into high, medium, and low levels based on their performance on the most recent Iowa Tests of Basic Skills test. The subject matter concept was the equilateral triangle.
Students within each stratification level were randomly assigned to one of four treatment groups: those presented with 1) a definition of the concept without examples or nonexamples, 2) a rational set of three examples and five nonexamples, 3) a combination of the same definition and rational set, and 4) a combination of the same definition and three different rational sets of three examples and five nonexamples. The treatment lesson was presented in four printed lesson booklets. Following instruction, students were given 1 minute to read each lesson page and then were instructed to turn to the next page, allowing 5 minutes per lesson booklet. Immediately following the last lesson, a classification task within a printed booklet measured concept attainment. Without a time limit, students viewed 38 instances and were asked to identify whether each instance was an example (by circling yes) or nonexample (by circling no) of an equilateral triangle.

Results and conclusions. Means for the stratified groups reflected the initial levels, with means for high > medium > low. As predicted, no significant difference in concept attainment was found between those who were presented with either a definition or a rational set. Contrary to the researchers’ prediction, there was also no significant difference for those presented with the combination of the definition and the single rational set. However, there was a significant difference between those presented with a definition and those who also received three rational sets. These findings are important as they suggest an advantage for presenting additional rational sets of examples and nonexamples.

Heuristics

The results of these experiments suggest that designers should augment the presentation of the concept definition with multiple rational sets of examples and nonexamples when teaching concepts. As seen in this experiment, providing learners with additional rational sets to consider may increase their attainment of the concept.
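As a toy illustration of what the classification task asks of the learner (hypothetical code, not materials from the study), an instance counts as an example of the concept "equilateral triangle" only if it possesses the defining attribute, and a rational set pairs such examples with nonexamples that each lack it:

```python
# Hypothetical sketch of the classification task: an instance is an
# example of the concept "equilateral triangle" only if it has the
# defining attribute (three sides of equal length).
def is_example(sides):
    """Classify an instance (a tuple of side lengths) as example/nonexample."""
    return len(sides) == 3 and len(set(sides)) == 1

# A small rational set: examples share all defining attributes,
# nonexamples each lack at least one.
rational_set = [
    ((3, 3, 3), True),   # example
    ((3, 4, 5), False),  # nonexample: sides unequal
    ((2, 2, 3), False),  # nonexample: isosceles, not equilateral
]

# Score a learner's (here, the rule's) answers against the key.
score = sum(is_example(sides) == answer for sides, answer in rational_set)
print(f"{score}/{len(rational_set)} instances classified correctly")  # 3/3
```

The sketch only makes the attribute-based logic concrete; the actual study used 38 printed instances and a yes/no circling response.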
Critique

Submitted 20081008

The results of this study are important as they provide support for the hypothesis that presenting learners with more examples and nonexamples is better. However, if three sets of examples and nonexamples are better than one, is more than three even better? A criticism of this study is the short intervention and the focus on a single math-related concept. Would these results be replicated over a longer period of time, with other types of concepts, and with different age groups of learners?

Tennyson, R. D., & Rothen, W. (1977). Pretask and on-task adaptive design strategies for selecting number of instances in concept acquisition. Journal of Educational Psychology, 69(5), 586-592.

Research

Purpose and focus. Tennyson and Rothen (1977) sought to expand the previously reviewed work of Klausmeier and Feldman (1975) by evaluating the effect on concept attainment of adapting the number of examples and nonexamples based on individual need. They predicted that an adaptive design strategy that varied the presentation of examples and nonexamples based on student need would improve concept attainment over a nonadaptive strategy.

Methodology. 67 undergraduate students participated in the study. The students were randomly assigned to one of three treatment groups: 1) full adaptive, 2) partial adaptive, and 3) nonadaptive. The adaptive designs were modified using a computer-based Bayesian adaptive strategy which altered the number of examples learners viewed based on a) pretreatment measures of aptitude, b) pretests of prior achievement, and c) task performance. A pretest, treatment lesson, and posttest were administered individually via computer. The untimed lesson focused on two legal concepts: the best evidence rule and hearsay. For all groups, the learning task defined the concepts based on their critical attributes.
The number of instances presented to students varied based on their assigned treatment group. The nonadaptive group all received the same fixed number of instances. The number of instances in the partial adaptive model was based on pretest data, while the number presented within the full adaptive model was modified based on both pretest data and on-task responses. The study also evaluated time on task, which did not include pre- or posttest time.

Results and conclusions. While no significant mean differences were found in pretest measures, significant mean differences were reported in time on task and posttest score measures. As predicted by the researchers, the results suggest that full adaptive strategies were more effective than partial adaptive strategies and that the two adaptive strategies were more effective than the nonadaptive condition. In addition, the full adaptive group finished the program significantly faster than the partial adaptive group, which in turn finished faster than the nonadaptive group. In attempting to explain the results, the researchers suggest that learning tasks where instance presentation is not modified based on adaptive strategies may not keep learners’ interest in the task.

Heuristics

The results of these experiments suggest modifying instructional concept presentation based on learner mastery. Based on the findings of this study, presentation of examples and nonexamples after the learner has achieved mastery may result in learners losing interest in the learning task.

Critique

The results of this study are important as they suggest that optimal presentation varies based on each individual learner’s level of mastery. In this controlled experiment, using a computer-based model, the researchers were able to alter the individual presentation based on each learner’s level of mastery, which resulted in more effective instruction.
However, altering presentation to an individual learner in real-world instructional settings is difficult, especially in group face-to-face settings. Therefore, while the results suggest an important finding with regard to tailoring instruction to meet the individual learner, such modifications may not be feasible in practice.
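To make the nonadaptive/adaptive contrast concrete, here is a minimal hypothetical sketch (a simplification, not the authors' Bayesian model): the nonadaptive strategy presents a fixed number of instances, while the adaptive strategy stops as soon as a run of correct on-task responses suggests the concept has been acquired.

```python
# Hypothetical sketch contrasting nonadaptive vs. adaptive instance
# presentation; the mastery rule (a streak of correct responses) is an
# illustrative stand-in for the study's Bayesian strategy.
def present_instances(instances, learner, adaptive=False,
                      fixed_n=20, streak_to_master=4):
    """Return the number of instances actually presented to the learner."""
    correct_streak = 0
    shown = 0
    for instance in list(instances)[:fixed_n]:
        shown += 1
        if learner(instance):        # True if learner classifies correctly
            correct_streak += 1
        else:
            correct_streak = 0
        # Adaptive strategy: stop once on-task responses indicate mastery.
        if adaptive and correct_streak >= streak_to_master:
            break
    return shown

# A learner who answers correctly from the start finishes early under
# the adaptive strategy but sees every instance otherwise.
always_right = lambda instance: True
print(present_instances(range(20), always_right, adaptive=True))   # 4
print(present_instances(range(20), always_right, adaptive=False))  # 20
```

The shorter run for the already-proficient learner mirrors the study's finding that the adaptive groups finished faster without sacrificing posttest performance.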

IDT 873: 4C / ID Model and the Cognitive Load of Authentic Tasks

IDT 873 Abstract: Cognitive Task Analysis
Jennifer Maddrell

van Merrienboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the Load Off a Learner’s Mind: Instructional Design for Complex Learning. Educational Psychologist, 38(1), 5-13.

Overview

Citing decades of prior cognitive load theory and research, van Merrienboer, Kirschner, and Kester (2003) offer a theoretical framework and instructional design model for complex learning. Noting a recent emphasis on authentic learning tasks (such as project- and problem-based learning approaches) to support complex learning, they consider the implications for cognitive load and offer a model designed to manage both intrinsic and extraneous cognitive load.

Theory

While the theories underlying the use of authentic learning tasks may vary, a common assumption is that authentic tasks help learners to integrate the knowledge and skills necessary for complex task performance (van Merrienboer et al., 2003). However, given novice learners’ weak problem-solving methods, they face high extraneous cognitive load when confronted with authentic tasks. In addition, the complexity inherent in the authentic task presents high intrinsic cognitive load. Therefore, based on cognitive load theory, engaging in highly complex authentic learning tasks may strain the novice learner’s limited working memory and subject the learner to excessive cognitive load.

Proposal

van Merrienboer et al. focus their attention on both the nature and the delivery timing of the presented information. They suggest that supportive information (knowledge necessary for problem solving and reasoning) is best presented before the learner engages in the learning task. Such supportive task-specific information is inherently complex and needed in order to know how to approach the learning task. Presenting the supportive information first helps learners construct schemas to be used as they begin task performance. In contrast, van Merrienboer et al.
suggest that procedural information (the how-to instructions for rule application) is best presented when needed during task performance. They argue that such just-in-time presentation of procedural information reduces the potential for split-attention effects that may occur when the learner attempts to integrate procedural information learned previously with actions he or she is taking now.

Heuristics

From these suggested practices, van Merrienboer et al. offer an instructional design model (the 4C / ID model) for complex learning that focuses on four components: 1) learning tasks, 2) supportive information, 3) procedural information, and 4) part-task practice. The heuristic for designers within the 4C / ID model is to sequence from simple versions of the whole task, beginning with a high level of support and ending with a complex version without support. In addition, as discussed above, supportive information is to be presented in advance of performance, while procedural information required to perform the task is to be presented as the task is being performed. Finally, to encourage automaticity, additional repetitive practice should be incorporated for parts of the task.

Critique

The focus of the article is not an examination of the effects of authentic learning tasks on learning, but rather the implications of incorporating such tasks on the learner’s cognitive load. As such, the article offers a bridge across theory, research, and practice. A key strength of the article is the authors’ focus on the reality of limited working memory and the high cognitive load imposed by authentic learning tasks. The 4C / ID model offers designers a way of incorporating authentic tasks while at the same time better managing cognitive load. However, as a theoretical article, it does not offer results from a study of the model in practice. Do the heuristics within the 4C / ID model help to manage cognitive load?
Further, do authentic learning tasks designed within the framework of the 4C / ID model effectively and efficiently support learning? These questions are left to future research.

IDT 873: Cognitive Task Analysis for Troubleshooting

IDT 873 Abstract: Cognitive Task Analysis
Jennifer Maddrell

Schaafstal, A., Schraagen, J. M., & van Berlo, M. (2000). Cognitive task analysis and innovation of training: The case of structured troubleshooting. Human Factors, 42(1), 75–86.

Research

Overview. Following an instructional design evaluation of an existing Royal Netherlands Navy maintenance training course, Schaafstal, Schraagen, and van Berlo (2000) observed a gap between the instruction and the practice of troubleshooting the subject system. They observed that the existing instruction was based largely on the technical equipment documentation from engineers, which focused exclusively on the system’s components. Following a comprehensive cognitive task analysis (CTA), Schaafstal et al. revised the instruction under the assumption that maintenance system troubleshooting is a complex cognitive task requiring not only knowledge about the system’s components, but also knowledge about how the system functions and how to consider possible causes and solutions to maintenance problems. The CTA consisted of several observational studies of troubleshooting with technicians of varying expertise levels. Based on information from the CTA, a modified course was prepared which focused on a functional understanding of the system versus the component orientation of the prior course. In addition, general troubleshooting strategies were incorporated which gave learners instruction on how to a) describe the problem, b) generate causes, c) test causes, d) repair, and e) evaluate solutions.

Purpose. The purpose of the presented research was to evaluate the modified structured troubleshooting training course and to compare it with the existing maintenance training course. Schaafstal et al. predicted superior outcomes from the revised course.

Methodology.
A series of experimental studies compared the learning outcomes of maintenance trainees taking the new structured troubleshooting training course with groups of maintenance trainees taking the existing training course. Outcome measures included malfunction identification, reasoning, and functional understanding of the system.

Conclusions. The modifications to the course reduced the course duration by 33% (from six to four weeks). Even at the shortened length, those participating in the new course achieved statistically superior results as compared to those in the original course. Based on the results of the study, Schaafstal et al. suggest that novice technicians lack both a systematic approach to troubleshooting and a functional understanding of the equipment. As seen in prior research, they observed that novices face information overload (losing the forest for the trees), lack hierarchically organized cognitive frameworks, lack functional understanding, possess inadequate mental models of the underlying system, and lack the ability to isolate causes of the problem. Therefore, based on the results of their evaluation, they suggest that training in troubleshooting should focus on three areas: 1) system-independent troubleshooting strategies to be used across systems, 2) system-specific functional models, and 3) system-specific domain knowledge.

Heuristics

Results of this research suggest the importance of moving away from a purely component-oriented analysis to what the researchers term a functional decomposition when designing troubleshooting skills instruction. While analysis and instruction on the components is necessary, it is not sufficient. Analysis and instruction should also focus on the functional processes, including likely causes of potential problems and paths to solutions, in order for learners to know what to do when troubleshooting.
Further, the results indicate that training in system-independent troubleshooting skills can further augment troubleshooting instruction.

Critique

The presented research is important for two reasons. First, it suggests a positive influence of CTA on outcomes in troubleshooting training: by revising the instruction to focus on a functional understanding of the system, based on information gleaned in the CTA, the instruction appears to have been significantly improved. In addition, the findings suggest a positive impact from teaching system-independent troubleshooting skills. The paper is also valuable for the information it provides about the evolution of the authors' CTA process, which will be helpful to future designers and researchers.

Unfortunately, the written presentation of this paper is horribly disjointed. It is doubtful that most readers will devote the time necessary to weave a coherent narrative out of the broken threads of theory, prior research, CTA processes, instructional design considerations, research methodologies, and conclusions. There is a wealth of information included in the paper, but the reader must devote an unnecessary amount of effort to piece it all together.

IDT 873: Self-Pacing versus Instructor-Pacing

Behavioral Strategy Abstract: Self-Pacing Versus Instructor-Pacing
Jennifer Maddrell
Old Dominion University
IDT 873 Advanced Instructional Design Techniques
Dr. Morrison
September 8, 2008

Overview

Morris, Surber, and Bijou (1978) report on research conducted to compare achievement, student satisfaction, and retention between self-paced and instructor-paced personalized systems of instruction (PSI). While noting that one of the key features of PSI is the ability of learners to self-pace, the authors cite prior research suggesting that students who are allowed to self-pace may be more likely to procrastinate or to withdraw from the course entirely. These findings have led some to incorporate instructor-paced schedules into PSI. However, what had been less clear in prior research is the impact of self-pacing on learner achievement (both short term and longer term following course completion) and learner satisfaction with the learning experience.

Research Purpose. The purpose of the reported study is to compare progress rates, withdrawal rates, achievement, satisfaction, and longer term retention between learners completing self-paced and instructor-paced PSI. The researchers set out to extend prior research by focusing on the effect of pacing on these measures.

Methodology. All 149 students enrolled in an introductory child development class were randomly assigned to either self-paced (S-P) or instructor-paced (I-P) PSI. The syllabi, course materials, and assessments were identical for both groups. Within each of the 15 units of the PSI, all learners were required to achieve 90% mastery on a 10-item short-answer essay quiz and oral examination at a testing center, or to take make-up quizzes until 90% mastery was achieved.
Learners in the S-P condition were able to complete all 15 required units at their own pace within the semester. Semester grades for the S-P group were based solely on the number of units mastered. In contrast, the I-P students were subject to a grading scheme that could result in a one-letter-grade drop if the student did not complete at least one unit of material each week. To evaluate and compare pacing, the semester was divided into five 15-day increments. To measure student achievement, a 53-item multiple-choice pretest and posttest based on a few items from each unit was administered to all learners. In addition, nine months after the semester, students were asked to return (with compensation) for a follow-up test. All students were informed that the pretest and posttest would not affect final grades. A course evaluation questionnaire addressed student satisfaction with the course.

Conclusions. As shown in prior research, the completion rates of the S-P and I-P groups were not the same. I-P learners progressed through the material at a more even rate throughout the semester, while S-P learners completed fewer units in the initial time periods than in the latter time periods. However, there were no statistically significant differences in course withdrawal rates, final grade distributions, course evaluations, or achievement measures between the two groups. Yet there were statistically significant differences in the number of repeated quizzes during the semester and in the follow-up retention scores. S-P students repeated 4.1% of their quizzes, while I-P students repeated 7.2% of theirs. While the S-P learners' delayed rate of completion may signal cramming or procrastination, self-pacing did not appear to negatively impact course achievement or withdrawal rates, which were two areas of concern in prior PSI practice and research.
Further, the S-P learners' ability to control pacing may have aided their longer term retention of the material.

Heuristics

Based on the results of this experiment, lesson pacing by the instructor or designer may reduce cramming and procrastination, but may do nothing to improve learner achievement, overall satisfaction, or course retention. Further, allowing learners to self-pace may improve their longer term retention of the material. However, it is important to note that these results are based on otherwise rigid instructional parameters in which learners were required to complete highly structured lesson units within a single semester. Therefore, while the learners were allowed to complete the units at their own pace during the course of the semester, they otherwise had little control. As such, it is unclear whether this heuristic would apply to a more flexible learning environment in which learners had more choice, such as in the selection or sequencing of instructional content.

Critique of Article

A key strength of this research is the direct comparison of the effects of pacing on achievement, course retention, satisfaction, and longer term retention within an otherwise highly structured instructional setting. The research methodology appears effective at comparing the two types of PSI pacing schemes. However, as noted above, these results are based on otherwise rigid instructional parameters, and it is unclear whether they would be replicated in situations where more learner choice and control are available. In addition, the research does little to further an evaluation of the effect of PSI on a broad range of learning outcomes. In reporting on learning achievement, the authors do not elaborate on what was learned. Did the PSI lead to anything more than basic recall and retention of facts or concepts? Are the learners able to apply the instruction in diverse contexts?
Unfortunately, the authors offer the results as a demonstration of learning achievement, but it is unclear from the results precisely what was learned.

References

Morris, E. K., Surber, C. F., & Bijou, S. W. (1978). Self- versus instructor-pacing: Achievement, evaluations, and retention. Journal of Educational Psychology, 70(2), 224–230.

IDT 873: Note-taking Generative Strategy

Generative Strategy Abstract: Note Taking as a Generative Strategy
Jennifer Maddrell
Old Dominion University
IDT 873 Advanced Instructional Design Techniques
Dr. Morrison
September 2, 2008

Overview

Citing a large and conflicting body of prior research, Peper and Mayer (1986) suggest that three main hypotheses have been forwarded regarding the effect of note taking on a learner's cognitive processing: 1) the attention hypothesis (note takers pay closer attention to the to-be-learned material), 2) the distraction hypothesis (note takers concentrate on the act of writing instead of listening), and 3) the generative hypothesis (note taking enables learners to actively relate new material to existing knowledge). Peper and Mayer suggest that evaluations of both the attention and distraction hypotheses have tended to focus on how much is recalled. In contrast, by focusing on the generative hypothesis within their reported experiments, their goal is to evaluate the difference in what is learned between note takers and non-note takers.

Research Focus. Peper and Mayer (1986) focus on three generative hypothesis predictions. The first prediction is that note takers will perform better on far-transfer test measures (problem solving) and worse on near-transfer test measures (verbatim recognition and fact recall). This is based on the assumption that note taking offers an opportunity for integration with existing knowledge, but that the process of reorganizing the new information interferes with near-transfer verbatim recall of specific facts. The second prediction is that these results will be stronger for those unfamiliar with the material, given the processing required to integrate and organize new information. The third prediction is that the results associated with the note taking generative activity will be similar to those for other types of generative activities.

Methodology.
Two separate experiments were conducted to test these predictions. The first experiment involved a group of high school students, while the second included college students at the University of California, Santa Barbara. To test the first hypothesis, Experiment 1 included only subjects unfamiliar with the to-be-learned topic. The students were divided equally between a "notes" group and a "no-notes" group. The same video lecture was shown to each group. Afterward, the notes were collected from the "notes" group, and the same posttest was administered to both groups. Recognition questions asking the students to identify sentences that had occurred verbatim in the lecture were followed by fact retention and problem solving questions. To assess the second and third hypotheses, Experiment 2 included some subjects who were familiar with the topic and added a question-answering treatment group. The same materials and posttests were used for both experiments.

Conclusions. In contrast to the attention hypothesis, the superior performance of the "no-notes" group on verbatim recognition measures does not support the prediction that note taking results in better total recall. Further, in contrast to the distraction hypothesis, the "notes" group performed better than the "no-notes" group on some measures. However, significant differences existed between the measures of what was learned (far-transfer versus near-transfer measures), supporting the generative hypothesis. Note takers excelled on the far-transfer (problem solving) test measures. In contrast, "no-notes" takers were more successful on near-transfer verbatim and fact recall measures. Supporting the second prediction, the results in Experiment 2 were strong for learners unfamiliar with the topic, but not for familiar learners. Further, in support of the third prediction, the other tested generative activity (the question-answering treatment) produced results similar to those of note taking.
Peper and Mayer (1986) viewed these results as support for generative theory. They concluded that the process of note taking (especially for those unfamiliar with the material) encourages note takers to assimilate new information with past experience and to make interconnections among pieces of information.

Heuristics

Based on the results of these experiments, learners should be offered the opportunity to take notes as a means of supporting the long-term encoding of new information. This research suggests that the note taking process offers learners the opportunity to integrate and organize new information with existing knowledge. However, the research also suggests that these results are more likely when the to-be-learned information is unfamiliar to the learner. Further, the process of reorganization and integration with prior knowledge involved in note taking may interfere with verbatim encoding of information and facts.

Critique of Article

A key strength of this research is the evaluation of note taking across three separate hypotheses: attention, distraction, and generative. Further, the research highlights the advantages, as well as the potential limitations, of note taking on encoding. However, it is important to note that the test measures were based on cued recall rather than free recall; a possible avenue for future research would be to replicate the experiments with free recall test measures. In addition, the analysis did not include a qualitative examination of the notes taken by students. An analysis of the qualitative features of the notes, such as the use of diagrams, would have helped to augment the findings. Also, as noted by the authors, this research provides an incomplete analysis of the relationship between note content and problem-solving performance.

References

Peper, R. J., & Mayer, R. E. (1986). Generative effects of note-taking during science lectures. Journal of Educational Psychology, 78(1), 34.