Cognitive Engineering: Toward a Workable Concept of Mind
Related papers
The state of cognitive systems engineering
IEEE Intelligent Systems, 2002
The widespread introduction of the personal computer, beginning about 1970, helped spawn the field of inquiry called cognitive engineering, which concerns itself with such things as interface design and user friendliness. Since then, this field has taught us many important things, including two major lessons. First, the road to user-hostile systems is paved with designers' user-centered intentions. Even smart, clever, well-intentioned people can build fragile, hostile devices that force the human to adapt and build local kludges and workarounds. Worse still, even if you are aware of this trap, you will still fall into it. Second, technology developers must strive to build truly human-centered systems. Machines should adapt to people, not the other way around. Machines should empower people. The process of designing machines should leverage what we know about human cognitive, perceptual, and collaborative skills.
Time to rethink
We're in a new ballgame, in which the modern "sociotechnical" workplace is characterized by changing collaborative mixes of humans and machines. Advances in technology have opened new horizons that are changing the nature of work and education, including distance learning, distance collaboration, training support, and performance support.[1-3] Consider, for example, the notion from human factors engineering of task analysis: you can decompose jobs into invariant linear or tree-like sequences of actions (and some cognitions). This notion has a long history, dating to the applied psychological research in Europe dubbed psychotechnics in the late 1800s. (This notion of task differs from the AI notion of "generic tasks."[4]) Studies of the modern workplace suggest that significant problems can arise when you design systems based on a decomposition of tasks into invariant sequences of prescribed steps. Sometimes people might appear to be conducting linear sequences of actions when they are actually engaging in context-sensitive, knowledge-driven choices among alternative actions.[5,6] Would loss of the uplink to the weather radar keep a forecaster from crafting a forecast? No; the forecaster can work around it, because knowledge permits the creation of alternative paths to the goal. When you are forced to adapt, kicking and screaming, to a new software upgrade and are frustrated by changes in functionality, are you totally paralyzed? No, you can craft a workaround. The point is not that something is inherently wrong with the notion of a task as an expression of a particular goal, but that task analysis as it has been applied can sometimes be limiting. When regularly occurring sequences are regarded as invariant and therefore predefined, systems designed on this basis run a substantial risk of being flawed. Specifically, you can expect them to lead to fragilities, hostilities, and automation surprises.[3,7] In short, they might not be human-centered. Over the past decade, research activities have converged on new notions of "cognitive field research" and new frameworks that point toward methodologies for crafting human-centered systems.[8-13]
Understanding
Research over the past decade has led to
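The abstract's contrast between invariant task sequences and knowledge-driven choice among alternative paths can be sketched in code. The example below is purely illustrative and not from the paper: the resource and method names (such as radar_uplink) are hypothetical. A system designed around a fixed sequence blocks when one resource is lost, while a goal-directed chooser finds a workaround path to the same goal, mirroring the forecaster example.

```python
# Illustrative sketch: invariant task sequence vs. goal-directed choice
# among alternative methods. All names here are hypothetical.

def run_fixed_sequence(steps, available):
    """An invariant, predefined sequence: blocked if any step's resource is missing."""
    for step, resource in steps:
        if resource not in available:
            return f"blocked at '{step}' (missing {resource})"
    return "forecast produced"

def run_goal_directed(methods, available):
    """Knowledge-driven choice: try alternative paths to the same goal."""
    for name, resources in methods:
        if all(r in available for r in resources):
            return f"forecast produced via '{name}'"
    return "no viable path"

# The radar uplink is down, but other data sources remain.
available = {"surface_obs", "satellite_imagery"}

fixed = [
    ("gather radar data", "radar_uplink"),
    ("analyze", "surface_obs"),
    ("issue forecast", "surface_obs"),
]

methods = [
    ("radar-based", ["radar_uplink", "surface_obs"]),
    ("satellite workaround", ["satellite_imagery", "surface_obs"]),
]

print(run_fixed_sequence(fixed, available))   # blocked at the first step
print(run_goal_directed(methods, available))  # succeeds via the workaround
```

The fixed sequence halts at its first missing resource, whereas the goal-directed version treats the task as a goal with alternative methods, which is closer to how the abstract describes practitioners actually working.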
Inventing the Future of Cognitive Work: Navigating the "Northwest Passage"
2005
Computer scientists, engineers, managers, and practitioners make claims about how new technologies will change cognitive work: how workers in various fields of practice solve problems in analysis, fault management, control, coordination, and replanning. When systems are built and fielded based on these beliefs, the actual effects on practice, including new forms of error, are quite different from what was envisioned, as users work around complexities or exploit new capabilities (Woods and Dekker, 2000). The gap between hopes and reality in changing the face of cognitive work arises from two factors: (1) claims for new technology ignore the research findings of the field of Cognitive Systems Engineering on how people cope with complexity, and (2) advocates for new technology are trapped in a narrow range of possible expressions of the new capabilities relative to the demands of cognitive work. Because design methods have not had the desired impact of guiding design in the context of cognitive work, the voyage of discovery that should follow from insight through research has been limited. Concepts were identified, but their implementation in the world of practice calls for an extended presence of design thinking in technology application, one that is human-centered, not technology-oriented (Winograd and Woods, 1997; Hoffman et al., 2002; Hoffman et al., 2004). This research has examined how technologists envision the future of cognitive work and has found a variety of oversimplifications that narrow the process of discovery (Feltovich et al., 2004). Based on these results, the paper proposes an integration of methods from Cognitive Systems Engineering and Design Innovation for finding promising directions (e.g., Winograd and Flores, 1986; Woods and Christoffersen, 2000; and Alexander, 1964; Jones, 1970, respectively).
The integration, or de:cycle, coordinates three roles (and the associated processes and artifacts produced through these design processes): the practitioner, who adapts to complexity; the innovator, who envisions what would be useful; and the technologist, who brings the anticipated change into the world of practice.
1986
PROLOGUE
Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. It is a type of applied Cognitive Science, trying to apply what is known from science to the design and construction of machines. It is a surprising business. On the one hand, there actually is quite a lot known in Cognitive Science that can be applied. But on the other hand, our lack of knowledge is appalling. On the one hand, computers are ridiculously difficult to use.
Electronics, 2022
The understanding of human cognition has not been fully achieved; therefore, Information Systems (IS) are not yet fully synchronized with humans. By understanding the cognition process, we will be able to create human-tailored Cognitive Information Systems (CISs). The necessity for this research is supported by the fact that present business decision makers face challenges that they cannot solve in the time available without a CIS. Here, the authors aim to underpin the adaptability of cognitive resonance and the role of info-communication via Human–Computer Interaction (HCI), including linkage, relation, and impacts, showing the direction needed to increase the effectiveness of HCI, which leads to improved CIS building with a higher cognitive level. The applied research methodology consists of analysis and assessment of the available publications following a comparative study pattern; a model-building paradigm was then used for observing and monitoring work with a CIS during HCI. We found a huge gap regarding information processing in the recent literature, caused by the field's wide interdisciplinarity. Our research approach provides an overview of how other disciplines influence HCI and how the human mental model is supported with added value.
Conceptions of cognition for cognitive engineering
The International Journal of Aviation Psychology, 2011
Cognitive processes, cognitive psychology tells us, unfold in our heads. In contrast, several approaches in cognitive engineering argue for a shift of unit of analysis from what is going on in the heads of operators to the workings of whole socio-technical systems. This shift is sometimes presented as part of the development of a new understanding of what cognition is and where the boundaries of cognitive systems are. Cognition, it is claimed, is not just situated or embedded, but extended and distributed in the world. My main question in this article is what the practical significance is of this framing of an expanded unit of analysis in a cognitive vocabulary. I focus on possible consequences for how cognitive engineering practitioners think about function allocation in system design, and on what the relative benefits and costs are of having a common framework and vocabulary for talking about both human and technical system components. I argue for what I call an "expansive but deflated" conception of cognition, primarily on pragmatic grounds. In addition, I claim that the important lesson of the "boundaries of cognition" debate in cognitive science is the negative claim that there is not anything special about the biological boundary of the skin and skull per se, rather than some positive claim about where the boundaries of extended or distributed cognitive systems really are. I also examine the role of the concept of cognition in the theoretical frameworks of Distributed Cognition, Joint Cognitive Systems (also known as Cognitive Systems Engineering), and Cognitive Work Analysis.
Cognitive engineering: issues in user-centered system design
2002
Suppose you were assigned the task of designing software to help automobile mechanics troubleshoot engine malfunctions. How would you approach the problem to ensure that you developed a useful and usable system? Or, suppose you were asked to develop computer-based procedures to replace the paper-based procedures that operators now use to monitor and control a paper-mill process. Or, suppose you were asked to build an information system to support customer service personnel in fielding phone inquiries. How would you know what information to include in the computer database or knowledge base? On what basis would you design the human-computer dialogue structure? How would you know you have developed a usable system that aids users in accomplishing their tasks and leads to improved performance? These questions do not have simple answers. In this article, we introduce some basic concepts from an emerging field called cognitive engineering that is designed to address these types of questions.
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
IEEE Expert / IEEE Intelligent Systems, 1987
Hubert Dreyfus has acquired a reputation as one of the most vehement and outspoken critics of the goals and accomplishments of research in artificial intelligence. The recently published book by Hubert and Stuart Dreyfus, Mind over Machine, expands upon the theme expressed in earlier works [2]: that AI systems are based upon an overly simplistic model of human problem solving and, as a result, will never achieve levels of performance approaching those of human beings.