TOWARD ASSESSMENT OF DESIGN SKILL IN ENGINEERING

Effective Assessment of Engineering Design in an Exam Environment

One of the most difficult aspects of engineering education is the effective teaching of engineering design. While it is paramount that every engineering student be exposed to engineering design, it can be difficult to assess the design skills of individual students. Design assessment is typically conducted at the project or team level, and many assessments of design effectiveness rely solely on the capstone experience. This is clearly inadequate. What is needed instead is an effective method for partially assessing the design capabilities of individual students in an exam setting. This article discusses an approach to assessing design skills in the exam environment that allows some design skills and practices to be assessed without placing undue stress on the student. The approach involves a short case study provided to students as part of an exam review sheet, the construction of design questions based on the case study, the definition of detailed rubrics to assess the quality of the design, and the administration of the exam in a controlled setting. Student achievement is discussed, as are the advantages and disadvantages of this approach to assessing design skills.

Development of a Supplemental Evaluation for Engineering Design Courses

2016

Compared to the learning that occurs in most engineering courses, the learning that occurs in design courses is more dependent on students and less dependent on instructors. Because typical course evaluations are instructor-centric and do not provide information about students' contributions to their learning, we developed a supplemental evaluation to assess student actions and attitudes important to a quality design experience. We detected statistically significant, logical shifts in self-reported practices and attitudes as student cohorts progressed through design projects. Factor analysis showed that the evaluation questions could be grouped into eight thematic categories, with most of the questions assessing student ability to function independently in uncertain situations, self-perception of maturation and achievement, and acceptance of responsibility for learning. Accepting responsibility for learning and believing that design experiences help with the transition from being a student...

Engineering student design processes: Looking at evaluation practices across problems

The act of evaluating solutions is a common engineering design activity. Over the past eight years we have used verbal protocol analysis to gain insight into engineering students' design processes. This study includes protocols from 32 freshmen and 61 seniors who solved two design problems that differed in complexity. In this dataset, 18 of the subjects solved the same problems as both freshmen and seniors. This dataset has allowed us to characterize differences between freshmen and seniors on a global scale as well as an individual scale. Additionally, the inclusion of two problems that vary in complexity allows us to analyze differences in performance and behaviors across problems. One of the important findings that has emerged from an across-problem comparison is a difference in the amount of time that students spent evaluating their solutions. In particular, (i) students spent more time evaluating their solutions and (ii) a greater number of students evaluated their solutions when solving a more "complex" problem as compared to a less "complex" one. In this paper, we present these results and discuss reasons for these differences, including differences in the complexity of the two problems and the kinds of processes students employed while designing their solutions. We also discuss the relationship between time spent evaluating and the number of constraints considered (constraints either given or introduced by the student). We conclude with a summary of implications for engineering education.

A Conceptual Model For Capstone Engineering Design Performance And Assessment

2006 Annual Conference & Exposition Proceedings

Assessment in capstone engineering design courses is vital to engineering education programs. The capstone design course is the climax of design education and often the context for much of the assessment done in engineering degree programs. Capstone design course instructors' admittedly low confidence in assessing student performance in these courses poses a crucial obstacle to the assessment process. A key issue is the lack of a clear outcomes definition for engineering design and of sound, defensible assessments for those outcomes. This paper draws from findings in the design literature and from engineering design education experience to construct a conceptual model of engineering design that guides the development of associated design learning outcomes and the assessment of student achievement in design.

A Rubrics-Based Methodological Approach for Evaluating the Design Competency of Engineering Students

In this research, an evaluation roadmap is developed using rubrics to give instructors guidance in clarifying grading expectations to students, while providing a "three-dimensional" methodological approach for evaluating and ranking students' competency in design engineering. The effort focused on developing a model to evaluate the extent to which students have applied their knowledge in various design engineering projects over their undergraduate education. More specifically, a multidimensional rubric was developed to accommodate the levels at which students are taught the concepts required for their design projects; that is, how well students understand a topic when they are first introduced to it, when they are taught it at an intermediate level, and when advanced learning is expected.

Special session — Assessing student learning of engineering design

2011 Frontiers in Education Conference (FIE)

Design is a central aspect of engineering and engineering education, but is challenging to teach and even more challenging to assess. In this special session participants co-construct an understanding of design and what aspects of design should be (and can be) assessed. Additionally, the special session will review the instrument development process (including the process of validating instruments) and will provide examples of existing instruments for assessing learners' understanding of design. These instruments measure a variety of topics and concepts related to design, have been designed for many different audiences and have been developed for different purposes. This session will equip educators with tools that are useful for assessing and promoting students' understanding of design. Additionally, this session may benefit educators and researchers interested in adopting or adapting design assessments for use with K-12 populations.

Description of, and Outcomes from, a Novel First Year Engineering Design Course

Proceedings of the Canadian Engineering Education Association (CEEA)

In the Fall of 2021, the University of Saskatchewan’s College of Engineering implemented a new first-year engineering design course called GE 142 (Design I). In comparison to similar courses in other engineering programs, the course was unique in a few respects. First, it ran from mid-October to mid-December and included seven lectures and four labs. Second, it focused almost entirely on problem definition. Third, the assessment system was competency based. Each of these elements made for a unique design course, and each is described in detail. The course had a number of learning outcome goals in the general areas of knowledge, skills, experiences, and attitudes. Knowledge was assessed using an automated adaptive quiz system employing Mobius™ software, linked to the Canvas™ Learning Management System (LMS). Design skills were assessed through a series of six assignments that focused on the ability to characterize design problems, maintain an effective logbook, ...

Assessing Engineering Design: A Comparison of the Effect of Exams and Design Practica on First-Year Students’ Design Self-Efficacy

Journal of Mechanical Design

In response to calls for engineering programs to better prepare students for future careers, many institutions offer courses with a design component to first-year engineering students. This work proposes that traditional exam-based assessments of design concepts are inadequate, and alternative forms of assessment are needed to assess student learning in design courses. This paper investigates the self-efficacy differences between a traditional exam and a two-part practicum as a mid-semester assessment for introductory engineering students enrolled in a first-year design course. Increased self-efficacy has been linked to various positive student outcomes and increased retention of underrepresented students. The practicum consisted of an in-class team design task and an out-of-class individual reflection, while the exam was a traditional, individual written exam. All students completed a pre-assessment survey and a post-assessment survey, both of which included measures of design self...

How Do First-Year Engineering Students Experience Ambiguity in Engineering Design Problems: The Development of a Self-Report Instrument

2016 ASEE Annual Conference & Exposition Proceedings

Design is widely recognized as a keystone of engineering practice. Within the context of engineering education, design has been categorized as a type of ill-structured problem solving that is crucial for engineering students to engage with. Improving undergraduate engineering education requires a better understanding of the ways in which students experience ill-structured problems in the form of engineering design. With special attention to the experiences of first-year engineering students, prior exploratory work identified two critical thresholds that distinguished students' ways of experiencing design as less or more comprehensive: accepting ambiguity and recognizing the value of multiple perspectives. The goal of the current (work-in-progress) research is to develop and pilot a self-report instrument to assess students' relation to these two thresholds at the completion of an ill-structured design project within the context of undergraduate engineering education. The specific research questions addressed in this study are 1) whether the piloted self-report instrument can be used to identify discrete constructs, and 2) how these constructs align with prior qualitative research findings. The objective of this study was addressed using a quantitative exploratory research design. Items for the self-report Likert-scaled instrument were designed to distinguish student experiences that either accept or reject the presence of ambiguity and the value of multiple perspectives. The instrument was disseminated to a total of 214 first-year engineering students. Exploratory factor analysis was used to identify the constructs that emerge from the self-report data, and these constructs were checked for alignment with the previously identified thresholds.
The results of this investigation will be used to help advance progress toward an easily administered instrument that assists engineering educators in identifying students in need of intervention or explicit instruction related to critical aspects of learning engineering design. The instrument could also be used to track student growth over time and, with further development, to provide evidence for ABET student outcomes.
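Several of the studies above (the supplemental-evaluation paper and the ambiguity self-report instrument) rely on exploratory factor analysis of Likert-scale responses to group items into constructs. As a rough illustration of that analysis step, the sketch below fits a two-factor model to fully synthetic stand-in data using scikit-learn's `FactorAnalysis`; the item counts, factor labels, and loading pattern are invented for the example and do not come from any of the instruments described here.

```python
# Minimal sketch of exploratory factor analysis on Likert-style data.
# All data here is synthetic; 214 respondents matches only the sample
# size reported in the last abstract, nothing else is from the studies.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 214, 8

# Simulate two hypothetical latent constructs (e.g., "accepting
# ambiguity" and "valuing multiple perspectives") driving item responses.
latent = rng.normal(size=(n_respondents, 2))
loadings = np.zeros((2, n_items))
loadings[0, :4] = 0.9   # items 1-4 load on factor 1
loadings[1, 4:] = 0.9   # items 5-8 load on factor 2
responses = latent @ loadings + rng.normal(scale=0.3, size=(n_respondents, n_items))

# Map continuous responses onto a 1-5 Likert scale.
likert = np.clip(np.round(responses + 3), 1, 5)

# Fit a two-factor model and inspect the estimated item loadings.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(likert)
print(np.round(fa.components_, 2))
```

In practice researchers would also examine eigenvalues or fit indices to choose the number of factors, apply a rotation (e.g., varimax) for interpretability, and check how items cluster against the theorized constructs.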