Disseminating Project Outcomes in a Scholarly Poster
Related papers
Evaluating Evidence-Based Studies and Design Projects
HERD: Health Environments Research & Design Journal, 2015
Health care design research answers important questions about the effects of specific design features on patient, provider, or organizational outcomes. Research generates data, findings, and new knowledge that becomes evidence when published or presented, and that designers can use in everyday practice. The focus of evidence-based design (EBD) has shifted to integrating and using existing credible evidence in the design process. But how do we know whether evidence is credible and of high quality? The purpose of this methodology column is to describe methods and tools that clients and design firms can use (1) to critique and appraise evidence (expert opinion, best-practice examples, or published research) to determine its level and quality and (2) to evaluate projects to determine the level and quality of the evidence used to guide design decisions.
Evaluating the Evidence in Evidence-Based Design
JONA: The Journal of Nursing Administration, 2010
Evidence-based practice has become a valued process on which to base our clinical and facility design decisions, yet not all evidence is created equal. This facility design department aims to expand nurse leaders' knowledge and competencies in health facility design and to enable them to take leadership roles in design efforts. This article focuses on the need to critically appraise facility design research articles and to rate the strength of the evidence using a hierarchical model.
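The abstract names a hierarchical model without spelling out its levels. As a rough illustration only, the Python sketch below encodes a seven-level hierarchy of the kind common in nursing evidence-based practice texts; the level names and the strength_label helper are assumptions for demonstration, not the article's instrument.

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """Illustrative seven-level hierarchy (strongest = 1), modeled on scales
    common in the nursing literature; not this article's own instrument."""
    SYSTEMATIC_REVIEW_OF_RCTS = 1
    SINGLE_RCT = 2
    CONTROLLED_TRIAL_WITHOUT_RANDOMIZATION = 3
    CASE_CONTROL_OR_COHORT_STUDY = 4
    SYSTEMATIC_REVIEW_OF_QUALITATIVE_STUDIES = 5
    SINGLE_QUALITATIVE_OR_DESCRIPTIVE_STUDY = 6
    EXPERT_OPINION = 7

def strength_label(level: EvidenceLevel) -> str:
    """Collapse the hierarchy into the coarse strong/moderate/weak bands
    an appraiser might report for a facility design study."""
    if level <= EvidenceLevel.SINGLE_RCT:
        return "strong"
    if level <= EvidenceLevel.CASE_CONTROL_OR_COHORT_STUDY:
        return "moderate"
    return "weak"

# Example: a post-occupancy cohort study of a new unit layout
print(strength_label(EvidenceLevel.CASE_CONTROL_OR_COHORT_STUDY))  # -> "moderate"
```

The point of such a scheme is that the appraisal step becomes explicit and repeatable: two nurse leaders reviewing the same study should assign it the same level before debating what it implies for the design.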
Ascertaining Success in Graphic Design: An Evidence-Based Approach
The state of the graphic design industry has made some practitioners apprehensive. By employing widely available design software and online tutorials, self-taught lay people are able to perform the work of professional graphic designers, often faster and more cheaply. Many companies do not consider professional design to be a necessity. Clients who do not value good design do not understand the interrelatedness of aesthetics and effective communication, and it is difficult for designers to demonstrate the latter. Thus, designers are left making empty claims about design successes, which are often based on arbitrary and unreliable criteria. When stakes are high, clients expect the professionals with whom they consult to be confident that proffered recommendations will achieve desired outcomes. Such confidence is strengthened when founded in evidence, not intuition alone (Dawes, Faust, & Meehl, 1989). As Becker (1999) writes, “Scientists don’t make decisions without checking the research data. Physicians don’t prescribe penicillin because they like the color pink” (p. 57). Designers, then, should not propose solutions without being able to anticipate and measure the effects of those solutions with confidence. Because of this increased demand for certainty in design, there has been a growing interest in the adoption of evidence-based design (EBD), the practice of grounding design solutions and decisions in a researched and documented knowledge base that includes the analysis and interpretation of research (Stewart-Pollack & Menconi, 2005). This paper will provide an overview of the history of the evidence-based approach, weigh the costs and benefits of its implementation in design, identify facets of the industry that could benefit from evidence-based design, and demonstrate how an evidence-based approach might be implemented in those contexts.
Grounding Evidence in Design: Framing Next Practices
Design Journal, 2017
By focusing on episodes from a case study of healthcare design practice investigated in situ, this paper aims to provide a better understanding of the nature and use of evidence in design. Our account portrays a practice in which sources beyond scientific research findings were also considered. Based on observations and interviews from the field, the paper first provides a brief account of sources and representations of evidence. The varieties of evidence within the observed practice fall into four major groups: precedents, scientific research, embodied knowledge, and anecdotes. We observed how participants in the design process used each of these forms of evidence to formulate and explain their design ideas in terms of mechanistic models that form causal links. These mechanistic arguments, which follow a model of scientific thinking, were repositories of transdisciplinary knowledge involving design and other disciplines.
2013
Problems related to value loss in building design are well known (Huovila et al., 1997; Koskela, 2000). Common problems include clients' requirements not being captured or being lost throughout the design process, little improvement and optimisation of design solutions, and mistakes made whilst developing the design (Huovila et al., 1997). Solutions developed to date are considered insufficient to resolve these issues. Many authors discuss problems associated with the design process in construction, including
Identifying Key Components for an Effective Case Report Poster: An Observational Study
Journal of General Internal Medicine, 2009
BACKGROUND Residents demonstrate scholarly activity by presenting posters at academic meetings. Although recommendations from national organizations are available, evidence identifying which components are most important is not. OBJECTIVE To develop and test an evaluation tool that measures the quality of case report posters and identifies the specific components most in need of improvement. DESIGN Faculty evaluators reviewed case report posters and provided on-site feedback to presenters at poster sessions of four annual academic general internal medicine meetings. A newly developed ten-item evaluation form measured poster quality for specific components of content, discussion, and format (5-point Likert scale, 1 = lowest, 5 = highest). MAIN OUTCOME MEASURES Evaluation tool performance, including Cronbach's alpha and inter-rater reliability; overall poster scores; differences across meetings and evaluators; and the specific components of the posters most in need of improvement. RESULTS Forty-five evaluators from 20 medical institutions reviewed 347 posters. Cronbach's alpha for the evaluation form was 0.84, and inter-rater reliability (Spearman's rho) was 0.49 (p < 0.001). The median score was 4.1 (Q1-Q3, 3.7-4.6; Q1 = 25th percentile, Q3 = 75th percentile). The median score at the national meeting was higher than at the regional meetings (4.4 vs. 4.0, p < 0.001). We found no difference in scores across faculty evaluators. The areas most in need of improvement were: clearly stating learning objectives, tying conclusions to learning objectives, and using an appropriate number of words. CONCLUSIONS Our evaluation tool provides empirical data to guide trainees as they prepare posters for presentation, which may improve poster quality and enhance their scholarly productivity.
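For readers unfamiliar with the reliability statistic reported above, the sketch below shows how Cronbach's alpha is conventionally computed from a posters-by-items score matrix. This is a minimal generic illustration in Python with NumPy, not the authors' analysis code, and the sample ratings are invented.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_observations x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    n_obs, k = scores.shape
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item across posters
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each poster's total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: 6 posters rated on a 10-item, 5-point Likert form
rng = np.random.default_rng(0)
base = rng.integers(2, 6, size=(6, 1))                       # each poster's overall quality
ratings = np.clip(base + rng.integers(-1, 2, size=(6, 10)), 1, 5)
print(round(cronbach_alpha(ratings.astype(float)), 2))
```

A parallel check of inter-rater reliability would correlate two raters' scores over the same set of posters, for example with scipy.stats.spearmanr, which is the kind of statistic the study reports as Spearman's rho.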