Readability analysis of patient information on the American Academy of Otolaryngology–Head and Neck Surgery website

Quantitative Readability Assessment of the Internal Medicine Online Patient Information on Annals.org

Cureus

Background: Approximately 90% of Americans have access to the internet, and the majority search online for medical information pertaining to their own health or the health of loved ones. The public relies heavily on online health information to make healthcare decisions. The American Medical Association (AMA) and the National Institutes of Health (NIH) recommend that publicly available health-related information be written at a sixth- to seventh-grade level. Materials and Methods: Patient education materials available to the public on Annals.org, a website sponsored by the American College of Physicians, were collected. All 89 patient education articles were downloaded from the website and analyzed for ease of readability. The articles were analyzed using readability software that generates five quantitative readability scores: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), and Simple Measure of Gobbledygook (SMOG). All scores except the FRE correspond to the school-grade level required to read the material adequately. Results: The 89 articles yielded the following average scores: FRE 62.8, FKGL 7.0, GFI 8.6, CLI 9.6, and SMOG 9.8. Overall, 87.6% of the articles were written above the seventh-grade level recommended by the AMA and NIH.
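The scoring step described above is straightforward to reproduce. As a minimal sketch (not the package the study used), the open-source textstat Python library exposes all five measures; the input file name below is a hypothetical placeholder.

```python
# Minimal sketch of the five-score readability assessment described above,
# using the open-source "textstat" package rather than the study's software.
import textstat

# Hypothetical placeholder file containing one patient education article.
article_text = open("patient_education_article.txt", encoding="utf-8").read()

scores = {
    "FRE":  textstat.flesch_reading_ease(article_text),
    "FKGL": textstat.flesch_kincaid_grade(article_text),
    "GFI":  textstat.gunning_fog(article_text),
    "CLI":  textstat.coleman_liau_index(article_text),
    "SMOG": textstat.smog_index(article_text),
}

for name, value in scores.items():
    print(f"{name}: {value:.1f}")

# Every measure except FRE approximates a U.S. school-grade level, so values
# above 7 indicate text harder than the AMA/NIH recommendation cited above.
hard = [k for k, v in scores.items() if k != "FRE" and v > 7]
print("Measures above the 7th-grade target:", hard)
```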

Readability assessment of online patient education materials from academic otolaryngology–head and neck surgery departments

American Journal of Otolaryngology, 2013

Purpose: The aim of this study was to compare the readability of online patient education materials among academic otolaryngology departments in the mid-Atlantic region, with the purpose of determining whether these commonly used online resources were written at a level readily understood by the average American. Methods: A readability analysis of online patient education materials was performed using several commonly used readability assessments including the Flesch Reading Ease Score, the Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook, Gunning Frequency of Gobbledygook, the New Dale-Chall Test, the Coleman-Liau Index, the New Fog Count, the Raygor Readability Estimate, the FORCAST test, and the Fry Graph. Results: Most patient education materials from these programs were written at or above an 11th grade reading level, considerably above National Institutes of Health guidelines for recommended difficulty. Conclusions: Patient educational materials from academic otolaryngology Web sites are written at too difficult a reading level for a significant portion of patients and can be simplified.
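Among the tests listed, FORCAST is unusual in that it depends only on the number of single-syllable words in a 150-word sample rather than on sentence length. A rough sketch of its published formula follows; the vowel-group syllable counter is an assumption for illustration, not the counting rule used by readability software.

```python
# Illustrative sketch of the FORCAST test: grade = 20 - (N / 10), where N is
# the number of one-syllable words in a 150-word sample.
import re

def naive_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups; real tools use dictionary lookups.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def forcast_grade(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)[:150]   # first 150-word sample
    monosyllables = sum(1 for w in words if naive_syllables(w) == 1)
    return 20 - monosyllables / 10

# Hypothetical placeholder file with one patient education article.
sample = open("patient_handout.txt", encoding="utf-8").read()
print(f"FORCAST grade estimate: {forcast_grade(sample):.1f}")
```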

Assessing the readability of clinicaltrials.gov

Journal of the American Medical Informatics Association : JAMIA, 2015

ClinicalTrials.gov serves the critical functions of disseminating trial information to the public and helping trials recruit participants. This study assessed the readability of trial descriptions at ClinicalTrials.gov using multiple quantitative measures. The analysis included all 165,988 trials registered at ClinicalTrials.gov as of April 30, 2014. To obtain benchmarks, the authors also analyzed 2 other medical corpora: (1) all 955 Health Topics articles from MedlinePlus and (2) a random sample of 100,000 clinician notes retrieved from an electronic health records system and intended for internal communication among medical professionals. The authors characterized each of the corpora using 4 surface metrics, and then applied 5 different scoring algorithms to assess their readability. The authors hypothesized that clinician notes would be most difficult to read, followed by trial descriptions and MedlinePlus Health Topics articles. Trial descriptions have the longest average ...
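The surface metrics such a corpus comparison relies on are simple counts. The sketch below shows three typical ones computed per document and averaged over a corpus; the specific metrics and the placeholder corpus are illustrative assumptions, not necessarily the four used in the study.

```python
# Sketch of per-document surface metrics averaged over a corpus; the specific
# metrics are illustrative and may differ from the four used in the study.
import re
from statistics import mean

def surface_metrics(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "words_per_sentence": len(words) / max(1, len(sentences)),
        "chars_per_word": mean(len(w) for w in words) if words else 0.0,
        "pct_long_words": 100 * sum(len(w) >= 7 for w in words) / max(1, len(words)),
    }

corpus = ["Placeholder trial description one.", "Placeholder trial description two."]
per_doc = [surface_metrics(doc) for doc in corpus]
for metric in per_doc[0]:
    print(metric, round(mean(d[metric] for d in per_doc), 2))
```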

A Practical Guide to Research: Design, Execution, and Publication

Arthroscopy: The Journal of Arthroscopic & Related Surgery, 2011

Why is this work needed, and who would benefit from it? First of all, we must recognize that this work is ambitious in scope yet practical in intent. The aim is to put together a Research Methods Handbook that can be of practical help to those writing manuscripts for submission to Arthroscopy and similar journals. We are referring to people working full time, taking care of patients, with busy outpatient clinics and fully booked surgical schedules. These are people who do not devote the majority of their time to research, and in most cases they have no formal training in scientific research methods. Since sound research methods are the backbone of a good study, the methods must be solid to ensure that the results are valid. If the methods are not sound from the beginning, the outcome will not be sound either, and the manuscript will not be published despite the investigator's best efforts.

Readability of Online Patient Education Materials From the AAOS Web Site

Clinical Orthopaedics and Related Research, 2008

One of the goals of the American Academy of Orthopaedic Surgeons (AAOS) is to disseminate patient education materials that suit the readability skills of the patient population. According to standard guidelines from healthcare organizations, the readability of patient education materials should be no higher than the sixth-grade level. We hypothesized that the readability level of patient education materials available on the AAOS Web site would be higher than the recommended grade level, regardless of when the material first became available online. Readability scores of all articles from the AAOS Internet-based patient information Web site, "Your Orthopaedic Connection," were determined using the Flesch-Kincaid grade formula. The mean Flesch-Kincaid grade level of the 426 unique articles was 10.43. Only 10 (2%) of the articles had the recommended readability level of sixth grade or lower. The readability of the articles did not change with time. Our findings suggest the majority of the patient education materials available on the AAOS Web site had readability scores that may be too difficult for comprehension by a substantial portion of the patient population.
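For reference, the Flesch-Kincaid grade formula used in this analysis is FKGL = 0.39 x (words/sentences) + 11.8 x (syllables/words) - 15.59. The sketch below implements it with a naive vowel-group syllable heuristic, which is an assumption; published analyses use more careful syllable counting, so exact scores will differ.

```python
# Hand-rolled sketch of the Flesch-Kincaid grade formula:
#   FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; real implementations count syllables more carefully.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("The doctor will repair the torn ligament in your knee."), 2))
```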

Readability assessment of the American Rhinologic Society patient education materials

International Forum of Allergy & Rhinology, 2012

Background: The extensive amount of medical literature available on the Internet is frequently accessed by patients. To effectively contribute to healthcare decision-making, these online resources should be worded at a level that is readable by any patient seeking information. The American Medical Association and National Institutes of Health recommend that the readability of patient information material be between a 4th- and 6th-grade level. In this study, we evaluate the readability of online patient education information available from the American Rhinologic Society (ARS) website using 9 different assessment tools that analyze the materials for reading ease and grade level of the target audience. Methods: Online patient education material from the ARS was downloaded in February 2012 and assessed for level of readability using the Flesch Reading Ease, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook (SMOG) Grading, Coleman-Liau Index, Gunning-Fog Index, FORCAST formula, Raygor Readability Estimate, the Fry Graph, and the New Dale-Chall Readability Formula. Each article was pasted as plain text into a Microsoft Word document and each subsection was analyzed using the software package Readability Studio Professional Edition Version 2012.1. Results: All healthcare education materials assessed were written between a 9th-grade and graduate reading level and were considered "difficult" to read by the assessment scales. Conclusion: Online patient education materials on the ARS website are written above the recommended 6th-grade level and may require revision to make them easily understood by a broader audience.
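The "difficult" label above comes from the Flesch Reading Ease scale rather than a grade level. As a small illustration, the commonly cited FRE bands can be mapped to difficulty labels as follows; individual software packages may draw the boundaries slightly differently.

```python
# Mapping of Flesch Reading Ease scores to the commonly cited difficulty bands;
# band boundaries follow the conventional scale and may vary by software.
def fre_difficulty(score: float) -> str:
    bands = [(90, "very easy"), (80, "easy"), (70, "fairly easy"),
             (60, "standard"), (50, "fairly difficult"),
             (30, "difficult"), (0, "very difficult")]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "very difficult"

print(fre_difficulty(45.0))  # -> "difficult"
```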

Assessing readability formula differences with written health information materials: Application, results, and recommendations

Research in Social and Administrative Pharmacy, 2013

Background: Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and how each formula is applied. These variations complicate interpretations of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. Objectives: This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. Methods: A literature search was conducted to identify the most commonly used readability formulas in the health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about the topic of depression and its treatment. Readability estimates from common readability formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations. Recommendations for their use were provided. Results: The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability of up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, individual readability formulas produced estimates that varied by up to 6 reading grade levels. Conclusions: The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting reading grade level estimates from any formula should be accompanied by information about word sample size, location of word sampling in the text, formatting, and method of calculation.
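The SMOG formula the authors recommend has a compact published form: grade = 1.0430 x sqrt(polysyllable count x 30 / sentence count) + 3.1291, where polysyllables are words of three or more syllables. The sketch below implements that form with a naive syllable heuristic (an assumption, not the paper's counting rule), which also makes it easy to see how different excerpts of the same document can yield different grade estimates; the input file name is hypothetical.

```python
# Sketch of the SMOG grade formula (McLaughlin's published form):
#   grade = 1.0430 * sqrt(polysyllables * (30 / sentences)) + 3.1291
import math
import re

def naive_syllables(word: str) -> int:
    # Crude vowel-group heuristic, used here only for illustration.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    polysyllables = sum(1 for w in re.findall(r"[A-Za-z']+", text)
                        if naive_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * (30 / max(1, len(sentences)))) + 3.1291

# Scoring different excerpts of the same document illustrates the
# sample-selection variability the abstract describes.
text = open("leaflet.txt", encoding="utf-8").read()  # hypothetical file
first_half, second_half = text[: len(text) // 2], text[len(text) // 2 :]
print(round(smog_grade(first_half), 1), round(smog_grade(second_half), 1))
```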

Readability of Patient Education Materials on the American Association for Surgery of Trauma Website

Archives of Trauma Research, 2014

Background: Because the quality of information on the Internet is of dubious worth, many patients seek out reliable expert sources. Per recommendations of the American Medical Association (AMA) and the National Institutes of Health (NIH), the readability of patient education materials should not exceed a sixth-grade reading level. The average reading skill of U.S. adults is at the eighth-grade level. Objectives: This study evaluates whether a recognized source of expert content, the patient education materials on the American Association for Surgery of Trauma (AAST) website, meets recommended readability guidelines for medical information. Materials and Methods: Using the well-validated Flesch-Kincaid formula to analyze grade-level readability, we evaluated the readability of all 16 of the publicly accessible entries within the patient education section of the AAST website. Results: Mean ± SD grade-level readability was 10.9 ± 1.8 for all the articles. All but one of the articles had a readability score above the sixth-grade level. Readability of the articles exceeded the maximum recommended level by an average of 4.9 grade levels (95% confidence interval, 4.0-5.8; P < 0.0001). Readability of the articles exceeded the eighth-grade level by an average of 2.9 grade levels (95% confidence interval, 2.0-3.8; P < 0.0001). Only one of the articles had a readability score below the eighth-grade level. Conclusions: The AAST's online patient education materials may be of limited utility to many patients, as the readability of the information exceeds the average reading skill level of adults in the U.S. This gap in patient comprehension is at odds with the AAST's objectives for its patient education efforts.
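The statistical comparison reported above (mean excess over the recommended grade, a 95% confidence interval, and a P value) can be reproduced with a one-sample t-test. The sketch below uses SciPy on hypothetical grade values, not the study's data.

```python
# One-sample t-test of per-article Flesch-Kincaid grades against the 6th-grade
# recommendation, with a 95% CI for the mean excess. The grades below are
# hypothetical placeholders, not the AAST study's data.
import numpy as np
from scipy import stats

grades = np.array([10.2, 12.1, 9.8, 11.5, 8.7, 13.0, 10.9, 9.5,
                   11.8, 12.4, 10.1, 9.9, 11.2, 12.7, 8.9, 10.6])

excess = grades - 6.0                             # grade levels above the recommendation
t_stat, p_value = stats.ttest_1samp(excess, 0.0)  # H0: mean excess = 0
ci_low, ci_high = stats.t.interval(0.95, len(excess) - 1,
                                   loc=excess.mean(), scale=stats.sem(excess))
print(f"mean excess = {excess.mean():.1f} grades, "
      f"95% CI ({ci_low:.1f}, {ci_high:.1f}), P = {p_value:.2g}")
```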

A Web-Based Software for Reporting Guidelines of Scientific Researches

The Journal of Cognitive Systems, 2021

It is very important to use accurate reporting guidelines when reporting a study in cognitive science and health. This study aims to develop a web-based tool that directs researchers to a reporting guideline, including checklists and flowcharts, appropriate to the type of research design (qualitative, descriptive, experimental, methodological studies, etc.). Materials and Methods: The current study covers qualitative research, systematic review/meta-analysis, case presentations, case series, correlational (ecological), case-control, cross-sectional, cohort, randomized clinical trial, non-randomized clinical trial, field studies (for which there are reporting guidelines for primary protection measures), health care research (animal experiments, etc.), validity studies, consistency studies, and simulation studies. The researcher is asked which epidemiological research design is being used and is directed to the reporting guideline for that research type. During the development of the software, the Dash library in the Python programming language was used. Results: The Scientific Research Guidelines Software developed in this study can improve the reporting quality of studies by guiding researchers to the correct reporting guide. The web-based software can be accessed at http://biostatapps.inonu.edu.tr/BAKY/ and has English and Turkish language options. Conclusion: The Scientific Research Guidelines Software allows researchers to clearly state what they do and do not do in their study, how they do it, and what they find as a result. In addition, the software provides access to guides describing the meaning, strengths, and weaknesses of the study being conducted.
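A minimal sketch of the routing idea (a study design selected from a dropdown mapped to a reporting guideline) is shown below using the Dash library mentioned in the abstract. The design-to-guideline mapping, component IDs, and layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal Dash sketch: choose a study design, get the matching reporting guideline.
# The mapping and layout are illustrative, not the BAKY tool's actual code.
from dash import Dash, dcc, html, Input, Output

GUIDELINES = {  # illustrative subset of design -> reporting guideline
    "Randomized clinical trial": "CONSORT",
    "Observational (cohort, case-control, cross-sectional)": "STROBE",
    "Systematic review / meta-analysis": "PRISMA",
    "Qualitative research": "COREQ",
}

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Which research design are you reporting?"),
    dcc.Dropdown(id="design", options=list(GUIDELINES), placeholder="Select a design"),
    html.Div(id="guide"),
])

@app.callback(Output("guide", "children"), Input("design", "value"))
def show_guideline(design):
    if not design:
        return "Select a design to see the matching reporting guideline."
    return f"Recommended reporting guideline: {GUIDELINES[design]}"

if __name__ == "__main__":
    app.run(debug=True)
```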