Evaluating the readability and quality of online cataract surgery patient information
Session Details
Session Title: Presented Poster Session: New & Interesting I
Venue: Poster Village: Pod 3
First Author: N. Mohamed (UK)
Co Author(s): A. Rothwell, P. Hossain
Abstract Details
Purpose:
With internet health-seeking behaviour at an all-time high, concerns persist about the credibility of online health information and its impact on the traditional physician-patient relationship. Previous studies have demonstrated that patient-orientated literature consistently falls short of national reading-ability and quality standards. This potentially jeopardises the safety of cataract surgery patients, who face distinct challenges in giving informed consent. We sought to determine whether online cataract surgery patient information is written at a comprehension level appropriate for the general public and whether it meets recognised quality standards.
Setting:
This study was conducted at the Ophthalmology Department, University Hospital Southampton, United Kingdom. The terms ‘cataract operation’, ‘cataract treatment’ and ‘cataract surgery’ were searched via the five most popular internet search engines. The top 300 webpages were reviewed using validated tools.
Methods:
Readability was assessed using two validated metrics: the Flesch Reading Ease score (FRES) and the Flesch-Kincaid grade level (FKGL). Quality was evaluated using two validated tools: the DISCERN instrument and the CDC (Centers for Disease Control and Prevention) Clear Communication Index (CCI). Certification under the ‘Health-On-the-Net Code of Conduct’ (HON) and ‘The Information Standard’ (TIS) was recorded. Websites were categorised by authorship, country of origin and ‘for-profit’ status. Statistical analysis was conducted to identify relationships between readability, quality and website category.
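Both Flesch metrics are computed from average sentence length and average syllables per word. The minimal Python sketch below illustrates the standard formulas; its syllable counter is a crude vowel-group heuristic (an assumption of this sketch, not the method used in the study), so its scores may differ slightly from those of validated readability tools.

```python
import re

def count_syllables(word):
    # Crude vowel-group heuristic (assumption of this sketch);
    # validated readability tools use more careful syllable counts.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_scores(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)          # average words per sentence
    spw = syllables / len(words)               # average syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch Reading Ease score
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid grade level
    return fres, fkgl

sample = ("Cataract surgery replaces the cloudy lens of the eye with a clear "
          "artificial lens. Most people go home the same day.")
fres, fkgl = flesch_scores(sample)
print(f"FRES = {fres:.1f}, FKGL = {fkgl:.1f}")
```

Higher FRES values indicate easier text, whereas higher FKGL values indicate that more years of schooling are needed to understand it.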
Results:
The mean FRES (50.1±9.9) and FKGL (10.7±2.0) indicate that the websites were written well above the national average reading level and are thus classed as “difficult to read”. 59.6% of websites had ‘serious or extensive shortcomings’ in quality, and none attained the minimum standard set by the CCI. HON/TIS accreditation predicted higher DISCERN scores (p<0.001). UK websites had significantly higher FRES (p<0.001), whereas non-UK websites had significantly higher DISCERN scores (p=0.027). Not-for-profit websites had significantly higher DISCERN scores (p<0.001), and websites of academic/clinical authorship achieved the highest DISCERN scores (p<0.001).
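The abstract does not name the specific statistical tests behind these p-values. Purely as an illustration, the sketch below runs one such between-group comparison (DISCERN scores of UK versus non-UK websites) using a Mann-Whitney U test, a common choice for ordinal quality scores; the score lists are placeholder values, not study data.

```python
# Illustrative only: the test choice and all values below are assumptions,
# not taken from the study.
from scipy.stats import mannwhitneyu

uk_discern = [32, 41, 38, 29, 45, 36, 40]       # hypothetical DISCERN totals
non_uk_discern = [48, 52, 39, 57, 44, 50, 46]   # hypothetical DISCERN totals

stat, p_value = mannwhitneyu(uk_discern, non_uk_discern, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```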
Conclusions:
Online patient information materials for cataract surgery were of substandard quality and too difficult to read for the average member of the public. Improving the readability of a written document may compromise its quality, so readability metrics should be used as a rough guide rather than relied on exclusively. Clinicians should develop materials themselves, as ‘for-profit’ enterprises produce poorer-quality texts. Clinicians should also adopt an individualised approach to delivering patient information.
Financial Disclosure:
None