
The development of a virtual reality training programme for ophthalmology: repeatability and reproducibility (part of the International Forum for Ophthalmic Simulation Studies)

Poster Details

First Author: Korina Theodoraki UK

Co-Author(s): George M Saleh, Stewart Gillan, Paul Sullivan, Fiona O’Sullivan, Badrul Hussain, Catey Bunce

Abstract Details



Purpose:

To evaluate the variability of performance among novice ophthalmic trainees in a range of repeated tasks using the Eyesi virtual reality (VR) simulator. The subjects were assessed across a number of cataract-specific and generic 3-dimensional tasks. Previous studies qualitatively suggested that inexperienced ophthalmic surgeons may have a wider spread of performance in the early stages of training. However, no specific analysis had been undertaken with regard to the repeatability, variability and reproducibility of performance (differences within an individual’s repeated performances and differences between the individuals themselves).

Setting:

This prospective study was conducted at Moorfields Eye Hospital with the support of STeLI (Simulation & Technology-enhanced Learning Initiative), the London Deanery School of Ophthalmology and IFOS (International Forum of Ophthalmic Simulation). The Eyesi VR simulator was used. All eligible novice ophthalmic trainees were invited to participate.

Methods:

Eighteen subjects participated in the study. A consultant trainer (GS) gave them a standardized simulator induction. Each trainee received a personalized account through which all performance data were captured. A clear description of all tasks was presented prior to commencement. Five modules were selected, comprising one cataract-specific task (capsulorhexis level 1) and four generic 3-dimensional tasks (cracking and chopping level 2, cataract navigation level 3, cataract bimanual training level 1, anti-tremor level 2). Each task was repeated three times to test for repeatability and reliability. Each attempt was scored out of a maximum of 100 points. Data were analysed using non-parametric tests because of evidence of non-normality; the Wilcoxon signed-rank and Kruskal-Wallis tests were employed. For all tests, a P-value of less than 0.05 was considered statistically significant.
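As a purely illustrative sketch (not the study’s actual analysis code or data), comparisons of this kind could be run in Python with SciPy’s implementations of the signed-rank and Kruskal-Wallis tests. The score arrays, group sizes and random seed below are hypothetical and exist only to show the shape of the analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 18 trainees, three attempts at one task, scores out of 100.
attempt1 = rng.integers(20, 70, size=18)
attempt2 = np.clip(attempt1 + rng.integers(5, 25, size=18), 0, 100)
attempt3 = np.clip(attempt2 + rng.integers(-5, 10, size=18), 0, 100)

# Paired comparisons of repeated attempts (Wilcoxon signed-rank test).
print("Attempt 1 vs 2:", stats.wilcoxon(attempt1, attempt2))
print("Attempt 2 vs 3:", stats.wilcoxon(attempt2, attempt3))

# Comparison across independent groups, e.g. across simulator tasks
# (Kruskal-Wallis test); three hypothetical task groups are shown here.
task_scores = [rng.integers(30, 90, size=18) for _ in range(3)]
print("Across tasks:", stats.kruskal(*task_scores))
```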

Results:

There was no significant variability in the overall score between trainees (no inter-novice variability, P=0.1104). Similar results were found when the difference between the highest and the lowest score was examined (P=0.3878). Highly significant differences were found between the scores achieved in the first attempt and those in the second (P<0.0001) and third (P<0.0001) attempts, but not between the second and third attempts (P=0.65). When the scores achieved in each task were examined, performance was significantly affected by the complexity of the task. Highly significant differences between tasks were found both in the overall score (P=0.0001) and in the difference between the highest and lowest scores (P=0.003).

Conclusions:

This study, the first to quantify reproducibility of performance in entry-level trainees using a VR tool, demonstrated significant intra-novice variability. The cohort performed equally overall across the range of tasks (no inter-novice variability, P=0.1104), indicating that the trainees found the tasks equally challenging and showed an equally varied performance when repeating the given modules. This also held true when the difference between the highest and lowest scores was assessed (P=0.3878). There was a clear upward trend in performance with repeated attempts: novice trainees appear to reach a level of competency and consistency in their scores between the second and third attempts (P=0.65). At this earliest stage of training, therefore, a minimum of three repeats of any given task should be encouraged, both for learning and for benchmarking. Performance also varied significantly with the complexity of the task (P=0.0001) on this high-fidelity instrument, which is helpful for standardising trainee scores, benchmarking progression and selecting tasks in future, more structured ophthalmic simulation programmes.

FINANCIAL INTEREST: NONE
