Discussion
Before any new instrument is adopted, its validity must be established. Gallagher et al24 define six criteria by which validity may be gauged:
Face validity: whether the instrument measures what it is supposed to measure. A subjective validation.
Content validity: an estimate of the validity of a testing instrument based on detailed examination of test contents. A rigorous subjective validation.
Construct validity: an evaluation of the degree to which a testing instrument identifies the quality it was designed to measure. Often gauged by the ability of an instrument to differentiate novices from experts.
Concurrent validity: an evaluation in which the relationship between the test scores and the scores on another instrument purporting to measure the same construct is examined.
Discriminant validity: an evaluation that reflects the extent to which the scores generated by the assessment tool correlate with factors with which they should correlate.
Predictive validity: the extent to which the scores on a test are predictive of actual performance.
Face and content validity are established in the early stages of instrument development and are of limited consequence.24 Establishing construct and concurrent validity is therefore the first step in adopting simulators into current surgical training programmes and assessment criteria. Four studies in the current literature showed that the EyeSi has concurrent validity.15 17 19 20
Jacobsen et al15 found that participants’ OSACSS scores correlated with their EyeSi scores across all experience levels. Rohipoor et al20 correlated GRASIS scores with EyeSi proficiency scores. Thomsen et al17 found that motion tracking-based grading of cataract surgeries correlated with EyeSi scores, and Bozkurt et al19 found that the total number of surgeries performed by surgeons correlated with their EyeSi scores. Taken together, the concurrent validity of the EyeSi cataract simulator was established against four independent indices: OSACSS, GRASIS, motion-tracking grading and surgical experience.
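As an illustration of how such concurrent validity analyses are typically performed, the sketch below computes a rank correlation between paired simulator and comparator scores. The data, variable names and choice of Spearman’s rho are hypothetical assumptions for illustration; the cited studies’ actual statistical methods may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores for the same participants (illustration only):
# an EyeSi proficiency score and a comparator index score (eg, OSACSS).
eyesi_scores = np.array([42.0, 55.0, 61.0, 70.0, 74.0, 82.0, 88.0, 93.0])
osacss_scores = np.array([18.0, 22.0, 25.0, 31.0, 30.0, 38.0, 41.0, 45.0])

# Spearman's rank correlation tolerates monotonic but non-linear
# relationships between two differently scaled instruments.
rho, p_value = stats.spearmanr(eyesi_scores, osacss_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```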
Specific modules of the EyeSi have also been construct validated. The capsulorhexis module, in particular, was validated by both Rohipoor et al20 and Bozkurt et al,19 who showed that the EyeSi was able to accurately differentiate between novice and experienced surgeons. Capsulorhexis is generally believed to be the most accurately simulated step currently available in the EyeSi cataract simulator,18 and these findings corroborate that belief. Antitremor and forceps training on the EyeSi were also individually construct validated against GRASIS scores by Rohipoor et al.20
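Construct validity of this kind is assessed by testing whether module scores separate experience groups. A minimal sketch under stated assumptions follows; the scores are hypothetical and the non-parametric Mann-Whitney U test is one common choice, not necessarily the test used in the cited studies.

```python
import numpy as np
from scipy import stats

# Hypothetical capsulorhexis module scores for two experience groups
# (illustration only).
novice_scores = np.array([35.0, 41.0, 38.0, 47.0, 44.0, 52.0])
expert_scores = np.array([68.0, 75.0, 81.0, 72.0, 79.0, 85.0])

# A Mann-Whitney U test makes no normality assumption and asks whether
# the module score distinguishes novices from experienced surgeons.
u_stat, p_value = stats.mannwhitneyu(novice_scores, expert_scores,
                                     alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```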
Of all the included studies, only Rohipoor et al20 established predictive validity, by showing that residents’ EyeSi scores early in residency correlated positively with their performance in the final year of residency.
These findings demonstrate that individual modules of the EyeSi independently correlate with established indices. Further analysis of which specific modules correlate most strongly with real-life performance could yield valuable information on how to further optimise the EyeSi grading software.
Building on the growing body of evidence supporting the construct and concurrent validity of the EyeSi, numerous ophthalmic residency programmes have recently incorporated the EyeSi into their training programmes.14 18 21–23 This has led to studies comparing the outcomes of EyeSi-integrated training programmes with those of traditionally structured programmes. These studies all report positive outcomes associated with supplementing training programmes with the EyeSi.14 18 21–23 All but one23 reported a statistically significant reduction in the complication rates of surgeons trained with the EyeSi compared with their EyeSi-naïve counterparts.
The risk of bias assessment showed that only one study was fully free of bias. The majority of the bias arose under ‘other bias’ and was due to studies comparing complication rates between EyeSi-naïve and EyeSi-trained cohorts of residents. With each successive year, the training protocols, knowledge and resources available to residents improve; a natural improvement over the years is therefore expected, which cannot be controlled for when comparing two cohorts of residents from different years, as acknowledged by Ferris et al.21 Moreover, there is inherent variation in residents’ skill within each year that is not neutralised, owing to the small sample sizes of trainees. Despite this bias, however, all studies generally ranked low in bias risk and adhered to acceptable reporting standards.
Moving forward, the projected impact of adopting the EyeSi in training programmes lies in its potential to improve patient safety and to bring forward the competency gains of surgical trainees. Under traditional training models, patients’ eyes serve as the training ground for residents.9 Cataract surgery conducted by trainee surgeons is associated with increased complication rates.25 With the EyeSi, it may no longer be necessary to compromise patient safety in order to facilitate resident training. The EyeSi allows residents to gain hands-on equipment and situational experience, shifting the learning curve of phacoemulsification into the safe and controlled environment of the simulator, where complications do not result in harm to patients. Of the studies evaluating complication rates, four of five also showed significantly decreased complication rates among residents who trained on the EyeSi prior to conducting real-life operations.14 18 21 22 This shows how the EyeSi improves patient health outcomes, both by moving the learning curve into a controlled environment and by producing surgeons who go on to have lower complication rates. Meanwhile, the four concurrent validity studies unanimously showed that the EyeSi realistically simulates many aspects of cataract surgery for the trainee.15 17 19 20 This indicates that the EyeSi also does not compromise the quality of resident training.
A major hurdle to wider adoption of high-fidelity simulation training, particularly in low-income and middle-income countries, is the significant initial cost of purchasing a simulator, as well as the ongoing maintenance and upgrade costs: the purchase cost of an EyeSi is ~£100 000–£150 000, with ongoing maintenance costs in the region of £5000–£10 000 per year for the cataract module alone. There may also be additional costs of upgrading modules that undergo iterative improvements.
These costs may be mitigated in a number of ways. The simulator can be shared with other teaching programmes within the same institution by acquiring additional EyeSi modules, for example, the vitreoretinal module, in effect sharing the acquisition cost. The cost of acquisition for an individual centre could be further diluted by sharing the simulator with other institutions at designated regional training centres. Pooling resources allows a number of trainees to undergo simulation training as an introductory step prior to intraocular surgery. This has the additional benefit of aligning divergent training programmes in a region to a similarly high level of standardised basic training.
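To make the cost-sharing argument concrete, a back-of-envelope amortisation is sketched below. Only the purchase and maintenance ranges come from the figures above; the lifespan, number of partner institutions and trainee throughput are hypothetical assumptions.

```python
# Back-of-envelope amortisation of a shared EyeSi (illustration only).
purchase_cost = 125_000      # £, midpoint of the ~£100 000-£150 000 range above
annual_maintenance = 7_500   # £, midpoint of £5000-£10 000 (cataract module)
lifespan_years = 10          # assumed useful life of the simulator
institutions = 3             # assumed partners in a regional training centre
trainees_per_year = 12       # assumed combined trainee throughput

total_cost = purchase_cost + annual_maintenance * lifespan_years
per_institution_per_year = total_cost / (institutions * lifespan_years)
per_trainee = total_cost / (trainees_per_year * lifespan_years)

print(f"Total 10-year cost: £{total_cost:,.0f}")
print(f"Per institution per year: £{per_institution_per_year:,.0f}")
print(f"Per trainee: £{per_trainee:,.0f}")
```

Under these assumptions, the cost per trainee falls to roughly £1700, far below the headline purchase price.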
The cost of simulation also needs to be balanced against the long-term cost benefits of dealing with fewer perioperative complications and their sequelae. The undoubted safety benefit to patients of not being on a trainee’s learning curve may not be something that can be easily costed in a tangible fashion.
In recent years, discussions among surgical trainers in the UK have reported fewer trainee complications, which has given simulation training greater prominence than wet-lab training prior to intraocular surgery within many UK training programmes.21 Because of this, many regions in the UK are adopting the above model, and it will be interesting to see whether these anecdotal reports are reflected in the upcoming Royal College’s National Ophthalmology Database audit, and whether complication rates differ between trainees inducted through simulation and those trained before simulation was introduced.
Strengths and limitations
A strength of this study is that, to our knowledge, it is the first systematic review conducted on simulator training in cataract surgery within the past 6 years.12 26 Given the novelty of this technology and the volume of new studies published on it, this represents a significant interval warranting review. Second, the study was conducted as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.27 A limitation of this study is that unpublished literature was not searched, which may have led to the selection of only studies reporting positive results, owing to selective reporting bias. The second limitation is the lack of current studies comparing EyeSi-only training regimes against traditional-only training regimes, which would be needed to truly gauge how much of the improvement is due to EyeSi training. This is crucial because current studies subject their intervention groups of trainees to EyeSi training in addition to the same traditional training that their controls undergo. This results in the intervention cohorts having greater net training hours than the control cohorts, and it stands to reason that the addition of any supplementary training programme will bring about some improvement in trainee performance. This confounds the degree to which the noted improvement in the intervention cohorts may be credited to the EyeSi.