Introduction
Glaucoma is the third most common cause of blindness globally after cataract and uncorrected refractive error.1 Surgical treatment for glaucoma is considered when medical and laser treatment options are exhausted, inappropriate, or unavailable. In many instances, surgical trabeculectomy is considered as a first-line treatment for moderate to advanced glaucoma. Early surgery can provide lower intraocular pressure (IOP) than medical therapy.2 3 A prospective multicentre randomised controlled trial, the Treatment of Advanced Glaucoma Study, is currently underway to compare the effectiveness of primary medical and primary surgical management for people presenting with advanced glaucoma.4
Surgical education for glaucoma is challenging. Opportunities for trainees are often sparse. In the USA, the mean number of trabeculectomies performed by trainees is four.5 Similarly, in sub-Saharan Africa the mean number performed by senior trainees was also four (article under review). This may be due to reluctance of surgeons to perform, and of patients to accept, surgery, driven at least in part by the lack of expectation of improvement in vision and visual field loss. Vision rarely improves, and is often slightly worse following surgery: a recent meta-analysis showed that visual function (mean deviation and best-corrected visual acuity) drops after surgery; however, the gains from a reduced rate of progression balance this loss after 18 months, leaving patients better off.6 Moreover, the operated eye may be an only eye, often with good visual acuity. There is recent evidence that visual field loss can improve after surgery reduces the IOP.7
A structured curriculum, involving extensive simulation-based training, can assist in introducing trainees to glaucoma surgery.5 However, there is a paucity of data on the efficacy of simulation-based surgical education in glaucoma surgery techniques, including trabeculectomy. Therefore, to begin to address this gap, we designed a surgical competency assessment tool for simulated trabeculectomy surgery, based on the International Council of Ophthalmology (ICO) ophthalmology surgical competency assessment rubric (OSCAR) for trabeculectomy.8
Surgeons begin their training in a specific technique as ‘novices’, having incomplete knowledge and understanding, and approaching a task relatively mechanistically. After time observing, learning and practising under supervision, a novice may progress to being an ‘advanced beginner’, demonstrating situational awareness and a working understanding of what is before them. Advanced beginners tend to see actions as a series of separate steps, and can complete some simpler surgical steps without supervision. A surgeon who is ‘competent’ in a technique has a good working and background understanding, and sees actions in relation to goals, at least partly in context. They may complete work independently to an acceptable standard, though it may lack refinement. They are capable of deliberate planning and can formulate surgical routines.9 Proficiency and full expertise are considered outside the scope of this context of simulation-based surgical education in trabeculectomy. Even after an ophthalmology trainee has completed training, considerable further training and experience are required to become a glaucoma ‘specialist’ and to attain recognition as an ‘expert’.10
The development and use of the ophthalmic simulated surgical competency assessment rubric for trabeculectomy (Sim-OSSCAR) is designed to support structured ophthalmic surgical training towards the stage of ‘competent’. The Sim-OSSCAR is aimed at evaluating the progress made by a trainee towards a basic level of competence, in a simulation environment. Specifically, it addresses the binary question: has the trainee invested sufficient sustained deliberate practice on artificial materials for the trainer to decide it is reasonable to progress to supervised live surgical training?
In medical and surgical education, validity refers to the degree to which an instrument measures what it sets out to measure. Face validity describes whether the simulated tasks resemble those performed during a surgical procedure in a real-life situation. Content validity describes whether the test assesses the specific skill in question, rather than other aspects such as anatomical knowledge. Intergrader reliability is the degree of agreement among different graders, and provides a measure of consensus.
It is accepted that a unified approach of demonstrating evidence to either support or refute the overall validity of an instrument should be used.11 Studies of the assessment of surgical education, training and curricula should have discrete benchmarks as guides, described as face, content, construct, concurrent, discriminative and predictive validity.12 There is an even greater need for this in high-stakes assessments such as Board or Surgical College certification examinations. The ICO OSCAR for trabeculectomy has been validated for live surgical performance assessment.8 The current study is not aimed at validating a curriculum, nor at high-stakes live surgical assessment.
In this study, we aimed to modify the ICO OSCAR, using it as a starting point for developing a formative and summative assessment tool for simulated ophthalmic surgical training in trabeculectomy surgery.