Introduction

The retina is the only site in the body where neural and vascular tissue can be visualized simultaneously and non-invasively. Ophthalmologists have been doing so since the ophthalmoscope was introduced into clinical practice in the mid-1800s1. It has also been increasingly recognized that retinal biomarkers may map effectively to systemic indices of healthy ageing and disease2,3,4,5,6. Examples of these oculomics-based findings include vascular tortuosity and arteriolar narrowing for cardiovascular disease, and retinal cell layer changes for neurological disorders7,8,9,10,11.

Relationships between retinal morphology and systemic health have traditionally been evaluated using statistical modelling, such as multivariable regression. However, such techniques may have limited incremental value when applied to very large and complex datasets12,13. As data availability has increased and mathematical models have improved, the success of deep learning in ophthalmic disease classification in the research setting has been striking14,15,16,17,18. Deep neural networks, which process input images by applying mathematical operations to connected nonlinear units in multiple layers, largely avoid manual feature engineering and are able to derive previously hidden patterns from large volumes of data19.

The discovery of quantitative relationships between retinal appearance and systemic pathophysiology aligns readily with pre-established conceptions of microvascular and degenerative tissue-level insults20. However, deep learning algorithms have demonstrated capability in tasks which were not previously thought possible21. Harnessing this power, new insights into relationships between retinal structure and systemic pathophysiology could expand existing knowledge of disease mechanisms. A study by Poplin et al. demonstrated a deep learning algorithm which could accurately predict cardiovascular risk factors from fundus photos22. More surprising to ophthalmologists was the successful prediction of demographic information such as age and gender, the latter with an area under the curve (AUC) of 0.97. Here, the physiologic cause and effect relationships are not readily apparent to domain experts21. Predicting gender from fundus photos, previously inconceivable to those who have spent their careers looking at retinas, also withstood external validation on an independent dataset of patients with different baseline demographics23. Although not likely to be clinically useful, this finding hints at the future potential of deep learning for the discovery of novel associations through unbiased modelling of high-dimensional data.

We previously reported on the ability of physicians to create automated machine learning (AutoML) models for medical image analysis24. Since that proof of concept, AutoML platforms have advanced significantly, with several now offering code-free deep learning (CFDL). Herein, we demonstrate AutoML as a tool for automated discovery of novel insights by performing sex classification from retinal fundus photos and comparing its performance to the bespoke deep learning model of Poplin et al.22.

Results

CFDL model results

The CFDL model had an AUROC of 0.93 and an AUPRC of 0.94 (Table 1). Overall sensitivity (recall), specificity, PPV (precision), and ACC were 88.8%, 83.6%, 87.3%, and 86.5%, respectively (Fig. 1). Genetic sex was discordant from reported sex in one validation set image, and this image was incorrectly predicted by the model; that is, the model predicted the sex consistent with genetic sex in this case (Table S1). To evaluate reproducibility, and to address the run-to-run variability of deep learning algorithms arising from random seed initialization, we retrained the model to identical specifications and found similar performance, with an AUROC of 0.93.

Table 1 Comparison of reported fundus photo sex prediction algorithms.
Figure 1 Precision-recall curve.

External validation

External validation was performed on the Moorfields dataset. This dataset differed from the UK Biobank development set with respect to both the fundus camera used and its sourcing from a pathology-rich population at a tertiary ophthalmic referral center. The resulting sensitivity, specificity, PPV, and ACC were 83.9%, 72.2%, 78.2%, and 78.6%, respectively.

Presence of foveal pathology

To evaluate the influence of foveal pathology on the performance of the CFDL model, we subgrouped the Moorfields external validation dataset into fundus photos with (n = 108) and without (n = 144) foveal pathology (Table 2). The model classified sex correctly in 85.4% of patients without foveal pathology, a population more similar to the largely healthy UK Biobank population, compared with 69.4% of patients with foveal pathology. Logistic regression showed that presence of foveal pathology was a significant factor in model performance (OR 0.36, 95% CI 0.19–0.70, p = 0.0022). Sex was not associated with presence of foveal pathology (p = 0.94). This suggests that the fovea may be a salient region of fundus photographs for the neural network's sex classification performance. Region-attribution saliency maps suggest the optic nerve and vascular arcades as additional important input regions for the model's prediction (Fig. 2).
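
For readers wishing to reproduce this type of subgroup analysis, a minimal Python sketch is shown below; the study performed its analysis in Stata, and the file and column names here are hypothetical assumptions.

```python
# Hypothetical sketch: logistic regression of prediction correctness on
# presence of foveal pathology, reporting an odds ratio with 95% CI.
# Illustrative only; the study used Stata, and file/column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per external-validation image:
#   correct          1 = sex predicted correctly, 0 = incorrectly
#   foveal_pathology 1 = foveal pathology present, 0 = absent
df = pd.read_csv("moorfields_predictions.csv")

X = sm.add_constant(df["foveal_pathology"])
fit = sm.Logit(df["correct"], X).fit(disp=0)

odds_ratio = np.exp(fit.params["foveal_pathology"])
ci = np.exp(fit.conf_int().loc["foveal_pathology"])
print(f"OR = {odds_ratio:.2f}, 95% CI ({ci.iloc[0]:.2f}, {ci.iloc[1]:.2f}), "
      f"p = {fit.pvalues['foveal_pathology']:.4f}")
```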

Table 2 Model performance on external validation dataset subgrouped by presence of foveal pathology.
Figure 2 Region-based saliency maps for model prediction. Colors represent regions in order of decreasing importance: yellow, green, blue. Images were sourced at random from the validation set, with the addition of an ungradable image.
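
As an illustration of how such a banded overlay can be rendered from a per-pixel attribution map, a minimal sketch is shown below; the file names and quantile cut-offs are hypothetical and do not reflect the platform's internal rendering.

```python
# Hypothetical sketch: render a per-pixel attribution map as three bands of
# decreasing importance (yellow > green > blue) over a fundus photo.
# File names and quantile cut-offs are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

attribution = np.load("xrai_attribution.npy")  # assumed H x W array of scores
fundus = plt.imread("fundus.png")              # assumed matching fundus photo

# 0 = background, 1 = blue, 2 = green, 3 = yellow
bands = np.digitize(attribution, np.quantile(attribution, [0.70, 0.85, 0.95]))

overlay = np.zeros((*attribution.shape, 3))
overlay[bands == 1] = (0.0, 0.0, 1.0)  # blue: least salient highlighted band
overlay[bands == 2] = (0.0, 1.0, 0.0)  # green
overlay[bands == 3] = (1.0, 1.0, 0.0)  # yellow: most salient band

plt.imshow(fundus)
plt.imshow(overlay, alpha=0.4)
plt.axis("off")
plt.savefig("saliency_overlay.png", bbox_inches="tight")
```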

Ungradable UK Biobank images

Consensus ungradable images (n = 714), which had previously been removed from the UK Biobank validation dataset, were separately processed by the model as an experimental adjunct batch prediction. The resulting sensitivity, specificity, PPV, and accuracy were 82.6%, 71.2%, 75.2%, and 77.0%, respectively.

Discussion

Our results demonstrate robust overall performance of the CFDL model for predicting sex from retinal fundus photos. The AUROC of 0.93 from this framework, which does not require coding expertise, suggests significant capability of the CFDL platform for this task. Our code-free model's performance is comparable with the Poplin et al. model AUROC of 0.97, which was designed and tuned by machine learning experts (Table 1). Our model was trained on a similar dataset (UK Biobank), albeit a significantly smaller one, as it did not include the additional 1.5+ million EyePACS fundus photos that Poplin et al. also used for training.

To our knowledge, two other studies have attempted this image classification task. Yamashita et al. performed logistic regression on several features identified as associated with sex, including papillomacular angle, retinal vessel angles, and retinal artery trajectory25. They achieved an AUROC of 0.78, which underscores the limitations of a classical machine learning approach using human-identified features for such novel tasks. Deep learning, even with our CFDL approach, appears to outperform manual feature engineering substantially. Various studies have shown retinal morphology differences between the sexes, including in retinal and choroidal thickness26,27,28. Others have demonstrated variation in ocular blood flow and suggested an effect of sex hormones, but consensus is thus far lacking29,30. The coder-engineered deep learning model developed by Dieck et al. for this task, which also included an image preprocessing step, demonstrated an accuracy of 82.9%, lower than that of our automated code-free approach31. The question of which retinal features enable this classification may remain unanswered, as the power of deep learning in integrating population-level patterns from billions of pixel-level variations is impossible for humans to match.

Performance of our model was slightly worse on external validation with the Moorfields dataset, which is typical when deep learning models are evaluated on datasets dissimilar from their training data distribution32,33. Specifically, the Moorfields dataset was obtained from a tertiary referral center, and 42.9% of the fundus photos contained foveal pathology. In eyes without foveal pathology, the external validation accuracy was within 1.5% of that on the UK Biobank validation set. The worse performance in pathologic eyes suggests the importance of the fovea for sex prediction, a finding similarly demonstrated in the attention maps of Poplin et al. The region-based saliency maps we generated suggest that the optic nerve and vascular arcades are also regions of importance (Fig. 2). In the study by Poplin et al., when subgrouped by presence of diabetic retinopathy (DR), their model similarly trended towards worse performance on pathologic images compared with healthy controls. Furthermore, the ophthalmologists in that study "repeatedly reported highlighting of the macula for the gender predictions" when interpreting the attention maps22. These findings highlight the importance of considering machine learning performance only in the context of the specific training and evaluation datasets used. This is especially critical for our task, where the salient features of an input image are unclear to domain experts.

Ungradable images from the UK Biobank validation dataset were labeled as such by retina specialists according to criteria requiring adequate visibility of the macula, optic nerve, and vascular arcades (Table S2). Nevertheless, these images demonstrated only a slight reduction in model performance. Furthermore, the model showed similar salient regions in ungradable input images as in gradable ones (Fig. 2). This suggests that the model is sensitive to signal in poor-quality images, perhaps from subtle pixel-level luminance variations that are likely imperceptible to humans. This finding underscores the promising ability of deep neural networks to use salient features in medical imaging that may remain hidden to human experts.

Through characterization of high-dimensional data, our findings suggest that deep learning will be a useful tool for the exploration of novel disease and biomarker associations. Clinician-driven research, particularly through the use of AutoML, has the potential to move this field forward. Crucially, AutoML as a platform does not fully automate the process of machine learning; data preparation remains an essential manual step. As demonstrated by the population differences in our external validation dataset, tasks such as equitable and representative acquisition, cleaning, and subgrouping of datasets remain important for the production of useful models. Clinicians are uniquely positioned to understand both the complexities of clinical data and the use cases for the design of clinically relevant production algorithms.

While our deep learning model was specifically designed for the task of sex prediction, we emphasize that this task has no inherent clinical utility. Instead, we aimed to demonstrate that AutoML can classify these images without the salient retinal features being known to domain experts; that is, retina specialists cannot readily perform this task themselves. We also intended to show that our framework's performance may be comparable to state-of-the-art algorithms designed for the same task by coders. This points to the capacity of AutoML, in the hands of clinician use-case experts, to design models for tasks where specific retinal features have not yet been characterized. Examples of such use cases include cardiovascular and neurological disease characterization from retinal photos.

Limitations

Our study had several limitations. The design of the CFDL model was inherently opaque owing to the framework's automated selection of model architecture and hyperparameters. While such opacity is not unique to CFDL, it may further reduce explainability because users lack insight into the architectures and parameters employed. Although we compared our performance to other models via AUROC, we were unable to compare performance using clinically relevant metrics such as sensitivity and specificity, as these were not provided by the authors of the other studies34. The UK Biobank dataset, composed of a generally healthy Caucasian population, is not fully representative of the general UK population and carries potential for algorithmic bias35,36,37. Although we attempted to address this with an external validation population with a higher prevalence of pathology, our patient-level data were limited and did not include additional demographic information. Since both datasets were drawn from UK populations and de-identified, there is the potential for overlap at the patient level.

Through our investigations of predicting other novel systemic signals from fundus photos, we noted several inherent limitations of the CFDL platform. Using buckets of varying range, which were necessary owing to the lack of support for continuous variable prediction, we were unable to successfully predict age. Experiments to predict smoking status resulted in models with a significantly lower AUC (0.64) than that reported by Poplin et al. (0.71)22. We have engaged the platform development team and aim to repeat our experiments as new platform features are released.

Conclusion

We demonstrate clinician-driven design of a deep learning sex classification model from retinal fundus photographs, with performance comparable to that of a landmark study on the same task. In contrast to the latter model, which was designed by expert engineers, our model was created by clinicians without coding. Our external validation on a population with high levels of foveal pathology suggests that the foveal region is important for this task. This demonstrates AutoML as a tool for novel insight discovery in medical imaging by its clinician end users. Although ophthalmologists may continue to ponder what these deep learning models are "looking at", our study demonstrates the robust potential of CFDL to characterize images independently of experts' knowledge of contributing features, and its ability to democratize access to deep learning.

Methods

Participants and data

This work utilized two datasets. For deep learning model development, we used the UK Biobank dataset, from an observational study in the United Kingdom that began in 2006 and has recruited over 500,000 participants, 85,262 of whom received eye imaging38. Eye imaging was obtained at 6 centers in the UK and comprises over 10 terabytes of data39. Participants volunteered to provide data including other medical imaging, laboratory results, and detailed subjective questionnaires. Consent was provided by each participant, and the study was approved by the North West Multi-Centre Research Ethics Committee. Detailed protocols may be found at www.ukbiobank.ac.uk. Retinal imaging was obtained with a Topcon OCT-1000 MKII. Each capture consisted of an optical coherence tomography scan and a paired 45-degree retinal fundus photograph. The UK Biobank fundus photo dataset of 175,825 images was split chronologically into training, tuning, and validation sets (Table 3). The training/tuning and validation sets contained 53.6% and 56.0% reported women, respectively. For temporal validation, chronologic splits were performed to simulate model performance on the validation set in a manner that would align with a prospective trial40; that is, a model trained on the first chronological period of data (training dataset) was then evaluated on the subsequent chronological period of data (validation dataset). Participant-level splits were preserved throughout: each participant's left, right, and repeat photos were never split between image subsets.
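
A minimal sketch of such a chronological, participant-level split is shown below; the file name, column names, and split proportions are hypothetical and for illustration only.

```python
# Hypothetical sketch of a chronological, participant-level split: all of a
# participant's images stay in one subset, and the training period precedes
# the tuning and validation periods. File/column names and the 80/10/10
# proportions are illustrative assumptions.
import pandas as pd

images = pd.read_csv("ukbb_fundus_index.csv")  # participant_id, image_path, acquisition_date

# Order participants by the date of their first retinal image.
first_visit = (images.groupby("participant_id")["acquisition_date"]
               .min()
               .sort_values())
participants = first_visit.index.to_list()

n = len(participants)
train_ids = set(participants[: int(0.8 * n)])
tune_ids = set(participants[int(0.8 * n): int(0.9 * n)])

def assign_split(pid):
    if pid in train_ids:
        return "TRAIN"
    if pid in tune_ids:
        return "VALIDATION"  # the tuning set in AutoML terminology
    return "TEST"            # the chronologically latest, held-out validation set

images["split"] = images["participant_id"].map(assign_split)
```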

Table 3 Dataset characteristics of UK Biobank and Moorfields external validation sets.

For external validation, we utilized an anonymized clinical dataset convenience-sampled from Moorfields Eye Hospital, comprising 400 adult patients. These patients underwent 45-degree fundus photography with a Topcon OCT-2000 in December 2019. To obtain a representative sample of all patients presenting during that period, no other filters were applied. Both datasets consisted of 50% left and 50% right eyes. The average age in the external validation dataset was 64.0 years, compared with 55.7 years in the UK Biobank validation dataset (Table 3). The external validation dataset was input into Cloud AutoML through the Moorfields Eye Hospital Research Informatics Strategy Data Platform, a secure cloud-based infrastructure facilitating storage and processing of anonymized clinical data. Project-specific approval from local information governance was granted following submission of a Cloud AutoML data privacy impact assessment and a separate dataset-specific treatment standard operating procedure. Research and development approval was obtained from the Institutional Review Board at Moorfields (ROAD17/031). Local and national research opt-outs were queried, and the corresponding patients were excluded. All methods were performed in accordance with the relevant guidelines and regulations.

Participants in the UK Biobank study provided written informed consent. The external validation set is part of a retrospective, non-interventional study on de-identified data. National and local opt-out guidance was followed for anonymized datasets, and Moorfields information governance accordingly waived the requirement for informed consent.

Data processing and labeling

The gender variable, as described by the UK Biobank, was acquired from the central registry at recruitment but was in some cases updated by the participant. Therefore, this field may contain a mixture of NHS-recorded gender and self-reported gender. Genetic sex in the UK Biobank was determined by genotyping performed by Affymetrix Inc., with quality control of the data at the Wellcome Trust Centre for Human Genetics.

Ungradable images were removed from both validation datasets. De-identified images were assessed for gradability by two retina specialists masked to patient demographics (E.K., H.K.). Gradability was defined as a field of view ensuring adequate visibility of the vascular arcades, macula, and optic nerve, with sufficient image quality to exclude microaneurysm-sized features. Any disagreements were resolved via in-person discussion. In cases where disagreement could still not be resolved (n = 122), a gradability algorithm, described in a model card (Table S2), was used to adjudicate, with 70 of the disputed images classified as ungradable41. After ungradable images were removed, 252 Moorfields images remained for external validation. The ungradable rate was 35.7% and 32.2% in the UK Biobank validation and Moorfields datasets, respectively. Moorfields images were also graded for presence of foveal pathology, defined as any retinal lesion extending into the central one disc diameter around the fovea. Examples of foveal lesions included, but were not limited to, macular holes, microaneurysms, RPE tears, and pigment atrophy (Fig. 3).
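
A minimal sketch of the adjudication logic is shown below; the file and column names are hypothetical, and the in-person discussion step is simplified to a direct algorithmic tiebreak.

```python
# Hypothetical sketch of gradability adjudication: two specialists' labels
# are combined; residual disagreements (after discussion) are settled by the
# automated gradability algorithm. File/column names are illustrative, and
# the in-person discussion step is collapsed into the algorithmic tiebreak.
import pandas as pd

grades = pd.read_csv("gradability_labels.csv")  # grader_1, grader_2, algorithm

def consensus(row):
    if row["grader_1"] == row["grader_2"]:
        return row["grader_1"]   # human consensus label
    return row["algorithm"]      # adjudicated by the gradability model

grades["final_gradability"] = grades.apply(consensus, axis=1)
gradable_images = grades[grades["final_gradability"] == "gradable"]
```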

Figure 3 Representative fundus photos. Correct (A) and incorrect (B) cases, without (1) and with (2) foveal pathology.

Model training

Our deep learning model was trained using code-free deep learning (CFDL) with the Google Cloud AutoML platform. As described and demonstrated in a supplemental video of our prior work24, the platform provides a graphical user interface (GUI) for data upload, labeling, and model training. Alternatively, the CFDL platform provides the option of image upload via shell scripting using a .csv spreadsheet containing labels with their associated cloud storage locations. We utilized the latter upload approach for efficient management of our large datasets. Automated machine learning was then employed, which entails neural architecture search and hyperparameter tuning. Training was performed with the maximum allowable cloud graphics processing unit (GPU) hours, and early stopping was enabled, which automatically terminated training when no further model improvement was noted, after 581 node-hours. Each node-hour represents eight cloud nodes used in parallel, and each node is equivalent to an NVIDIA Tesla V100 GPU. XRAI region-based attribution saliency maps were generated from an edge-optimized version of the model using the AutoML explainable AI framework42.
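
A minimal sketch of the labels spreadsheet used for this upload route is shown below; the bucket, file names, and split assignments are illustrative.

```python
# Hypothetical sketch of the labels CSV used for Cloud AutoML Vision image
# import: each row holds an optional split assignment, a Cloud Storage URI,
# and the class label. Bucket and file names are illustrative.
import csv

rows = [
    ("TRAIN", "gs://example-bucket/fundus/0001_left.png", "female"),
    ("TRAIN", "gs://example-bucket/fundus/0002_right.png", "male"),
    ("VALIDATION", "gs://example-bucket/fundus/0003_left.png", "female"),
    ("TEST", "gs://example-bucket/fundus/0004_right.png", "male"),
]

with open("automl_import.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# The CSV is then copied to the same Cloud Storage bucket (e.g. with gsutil)
# and referenced when creating the AutoML dataset.
```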

Statistical analysis

Statistical analysis was performed with Microsoft Excel and Stata. The model was evaluated at a softmax confidence threshold of 0.5. The area under the precision-recall curve (AUPRC) was provided by the platform, and the area under the receiver operating characteristic curve (AUROC) was obtained via shell scripting to enable comparison with other reported models. We manually calculated sensitivity, specificity, positive predictive value (PPV), and accuracy (ACC) from the confusion matrices provided by the CFDL platform (Tables S3, S4). These four metrics were calculated with respect to prediction of female sex. For subgroup analysis of model performance on images with foveal pathology, a chi-squared test was performed.
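
A minimal sketch of how these operating-point metrics follow from a 2 x 2 confusion matrix, with female as the positive class, is shown below; the counts are illustrative and are not the study's data.

```python
# Hypothetical sketch: operating-point metrics from a 2x2 confusion matrix,
# with female as the positive class. Counts are illustrative, not study data.
def operating_point_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)               # recall
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                       # precision
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, ppv, accuracy

sens, spec, ppv, acc = operating_point_metrics(tp=45, fn=5, fp=8, tn=42)
print(f"Sens {sens:.1%}  Spec {spec:.1%}  PPV {ppv:.1%}  Acc {acc:.1%}")

# With per-image softmax scores, AUROC and AUPRC follow from scikit-learn:
#   from sklearn.metrics import roc_auc_score, average_precision_score
#   auroc = roc_auc_score(y_true, y_score)
#   auprc = average_precision_score(y_true, y_score)
```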