Baseline characteristics associated with the first-year treatment interval of intravitreal faricimab in neovascular age-related macular degeneration (nAMD)

Abstract

Aims To identify the baseline characteristics that best correlate with the treatment interval achieved by treatment-naïve neovascular age-related macular degeneration patients treated with faricimab in the first year (Y1) of the TENAYA and LUCERNE phase 3 trials, and to further understand how these characteristics may impact treatment intervals.

Methods This post-hoc analysis of Y1 data from the TENAYA and LUCERNE trials evaluated ocular baseline characteristics associated with Y1 treatment intervals. Patients were categorised into three subgroups based on their Y1 treatment interval: Q16W, Q12W or Q8W. Baseline characteristics (central subfield thickness (CST), best-corrected visual acuity, presence of subretinal fluid in centre 1 mm, presence of intraretinal fluid in centre 1 mm, macular neovascularisation (MNV) location and MNV type) were entered into the R package ‘rpart’ to create a classification tree model. A data-driven tree model based on CST was fitted, producing CST subgroups of low, middle and high ranges. Within each CST subgroup, the model identified the most impactful variables and associated thresholds.

Results After fitting the data to produce data-driven CST ranges, the model selected MNV location, followed by MNV lesion type, as the most impactful baseline characteristics; both factors had a p value <0.05 in the multivariate analysis.

Conclusions Among the selected ocular baseline characteristics from the TENAYA and LUCERNE trials, CST, MNV type and MNV location were the most relevant variables for enabling extension of treatment intervals during Y1. While this analysis provides insights into treatment intervals during the first year, further analysis incorporating Y2 data from the TENAYA and LUCERNE studies will be needed to assess factors influencing treatment intervals over a longer period.

What is already known on this topic

  • Neovascular age-related macular degeneration (nAMD) is a chronic disease that requires regular intravitreal anti-vascular endothelial growth factor injections, particularly in the first year. At the initiation of treatment, it is challenging to predict the injection frequency for each individual. Previous analyses of ranibizumab- and aflibercept-treated patients have shown that certain baseline characteristics, such as larger lesion size and lower baseline best-corrected visual acuity (BCVA), can be predictive of a higher injection burden.

What this study adds

  • By using patients’ ocular baseline characteristics, this analysis sought to provide a better understanding of a patient’s treatment frequency at year 1 when treated with faricimab. This post-hoc analysis of the TENAYA and LUCERNE studies found that eyes with higher baseline central subfield thickness, occult-type macular neovascularisation and subfoveal lesion location required more frequent faricimab injections. However, baseline BCVA was not as impactful as reported in similar analyses of ranibizumab and aflibercept.

How this study might affect research, practice or policy

  • This is the first time a classification tree model has been used to determine ocular baseline factors that predict nAMD treatment frequency in a clinical trial. It demonstrates the potential utility of such analyses in a clinical setting to inform both patients and clinicians of the likely intensity of treatment based on baseline ocular characteristics.

Introduction

Neovascular age-related macular degeneration (nAMD) continues to be a leading cause of vision loss in the elderly.1 The disease is caused by abnormal choroidal neovascularisation or retinal angiomatous proliferation (RAP), which leaks fluid, lipid and/or blood into the macula, with the potential for severe, irreversible central vision loss if left untreated.2

Since their introduction, intravitreal anti-vascular endothelial growth factor (anti-VEGF) therapies have been the mainstay of nAMD treatment.3 However, the outcomes achieved in the clinical trial setting are often not replicated in the real world, partly due to undertreatment and burden of frequent monitoring and injections.4 Alternative dosing approaches of existing anti-VEGF agents aimed at increasing treatment and monitoring intervals have been evaluated with variable visual results.5–9 Furthermore, selective VEGF neutralisation alone does not address all potential targets or disease mechanisms in nAMD pathophysiology.10 11 Thus, there is a need to target additional pathways involved in nAMD beyond the VEGF pathway that might offer longer durability, reduce treatment burden and potentially improve patient outcomes in clinical practice compared with currently available therapies.

The angiopoietin (Ang)-Tie2 signalling pathway plays a crucial role in maintaining vascular stability and regulating angiogenesis in healthy retinal vessels. In nAMD, elevated levels of angiopoietin-2 (Ang-2) disrupt angiopoietin-1 signalling, resulting in vascular instability. This, combined with increased levels of VEGF, can result in angiogenesis, vascular leakage, inflammation and eventual visual loss.12

Faricimab is the first and only bispecific antibody designed for intraocular use that provides dual inhibition of Ang-2 and VEGF-A.13 The TENAYA and LUCERNE phase 3 clinical trials reported the efficacy, durability and safety of faricimab up to every 16 weeks (Q16W) in patients with nAMD.5

The two studies met their primary endpoint of non-inferior change from baseline in best-corrected visual acuity (BCVA), averaged over weeks 40, 44 and 48, with faricimab up to Q16W versus aflibercept every 8 weeks (Q8W). Treatment with faricimab offered extended durability, with approximately 80% of patients receiving dosing every 12 weeks (Q12W) or 16 weeks at year 1.

Effectively engaging patients in their care is essential to improving health outcomes and satisfaction with the care experience.6 7 This is particularly important because treatment of nAMD with anti-VEGF agents typically lasts many years,8 with most visits occurring in the first year of treatment.9 The need for lifelong management is important to convey to the patient at diagnosis, so they are better informed of their treatment journey from the outset. A lower treatment frequency in the first year may also improve compliance with ongoing treatment.

Previous studies have focused on the associations of baseline characteristics with treatment response to anti-VEGF therapies.10 11 14 15 The purpose of this study was to evaluate the key baseline factors associated with the faricimab treatment interval in the first year of treatment, using a post-hoc analysis of the TENAYA and LUCERNE phase 3 clinical trials, with the aim of providing clinicians with a tool to better inform their patients, at an individual level, of the likely initial treatment intensity.

Materials and methods

Study design

The designs of TENAYA (NCT03823287) and LUCERNE (NCT03823300) have been described previously.5 16 In brief, TENAYA and LUCERNE were identically designed, multicentre, randomised, active comparator-controlled, double-masked, parallel-group, 112-week trials conducted at 271 clinical sites worldwide (TENAYA: 149 sites in 15 countries; LUCERNE: 122 sites in 20 countries). Patients were randomised 1:1 to faricimab up to Q16W or aflibercept Q8W. At weeks 20/24, patients’ treatment response was assessed using protocol-defined criteria, and they were then assigned one of three fixed treatment intervals (Q8W, Q12W or Q16W) through week 60. Patients treated with aflibercept were kept at Q8W intervals throughout the studies; as such, a similar analysis could not be undertaken for this group.

Main outcome measures

This post-hoc analysis of year 1 data from the TENAYA and LUCERNE studies was conducted to evaluate the key baseline factors associated with faricimab treatment intervals in year 1; year 2 data were not included. First, a literature search was conducted to identify ocular baseline characteristics previously reported to predict nAMD treatment outcomes. Based on this search, the following baseline characteristics were highlighted as particularly important and were included in the analyses: central subfield thickness (CST), BCVA, presence of subretinal fluid in centre 1 mm (SRF), presence of intraretinal fluid in centre 1 mm (IRF), low luminance deficit (LLD), and macular neovascularisation (MNV) location, size and type.10 11 14 15 MNV location was classified as subfoveal versus non-subfoveal, and MNV type as occult versus non-occult (classic, minimally classic, predominantly classic, polypoidal choroidal vasculopathy and RAP). All anatomical characteristics were measured at an independent reading centre.5

Initially, both univariate and multivariate logistic regression analyses were carried out for all eight identified ocular baseline characteristics to explore their associations with the treatment interval achieved at year 1 (defined as week 48). These characteristics were then entered into a classification tree model, which used a recursive process to divide the patients into subgroups that maximised their separation by treatment frequency at the end of year 1.

Statistical methods

Logistic models were implemented using extended (Q12W and Q16W intervals combined) versus non-extended (Q8W) intervals as the binary response, with all eight ocular characteristics as predictors. For the univariate logistic models, each ocular characteristic was included one at a time, yielding eight distinct logistic regressions; these models did not adjust for other variables and explored the association between potential interval extendibility and each ocular characteristic. Following this, a multivariate logistic regression was performed with all ocular characteristics as main effects, to assess the impact of each characteristic adjusted for the others.
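As an illustration only, a minimal R sketch of how such models might be specified is given below; the data frame df, the binary response extended and the predictor names are hypothetical, not those of the trial dataset.

```r
# Minimal sketch (hypothetical names): `extended` is a binary response
# (1 = Q12W/Q16W at year 1, 0 = Q8W); predictors are the eight ocular
# baseline characteristics.
predictors <- c("cst", "bcva", "srf", "irf", "lld",
                "mnv_location", "mnv_type", "lesion_size")

# Univariate models: one unadjusted logistic regression per characteristic
univariate <- lapply(predictors, function(p) {
  glm(reformulate(p, response = "extended"), data = df, family = binomial)
})

# Multivariate model: all characteristics as main effects,
# so each effect is adjusted for the others
multivariate <- glm(reformulate(predictors, response = "extended"),
                    data = df, family = binomial)
summary(multivariate)  # Wald p values for each adjusted effect
```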

Following the logistic regression analysis, two classification tree models17 were applied to the data to determine the baseline characteristics that correlated with the faricimab treatment interval in the first year. The classification tree fitting process starts by selecting the most relevant variable at each step, identified through a measure of impurity (the Gini index). The model then determines the optimal threshold for splitting the data along this variable, aiming to maximise the homogeneity of the resulting subsets. The algorithm continues this process of splitting, at each stage choosing the variable and threshold that best divide the data, until a stopping criterion is met. In this work, we adopted a model that stops splitting whenever a further split provides only minimal improvement in the model’s accuracy, defined as <1% improvement in the Gini index.
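For reference, the Gini impurity of a node whose patients fall into the $K=3$ interval classes (Q8W, Q12W, Q16W) with proportions $p_k$ is

$$G = 1 - \sum_{k=1}^{K} p_k^{2},$$

which equals 0 for a pure node and is largest when the classes are evenly mixed; a split is retained only if it reduces this impurity by more than the 1% threshold stated above.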

The tree models were fitted using the R package ‘rpart’ (R V.4.0.318; package V.4.1.1519), with the treatment interval at year 1 as the response and the baseline ocular characteristics as predictors. The two decision tree models were termed the ‘full model’ and the ‘practical model’.
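A minimal sketch of the corresponding rpart call is shown below, assuming a data frame df with a three-level factor interval_y1 (Q8W/Q12W/Q16W) and the same hypothetical predictor names as above; the complexity parameter cp = 0.01 encodes the <1% improvement stopping rule.

```r
library(rpart)

# Full model sketch: all eight baseline characteristics as predictors
full_model <- rpart(
  interval_y1 ~ cst + bcva + srf + irf + lld +
    mnv_location + mnv_type + lesion_size,
  data = df,
  method = "class",                   # classification tree
  parms = list(split = "gini"),       # Gini index as the impurity measure
  control = rpart.control(cp = 0.01)  # stop when a split improves fit by <1%
)

print(full_model)                     # text summary of splits and nodes
plot(full_model); text(full_model)    # basic tree visualisation
```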

In brief, both tree models begin with the entire sample, typically referred to as the root node. The model splits the root node into daughter nodes by choosing the variable, and the threshold on that variable, that is most impactful in differentiating the whole sample by treatment frequency at the end of year 1. As an example, the full model (figure 1) used all eight baseline characteristics, and the model chose lesion size, with an associated threshold, as the initial most impactful variable. The two daughter nodes are the ‘yes’ and ‘no’ responses to the variable and its threshold. The model then finds further splits in each daughter node by recursively applying this partitioning technique until further splitting cannot provide sufficient improvement in model fit, where sufficient improvement is defined as more than a 1% decrease in the Gini index. Each terminal node shows the proportion of patients in the study who followed that route down the tree and, within that proportion, the percentage of patients on each of the three treatment intervals.
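Continuing the hypothetical sketch above, the per-node class composition that populates each terminal node can be inspected directly from the fitted rpart object:

```r
# Rows of `frame` are nodes; for classification trees the `yval2` column
# holds the per-class counts and proportions in each node
full_model$frame

# Predicted class probabilities for the terminal node each patient reaches
head(predict(full_model, type = "prob"))
```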

Figure 1

Classification tree model based on all ocular baseline characteristics from table 1. The top of the tree is the root node (all patients from the faricimab treatment arm). The clinician follows each layer until reaching one of the seven terminal nodes at the bottom of the tree. Below each terminal node is the percentage of patients whose characteristics led to that terminal node. Within each terminal node are the three treatment intervals the patients were on at the end of year 1 (Q8W=grey, Q12W=purple and Q16W=red). BCVA, best-corrected visual acuity; CST, central subfield thickness; MNV, macular neovascularisation; SRF, subretinal fluid in centre 1 mm.

The ‘full model’ implemented all measured baseline ocular characteristics as predictors with the treatment interval at year 1 (three groups: Q8W, Q12W and Q16W) as the response. The classification tree formed had 13 nodes (of which 7 were terminal nodes).

The ‘practical model’ was built to reduce the complexity of the full model and to use characteristics that are typically measured in all clinics. Therefore, LLD and MNV lesion size were not incorporated at the outset. In addition, given the utility of anatomical parameters in clinical practice and the dominating relevance of CST, CST was fitted at the first split. This produced CST subgroups of low (≤229 µm), middle (230–625 µm) and high (≥626 µm) ranges directly below the root node. Within each CST subgroup, the data-driven model then chose the next most impactful variables and associated thresholds. This practical model included the treatment interval at year 1 (three groups: Q8W, Q12W and Q16W) as the response, and CST, BCVA, presence of SRF, presence of IRF, MNV location and MNV type as predictors. An additional step was carried out at the end of model fitting to remove (prune) any nodes that were not clinically important, thereby improving usability for the clinician.
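rpart does not allow a variable to be forced into the first split directly, so one way to emulate the practical model’s constraint, sketched below under the same hypothetical naming assumptions, is to stratify the data by the data-driven CST subgroups and fit a tree within each stratum. prune() is shown only as the mechanical pruning tool; the pruning described above was based on clinical importance rather than a numeric threshold.

```r
library(rpart)

# Fix CST at the first split by stratifying on the data-driven subgroups
df$cst_group <- cut(df$cst,
                    breaks = c(-Inf, 229, 625, Inf),  # <=229, 230-625, >=626 um
                    labels = c("low", "mid", "high"))

# Fit one tree per CST stratum using the practical model's predictors
practical_trees <- lapply(split(df, df$cst_group), function(stratum) {
  rpart(interval_y1 ~ bcva + srf + irf + mnv_location + mnv_type,
        data = stratum, method = "class",
        control = rpart.control(cp = 0.01))
})

# Mechanical pruning example: drop splits below a chosen complexity
# threshold (0.02 here is purely illustrative)
pruned_mid <- prune(practical_trees[["mid"]], cp = 0.02)
```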

Results

By year 1 (defined as week 48), 631 of the 665 patients in the faricimab treatment arm remained in the study. Baseline ocular characteristics of the study eyes are shown in table 1, with the eyes grouped by treatment interval (ie, Q8W, Q12W or Q16W).

Table 1 | Baseline characteristics by treatment interval at year 1

General trends were noted between baseline characteristics and treatment frequency at the end of year 1. Specifically, patients who had a greater CST, a larger lesion size, a higher LLD, or lesions classified as occult or subfoveal tended to be on shorter treatment intervals (table 1).

Results from univariate and multivariate logistic regression are shown in table 2. The univariate models showed that baseline LLD, presence of subfoveal MNV, baseline fundus fluorescein angiography (FFA) lesion size (disc area) and baseline CST were associated with the extension of treatment at year 1 (nominal p<0.05). In the multivariate model, only baseline CST (p=0.011), presence of a subfoveal MNV (p=0.028) and presence of occult MNV (p=0.018) maintained a consistent association with the year 1 treatment interval.

Table 2 | Univariate and multivariate model analysis of ocular baseline characteristics indicative of treatment extension at 1 year

Classification tree model

Two classification tree models were built. The first used all eight ocular baseline characteristics listed in table 1 and is referred to as the ‘full model’ (figure 1). The second (the ‘practical model’; figure 2) was built using the same baseline characteristics as the full model, but with LLD and MNV lesion size removed at the outset, as these characteristics are not measured routinely in most clinical practice. In addition, presence of IRF was removed (pruned) from this model, as it produced a terminal node that did not sufficiently differentiate between the three treatment intervals. Furthermore, the practical model was adjusted to fix baseline CST as the first split, with the model forming three CST subgroups: low (≤229 µm), mid (230–625 µm) and high (≥626 µm). Both models start with baseline data from all patients completing the first 48 weeks of the trials (figures 1 and 2). The root-node splitting variables for the full and practical decision tree models were lesion size and CST, respectively.

Figure 2

Classification tree model based on all ocular baseline characteristics from table 1 except low luminance deficit, MNV lesion size and presence of subretinal fluid in centre 1 mm. The top of the tree is all patients from the faricimab treatment arm (n=631). The clinician follows each layer until reaching one of the five terminal nodes at the bottom of the tree. Below each terminal node is the percentage of patients whose characteristics led to that terminal node. Within each terminal node are the three treatment intervals the patients were on at the end of year 1 (Q8W=grey, Q12W=purple and Q16W=red). CST, central subfield thickness; IRF, intraretinal fluid in centre 1 mm; MNV, macular neovascularisation.

In the full model (figure 1), the largest proportion of patients (43%) fell within one specific terminal node of the seven available. These were patients who followed the path of having a lesion size of at least 0.39 disc areas, then a CST of at least 249 µm, then no subretinal fluid and, lastly, a BCVA of at least 44 letters. Of these patients, 23% ended on Q8W, 34% on Q12W and 43% on Q16W treatment intervals at the end of year 1. That is, the majority of patients with these baseline features ended up on an extended (≥Q12W) interval.

The practical model (figure 2) had fewer levels and focused on metrics readily measured in most clinical environments. Here, the largest proportion of patients (34%) fell within one specific terminal node of the five available. These were patients who had a CST between 230 and 625 µm inclusive but did not have a subfoveal lesion. Within this cohort, 15% ended on Q8W, 30% on Q12W and 55% on Q16W treatment intervals at the end of year 1. That is, the majority of patients with these baseline features ended up on an extended (≥Q12W) interval.

Figure 3 illustrates three example patients from the TENAYA and LUCERNE trials. All three cases had a CST between 230 and 625 µm inclusive (mid-CST group), which is representative of the trial population, as 89% of patients fell into this category. The figure shows the imaging carried out at the baseline assessment for each case (optical coherence tomography (OCT) and FFA), together with OCT at week 16, OCT after the loading phase, OCT at the week 20/24 visit (when the patient was assigned their treatment interval for the first year) and, finally, OCT at the week 48 visit. The figure also shows the frequency and timing of treatment for each case over the 48-week period. Each case had a robust response to faricimab, with resolution of fluid at the end of year 1.

Figure 3

Three case examples from the TENAYA/LUCERNE phase 3 studies, each from a different terminal node within the mid-CST branch of the decision tree. BCVA, best-corrected visual acuity; CST, central subfield thickness; FFA, fundus fluorescein angiography; IRF, intraretinal fluid in centre 1 mm.

Discussion

This post-hoc analysis used a classification tree model to highlight the baseline characteristics most associated with an extended faricimab treatment interval at the end of year 1 among patients in the pivotal TENAYA and LUCERNE trials. To our knowledge, this is the first time a classification tree model has been used to determine ocular baseline factors predicting nAMD treatment frequency in a clinical trial. The practical decision tree model identified baseline CST, MNV lesion type and MNV location as the most important factors in predicting extended treatment intervals in the first year.

Of note, our classification tree model did not identify baseline BCVA as an important factor in determining the year 1 treatment interval. In addition, the outcome measure of this post-hoc analysis was treatment frequency at year 1, which differs from previous post-hoc analyses that used BCVA change as the main outcome measure, thereby limiting comparison between these types of analyses.

The use of tree-based models to predict treatment-specific outcomes is not new and has been used in other branches of medicine.20 21 In ophthalmology, a classification tree model was applied to predict factors that could identify appropriate candidates for cataract surgery, using selected variables that could easily be assessed by clinicians.22 The advantage of a classification tree model is that its data-driven approach results in a clear visualisation of the classification process and allows an assessment of each characteristic’s importance.

Interestingly, baseline IRF and SRF in the centre 1 mm did not appear to be as relevant in the univariate and multivariate analyses and, as such, did not appear in the ‘practical model’. This contrasts with findings from another analysis assessing the effect of baseline characteristics on BCVA after ranibizumab treatment, which found that greater SRF thickness at baseline correlated with a greater injection frequency in the first year of treatment.11 Similarly, other studies using machine and deep learning techniques have found SRF, IRF and total retinal thickness to be important markers of treatment burden.23 24 The apparent differences from our findings are likely related to the number of baseline variables introduced into our model, which were chosen because they are relatively easy to assess in the clinical setting. Furthermore, our baseline characteristics did not quantify the degree of fluid, only the presence or absence of IRF/SRF in the centre 1 mm area.

Clinic capacity is increasingly limited due to the growing treatment burden of AMD, driven by an ageing population and by new treatments now becoming available for patients with geographic atrophy.25 Being able to predict how frequently a patient is likely to require treatment therefore has the potential to be a valuable tool for optimising clinic planning and ensuring safe clinical practice. This post-hoc analysis and future iterations of this work have the potential to help alleviate this problem.

The present study has several strengths in its approach and methodology. First, it benefits from the inclusion of data from two large, global, phase 3, prospective, randomised, controlled clinical trials, providing sizeable sample sizes for the various subgroups analysed. Another significant strength lies in its methodology: disease activity criteria akin to those used in clinical practice were employed to determine the initial treatment interval for each patient, ensuring that the findings align more closely with real-world clinical scenarios. With treatment interval assignment dictated by disease activity criteria based on CST and/or BCVA, the study captures a treatment strategy resembling clinical practice.

First, a limitation of the decision tree’s clinical utility is that clinicians who do not follow a treatment pattern or criteria similar to those of the TENAYA and LUCERNE studies may observe different results. Second, as a post-hoc analysis, only the variables recorded in the original studies were assessed, and other factors could also influence the treatment interval. Finally, the scope of the analysis was confined to the first year of treatment, whereas nAMD patients typically require treatment over a longer period. A further analysis incorporating year 2 data from the TENAYA and LUCERNE studies would allow assessment of longer-term trends and of the factors influencing treatment intervals and response over time. Further work is also required to validate this model using real-world data, which may provide a more practical assessment of how it performs in diverse clinical settings.

This post-hoc analysis of faricimab data from the phase 3 TENAYA/LUCERNE studies introduces a novel decision tree model to predict treatment burden in the first year, with the aim of providing clinicians with practical information to inform their patients of the likely faricimab treatment frequency in the first year of treatment. Further work is required to assess the utility of the tool in the real world.