Quantitative Multimodal Imaging Characterization of Intraretinal Cysts versus Degenerative Pseudocysts in Neovascular Age-Related Macular Degeneration.
To differentiate intraretinal fluid (IRF) cysts from degenerative pseudocysts in neovascular age-related macular degeneration (AMD) by quantitative multimodal imaging.
Observational, cross-sectional.
Patients affected by macular neovascularization secondary to AMD.
All patients were analyzed by OCT, OCT angiography (OCTA), and dense automatic real-time (ART) OCTA. New-onset cysts were considered IRF, whereas those cysts that were found to be persistent for at least 3 months were categorized as degenerative pseudocysts. Intraretinal cysts were automatically segmented to calculate cyst circularity. Peri-cyst space was quantitatively analyzed to assess the presence of perfusion signal and hyperreflective foci (HF).
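The abstract does not state how cyst circularity was defined; the standard shape metric, 4πA/P², is a reasonable assumption for what an automatic segmentation pipeline would compute. A minimal sketch (illustrative only; the function name and pixel-based inputs are ours, not the study's):

```python
import math

def circularity(area_px: float, perimeter_px: float) -> float:
    """Standard shape circularity, 4*pi*A / P**2.

    Returns 1.0 for a perfect circle and progressively smaller
    values for elongated or irregular cyst cross-sections.
    """
    if perimeter_px <= 0:
        raise ValueError("perimeter must be positive")
    return (4.0 * math.pi * area_px) / (perimeter_px ** 2)

# Sanity check: a circle of radius r has area pi*r**2 and
# perimeter 2*pi*r, so its circularity is (approximately) 1.0.
r = 10.0
print(circularity(math.pi * r ** 2, 2.0 * math.pi * r))
```

Under this definition, the reported values (about 0.86 for IRF cysts versus 0.68 for pseudocysts) would correspond to rounder versus more irregular cyst outlines.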
Best-corrected visual acuity, cyst circularity, peri-cyst perfusion, peri-cyst HF, fibrosis, and outer retinal atrophy.
We analyzed 387 cysts collected from 35 eyes of 35 patients with neovascular AMD (14 men; mean age, 80 ± 5 years). We classified 302 IRF cysts and 85 degenerative pseudocysts. Intraretinal fluid cysts were characterized by significantly higher circularity (0.86; range, 0.81-0.91), perfusion signal in the peri-cyst space, and peri-cyst HF in 89% of cases (all P < 0.05). Degenerative pseudocysts showed significantly lower circularity (0.68; range, 0.64-0.76), no perfusion signal in the peri-cyst space, and peri-cyst HF in only 29% of cases (all P < 0.05). The adopted quantitative metrics significantly correlated with disease duration, number of injections, fibrosis, and outer retinal atrophy.
Intraretinal fluid cysts can be discriminated from degenerative pseudocysts using a quantitative multimodal imaging approach. These findings are clinically relevant and should be included in future training models for artificial intelligence algorithms to improve the diagnostic power and fluid monitoring in neovascular AMD.
The author(s) have no proprietary or commercial interest in any materials discussed in this article.
Arrigo A, Aragona E, Battaglia Parodi M, Bandello F
Decreased Macular Choriocapillaris Perfusion Correlates with Contrast Sensitivity Function in Dry Age-Related Macular Degeneration.
To investigate the relationships between contrast sensitivity (CS), choriocapillaris perfusion, and other structural OCT biomarkers in dry age-related macular degeneration (AMD).
Cross-sectional, observational study.
One hundred AMD eyes (22 early, 52 intermediate, and 26 late) from 74 patients and 45 control eyes from 37 age-similar subjects.
All participants had visual acuity (VA) assessment, quantitative CS function (qCSF) testing, macular OCT, and 6 × 6-mm swept-source OCT angiography scans on the same day. OCT volumes were analyzed for subretinal drusenoid deposits and hyporeflective drusen cores, and to measure thickness of the outer nuclear layer. OCT angiography scans were utilized to calculate drusen volume and inner choroid flow deficit percentage (IC-FD%), and to measure the area of choroidal hypertransmission defects (HTDs). Inner choroid flow deficit percentage was measured from a 16-μm thick choriocapillaris slab after compensation and binarization with Phansalkar's method. Generalized linear mixed-effects models were used to evaluate the associations between functional and structural variables.
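Phansalkar's local thresholding, used above to binarize the compensated choriocapillaris slab, has a published closed form: T = m · (1 + p·e^(−q·m) + k·(s/R − 1)) over a local window, where m and s are the local mean and standard deviation, with typical defaults p = 2, q = 10, k = 0.25, R = 0.5 for images normalized to [0, 1]. A hedged numpy/scipy sketch of the binarization and the flow deficit percentage; the window size and normalization step are our assumptions, not the study's exact pipeline:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def phansalkar_threshold(img, window=15, k=0.25, r=0.5, p=2.0, q=10.0):
    """Local Phansalkar threshold for an image normalized to [0, 1]:
    T = m * (1 + p*exp(-q*m) + k*(s/r - 1)), m/s = local mean/std."""
    m = uniform_filter(img, window)
    # Local variance via E[x^2] - E[x]^2, clipped to avoid tiny negatives.
    s = np.sqrt(np.clip(uniform_filter(img ** 2, window) - m ** 2, 0.0, None))
    return m * (1.0 + p * np.exp(-q * m) + k * (s / r - 1.0))

def flow_deficit_percent(cc_slab):
    """Percent of choriocapillaris pixels falling below the local
    threshold (i.e., flow deficits) after Phansalkar binarization."""
    img = cc_slab.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # -> [0, 1]
    deficits = img < phansalkar_threshold(img)
    return 100.0 * float(deficits.mean())

# Demo on a synthetic slab (random noise stands in for OCTA data).
rng = np.random.default_rng(0)
slab = rng.random((64, 64))
fd = flow_deficit_percent(slab)
```

In the study's terms, IC-FD% is this percentage computed over a 16-μm-thick choriocapillaris slab after signal compensation.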
To explore the associations between qCSF-measured CS, IC-FD%, and various AMD imaging biomarkers.
Age-related macular degeneration eyes exhibited significantly reduced qCSF metrics across all stages compared with controls. Univariate analysis revealed significant associations between various imaging biomarkers, reduced qCSF metrics, and VA in both groups. Multivariate analysis confirmed that higher IC-FD% in the central 5 mm was significantly associated with decreases in all qCSF metrics in AMD eyes (β = -0.74 to -0.25, all P < 0.05), but not with VA (P > 0.05). Outer nuclear layer thickness in the central 3 mm correlated with both VA (β = 2.85, P < 0.001) and several qCSF metrics (β = 0.01-0.90, all P < 0.05), especially in AMD eyes. Further, larger HTD areas were associated with decreased VA (β = -0.89, P < 0.001) and reduced CS at low-intermediate frequencies across AMD stages (β = -0.30 to -0.29, P < 0.001).
The significant association between IC-FD% in the central 5 mm and qCSF-measured CS reinforces the hypothesis that decreased macular choriocapillaris perfusion contributes to visual function changes in AMD, which are more pronounced in CS than in VA.
Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Romano F, Vingopoulos F, Yuan M, Ding X, Garcia M, Ploumi I, Rodriguez J, Garg I, Tracy JH, Bannerman A, Choi H, Stettler I, Bennett C, Overbey KM, Laìns I, Kim LA, Vavvas DG, Husain D, Miller JW, Miller JB
Long-term effect of fluid volumes during the maintenance phase in neovascular age-related macular degeneration: results from Fight Retinal Blindness!
To investigate the effect of macular fluid volumes (subretinal fluid [SRF], intraretinal fluid [IRF], and pigment epithelium detachment [PED]) after initial treatment on functional and structural outcomes in neovascular age-related macular degeneration in a real-world cohort from Fight Retinal Blindness!
Treatment-naive neovascular age-related macular degeneration patients from Fight Retinal Blindness! (Zürich, Switzerland) were included. Macular fluid on optical coherence tomography was automatically quantified using an approved artificial intelligence algorithm. Follow-up of macular fluid, the number of anti-vascular endothelial growth factor treatments, and the effect of fluid volumes after initial treatment (high, top 25%; low, bottom 75%) on best-corrected visual acuity and on the development of macular atrophy and fibrosis were investigated over 48 months.
A total of 209 eyes (mean age, 78.3 years) were included. Eyes with high IRF volumes after initial treatment showed best-corrected visual acuity outcomes that were 2.6 letters (p = 0.021) and 7.4 letters (p = 0.007) worse at months 12 and 48, respectively. Eyes with high IRF received significantly more treatments (+1.6 [p < 0.001] and +5.3 [p = 0.002] at months 12 and 48, respectively). Patients with high SRF or PED had comparable best-corrected visual acuity outcomes but received significantly more treatments for SRF (+2.4 [p < 0.001] and +11.4 [p < 0.001] at months 12 and 48, respectively) and PED (+1.2 [p = 0.001] and +7.8 [p < 0.001] at months 12 and 48, respectively).
Patients with high macular fluid after initial treatment are at risk of losing vision that may not be compensable with higher treatment frequency for IRF. Higher treatment frequency for SRF and PED may result in comparable treatment outcomes. Quantification of macular fluid in all compartments is essential to detect eyes at risk of aggressive disease.
Reiter GS, Mares V, Leingang O, Fuchs P, Bogunovic H, Barthelmes D, Schmidt-Erfurth U
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Survival estimation for patients with symptomatic skeletal metastases ideally should be made before a type of local treatment has already been determined. Currently available survival prediction tools, however, were generated using data from patients treated either operatively or with local radiation alone, raising concerns about whether they would generalize well to all patients presenting for assessment. The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA), trained with institution-based data of surgically treated patients, and the Metastases location, Elderly, Tumor primary, Sex, Sickness/comorbidity, and Site of radiotherapy model (METSSS), trained with registry-based data of patients treated with radiotherapy alone, are two of the most recently developed survival prediction models, but they have not been tested on patients whose local treatment strategy is not yet decided.
(1) Which of these two survival prediction models performed better in a mixed cohort made up both of patients who received local treatment with surgery followed by radiotherapy and who had radiation alone for symptomatic bone metastases? (2) Which model performed better among patients whose local treatment consisted of only palliative radiotherapy? (3) Are laboratory values used by SORG-MLA, which are not included in METSSS, independently associated with survival after controlling for predictions made by METSSS?
Between 2010 and 2018, we provided local treatment for 2113 adult patients with skeletal metastases in the extremities at an urban tertiary referral academic medical center using one of two strategies: (1) surgery followed by postoperative radiotherapy or (2) palliative radiotherapy alone. Every patient's survivorship status was ascertained either by their medical records or the national death registry from the Taiwanese National Health Insurance Administration. After applying a priori designated exclusion criteria, 91% (1920) were analyzed here. Among them, 48% (920) of the patients were female, and the median (IQR) age was 62 years (53 to 70 years). Lung was the most common primary tumor site (41% [782]), and 59% (1128) of patients had other skeletal metastases in addition to the treated lesion(s). In general, the indications for surgery were the presence of a complete pathologic fracture or an impending pathologic fracture, defined as having a Mirels score of ≥ 9, in patients with an American Society of Anesthesiologists (ASA) classification of less than or equal to IV and who were considered fit for surgery. The indications for radiotherapy were relief of pain, local tumor control, prevention of skeletal-related events, and any combination of the above. In all, 84% (1610) of the patients received palliative radiotherapy alone as local treatment for the target lesion(s), and 16% (310) underwent surgery followed by postoperative radiotherapy. Neither METSSS nor SORG-MLA was used at the point of care to aid clinical decision-making during the treatment period. Survival was retrospectively estimated by these two models to test their potential for providing survival probabilities. We first compared SORG to METSSS in the entire population. Then, we repeated the comparison in patients who received local treatment with palliative radiation alone. 
We assessed model performance by area under the receiver operating characteristic curve (AUROC), calibration analysis, Brier score, and decision curve analysis (DCA). The AUROC measures discrimination, which is the ability to distinguish patients with the event of interest (such as death at a particular time point) from those without. AUROC typically ranges from 0.5 to 1.0, with 0.5 indicating random guessing and 1.0 a perfect prediction, and in general, an AUROC of ≥ 0.7 indicates adequate discrimination for clinical use. Calibration refers to the agreement between the predicted outcomes (in this case, survival probabilities) and the actual outcomes, with a perfect calibration curve having an intercept of 0 and a slope of 1. A positive intercept indicates that the prediction model generally underestimates actual survival, and a negative intercept suggests the opposite (overestimation). When comparing models, an intercept closer to 0 typically indicates better calibration. Calibration can also be summarized as log(O:E), the logarithm scale of the ratio of observed (O) to expected (E) survivors. A log(O:E) > 0 signals an underestimation (the observed survival is greater than the predicted survival); a log(O:E) < 0 indicates the opposite (the observed survival is lower than the predicted survival). A model with a log(O:E) closer to 0 is generally considered better calibrated. The Brier score is the mean squared difference between the model predictions and the observed outcomes, and it ranges from 0 (best prediction) to 1 (worst prediction). The Brier score captures both discrimination and calibration, and it is considered a measure of overall model performance. In Brier score analysis, the "null model" assigns a predicted probability equal to the prevalence of the outcome and represents a model that adds no new information. A prediction model should achieve a Brier score lower than the null-model Brier score to be considered useful.
The DCA was developed as a method to determine whether using a model to inform treatment decisions would do more good than harm. It plots the net benefit of making decisions based on the model's predictions across all possible risk thresholds (or cost-to-benefit ratios) in relation to the two default strategies of treating all or no patients. The care provider can decide on an acceptable risk threshold for the proposed treatment in an individual and assess the corresponding net benefit to determine whether consulting with the model is superior to adopting the default strategies. Finally, we examined whether laboratory data, which were not included in the METSSS model, would have been independently associated with survival after controlling for the METSSS model's predictions by using the multivariable logistic and Cox proportional hazards regression analyses.
Between the two models, only SORG-MLA achieved adequate discrimination (an AUROC of > 0.7) in the entire cohort (of patients treated operatively or with radiation alone) and in the subgroup of patients treated with palliative radiotherapy alone. SORG-MLA outperformed METSSS by a wide margin on discrimination, calibration, and Brier score analyses in not only the entire cohort but also the subgroup of patients whose local treatment consisted of radiotherapy alone. In both the entire cohort and the subgroup, DCA demonstrated that SORG-MLA provided more net benefit compared with the two default strategies (of treating all or no patients) and compared with METSSS when risk thresholds ranged from 0.2 to 0.9 at both 90 days and 1 year, indicating that using SORG-MLA as a decision-making aid was beneficial when a patient's individualized risk threshold for opting for treatment was 0.2 to 0.9. Higher albumin, lower alkaline phosphatase, lower calcium, higher hemoglobin, lower international normalized ratio, higher lymphocytes, lower neutrophils, lower neutrophil-to-lymphocyte ratio, lower platelet-to-lymphocyte ratio, higher sodium, and lower white blood cells were independently associated with better 1-year and overall survival after adjusting for the predictions made by METSSS.
Based on these discoveries, clinicians might choose to consult SORG-MLA instead of METSSS for survival estimation in patients with long-bone metastases presenting for evaluation of local treatment. Basing a treatment decision on the predictions of SORG-MLA could be beneficial when a patient's individualized risk threshold for opting to undergo a particular treatment strategy ranged from 0.2 to 0.9. Future studies might investigate relevant laboratory items when constructing or refining a survival estimation model because these data demonstrated prognostic value independent of the predictions of the METSSS model, and future studies might also seek to keep these models up to date using data from diverse, contemporary patients undergoing both modern operative and nonoperative treatments.
Level III, diagnostic study.
Lee CC, Chen CW, Yen HK, Lin YP, Lai CY, Wang JL, Groot OQ, Janssen SJ, Schwab JH, Hsu FM, Lin WH