Comparing clinical-only and combined clinical-laboratory models for ECPR outcomes in refractory cardiac arrest.
Extracorporeal cardiopulmonary resuscitation (ECPR) improves survival for prolonged cardiac arrest (CA) but carries significant risks and costs due to ECMO. Previous predictive models have been complex, incorporating both clinical data and parameters obtained after CPR or ECMO initiation. This study aims to compare a simpler clinical-only model with a model that includes both clinical and pre-ECMO laboratory parameters, in order to refine patient selection and improve ECPR outcomes.
Medical records from January 2012 to January 2019 at our institution were retrospectively reviewed. Patients who met the following criteria were enrolled in the ECPR program: age 18-75 years, conventional CPR started within 5 min of CA, CA presumed to be of cardiac origin, and refractory CA.
Survivors had similar underlying diseases and were younger than non-survivors, although the difference was not statistically significant (57.0 vs. 61.0 years, p = 0.117). Survivors had significantly higher rates of an initial shockable rhythm (pulseless ventricular tachycardia or ventricular fibrillation), shorter low-flow time (CPR-to-ECMO time), lower lactate levels, and higher initial pH. Survival to discharge was higher for emergency department CA than for out-of-hospital and in-hospital CA (63.3% vs. 35.3%, p = 0.007). Two models were used to evaluate survival to discharge and good neurological outcomes. Model 1, the short version (S1, survival score 1; F1, function score 1), included only the patient's clinical characteristics before ECPR, whereas Model 2, the full version (S2, survival score 2; F2, function score 2), included clinical factors plus laboratory data, namely lactate and pH levels. Both Model 1 (S1) and Model 2 (S2) showed good predictive ability for survival to discharge, with areas under the receiver operating characteristic curve (AUROCs) of 0.79 and 0.83, respectively. Model 1 (F1) and Model 2 (F2) predicted good neurological outcomes with AUROCs of 0.80 and 0.79, respectively. The AUROCs of the survival scores (S1 vs. S2) and of the function scores (F1 vs. F2) were not significantly different.
This study demonstrates that clinical factors alone can effectively predict survival to discharge and favorable neurological outcomes at 6 months. This emphasizes the importance of early prognostic evaluation and supports the use of clinical data as a practical tool to aid clinicians' decision-making in this difficult situation.
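To illustrate the kind of comparison reported above (clinical-only Model 1 vs. clinical-plus-laboratory Model 2), the following minimal sketch fits two logistic regression models on synthetic data and compares their AUROCs with a bootstrap. The predictors, coefficients, and data are assumptions made up for demonstration; they are not the study's variables, model, or dataset, and the study may have compared AUROCs with a different test (e.g., DeLong's).

```python
# Minimal sketch: clinical-only vs. clinical + laboratory logistic models,
# compared by in-sample AUROC. All data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300

# Hypothetical pre-ECPR predictors
age = rng.normal(60, 10, n)          # years
shockable = rng.integers(0, 2, n)    # initial shockable rhythm (0/1)
low_flow = rng.normal(40, 15, n)     # CPR-to-ECMO time, minutes
lactate = rng.normal(10, 4, n)       # mmol/L
ph = rng.normal(7.0, 0.2, n)         # initial pH

# Synthetic outcome loosely reflecting the reported directions of effect
logit = 4.0 - 0.03 * age + 1.0 * shockable - 0.04 * low_flow - 0.15 * lactate + 2.0 * (ph - 7.0)
survived = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X1 = np.column_stack([age, shockable, low_flow])               # Model 1: clinical only
X2 = np.column_stack([age, shockable, low_flow, lactate, ph])  # Model 2: clinical + labs

m1 = LogisticRegression(max_iter=1000).fit(X1, survived)
m2 = LogisticRegression(max_iter=1000).fit(X2, survived)

auc1 = roc_auc_score(survived, m1.predict_proba(X1)[:, 1])
auc2 = roc_auc_score(survived, m2.predict_proba(X2)[:, 1])

# Bootstrap the difference in (apparent) AUROC between the two models
diffs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if survived[idx].all() or not survived[idx].any():
        continue  # skip resamples with only one outcome class
    diffs.append(
        roc_auc_score(survived[idx], m2.predict_proba(X2[idx])[:, 1])
        - roc_auc_score(survived[idx], m1.predict_proba(X1[idx])[:, 1])
    )

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUROC clinical-only: {auc1:.2f}, clinical + labs: {auc2:.2f}")
print(f"Bootstrap 95% CI for the AUROC difference: ({lo:.3f}, {hi:.3f})")
```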
Chiu CC, Chang YJ, Chiu CW, Chen YC, Hsieh YK, Hsiao SW, Yen HH, Siao FY, ...
《Scientific Reports》
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Survival estimation for patients with symptomatic skeletal metastases should ideally be made before the type of local treatment has been determined. Currently available survival prediction tools, however, were generated using data from patients treated either operatively or with local radiation alone, raising concerns about whether they would generalize well to all patients presenting for assessment. The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA), trained with institution-based data of surgically treated patients, and the Metastases location, Elderly, Tumor primary, Sex, Sickness/comorbidity, and Site of radiotherapy model (METSSS), trained with registry-based data of patients treated with radiotherapy alone, are two of the most recently developed survival prediction models, but they have not been tested on patients whose local treatment strategy is not yet decided.
(1) Which of these two survival prediction models performed better in a mixed cohort of patients who received local treatment with surgery followed by radiotherapy and patients who received radiotherapy alone for symptomatic bone metastases? (2) Which model performed better among patients whose local treatment consisted of only palliative radiotherapy? (3) Are laboratory values used by SORG-MLA, which are not included in METSSS, independently associated with survival after controlling for predictions made by METSSS?
Between 2010 and 2018, we provided local treatment for 2113 adult patients with skeletal metastases in the extremities at an urban tertiary referral academic medical center using one of two strategies: (1) surgery followed by postoperative radiotherapy or (2) palliative radiotherapy alone. Every patient's survivorship status was ascertained either from their medical records or from the national death registry of the Taiwanese National Health Insurance Administration. After applying a priori designated exclusion criteria, 91% (1920) were analyzed here. Among them, 48% (920) of the patients were female, and the median (IQR) age was 62 years (53 to 70 years). Lung was the most common primary tumor site (41% [782]), and 59% (1128) of patients had other skeletal metastases in addition to the treated lesion(s). In general, the indications for surgery were the presence of a complete pathologic fracture or an impending pathologic fracture, defined as having a Mirels score of ≥ 9, in patients with an American Society of Anesthesiologists (ASA) classification of less than or equal to IV who were considered fit for surgery. The indications for radiotherapy were relief of pain, local tumor control, prevention of skeletal-related events, or any combination of the above. In all, 84% (1610) of the patients received palliative radiotherapy alone as local treatment for the target lesion(s), and 16% (310) underwent surgery followed by postoperative radiotherapy. Neither METSSS nor SORG-MLA was used at the point of care to aid clinical decision-making during the treatment period. Survival was retrospectively estimated by these two models to test their potential for providing survival probabilities. We first compared SORG-MLA with METSSS in the entire population and then repeated the comparison in patients who received local treatment with palliative radiation alone. We assessed model performance by area under the receiver operating characteristic curve (AUROC), calibration analysis, Brier score, and decision curve analysis (DCA). The AUROC measures discrimination, which is the ability to distinguish patients with the event of interest (such as death at a particular time point) from those without. AUROC typically ranges from 0.5 to 1.0, with 0.5 indicating random guessing and 1.0 a perfect prediction; in general, an AUROC of ≥ 0.7 indicates adequate discrimination for clinical use. Calibration refers to the agreement between the predicted outcomes (in this case, survival probabilities) and the actual outcomes, with a perfect calibration curve having an intercept of 0 and a slope of 1. A positive intercept indicates that the actual survival is generally underestimated by the prediction model, and a negative intercept suggests the opposite (overestimation). When comparing models, an intercept closer to 0 typically indicates better calibration. Calibration can also be summarized as log(O:E), the logarithm of the ratio of observed (O) to expected (E) survivors. A log(O:E) > 0 signals underestimation (the observed survival is greater than the predicted survival), and a log(O:E) < 0 indicates the opposite (the observed survival is lower than the predicted survival). A model with a log(O:E) closer to 0 is generally considered better calibrated.
The Brier score is the mean squared difference between the model predictions and the observed outcomes, and it ranges from 0 (best prediction) to 1 (worst prediction). The Brier score captures both discrimination and calibration and is considered a measure of overall model performance. In Brier score analysis, the "null model" assigns a predicted probability equal to the prevalence of the outcome and represents a model that adds no new information; a prediction model should achieve a Brier score at least lower than the null-model Brier score to be considered useful. The DCA was developed as a method to determine whether using a model to inform treatment decisions would do more good than harm. It plots the net benefit of making decisions based on the model's predictions across all possible risk thresholds (or cost-to-benefit ratios) in relation to the two default strategies of treating all or no patients. The care provider can decide on an acceptable risk threshold for the proposed treatment in an individual patient and assess the corresponding net benefit to determine whether consulting the model is superior to adopting either default strategy. Finally, we examined whether laboratory data, which were not included in the METSSS model, were independently associated with survival after controlling for the METSSS model's predictions, using multivariable logistic and Cox proportional hazards regression analyses.
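To make the metrics described above concrete, here is a small self-contained sketch that computes a Brier score, the corresponding null-model Brier score, the log(O:E) calibration summary, and decision-curve net benefit. The predictions and outcomes are made-up numbers, and the net-benefit rule follows the standard decision-curve formula; this is an illustration of the definitions, not the study's analysis code.

```python
# Worked example of the metrics defined above, on synthetic predictions/outcomes.
import numpy as np

rng = np.random.default_rng(1)
n = 500
p_death = rng.uniform(0.05, 0.95, n)      # model-predicted probability of death at 1 year
died = rng.random(n) < p_death * 0.8      # synthetic outcomes (the model overestimates risk)

# Brier score: mean squared difference between predictions and observed outcomes
brier = np.mean((p_death - died) ** 2)

# Null model: predict the outcome prevalence for everyone
prevalence = died.mean()
brier_null = np.mean((prevalence - died) ** 2)

# Calibration summarized as log(O:E): observed vs. expected survivors
observed_survivors = np.sum(~died)
expected_survivors = np.sum(1.0 - p_death)
log_oe = np.log(observed_survivors / expected_survivors)  # > 0 means survival was underestimated

# Decision curve analysis: net benefit of acting on predicted risk above a threshold t,
# compared with the default strategies of treating all or treating no patients
def net_benefit(t, p, outcome):
    treat = p >= t
    tp = np.sum(treat & outcome) / len(outcome)   # true-positive rate in the cohort
    fp = np.sum(treat & ~outcome) / len(outcome)  # false-positive rate in the cohort
    return tp - fp * t / (1.0 - t)

for t in (0.2, 0.5, 0.9):
    nb_model = net_benefit(t, p_death, died)
    nb_treat_all = prevalence - (1.0 - prevalence) * t / (1.0 - t)
    print(f"threshold {t}: model {nb_model:.3f}, treat-all {nb_treat_all:.3f}, treat-none 0.000")

print(f"Brier {brier:.3f} vs. null-model Brier {brier_null:.3f}, log(O:E) = {log_oe:.3f}")
```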
Between the two models, only SORG-MLA achieved adequate discrimination (an AUROC of > 0.7) in the entire cohort (of patients treated operatively or with radiation alone) and in the subgroup of patients treated with palliative radiotherapy alone. SORG-MLA outperformed METSSS by a wide margin on discrimination, calibration, and Brier score analyses in not only the entire cohort but also the subgroup of patients whose local treatment consisted of radiotherapy alone. In both the entire cohort and the subgroup, DCA demonstrated that SORG-MLA provided more net benefit compared with the two default strategies (of treating all or no patients) and compared with METSSS when risk thresholds ranged from 0.2 to 0.9 at both 90 days and 1 year, indicating that using SORG-MLA as a decision-making aid was beneficial when a patient's individualized risk threshold for opting for treatment was 0.2 to 0.9. Higher albumin, lower alkaline phosphatase, lower calcium, higher hemoglobin, lower international normalized ratio, higher lymphocytes, lower neutrophils, lower neutrophil-to-lymphocyte ratio, lower platelet-to-lymphocyte ratio, higher sodium, and lower white blood cells were independently associated with better 1-year and overall survival after adjusting for the predictions made by METSSS.
Based on these findings, clinicians might choose to consult SORG-MLA instead of METSSS for survival estimation in patients with long-bone metastases presenting for evaluation of local treatment. Basing a treatment decision on the predictions of SORG-MLA could be beneficial when a patient's individualized risk threshold for opting to undergo a particular treatment strategy ranges from 0.2 to 0.9. Future studies might investigate relevant laboratory items when constructing or refining a survival estimation model, because these data demonstrated prognostic value independent of the predictions of the METSSS model, and future studies might also seek to keep these models up to date using data from diverse, contemporary patients undergoing both modern operative and nonoperative treatments.
Level III, diagnostic study.
Lee CC, Chen CW, Yen HK, Lin YP, Lai CY, Wang JL, Groot OQ, Janssen SJ, Schwab JH, Hsu FM, Lin WH, ...
《-》
Extracorporeal cardiopulmonary resuscitation outcomes in pre-Glenn single ventricle infants: Analysis of a ten-year dataset.
While several studies have reported on outcomes of extracorporeal membrane oxygenation (ECMO) in patients with single ventricle physiology, few studies have described outcomes of extracorporeal cardiopulmonary resuscitation (ECPR) in this unique population. The objective of this study was to determine survival and risk factors for mortality after ECPR in single ventricle patients prior to superior cavopulmonary anastomosis, using a large sample from the Extracorporeal Life Support Organization (ELSO) Registry.
We included single ventricle patients who underwent ECPR for in-hospital cardiac arrest (IHCA) between January 2012 and December 2021. We excluded patients who had undergone a superior cavopulmonary anastomosis, inferior cavopulmonary anastomosis, or who were older than 180 days at the time of ECPR. We collected data on mortality, ECMO course and ECMO complications. Subjects who survived to hospital discharge after ECPR were compared to subjects who did not survive to hospital discharge. We then performed univariate logistic regression followed by multivariable logistic regression analysis for associations with survival to hospital discharge.
There were 420 subjects included who had index ECPR events. Median age was 14 days (IQR 7 to 44) and median weight was 3.14 kg (IQR 2.8 to 3.8). Hypoplastic left heart syndrome was the most common diagnosis (354/420; 84.2%), and 47.4% of the cohort (199/420) had undergone a Norwood operation. Survival to hospital discharge occurred in 159/420 (37.9%) of subjects. Median number of hours on ECMO (122 vs. 93 h; p < 0.001), presence of seizures on electroencephalography (24% vs. 15%; p = 0.033), and need for renal replacement therapy (45% vs. 34%; p = 0.023) were significantly higher among non-survivors than among survivors. In the subgroup of Norwood patients, survival after ECPR was 43.2%. A pre-arrest Norwood operation was present in 54% of ECPR survivors in the overall cohort, compared with 43% of non-survivors (p = 0.032). In a multivariable logistic regression model testing associations with survival to discharge, number of ECMO hours and presence of seizures were associated with decreased odds of survival to hospital discharge [adjusted odds ratios 0.95 (95% CI 0.92-0.98) and 0.57 (95% CI 0.33-0.97), respectively]. The odds ratio for ECMO hours corresponds to a 5% decrease in the odds of survival for every 12 h on ECMO. Presence of a Norwood operation pre-arrest was associated with increased odds of survival [adjusted odds ratio 1.53 (95% CI 1.01-2.32)].
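As a back-of-envelope illustration (not taken from the paper's model), the reported adjusted odds ratio of 0.95 for ECMO hours, framed as a 5% decrease in the odds of survival per 12 hours, can be compounded over longer durations as follows; the assumption that the odds ratio applies per 12-hour increment comes from the abstract's phrasing.

```python
# Compounding a per-12-hour odds ratio over longer ECMO durations (illustrative only)
or_per_12h = 0.95  # reported adjusted odds ratio, interpreted per 12 h on ECMO

for extra_hours in (12, 24, 122 - 93):  # 122 vs. 93 h is the reported median difference
    multiplier = or_per_12h ** (extra_hours / 12)
    print(f"{extra_hours} extra hours on ECMO -> odds of survival multiplied by {multiplier:.3f}")
# e.g., 29 extra hours: 0.95 ** (29 / 12) ≈ 0.88, i.e., roughly a 12% reduction in the odds
```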
In our cohort of pre-Glenn single ventricle infants, survival after ECPR for in-hospital cardiac arrest was 37.9%. Number of hours on ECMO and seizures post-ECMO cannulation were associated with decreased odds of survival. Single ventricle infants who had undergone Norwood palliation pre-arrest were more likely to survive to hospital discharge.
Esangbedo I, Brogan T, Chan T, Tjoeng YL, Brown M, McMullan DM, ...
《-》
The Design of an Adaptive Clinical Trial to Evaluate the Efficacy of Extra-Corporeal Membrane Oxygenation for Out-of-Hospital Cardiac Arrest.
Extracorporeal cardiopulmonary resuscitation (ECPR) is a promising therapy for out-of-hospital cardiac arrest (OHCA) that is refractory to standard therapy, but no multicenter randomized clinical trials have been conducted to establish its efficacy. We report the design and operating characteristics of a proposed randomized Bayesian adaptive "enrichment" clinical trial designed to determine whether ECPR is effective for refractory OHCA and, if effective, to define the interval after arrest during which patients derive benefit.
Through iterative trial simulation and trial design modification, we developed a Bayesian adaptive trial of ECPR for adults who experience non-traumatic out-of-hospital cardiac arrest. Our proposed trial design addresses the threats to trial success identified during the design process, which were (1) the uncertainty surrounding the cardiac arrest (CA)-to-ECPR interval within which clinical benefit might be preserved and (2) the difference in prognosis between patients with an initial rhythm that is non-shockable vs. shockable. Trial subjects will be randomized 1:1 to receive either standard care or expedited transport to a hospital for potential ECPR. The CA-to-ECPR interval will be estimated in real time as the sum of the estimated paramedic response time (911 call to scene arrival), paramedic scene time, and transport time to hospital. A Bayesian decreasing step function will be used to estimate the efficacy of the treatment, with the outcome being the 90-day utility-weighted modified Rankin Scale (uwmRS), for each rhythm subgroup and estimated CA-to-ECPR interval at pre-specified interims. The trial will adaptively lengthen the estimated CA-to-ECPR eligibility window if the treatment appears effective at the upper limit of the initial eligibility window. If ECPR appears ineffective at longer estimated CA-to-ECPR intervals, the upper limit of the window for enrollment eligibility will be shortened. The analysis will be stratified by rhythm subgroup.
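A brief sketch of the real-time estimate and adaptive window described above, under stated assumptions: the interval estimate is simply the sum of the three pre-hospital components, and the window-update rule shown is a simplified stand-in for the trial's pre-specified Bayesian interim analyses; the function names and the 5-minute step are placeholders for exposition.

```python
# Illustrative sketch only; not the trial's actual Bayesian decision model.

def estimated_ca_to_ecpr_minutes(response_min: float, scene_min: float, transport_min: float) -> float:
    """Estimated CA-to-ECPR interval: paramedic response time (911 call to scene
    arrival) + paramedic scene time + transport time to hospital."""
    return response_min + scene_min + transport_min

def update_eligibility_window(upper_limit_min: float, effective_at_upper: bool,
                              step_min: float = 5.0) -> float:
    """Lengthen the eligibility window if ECPR still appears effective at its upper
    limit; shorten it if it appears ineffective at longer intervals (simplified rule)."""
    return upper_limit_min + step_min if effective_at_upper else upper_limit_min - step_min

# Example: 8 min response + 12 min scene + 15 min transport -> 35 min estimated interval
interval = estimated_ca_to_ecpr_minutes(8, 12, 15)
new_upper = update_eligibility_window(upper_limit_min=30, effective_at_upper=True)
print(f"Estimated CA-to-ECPR interval: {interval} min; updated eligibility upper limit: {new_upper} min")
```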
With a maximum total sample size of 400 and a cap of 300 on the sample size for the non-shockable rhythm subgroup, the trial design has power ranging from 91% to 100% to detect a benefit from ECPR for non-shockable rhythms under the various efficacy scenarios simulated, and power ranging from 69% to 98% for shockable rhythms under the same scenarios. The trial design also has a high probability of correctly identifying the maximum CA-to-ECPR interval within which ECPR produces a clinically significant benefit of 0.2 on the uwmRS. If ECPR is equivalent to standard CA care, the type I error is 2.5%, with a 99% probability of stopping enrollment early for futility in the non-shockable subgroup and a 97% probability of stopping enrollment early for futility in the shockable subgroup.
This proposed adaptive trial design helps ensure that the population of patients most likely to benefit from treatment, as defined both by rhythm subgroup and by estimated CA-to-ECPR interval, is enrolled. The design promotes early termination of the trial if continuation is likely to be futile.
Tolles J, Kidwell KM, Broglio K, Graves T, Meurer W, Lewis RJ, Neumar RW, ...
《-》