-
Impact of residual disease as a prognostic factor for survival in women with advanced epithelial ovarian cancer after primary surgery.
Ovarian cancer is the seventh most common cancer among women and a leading cause of death from gynaecological malignancies. Epithelial ovarian cancer is the most common type, accounting for around 90% of all ovarian cancers. This specific type of ovarian cancer starts in the surface layer covering the ovary or lining of the fallopian tube. Surgery is performed either before chemotherapy (upfront or primary debulking surgery (PDS)) or in the middle of a course of treatment with chemotherapy (neoadjuvant chemotherapy (NACT) and interval debulking surgery (IDS)), with the aim of removing all visible tumour and achieving no macroscopic residual disease (NMRD). The aim of this review is to investigate the prognostic impact of size of residual disease nodules (RD) in women who received upfront or interval cytoreductive surgery for advanced (stage III and IV) epithelial ovarian cancer (EOC).
To assess the prognostic impact of residual disease after primary surgery on survival outcomes for advanced (stage III and IV) epithelial ovarian cancer. In separate analyses, primary surgery included both upfront primary debulking surgery (PDS) followed by adjuvant chemotherapy and neoadjuvant chemotherapy followed by interval debulking surgery (IDS). Each residual disease threshold is considered as a separate prognostic factor.
We searched CENTRAL (2021, Issue 8), MEDLINE via Ovid (to 30 August 2021) and Embase via Ovid (to 30 August 2021).
We included survival data from studies of at least 100 women with advanced EOC after primary surgery. Residual disease was assessed as a prognostic factor in multivariate prognostic models. We excluded studies that reported on fewer than 100 women, studies of women with concurrent malignancies, and studies that only reported unadjusted results. Women were divided into two distinct groups, analysed separately: those who received PDS followed by platinum-based chemotherapy and those who received IDS. We included studies that reported all RD thresholds after surgery, but the main thresholds of interest were microscopic RD (labelled NMRD), RD 0.1 cm to 1 cm (small-volume residual disease (SVRD)) and RD > 1 cm (large-volume residual disease (LVRD)).
Two review authors independently abstracted data and assessed risk of bias. Where possible, we synthesised the data in meta-analysis. To assess the adequacy of adjustment factors used in multivariate Cox models, we used the 'adjustment for other prognostic factors' and 'statistical analysis and reporting' domains of the quality in prognosis studies (QUIPS) tool. We also made judgements about the certainty of the evidence for each outcome in the main comparisons, using GRADE. We examined differences between FIGO stages III and IV for different thresholds of RD after primary surgery. We considered factors such as age, grade, length of follow-up, type and experience of surgeon, and type of surgery in the interpretation of any heterogeneity. We also performed sensitivity analyses that distinguished between studies that included NMRD in RD categories of < 1 cm and those that did not. This was applicable to comparisons involving RD < 1 cm with the exception of RD < 1 cm versus NMRD. We evaluated women undergoing PDS and IDS in separate analyses.
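For readers less familiar with the method, the inverse-variance random-effects synthesis used to pool hazard ratios in reviews like this one can be sketched in a few lines of Python. This is a minimal illustration using invented study estimates, not data from the included studies:

```python
import math

def pool_hazard_ratios(hrs, ci_lows, ci_highs):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    Standard errors are recovered from the 95% CI width on the log scale.
    """
    y = [math.log(h) for h in hrs]                    # log hazard ratios
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
          for lo, hi in zip(ci_lows, ci_highs)]
    w = [1 / s ** 2 for s in se]                      # inverse-variance weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Between-study variance tau^2 (method of moments)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0

    w_re = [1 / (s ** 2 + tau2) for s in se]          # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    hr = math.exp(y_re)
    ci = (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re))
    return hr, ci, i2

# invented per-study HRs and 95% CIs, for illustration only
hr, ci, i2 = pool_hazard_ratios([2.1, 1.9, 2.2], [1.6, 1.4, 1.5], [2.8, 2.6, 3.2])
```

Because the invented estimates are homogeneous, Q falls below its degrees of freedom and I² is truncated to 0%, matching how heterogeneity statistics are reported in the analyses below.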
We found 46 studies reporting multivariate prognostic analyses, including RD as a prognostic factor, which met our inclusion criteria: 22,376 women who underwent PDS and 3697 who underwent IDS, all with varying levels of RD. While we identified a range of different RD thresholds, we mainly report on comparisons that are the focus of a key area of clinical uncertainty (involving NMRD, SVRD and LVRD). The comparison involving any visible disease (RD > 0 cm) and NMRD was also important.

SVRD versus NMRD in a PDS setting
In PDS studies, most showed an increased risk of death in all RD groups when those with macroscopic RD (MRD) were compared to NMRD. Women who had SVRD after PDS had more than twice the risk of death compared to women with NMRD (hazard ratio (HR) 2.03, 95% confidence interval (CI) 1.80 to 2.29; I2 = 50%; 17 studies; 9404 participants; moderate-certainty evidence). The analysis of progression-free survival found that women who had SVRD after PDS had nearly twice the risk of disease progression or death compared to women with NMRD (HR 1.88, 95% CI 1.63 to 2.16; I2 = 63%; 10 studies; 6596 participants; moderate-certainty evidence).

LVRD versus SVRD in a PDS setting
When we compared LVRD versus SVRD following surgery, the estimates were attenuated compared to the NMRD comparisons. All analyses showed an overall survival benefit in women who had RD < 1 cm after surgery (HR 1.22, 95% CI 1.13 to 1.32; I2 = 0%; 5 studies; 6000 participants; moderate-certainty evidence). The results were robust in analyses of progression-free survival.

SVRD and LVRD versus NMRD in an IDS setting
The one study that defined the categories as NMRD, SVRD and LVRD showed that women who had SVRD or LVRD after IDS had more than twice the risk of death compared to women who had NMRD (HR 2.09, 95% CI 1.20 to 3.66; 310 participants; I2 = 56%, and HR 2.23, 95% CI 1.49 to 3.34; 343 participants; I2 = 35%; very low-certainty evidence, for SVRD versus NMRD and LVRD versus NMRD, respectively).
LVRD versus SVRD + NMRD in an IDS setting
Meta-analysis found that women who had LVRD had a greater risk of death and disease progression compared to women who had either SVRD or NMRD (HR 1.60, 95% CI 1.21 to 2.11; 6 studies; 1572 participants; I2 = 58% for overall survival, and HR 1.76, 95% CI 1.23 to 2.52; 1145 participants; I2 = 60% for progression-free survival; very low-certainty evidence). However, this result should be interpreted with caution: only one study separated NMRD from SVRD, while all others included NMRD in the SVRD group, which may bias the comparison with LVRD and makes interpretation challenging.

MRD versus NMRD in an IDS setting
Women who had any amount of MRD after IDS had more than twice the risk of death compared to women with NMRD (HR 2.11, 95% CI 1.35 to 3.29; I2 = 81%; 906 participants; very low-certainty evidence).
In a PDS setting, there is moderate-certainty evidence that the amount of RD after primary surgery is a prognostic factor for overall and progression-free survival in women with advanced ovarian cancer. We separated our analysis into three distinct categories for the survival outcome: NMRD, SVRD and LVRD. After IDS, only two categories may be required, although this is based on very low-certainty evidence, as all but one study included NMRD in the SVRD category. The one study that separated NMRD from SVRD showed no improved survival outcome in the SVRD category compared to LVRD. Further low-certainty evidence also supported restricting to two categories, as women who had any amount of MRD after IDS had a significantly greater risk of death compared to women with NMRD. Therefore, the evidence presented in this review does not support the use of three categories in an IDS setting (very low-certainty evidence), in contrast to the PDS setting, where this is supported by moderate-certainty evidence.
Bryant A
,Hiu S
,Kunonga PT
,Gajjar K
,Craig D
,Vale L
,Winter-Roach BA
,Elattar A
,Naik R
... -
《Cochrane Database of Systematic Reviews》
-
Falls prevention interventions for community-dwelling older adults: systematic review and meta-analysis of benefits, harms, and patient values and preferences.
About 20-30% of older adults (≥ 65 years old) experience one or more falls each year, and falls are associated with substantial burden to the health care system, individuals, and families from resulting injuries, fractures, and reduced functioning and quality of life. Many interventions for preventing falls have been studied, and their effectiveness, factors relevant to their implementation, and patient preferences may determine which interventions to use in primary care. The aim of this set of reviews was to inform recommendations by the Canadian Task Force on Preventive Health Care (task force) on fall prevention interventions. We undertook three systematic reviews to address questions about the following: (i) the benefits and harms of interventions, (ii) how patients weigh the potential outcomes (outcome valuation), and (iii) patient preferences for different types of interventions, and their attributes, shown to offer benefit (intervention preferences).
We searched four databases for benefits and harms (MEDLINE, Embase, AgeLine, CENTRAL, to August 25, 2023) and three for outcome valuation and intervention preferences (MEDLINE, PsycINFO, CINAHL, to June 9, 2023). For benefits and harms, we relied heavily on a previous review for studies published until 2016. We also searched trial registries, references of included studies, and recent reviews. Two reviewers independently screened studies. The population of interest was community-dwelling adults ≥ 65 years old. We did not limit eligibility by participant fall history. The task force rated several outcomes, decided on their eligibility, and provided input on the effect thresholds to apply for each outcome (fallers, falls, injurious fallers, fractures, hip fractures, functional status, health-related quality of life, long-term care admissions, adverse effects, serious adverse effects). For benefits and harms, we included a broad range of non-pharmacological interventions relevant to primary care. Although usual care was the main comparator of interest, we included studies comparing interventions head-to-head and conducted a network meta-analysis (NMA) for each outcome, enabling analysis of interventions lacking direct comparisons to usual care. For benefits and harms, we included randomized controlled trials with a minimum 3-month follow-up and reporting on one of our fall outcomes (fallers, falls, injurious fallers); for the other questions, we preferred quantitative data but considered qualitative findings to fill gaps in evidence. No date limits were applied for benefits and harms, whereas for outcome valuation and intervention preferences we included studies published in 2000 or later. All data were extracted by one trained reviewer and verified for accuracy and completeness.
For benefits and harms, we relied on the previous review team's risk-of-bias assessments for benefit outcomes, but otherwise, two reviewers independently assessed the risk of bias (within and across study). For the other questions, one reviewer verified another's assessments. Consensus was used, with adjudication by a lead author when necessary. A coding framework, modified from the ProFANE taxonomy, classified interventions and their attributes (e.g., supervision, delivery format, duration/intensity). For benefit outcomes, we employed random-effects NMA using a frequentist approach and a consistency model. Transitivity and coherence were assessed using meta-regressions and global and local coherence tests, as well as through graphical display and descriptive data on the composition of the nodes with respect to major pre-planned effect modifiers. We assessed heterogeneity using prediction intervals. For intervention-related adverse effects, we pooled proportions except for vitamin D for which we considered data in the control groups and undertook random-effects pairwise meta-analysis using a relative risk (any adverse effects) or risk difference (serious adverse effects). For outcome valuation, we pooled disutilities (representing the impact of a negative event, e.g. fall, on one's usual quality of life, with 0 = no impact and 1 = death and ~ 0.05 indicating important disutility) from the EQ-5D utility measurement using the inverse variance method and a random-effects model and explored heterogeneity. When studies only reported other data, we compared the findings with our main analysis. For intervention preferences, we used a coding schema identifying whether there were strong, clear, no, or variable preferences within, and then across, studies. We assessed the certainty of evidence for each outcome using CINeMA for benefit outcomes and GRADE for all other outcomes.
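The random-effects pooling of disutilities and the use of prediction intervals to express heterogeneity, as described above, can be sketched briefly in Python. The per-study disutility estimates and standard errors below are invented for illustration, not taken from the included studies:

```python
import math

# invented per-study disutility estimates and standard errors (illustration only)
studies = [(0.55, 0.06), (0.50, 0.05), (0.58, 0.07), (0.49, 0.08)]

y = [d for d, _ in studies]
se = [s for _, s in studies]
w = [1 / s ** 2 for s in se]                 # inverse-variance weights
mu_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# DerSimonian-Laird between-study variance tau^2
q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# random-effects pooled disutility and its standard error
w_re = [1 / (s ** 2 + tau2) for s in se]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_mu = math.sqrt(1 / sum(w_re))

# 95% prediction interval for the disutility in a new setting
# (t quantile with k - 2 df; 4.303 is the 0.975 quantile for df = 2)
half_width = 4.303 * math.sqrt(tau2 + se_mu ** 2)
pi = (mu - half_width, mu + half_width)
```

The prediction interval widens with the between-study variance tau², so it conveys where the disutility in a new population might plausibly lie rather than only the precision of the pooled mean.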
A total of 290 studies were included across the reviews, with two studies included in multiple questions. For benefits and harms, we included 219 trials reporting on 167,864 participants and created 59 interventions (nodes). Transitivity and coherence were assessed as adequate. Across eight NMAs, the number of contributing trials ranged between 19 and 173, and the number of interventions ranged from 19 to 57. Approximately half of the interventions in each network had at least low certainty for benefit. The fallers outcome had the highest number of interventions with moderate certainty for benefit (18/57). For the non-fall outcomes (fractures, hip fracture, long-term care [LTC] admission, functional status, health-related quality of life), many interventions had very low-certainty evidence, often from lack of data. We prioritized findings from 21 interventions where there was moderate certainty for at least some benefit. Fourteen of these had a focus on exercise, the majority being supervised (for > 2 sessions) and of long duration (> 3 months), with balance/resistance and group Tai Chi interventions generally having the most outcomes with at least low certainty for benefit. None of the interventions with moderate-certainty evidence focused on walking. Whole-body vibration or home-hazard assessment (HHA) plus exercise provided to everyone showed moderate certainty for some benefit. No multifactorial intervention alone showed moderate certainty for any benefit. Six interventions had only very low-certainty evidence for the benefit outcomes. Two interventions had moderate certainty of harmful effects for at least one benefit outcome, though the populations across studies were at high risk for falls. Vitamin D and most single-component exercise interventions are probably associated with minimal adverse effects. Some uncertainty exists about possible adverse effects from other interventions.
For outcome valuation, we included 44 studies of which 34 reported EQ-5D disutilities. Admission to long-term care had the highest disutility (1.0), but the evidence was rated as low certainty. Both fall-related hip (moderate certainty) and non-hip (low certainty) fracture may result in substantial disutility (0.53 and 0.57) in the first 3 months after injury. Disutility for both hip and non-hip fractures is probably lower 12 months after injury (0.16 and 0.19, with high and moderate certainty, respectively) compared to within the first 3 months. No study measured the disutility of an injurious fall. Fractures are probably more important than either falls (0.09 over 12 months) or functional status (0.12). Functional status may be somewhat more important than falls. For intervention preferences, 29 studies (9 qualitative) reported on 17 comparisons among single-component interventions showing benefit. Exercise interventions focusing on balance and/or resistance training appear to be clearly preferred over Tai Chi and other forms of exercise (e.g., yoga, aerobic). For exercise programs in general, there is probably variability among people in whether they prefer group or individual delivery, though there was high certainty that individual was preferred over group delivery of balance/resistance programs. Balance/resistance exercise may be preferred over education, though the evidence was low certainty. There was low certainty for a slight preference for education over cognitive-behavioral therapy, and group education may be preferred over individual education.
To prevent falls among community-dwelling older adults, evidence is most certain for benefit, at least over 1-2 years, from supervised, long-duration balance/resistance and group Tai Chi interventions, whole-body vibration, high-intensity/dose education or cognitive-behavioral therapy, and interventions of comprehensive multifactorial assessment with targeted treatment plus HHA, HHA plus exercise, or education provided to everyone. Adding other interventions to exercise does not appear to substantially increase benefits. Overall, effects appear most applicable to those with elevated fall risk. Choice among effective, available interventions may best depend on individual patient preferences, though when implementing new balance/resistance programs, delivering individual rather than group sessions, where feasible, may be most acceptable. Data on more patient-important outcomes, including fall-related fractures and adverse effects, would be beneficial, as would studies focusing on equity-deserving populations and on programs delivered virtually.
Not registered.
Pillay J
,Gaudet LA
,Saba S
,Vandermeer B
,Ashiq AR
,Wingert A
,Hartling L
... -
《Systematic Reviews》
-
Defining the optimum strategy for identifying adults and children with coeliac disease: systematic review and economic modelling.
Elwenspoek MM
,Thom H
,Sheppard AL
,Keeney E
,O'Donnell R
,Jackson J
,Roadevin C
,Dawson S
,Lane D
,Stubbs J
,Everitt H
,Watson JC
,Hay AD
,Gillett P
,Robins G
,Jones HE
,Mallett S
,Whiting PF
... -
《-》
-
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Survival estimation for patients with symptomatic skeletal metastases ideally should be made before a type of local treatment has already been determined. Currently available survival prediction tools, however, were generated using data from patients treated either operatively or with local radiation alone, raising concerns about whether they would generalize well to all patients presenting for assessment. The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA), trained with institution-based data of surgically treated patients, and the Metastases location, Elderly, Tumor primary, Sex, Sickness/comorbidity, and Site of radiotherapy model (METSSS), trained with registry-based data of patients treated with radiotherapy alone, are two of the most recently developed survival prediction models, but they have not been tested on patients whose local treatment strategy is not yet decided.
(1) Which of these two survival prediction models performed better in a mixed cohort made up both of patients who received local treatment with surgery followed by radiotherapy and who had radiation alone for symptomatic bone metastases? (2) Which model performed better among patients whose local treatment consisted of only palliative radiotherapy? (3) Are laboratory values used by SORG-MLA, which are not included in METSSS, independently associated with survival after controlling for predictions made by METSSS?
Between 2010 and 2018, we provided local treatment for 2113 adult patients with skeletal metastases in the extremities at an urban tertiary referral academic medical center using one of two strategies: (1) surgery followed by postoperative radiotherapy or (2) palliative radiotherapy alone. Every patient's survivorship status was ascertained either by their medical records or the national death registry from the Taiwanese National Health Insurance Administration. After applying a priori designated exclusion criteria, 91% (1920) were analyzed here. Among them, 48% (920) of the patients were female, and the median (IQR) age was 62 years (53 to 70 years). Lung was the most common primary tumor site (41% [782]), and 59% (1128) of patients had other skeletal metastases in addition to the treated lesion(s). In general, the indications for surgery were the presence of a complete pathologic fracture or an impending pathologic fracture, defined as having a Mirels score of ≥ 9, in patients with an American Society of Anesthesiologists (ASA) classification of less than or equal to IV and who were considered fit for surgery. The indications for radiotherapy were relief of pain, local tumor control, prevention of skeletal-related events, and any combination of the above. In all, 84% (1610) of the patients received palliative radiotherapy alone as local treatment for the target lesion(s), and 16% (310) underwent surgery followed by postoperative radiotherapy. Neither METSSS nor SORG-MLA was used at the point of care to aid clinical decision-making during the treatment period. Survival was retrospectively estimated by these two models to test their potential for providing survival probabilities. We first compared SORG-MLA with METSSS in the entire population. Then, we repeated the comparison in patients who received local treatment with palliative radiation alone.
We assessed model performance by area under the receiver operating characteristic curve (AUROC), calibration analysis, Brier score, and decision curve analysis (DCA). The AUROC measures discrimination, which is the ability to distinguish patients with the event of interest (such as death at a particular time point) from those without. AUROC typically ranges from 0.5 to 1.0, with 0.5 indicating random guessing and 1.0 a perfect prediction, and in general, an AUROC of ≥ 0.7 indicates adequate discrimination for clinical use. Calibration refers to the agreement between the predicted outcomes (in this case, survival probabilities) and the actual outcomes, with a perfect calibration curve having an intercept of 0 and a slope of 1. A positive intercept indicates that the actual survival is generally underestimated by the prediction model, and a negative intercept suggests the opposite (overestimation). When comparing models, an intercept closer to 0 typically indicates better calibration. Calibration can also be summarized as log(O:E), the logarithm scale of the ratio of observed (O) to expected (E) survivors. A log(O:E) > 0 signals an underestimation (the observed survival is greater than the predicted survival), and a log(O:E) < 0 indicates the opposite (the observed survival is lower than the predicted survival). A model with a log(O:E) closer to 0 is generally considered better calibrated. The Brier score is the mean squared difference between the model predictions and the observed outcomes, and it ranges from 0 (best prediction) to 1 (worst prediction). The Brier score captures both discrimination and calibration, and it is considered a measure of overall model performance. In Brier score analysis, the "null model" assigns a predicted probability equal to the prevalence of the outcome and represents a model that adds no new information. A prediction model should achieve a Brier score at least lower than the null-model Brier score to be considered useful.
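The Brier score and log(O:E) calibration summary defined above are straightforward to compute. The Python sketch below uses invented predictions and outcomes purely for illustration:

```python
import math

def brier_score(pred, outcome):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(pred, outcome)) / len(pred)

def log_oe(pred_survival, survived):
    """log of the observed:expected survivor ratio; 0 means perfect
    calibration-in-the-large, > 0 means survival was underestimated."""
    return math.log(sum(survived) / sum(pred_survival))

# invented predicted 1-year survival probabilities and observed status (1 = alive)
pred = [0.8, 0.6, 0.3, 0.9, 0.2]
alive = [1, 1, 0, 1, 0]

bs = brier_score(pred, alive)
# the null model predicts the prevalence of the outcome for everyone
null_bs = brier_score([sum(alive) / len(alive)] * len(alive), alive)
# a useful model should achieve bs < null_bs
```

Here the model's Brier score beats the null model's, and log(O:E) comes out slightly above 0, i.e. survival was marginally underestimated on these invented data.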
The DCA was developed as a method to determine whether using a model to inform treatment decisions would do more good than harm. It plots the net benefit of making decisions based on the model's predictions across all possible risk thresholds (or cost-to-benefit ratios) in relation to the two default strategies of treating all or no patients. The care provider can decide on an acceptable risk threshold for the proposed treatment in an individual and assess the corresponding net benefit to determine whether consulting with the model is superior to adopting the default strategies. Finally, we examined whether laboratory data, which were not included in the METSSS model, would have been independently associated with survival after controlling for the METSSS model's predictions by using the multivariable logistic and Cox proportional hazards regression analyses.
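The net benefit calculation underlying the DCA can be sketched as follows; the standard formula is net benefit = TP/n - FP/n * (pt / (1 - pt)), where pt is the risk threshold. The risk estimates, outcomes, and 0.5 threshold below are invented for illustration:

```python
def net_benefit(pred_risk, outcome, threshold):
    """Net benefit of treating patients whose predicted risk is >= threshold:
    TP/n - FP/n * (pt / (1 - pt))."""
    n = len(pred_risk)
    tp = sum(1 for p, o in zip(pred_risk, outcome) if p >= threshold and o == 1)
    fp = sum(1 for p, o in zip(pred_risk, outcome) if p >= threshold and o == 0)
    return tp / n - fp / n * (threshold / (1 - threshold))

def net_benefit_treat_all(outcome, threshold):
    """Default strategy of treating every patient, regardless of predicted risk."""
    prevalence = sum(outcome) / len(outcome)
    return prevalence - (1 - prevalence) * (threshold / (1 - threshold))

# invented predicted 90-day mortality risks and observed deaths (1 = died)
risk = [0.9, 0.7, 0.4, 0.2, 0.1]
died = [1, 1, 0, 0, 0]

nb_model = net_benefit(risk, died, 0.5)    # model-guided strategy
nb_all = net_benefit_treat_all(died, 0.5)  # treat-all default
nb_none = 0.0                              # treat-none default
```

Plotting these quantities across all thresholds, rather than at a single threshold as here, yields the decision curve: a model is useful at a given threshold only if its net benefit exceeds both default strategies.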
Between the two models, only SORG-MLA achieved adequate discrimination (an AUROC of > 0.7) in the entire cohort (of patients treated operatively or with radiation alone) and in the subgroup of patients treated with palliative radiotherapy alone. SORG-MLA outperformed METSSS by a wide margin on discrimination, calibration, and Brier score analyses in not only the entire cohort but also the subgroup of patients whose local treatment consisted of radiotherapy alone. In both the entire cohort and the subgroup, DCA demonstrated that SORG-MLA provided more net benefit compared with the two default strategies (of treating all or no patients) and compared with METSSS when risk thresholds ranged from 0.2 to 0.9 at both 90 days and 1 year, indicating that using SORG-MLA as a decision-making aid was beneficial when a patient's individualized risk threshold for opting for treatment was 0.2 to 0.9. Higher albumin, lower alkaline phosphatase, lower calcium, higher hemoglobin, lower international normalized ratio, higher lymphocytes, lower neutrophils, lower neutrophil-to-lymphocyte ratio, lower platelet-to-lymphocyte ratio, higher sodium, and lower white blood cells were independently associated with better 1-year and overall survival after adjusting for the predictions made by METSSS.
Based on these findings, clinicians might choose to consult SORG-MLA instead of METSSS for survival estimation in patients with long-bone metastases presenting for evaluation of local treatment. Basing a treatment decision on the predictions of SORG-MLA could be beneficial when a patient's individualized risk threshold for opting to undergo a particular treatment strategy ranged from 0.2 to 0.9. Future studies might investigate relevant laboratory items when constructing or refining a survival estimation model because these data demonstrated prognostic value independent of the predictions of the METSSS model, and future studies might also seek to keep these models up to date using data from diverse, contemporary patients undergoing both modern operative and nonoperative treatments.
Level III, diagnostic study.
Lee CC
,Chen CW
,Yen HK
,Lin YP
,Lai CY
,Wang JL
,Groot OQ
,Janssen SJ
,Schwab JH
,Hsu FM
,Lin WH
... -
《-》
-
Interventions to prevent surgical site infection in adults undergoing cardiac surgery.
Surgical site infection (SSI) is a common type of hospital-acquired infection and affects up to a third of patients following surgical procedures. It is associated with significant mortality and morbidity. In the United Kingdom alone, it is estimated to add another £30 million to the cost of adult cardiac surgery. Although generic guidance for SSI prevention exists, this is not specific to adult cardiac surgery. Furthermore, many of the risk factors for SSI are prevalent within the cardiac surgery population. Despite this, there is currently no standard of care for SSI prevention in adults undergoing cardiac surgery across the preoperative, intraoperative and postoperative periods of care, and practice varies throughout, from risk stratification to decontamination strategies and surveillance.
Primary objective: to assess the clinical effectiveness of pre-, intra-, and postoperative interventions in the prevention of cardiac SSI.
Secondary objectives: (i) to evaluate the effects of SSI prevention interventions on morbidity, mortality, and resource use; (ii) to evaluate the effects of SSI prevention care bundles on morbidity, mortality, and resource use.
We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE (Ovid, from inception) and Embase (Ovid, from inception) on 31 May 2021. ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP) were also searched for ongoing or unpublished trials on 21 May 2021. No language restrictions were imposed.
We included RCTs evaluating interventions to reduce SSI in adults (≥ 18 years of age) who have undergone any cardiac surgery.
We followed the methods as per our published Cochrane protocol. Our primary outcome was surgical site infection. Our secondary outcomes were all-cause mortality, reoperation for SSI, hospital length of stay, hospital readmissions for SSI, healthcare costs and cost-effectiveness, quality of life (QoL), and adverse effects. We used the GRADE approach to assess the certainty of evidence.
A total of 118 studies involving 51,854 participants were included. Twenty-two interventions to reduce SSI in adults undergoing cardiac surgery were identified. The risk of bias was judged to be high in the majority of studies. There was heterogeneity in the study populations and interventions; consequently, meta-analysis was not appropriate for many of the comparisons and these are presented as narrative summaries. We focused our reporting of findings on four comparisons deemed to be of great clinical relevance by all review authors.

Decolonisation versus no decolonisation
Pooled data from three studies (n = 1564) using preoperative topical oral/nasal decontamination in all patients demonstrated an uncertain direction of treatment effect in relation to total SSI (RR 0.98, 95% CI 0.70 to 1.36; I2 = 0%; very low-certainty evidence). A single study reported that decolonisation likely results in little to no difference in superficial SSI (RR 1.35, 95% CI 0.84 to 2.15; moderate-certainty evidence) and a reduction in deep SSI (RR 0.36, 95% CI 0.17 to 0.77; high-certainty evidence). The evidence on all-cause mortality from three studies (n = 1564) is very uncertain (RR 0.66, 95% CI 0.24 to 1.84; I2 = 49%; very low-certainty evidence). A single study (n = 954) demonstrated that decolonisation may result in little to no difference in hospital readmission for SSI (RR 0.80, 95% CI 0.44 to 1.45; low-certainty evidence). A single study (n = 954) reported one case of temporary discolouration of teeth in the decolonisation arm (low-certainty evidence). Reoperation for SSI was not reported.

Tight glucose control versus standard glucose control
Pooled data from seven studies (n = 880) showed that tight glucose control may reduce total SSI, but the evidence is very uncertain (RR 0.41, 95% CI 0.19 to 0.85; I2 = 29%; number needed to treat to benefit (NNTB) = 13; very low-certainty evidence).
Pooled data from seven studies (n = 3334) showed tight glucose control may reduce all-cause mortality, but the evidence is very uncertain (RR 0.61, 95% CI 0.41 to 0.91; I2 = 0%; very low-certainty evidence). Based on four studies (n = 2793), there may be little to no difference in episodes of hypoglycaemia between tight and standard control, but the evidence is very uncertain (RR 2.12, 95% CI 0.51 to 8.76; I2 = 72%; very low-certainty evidence). No studies reported superficial/deep SSI, reoperation for SSI, or hospital readmission for SSI.

Negative pressure wound therapy (NPWT) versus standard dressings
NPWT was assessed in two studies (n = 144) and it may reduce total SSI, but the evidence is very uncertain (RR 0.17, 95% CI 0.03 to 0.97; I2 = 0%; NNTB = 10; very low-certainty evidence). A single study (n = 80) reported reoperation for SSI; the relative effect could not be estimated, and the certainty of evidence was judged to be very low. No studies reported superficial/deep SSI, all-cause mortality, hospital readmission for SSI, or adverse effects.

Topical antimicrobials versus no topical antimicrobials
Five studies (n = 5382) evaluated topical gentamicin sponge, which may reduce total SSI (RR 0.62, 95% CI 0.46 to 0.84; I2 = 48%; NNTB = 32), superficial SSI (RR 0.60, 95% CI 0.37 to 0.98; I2 = 69%), and deep SSI (RR 0.67, 95% CI 0.47 to 0.96; I2 = 5%) (low-certainty evidence). Four studies (n = 4662) demonstrated that topical gentamicin sponge may result in little to no difference in all-cause mortality, but the evidence is very uncertain (RR 0.96, 95% CI 0.65 to 1.42; I2 = 0%; very low-certainty evidence). Reoperation for SSI, hospital readmission for SSI, and adverse effects were not reported in any included studies.
This review provides the broadest and most recent review of the current evidence base for interventions to reduce SSI in adults undergoing cardiac surgery. Twenty-one interventions were identified across the perioperative period. Evidence is of low to very low certainty primarily due to significant heterogeneity in how interventions were implemented and the definitions of SSI used. Knowledge gaps have been identified across a number of practices that should represent key areas for future research. Efforts to standardise SSI outcome reporting are warranted.
Cardiothoracic Interdisciplinary Research Network
,Rogers LJ
,Vaja R
,Bleetman D
,Ali JM
,Rochon M
,Sanders J
,Tanner J
,Lamagni TL
,Talukder S
,Quijano-Campos JC
,Lai F
,Loubani M
,Murphy GJ
... -
《Cochrane Database of Systematic Reviews》