-
Toe-brachial index and toe systolic blood pressure for the diagnosis of peripheral arterial disease.
Peripheral arterial disease (PAD) of the lower limbs is caused by atherosclerotic occlusive disease in which narrowing of arteries reduces blood flow to the lower limbs. PAD is common; it is estimated to affect 236 million individuals worldwide. Advanced age, smoking, hypertension, diabetes and concomitant cardiovascular disease are common factors associated with increased risk of PAD. Complications of PAD can include claudication pain, rest pain, wounds, gangrene, amputation and increased cardiovascular morbidity and mortality. It is therefore clinically important to use diagnostic tests that accurately identify PAD. Accurate and timely detection of PAD allows clinicians to implement appropriate risk management strategies to prevent complications, slow progression or intervene when indicated. Toe-brachial index (TBI) and toe systolic blood pressure (TSBP) are amongst a suite of non-invasive bedside tests used to detect PAD. Both TBI and TSBP are commonly utilised by a variety of clinicians in different settings; therefore, a systematic review and meta-analysis of their diagnostic accuracy is warranted and highly relevant to inform clinical practice.
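The TBI itself is a simple ratio of toe systolic pressure to brachial systolic pressure. A minimal sketch follows; the pressures and the 0.70 positivity cut-off are illustrative values only, not figures taken from this review:

```python
def toe_brachial_index(toe_systolic_mmhg: float,
                       left_brachial_mmhg: float,
                       right_brachial_mmhg: float) -> float:
    """TBI = toe systolic pressure / higher of the two brachial systolic pressures."""
    highest_brachial = max(left_brachial_mmhg, right_brachial_mmhg)
    return toe_systolic_mmhg / highest_brachial

# Hypothetical patient: toe pressure 60 mmHg, brachial pressures 120 and 125 mmHg.
tbi = toe_brachial_index(60, 120, 125)
print(round(tbi, 2))  # 0.48 -> below a commonly cited 0.70 cut-off, so test positive
```

Because the review examines accuracy at several cut-off values for test positivity, the threshold would be a parameter in practice rather than a constant.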
To (1) estimate the accuracy of TSBP and TBI for the diagnosis of PAD in the lower extremities at different cut-off values for test positivity in populations at risk of PAD, and (2) compare the accuracy of TBI and TSBP for the diagnosis of PAD in the lower extremities. Secondary objectives were to investigate several possible sources of heterogeneity in test accuracy, including the following: patient group tested (people with type 1 or type 2 diabetes, people with renal disease and general population), type of equipment used, positivity threshold and type of reference standard.
The Cochrane Vascular Information Specialist searched the MEDLINE, Embase, CINAHL, Web of Science, LILACS, Zetoc and DARE databases and the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 27 February 2024.
We included diagnostic case-control, cross-sectional, prospective and retrospective studies in which all participants had either a TSBP or TBI measurement plus a validated method of vascular diagnostic imaging for PAD. We needed to be able to cross-tabulate (2 x 2 table) results of the index test and the reference standard to include a study. To be included, study populations had to be adults aged 18 years and over. We included studies of symptomatic and asymptomatic participants. Studies had to use TSBP and TBI (also called toe-brachial pressure index (TBPI)), either individually, or in addition to other non-invasive tests as index tests to diagnose PAD in individuals with suspected disease. We included data collected by photoplethysmography, laser Doppler, continuous wave Doppler, sphygmomanometers (both manual and aneroid) and manual or automated digital equipment.
Two review authors independently completed data extraction using a standardised form. We extracted data to populate 2 x 2 contingency tables when available (true positives, true negatives, false positives, false negatives). Where data were not available to enable statistical analysis, we contacted study authors directly. Two review authors working independently undertook quality assessment using QUADAS-2, with disagreements resolved by a third review author. We incorporated two additional questions into the quality appraisal to aid our understanding of the conduct of studies and make appropriate judgements about risk of bias and applicability.
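From each cross-tabulated 2 x 2 table, the familiar accuracy statistics follow directly. A minimal sketch with invented counts (not data from any included study):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity from a 2 x 2 table cross-tabulating
    the index test (TBI/TSBP) against the reference standard (imaging)."""
    sensitivity = tp / (tp + fn)  # proportion of disease-positive limbs the index test detects
    specificity = tn / (tn + fp)  # proportion of disease-negative limbs it correctly clears
    return sensitivity, specificity

# Hypothetical counts, for illustration only:
sens, spec = diagnostic_accuracy(tp=45, fp=8, fn=5, tn=92)
print(sens, spec)  # 0.9 0.92
```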
Eighteen studies met the inclusion criteria: 13 evaluated TBI only, one evaluated TSBP only and four evaluated both TBI and TSBP. Thirteen of the studies used colour duplex ultrasound (CDU) as a reference standard, two used computed tomography angiography (CTA), one used multi-detector row computed tomography (MDCT), one used angiography and one used a combination of CDU, CTA and angiography. TBI was investigated in 1927 participants (2550 limbs) and TSBP in 701 participants (701 limbs). Studies were generally of low methodological quality, with poor reporting of whether participant recruitment was consecutive or random, of blinding between the index test and reference standard, and of the timing between the index test and reference standard. The certainty of evidence according to GRADE was very low for most studies.
Whilst a small number of diagnostic test accuracy studies have been completed for TBI and TSBP to identify PAD, the overall methodological quality was low, with most studies providing a very low certainty of evidence. The evidence base to support the use of TBI and TSBP to identify PAD is therefore limited. Whilst both TBI and TSBP are used extensively clinically, the overall diagnostic performance of these tests remains uncertain. Future research using robust methods and clear reporting is warranted to comprehensively determine the diagnostic test accuracy of the TBI and TSBP for identification of PAD with greater certainty. However, conducting such research where some of the reference tests are invasive and only clinically indicated in populations with known PAD is challenging.
Tehan PE, Mills J, Leask S, Oldmeadow C, Peterson B, Sebastian M, Chuter V
... -
《Cochrane Database of Systematic Reviews》
-
Falls prevention interventions for community-dwelling older adults: systematic review and meta-analysis of benefits, harms, and patient values and preferences.
About 20-30% of older adults (≥ 65 years old) experience one or more falls each year, and falls are associated with substantial burden to the health care system, individuals, and families from resulting injuries, fractures, and reduced functioning and quality of life. Many interventions for preventing falls have been studied, and their effectiveness, factors relevant to their implementation, and patient preferences may determine which interventions to use in primary care. The aim of this set of reviews was to inform recommendations by the Canadian Task Force on Preventive Health Care (task force) on fall prevention interventions. We undertook three systematic reviews to address questions about the following: (i) the benefits and harms of interventions, (ii) how patients weigh the potential outcomes (outcome valuation), and (iii) patient preferences for different types of interventions, and their attributes, shown to offer benefit (intervention preferences).
We searched four databases for benefits and harms (MEDLINE, Embase, AgeLine, CENTRAL, to August 25, 2023) and three for outcome valuation and intervention preferences (MEDLINE, PsycINFO, CINAHL, to June 9, 2023). For benefits and harms, we relied heavily on a previous review for studies published until 2016. We also searched trial registries, references of included studies, and recent reviews. Two reviewers independently screened studies. The population of interest was community-dwelling adults ≥ 65 years old. We did not limit eligibility by participant fall history. The task force rated several outcomes, decided on their eligibility, and provided input on the effect thresholds to apply for each outcome (fallers, falls, injurious fallers, fractures, hip fractures, functional status, health-related quality of life, long-term care admissions, adverse effects, serious adverse effects). For benefits and harms, we included a broad range of non-pharmacological interventions relevant to primary care. Although usual care was the main comparator of interest, we included studies comparing interventions head-to-head and conducted a network meta-analysis (NMA) for each outcome, enabling analysis of interventions lacking direct comparisons to usual care. For benefits and harms, we included randomized controlled trials with a minimum 3-month follow-up that reported on one of our fall outcomes (fallers, falls, injurious fallers); for the other questions, we preferred quantitative data but considered qualitative findings to fill gaps in evidence. No date limits were applied for benefits and harms, whereas for outcome valuation and intervention preferences we included studies published in 2000 or later. All data were extracted by one trained reviewer and verified for accuracy and completeness.
For benefits and harms, we relied on the previous review team's risk-of-bias assessments for benefit outcomes, but otherwise, two reviewers independently assessed the risk of bias (within and across study). For the other questions, one reviewer verified another's assessments. Consensus was used, with adjudication by a lead author when necessary. A coding framework, modified from the ProFANE taxonomy, classified interventions and their attributes (e.g., supervision, delivery format, duration/intensity). For benefit outcomes, we employed random-effects NMA using a frequentist approach and a consistency model. Transitivity and coherence were assessed using meta-regressions and global and local coherence tests, as well as through graphical display and descriptive data on the composition of the nodes with respect to major pre-planned effect modifiers. We assessed heterogeneity using prediction intervals. For intervention-related adverse effects, we pooled proportions except for vitamin D for which we considered data in the control groups and undertook random-effects pairwise meta-analysis using a relative risk (any adverse effects) or risk difference (serious adverse effects). For outcome valuation, we pooled disutilities (representing the impact of a negative event, e.g. fall, on one's usual quality of life, with 0 = no impact and 1 = death and ~ 0.05 indicating important disutility) from the EQ-5D utility measurement using the inverse variance method and a random-effects model and explored heterogeneity. When studies only reported other data, we compared the findings with our main analysis. For intervention preferences, we used a coding schema identifying whether there were strong, clear, no, or variable preferences within, and then across, studies. We assessed the certainty of evidence for each outcome using CINeMA for benefit outcomes and GRADE for all other outcomes.
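The inverse-variance random-effects pooling step for disutilities can be sketched briefly. This is a generic DerSimonian-Laird implementation with invented study values, not the review's actual data or software:

```python
import math

def inverse_variance_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooling by inverse-variance weighting.
    `estimates` are per-study disutilities; `variances` their sampling variances."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Between-study variance (tau^2) via the DerSimonian-Laird moment estimator
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se

# Invented per-study disutilities and variances, for illustration only:
pooled, se = inverse_variance_pool([0.50, 0.58, 0.55], [0.010, 0.020, 0.015])
print(round(pooled, 2), round(se, 3))
```

Each study's weight is the reciprocal of its variance plus tau², so imprecise studies contribute less to the pooled disutility, and heterogeneity widens the standard error.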
A total of 290 studies were included across the reviews, with two studies included in multiple questions. For benefits and harms, we included 219 trials reporting on 167,864 participants and created 59 interventions (nodes). Transitivity and coherence were assessed as adequate. Across eight NMAs, the number of contributing trials ranged between 19 and 173, and the number of interventions ranged from 19 to 57. Approximately half of the interventions in each network had at least low certainty for benefit. The fallers outcome had the highest number of interventions with moderate certainty for benefit (18/57). For the non-fall outcomes (fractures, hip fracture, long-term care [LTC] admission, functional status, health-related quality of life), many interventions had very low certainty evidence, often from lack of data. We prioritized findings from 21 interventions where there was moderate certainty for at least some benefit. Fourteen of these had a focus on exercise, the majority being supervised (for > 2 sessions) and of long duration (> 3 months), and with balance/resistance and group Tai Chi interventions generally having the most outcomes with at least low certainty for benefit. None of the interventions having moderate certainty evidence focused on walking. Whole-body vibration or home-hazard assessment (HHA) plus exercise provided to everyone showed moderate certainty for some benefit. No multifactorial intervention alone showed moderate certainty for any benefit. Six interventions had only very low certainty evidence for the benefit outcomes. Two interventions had moderate certainty of harmful effects for at least one benefit outcome, though the populations across studies were at high risk for falls. Vitamin D and most single-component exercise interventions are probably associated with minimal adverse effects. Some uncertainty exists about possible adverse effects from other interventions.
For outcome valuation, we included 44 studies of which 34 reported EQ-5D disutilities. Admission to long-term care had the highest disutility (1.0), but the evidence was rated as low certainty. Both fall-related hip (moderate certainty) and non-hip (low certainty) fracture may result in substantial disutility (0.53 and 0.57) in the first 3 months after injury. Disutility for both hip and non-hip fractures is probably lower 12 months after injury (0.16 and 0.19, with high and moderate certainty, respectively) compared to within the first 3 months. No study measured the disutility of an injurious fall. Fractures are probably more important than either falls (0.09 over 12 months) or functional status (0.12). Functional status may be somewhat more important than falls. For intervention preferences, 29 studies (9 qualitative) reported on 17 comparisons among single-component interventions showing benefit. Exercise interventions focusing on balance and/or resistance training appear to be clearly preferred over Tai Chi and other forms of exercise (e.g., yoga, aerobic). For exercise programs in general, there is probably variability among people in whether they prefer group or individual delivery, though there was high certainty that individual was preferred over group delivery of balance/resistance programs. Balance/resistance exercise may be preferred over education, though the evidence was low certainty. There was low certainty for a slight preference for education over cognitive-behavioral therapy, and group education may be preferred over individual education.
To prevent falls among community-dwelling older adults, evidence is most certain for benefit, at least over 1-2 years, from supervised, long-duration balance/resistance and group Tai Chi interventions, whole-body vibration, high-intensity/dose education or cognitive-behavioral therapy, and interventions of comprehensive multifactorial assessment with targeted treatment plus HHA, HHA plus exercise, or education provided to everyone. Adding other interventions to exercise does not appear to substantially increase benefits. Overall, effects appear most applicable to those with elevated fall risk. Choice among effective interventions that are available may best depend on individual patient preferences, though when implementing new balance/resistance programs delivering individual over group sessions when feasible may be most acceptable. Data on more patient-important outcomes including fall-related fractures and adverse effects would be beneficial, as would studies focusing on equity-deserving populations and on programs delivered virtually.
Not registered.
Pillay J, Gaudet LA, Saba S, Vandermeer B, Ashiq AR, Wingert A, Hartling L
... -
《Systematic Reviews》
-
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Survival estimation for patients with symptomatic skeletal metastases ideally should be made before a type of local treatment has already been determined. Currently available survival prediction tools, however, were generated using data from patients treated either operatively or with local radiation alone, raising concerns about whether they would generalize well to all patients presenting for assessment. The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA), trained with institution-based data of surgically treated patients, and the Metastases location, Elderly, Tumor primary, Sex, Sickness/comorbidity, and Site of radiotherapy model (METSSS), trained with registry-based data of patients treated with radiotherapy alone, are two of the most recently developed survival prediction models, but they have not been tested on patients whose local treatment strategy is not yet decided.
(1) Which of these two survival prediction models performed better in a mixed cohort of patients who received local treatment with surgery followed by radiotherapy and patients who received radiotherapy alone for symptomatic bone metastases? (2) Which model performed better among patients whose local treatment consisted of only palliative radiotherapy? (3) Are laboratory values used by SORG-MLA, which are not included in METSSS, independently associated with survival after controlling for predictions made by METSSS?
Between 2010 and 2018, we provided local treatment for 2113 adult patients with skeletal metastases in the extremities at an urban tertiary referral academic medical center using one of two strategies: (1) surgery followed by postoperative radiotherapy or (2) palliative radiotherapy alone. Every patient's survivorship status was ascertained either by their medical records or the national death registry from the Taiwanese National Health Insurance Administration. After applying a priori designated exclusion criteria, 91% (1920) were analyzed here. Among them, 48% (920) of the patients were female, and the median (IQR) age was 62 years (53 to 70 years). Lung was the most common primary tumor site (41% [782]), and 59% (1128) of patients had other skeletal metastases in addition to the treated lesion(s). In general, the indications for surgery were the presence of a complete pathologic fracture or an impending pathologic fracture, defined as having a Mirels score of ≥ 9, in patients with an American Society of Anesthesiologists (ASA) classification of less than or equal to IV and who were considered fit for surgery. The indications for radiotherapy were relief of pain, local tumor control, prevention of skeletal-related events, and any combination of the above. In all, 84% (1610) of the patients received palliative radiotherapy alone as local treatment for the target lesion(s), and 16% (310) underwent surgery followed by postoperative radiotherapy. Neither METSSS nor SORG-MLA was used at the point of care to aid clinical decision-making during the treatment period. Survival was retrospectively estimated by these two models to test their potential for providing survival probabilities. We first compared SORG-MLA to METSSS in the entire population. Then, we repeated the comparison in patients who received local treatment with palliative radiation alone.
We assessed model performance by area under the receiver operating characteristic curve (AUROC), calibration analysis, Brier score, and decision curve analysis (DCA). The AUROC measures discrimination, which is the ability to distinguish patients with the event of interest (such as death at a particular time point) from those without. AUROC typically ranges from 0.5 to 1.0, with 0.5 indicating random guessing and 1.0 a perfect prediction, and in general, an AUROC of ≥ 0.7 indicates adequate discrimination for clinical use. Calibration refers to the agreement between the predicted outcomes (in this case, survival probabilities) and the actual outcomes, with a perfect calibration curve having an intercept of 0 and a slope of 1. A positive intercept indicates that the actual survival is generally underestimated by the prediction model, and a negative intercept suggests the opposite (overestimation). When comparing models, an intercept closer to 0 typically indicates better calibration. Calibration can also be summarized as log(O:E), the logarithm of the ratio of observed (O) to expected (E) survivors. A log(O:E) > 0 signals an underestimation (the observed survival is greater than the predicted survival); a log(O:E) < 0 indicates the opposite (the observed survival is lower than the predicted survival). A model with a log(O:E) closer to 0 is generally considered better calibrated. The Brier score is the mean squared difference between the model predictions and the observed outcomes, and it ranges from 0 (best prediction) to 1 (worst prediction). The Brier score captures both discrimination and calibration, and it is considered a measure of overall model performance. In Brier score analysis, the "null model" assigns a predicted probability equal to the prevalence of the outcome and represents a model that adds no new information. A prediction model should achieve a Brier score lower than the null-model Brier score to be considered useful.
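The Brier score comparison against the null model can be made concrete. A small sketch with invented predictions and outcomes (coded so 1 = survived), not the study's data:

```python
def brier_score(predicted_probs, outcomes):
    """Mean squared difference between predicted probabilities and observed 0/1 outcomes."""
    n = len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(predicted_probs, outcomes)) / n

# Hypothetical 1-year survival predictions for five patients (1 = survived):
preds = [0.9, 0.8, 0.3, 0.2, 0.6]
obs = [1, 1, 0, 0, 1]
model_brier = brier_score(preds, obs)

# Null model: every patient is assigned the outcome prevalence as their prediction.
prevalence = sum(obs) / len(obs)
null_brier = brier_score([prevalence] * len(obs), obs)

print(model_brier < null_brier)  # a useful model should beat the null model
```

Here the model's Brier score (0.068) is well below the null model's (0.24), which is the minimal bar for usefulness described above.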
The DCA was developed as a method to determine whether using a model to inform treatment decisions would do more good than harm. It plots the net benefit of making decisions based on the model's predictions across all possible risk thresholds (or cost-to-benefit ratios) in relation to the two default strategies of treating all or no patients. The care provider can decide on an acceptable risk threshold for the proposed treatment in an individual and assess the corresponding net benefit to determine whether consulting with the model is superior to adopting the default strategies. Finally, we examined whether laboratory data, which were not included in the METSSS model, would have been independently associated with survival after controlling for the METSSS model's predictions using multivariable logistic and Cox proportional hazards regression analyses.
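The net-benefit quantity that DCA plots at each threshold is simple to state: net benefit = TP/n − FP/n × pt/(1 − pt), where pt is the risk threshold. A sketch with an invented cohort, not the study's data:

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit at risk threshold pt: TP/n - FP/n * pt / (1 - pt)."""
    return tp / n - (fp / n) * (threshold / (1 - threshold))

def net_benefit_treat_all(events, n, threshold):
    """'Treat all' default strategy: every event is a TP, every non-event an FP."""
    return net_benefit(events, n - events, n, threshold)

# Invented cohort: 100 patients, 30 events; the model flags 40 (25 TP, 15 FP).
nb_model = net_benefit(tp=25, fp=15, n=100, threshold=0.3)
nb_all = net_benefit_treat_all(events=30, n=100, threshold=0.3)
print(nb_model > nb_all)  # model beats 'treat all' here; 'treat none' is always 0
```

Sweeping the threshold from 0 to 1 and plotting net benefit for the model, "treat all", and "treat none" (which is always zero) reproduces a decision curve.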
Between the two models, only SORG-MLA achieved adequate discrimination (an AUROC of > 0.7) in the entire cohort (of patients treated operatively or with radiation alone) and in the subgroup of patients treated with palliative radiotherapy alone. SORG-MLA outperformed METSSS by a wide margin on discrimination, calibration, and Brier score analyses in not only the entire cohort but also the subgroup of patients whose local treatment consisted of radiotherapy alone. In both the entire cohort and the subgroup, DCA demonstrated that SORG-MLA provided more net benefit compared with the two default strategies (of treating all or no patients) and compared with METSSS when risk thresholds ranged from 0.2 to 0.9 at both 90 days and 1 year, indicating that using SORG-MLA as a decision-making aid was beneficial when a patient's individualized risk threshold for opting for treatment was 0.2 to 0.9. Higher albumin, lower alkaline phosphatase, lower calcium, higher hemoglobin, lower international normalized ratio, higher lymphocytes, lower neutrophils, lower neutrophil-to-lymphocyte ratio, lower platelet-to-lymphocyte ratio, higher sodium, and lower white blood cells were independently associated with better 1-year and overall survival after adjusting for the predictions made by METSSS.
Based on these discoveries, clinicians might choose to consult SORG-MLA instead of METSSS for survival estimation in patients with long-bone metastases presenting for evaluation of local treatment. Basing a treatment decision on the predictions of SORG-MLA could be beneficial when a patient's individualized risk threshold for opting to undergo a particular treatment strategy ranged from 0.2 to 0.9. Future studies might investigate relevant laboratory items when constructing or refining a survival estimation model because these data demonstrated prognostic value independent of the predictions of the METSSS model, and future studies might also seek to keep these models up to date using data from diverse, contemporary patients undergoing both modern operative and nonoperative treatments.
Level III, diagnostic study.
Lee CC, Chen CW, Yen HK, Lin YP, Lai CY, Wang JL, Groot OQ, Janssen SJ, Schwab JH, Hsu FM, Lin WH
... -
《-》
-
Serum and urine nucleic acid screening tests for BK polyomavirus-associated nephropathy in kidney and kidney-pancreas transplant recipients.
BK polyomavirus-associated nephropathy (BKPyVAN) occurs when BK polyomavirus (BKPyV) affects a transplanted kidney, leading to an initial injury characterised by cytopathic damage, inflammation, and fibrosis. BKPyVAN may cause permanent loss of graft function and premature graft loss. Early detection gives clinicians an opportunity to intervene by timely reduction in immunosuppression to reduce adverse graft outcomes. Quantitative nucleic acid testing (QNAT) for detection of BKPyV DNA in blood and urine is increasingly used as a screening test as diagnosis of BKPyVAN by kidney biopsy is invasive and associated with procedural risks. In this review, we assessed the sensitivity and specificity of QNAT tests in patients with BKPyVAN.
We assessed the diagnostic test accuracy of blood/plasma/serum BKPyV QNAT and urine BKPyV QNAT for the diagnosis of BKPyVAN after transplantation. We also investigated the following sources of heterogeneity: types and quality of studies, era of publication, various thresholds of BKPyV-DNAemia/BKPyV viruria and variability between assays as secondary objectives.
We searched MEDLINE (OvidSP), EMBASE (OvidSP), and BIOSIS, and requested a search of the Cochrane Register of diagnostic test accuracy studies from inception to 13 June 2023. We also searched ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform for ongoing trials.
We included cross-sectional or cohort studies assessing the diagnostic accuracy of two index tests (blood/plasma/serum BKPyV QNAT or urine BKPyV QNAT) for the diagnosis of BKPyVAN, as verified by the reference standard (histopathology). Both retrospective and prospective cohort studies were included. We did not include case reports or case-control studies.
Two authors independently carried out data extraction from each study. We assessed the methodological quality of the included studies by using Quality Assessment of Diagnostic-Accuracy Studies (QUADAS-2) assessment criteria. We used the bivariate random-effects model to obtain summary estimates of sensitivity and specificity for the QNAT test with one positivity threshold. In cases where meta-analyses were not possible due to the small number of studies available, we detailed the descriptive evidence and used a summative approach. We explored possible sources of heterogeneity by adding covariates to meta-regression models.
We included 31 relevant studies with a total of 6559 participants in this review. Twenty-six studies included kidney transplant recipients, four studies included kidney and kidney-pancreas transplant recipients, and one study included kidney, kidney-pancreas and kidney-liver transplant recipients. Studies were carried out in South Asia and the Asia-Pacific region (12 studies), North America (9 studies), Europe (8 studies), and South America (2 studies).
Blood/serum/plasma BKPyV QNAT: The diagnostic performance of blood BKPyV QNAT using a common viral load threshold of 10,000 copies/mL was reported in 18 studies (3434 participants). Summary estimates at the 10,000 copies/mL cut-off indicated a pooled sensitivity of 0.86 (95% confidence interval (CI) 0.78 to 0.93) and a pooled specificity of 0.95 (95% CI 0.91 to 0.97). Limited numbers of studies were available for individual viral load thresholds other than 10,000 copies/mL. In an indirect comparison of three cut-off values, 1000 copies/mL (9 studies), 5000 copies/mL (6 studies), and 10,000 copies/mL (18 studies), the higher cut-off of 10,000 copies/mL corresponded to higher specificity with lower sensitivity. Summary estimates for thresholds above 10,000 copies/mL were uncertain, primarily because few studies, with wide CIs, contributed to the analysis. Nonetheless, these indirect comparisons should be interpreted cautiously, since differences in study design, patient populations, and methods among the included studies can introduce bias. Analysis of all blood BKPyV QNAT studies across the various blood viral load thresholds (30 studies, 5658 participants, 7 thresholds) indicated that test performance remained robust: pooled sensitivity 0.90 (95% CI 0.85 to 0.94) and specificity 0.93 (95% CI 0.91 to 0.95). In the multiple cut-off model, which incorporates the various thresholds into a single curve, the optimal cut-off was around 2000 copies/mL, with a sensitivity of 0.89 (95% CI 0.66 to 0.97) and a specificity of 0.88 (95% CI 0.80 to 0.93). However, as most of the included studies were retrospective, and not all participants underwent the reference standard tests, there may be a high risk of selection and verification bias.
Urine BKPyV QNAT: There were insufficient data to thoroughly investigate either the accuracy or the thresholds of urine BKPyV QNAT, so only an imprecise estimate of its accuracy was possible from the available evidence.
There is insufficient evidence to suggest the use of urine BKPyV QNAT as the primary screening tool for BKPyVAN. The summary estimates of the sensitivity and specificity of the blood/serum/plasma BKPyV QNAT test at a threshold of 10,000 copies/mL for BKPyVAN were 0.86 (95% CI 0.78 to 0.93) and 0.95 (95% CI 0.91 to 0.97), respectively. The multiple cut-off model showed that the optimal cut-off was around 2000 copies/mL, with a test sensitivity of 0.89 (95% CI 0.66 to 0.97) and specificity of 0.88 (95% CI 0.80 to 0.93). While 10,000 copies/mL is the most commonly used cut-off, has good test performance characteristics, and supports the current recommendations, it is important to interpret the results with caution because of the low certainty of the evidence.
Maung Myint T, Chong CH, von Huben A, Attia J, Webster AC, Blosser CD, Craig JC, Teixeira-Pinto A, Wong G
... -
《Cochrane Database of Systematic Reviews》
-
Strategies to improve smoking cessation rates in primary care.
Primary care is an important setting in which to treat tobacco addiction. However, the rates at which providers address smoking cessation and the success of that support vary. Strategies can be implemented to improve and increase the delivery of smoking cessation support (e.g. through provider training), and to increase the amount and breadth of support given to people who smoke (e.g. through additional counseling or tailored printed materials).
To assess the effectiveness of strategies intended to increase the success of smoking cessation interventions in primary care settings. To assess whether any effect that these interventions have on smoking cessation may be due to increased implementation by healthcare providers.
We searched the Cochrane Tobacco Addiction Group's Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and trial registries to 10 September 2020.
We included randomized controlled trials (RCTs) and cluster-RCTs (cRCTs) carried out in primary care, including non-pregnant adults. Studies investigated a strategy or strategies to improve the implementation or success of smoking cessation treatment in primary care. These strategies could include interventions designed to increase or enhance the quality of existing support, or smoking cessation interventions offered in addition to standard care (adjunctive interventions). Intervention strategies had to be tested in addition to and in comparison with standard care, or in addition to other active intervention strategies if the effect of an individual strategy could be isolated. Standard care typically incorporates physician-delivered brief behavioral support and an offer of smoking cessation medication, but differs across studies. Studies had to measure smoking abstinence at six months' follow-up or longer.
We followed standard Cochrane methods. Our primary outcome - smoking abstinence - was measured using the most rigorous intention-to-treat definition available. We also extracted outcome data for quit attempts, and the following markers of healthcare provider performance: asking about smoking status; advising on cessation; assessing participant readiness to quit; assisting with cessation; arranging follow-up for smoking participants. Where more than one study investigated the same strategy or set of strategies, and measured the same outcome, we conducted meta-analyses using Mantel-Haenszel random-effects methods to generate pooled risk ratios (RRs) and 95% confidence intervals (CIs).
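The fixed-effect Mantel-Haenszel pooled risk ratio that underlies such meta-analyses can be sketched as follows (the review used a random-effects variant; the trial counts below are invented for illustration):

```python
def mantel_haenszel_rr(tables):
    """Fixed-effect Mantel-Haenszel pooled risk ratio from per-study 2 x 2 tables.
    Each table: (events_intervention, n_intervention, events_control, n_control)."""
    num = sum(a * n2 / (n1 + n2) for a, n1, c, n2 in tables)
    den = sum(c * n1 / (n1 + n2) for a, n1, c, n2 in tables)
    return num / den

# Hypothetical trials of adjunctive counseling (quitters / arm size per arm):
studies = [(30, 200, 20, 200), (15, 100, 12, 100)]
print(round(mantel_haenszel_rr(studies), 2))  # 1.41
```

Each study contributes a_i·n2_i/N_i to the numerator and c_i·n1_i/N_i to the denominator, so larger trials carry more weight without needing per-study variance estimates.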
We included 81 RCTs and cRCTs, involving 112,159 participants. Fourteen were rated at low risk of bias, 44 at high risk, and the remainder at unclear risk. We identified moderate-certainty evidence, limited by inconsistency, that the provision of adjunctive counseling by a health professional other than the physician (RR 1.31, 95% CI 1.10 to 1.55; I² = 44%; 22 studies, 18,150 participants), and provision of cost-free medications (RR 1.36, 95% CI 1.05 to 1.76; I² = 63%; 10 studies, 7560 participants) increased smoking quit rates in primary care. There was also moderate-certainty evidence, limited by risk of bias, that the addition of tailored print materials to standard smoking cessation treatment increased the number of people who had successfully stopped smoking at six months' follow-up or more (RR 1.29, 95% CI 1.04 to 1.59; I² = 37%; 6 studies, 15,978 participants). There was no clear evidence that providing participants who smoked with biomedical risk feedback increased their likelihood of quitting (RR 1.07, 95% CI 0.81 to 1.41; I² = 40%; 7 studies, 3491 participants), or that provider smoking cessation training (RR 1.10, 95% CI 0.85 to 1.41; I² = 66%; 7 studies, 13,685 participants) or provider incentives (RR 1.14, 95% CI 0.97 to 1.34; I² = 0%; 2 studies, 2454 participants) increased smoking abstinence rates. However, we judged the evidence for the former two strategies to be of low certainty, and the evidence for the latter strategy to be of very low certainty. We downgraded the evidence due to imprecision, inconsistency and risk of bias across these comparisons. There was some indication that provider training increased the delivery of smoking cessation support, as did the provision of adjunctive counseling and cost-free medications. However, our secondary outcomes were not measured consistently, and in many cases analyses were subject to substantial statistical heterogeneity, imprecision, or both, making it difficult to draw conclusions.
Thirty-four studies investigated multicomponent interventions to improve smoking cessation rates. There was substantial variation in the combinations of strategies tested, and the resulting individual study effect estimates, precluding meta-analyses in most cases. Meta-analyses provided some evidence that adjunctive counseling combined with either cost-free medications or provider training enhanced quit rates when compared with standard care alone. However, analyses were limited by small numbers of events, high statistical heterogeneity, and studies at high risk of bias. Analyses looking at the effects of combining provider training with flow sheets to aid physician decision-making, and with outreach facilitation, found no clear evidence that these combinations increased quit rates; however, analyses were limited by imprecision, and there was some indication that these approaches did improve some forms of provider implementation.
There is moderate-certainty evidence that providing adjunctive counseling by an allied health professional, cost-free smoking cessation medications, and tailored printed materials as part of smoking cessation support in primary care can increase the number of people who achieve smoking cessation. There is no clear evidence that providing participants with biomedical risk feedback, or providing primary care providers with training or incentives to deliver smoking cessation support, enhances quit rates. However, we rated this evidence as of low or very low certainty, so conclusions are likely to change as further evidence becomes available. Most of the studies in this review evaluated smoking cessation interventions that had already been extensively tested in the general population. Further studies should assess strategies designed to optimize the delivery of those interventions already known to be effective within the primary care setting. Such studies should be cluster-randomized to account for the implications of implementation in this particular setting. Due to substantial variation between studies in this review, identifying the optimal characteristics of multicomponent interventions to improve the delivery of smoking cessation treatment was challenging. Future research could use component network meta-analysis to investigate this further.
Lindson N, Pritchard G, Hong B, Fanshawe TR, Pipe A, Papadakis S, et al.
Cochrane Database of Systematic Reviews