-
Antimicrobial peptide LyeTx I mn∆K labeled with 68Ga is a potential PET radiopharmaceutical for molecular imaging of infections.
Antimicrobial peptides have been radiolabeled and investigated as molecular diagnostic probes due to their propensity to accumulate in infectious sites rather than aseptic inflammatory lesions. LyeTx I is a cationic peptide from the venom of Lycosa erythrognatha, exhibiting significant antimicrobial activity. LyeTx I mn∆K is a shortened derivative of LyeTx I, with an optimized balance between antimicrobial and hemolytic activities. This study reports the first 68Ga-radiolabeling of the DOTA-modified LyeTx I mn∆K and preliminary preclinical evaluation of [68Ga]Ga-DOTA(K)-LyeTx I mn∆K as a PET radiopharmaceutical for infection imaging.
DOTA(K)-LyeTx I mn∆K was radiolabeled with freshly eluted 68Ga. Radiochemical yield (RCY), radiochemical purity (RCP), and radiochemical stability (in saline and serum) were evaluated using ascending thin-layer chromatography (TLC) and reversed-phase high-performance liquid chromatography (RP-HPLC). The radiopeptide's lipophilicity was assessed by determining the logarithm of the partition coefficient (Log P). Serum protein binding (SPB) and binding to Staphylococcus aureus (S. aureus) cells were determined in vitro. Ex vivo biodistribution studies and PET/CT imaging were conducted in healthy mice (control) and mice with infection and aseptic inflammation to evaluate the potential of [68Ga]Ga-DOTA(K)-LyeTx I mn∆K as a specific PET radiopharmaceutical for infections.
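For context, a Log P value in studies of this kind is typically obtained by the shake-flask method: the radiotracer is partitioned between n-octanol and water, and Log P is the log10 ratio of radioactivity counts in the two phases. A minimal sketch with invented toy counts (illustrative only, not data from this study):

```python
import math

def log_p(counts_organic: float, counts_aqueous: float) -> float:
    """Shake-flask Log P estimate: log10 of the radioactivity ratio
    between the organic (n-octanol) and aqueous phases."""
    return math.log10(counts_organic / counts_aqueous)

# Toy counts for a hydrophilic tracer: most activity stays in the aqueous phase,
# which yields a negative Log P.
print(round(log_p(40, 10000), 1))  # -2.4
```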
[68Ga]Ga-DOTA(K)-LyeTx I mn∆K was obtained with a high RCY (>90%), and after purification through a Sep-Pak C18 cartridge, the RCP exceeded 99%. Ascending TLC and RP-HPLC showed that the radiopeptide remained stable for up to 3.0 h in saline solution and up to 1.5 h in murine serum. [68Ga]Ga-DOTA(K)-LyeTx I mn∆K exhibited hydrophilic characteristics (Log P = -2.4 ± 0.1) and low SPB (ranging from 23.3 ± 0.4% at 5 min of incubation to 10.5 ± 1.1% at 60 min). The binding of [68Ga]Ga-DOTA(K)-LyeTx I mn∆K to S. aureus cells was proportional to bacterial concentration, with binding percentages of 8.8 ± 0.5% (0.5 × 10⁹ CFU/mL), 16.2 ± 1.4% (1.0 × 10⁹ CFU/mL), and 62.2 ± 0.6% (5.0 × 10⁹ CFU/mL). Ex vivo biodistribution studies and PET/CT images showed higher radiopeptide uptake at the infection site than at the aseptic inflammation site; uptake in the latter was similar to that in the control group. Target-to-non-target (T/NT) ratios obtained from ex vivo biodistribution data were approximately 1.0, 1.3, and 3.0 at all investigated time intervals for the control, aseptic inflammation, and infection groups, respectively. Furthermore, T/NT ratios obtained from PET/CT images were 1.1 ± 0.1 for the control group and 1.4 ± 0.1 for the aseptic inflammation group. For the infection group, the T/NT ratio was 5.0 ± 0.3, roughly fourfold higher than in the other two groups.
The results suggest the potential of [68Ga]Ga-DOTA(K)-LyeTx I mn∆K as a PET radiopharmaceutical for molecular imaging of infections.
Fuscaldi LL, Durante ACR, Dapueto R, Reyes AL, Paolino A, Savio E, Malavolta L, de Lima ME, Fernandes SOA, Cardoso VN, de Barboza MF
... -
《-》
-
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Survival estimation for patients with symptomatic skeletal metastases ideally should be made before the type of local treatment is determined. Currently available survival prediction tools, however, were generated using data from patients treated either operatively or with local radiation alone, raising concerns about whether they would generalize well to all patients presenting for assessment. The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA), trained with institution-based data of surgically treated patients, and the Metastases location, Elderly, Tumor primary, Sex, Sickness/comorbidity, and Site of radiotherapy model (METSSS), trained with registry-based data of patients treated with radiotherapy alone, are two of the most recently developed survival prediction models, but they have not been tested on patients whose local treatment strategy is not yet decided.
(1) Which of these two survival prediction models performed better in a mixed cohort of patients who received local treatment with surgery followed by radiotherapy and patients who received radiotherapy alone for symptomatic bone metastases? (2) Which model performed better among patients whose local treatment consisted of palliative radiotherapy alone? (3) Are laboratory values used by SORG-MLA, which are not included in METSSS, independently associated with survival after controlling for predictions made by METSSS?
Between 2010 and 2018, we provided local treatment for 2113 adult patients with skeletal metastases in the extremities at an urban tertiary referral academic medical center using one of two strategies: (1) surgery followed by postoperative radiotherapy or (2) palliative radiotherapy alone. Every patient's survivorship status was ascertained either from medical records or from the national death registry of the Taiwanese National Health Insurance Administration. After applying a priori designated exclusion criteria, 91% (1920) were analyzed here. Among them, 48% (920) of the patients were female, and the median (IQR) age was 62 years (53 to 70 years). Lung was the most common primary tumor site (41% [782]), and 59% (1128) of patients had other skeletal metastases in addition to the treated lesion(s). In general, the indications for surgery were the presence of a complete pathologic fracture or an impending pathologic fracture, defined as having a Mirels score of ≥ 9, in patients with an American Society of Anesthesiologists (ASA) classification of less than or equal to IV who were considered fit for surgery. The indications for radiotherapy were relief of pain, local tumor control, prevention of skeletal-related events, or any combination of the above. In all, 84% (1610) of the patients received palliative radiotherapy alone as local treatment for the target lesion(s), and 16% (310) underwent surgery followed by postoperative radiotherapy. Neither METSSS nor SORG-MLA was used at the point of care to aid clinical decision-making during the treatment period. Survival was retrospectively estimated by these two models to test their potential for providing survival probabilities. We first compared SORG-MLA with METSSS in the entire cohort, then repeated the comparison in patients who received palliative radiotherapy alone as local treatment.
We assessed model performance by area under the receiver operating characteristic curve (AUROC), calibration analysis, Brier score, and decision curve analysis (DCA). The AUROC measures discrimination, which is the ability to distinguish patients with the event of interest (such as death at a particular time point) from those without. AUROC typically ranges from 0.5 to 1.0, with 0.5 indicating random guessing and 1.0 a perfect prediction, and in general, an AUROC of ≥ 0.7 indicates adequate discrimination for clinical use. Calibration refers to the agreement between the predicted outcomes (in this case, survival probabilities) and the actual outcomes, with a perfect calibration curve having an intercept of 0 and a slope of 1. A positive intercept indicates that the actual survival is generally underestimated by the prediction model, and a negative intercept suggests the opposite (overestimation). When comparing models, an intercept closer to 0 typically indicates better calibration. Calibration can also be summarized as log(O:E), the logarithm of the ratio of observed (O) to expected (E) survivors. A log(O:E) > 0 signals an underestimation (the observed survival is greater than the predicted survival), and a log(O:E) < 0 indicates the opposite (the observed survival is lower than the predicted survival). A model with a log(O:E) closer to 0 is generally considered better calibrated. The Brier score is the mean squared difference between the model predictions and the observed outcomes, and it ranges from 0 (best prediction) to 1 (worst prediction). The Brier score captures both discrimination and calibration, and it is considered a measure of overall model performance. In Brier score analysis, the "null model" assigns a predicted probability equal to the prevalence of the outcome and represents a model that adds no new information. A prediction model should achieve a Brier score at least lower than the null-model Brier score to be considered useful.
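As a toy illustration of the summary metrics defined above (invented numbers, not the study's analysis code), the Brier score, null-model Brier score, and log(O:E) can be computed as:

```python
import math

def brier_score(probs, outcomes):
    # Mean squared difference between predicted probabilities and observed 0/1 outcomes.
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def log_oe(probs, outcomes):
    # Log of the observed-to-expected ratio; > 0 means survival was underestimated.
    return math.log(sum(outcomes) / sum(probs))

# Toy data: predicted 1-year survival probabilities and observed survival (1 = alive).
preds = [0.9, 0.7, 0.4, 0.2, 0.6]
obs = [1, 1, 0, 0, 1]

prevalence = sum(obs) / len(obs)
null_brier = brier_score([prevalence] * len(obs), obs)  # baseline that adds no information

print(round(brier_score(preds, obs), 3))  # 0.092 (lower, i.e. better, than the null model's 0.24)
print(round(log_oe(preds, obs), 3))       # 0.069 (slight underestimation of survival)
```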
The DCA was developed as a method to determine whether using a model to inform treatment decisions would do more good than harm. It plots the net benefit of making decisions based on the model's predictions across all possible risk thresholds (or cost-to-benefit ratios) in relation to the two default strategies of treating all or no patients. The care provider can decide on an acceptable risk threshold for the proposed treatment in an individual and assess the corresponding net benefit to determine whether consulting with the model is superior to adopting the default strategies. Finally, we examined whether laboratory data, which were not included in the METSSS model, would have been independently associated with survival after controlling for the METSSS model's predictions by using the multivariable logistic and Cox proportional hazards regression analyses.
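The net benefit curve that DCA plots follows the standard decision-curve formula, net benefit = TP/n - FP/n x pt/(1 - pt), where pt is the risk threshold. A hypothetical sketch with toy predictions (not the authors' code):

```python
def net_benefit(probs, outcomes, threshold):
    # Net benefit of treating only patients whose predicted risk of the event
    # (here, death) meets the chosen risk threshold.
    n = len(probs)
    tp = sum(1 for p, o in zip(probs, outcomes) if p >= threshold and o == 1)
    fp = sum(1 for p, o in zip(probs, outcomes) if p >= threshold and o == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_treat_all(outcomes, threshold):
    # Default strategy of treating every patient, regardless of predicted risk.
    prevalence = sum(outcomes) / len(outcomes)
    return prevalence - (1 - prevalence) * threshold / (1 - threshold)

# Toy predictions: a model is useful at a given threshold when its net benefit
# exceeds both treat-all and treat-none (net benefit 0).
risks = [0.9, 0.8, 0.3, 0.1]
deaths = [1, 1, 0, 0]
print(net_benefit(risks, deaths, 0.5))     # 0.5
print(net_benefit_treat_all(deaths, 0.5))  # 0.0
```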
Between the two models, only SORG-MLA achieved adequate discrimination (an AUROC of > 0.7) in the entire cohort (of patients treated operatively or with radiation alone) and in the subgroup of patients treated with palliative radiotherapy alone. SORG-MLA outperformed METSSS by a wide margin on discrimination, calibration, and Brier score analyses in not only the entire cohort but also the subgroup of patients whose local treatment consisted of radiotherapy alone. In both the entire cohort and the subgroup, DCA demonstrated that SORG-MLA provided more net benefit compared with the two default strategies (of treating all or no patients) and compared with METSSS when risk thresholds ranged from 0.2 to 0.9 at both 90 days and 1 year, indicating that using SORG-MLA as a decision-making aid was beneficial when a patient's individualized risk threshold for opting for treatment was 0.2 to 0.9. Higher albumin, lower alkaline phosphatase, lower calcium, higher hemoglobin, lower international normalized ratio, higher lymphocytes, lower neutrophils, lower neutrophil-to-lymphocyte ratio, lower platelet-to-lymphocyte ratio, higher sodium, and lower white blood cells were independently associated with better 1-year and overall survival after adjusting for the predictions made by METSSS.
Based on these findings, clinicians might choose to consult SORG-MLA instead of METSSS for survival estimation in patients with long-bone metastases presenting for evaluation of local treatment. Basing a treatment decision on the predictions of SORG-MLA could be beneficial when a patient's individualized risk threshold for opting to undergo a particular treatment strategy ranged from 0.2 to 0.9. Future studies might investigate relevant laboratory items when constructing or refining a survival estimation model because these data demonstrated prognostic value independent of the predictions of the METSSS model, and future studies might also seek to keep these models up to date using data from diverse, contemporary patients undergoing both modern operative and nonoperative treatments.
Level III, diagnostic study.
Lee CC, Chen CW, Yen HK, Lin YP, Lai CY, Wang JL, Groot OQ, Janssen SJ, Schwab JH, Hsu FM, Lin WH
... -
《-》
-
Synbiotics, prebiotics and probiotics for people with chronic kidney disease.
Chronic kidney disease (CKD) is a major public health problem affecting 13% of the global population. Prior research has indicated that CKD is associated with gut dysbiosis. Gut dysbiosis may lead to the development and/or progression of CKD, and CKD may in turn lead to gut dysbiosis as a result of uraemic toxins, intestinal wall oedema, metabolic acidosis, prolonged intestinal transit times, polypharmacy (frequent antibiotic exposures) and dietary restrictions used to treat CKD. Interventions such as synbiotics, prebiotics, and probiotics may improve the balance of the gut flora by altering intestinal pH, improving gut microbiota balance and enhancing gut barrier function (i.e. reducing gut permeability).
This review aimed to evaluate the benefits and harms of synbiotics, prebiotics, and probiotics for people with CKD.
We searched the Cochrane Kidney and Transplant Register of Studies up to 9 October 2023 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Registry Platform (ICTRP) Search Portal and ClinicalTrials.gov.
We included randomised controlled trials (RCTs) measuring and reporting the effects of synbiotics, prebiotics, or probiotics in any combination and any formulation given to people with CKD (CKD stages 1 to 5, including dialysis and kidney transplant). Two authors independently assessed the retrieved titles and abstracts and, where necessary, the full text to determine which satisfied the inclusion criteria.
Data extraction was independently carried out by two authors using a standard data extraction form. Summary estimates of effect were obtained using a random-effects model, and results were expressed as risk ratios (RR) and their 95% confidence intervals (CI) for dichotomous outcomes, and mean difference (MD) or standardised mean difference (SMD) and 95% CI for continuous outcomes. The methodological quality of the included studies was assessed using the Cochrane risk of bias tool. Data entry was carried out by one author and cross-checked by another. Confidence in the evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.
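For readers unfamiliar with random-effects pooling, the inverse-variance DerSimonian-Laird estimator commonly used in such meta-analyses can be sketched as follows (a toy illustration with made-up effect sizes; actual Cochrane reviews use dedicated software such as RevMan):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes (e.g. log risk ratios or mean differences)
    under a DerSimonian-Laird random-effects model.
    Returns (pooled effect, 95% CI)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Two made-up studies with mean differences 0.2 and 0.4 and equal variance.
pooled, ci = dersimonian_laird([0.2, 0.4], [0.04, 0.04])
print(round(pooled, 2))  # 0.3
```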
Forty-five studies (2266 randomised participants) were included in this review. Study participants were adults (two studies in children) with CKD ranging from stages 1 to 5, with patients receiving and not receiving dialysis, of whom half also had diabetes and hypertension. No studies investigated the same synbiotic, prebiotic or probiotic of similar strains, doses, or frequencies. Most studies were judged to be low risk for selection bias, performance bias and reporting bias, unclear risk for detection bias and for control of confounding factors, and high risk for attrition and other biases. Compared to prebiotics, it is uncertain whether synbiotics improve estimated glomerular filtration rate (eGFR) at four weeks (1 study, 34 participants: MD -3.80 mL/min/1.73 m², 95% CI -17.98 to 10.38), indoxyl sulfate at four weeks (1 study, 42 participants: MD 128.30 ng/mL, 95% CI -242.77 to 499.37), change in gastrointestinal (GI) upset (borborygmi) at four weeks (1 study, 34 participants: RR 15.26, 95% CI 0.99 to 236.23), or change in GI upset (Gastrointestinal Symptom Rating Scale) at 12 months (1 study, 56 participants: MD 0.00, 95% CI -0.27 to 0.27), because the certainty of the evidence was very low. Compared to certain strains of prebiotics, it is uncertain whether a different strain of prebiotics improves eGFR at 12 weeks (1 study, 50 participants: MD 0.00 mL/min, 95% CI -1.73 to 1.73), indoxyl sulfate at six weeks (2 studies, 64 participants: MD -0.20 μg/mL, 95% CI -1.01 to 0.61; I² = 0%) or change in any GI upset, intolerance or microbiota composition, because the certainty of the evidence was very low. Compared to certain strains of probiotics, it is uncertain whether a different strain of probiotic improves eGFR at eight weeks (1 study, 30 participants: MD -0.64 mL/min, 95% CI -9.51 to 8.23; very low certainty evidence).
Compared to placebo or no treatment, it is uncertain whether synbiotics improve eGFR at six or 12 weeks (2 studies, 98 participants: MD 1.42 mL/min, 95% CI 0.65 to 2.2) or change in any GI upset or intolerance at 12 weeks because the certainty of the evidence was very low. Compared to placebo or no treatment, it is uncertain whether prebiotics improve indoxyl sulfate at eight weeks (2 studies, 75 participants: SMD -0.14 mg/L, 95% CI -0.60 to 0.31; very low certainty evidence) or microbiota composition because the certainty of the evidence is very low. Compared to placebo or no treatment, it is uncertain whether probiotics improve eGFR at eight, 12 or 15 weeks (3 studies, 128 participants: MD 2.73 mL/min, 95% CI -2.28 to 7.75; I² = 78%), proteinuria at 12 or 24 weeks (1 study, 60 participants: MD -15.60 mg/dL, 95% CI -34.30 to 3.10), indoxyl sulfate at 12 or 24 weeks (2 studies, 83 participants: MD -4.42 mg/dL, 95% CI -9.83 to 1.35; I² = 0%), or any change in GI upset or intolerance because the certainty of the evidence was very low. Probiotics may have little or no effect on albuminuria at 12 or 24 weeks compared to placebo or no treatment (4 studies, 193 participants: MD 0.02 g/dL, 95% CI -0.08 to 0.13; I² = 0%; low certainty evidence). For all comparisons, adverse events were poorly reported and were minimal (flatulence, nausea, diarrhoea, abdominal pain) and non-serious, and withdrawals were not related to the study treatment.
We found very few studies that adequately tested biotic supplementation as an alternative treatment for improving kidney function, GI symptoms, dialysis outcomes, allograft function, patient-reported outcomes, cardiovascular disease, cancer, and uraemic toxin levels, or for assessing adverse effects. We are not certain whether synbiotics, prebiotics, or probiotics are more or less effective compared to one another, antibiotics, or standard care for improving patient outcomes in people with CKD. Adverse events were uncommon and mild.
Cooper TE, Khalid R, Chan S, Craig JC, Hawley CM, Howell M, Johnson DW, Jaure A, Teixeira-Pinto A, Wong G
... -
《Cochrane Database of Systematic Reviews》
-
Impact of residual disease as a prognostic factor for survival in women with advanced epithelial ovarian cancer after primary surgery.
Ovarian cancer is the seventh most common cancer among women and a leading cause of death from gynaecological malignancies. Epithelial ovarian cancer is the most common type, accounting for around 90% of all ovarian cancers. This specific type of ovarian cancer starts in the surface layer covering the ovary or lining of the fallopian tube. Surgery is performed either before chemotherapy (upfront or primary debulking surgery (PDS)) or in the middle of a course of treatment with chemotherapy (neoadjuvant chemotherapy (NACT) and interval debulking surgery (IDS)), with the aim of removing all visible tumour and achieving no macroscopic residual disease (NMRD). The aim of this review is to investigate the prognostic impact of size of residual disease nodules (RD) in women who received upfront or interval cytoreductive surgery for advanced (stage III and IV) epithelial ovarian cancer (EOC).
To assess the prognostic impact of residual disease after primary surgery on survival outcomes for advanced (stage III and IV) epithelial ovarian cancer. In separate analyses, primary surgery included both upfront primary debulking surgery (PDS) followed by adjuvant chemotherapy and neoadjuvant chemotherapy followed by interval debulking surgery (IDS). Each residual disease threshold is considered as a separate prognostic factor.
We searched CENTRAL (2021, Issue 8), MEDLINE via Ovid (to 30 August 2021) and Embase via Ovid (to 30 August 2021).
We included survival data from studies of at least 100 women with advanced EOC after primary surgery. Residual disease was assessed as a prognostic factor in multivariate prognostic models. We excluded studies that included fewer than 100 women, studies of women with concurrent malignancies, and studies that reported only unadjusted results. Women were divided into two distinct groups, analysed separately: those who received PDS followed by platinum-based chemotherapy and those who received IDS. We included studies that reported all RD thresholds after surgery, but the main thresholds of interest were microscopic RD (labelled NMRD), RD 0.1 cm to 1 cm (small-volume residual disease (SVRD)) and RD > 1 cm (large-volume residual disease (LVRD)).
Two review authors independently abstracted data and assessed risk of bias. Where possible, we synthesised the data in meta-analysis. To assess the adequacy of adjustment factors used in multivariate Cox models, we used the 'adjustment for other prognostic factors' and 'statistical analysis and reporting' domains of the quality in prognosis studies (QUIPS) tool. We also made judgements about the certainty of the evidence for each outcome in the main comparisons, using GRADE. We examined differences between FIGO stages III and IV for different thresholds of RD after primary surgery. We considered factors such as age, grade, length of follow-up, type and experience of surgeon, and type of surgery in the interpretation of any heterogeneity. We also performed sensitivity analyses that distinguished between studies that included NMRD in RD categories of < 1 cm and those that did not. This was applicable to comparisons involving RD < 1 cm with the exception of RD < 1 cm versus NMRD. We evaluated women undergoing PDS and IDS in separate analyses.
We found 46 studies reporting multivariate prognostic analyses, including RD as a prognostic factor, which met our inclusion criteria: 22,376 women who underwent PDS and 3697 who underwent IDS, all with varying levels of RD. While we identified a range of different RD thresholds, we mainly report on comparisons that are the focus of a key area of clinical uncertainty (involving NMRD, SVRD and LVRD). The comparison involving any visible disease (RD > 0 cm) and NMRD was also important.
SVRD versus NMRD in a PDS setting
In PDS studies, most showed an increased risk of death in all RD groups when those with macroscopic RD (MRD) were compared to NMRD. Women who had SVRD after PDS had more than twice the risk of death compared to women with NMRD (hazard ratio (HR) 2.03, 95% confidence interval (CI) 1.80 to 2.29; I² = 50%; 17 studies; 9404 participants; moderate-certainty). The analysis of progression-free survival found that women who had SVRD after PDS had nearly twice the risk of disease progression or death compared to women with NMRD (HR 1.88, 95% CI 1.63 to 2.16; I² = 63%; 10 studies; 6596 participants; moderate-certainty).
LVRD versus SVRD in a PDS setting
When we compared LVRD versus SVRD following surgery, the estimates were attenuated compared to the NMRD comparisons. All analyses showed an overall survival benefit in women who had RD < 1 cm after surgery (HR 1.22, 95% CI 1.13 to 1.32; I² = 0%; 5 studies; 6000 participants; moderate-certainty). The results were robust in analyses of progression-free survival.
SVRD and LVRD versus NMRD in an IDS setting
The one study that defined the categories as NMRD, SVRD and LVRD showed that women who had SVRD and LVRD after IDS had more than twice the risk of death compared to women who had NMRD (HR 2.09, 95% CI 1.20 to 3.66; 310 participants; I² = 56%, and HR 2.23, 95% CI 1.49 to 3.34; 343 participants; I² = 35%; very low-certainty, for SVRD versus NMRD and LVRD versus NMRD, respectively).
LVRD versus SVRD + NMRD in an IDS setting
Meta-analysis found that women who had LVRD had a greater risk of death and disease progression compared to women who had either SVRD or NMRD (HR 1.60, 95% CI 1.21 to 2.11; 6 studies; 1572 participants; I² = 58% for overall survival, and HR 1.76, 95% CI 1.23 to 2.52; 1145 participants; I² = 60% for progression-free survival; very low-certainty). However, this result is biased because in all but one study it was not possible to distinguish NMRD within the < 1 cm threshold. Only one study separated NMRD from SVRD; all others included NMRD in the SVRD group, which may create bias when comparing with LVRD, making interpretation challenging.
MRD versus NMRD in an IDS setting
Women who had any amount of MRD after IDS had more than twice the risk of death compared to women with NMRD (HR 2.11, 95% CI 1.35 to 3.29; I² = 81%; 906 participants; very low-certainty).
In a PDS setting, there is moderate-certainty evidence that the amount of RD after primary surgery is a prognostic factor for overall and progression-free survival in women with advanced ovarian cancer. We separated our analysis into three distinct categories for the survival outcome including NMRD, SVRD and LVRD. After IDS, there may be only two categories required, although this is based on very low-certainty evidence, as all but one study included NMRD in the SVRD category. The one study that separated NMRD from SVRD showed no improved survival outcome in the SVRD category, compared to LVRD. Further low-certainty evidence also supported restricting to two categories, where women who had any amount of MRD after IDS had a significantly greater risk of death compared to women with NMRD. Therefore, the evidence presented in this review cannot conclude that using three categories applies in an IDS setting (very low-certainty evidence), as was supported for PDS (which has convincing moderate-certainty evidence).
Bryant A, Hiu S, Kunonga PT, Gajjar K, Craig D, Vale L, Winter-Roach BA, Elattar A, Naik R
... -
《Cochrane Database of Systematic Reviews》
-
Next-generation Sequencing Results Require Higher Inoculum for Cutibacterium acnes Detection Than Conventional Anaerobic Culture.
Cutibacterium acnes has been described as the most common causative microorganism in prosthetic shoulder infections. Conventional anaerobic culture and molecular-based technologies are usually used for its detection, but little to no concordance between these methodologies (κ = 0.333 or less) has been observed.
(1) Is the minimum C. acnes load for detection higher for next-generation sequencing (NGS) than for anaerobic conventional culture? (2) What duration of incubation is necessary for anaerobic culture to detect all C. acnes loads?
Five C. acnes strains were tested in this study: four were clinical strains causing infection, isolated from surgical samples, and the fifth was a reference strain commonly used as a positive and quality control in microbiology and bioinformatics. To create inoculums with varying degrees of bacterial load, we began with a standard bacterial suspension at 1.5 × 10⁸ colony-forming units (CFU)/mL and created six more diluted suspensions (from 1.5 × 10⁶ CFU/mL to 1.5 × 10¹ CFU/mL). Briefly, to do so, we transferred 200 µL from the tube with the highest inoculum (for example, 1.5 × 10⁶ CFU/mL) to the following dilution tube (1.5 × 10⁵ CFU/mL; 1800 µL of diluent + 200 µL of the 1.5 × 10⁶ CFU/mL suspension). We continued the transfers serially to create all diluted suspensions. Six tubes were prepared per strain, and 30 bacterial suspensions were tested per assay. Then, 100 µL of each diluted suspension was inoculated onto brain heart infusion agar with horse blood and onto taurocholate agar plates; two plates were used per bacterial suspension in each assay. All plates were incubated at 37°C in an anaerobic chamber and assessed for growth after 3 days of incubation and daily thereafter until positive or until Day 14. The remaining volume of each bacterial suspension was sent for NGS analysis to quantify bacterial DNA copies. We performed the experimental assays in duplicate. We calculated mean DNA copies and CFUs for each strain, bacterial load, and incubation timepoint assessed. We reported detection by NGS and culture as a qualitative variable based on the identification or absence of DNA copies and CFUs, respectively. In this way, we identified the minimum bacterial load detected by NGS and by culture, regardless of incubation time, and qualitatively compared detection rates between methodologies. Simultaneously, we tracked C. acnes growth on agar plates and determined the minimum incubation time in days required for CFU detection across all strains and loads examined in this study. Growth detection and bacterial CFU counting were performed by three laboratory personnel, with high intraobserver and interobserver agreement (κ > 0.80). A two-tailed p value below 0.05 was considered statistically significant.
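The arithmetic of the serial dilutions above is straightforward: each 200 µL transfer into 1800 µL of diluent is a 1:10 dilution. A small sketch of the series (illustrative only, not the laboratory's protocol code):

```python
def tenfold_dilutions(start_cfu_per_ml, steps):
    # Each 200 uL + 1800 uL transfer divides the concentration by 10.
    series = [start_cfu_per_ml]
    for _ in range(steps):
        series.append(series[-1] / 10.0)
    return series

# Going from 1.5 x 10^6 CFU/mL down to 1.5 x 10^1 CFU/mL takes five 1:10 steps.
series = tenfold_dilutions(1.5e6, 5)
print(series[-1])  # 15.0 CFU/mL

# Plating 100 uL (0.1 mL) of the lowest suspension deposits only about 1.5 CFU
# per plate, which is why detection at this load is demanding.
print(series[-1] * 0.1)
```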
Conventional culture detected C. acnes at a load as low as 1.5 × 10¹ CFU/mL, whereas NGS detected the bacteria only at a higher concentration, 1.5 × 10² CFU/mL. This is reflected in a lower positive detection proportion for NGS (73% [22 of 30]) than for culture (100% [30 of 30]; p = 0.004). By 7 days, anaerobic cultures were able to detect all C. acnes loads, even at the lowest concentrations.
When NGS is negative and culture is positive for C. acnes, there is likely a low bacterial load. Holding cultures beyond 7 days is likely unnecessary.
These findings may help treating physicians decide whether low bacterial loads necessitate aggressive antibiotic treatment or are more likely contaminants. Cultures that turn positive beyond 7 days likely represent contamination or bacterial loads below even the lowest dilution used in this study. Physicians may benefit from studies designed to clarify the clinical importance of the low bacterial loads at which the two methodologies' detection differed. Moreover, researchers might explore whether even lower C. acnes loads have a role in true periprosthetic joint infection.
Fernández-Rodríguez D, Cho J, Parvizi N, Khan AZ, Parvizi J, Namdari S
... -
《-》