Misuse of the Lower Limit of Detection in HBV DNA Testing and Anti-HBe Positive Status Will Significantly Impact the Diagnosis of Occult HBV Infection.
The diagnosis of occult hepatitis B virus (HBV) infection (OBI) is influenced by factors such as the lower limit of detection (LOD) of the HBV DNA test. In clinical practice and research, however, the lower limit of quantification (LOQ) is often misused as the LOD. This study aimed to investigate the impact of misusing the LOQ as the LOD of the HBV DNA test on the detection rate of OBI, as well as the risk factors for OBI. Four hundred twelve patients who were HBsAg-negative and had undergone high-sensitivity HBV DNA testing were included in this study. HBV DNA was detected using the Cobas 6800 System, with an LOD of 2.4 IU/mL and an LOQ of 10 IU/mL. The effect of using the LOQ as the LOD on the detection rate of OBI was assessed, and univariate and multivariate logistic regression analyses were used to explore risk factors for OBI. (1) Of the 412 patients, 63.3% (n = 261) were male, with a median age of 47 (range 34-55) years. A total of 473 HBV DNA test results were obtained: 366 patients underwent only one HBV DNA test, and the remaining 46 underwent 2 to 5 HBV DNA tests (yielding a total of 107 test results). (2) Considering only the first HBV DNA test result, the detection rate of OBI was 4.1% (17/412). When the LOQ (10 IU/mL) was used as the LOD, however, the detection rate of OBI fell to 1.5% (6/412) (p < 0.001). (3) Univariate analysis showed statistically significant differences in age, anti-HBe positivity rate, and anti-HBc positivity rate between OBI and non-OBI individuals (p < 0.05). Multivariate regression analysis showed that anti-HBe positivity was an independent risk factor for OBI in this study (odds ratio [OR] = 3.807, 95% confidence interval [CI]: 1.065-13.617, p = 0.040), whereas anti-HBs positivity was a protective factor against OBI (OR = 0.271, 95% CI: 0.093-0.787, p = 0.016).
(4) Among the 46 patients who underwent repeated testing, seven were HBV DNA-positive on the first test, and six tested positive for HBV DNA at least once in subsequent tests. When OBI was defined as at least one detectable HBV DNA result across the 1-5 tests, the detection rate of OBI in this study increased from 4.1% to 5.6%. The detection rate of OBI among HBsAg-negative adult patients attending hepatology departments in this region was 4.1%. Misusing the LOQ as the LOD can significantly decrease the detection rate of OBI, whereas anti-HBe positivity and repeated HBV DNA testing are associated with a significantly higher detection rate of OBI.
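The threshold effect the study describes can be illustrated with a minimal sketch (all values below are invented for illustration and are not the study's raw data):

```python
# Hypothetical illustration: treating the LOQ as the positivity cut-off
# lowers the apparent OBI detection rate. Values are invented, not study data.
LOD = 2.4   # lower limit of detection, IU/mL (Cobas 6800 System)
LOQ = 10.0  # lower limit of quantification, IU/mL

hbv_dna = [0.0, 3.1, 5.6, 12.0, 0.0, 8.2]  # hypothetical HBV DNA results, IU/mL

detected_at_lod = sum(1 for v in hbv_dna if v >= LOD)  # any detectable signal
detected_at_loq = sum(1 for v in hbv_dna if v >= LOQ)  # quantifiable signal only

rate_lod = detected_at_lod / len(hbv_dna)
rate_loq = detected_at_loq / len(hbv_dna)
```

Results between the LOD and the LOQ ("detected, below quantification") are exactly the ones lost when the LOQ is misused as the LOD, which is the mechanism behind the drop from 4.1% to 1.5% reported above.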
Wang B, Wang X, Xiao L, Xian J, et al.
《-》
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Survival estimation for patients with symptomatic skeletal metastases ideally should be made before a type of local treatment has already been determined. Currently available survival prediction tools, however, were generated using data from patients treated either operatively or with local radiation alone, raising concerns about whether they would generalize well to all patients presenting for assessment. The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA), trained with institution-based data of surgically treated patients, and the Metastases location, Elderly, Tumor primary, Sex, Sickness/comorbidity, and Site of radiotherapy model (METSSS), trained with registry-based data of patients treated with radiotherapy alone, are two of the most recently developed survival prediction models, but they have not been tested on patients whose local treatment strategy is not yet decided.
(1) Which of these two survival prediction models performed better in a mixed cohort comprising both patients who received local treatment with surgery followed by radiotherapy and patients who received radiotherapy alone for symptomatic bone metastases? (2) Which model performed better among patients whose local treatment consisted of palliative radiotherapy alone? (3) Are laboratory values used by SORG-MLA, which are not included in METSSS, independently associated with survival after controlling for the predictions made by METSSS?
Between 2010 and 2018, we provided local treatment for 2113 adult patients with skeletal metastases in the extremities at an urban tertiary referral academic medical center using one of two strategies: (1) surgery followed by postoperative radiotherapy or (2) palliative radiotherapy alone. Every patient's survivorship status was ascertained either from their medical records or from the national death registry of the Taiwanese National Health Insurance Administration. After applying a priori designated exclusion criteria, 91% (1920) were analyzed here. Among them, 48% (920) of the patients were female, and the median (IQR) age was 62 years (53 to 70 years). Lung was the most common primary tumor site (41% [782]), and 59% (1128) of patients had other skeletal metastases in addition to the treated lesion(s). In general, the indications for surgery were the presence of a complete pathologic fracture or an impending pathologic fracture, defined as a Mirels score of ≥ 9, in patients with an American Society of Anesthesiologists (ASA) classification of less than or equal to IV who were considered fit for surgery. The indications for radiotherapy were relief of pain, local tumor control, prevention of skeletal-related events, or any combination of the above. In all, 84% (1610) of the patients received palliative radiotherapy alone as local treatment for the target lesion(s), and 16% (310) underwent surgery followed by postoperative radiotherapy. Neither METSSS nor SORG-MLA was used at the point of care to aid clinical decision-making during the treatment period. Survival was retrospectively estimated by these two models to test their potential for providing survival probabilities. We first compared SORG-MLA with METSSS in the entire population and then repeated the comparison in patients who received local treatment with palliative radiation alone.
We assessed model performance by area under the receiver operating characteristic curve (AUROC), calibration analysis, Brier score, and decision curve analysis (DCA). The AUROC measures discrimination, which is the ability to distinguish patients with the event of interest (such as death at a particular time point) from those without. AUROC typically ranges from 0.5 to 1.0, with 0.5 indicating random guessing and 1.0 a perfect prediction; in general, an AUROC of ≥ 0.7 indicates adequate discrimination for clinical use. Calibration refers to the agreement between the predicted outcomes (in this case, survival probabilities) and the actual outcomes, with a perfect calibration curve having an intercept of 0 and a slope of 1. A positive intercept indicates that the prediction model generally underestimates actual survival, and a negative intercept suggests the opposite (overestimation). When comparing models, an intercept closer to 0 typically indicates better calibration. Calibration can also be summarized as log(O:E), the logarithm of the ratio of observed (O) to expected (E) survivors. A log(O:E) > 0 signals underestimation (the observed survival is greater than the predicted survival), and a log(O:E) < 0 indicates the opposite (the observed survival is lower than the predicted survival). A model with a log(O:E) closer to 0 is generally considered better calibrated. The Brier score is the mean squared difference between the model predictions and the observed outcomes, and it ranges from 0 (best prediction) to 1 (worst prediction). The Brier score captures both discrimination and calibration, and it is considered a measure of overall model performance. In Brier score analysis, the "null model" assigns a predicted probability equal to the prevalence of the outcome and represents a model that adds no new information. A prediction model should achieve a Brier score at least lower than the null-model Brier score to be considered useful.
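As a sketch of these performance measures (a generic illustration, not the study's own analysis code), the Brier score, a rank-based AUROC, and the null-model reference can be computed as:

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities and outcomes."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def auroc(y_true, y_prob):
    """Probability that a randomly chosen positive case is ranked above a
    randomly chosen negative case (ties count as half)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# The "null model" predicts the outcome prevalence for every patient;
# a useful model must beat this Brier score.
y = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical outcomes (1 = death)
prevalence = sum(y) / len(y)
null_brier = brier_score(y, [prevalence] * len(y))
```

With a prevalence of 0.5, the null-model Brier score is 0.25; any model worth consulting should score below that while also achieving an AUROC above 0.5.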
The DCA was developed as a method to determine whether using a model to inform treatment decisions would do more good than harm. It plots the net benefit of making decisions based on the model's predictions across all possible risk thresholds (or cost-to-benefit ratios) in relation to the two default strategies of treating all or no patients. The care provider can decide on an acceptable risk threshold for the proposed treatment in an individual and assess the corresponding net benefit to determine whether consulting with the model is superior to adopting the default strategies. Finally, we examined whether laboratory data, which were not included in the METSSS model, would have been independently associated with survival after controlling for the METSSS model's predictions by using the multivariable logistic and Cox proportional hazards regression analyses.
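The net-benefit calculation underlying DCA can be sketched as follows (a generic formulation of the standard net-benefit definition, not the authors' implementation):

```python
def net_benefit(y_true, y_prob, pt):
    """Net benefit of treating patients whose predicted risk is >= pt:
    true positives per patient minus false positives per patient,
    the latter weighted by the odds of the threshold pt (the implied
    cost-to-benefit ratio of the treatment decision)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= pt and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= pt and y == 0)
    return tp / n - (fp / n) * pt / (1 - pt)

def net_benefit_treat_all(y_true, pt):
    """Default strategy of treating everyone; treat-none has net benefit 0."""
    prev = sum(y_true) / len(y_true)
    return prev - (1 - prev) * pt / (1 - pt)
```

Plotting `net_benefit` against `pt` for each model, alongside the treat-all and treat-none curves, yields the decision curves compared in this study.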
Between the two models, only SORG-MLA achieved adequate discrimination (an AUROC of > 0.7) in the entire cohort (of patients treated operatively or with radiation alone) and in the subgroup of patients treated with palliative radiotherapy alone. SORG-MLA outperformed METSSS by a wide margin on discrimination, calibration, and Brier score analyses in not only the entire cohort but also the subgroup of patients whose local treatment consisted of radiotherapy alone. In both the entire cohort and the subgroup, DCA demonstrated that SORG-MLA provided more net benefit compared with the two default strategies (of treating all or no patients) and compared with METSSS when risk thresholds ranged from 0.2 to 0.9 at both 90 days and 1 year, indicating that using SORG-MLA as a decision-making aid was beneficial when a patient's individualized risk threshold for opting for treatment was 0.2 to 0.9. Higher albumin, lower alkaline phosphatase, lower calcium, higher hemoglobin, lower international normalized ratio, higher lymphocytes, lower neutrophils, lower neutrophil-to-lymphocyte ratio, lower platelet-to-lymphocyte ratio, higher sodium, and lower white blood cells were independently associated with better 1-year and overall survival after adjusting for the predictions made by METSSS.
Based on these discoveries, clinicians might choose to consult SORG-MLA instead of METSSS for survival estimation in patients with long-bone metastases presenting for evaluation of local treatment. Basing a treatment decision on the predictions of SORG-MLA could be beneficial when a patient's individualized risk threshold for opting to undergo a particular treatment strategy ranged from 0.2 to 0.9. Future studies might investigate relevant laboratory items when constructing or refining a survival estimation model because these data demonstrated prognostic value independent of the predictions of the METSSS model, and future studies might also seek to keep these models up to date using data from diverse, contemporary patients undergoing both modern operative and nonoperative treatments.
Level III, diagnostic study.
Lee CC, Chen CW, Yen HK, Lin YP, Lai CY, Wang JL, Groot OQ, Janssen SJ, Schwab JH, Hsu FM, Lin WH, et al.
《-》
Serum and urine nucleic acid screening tests for BK polyomavirus-associated nephropathy in kidney and kidney-pancreas transplant recipients.
BK polyomavirus-associated nephropathy (BKPyVAN) occurs when BK polyomavirus (BKPyV) affects a transplanted kidney, leading to an initial injury characterised by cytopathic damage, inflammation, and fibrosis. BKPyVAN may cause permanent loss of graft function and premature graft loss. Early detection gives clinicians an opportunity to intervene by timely reduction in immunosuppression to reduce adverse graft outcomes. Quantitative nucleic acid testing (QNAT) for detection of BKPyV DNA in blood and urine is increasingly used as a screening test as diagnosis of BKPyVAN by kidney biopsy is invasive and associated with procedural risks. In this review, we assessed the sensitivity and specificity of QNAT tests in patients with BKPyVAN.
We assessed the diagnostic test accuracy of blood/plasma/serum BKPyV QNAT and urine BKPyV QNAT for the diagnosis of BKPyVAN after transplantation. As secondary objectives, we also investigated the following sources of heterogeneity: types and quality of studies, era of publication, various thresholds of BKPyV-DNAemia/BKPyV viruria, and variability between assays.
We searched MEDLINE (OvidSP), EMBASE (OvidSP), and BIOSIS, and requested a search of the Cochrane Register of diagnostic test accuracy studies from inception to 13 June 2023. We also searched ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform for ongoing trials.
We included cross-sectional or cohort studies assessing the diagnostic accuracy of two index tests (blood/plasma/serum BKPyV QNAT or urine BKPyV QNAT) for the diagnosis of BKPyVAN, as verified by the reference standard (histopathology). Both retrospective and prospective cohort studies were included. We did not include case reports or case-control studies.
Two authors independently carried out data extraction from each study. We assessed the methodological quality of the included studies using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria. We used the bivariate random-effects model to obtain summary estimates of sensitivity and specificity for the QNAT test with one positivity threshold. In cases where meta-analyses were not possible due to the small number of studies available, we detailed the descriptive evidence and used a summative approach. We explored possible sources of heterogeneity by adding covariates to meta-regression models.
We included 31 relevant studies with a total of 6559 participants in this review. Twenty-six studies included kidney transplant recipients, four studies included kidney and kidney-pancreas transplant recipients, and one study included kidney, kidney-pancreas and kidney-liver transplant recipients. Studies were carried out in South Asia and the Asia-Pacific region (12 studies), North America (9 studies), Europe (8 studies), and South America (2 studies).
Blood/serum/plasma BKPyV QNAT: The diagnostic performance of blood BKPyV QNAT using a common viral load threshold of 10,000 copies/mL was reported in 18 studies (3434 participants). Summary estimates at the 10,000 copies/mL cut-off indicated a pooled sensitivity of 0.86 (95% confidence interval (CI) 0.78 to 0.93) and a pooled specificity of 0.95 (95% CI 0.91 to 0.97). Only a limited number of studies were available to estimate summary performance at individual viral load thresholds other than 10,000 copies/mL. In an indirect comparison of three cut-off values, 1000 copies/mL (9 studies), 5000 copies/mL (6 studies), and 10,000 copies/mL (18 studies), the higher cut-off of 10,000 copies/mL corresponded to higher specificity at the cost of lower sensitivity. Summary estimates for thresholds above 10,000 copies/mL were uncertain, primarily because only a limited number of studies, with wide CIs, contributed to the analysis. Nonetheless, these indirect comparisons should be interpreted cautiously, since differences in study design, patient populations, and methods among the included studies can introduce bias. Analysis of all blood BKPyV QNAT studies across the various blood viral load thresholds (30 studies, 5658 participants, 7 thresholds) indicated that test performance remained robust, with a pooled sensitivity of 0.90 (95% CI 0.85 to 0.94) and specificity of 0.93 (95% CI 0.91 to 0.95). In the multiple cut-off model, which combines the various thresholds into a single curve, the optimal cut-off was around 2000 copies/mL, with a sensitivity of 0.89 (95% CI 0.66 to 0.97) and a specificity of 0.88 (95% CI 0.80 to 0.93). However, because most of the included studies were retrospective and not all participants underwent the reference standard test, there may be a high risk of selection and verification bias.
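The sensitivity-specificity trade-off across viral-load cut-offs can be illustrated with a minimal sketch of one hypothetical study's 2 × 2 table (illustrative only; the review pooled such tables with a bivariate random-effects meta-analysis, not this naive calculation):

```python
def sens_spec(viral_loads, has_bkpyvan, cutoff):
    """Sensitivity and specificity of blood BKPyV QNAT at a viral-load cutoff,
    judged against the biopsy reference standard."""
    tp = sum(1 for v, d in zip(viral_loads, has_bkpyvan) if v >= cutoff and d)
    fn = sum(1 for v, d in zip(viral_loads, has_bkpyvan) if v < cutoff and d)
    tn = sum(1 for v, d in zip(viral_loads, has_bkpyvan) if v < cutoff and not d)
    fp = sum(1 for v, d in zip(viral_loads, has_bkpyvan) if v >= cutoff and not d)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: viral load (copies/mL) and biopsy-proven BKPyVAN status
loads  = [50_000, 15_000, 5_000, 3_000, 200, 800]
biopsy = [True,   True,   True,  False, False, False]

# Raising the cutoff trades sensitivity for specificity
sens_lo, spec_lo = sens_spec(loads, biopsy, 1_000)    # lenient threshold
sens_hi, spec_hi = sens_spec(loads, biopsy, 10_000)   # strict threshold
```

In this toy example the 1000 copies/mL cut-off catches every BKPyVAN case but mislabels one unaffected patient, while the 10,000 copies/mL cut-off does the reverse, mirroring the direction of the indirect comparison reported above.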
Urine BKPyV QNAT: There were insufficient data to thoroughly investigate both the accuracy and the thresholds of urine BKPyV QNAT, resulting in an imprecise estimate of its accuracy based on the available evidence.
There is insufficient evidence to support the use of urine BKPyV QNAT as the primary screening tool for BKPyVAN. The summary estimates of the sensitivity and specificity of the blood/serum/plasma BKPyV QNAT test at a threshold of 10,000 copies/mL for BKPyVAN were 0.86 (95% CI 0.78 to 0.93) and 0.95 (95% CI 0.91 to 0.97), respectively. The multiple cut-off model showed that the optimal cut-off was around 2000 copies/mL, with a sensitivity of 0.89 (95% CI 0.66 to 0.97) and a specificity of 0.88 (95% CI 0.80 to 0.93). While 10,000 copies/mL is the most commonly used cut-off, with good test performance characteristics that support the current recommendations, the results should be interpreted with caution because of the low certainty of the evidence.
Maung Myint T, Chong CH, von Huben A, Attia J, Webster AC, Blosser CD, Craig JC, Teixeira-Pinto A, Wong G, et al.
《Cochrane Database of Systematic Reviews》
Multi-omic molecular characterization and diagnostic biomarkers for occult hepatitis B infection and HBsAg-positive hepatitis B infection.
The pathological and physiological differences between HBsAg-positive HBV infection and occult hepatitis B infection (OBI) are currently unclear. This study aimed to explore the immune microenvironment in the peripheral circulation of OBI patients through integrated proteomic and metabolomic profiling, and to identify molecular biomarkers for the clinical diagnosis of HBsAg-positive HBV infection and OBI.
This research involved collection of plasma from 20 patients with OBI (negative for HBsAg but positive for HBV DNA, with HBV DNA levels < 200 IU/mL), 20 patients with HBsAg-positive HBV infection, and 10 healthy individuals. Mass spectrometry-based detection was used to analyze the proteome, while nuclear magnetic resonance spectroscopy was employed to study the metabolomic phenotypes. Differential molecule analysis, pathway enrichment and functional annotation, as well as weighted correlation network analysis (WGCNA), were conducted to uncover the characteristics of HBV-related liver disease. Diagnostic biomarkers were identified using machine learning algorithms, and their validity was confirmed in a larger cohort using enzyme-linked immunosorbent assay (ELISA).
HBsAg-positive HBV individuals showed higher ALT levels than OBI patients (p = 0.010). The influence of HBV infection on metabolic function and inflammation was evident from the distinct metabolic pathways enriched in the HBsAg-positive HBV and OBI groups. Tissue tracing demonstrated a connection between Kupffer cells and HBsAg-positive HBV infection, and between hepatocytes and OBI. Immune profiling revealed a correlation between CD4 Tem cells, memory B cells, and OBI, enabling a rapid response to infection reactivation through cytokine secretion and antibody production. A diagnostic model constructed by machine learning from the significantly differentially expressed molecules effectively differentiated the HBsAg-positive and OBI groups (AUC values > 0.8). ELISA confirmed the elevation of FGB and FGG in OBI samples, suggesting their potential as biomarkers for distinguishing OBI from HBsAg-positive infection.
The immune microenvironment and metabolic status of HBsAg-positive HBV patients and OBI patients differ significantly. The machine learning-based diagnostic model described herein displayed impressive classification accuracy, offering a non-invasive means of differentiating between OBI and HBsAg-positive HBV infection.
Jiang X, Tian J, Song L, Meng J, Yang Z, Qiao W, Zou J, et al.
《Frontiers in Endocrinology》