-
Highly sensitive detection platform-based diagnosis of oesophageal squamous cell carcinoma in China: a multicentre, case-control, diagnostic study.
Early detection and screening of oesophageal squamous cell carcinoma rely on upper gastrointestinal endoscopy, which is not feasible for population-wide implementation. Tumour marker-based blood tests offer a potential alternative. However, the sensitivity of current clinical protein detection technologies is inadequate for identifying low-abundance circulating tumour biomarkers, leading to poor discrimination between individuals with and without cancer. We aimed to develop a highly sensitive blood test tool to improve detection of oesophageal squamous cell carcinoma.
We designed a detection platform named SENSORS and validated its effectiveness by comparing its performance in detecting the selected serological biomarkers MMP13 and SCC against ELISA and electrochemiluminescence immunoassay (ECLIA). We then developed a SENSORS-based oesophageal squamous cell carcinoma adjunct diagnostic system (with potential applications in screening and triage under clinical supervision) to classify individuals with oesophageal squamous cell carcinoma and healthy controls in a retrospective study including participants (cohort I) from Sun Yat-sen University Cancer Center (SYSUCC; Guangzhou, China), Henan Cancer Hospital (HNCH; Zhengzhou, China), and Cancer Hospital of Shantou University Medical College (CHSUMC; Shantou, China). The inclusion criteria were age 18 years or older, pathologically confirmed primary oesophageal squamous cell carcinoma, and no cancer treatments before serum sample collection. Participants without oesophageal-related diseases were recruited from the health examination department as the control group. The SENSORS-based diagnostic system is based on a multivariable logistic regression model that uses the detection values of SENSORS as the input and outputs a risk score for the predicted likelihood of oesophageal squamous cell carcinoma. We further evaluated the clinical utility of the system in an independent prospective multicentre study with different participants selected from the same three institutions. Patients with newly diagnosed oesophageal-related diseases without previous cancer treatment were enrolled. The inclusion criteria for healthy controls were no obvious abnormalities in routine blood and tumour marker tests, no oesophageal-associated diseases, and no history of cancer. Finally, we assessed whether classification could be improved by integrating machine-learning algorithms with the system, which combined baseline clinical characteristics, epidemiological risk factors, and serological tumour marker concentrations. Retrospective SYSUCC cohort I (randomly assigned [7:3] to a training set and an internal validation set) and three prospective validation sets (SYSUCC cohort II [internal validation], HNCH cohort II [external validation], and CHSUMC cohort II [external validation]) were used in this step. Six machine-learning algorithms were compared (least absolute shrinkage and selection operator regression, ridge regression, random forest, logistic regression, support vector machine, and neural network), and the best-performing algorithm was chosen as the final prediction model. Performance of SENSORS and the SENSORS-based diagnostic system was primarily assessed using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC).
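As a rough illustration of the modelling approach described above (not the authors' code), the sketch below fits a multivariable logistic regression on two serum markers and outputs a risk score for each participant; the file name, column names, and 7:3 split are assumptions made purely for the example.

```python
# Minimal sketch, assuming a table of SENSORS detection values with hypothetical
# columns "MMP13", "SCC", and a binary "label" (1 = oesophageal SCC, 0 = healthy control).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("sensors_cohort.csv")                       # hypothetical input file
X, y = df[["MMP13", "SCC"]], df["label"]

# 7:3 split, mirroring the training / internal-validation split described above
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
risk_score = model.predict_proba(X_val)[:, 1]                # predicted likelihood of disease
print("validation AUC:", roc_auc_score(y_val, risk_score))
```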
Between Oct 1, 2017, and April 30, 2020, 1051 participants were included in the retrospective study. In the prospective diagnostic study, 924 participants were included from April 2, 2022, to Feb 2, 2023. Compared with ELISA (108·90 pg/mL) and ECLIA (41·79 pg/mL), SENSORS (243·03 fg/mL) showed 448-fold and 172-fold improvements in detection sensitivity, respectively. In the three retrospective validation sets, the SENSORS-based diagnostic system achieved AUCs of 0·95 (95% CI 0·90-0·99) in the SYSUCC internal validation set, 0·93 (0·89-0·97) in the HNCH external validation set, and 0·98 (0·97-1·00) in the CHSUMC external validation set, sensitivities of 87·1% (79·3-92·3), 98·6% (94·4-99·8), and 93·5% (88·1-96·7), and specificities of 88·9% (75·2-95·8), 74·6% (61·3-84·6), and 92·1% (81·7-97·0), respectively, successfully distinguishing between patients with oesophageal squamous cell carcinoma and healthy controls. Additionally, in the three prospective validation cohorts, it yielded sensitivities of 90·9% (95% CI 86·1-94·2) for SYSUCC, 84·8% (76·1-90·8) for HNCH, and 95·2% (85·6-98·7) for CHSUMC. Of the six machine-learning algorithms compared, the random forest model showed the best performance. A feature selection step identified the five features contributing most to predictions (SCC, age, MMP13, CEA, and NSE), and a simplified random forest model using these five features further improved classification, achieving sensitivities of 98·2% (95% CI 93·2-99·7) in the internal validation set from retrospective SYSUCC cohort I, 94·1% (89·9-96·7) in SYSUCC prospective cohort II, 88·6% (80·5-93·7) in HNCH prospective cohort II, and 98·4% (90·2-99·9) in CHSUMC prospective cohort II.
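A comparable sketch (again illustrative only, with hypothetical file and column names) shows how a simplified random forest restricted to the five reported features could be evaluated for AUC, sensitivity, and specificity; the 0.5 decision threshold is an assumption, not a value reported by the study.

```python
# Simplified random forest on the five selected features (SCC, age, MMP13, CEA, NSE).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

FEATURES = ["SCC", "age", "MMP13", "CEA", "NSE"]             # five selected features

train = pd.read_csv("training_set.csv")                      # hypothetical files
val = pd.read_csv("validation_set.csv")

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(train[FEATURES], train["label"])

prob = rf.predict_proba(val[FEATURES])[:, 1]
pred = (prob >= 0.5).astype(int)                             # illustrative threshold
tn, fp, fn, tp = confusion_matrix(val["label"], pred).ravel()
print("AUC:", roc_auc_score(val["label"], prob))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```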
The SENSORS system facilitates highly sensitive detection of oesophageal squamous cell carcinoma tumour biomarkers, overcoming the limitations of detecting low-abundance circulating proteins, and could substantially improve oesophageal squamous cell carcinoma diagnostics. This method could act as a minimally invasive screening tool, potentially reducing the need for unnecessary endoscopies.
The National Key R&D Program of China, the National Natural Science Foundation of China, and the Enterprises Joint Fund-Key Program of Guangdong Province.
For the Chinese translation of the abstract see Supplementary Materials section.
Wang Y, Xing S, Xu YW, Xu QX, Ji MF, Peng YH, Wu YX, Wu M, Xue N, Zhang B, Xie SH, Zhu RD, Ou XY, Huang Q, Tian BY, Li HL, Jiang Y, Yao XB, Li JP, Ling L, Cao SM, Zhong Q, Liu WL, Zeng MS
... -
《The Lancet Digital Health》
-
Defining the optimum strategy for identifying adults and children with coeliac disease: systematic review and economic modelling.
Elwenspoek MM, Thom H, Sheppard AL, Keeney E, O'Donnell R, Jackson J, Roadevin C, Dawson S, Lane D, Stubbs J, Everitt H, Watson JC, Hay AD, Gillett P, Robins G, Jones HE, Mallett S, Whiting PF
... -
《-》
-
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Survival estimation for patients with symptomatic skeletal metastases ideally should be made before a type of local treatment has already been determined. Currently available survival prediction tools, however, were generated using data from patients treated either operatively or with local radiation alone, raising concerns about whether they would generalize well to all patients presenting for assessment. The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA), trained with institution-based data of surgically treated patients, and the Metastases location, Elderly, Tumor primary, Sex, Sickness/comorbidity, and Site of radiotherapy model (METSSS), trained with registry-based data of patients treated with radiotherapy alone, are two of the most recently developed survival prediction models, but they have not been tested on patients whose local treatment strategy is not yet decided.
(1) Which of these two survival prediction models performed better in a mixed cohort made up of both patients who received local treatment with surgery followed by radiotherapy and patients who had radiation alone for symptomatic bone metastases? (2) Which model performed better among patients whose local treatment consisted of only palliative radiotherapy? (3) Are laboratory values used by SORG-MLA, which are not included in METSSS, independently associated with survival after controlling for predictions made by METSSS?
Between 2010 and 2018, we provided local treatment for 2113 adult patients with skeletal metastases in the extremities at an urban tertiary referral academic medical center using one of two strategies: (1) surgery followed by postoperative radiotherapy or (2) palliative radiotherapy alone. Every patient's survivorship status was ascertained either by their medical records or the national death registry from the Taiwanese National Health Insurance Administration. After applying a priori designated exclusion criteria, 91% (1920) were analyzed here. Among them, 48% (920) of the patients were female, and the median (IQR) age was 62 years (53 to 70 years). Lung was the most common primary tumor site (41% [782]), and 59% (1128) of patients had other skeletal metastases in addition to the treated lesion(s). In general, the indications for surgery were the presence of a complete pathologic fracture or an impending pathologic fracture, defined as having a Mirels score of ≥ 9, in patients with an American Society of Anesthesiologists (ASA) classification of less than or equal to IV and who were considered fit for surgery. The indications for radiotherapy were relief of pain, local tumor control, prevention of skeletal-related events, and any combination of the above. In all, 84% (1610) of the patients received palliative radiotherapy alone as local treatment for the target lesion(s), and 16% (310) underwent surgery followed by postoperative radiotherapy. Neither METSSS nor SORG-MLA was used at the point of care to aid clinical decision-making during the treatment period. Survival was retrospectively estimated by these two models to test their potential for providing survival probabilities. We first compared SORG-MLA with METSSS in the entire population. Then, we repeated the comparison in patients who received local treatment with palliative radiation alone. We assessed model performance by area under the receiver operating characteristic curve (AUROC), calibration analysis, Brier score, and decision curve analysis (DCA). The AUROC measures discrimination, which is the ability to distinguish patients with the event of interest (such as death at a particular time point) from those without. AUROC typically ranges from 0.5 to 1.0, with 0.5 indicating random guessing and 1.0 a perfect prediction, and in general, an AUROC of ≥ 0.7 indicates adequate discrimination for clinical use. Calibration refers to the agreement between the predicted outcomes (in this case, survival probabilities) and the actual outcomes, with a perfect calibration curve having an intercept of 0 and a slope of 1. A positive intercept indicates that the actual survival is generally underestimated by the prediction model, and a negative intercept suggests the opposite (overestimation). When comparing models, an intercept closer to 0 typically indicates better calibration. Calibration can also be summarized as log(O:E), the logarithm of the ratio of observed (O) to expected (E) survivors. A log(O:E) > 0 signals an underestimation (the observed survival is greater than the predicted survival), and a log(O:E) < 0 indicates the opposite (the observed survival is lower than the predicted survival). A model with a log(O:E) closer to 0 is generally considered better calibrated. The Brier score is the mean squared difference between the model predictions and the observed outcomes, and it ranges from 0 (best prediction) to 1 (worst prediction).
The Brier score captures both discrimination and calibration, and it is considered a measure of overall model performance. In Brier score analysis, the "null model" assigns a predicted probability equal to the prevalence of the outcome and represents a model that adds no new information. A prediction model should achieve a Brier score lower than the null-model Brier score to be considered useful. The DCA was developed as a method to determine whether using a model to inform treatment decisions would do more good than harm. It plots the net benefit of making decisions based on the model's predictions across all possible risk thresholds (or cost-to-benefit ratios) in relation to the two default strategies of treating all or no patients. The care provider can decide on an acceptable risk threshold for the proposed treatment in an individual and assess the corresponding net benefit to determine whether consulting with the model is superior to adopting the default strategies. Finally, we examined whether laboratory data, which were not included in the METSSS model, were independently associated with survival after controlling for the METSSS model's predictions, using multivariable logistic and Cox proportional hazards regression analyses.
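For readers unfamiliar with these metrics, the following sketch (placeholder data, not study data) shows one common way to compute AUROC, the Brier score, log(O:E), a calibration intercept and slope, and decision-curve net benefit from predicted survival probabilities.

```python
# Illustrative metric calculations on placeholder data; y = observed 1-year survival
# (1 = survived, 0 = died), p = model-predicted probability of surviving 1 year.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

y = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])                          # placeholder outcomes
p = np.array([0.8, 0.3, 0.7, 0.9, 0.4, 0.6, 0.2, 0.5, 0.85, 0.75])    # placeholder predictions

print("AUROC:", roc_auc_score(y, p))                 # discrimination
print("Brier score:", brier_score_loss(y, p))        # overall performance (0 best, 1 worst)
print("log(O:E):", np.log(y.sum() / p.sum()))        # > 0 means survival was underestimated

# Calibration intercept and slope via logistic recalibration on the logit of p
logit_p = np.log(p / (1 - p)).reshape(-1, 1)
recal = LogisticRegression().fit(logit_p, y)
print("calibration slope:", recal.coef_[0][0], "intercept:", recal.intercept_[0])

# Decision-curve net benefit at a chosen risk threshold pt, computed on the event
# that would trigger treatment (here death, so event = 1 - y and risk = 1 - p).
def net_benefit(event, risk, pt):
    treat = risk >= pt
    tp = np.sum(treat & (event == 1))
    fp = np.sum(treat & (event == 0))
    n = len(event)
    return tp / n - fp / n * pt / (1 - pt)

print("net benefit at pt = 0.5:", net_benefit(1 - y, 1 - p, 0.5))
```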
Between the two models, only SORG-MLA achieved adequate discrimination (an AUROC of > 0.7) in the entire cohort (of patients treated operatively or with radiation alone) and in the subgroup of patients treated with palliative radiotherapy alone. SORG-MLA outperformed METSSS by a wide margin on discrimination, calibration, and Brier score analyses in not only the entire cohort but also the subgroup of patients whose local treatment consisted of radiotherapy alone. In both the entire cohort and the subgroup, DCA demonstrated that SORG-MLA provided more net benefit compared with the two default strategies (of treating all or no patients) and compared with METSSS when risk thresholds ranged from 0.2 to 0.9 at both 90 days and 1 year, indicating that using SORG-MLA as a decision-making aid was beneficial when a patient's individualized risk threshold for opting for treatment was 0.2 to 0.9. Higher albumin, lower alkaline phosphatase, lower calcium, higher hemoglobin, lower international normalized ratio, higher lymphocytes, lower neutrophils, lower neutrophil-to-lymphocyte ratio, lower platelet-to-lymphocyte ratio, higher sodium, and lower white blood cells were independently associated with better 1-year and overall survival after adjusting for the predictions made by METSSS.
Based on these findings, clinicians might choose to consult SORG-MLA instead of METSSS for survival estimation in patients with long-bone metastases presenting for evaluation of local treatment. Basing a treatment decision on the predictions of SORG-MLA could be beneficial when a patient's individualized risk threshold for opting to undergo a particular treatment strategy ranges from 0.2 to 0.9. Future studies might investigate relevant laboratory items when constructing or refining a survival estimation model, because these data demonstrated prognostic value independent of the predictions of the METSSS model, and they might also seek to keep these models up to date using data from diverse, contemporary patients undergoing both modern operative and nonoperative treatments.
Level III, diagnostic study.
Lee CC, Chen CW, Yen HK, Lin YP, Lai CY, Wang JL, Groot OQ, Janssen SJ, Schwab JH, Hsu FM, Lin WH
... -
《-》
-
Mortality impact, risks, and benefits of general population screening for ovarian cancer: the UKCTOCS randomised controlled trial.
Menon U, Gentry-Maharaj A, Burnell M, Ryan A, Kalsi JK, Singh N, Dawnay A, Fallowfield L, McGuire AJ, Campbell S, Skates SJ, Parmar M, Jacobs IJ
... -
《-》
-
The effect of sample site and collection procedure on identification of SARS-CoV-2 infection.
Sample collection is a key driver of accuracy in the diagnosis of SARS-CoV-2 infection. Viral load may vary at different anatomical sampling sites and accuracy may be compromised by difficulties obtaining specimens and the expertise of the person taking the sample. It is important to optimise sampling accuracy within cost, safety and accessibility constraints.
To compare the sensitivity of different sampling collection sites and methods for the detection of current SARS-CoV-2 infection with any molecular or antigen-based test.
Electronic searches of the Cochrane COVID-19 Study Register and the COVID-19 Living Evidence Database from the University of Bern (which includes daily updates from PubMed and Embase and preprints from medRxiv and bioRxiv) were undertaken on 22 February 2022. We included independent evaluations from national reference laboratories, FIND and the Diagnostics Global Health website. We did not apply language restrictions.
We included studies of symptomatic or asymptomatic people with suspected SARS-CoV-2 infection undergoing testing. We included studies of any design that compared results from different sample types (anatomical location, operator, collection device) collected from the same participant within a 24-hour period.
Within a sample pair, we defined a reference sample and an index sample collected from the same participant within the same clinical encounter (within 24 hours). Where the sample comparison involved different anatomical sites, the reference standard was defined as a nasopharyngeal or combined naso/oropharyngeal sample collected into the same sample container, and the index sample as the alternative anatomical site. Where the sample comparison was concerned with differences in the sample collection method from the same site, we defined the reference sample as that closest to standard practice for that sample type. Where the sample pair comparison was concerned with differences in the personnel collecting the sample, the sample collected by the more skilled or experienced operator was considered the reference sample. Two review authors independently assessed the risk of bias and applicability concerns using the QUADAS-2 and QUADAS-C checklists, tailored to this review. We present estimates of the difference in sensitivity within a pair (index sample sensitivity (%) minus reference sample sensitivity (%)) and as an average across studies for each index sampling method, using forest plots and tables. We examined heterogeneity between studies according to population (age, symptom status) and index sample (time post-symptom onset, operator expertise, use of transport medium) characteristics.
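The core calculation behind these comparisons is simple; the sketch below (placeholder counts, not data from the review) computes the sensitivity of each sample in a pair among infected participants and their difference in percentage points, using the sign convention of the results that follow (negative values mean the index sample is less sensitive than the reference sample).

```python
# Hypothetical counts for one sample-pair evaluation (not data from the review).
n_infected = 200        # participants with confirmed SARS-CoV-2 in the comparison
ref_positive = 180      # infected participants whose reference (nasopharyngeal) sample tested positive
index_positive = 160    # infected participants whose index (e.g. saliva) sample tested positive

sens_ref = 100 * ref_positive / n_infected      # sensitivity of the reference sample, %
sens_index = 100 * index_positive / n_infected  # sensitivity of the index sample, %
diff = sens_index - sens_ref                    # negative: index sample is less sensitive
print(f"sensitivity difference: {diff:+.1f} percentage points")
```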
This review includes 106 studies reporting 154 evaluations and 60,523 sample pair comparisons, of which 11,045 had SARS-CoV-2 infection. Ninety evaluations were of saliva samples, 37 nasal, seven oropharyngeal, six gargle, six oral, and four combined nasal/oropharyngeal samples. Four evaluations were of the effect of operator expertise on the accuracy of three different sample types. The majority of included evaluations (146) used molecular tests, of which 140 used RT-PCR (reverse transcription polymerase chain reaction). Eight evaluations were of nasal samples used with Ag-RDTs (rapid antigen tests). The majority of studies were conducted in Europe (35/106, 33%) or the USA (27%), and in dedicated COVID-19 testing clinics or ambulatory hospital settings (53%). Targeted screening or contact tracing accounted for only 4% of evaluations. Where reported, the majority of evaluations were of adults (91/154, 59%); 28 (18%) were in mixed populations, and only seven (4%) were in children. The median prevalence of confirmed SARS-CoV-2 was 23% (interquartile range [IQR] 13%-40%). Risk of bias and applicability assessment were hampered by poor reporting in 77% and 65% of included studies, respectively. Risk of bias was low across all domains in only 3% of evaluations; concerns arose from inappropriate inclusion or exclusion criteria, unclear recruitment, lack of blinding, nonrandomised sampling order, or differences in testing kit within a sample pair. Sixty-eight percent of evaluation cohorts were judged as being at high or unclear applicability concern, either because the prevalence of SARS-CoV-2 infection in study populations was inflated by selectively including individuals with confirmed PCR-positive samples or because there was insufficient detail to allow replication of sample collection.
When used with RT-PCR:
• There was no evidence of a difference in sensitivity between gargle and nasopharyngeal samples (on average -1 percentage point, 95% CI -5 to +2, based on six evaluations, 2138 sample pairs, of which 389 had SARS-CoV-2).
• There was no evidence of a difference in sensitivity between saliva collected from the deep throat and nasopharyngeal samples (on average +10 percentage points, 95% CI -1 to +21, based on 2192 sample pairs, of which 730 had SARS-CoV-2).
• There was evidence that saliva collected by spitting, drooling, or salivating was, on average, 12 percentage points less sensitive than nasopharyngeal samples (95% CI -16 to -8, based on 27,253 sample pairs, of which 4636 had SARS-CoV-2). We did not find any evidence of a difference in sensitivity between saliva collected by spitting, drooling, or salivating (sensitivity differences versus nasopharyngeal samples ranged from -13 percentage points [spit] to -21 percentage points [salivate]).
• Nasal samples (anterior and mid-turbinate collection combined) were, on average, 12 percentage points less sensitive than nasopharyngeal samples (95% CI -17 to -7, based on 9291 sample pairs, of which 1485 had SARS-CoV-2). We did not find any evidence of a difference in sensitivity between nasal samples collected from the mid-turbinates (3942 sample pairs) and those collected from the anterior nares (8272 sample pairs).
• There was evidence that oropharyngeal samples were, on average, 17 percentage points less sensitive than nasopharyngeal samples (95% CI -29 to -5, based on seven evaluations, 2522 sample pairs, of which 511 had SARS-CoV-2).
A much smaller volume of evidence was available for combined nasal/oropharyngeal samples and oral samples. Age, symptom status, and use of transport media do not appear to affect the sensitivity of saliva or nasal samples.
When used with Ag-RDTs:
• There was no evidence of a difference in sensitivity between nasal and nasopharyngeal samples (on average 0 percentage points, 95% CI -0.2 to +0.2, based on 3688 sample pairs, of which 535 had SARS-CoV-2).
When used with RT-PCR, there is no evidence of a difference in sensitivity between self-collected gargle or deep-throat saliva samples and nasopharyngeal samples collected by healthcare workers. Use of these alternative, self-collected sample types has the potential to reduce cost and discomfort and improve the safety of sampling by reducing the risk of transmission from aerosol spread, which occurs as a result of coughing and gagging during the nasopharyngeal or oropharyngeal sample collection procedure. This may, in turn, improve access to and uptake of testing. Other types of saliva, nasal, oral, and oropharyngeal samples are, on average, less sensitive compared to healthcare worker-collected nasopharyngeal samples, and it is unlikely that sensitivities of this magnitude would be acceptable for confirmation of SARS-CoV-2 infection with RT-PCR. When used with Ag-RDTs, there is no evidence of a difference in sensitivity between nasal samples and healthcare worker-collected nasopharyngeal samples for detecting SARS-CoV-2. The implications of this for self-testing are unclear, as evaluations did not report whether nasal samples were self-collected or collected by healthcare workers. Further research is needed in asymptomatic individuals, in children, and with Ag-RDTs, and to investigate the effect of operator expertise on accuracy. Quality assessment of the evidence base underpinning these conclusions was restricted by poor reporting. There is a need for further high-quality studies, adhering to reporting standards for test accuracy studies.
Davenport C, Arevalo-Rodriguez I, Mateos-Haro M, Berhane S, Dinnes J, Spijker R, Buitrago-Garcia D, Ciapponi A, Takwoingi Y, Deeks JJ, Emperador D, Leeflang MMG, Van den Bruel A, Cochrane COVID-19 Diagnostic Test Accuracy Group
... -
《Cochrane Database of Systematic Reviews》