Self-citation rate: 11%
Citations: 28878
Acceptance rate: no data available
Review period: 1
Page charges: no data available
Papers by Chinese authors: 6
Submission guidelines / journal overview:
Published monthly, the Journal of Clinical Epidemiology provides timely, authoritative studies developed from the interplay of clinical medicine, epidemiology, biostatistics and pharmacoepidemiology. Articles are oriented toward methodology, clinical research or both. A special section, Pharmacoepidemiology Reports, is dedicated to the rapid publication of articles on the clinical epidemiologic investigation of pharmaceutical agents.
-
Updating methods for artificial intelligence-based clinical prediction models: a scoping review.
This scoping review aimed to give an overview of methods for updating artificial intelligence (AI)-based clinical prediction models based on new data. We comprehensively searched Scopus and Embase up to August 2022 for articles that addressed developments, descriptions, or evaluations of prediction model updating methods. We specifically focused on articles in the medical domain involving AI-based prediction models that were updated based on new data, excluding regression-based updating methods as these have been extensively discussed elsewhere. We categorized and described the identified methods used to update the AI-based prediction models, as well as the use cases in which they were used.

We included 78 articles. The majority of the included articles discussed updating for neural network methods (93.6%), with medical images as input data (65.4%). In many articles (51.3%), existing pretrained models for broad tasks were updated to perform specialized clinical tasks. Other common reasons for model updating were to address changes in the data over time and cross-center differences; however, more unique use cases were also identified, such as updating a model from a broad population to a specific individual. We categorized the identified model updating methods into four categories: neural network-specific methods (described in 92.3% of the articles), ensemble-specific methods (2.5%), model-agnostic methods (9.0%), and other (1.3%). Variations of neural network-specific methods were further categorized based on (1) the part of the original neural network that is kept, (2) whether and how the original neural network is extended with new parameters, and (3) to what extent the original neural network parameters are adjusted to the new data. The most frequently occurring method (n = 30) involved selecting the first layer(s) of an existing neural network, appending new, randomly initialized layers, and then optimizing the entire neural network.

We identified many ways to adjust or update AI-based prediction models based on new data, within a large variety of use cases. Updating methods for AI-based prediction models other than neural networks (eg, random forest) appear to be underexplored in clinical prediction research.

AI-based prediction models are increasingly used in health care, helping clinicians with diagnosing diseases, guiding treatment decisions, and informing patients. However, these prediction models do not always work well when applied to hospitals, patient populations, or times different from those used to develop the models. Developing new models for every situation is neither practical nor desirable, as it wastes resources, time, and existing knowledge. A more efficient approach is to adjust existing models to new contexts ('updating'), but there is limited guidance on how to do this for AI-based clinical prediction models. To address this, we reviewed 78 studies in detail to understand how researchers are currently updating AI-based clinical prediction models, and the types of situations in which these updating methods are used. Our findings provide a comprehensive overview of the available methods to update existing models. This is intended to serve as guidance and inspiration for researchers. Ultimately, this can lead to better reuse of existing models and improve the quality and efficiency of AI-based prediction models in health care.
Citations: - | Published: -
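The review above describes its most frequently reported updating pattern as keeping the first layer(s) of an existing neural network, appending new, randomly initialized layers, and then optimizing the entire network. The snippet below is a minimal sketch of that pattern only; it assumes a pretrained torchvision ResNet-18 as a stand-in "existing" model and a hypothetical binary clinical outcome, and is not taken from any of the reviewed studies.

```python
# Illustrative sketch (not from the review): update a pretrained image model
# for a new clinical task by keeping its early layers, appending new,
# randomly initialized layers, and then fine-tuning the whole network.
import torch
import torch.nn as nn
from torchvision import models

# Assumption: a pretrained ResNet-18 stands in for "an existing neural network".
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Keep the original layers up to (but not including) the classification head.
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])

# Append new, randomly initialized layers for the specialized clinical task.
num_new_classes = 2  # hypothetical binary clinical outcome
head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(backbone.fc.in_features, 128),
    nn.ReLU(),
    nn.Linear(128, num_new_classes),
)
model = nn.Sequential(feature_extractor, head)

# Optimize the entire network on the new data (all parameters trainable).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def update_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of new-domain data."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```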
-
The EORTC QLU-C10D distinguished better between cancer patients and the general population than PROPr and EQ-5D-5L in a cross-sectional study.
Health state utility (HSU) instruments for calculating quality-adjusted life years, such as the European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Utility - Core 10 Dimensions (QLU-C10D), derived from the EORTC QLQ-C30 questionnaire, the Patient-Reported Outcome Measurement Information System (PROMIS) preference score (PROPr), and the EuroQoL-5-Dimensions-5-Levels (EQ-5D-5L), yield different HSU values due to different modeling and different underlying descriptive scales. For example, the QLU-C10D includes cancer-relevant dimensions such as nausea. This study aimed to investigate how these differences in descriptive scales contribute to differences in HSU scores by comparing scores of cancer patients receiving chemotherapy to those of the general population.

EORTC QLU-C10D, PROPr, and EQ-5D-5L scores were obtained for a convenience sample of 484 outpatients of the Department of Oncology, Charité - Universitätsmedizin Berlin, Germany. Convergent and known-groups validity were assessed using Pearson's correlation and intraclass correlation coefficients (ICC). We assessed each descriptive dimension score's discriminatory power by comparing patients' scores with those of the general population (n > 1000), using effect size (ES; Cohen's d) and the area under the curve (AUC).

The mean scores of the QLU-C10D (0.64; 95% CI 0.62-0.67), PROPr (0.38; 95% CI 0.36-0.40), and EQ-5D-5L (0.72; 95% CI 0.70-0.75) differed significantly, irrespective of sociodemographic factors, condition, or treatment. Conceptually similar descriptive scores obtained from the HSU instruments showed varying degrees of discrimination, in terms of ES and AUC, between patients and the general population. The QLU-C10D and its dimensions showed the largest ES and AUC. The QLU-C10D and its domains distinguished best between the health states of the two populations, compared with the PROPr and EQ-5D-5L.

As the EORTC Core Quality of Life Questionnaire (QLQ-C30) is widely used in clinical practice, its data are available for economic evaluation. The assessment of dimensions of health-related quality of life (HRQoL), such as physical functioning or depression, is important to cancer patients and physicians for treatment and side-effect monitoring. Descriptive HRQoL is measured by patient-reported outcome measures (PROMs). The European Organisation for Research and Treatment of Cancer (EORTC) QLQ-C30 questionnaire and the Patient-Reported Outcome Measurement Information System (PROMIS) are the most common PROMs in clinical HRQoL assessment. In recent years, multidimensional preference-based HRQoL measures were developed using these PROMs as dimensions. These preference-based measures, also referred to as health state utility (HSU) scores, are needed for economic evaluations of treatments. The QLQ-C30's corresponding HSU score is the Quality of Life Utility measure - Core 10 Dimensions (QLU-C10D), and PROMIS' HSU score is the PROMIS preference score (PROPr). Both new HSU scores are frequently compared with the well-established EuroQoL-5-Dimensions-5-Levels (EQ-5D-5L). They all conceptualize HSU differently, as they assess different dimensions of HRQoL and use different models. Both the QLU-C10D and the PROPr have thus shown systematic differences from the EQ-5D-5L, but these were largely consistent across subgroups. Convergent and known-groups validity can therefore be considered established.

However, as HSU is a multidimensional construct, it remains unclear how differences in its dimensions, for example, its descriptive scales, contribute to differences in HSU scores. This is important, as it is the descriptive scales that measure clinical HRQoL. We investigated this question by assessing each dimension's ability to distinguish between a sample of 484 cancer patients and the German general population. We could show that this ability depended on the domain: for depression, for example, the QLU-C10D and EQ-5D-5L distinguished more clearly, while for physical function, PROMIS did. Overall, the QLU-C10D and its dimensions distinguished best between cancer patients and the general population.
Citations: - | Published: -
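The study above quantifies how well each dimension score separates patients from the general population using Cohen's d and the AUC. The snippet below is a minimal sketch of those two calculations on simulated scores; the group means, standard deviations, and sample sizes are illustrative assumptions, not the study data.

```python
# Illustrative sketch (simulated data, not the study's): quantify how well a
# single HRQoL dimension score separates patients from the general population
# using effect size (Cohen's d) and the area under the ROC curve (AUC).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
patients = rng.normal(loc=55.0, scale=10.0, size=484)      # hypothetical dimension scores
general_pop = rng.normal(loc=50.0, scale=10.0, size=1000)  # hypothetical reference sample

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation."""
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * np.var(a, ddof=1) + (n_b - 1) * np.var(b, ddof=1))
                        / (n_a + n_b - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

# AUC: probability that a randomly chosen patient scores higher than a
# randomly chosen member of the general population on this dimension.
scores = np.concatenate([patients, general_pop])
labels = np.concatenate([np.ones(len(patients)), np.zeros(len(general_pop))])

print(f"Cohen's d: {cohens_d(patients, general_pop):.2f}")
print(f"AUC: {roc_auc_score(labels, scores):.2f}")
```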
-
Prospective registration of trials: where we are, why, and how we could get better.
Transparent trial conduct requires prospective registration of a randomized controlled clinical trial (RCT) before the enrollment of the first participant. We aimed to (1) estimate the proportion of RCTs that are prospectively registered and analyze the time trends and factors linked to registration timing, and (2) assess the reasons for nonadherence to prospective registration and explore ways to improve compliance. We studied trials published in rheumatology as a case study. We searched for RCTs in rheumatology published between 2009 and 2022. We conducted a multivariable logistic regression to identify factors associated with prospective trial registration. We sent a survey to investigators of trials that were not prospectively registered, asking about reasons for nonadherence and potential solutions.

We identified 1093 RCTs; 453 (41.4%) were not prospectively registered. Of these, 130 (11.9%) were never registered and 323 (29.5%) were retrospectively registered. Prospective registration increased by 3% annually (P < .001), from 13.3% (2 of 15) of trials in 2009 to 73.2% (112 of 153) in 2022. In journals supporting the International Committee of Medical Journal Editors recommendations, 16% of trials published in 2022 were not prospectively registered. Prospective registration was associated with a larger sample size, multinational recruitment, and publication in higher-impact journals. Investigators reported lack of knowledge or organizational problems as key reasons for retrospective registration. They suggested linking ethical approval to trial registration to ensure prospective registration.

Despite significant improvement, adherence to prospective registration remains unsatisfactory in rheumatology. Targeted strategies for journal editors, health-care professionals, and researchers may help improve trial registration.

Randomized controlled clinical trials (RCTs) are a type of research in which people are randomly assigned to different treatments to see which works best. These treatments can include drugs, surgery, medical devices, or changes in behavior. The results obtained in RCTs are essential for the advance of medicine and for making medical decisions. RCTs need to be conducted in a transparent way to provide trustworthy information and avoid misleading findings. A key aspect of transparency is registering the study details and plan in a public repository before the trial starts. This not only requires researchers to plan their study in advance but also enables the scientific community to track any change in how the study is conducted. Although registration of RCTs is recommended, it is not compulsory. Questions remain about researchers' compliance with prospective registration and the factors that may affect it. In the present study, we systematically studied the registration practices of rheumatology RCTs published between 2009 and 2022. We reviewed how the trials were registered and used a statistical method (multivariable logistic regression) to determine which factors were linked to whether a trial was registered before it started. We also sent a questionnaire to researchers who either did not register or retrospectively registered their study, asking for their suggestions on how to improve adherence to proper registration practices.

We found 1093 trials, of which 453 (41.4%) were not registered before they started. Among these, 130 (11.9%) were never registered and 323 (29.5%) were retrospectively registered. Trials with a larger number of participants, those involving recruiting centers from multiple countries, and those published in more prestigious journals were more likely to be registered in advance and to adhere to transparency recommendations. Researchers who did not register their trial before it started reported lack of awareness and organizational issues as the main reasons for not following these recommendations. They suggested that connecting ethical approval to trial registration could be a solution for ensuring adequate registration. We found that even though trial registration has improved in recent years, a considerable number of rheumatology trials are still not registered before they start. Based on our findings, we think that focusing on strategies for journal editors, health-care professionals, and researchers could help increase the number of properly registered trials.
Citations: - | Published: -
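The study above relates prospective registration (yes/no) to trial characteristics with a multivariable logistic regression. Below is a minimal sketch of that kind of analysis on a hypothetical data set; the variable names (log sample size, multinational recruitment, journal impact factor) and values are illustrative assumptions, not the authors' analysis code or data.

```python
# Illustrative sketch (hypothetical data, not the study's): a multivariable
# logistic regression relating prospective registration (1 = yes, 0 = no)
# to trial characteristics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1093  # number of trials, as in the review above
trials = pd.DataFrame({
    "prospective": rng.integers(0, 2, size=n),             # simulated outcome
    "log_sample_size": np.log(rng.integers(20, 2000, size=n)),
    "multinational": rng.integers(0, 2, size=n),
    "impact_factor": rng.gamma(shape=2.0, scale=3.0, size=n),
})

# Fit the model and report odds ratios with 95% confidence intervals.
model = smf.logit(
    "prospective ~ log_sample_size + multinational + impact_factor",
    data=trials,
).fit(disp=0)
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```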
-
A meta-epidemiological analysis of post-hoc comparisons and primary endpoint interpretability among randomized noncomparative trials in clinical medicine.
Citations: - | Published: -
-
Resource use and costs of investigator-sponsored randomised clinical trials in Switzerland, Germany and the United Kingdom: a meta-research study.
Citations: - | Published: -