Advancements in AI Medical Education: Assessing ChatGPT's Performance on USMLE-Style Questions Across Topics and Difficulty Levels.

From PubMed

Authors:

Penny P, Bane R, Riddle V


Abstract:

Background

AI language models have been shown to achieve a passing score on certain imageless diagnostic tests of the USMLE, yet they have failed certain specialty-specific examinations. This suggests that AI ability may differ by medical topic or question difficulty. This study evaluates the performance of two versions of ChatGPT, a popular language-based AI model, on USMLE-style questions across various medical topics.

Methods

A total of 900 USMLE-style multiple-choice questions, divided equally among 18 topics and categorized by exam type (Step 1 vs. Step 2), were drawn from AMBOSS, a medical learning resource with large question banks. Questions containing images, charts, or tables were excluded because the models could not yet process such inputs. The questions were entered into ChatGPT-3.5 (September 25, 2023 version) and ChatGPT-4 (April 2023 version) for multiple trials, and performance data were recorded. The two AI models were compared against human test takers (AMBOSS users) by medical topic and question difficulty.

Results

ChatGPT-4, AMBOSS users, and ChatGPT-3.5 achieved accuracies of 71.33%, 54.38%, and 46.23%, respectively. ChatGPT-4 was a significant improvement over ChatGPT-3.5, demonstrating 25% greater accuracy and 8% higher concordance between trials (p<.001). The performance of the GPT models was similar between Step 1 and Step 2 content. Both GPT-3.5 and GPT-4 varied in performance by medical topic (p=.027 and p=.002, respectively), but with no clear pattern of variation. Performance for both GPT models and AMBOSS users declined as question difficulty increased (p<.001), although the decline was less pronounced for GPT-4: the average drop in accuracy from the easiest to the hardest questions was 45% for the GPT models versus 62% for AMBOSS users.

Discussion

ChatGPT-4 shows significant improvement over its predecessor, ChatGPT-3.5, in the medical education setting, and is the first ChatGPT model to surpass human performance on modified AMBOSS USMLE tests. While both models varied in performance by medical topic, there was no clear pattern of discrepancy. ChatGPT-4's improved accuracy, concordance, performance on difficult questions, and consistency across topics are promising for its reliability and utility for medical learners.

Conclusion

ChatGPT-4's improvements highlight its potential as a valuable tool in medical education, surpassing human performance in some areas. The lack of a clear performance pattern by medical topic suggests that variability is more related to question complexity than to specific knowledge gaps.
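As a rough illustration of how the headline Results statistics might be reproduced, the Python sketch below computes per-model accuracy, inter-trial concordance, and a chi-square test of the accuracy gap between the two models. This is not the authors' analysis code: the per-question record layout is hypothetical, and the correct-answer counts are back-calculated from the reported percentages.

    # Minimal sketch (assumptions noted above), using scipy for the test.
    from scipy.stats import chi2_contingency

    # Hypothetical per-question records: (question_id, trial-1 correct,
    # trial-2 correct) for one model across two trials.
    records = [
        ("q001", True, True),
        ("q002", True, False),
        ("q003", False, False),
    ]

    def accuracy(records):
        """Fraction of answers that were correct, pooled across both trials."""
        answers = [c for _, t1, t2 in records for c in (t1, t2)]
        return sum(answers) / len(answers)

    def concordance(records):
        """Fraction of questions answered consistently (both trials right,
        or both trials wrong)."""
        same = sum(1 for _, t1, t2 in records if t1 == t2)
        return same / len(records)

    # Chi-square test on a 2x2 table of correct/incorrect counts per model.
    # Counts are approximations from the reported 71.33% and 46.23% of 900.
    gpt4_correct, gpt35_correct, total = 642, 416, 900
    table = [
        [gpt4_correct, total - gpt4_correct],
        [gpt35_correct, total - gpt35_correct],
    ]
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.1f}, p={p:.2e}")  # a large gap, consistent with p<.001

With these back-calculated counts the test yields a p-value far below .001, matching the reported significance of the GPT-4 vs. GPT-3.5 comparison.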


DOI:

10.7759/cureus.76309

Cited by:

0

Year:

2024



Journal:

Cureus

