-
Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health.
In Low- and Middle-Income Countries (LMICs), machine learning (ML) and artificial intelligence (AI) offer attractive solutions to address the shortage of health care resources and improve the capacity of the local health care infrastructure. However, AI and ML should also be used cautiously, due to potential issues of fairness and algorithmic bias that may arise if not applied properly. Furthermore, populations in LMICs can be particularly vulnerable to bias and unfairness in AI algorithms, due to a lack of technical capacity, existing social bias against minority groups, and a lack of legal protections. In order to address the need for better guidance within the context of global health, we describe three basic criteria (Appropriateness, Fairness, and Bias) that can be used to help evaluate the use of machine learning and AI systems: 1) APPROPRIATENESS is the process of deciding how the algorithm should be used in the local context, and properly matching the machine learning model to the target population; 2) BIAS is a systematic tendency in a model to favor one demographic group over another, which can be mitigated but, if left unaddressed, can lead to unfairness; and 3) FAIRNESS involves examining the impact on various demographic groups and choosing one of several mathematical definitions of group fairness that will adequately satisfy the desired set of legal, cultural, and ethical requirements. Finally, we illustrate how these principles can be applied using a case study of machine learning applied to the diagnosis and screening of pulmonary disease in Pune, India. We hope that these methods and principles can help guide researchers and organizations working in global health who are considering the use of machine learning and artificial intelligence.
Fletcher RR
,Nakeshimana A
,Olubeko O
《-》
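The "mathematical definitions of group fairness" referenced above can be made concrete with a short sketch. The two metrics below (demographic parity difference and equal-opportunity difference) are standard formulations from the fairness literature, not definitions taken from this paper, and the screening data are entirely synthetic and illustrative.

```python
# Two common group-fairness metrics, computed from scratch.
# All data below are synthetic; groups "A"/"B" are illustrative only.

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between groups."""
    rate = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rate[g] = sum(preds) / len(preds)
    a, b = sorted(rate)
    return rate[a] - rate[b]

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates (sensitivity) between groups."""
    tpr = {}
    for g in set(group):
        pos = [p for p, t, gi in zip(y_pred, y_true, group)
               if gi == g and t == 1]
        tpr[g] = sum(pos) / len(pos)
    a, b = sorted(tpr)
    return tpr[a] - tpr[b]

# Synthetic screening results for two demographic groups.
group  = ["A"] * 4 + ["B"] * 4
y_true = [1, 1, 0, 0,  1, 1, 0, 0]
y_pred = [1, 1, 1, 0,  1, 0, 0, 0]

print(demographic_parity_diff(y_pred, group))         # 0.75 - 0.25 = 0.5
print(equal_opportunity_diff(y_true, y_pred, group))  # 1.0 - 0.5 = 0.5
```

A nonzero value on either metric indicates that the model treats the two groups differently; which metric matters depends on the legal, cultural, and ethical requirements the paper describes, since the two definitions generally cannot be satisfied simultaneously.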
-
A scoping review of fair machine learning techniques when using real-world data.
The integration of artificial intelligence (AI) and machine learning (ML) in health care to aid clinical decisions is widespread. However, as AI and ML take important roles in health care, there are concerns about the fairness and bias associated with AI and ML. That is, an AI tool may have a disparate impact, with its benefits and drawbacks unevenly distributed across societal strata and subpopulations, potentially exacerbating existing health inequities. Thus, the objectives of this scoping review were to summarize existing literature and identify gaps in the topic of tackling algorithmic bias and optimizing fairness in AI/ML models using real-world data (RWD) in health care domains.
We conducted a thorough review of techniques for assessing and optimizing AI/ML model fairness when using RWD in health care domains. The focus lies on appraising different quantification metrics for assessing fairness, publicly accessible datasets for ML fairness research, and bias mitigation approaches.
We identified 11 papers that are focused on optimizing model fairness in health care applications. The current research on mitigating bias issues in RWD is limited, both in terms of disease variety and health care applications, as well as the accessibility of public datasets for ML fairness research. Existing studies often indicate positive outcomes when using pre-processing techniques to address algorithmic bias. There remain unresolved questions within the field that require further research, including pinpointing the root causes of bias in ML models, broadening fairness research in AI/ML with the use of RWD and exploring its implications in health care settings, and evaluating and addressing bias in multi-modal data.
This paper provides useful reference material and insights to researchers regarding AI/ML fairness in real-world health care data and reveals the gaps in the field. Fair AI/ML in health care is a burgeoning field that requires a heightened research focus to cover diverse applications and different types of RWD.
Huang Y
,Guo J
,Chen WH
,Lin HY
,Tang H
,Wang F
,Xu H
,Bian J
... -
《-》
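One of the pre-processing bias-mitigation approaches this class of reviews commonly covers is reweighing (Kamiran & Calders, 2012), which assigns instance weights so that group membership becomes statistically independent of the label before training. The sketch below is a minimal from-scratch illustration, not code from the review; the groups and labels are synthetic.

```python
# Minimal sketch of the "reweighing" pre-processing technique:
# weight each instance by w(g, y) = P(g) * P(y) / P(g, y), so that
# over-represented (group, label) pairs are down-weighted and
# under-represented pairs are up-weighted.
from collections import Counter

def reweigh(groups, labels):
    """Return one training weight per instance."""
    n = len(labels)
    count_g = Counter(groups)
    count_y = Counter(labels)
    count_gy = Counter(zip(groups, labels))
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Synthetic cohort: group "A" has more positive labels than group "B".
groups = ["A", "A", "A", "B", "B", "B"]
labels = [ 1,   1,   0,   1,   0,   0 ]
weights = reweigh(groups, labels)
print(weights)  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

Here the over-represented (A, positive) and (B, negative) pairs receive weight 0.75 and the under-represented pairs receive 1.5; a downstream classifier trained with these sample weights sees a dataset in which label rates are balanced across groups.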
-
Multidisciplinary considerations of fairness in medical AI: A scoping review.
Artificial Intelligence (AI) technology has advanced significantly in recent years. The fairness of medical AI is of great concern due to its direct relation to human life and health. This review aims to analyze the existing research literature on fairness in medical AI from the perspectives of computer science, medical science, and social science (including law and ethics). The objective of the review is to examine the similarities and differences in the understanding of fairness, explore influencing factors, and investigate potential measures to implement fairness in medical AI across English and Chinese literature.
This study employed a scoping review methodology and searched the following databases: Web of Science, MEDLINE, PubMed, OVID, CNKI, WANFANG Data, etc., for literature on fairness issues in medical AI published through February 2023. The search was conducted using various keywords such as "artificial intelligence," "machine learning," "medical," "algorithm," "fairness," "decision-making," and "bias." The collected data were charted, synthesized, and subjected to descriptive and thematic analysis.
After reviewing 468 English papers and 356 Chinese papers, 53 and 42, respectively, were included in the final analysis. Our results show that the three disciplines differ significantly in their research on the core issues. Data is the foundation affecting medical AI fairness, in addition to algorithmic bias and human bias. Legal, ethical, and technological measures all promote the implementation of medical AI fairness.
Our review indicates a consensus regarding the importance of data fairness as the foundation for achieving fairness in medical AI across multidisciplinary perspectives. However, there are substantial discrepancies in core aspects such as the concept, influencing factors, and implementation measures of fairness in medical AI. Consequently, future research should facilitate interdisciplinary discussions to bridge the cognitive gaps between different fields and enhance the practical implementation of fairness in medical AI.
Wang Y
,Song Y
,Ma Z
,Han X
... -
《-》
-
Fairness of artificial intelligence in healthcare: review and recommendations.
In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.
Ueda D
,Kakinuma T
,Fujita S
,Kamagata K
,Fushimi Y
,Ito R
,Matsui Y
,Nozaki T
,Nakaura T
,Fujima N
,Tatsugami F
,Yanagawa M
,Hirata K
,Yamada A
,Tsuboyama T
,Kawamura M
,Fujioka T
,Naganawa S
... -
《-》
-
Ethics of artificial intelligence in global health: Explainability, algorithmic bias and trust.
AI has the potential to disrupt and transform the way we deliver care globally. It is reputed to be able to improve the accuracy of diagnoses and treatments, and make the provision of services more efficient and effective. In surgery, AI systems could lead to more accurate diagnoses of health problems and help surgeons better care for their patients. In the context of low- and middle-income countries (LMICs), where access to healthcare still remains a global problem, AI could facilitate access to healthcare professionals and services, even specialist services, for millions of people. The ability of AI to deliver on its promises, however, depends on successfully resolving the ethical and practical issues identified, including those of explainability and algorithmic bias. Even though such issues might appear to be merely practical or technical ones, their closer examination uncovers questions of value, fairness and trust. It should not be left to AI developers, whether research institutions or global tech companies, to decide how to resolve these ethical questions. In particular, relying only on the trustworthiness of companies and institutions to address ethical issues relating to justice, fairness and health equality would be unsuitable and unwise. The pathway to a fair, appropriate and relevant AI necessitates the development, and critically, the successful implementation of national and international rules and regulations that define the parameters and set the boundaries of operation and engagement.
Kerasidou A
《-》