-
Regulatory Aspects of Artificial Intelligence and Machine Learning.
In the realm of health care, numerous generative and nongenerative artificial intelligence and machine learning (AI-ML) tools have been developed and deployed, and manufacturers of medical devices are increasingly leveraging AI-ML. However, the adoption of AI in health care raises several concerns, including safety, security, ethical biases, accountability, trust, economic impact, and environmental effects. Effective regulation can mitigate some of these risks, promote fairness, establish standards, and encourage more sustainable AI practices. Regulating AI tools not only ensures their safe and effective adoption but also fosters public trust. Regulations should remain flexible enough to accommodate rapid advances in this field, supporting innovation without adding burden to preexisting, well-established frameworks. This study covers regional and global regulatory aspects of AI-ML, including data privacy, software as a medical device, agency approval and clearance pathways, reimbursement, and laboratory-developed tests.
Pantanowitz L
,Hanna M
,Pantanowitz J
,Lennerz J
,Henricks WH
,Shen P
,Quinn B
,Bennet S
,Rashidi HH
... -
《-》
-
Healthcare workers' informal uses of mobile phones and other mobile devices to support their work: a qualitative evidence synthesis.
Healthcare workers sometimes develop their own informal solutions to deliver services. One such solution is to use their personal mobile phones or other mobile devices in ways that are unregulated by their workplace. This can help them carry out their work when their workplace lacks functional formal communication and information systems, but it can also lead to new challenges.
To explore the views, experiences, and practices of healthcare workers, managers and other professionals working in healthcare services regarding their informal, innovative uses of mobile devices to support their work.
We searched MEDLINE, Embase, CINAHL and Scopus on 11 August 2022 for studies published since 2008 in any language. We carried out citation searches and contacted study authors to clarify published information and seek unpublished data.
We included qualitative studies and mixed-methods studies with a qualitative component. We included studies that explored healthcare workers' views, experiences, and practices regarding mobile phones and other mobile devices, and that included data about healthcare workers' informal use of these devices for work purposes.
We extracted data using an extraction form designed for this synthesis, assessed methodological limitations using predefined criteria, and used a thematic synthesis approach to synthesise the data. We applied the 'street-level bureaucrat' concept as a conceptual lens to our findings and used it to prepare a line of argument that links them. We used the GRADE-CERQual approach to assess our confidence in the review findings and the line-of-argument statements. We collaborated with relevant stakeholders when defining the review scope, interpreting the findings, and developing implications for practice.
We included 30 studies in the review, published between 2013 and 2022. The studies were from high-, middle- and low-income countries and covered a range of healthcare settings and healthcare worker cadres. Most described mobile phone use as opposed to other mobile devices, such as tablets. We have moderate to high confidence in the statements in the following line of argument. The healthcare workers in this review, like other 'street-level bureaucrats', face a gap between what is expected of them and the resources available to them. To plug this gap, healthcare workers develop their own strategies, including using their own mobile phones, data and airtime. They also use other personal resources, including their personal time when taking and making calls outside working hours, and their personal networks when contacting others for help and advice. In some settings, healthcare workers' personal phone use, although unregulated, has become a normal part of many work processes. Some healthcare workers therefore experience pressure or expectations from colleagues and managers to use their personal phones. Some also feel driven to use their phones at work and at home because of feelings of obligation towards their patients and colleagues. At best, healthcare workers' use of their personal phones, time and networks helps humanise healthcare. It allows healthcare workers to be more flexible, efficient and responsive to the needs of the patient. It can give patients access to individual healthcare workers rather than generic systems and can help patients keep their sensitive information out of the formal system. It also allows healthcare workers to communicate with each other in more personalised, socially appropriate ways than formal systems allow. All of this can strengthen healthcare workers' relationships with community members and colleagues. However, these informal approaches can also replicate existing social hierarchies and deepen existing inequities among healthcare workers. Personal phone use costs healthcare workers money. This is a particular problem for lower-level healthcare workers and healthcare workers in low-income settings as they are likely to be paid less and may have less access to work phones or compensation. Out-of-hours use may also be more of a burden for lower-level healthcare workers, as they may find it harder to ignore calls when they are at home. Healthcare workers with poor access to electricity and the internet are less able to use informal mobile phone solutions, while healthcare workers who lack skills and training in how to appraise unendorsed online information are likely to struggle to identify trustworthy information. Informal digital channels can help healthcare workers expand their networks. But healthcare workers who rely on personal networks to seek help and advice are at a disadvantage if these networks are weak. Healthcare workers' use of their personal resources can also lead to problems for patients and can benefit some patients more than others. For instance, when healthcare workers store and share patient information on their personal phones, the confidentiality of this information may be broken. In addition, healthcare workers may decide to use their personal resources on some types of patients, but not others. Healthcare workers sometimes describe using their personal phones and their personal time and networks to help patients and clients whom they assess as being particularly in need. 
These decisions are likely to reflect their own values and ideas, for instance about social equity and patient 'worthiness'. But these may not necessarily reflect the goals, ideals and regulations of the formal healthcare system. Finally, informal mobile phone use plugs gaps in the system but can also weaken the system. The storing and sharing of information on personal phones and through informal channels can represent a 'shadow IT' (information technology) system where information about patient flow, logistics, etc., is not recorded in the formal system. Healthcare workers may also be more distracted at work, for instance, by calls from colleagues and family members or by social media use. Such challenges may be particularly difficult for weak healthcare systems.
By finding their own informal solutions to workplace challenges, healthcare workers can be more efficient and more responsive to the needs of patients, colleagues and themselves. But these solutions also have several drawbacks. Efforts to strengthen formal health systems should consider how to retain the benefits of informal solutions and reduce their negative effects.
Glenton C
,Paulsen E
,Agarwal S
,Gopinathan U
,Johansen M
,Kyaddondo D
,Munabi-Babigumira S
,Nabukenya J
,Nakityo I
,Namaganda R
,Namitala J
,Neumark T
,Nsangi A
,Pakenham-Walsh NM
,Rashidian A
,Royston G
,Sewankambo N
,Tamrat T
,Lewin S
... -
《Cochrane Database of Systematic Reviews》
-
Consensus statements on the current landscape of artificial intelligence applications in endoscopy, addressing roadblocks, and advancing artificial intelligence in gastroenterology.
ASGE AI Task Force
,Parasa S
,Berzin T
,Leggett C
,Gross S
,Repici A
,Ahmad OF
,Chiang A
,Coelho-Prabhu N
,Cohen J
,Dekker E
,Keswani RN
,Kahn CE
,Hassan C
,Petrick N
,Mountney P
,Ng J
,Riegler M
,Mori Y
,Saito Y
,Thakkar S
,Waxman I
,Wallace MB
,Sharma P
... -
《-》
-
Qualitative evidence synthesis informing our understanding of people's perceptions and experiences of targeted digital communication.
Health communication is an area where changing technologies, particularly digital technologies, have a growing role to play in delivering and exchanging health information between individuals, communities, health systems, and governments.[1] Such innovation has the potential to strengthen health systems and services, with substantial investments in digital health already taking place, particularly in low- and middle-income countries. Communication using mobile phones is an important way of contacting individual people and the public more generally to deliver and exchange health information. Such technologies are used increasingly in this capacity, but poor planning and short-term projects may be limiting their potential for health improvement. The assumption that mobile devices will solve problems that other forms of communication have not is also prevalent. In this context, understanding people's views and experiences may lead to firmer knowledge on which to build better programs. A qualitative evidence synthesis by Heather Ames and colleagues on clients' perceptions and experiences of targeted digital communication focuses on a particular type of messaging: targeted messages from health services delivered to particular groups via mobile devices, in this case looking at communication with pregnant women and parents of young children, and with adults and teenagers about sexual health and family planning.[2] Important gains have been made worldwide in these areas of reproductive, maternal, newborn, child, and adolescent health (RMNCAH), but room for improvement remains. Ames and colleagues sought to examine and understand people's perceptions and experiences of using digital targeted client communication. This might include communication in different formats and with a range of purposes related to RMNCAH: for example, receiving text message reminders to take medicines (e.g. HIV medicines) or to go to appointments (such as childhood vaccination appointments), or phone calls offering information or education (such as about breastfeeding or childhood illnesses), support (e.g. providing encouragement to change behaviours), or advice (such as advising about local healthcare services). These communication strategies have the potential to improve health outcomes by communicating with people or by supporting behaviour change. However, changing people's health behaviours to a significant and meaningful degree is notoriously challenging, and interventions to do so are seldom very effective across the board. There are many systematic reviews of interventions aiming to change the behaviours of both patients and providers with the overall objective of improving health outcomes, and many of these show little or no average effect across groups of people.[3] This evidence synthesis is therefore important, as it may help us understand why communicating with people about their health might (or might not) change behaviours and improve consequent health outcomes. By examining the experiences and perspectives of those receiving the interventions, this qualitative evidence synthesis allows us to better understand the interventions' acceptability and usefulness, barriers to their uptake, and factors to be considered when planning implementation. The synthesis looked at 35 studies from countries around the world, focussing on communication related to RMNCAH. Of the 35 studies, 16 were from high-income countries, mainly the United States, and 19 were from low- or middle-income countries, mainly African countries.
Many of the studies presented hypothetical scenarios. The findings from the synthesis are mixed and give us a more nuanced picture of the role of targeted digital communication. People receiving targeted digital communications from health services often liked and valued these contacts, feeling supported and connected by them. However, some also reported problems with these technologies, which may represent barriers to their use. These included practical or technical barriers such as poor network or Internet access, as well as cost, language, technical literacy, and reading ability, along with issues around confidentiality, especially where personal health conditions were involved. Access to mobile phones may also be a barrier, particularly for women and adolescents who may have to share or borrow a phone or whose access is controlled by others. In such situations it may be difficult to receive communications or to maintain privacy of content. The synthesis also shows that people's experiences of these interventions are influenced by factors such as the timing of messages, their frequency and content, and their trust in the sender. Identifying the features that users themselves consider key in such communications might therefore help to inform future choices about how and when such messaging is used. The authors drew on 25 separate findings to list ten implications for practice. This section of the review is hugely valuable, making a practical contribution to assist governments and public health agencies wishing to develop or improve their delivery of digital health. The implications serve as a list of points to consider, including issues of access (seven different aspects are considered), privacy and confidentiality, reliability, credibility and trust, and responsiveness to the needs and preferences of users. In this way, qualitative evidence is building a picture of how to communicate better with people about health. For example, an earlier 2017 Cochrane qualitative evidence synthesis by Ames, Glenton and Lewin on parents' and informal caregivers' views and experiences of communication about routine childhood vaccination provides ample evidence that may help program managers to deliver or plan communication interventions in ways that are responsive and acceptable to parents.[4] The qualitative synthesis method therefore puts a spotlight on how people's experiences of health and health care in the context of their lives may lead to the design of better interventions, as well as to experimental studies that take more account of the diversity in people's attitudes and decision-making experiences.[5] In the case of this qualitative evidence synthesis by Ames and colleagues, the method pulled together a substantial body of research (35 data-rich studies were sampled from 48 studies identified, with high-to-moderate confidence in the evidence for 13 of the synthesized findings). The evidence from this review can inform the development of interventions and the design of trials and their implementation. While waiting for such new trials or trial evidence on effects to emerge, decision-makers can build their programs on the highly informative base developed by this review.
This qualitative evidence synthesis, alongside other reviews, has informed development by the World Health Organization of its first guideline for using digital technologies for health systems strengthening,[1, 6] part of a comprehensive program of work to better understand and support implementation of such new technologies.
Ryan R
,Hill S
《Cochrane Database of Systematic Reviews》
-
Artificial intelligence for breast cancer detection and its health technology assessment: A scoping review.
Recent healthcare advances highlight the potential of Artificial Intelligence (AI), and in particular its subfield Machine Learning (ML), to enhance Breast Cancer (BC) clinical care, leading to improved patient outcomes and increased radiologist efficiency. While medical imaging techniques have contributed significantly to BC detection and diagnosis, their synergy with AI algorithms has consistently demonstrated superior diagnostic accuracy, reduced False Positives (FPs), and enabled personalized treatment strategies. Despite the burgeoning enthusiasm for leveraging AI for early and effective BC clinical care, its widespread integration into clinical practice has yet to be realized, and the evaluation of AI-based health technologies in terms of health and economic outcomes remains an ongoing endeavor.
This scoping review aims to investigate AI (and especially ML) applications that have been implemented and evaluated across diverse clinical tasks or decisions in breast imaging and to explore the current state of evidence concerning the assessment of AI-based technologies for BC clinical care within the context of Health Technology Assessment (HTA).
We conducted a systematic literature search in PubMed and Scopus, following the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) checklist, to identify relevant studies on AI (and particularly ML) applications in BC detection and diagnosis. We limited our search to studies published from January 2015 to October 2023. The Minimum Information about CLinical Artificial Intelligence Modeling (MI-CLAIM) checklist was used to assess the quality of AI algorithm development, evaluation, and reporting in the reviewed articles. The HTA Core Model® was also used to analyze the comprehensiveness, robustness, and reliability of the reported results and evidence in AI system evaluations, to ensure rigorous assessment of AI systems' utility and cost-effectiveness in clinical practice.
Of the 1652 initially identified articles, 104 were deemed eligible for inclusion in the review. Most studies examined the clinical effectiveness of AI-based systems (78.84%, n=82), with one study focusing on safety in clinical settings and 13.46% (n=14) focusing on patients' benefits. Of the studies, 31.73% (n=33) were ethically approved to be carried out in clinical practice, whereas 25% (n=26) evaluated AI systems legally approved for clinical use. Notably, none of the studies addressed the organizational implications of AI systems in clinical practice. Only two of the 104 studies focused on cost-effectiveness analysis, and these were analyzed separately. The average percentage scores for the quality assessment of the first 102 AI-based studies, based on the MI-CLAIM checklist criteria, were 84.12%, 83.92%, 83.98%, 74.51%, and 14.7% for study design, data and optimization, model performance, model examination, and reproducibility, respectively. Notably, 20.59% (n=21) of these studies relied on large-scale representative real-world breast screening datasets, with only 10.78% (n=11) demonstrating the robustness and generalizability of the evaluated AI systems.
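As a cross-check on the reported figures, the percentages follow directly from the stated counts and denominators (104 included studies overall; 102 studies in the MI-CLAIM quality assessment). The short Python sketch below recomputes them; it is illustrative only, and the labels and variable names are ours rather than the authors'.

def pct(numerator: int, denominator: int) -> float:
    # Return a percentage rounded to two decimal places.
    return round(100 * numerator / denominator, 2)

# Counts reported against the 104 included studies.
counts_of_104 = {
    "clinical effectiveness focus": 82,       # reported as 78.84% (82/104 is ~78.85%)
    "patient benefit focus": 14,              # reported as 13.46%
    "ethical approval for clinical use": 33,  # reported as 31.73%
    "legally approved AI systems": 26,        # reported as 25%
}

# Counts reported against the 102 studies assessed with MI-CLAIM.
counts_of_102 = {
    "large-scale real-world screening datasets": 21,  # reported as 20.59%
    "robustness and generalizability shown": 11,      # reported as 10.78%
}

for label, n in counts_of_104.items():
    print(f"{label}: {n}/104 = {pct(n, 104)}%")
for label, n in counts_of_102.items():
    print(f"{label}: {n}/102 = {pct(n, 102)}%")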
Persistent challenges to bridging the gap between cutting-edge developments and the seamless integration of AI systems into clinical workflows include data quality and availability, ethical and legal considerations, robustness and trustworthiness, scalability, and alignment with existing radiologists' workflows. These hurdles impede the synthesis of comprehensive, robust, and reliable evidence to substantiate these systems' clinical utility, relevance, and cost-effectiveness in real-world clinical workflows. Consequently, evaluating AI-based health technologies through established HTA methodologies becomes complicated. We also highlight factors that may significantly influence the effectiveness of AI systems, such as operational dynamics, organizational structure, the application context, and breast screening or examination reading practices involving AI support tools in radiology. Furthermore, we emphasize the substantial reciprocal influences between AI systems and radiologists on decision-making processes. Thus, we advocate for an adapted assessment framework specifically designed to address these potential influences on AI systems' effectiveness, one that captures system-level transformative implications rather than focusing solely on technical performance and task-level evaluations.
Uwimana A
,Gnecco G
,Riccaboni M
《-》