
Abstract: Generative Artificial Intelligence (GAI) is growing rapidly, and AI technology now exerts significant influence across diverse fields, including healthcare, with the potential to change the world in ways we cannot yet imagine. This paper provides a concise review of generative AI, focusing on its foundational principles and its growing role in medicine. The first part answers the question "What is generative AI?": it details the core concepts and techniques of generative AI and recent modelling developments, with an emphasis on two primary areas, language generation and image generation. The second part explores the wide-ranging applications of GAI in enhancing healthcare, including improving diagnosis and treatment planning through advanced imaging and disease-data analysis; accelerating drug discovery and development via molecular synthesis prediction; supporting pharmacovigilance by identifying potential drug risks; and enhancing telehealth. The review also underscores the importance of adopting GAI responsibly, addressing ethical considerations such as data privacy, bias, robust governance frameworks, and human-machine communication.
Keywords: generative AI; healthcare; disease diagnosis; treatment planning; drug discovery; pharmacovigilance; telemedicine.
I. INTRODUCTION
In the last few years, artificial intelligence (AI) has developed rapidly and remarkably, driven by advances in machine learning, deep learning, computer vision, and especially natural language processing. Consequently, machines can perform complex tasks, analyse large amounts of data, and make predictions with remarkable accuracy. This cutting-edge technology is transforming the society we live in, including how we interact, work, and learn [1, 2]. Meanwhile, generative AI has emerged as a particularly disruptive and fast-developing area, with applications in fields ranging from education, finance, IT, media, and entertainment to healthcare. A turning point was the introduction of ChatGPT, a generative text model released by OpenAI in late 2022, which created a new wave of interest worldwide and gained significant momentum within a year, with profound impacts on the technology sector and many other domains [3]. ChatGPT is specifically designed and carefully fine-tuned on top of a Large Language Model (LLM), which uses deep learning techniques such as neural networks trained on vast quantities of data to produce human-like responses in conversation, drawing on broad knowledge bases [2]. This breakthrough created a platform for generative AI to extend its capabilities beyond text generation to image, video, and audio generation, opening the door to potential innovations across a spectrum of simple to complex and advanced areas, including healthcare [1].
In the healthcare industry, these AI models are ideal candidates to transform many aspects of practice, given the enormous volumes of medical data and expertise available, opening a new era of data management, patient communication, and clinical decision support [4]. In addition, generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) make medical imaging clearer by generating high-quality synthetic images, which helps improve the outcomes and quality of healthcare services, including disease diagnosis and drug discovery and development. Although several studies have positioned generative AI as a focal point for revolutionizing healthcare applications such as clinical documentation, medical education, evidence-based medicine summarization, and treatment and prognosis, its use still presents ethical and practical challenges wherever these technologies touch on human wellbeing [5]. This review therefore provides an overview of the application of existing generative AI models in healthcare. It describes generative AI and its vast potential; its benefits in healthcare, covering disease diagnosis and treatment planning, drug discovery and development, clinical decision support and personalization, and telehealth; and its challenges and ethical considerations. Finally, this study aims to contribute to the ongoing exploration of AI's transformative potential to responsibly improve healthcare and patient-centred care amid the rapid development of AI and related technologies.
II. GENERATIVE AI
Here, the phrase "generative AI" refers to machine learning systems that create new content in response to user inputs (prompts), based on training on massive amounts of data [3, 6]. Since their inception in the 1950s, generative models have undergone steady improvement, and the advent of new methods such as neural networks has greatly extended earlier data-generation approaches such as Hidden Markov models and Gaussian Mixture models [7]. Today, generative AI demonstrates its value across many data types, including text, realistic images, audio, video, code, and even DNA, the molecular blueprint of all living organisms, including humans. The core of generative AI, however, lies in generating high-quality text and images, and many models have been created under different terminologies depending on the specific approach and context. The typical development stages are problem definition, data collection and preprocessing, model selection, model training, model evaluation, model fine-tuning, deployment, and monitoring and maintenance [8]. In natural language generation, Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) were early architectures that provided a robust framework for sequence modelling; the introduction of Transformers then revolutionized the field with self-attention mechanisms, opening the door to Large Language Models (LLMs) such as GPT, BERT, and PaLM that comprehend, produce, and transform text at previously unheard-of scales [7, 9].
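The self-attention mechanism behind Transformer-based LLMs can be sketched in a few lines. The following minimal, illustrative example (pure Python; the two-token queries, keys, and values are invented numbers, not a trained model) computes scaled dot-product attention: each token's output is a softmax-weighted combination of the value vectors, with weights derived from query-key similarity.

```python
import math

def softmax(row):
    """Numerically stable softmax over a list of floats."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over toy matrices (lists of rows)."""
    d = len(K[0])
    # Similarity scores: Q . K^T, scaled by sqrt(d)
    scores = [[sum(q[i] * k[i] for i in range(d)) / math.sqrt(d) for k in K]
              for q in Q]
    weights = [softmax(row) for row in scores]
    # Each output row is a weighted sum of the value vectors
    out = [[sum(w[j] * V[j][i] for j in range(len(V))) for i in range(len(V[0]))]
           for w in weights]
    return weights, out

# Invented 2-token example: each token attends over both tokens
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
weights, out = attention(Q, K, V)
```

Because the attention weights for each token sum to one, every output vector lies between the value vectors it mixes; stacking many such layers (with learned projections producing Q, K, and V) is what lets Transformers model long-range dependencies.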
In image generation, Generative Adversarial Networks (GANs) operate on a minimax two-player zero-sum game principle between two networks: a discriminator D, which estimates the probability that a sample comes from the real data distribution, and a generator G, which creates images intended to deceive D by minimizing the probability that its samples are detected as fake; through this competition, D learns to distinguish real samples while G learns to produce realistic ones. Variational Autoencoders (VAEs), a type of autoencoder that combines variational inference with an encoder-decoder architecture, pioneered synthetic image creation [8, 10]. Neural Field Models (NFMs) and Vision Transformers expanded these capabilities, and subsequent advances in diffusion models and Neural Radiance Fields (NeRF) have enabled hyper-realistic imagery and 3D representations. These developments led to tools such as DALL·E, MidJourney, and Stable Diffusion, which are reshaping various creative domains [7].
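The adversarial objective described above can be made concrete with scalar toy values (pure Python; the discriminator scores are invented numbers, not outputs of a trained network). The discriminator minimizes -[log D(x) + log(1 - D(G(z)))], while the generator minimizes log(1 - D(G(z))): as the generator improves and D(G(z)) rises, the generator's objective falls and the discriminator's loss grows.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_loss(d_real, d_fake):
    """D maximizes log D(x) + log(1 - D(G(z))); equivalently,
    it minimizes the negative of that sum."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """G minimizes log(1 - D(G(z))): it wants D to score fakes as real."""
    return math.log(1.0 - d_fake)

# Invented scores for illustration only
d_real = sigmoid(2.0)          # D is fairly sure the real sample is real
d_fake_early = sigmoid(-2.0)   # early training: fakes are easily detected
d_fake_late = sigmoid(0.0)     # later: the generator fools D half the time
```

Plugging in the numbers shows the zero-sum dynamic: when the generator improves (d_fake rises from 0.12 to 0.5), its own loss decreases while the discriminator's loss increases, which is exactly the tension the minimax game formalizes.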
III. APPLICATION OF GENERATIVE AI IN HEALTHCARE
A Disease Diagnosis and Treatment Planning
One prominent application of generative AI in healthcare is disease diagnosis and treatment planning. With an advanced understanding of human language and the ability to analyse large amounts of medical data, including imaging, lab results, and patient histories, advanced models can support clinical decision-making and diagnosis [4]. Image-generation techniques such as GANs can create synthetic medical images that augment medical datasets or train machine-learning models for image-based diagnosis. VAEs, as one of the most capable medical image segmentation technologies, effectively segment structures in medical images, especially brain tumour images, by encoding input images into a latent space and decoding them into segmentation masks [11, 12]. VAEs also assist in the registration of MRI and CT images by learning a mapping function between the input images [12]. In language generation, LLMs improve the performance of various computer-aided diagnosis (CAD) networks, including diagnosis networks and lesion segmentation networks, and support medical image captioning and medical report generation from clinical notes by summarising and reorganizing the information presented [11, 13]. These capabilities can create a more understandable and patient-friendly system and promote innovative tools that integrate seamlessly into clinical workflows. For example, Regard is an AI tool that integrates with electronic health record (EHR) systems to streamline patient care by suggesting diagnoses, writing clinical notes, and automating administrative tasks, allowing clinicians to focus more on patients; Redbrick AI's Fast Automated Segmentation Tool (F.A.S.T.) enhances medical imaging by automating the segmentation of scans, improving diagnostic speed and accuracy [4].
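The VAE segmentation pipeline described above (encode the image into a latent distribution, sample a latent code, decode it into a mask) can be sketched at toy scale. This is a drastically simplified illustration in pure Python: the "encoder" and "decoder" use invented fixed formulas rather than learned network weights, and the 4-pixel "scan" is made up; only the encode → reparameterize → decode structure mirrors a real VAE.

```python
import math, random

def encode(image):
    """Toy 'encoder': map a flattened image to the mean and log-variance
    of a latent distribution. (Invented formulas; a real VAE learns these.)"""
    mu = 0.5 * sum(image)
    log_var = -1.0
    return mu, log_var

def reparameterize(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1),
    which keeps sampling differentiable in a real, trained VAE."""
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def decode(z, n_pixels):
    """Toy 'decoder': map the latent code to per-pixel probabilities,
    then threshold into a binary segmentation mask."""
    probs = [1.0 / (1.0 + math.exp(-(z - i))) for i in range(n_pixels)]
    return [1 if p > 0.5 else 0 for p in probs]

rng = random.Random(0)
image = [0.9, 0.8, 0.1, 0.0]          # flattened 2x2 toy 'scan'
mu, log_var = encode(image)
z = reparameterize(mu, log_var, rng)
mask = decode(z, len(image))
```

In a real medical VAE the encoder and decoder are convolutional networks and the mask is compared to expert annotations during training; the three-step skeleton, however, is the same.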
Overall, AI technology, with its promising language- and image-generation models, not only improves accuracy in disease diagnosis and treatment planning but also benefits other difficult medical applications, such as oncology, where the term "precision medicine" has become popular and the support of AI is indispensable [14].
B Drug Discovery and Development
By training on huge amounts of chemical structures, biological interactions, and other data from clinical results, generative AI accelerates the identification of molecules and novel compounds and optimises drug candidates, speeding up drug discovery and development. It has been a long road: early attempts to use ML in drug discovery in the 1990s failed because of general challenges in the field (specifically, for small-molecule drugs) including drug quality, chemical regulatory requirements, and human acceptance, as well as unfavourable economics; today, however, AI algorithms can genuinely support humans in drug development [15, 16]. A critical aspect of drug design is the representation of molecules, which is a central challenge for generative models, and cutting-edge architectures address it in different ways. Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, were created to handle sequential data such as SMILES strings, generating molecules character by character, with LSTMs supporting longer sequences but requiring more parameters than GRUs. VAEs regularize the latent space to prevent overfitting and enable property-specific edits through disentangled representations. In contrast, GANs produce realistic molecules without directly modelling probability distributions. Lastly, flow-based models such as MoFlow perform invertible transformations between molecular graphs and latent spaces, explicitly modelling probability densities [17, 18]. Together, these models improve molecular design and molecule filtering by considering molecular properties implicitly, making the drug development process more precise, efficient, and innovative.
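Character-by-character SMILES generation can be illustrated at toy scale. In this sketch (pure Python) a hand-written table of next-character probabilities stands in for what an LSTM or GRU would actually learn from training molecules; the alphabet, probabilities, and the simple parenthesis-balancing rule are all invented for illustration, not chemistry-aware.

```python
import random

# Invented next-character probabilities over a tiny SMILES-like alphabet.
# A real RNN (LSTM/GRU) learns a conditional distribution over the full
# SMILES vocabulary from training data; "^" marks start, "$" marks end.
TRANSITIONS = {
    "^": {"C": 0.7, "O": 0.3},
    "C": {"C": 0.4, "O": 0.2, "(": 0.1, "$": 0.3},
    "O": {"C": 0.6, "$": 0.4},
    "(": {"C": 1.0},
    ")": {"C": 0.5, "$": 0.5},
}

def sample_char(dist, rng):
    r, acc = rng.random(), 0.0
    for ch, p in dist.items():
        acc += p
        if r <= acc:
            return ch
    return ch  # fall through on float rounding

def generate_smiles(rng, max_len=20):
    """Generate one SMILES-like string character by character, keeping
    parentheses balanced so every branch that opens is closed."""
    out, ch, depth = [], "^", 0
    for _ in range(max_len):
        nxt = sample_char(TRANSITIONS[ch], rng)
        if nxt == "$":
            if depth == 0:
                break
            nxt = ")"        # close an open branch instead of ending
        if nxt == "(":
            depth += 1
        elif nxt == ")":
            depth -= 1
        out.append(nxt)
        ch = nxt
    return "".join(out) + ")" * depth   # balance anything still open

rng = random.Random(42)
molecules = [generate_smiles(rng) for _ in range(5)]
```

The validity constraint enforced here (balanced parentheses) is a stand-in for the far richer chemical-validity filtering that systems like MolFilterGAN perform on AI-designed molecules.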
Building on these technologies, MolFilterGAN is a novel generative adversarial framework that extends the GAN concept by introducing a progressively augmented training strategy; it not only generates molecules resembling known drugs or bioactive compounds but also iteratively refines the quality of the generator's output [18]. Finally, these innovations can be incorporated into end-to-end AI-powered drug discovery platforms: for example, combining the AlphaFold protein-structure program with the biocomputational platform PandaOmics and the generative chemistry platform Chemistry42 has successfully enabled structure-based drug design for novel targets with limited or no prior structural data [19].
C Pharmacovigilance
Holistic AI-based pharmacovigilance optimization is one of the most discussed topics in applying AI to the medical industry. Pharmacovigilance (PV) activities involve collecting, detecting, assessing, monitoring, and evaluating relevant information to reduce the incidence and severity of adverse effects (AEs) associated with pharmaceutical products [20, 21]. By drawing on the large amounts of pharmacovigilance-related data stored electronically, or by analysing real-world data such as patient-reported outcomes, social media platforms, medical literature, and electronic health records, AI techniques based on machine learning (ML) and natural language processing (NLP) can analyse and predict AEs [20, 22]. NLP is well suited to pharmacovigilance because it enables automated analysis, synthesis, and rapid identification of AEs with high accuracy, far outperforming traditional methods, which are costly and time-consuming because drug development and manufacture require risky testing on animals or humans and reporting to authorized organizations, individuals, or health professionals [23]. AI enables ongoing monitoring even for rare or long-term AEs, which are often difficult to detect in conventional settings. Furthermore, it boosts efficiency and consistency in processing individual case safety reports (ICSRs), automating manual processes, reducing bias, and offering valuable insights for data scientists and medical professionals. Drug safety (DS) professionals have accordingly suggested that integrating AI into pharmacovigilance workflows would save resources and time, and could enhance their decision-making processes and help process cases effectively and accurately in drug testing and production [24].
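The core task of the NLP pipelines described above, surfacing adverse-event mentions from free-text reports, can be illustrated with a deliberately naive keyword screener. This sketch (pure Python) uses an invented five-term vocabulary and made-up reports; production pharmacovigilance NLP relies on trained models and curated terminologies such as MedDRA, not a keyword list.

```python
import re

# Invented adverse-event vocabulary and patient reports, for illustration only
AE_TERMS = ["nausea", "dizziness", "rash", "headache", "vomiting"]
AE_PATTERN = re.compile(r"\b(" + "|".join(AE_TERMS) + r")\b", re.IGNORECASE)

def screen_reports(reports):
    """Return {report_id: sorted AE terms found} for every flagged report."""
    flagged = {}
    for rid, text in reports.items():
        hits = sorted({m.group(1).lower() for m in AE_PATTERN.finditer(text)})
        if hits:
            flagged[rid] = hits
    return flagged

reports = {
    "r1": "Patient reports severe nausea and occasional dizziness after dose.",
    "r2": "No complaints; vitals stable.",
    "r3": "Developed a rash on day 3; headache resolved by day 5.",
}
flagged = screen_reports(reports)
```

Even this toy version shows the structure of automated ICSR triage: free text goes in, a structured mapping from case to suspected AEs comes out, ready for a drug-safety professional to review.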
Figure 1. Benefits of generative AI in detecting AEs for pharmacovigilance and drug development [20, 22, 23]. PV: pharmacovigilance; AEs: adverse effects associated with pharmaceutical products; NLP: natural language processing; LLMs: Large Language Models; ML: machine learning.
D Telehealth
Before the widespread adoption of artificial intelligence (AI) technologies, telehealth (or telemedicine) had already been developed in many forms, using video calls, text messaging, or mobile apps to help healthcare professionals interact with patients, breaking barriers of geography and accessibility. However, traditional telehealth restricts flexibility because patient-clinician interactions require both sides to be present at the same time [25]. Building on AI advances in medical imaging, pathology, natural language processing, and biosignal analysis, deep learning techniques such as neural networks, residual neural networks, and GANs have upgraded the telemedicine field [26]. Telemedicine platforms can integrate AI-powered chatbots for initial symptom assessment, basic medical advice, appointment scheduling, and patient monitoring, improving the augmentation of online consultations in both synchronous and asynchronous scenarios [25, 27]. AI also enhances diagnostic accuracy by analysing medical images and data, enabling early detection of some urgent conditions: for instance, GAN and DNN models have been used to generate synthetic sample data for training, and four machine learning (ML) algorithms, an elastic net model, the random forest (RF) algorithm, a gradient boosting machine (GBM), and a single-hidden-layer artificial neural network (ANN), achieved high accuracy in prediction for cancer pain management [28]. Furthermore, AI can assist with the integration of remote measurement devices (e.g., wearable technologies and mobile applications) that continuously monitor health, alerting patients and doctors to anomalies in real time [28]. As a result, enhanced telemedicine optimizes resource allocation and reduces healthcare costs, particularly in underserved areas with restricted access to specialists.
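The remote-monitoring idea above, flagging anomalous readings from a wearable in real time, can be sketched with a simple rolling z-score detector. This is an illustrative toy (pure Python; the heart-rate series, window size, and threshold are invented), not a clinically validated alerting rule.

```python
import math

def rolling_alerts(stream, window=5, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard deviations
    from the mean of the preceding `window` readings."""
    alerts = []
    for i in range(window, len(stream)):
        past = stream[i - window:i]
        mean = sum(past) / window
        var = sum((x - mean) ** 2 for x in past) / window
        std = math.sqrt(var) or 1e-9   # guard against a perfectly flat window
        z = (stream[i] - mean) / std
        if abs(z) > threshold:
            alerts.append((i, stream[i]))
    return alerts

# Toy heart-rate series (bpm); the spike at index 8 is the injected anomaly
heart_rate = [72, 74, 71, 73, 72, 74, 73, 72, 140, 73]
alerts = rolling_alerts(heart_rate)
```

A deployed system would combine several signals, suppress artefacts from sensor noise, and route alerts to clinicians; the point here is only the streaming compare-against-recent-baseline pattern.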
IV. ETHICAL CONSIDERATIONS
While generative AI has numerous applications in healthcare, it also raises important ethical considerations that must be properly addressed to ensure responsible implementation [4, 5, 7]. The primary concern is patient privacy: AI systems often require large amounts of sensitive medical data, such as age, sex, and disease status, for training, which poses high risks in the event of hacking or data leaks [4, 11]. In addition, as data volumes keep growing, including in online repositories, image data in particular can allow a patient's identity to be recovered even after identifying details have been removed [29]. ChatGPT and other AI tools used in healthcare highlight the important role of human oversight and safety in the deployment of AI systems. The second concern is accountability for generated content: these systems can produce large volumes of output whose plagiarism status and authorship are uncertain [30]. The third concern is the accuracy of AI training data, which places an ethical responsibility on healthcare organizations when training AI and integrating it into medical education [31]. If biased AI systems built on incorrect training data are released, they may harm society, for example by limiting individual freedoms and reinforcing existing power dynamics. Such failures erode public trust in technology, which in turn leads to new technologies being rejected [32]. Finally, integrating generative AI into clinical workflows can undermine human-to-human empathy, which is at the core of effective healthcare, and thus weaken the patient-provider relationship. Consequently, strong ethical frameworks must be in place to balance innovation with patient safety, fairness, and trust. The table below summarizes the ethical principles needed to apply AI in healthcare.
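A small part of the privacy concern above, de-identifying records before they are used for training, can be made concrete. This sketch (pure Python) pseudonymizes a toy patient record by dropping the name, replacing the ID with a salted hash, and coarsening the age into a 10-year band; the field names, salt, and rules are invented for illustration, and real healthcare de-identification must follow an applicable standard such as HIPAA's de-identification provisions.

```python
import hashlib

def anonymize(record, salt="demo-salt"):
    """De-identify a toy patient record: drop the name, replace the patient
    ID with a salted hash, and bin the exact age into a 10-year band.
    (All field names and the salt are invented for illustration.)"""
    pseudonym = hashlib.sha256(
        (salt + record["patient_id"]).encode()).hexdigest()[:12]
    lo = (record["age"] // 10) * 10
    return {
        "pseudonym": pseudonym,
        "age_band": f"{lo}-{lo + 9}",
        "diagnosis": record["diagnosis"],
    }

record = {"patient_id": "P-001", "name": "Jane Doe", "age": 47,
          "diagnosis": "type 2 diabetes"}
anon = anonymize(record)
```

Note the limits of the sketch: hashing is deterministic, so the same patient maps to the same pseudonym across datasets, which is useful for linkage but is exactly the kind of re-identification channel that formal privacy frameworks are designed to audit.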
Table 1. Implementation Stages and Principles of Generative AI in Healthcare [4, 30, 31, 32]

Stage             | Principles                                  | Implementation
------------------|---------------------------------------------|-----------------------------------------------------------------
Data Management   | Accuracy, Privacy, Fairness and Security    | Collect and select correct data; anonymize data; secure storage; audit for bias
Model Development | Transparency, Reliability, Fairness         | Adopt Explainable AI (XAI); validate on diverse datasets; benchmark
Deployment        | Safety, Oversight, Accountability           | Safety pilot testing; real-time monitoring and reporting
Governance        | Policy, Regulation, Stakeholder Involvement | Clear policies; ethics boards; regular audits and responsibility
V. CONCLUSION
Generative artificial intelligence is a subset of AI technology that can generate new content such as images, text, and audio from trained data. Due to its capacity to handle vast amounts of complex data and transform it into usable material, it could revolutionise healthcare by offering creative ways to enhance telemedicine, medication research, pharmacovigilance, diagnostics, and treatment planning, and open new avenues for future medical applications. Imagine AI effectively aiding the development of medications to combat HIV or cancer, or AI integration turning a smartphone into a private family doctor. However, AI's implementation must be guided by strong ethical frameworks to ensure patient safety, accountability, and fairness. In short, AI may be powerful, but humans remain at the core, so collaboration between technologists, healthcare professionals, and policymakers is needed to shape an efficient, accessible, and patient-centred healthcare ecosystem in which AI is the doctor's right-hand assistant.
REFERENCES
S. Feuerriegel, J. Hartmann, C. Janiesch, and P. Zschech, “Generative ai,” Business & Information Systems Engineering, vol. 66, no. 1, pp. 111–126, 2024. [Online]. Available: https://doi.org/10.1007/s12599-023-00834-7
F. F.-H. Nah, R. Zheng, J. Cai, K. Siau, and L. Chen, "Generative ai and chatgpt: Applications, challenges, and ai-human collaboration," Journal of Information Technology Case and Application Research, vol. 25, no. 3, pp. 277–304, 2023. [Online]. Available: https://doi.org/10.1080/15228053.2023.2233814
P. P. Ray, "Chatgpt: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope," Internet of Things and Cyber-Physical Systems, vol. 3, pp. 121–154, 2023. [Online]. Available: https://doi.org/10.1016/j.iotcps.2023.04.003
P. Zhang and M. N. Kamel Boulos, “Genera- tive ai in medicine and healthcare: promises, opportunities and challenges,” Future Internet, vol. 15, no. 9, p. 286, 2023.
K. Moulaei et al., “Generative artificial intelligence in healthcare: A scoping review on benefits, challenges and applications,” International Journal of Medical Informatics, 2024. [Online]. Available: https://doi.org/10.1016/j.ijmedinf.2024.105474
H. Sætra, "Generative ai: Here to stay, but for good?" Technology in Society, vol. 75, p. 102372, 2023. [Online]. Available: https://doi.org/10.1016/j.techsoc.2023.102372
Y. Shokrollahi, S. Yarmohammadtoosky, M. Nikahd, P. Dong, X. Li, and L. Gu, “A comprehensive review of generative ai in healthcare,” arXiv preprint, 2023. [Online]. Available: https://arxiv.org/abs/2310.00795
A. Bandi, P. V. S. R. Adapa, and Y. E. V. P. K. Kuchi, "The power of generative ai: A review of requirements, models, input–output formats, evaluation metrics, and challenges," Future Internet, vol. 15, no. 8, p. 260, 2023. [Online]. Available: https://doi.org/10.3390/fi15080260
Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, and P. Fung, "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity," in Proceedings of the Association for Computational Linguistics (ACL), 2024, pp. 675–718. [Online]. Available: https://doi.org/10.18653/v1/2023.ijcnlp-main.45
S. Bond-Taylor, A. Leach, Y. Long, and C. G. Willcocks, “Deep generative modelling: A comparative review of vaes, gans, normalizing flows, energy-based and autoregressive models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 1, pp. 111–126, 2022. [Online]. Available: https://doi.org/10.1109/TPAMI.2021.3116668
S. Reddy, “Generative ai in healthcare: an implementation science informed translational path on application, integration and governance,” Implementation Science, vol. 19, no. 1, p. 27, 2024. [Online]. Available: https://doi.org/10.1186/s13012-024-01357-9
T. Musalamadugu and H. Kannan, "Generative ai for medical imaging analysis and applications," Future Medicine AI, vol. 1, no. 2, 2023. [Online]. Available: https://doi.org/10.2217/fmai-2023-0004
E. Ullah et al., “Challenges and barriers of using large language models (llm) such as chatgpt for diagnostic medicine with a focus on digital pathology–a recent scoping review,” Diagnostic Pathology, vol. 19, no. 1, p. 43, 2024. [Online]. Available: https://doi.org/10.1186/s13000-024-01464-7
R. Hamamoto, K. Suvarna, M. Yamada, K. Kobayashi, N. Shinkai, M. Miyake, M. Takahashi, S. Jinnai, R. Shimoyama, A. Sakai, and K. Takasawa, “Application of artificial intelligence technology in oncology: Towards the establishment of precision medicine,” Cancers, vol. 12, no. 12, p. 3532, 2020. [Online]. Available: https://doi.org/10.3390/cancers12123532
M. Bordukova, N. Makarov, R. Rodriguez-Esteban, F. Schmich, and M. Menden, "Generative artificial intelligence empowers digital twins in drug discovery and clinical trials," Expert Opinion on Drug Discovery, vol. 19, no. 1, pp. 33–42, 2024. [Online]. Available: https://doi.org/10.1080/17460441.2023.2273839
Hasselgren and T. Oprea, "Artificial intelligence for drug discovery: Are we there yet?" Annual Review of Pharmacology and Toxicology, vol. 64, no. 1, pp. 527–550, 2024. [Online]. Available: https://doi.org/10.1146/annurev-pharmtox-040323-040828
X. Zeng, F. Wang, Y. Luo, S. Kang, J. Tang, F. Lightstone, E. Fang, W. Cornell, R. Nussinov, and F. Cheng, "Deep generative molecular design reshapes drug discovery," Cell Reports Medicine, vol. 3, no. 12, 2022. [Online]. Available: https://doi.org/10.1016/j.xcrm.2022.100794
X. Liu, W. Zhang, X. Tong, F. Zhong, Z. Li, Z. Xiong, J. Xiong, X. Wu, Z. Fu, X. Tan, and Z. Liu, "Molfiltergan: a progressively augmented generative adversarial network for triaging ai-designed molecules," Journal of Cheminformatics, vol. 15, no. 1, p. 42, 2023. [Online]. Available: https://doi.org/10.1186/s13321-023-00711-1
F. Ren, X. Ding, M. Zheng, M. Korzinkin, X. Cai, W. Zhu, A. Mantsyzov, A. Aliper, V. Aladinskiy, Z. Cao, and S. Kong, “Alphafold accelerates artificial intelligence powered drug discovery: efficient discovery of a novel cdk20 small molecule inhibitor,” Chemical Science, vol. 14, no. 6, pp. 1443–1452, 2023. [Online]. Available: https://doi.org/10.1039/d2sc05709c
L. Liang, J. Hu, G. Sun, N. Hong, G. Wu, Y. He, Y. Li, T. Hao, L. Liu, and M. Gong, "Artificial intelligence-based pharmacovigilance in the setting of limited resources," Drug Safety, vol. 45, no. 5, pp. 511–519, 2022. [Online]. Available: https://doi.org/10.1007/s40264-022-01170-7
V. Roche, J. Robert, and H. Salam, “A holistic ai-based approach for pharmacovigilance optimization from patients behavior on social media,” Artificial Intelligence in Medicine, vol. 144, p. 102638, 2023. [Online]. Available: https://doi.org/10.1016/j.artmed.2023.102638
S. Singh, R. Kumar, S. Payra, and S. Singh, “Artificial intelligence and machine learning in pharmacological research: bridging the gap between data and drug discovery,” Cureus, vol. 15, no. 8, 2023. [Online]. Available: https://doi.org/10.7759/cureus.44359
P. Pilipiec, M. Liwicki, and A. Bota, "Using machine learning for pharmacovigilance: a systematic review," Pharmaceutics, vol. 14, no. 2, p. 266, 2022. [Online]. Available: https://doi.org/10.3390/pharmaceutics14020266
K. Danysz, S. Cicirello, E. Mingle, B. Assuncao, N. Tetarenko, R. Mockute, D. Abatemarco, M. Widdowson, and S. Desai, “Artificial intelligence and the future of the drug safety professional,” Drug Safety, vol. 42, pp. 491–497, 2019. [Online]. Available: https://doi.org/10.1007/s40264-018-0746-z
J. Pool, M. Indulska, and S. Sadiq, “Large language models and generative ai in telehealth: a responsible use lens,” Journal of the American Medical Informatics Association, 2024. [Online]. Available: https://doi.org/10.1093/jamia/ocae035
J. Ryu, H. Chung, and K. Choi, "Potential role of artificial intelligence in craniofacial surgery," Archives of Craniofacial Surgery, vol. 22, no. 5, p. 223, 2021. [Online]. Available: https://doi.org/10.7181/acfs.2021.00507
H. Bays, A. Fitch, S. Cuda, S. Gonsahn-Bollie, E. Rickey, J. Hablutzel, R. Coy, and M. Censani, "Artificial intelligence and obesity management: an obesity medicine association (oma) clinical practice statement (cps) 2023," Obesity Pillars, vol. 6, p. 100065, 2023. [Online]. Available: https://doi.org/10.1016/j.obpill.2023.100065
M. Cascella, G. Scarpati, E. Bignami, A. Cuomo, A. Vittori, P. Di Gennaro, A. Crispo, and S. Coluccia, "Utilizing an artificial intelligence framework (conditional generative adversarial network) to enhance telemedicine strategies for cancer pain management," Journal of Anesthesia, Analgesia and Critical Care, vol. 3, no. 1, p. 19, 2023.
Arora and A. Arora, "Generative adversarial networks and synthetic patient data: current challenges and future perspectives," Future Healthcare Journal, vol. 9, no. 2, pp. 190–193, 2022. [Online]. Available: https://doi.org/10.7861/fhj.2022-0013
T. Dave, S. Athaluri, and S. Singh, “Chatgpt in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations,” Frontiers in Artificial Intelligence, vol. 6, p. 1169595, 2023. [Online]. Available: https://doi.org/10.3389/frai.2023.1169595
Ijiga, A. Peace, I. Idoko, D. Agbo, K. Harry, C. Ezebuka, and I. Ukatu, "Ethical considerations in implementing generative ai for healthcare supply chain optimization: A cross-country analysis across india, the united kingdom, and the united states of america," International Journal of Biological and Pharmaceutical Sciences Archive, vol. 7, no. 01, pp. 048–063, 2024. [Online]. Available: https://doi.org/10.53771/ijbpsa.2024.7.1.0015
E. Ferrara, "Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies," Sci, vol. 6, p. 3, 2024. [Online]. Available: https://doi.org/10.3390/sci6010003