
Trends in Telemedicine & E-health

Risks Arising from Individuals’ Complete Reliance on Artificial Intelligence in Healthcare

Dr. Adel Abdulrahman Alkhudiri*

King Saud University, Saudi Arabia

*Corresponding author: Dr. Adel Abdulrahman Alkhudiri, King Saud University, Riyadh, Saudi Arabia

Submission: November 19, 2025; Published: December 09, 2025

DOI: 10.31031/TTEH.2025.06.000633

ISSN: 2689-2707
Volume 6 Issue 2

Abstract

The integration of Artificial Intelligence (AI) technologies into healthcare represents a qualitative transformation in medical practice, offering unprecedented capabilities in diagnosis, treatment, and patient management. However, the growing reliance on these systems raises potential risks concerning patient safety, privacy protection, and clinical decision-making mechanisms, in addition to ethical challenges associated with healthcare delivery. This research article aims to systematically analyze these risks and propose practical strategies to mitigate them, thereby achieving the desired balance between technological integration and human oversight. The study seeks to ensure the safe and responsible use of AI in the healthcare sector by offering practical recommendations and an applied framework for AI applications in healthcare.

Keywords: Artificial intelligence; Risks; Safety; Healthcare; Ethics

Introduction

Recent years have witnessed rapid advancements in AI technologies, directly impacting the healthcare sector. These technologies have become pivotal tools in enhancing diagnostic accuracy, supporting clinical decisions, and managing patient data with unprecedented efficiency. This transformation has reshaped medical practices, raising expectations for higher levels of quality, speed, and precision in healthcare delivery. With the proliferation of smart health applications and wearable devices, individuals increasingly rely on AI for medical consultations and instant responses to their symptoms. Nevertheless, this growing dependence on AI systems introduces a set of challenges and risks that cannot be overlooked, including medical safety concerns, privacy protection, fairness in diagnosis and treatment, and ethical and legal dimensions of technology use [1]. The central question thus emerges: how can healthcare systems balance the immense potential of AI with the indispensable role of human expertise in ensuring patient safety and quality of care? This article seeks to analyze the potential risks of complete reliance on AI in healthcare, review relevant regulatory and ethical frameworks, and propose practical and technical strategies to mitigate these risks. Ultimately, the study contributes to shaping a balanced vision that integrates technological innovation with human oversight, ensuring the safe and responsible use of AI in service of humanity [2].

Significance

The significance of this research lies in its focus on human health and the urgent need to raise awareness of the risks associated with complete reliance on AI platforms and applications for self-treatment or medication intake without professional consultation [3]. Awareness represents the most effective tool for protecting individuals and ensuring the responsible use of these technologies. Despite the substantial benefits of AI in improving diagnostics and supporting medical decisions, absolute dependence on it entails multiple risks across three main dimensions:
a. Medical Dimension: Potential diagnostic errors and insufficient human monitoring.
b. Human Dimension: The diminishing role of physicians, weakened patient-provider relationships, and reduced reliance on human expertise in decision-making.
c. Legal and Ethical Dimension: Conflicts between AI-driven decisions and principles of justice, privacy protection, and legal accountability.

Accordingly, this research highlights the risks of complete reliance on AI and underscores the necessity of striking a balance between harnessing its capabilities and preserving the central role of human expertise in safeguarding patient safety and healthcare quality [4].

Concepts and Theoretical Framework

Artificial Intelligence (AI)

A set of systems and algorithms capable of simulating human cognitive abilities, analyzing large datasets, and producing accurate results rapidly.

AI in medicine

Systems capable of analyzing medical data, identifying patterns, and making semi-autonomous decisions, such as interpreting radiological images, analyzing laboratory samples, proposing treatment plans, predicting chronic diseases, offering therapeutic recommendations, and enabling self-monitoring of health conditions through applications [5].

Individual reliance on AI in medicine

Refers to the complete or near-complete dependence of individuals on AI recommendations for managing health symptoms or making medical decisions without consulting a specialist [6]. This growing reliance raises questions about the safety of such practices, particularly in cases of potential errors.

Examples include:
a. Using applications to diagnose symptoms.
b. Following treatment plans suggested by AI systems without physician review.
c. Relying on health chatbots for medical advice.
d. Using smartwatches to determine health indicators and make treatment decisions.

Potential consequences of such practices include:
e. Misdiagnosis due to self-reliance on AI applications.
f. Unsafe treatment management, such as adjusting medication dosages based on automated recommendations without medical supervision.
g. Delayed medical care due to excessive trust in AI self-assessments.

Potential Risks of Complete Reliance on AI

Diagnostic errors

AI systems may produce inaccurate diagnoses due to limited training data or algorithmic biases. Errors may arise from:
a. Inaccurate or biased training datasets, such as underrepresentation of certain populations.
b. Incomplete understanding of the patient’s full health context.
c. Neglect of subtle human factors beyond algorithmic inference.
d. Inappropriate treatment recommendations based on incomplete patient information.
e. Failure to recognize critical cases requiring immediate human intervention.
f. System vulnerabilities and single points of failure [7].

The consequences include delayed treatment, inappropriate interventions, and worsening health conditions.

Excessive dependence and loss of independent judgment

Complete reliance on AI diminishes individuals’ ability to independently assess their health, reducing awareness of symptoms and weakening their role in medical decision-making [8].

Privacy and data protection risks

AI applications collect vast amounts of sensitive health data. Risks include:
a. Data analysis without explicit consent.
b. Patients’ lack of awareness of how their data is used in AI algorithms.
c. Re-identification of individuals from anonymized datasets.
d. Commercial exploitation of health data for non-medical purposes without informed consent.
e. Data breaches and leaks exposing sensitive health information.

Algorithmic risks in healthcare AI

AI systems may reflect biases inherent in training data, leading to:
a. Inaccurate decisions for underrepresented populations.
b. Variations in healthcare quality, with some patients receiving inferior services.
c. Unfair or biased diagnoses conflicting with medical justice [9].
d. Hidden influence on patient choices through imperceptible manipulation of recommendations.
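Disparities of the kind listed above can be surfaced with a routine subgroup audit: computing the same performance metric separately for each demographic group and flagging large gaps. The following is a minimal illustrative sketch in Python; the data, group labels, and the disparity threshold are invented for demonstration and are not drawn from any cited study.

```python
# Illustrative subgroup audit: compare a model's accuracy across
# demographic groups to flag potential algorithmic bias.
# All data is synthetic; the gap threshold is an assumed default.

def subgroup_accuracy(y_true, y_pred, groups):
    """Return accuracy computed separately per demographic group."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (yt == yp), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

def flag_disparities(per_group, max_gap=0.05):
    """Flag groups whose accuracy trails the best group by more than max_gap."""
    best = max(per_group.values())
    return [g for g, acc in per_group.items() if best - acc > max_gap]

# Synthetic example: group "B" is underrepresented and scores lower.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"]

per_group = subgroup_accuracy(y_true, y_pred, groups)
print(per_group)                  # {'A': 0.833..., 'B': 0.5}
print(flag_disparities(per_group))  # ['B']
```

Audits of this form are what references such as [15] (the medical algorithmic audit) formalize at scale: the same metric, disaggregated by population, with explicit thresholds for acceptable disparity.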

Ambiguity in legal and ethical responsibility

The absence of clear legal frameworks exacerbates the risk of patient rights violations in cases of medical error. Questions arise regarding accountability: does it lie with the application developer, the manufacturer, or the individual relying on AI?

Threats to the patient-physician relationship

Complete reliance on AI may reduce human interaction, leading to:
a. Declining trust.
b. Weak continuity of care.
c. Absence of psychological support for patients.
d. Exclusion of elderly or less-educated individuals from quality care.

Decision-making risks

a. Automation bias: uncritical acceptance of AI recommendations.
b. Decline in clinical skills and expertise among healthcare providers.
c. Gaps in understanding and inability of AI to account for unique patient circumstances and preferences.
d. Overreliance on pattern recognition, overlooking rare cases not represented in training data.

Case Studies and Practical Risks in the Use of Artificial Intelligence in Healthcare

Diagnostic AI systems

Several studies have shown that AI-based diagnostic tools may exhibit reduced performance when dealing with underrepresented populations in training datasets, leading to inaccurate or biased outcomes [10].

Excessive reliance on automated recommendations

Documented cases of misdiagnosis have resulted from complete reliance on AI recommendations without human review, highlighting the danger of excluding clinical expertise from decision-making processes.

Patient-oriented AI applications

Some health applications provide therapeutic or preventive advice that may be unsafe, particularly in the absence of clear disclaimers. This can lead patients to delay necessary medical care or make unsafe decisions [11].

Impact of unregulated applications on patient behaviour

A study revealed that approximately 40% of users make treatment decisions based on unregulated applications, underscoring the danger of widespread use of such tools without oversight or official approval [12].

Trust of young physicians in AI systems

Another study found that 30% of young physicians trust AI recommendations more than their own expertise or that of their colleagues, raising concerns about declining reliance on traditional clinical judgment [13].

Methodology

This research article is based on a systematic review of scientific literature, peer-reviewed articles, and relevant studies, in addition to examining regulatory documents and applied case studies. The focus is on a comprehensive analysis of key domains associated with risks in healthcare AI use, including safety, privacy, clinical decision-making, and ethical dimensions [14]. The methodology aims to construct an integrated knowledge framework that bridges theoretical evidence with practical experiences, enabling the evaluation of existing challenges and the proposal of regulatory and technical solutions to promote safe and responsible use of these technologies [15].

Practical Recommendations and Applied Framework for AI in Healthcare

Strengthening governance and regulation

a. Establish clear regulatory frameworks defining the responsibilities of different stakeholders in AI use.
b. Adopt auditing and periodic review protocols to ensure compliance with quality and safety standards [16-20].

Human oversight and accountability

a. Guarantee human supervision of AI-assisted medical decisions.
b. Define clear legal and ethical accountability mechanisms in cases of errors or deviations.

Post-market surveillance

a. Create systems to monitor the real-world performance of AI systems after clinical integration.
b. Develop mechanisms for early detection of technical issues or potential medical errors.

Educational and training initiatives

a. Train healthcare providers on the limitations and capabilities of AI.
b. Educate patients on the appropriate use of these technologies to enhance trust and awareness.
c. Develop curricula that preserve the independence of human clinical decision-making.

Integration of AI and human expertise

a. Design AI systems as decision-support tools rather than replacements for physicians.
b. Strengthen partnerships between AI and healthcare practitioners to achieve maximum efficiency and safety [21-26].

Strategies for risk mitigation in healthcare AI

a. Develop a comprehensive regulatory framework aligned with international standards.
b. Establish healthcare-specific legislation defining boundaries for AI use in diagnosis and treatment.
c. Mandate rigorous verification and approval processes before adoption of AI systems.
d. Enforce post-market monitoring requirements to detect side effects or technical issues.
e. Design AI tools that support, rather than replace, human decision-making.
f. Implement continuous monitoring mechanisms to detect errors or system failures.
g. Enhance digital literacy among healthcare professionals through specialized training programs [27].

Regulatory and Ethical Frameworks for AI in Healthcare

WHO guidelines

The World Health Organization has established a reference framework for integrating AI into healthcare, emphasizing quality monitoring, transparency, and patient data protection. This includes:
a. Regulating AI use in line with global health standards.
b. Establishing clear accountability mechanisms for errors or deviations.
c. Developing formal accreditation and periodic review systems to ensure compliance with safety and quality standards [28].

Ethical principles

International guidelines stress adherence to core principles in healthcare AI use, including:
a. Transparency in decision-making processes.
b. Fairness and non-discrimination to ensure equal care for all populations.
c. Protection of privacy and safeguarding sensitive medical data [29].
d. Ensuring patient safety and preventing harm from reliance on AI systems.

Legal regulations

There is a pressing need for clear legislation defining responsibility in AI-assisted medical decisions to protect patients and build trust. While legal frameworks in Arab countries remain in early stages, growing interest is evident in developing regulations that include:
a. Protection of medical data and confidentiality.
b. Safeguarding patient rights and strengthening trust in modern technologies [30].
c. Providing safety guarantees when integrating AI into clinical practice.

Technical Solutions in Healthcare AI

a. Explainable AI (XAI): Develop systems capable of clarifying decision-making processes, enhancing transparency, and enabling healthcare providers to understand the basis of diagnoses or treatment recommendations.
b. Robust Testing Across Diverse Populations: Subject AI systems to comprehensive trials covering varied demographic groups to ensure accuracy, fairness, and quality of care [31].
c. Continuous Performance Monitoring: Establish technical mechanisms for ongoing evaluation of AI systems in clinical practice, enabling early detection of errors and continuous improvement.
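The continuous-monitoring mechanism in point (c) can be as simple as tracking a deployed model's accuracy over a sliding window of recent, human-confirmed cases and alerting when it drops below a baseline. The sketch below is a minimal illustration; the window size and alert threshold are assumed defaults, not values from any standard.

```python
from collections import deque

class PerformanceMonitor:
    """Track the rolling accuracy of a deployed AI system against
    human-confirmed outcomes and alert on degradation.
    Window size and threshold are illustrative defaults."""

    def __init__(self, window=100, threshold=0.90):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, ai_prediction, confirmed_outcome):
        """Log one case; returns True while performance is acceptable."""
        self.window.append(ai_prediction == confirmed_outcome)
        return not self.degraded()

    def rolling_accuracy(self):
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def degraded(self):
        """Alert only once the window is full, to avoid noisy starts."""
        if len(self.window) < self.window.maxlen:
            return False
        return self.rolling_accuracy() < self.threshold

# Example: 20-case window, alert if accuracy falls below 90%.
monitor = PerformanceMonitor(window=20, threshold=0.90)
for _ in range(20):
    monitor.record(1, 1)    # 20 correct cases: no alert
for _ in range(5):
    monitor.record(1, 0)    # 5 recent errors push accuracy to 75%
print(monitor.rolling_accuracy(), monitor.degraded())  # 0.75 True
```

In practice, post-market surveillance frameworks layer review workflows and regulatory reporting on top of this kind of signal; the point here is only that degradation detection can begin with a very small amount of infrastructure.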

Educational Initiatives in Healthcare AI

a. Training Healthcare Providers on AI Limitations: Specialized programs to raise awareness among physicians and practitioners about the boundaries of AI capabilities, ensuring safe and informed use.
b. Patient Education on Responsible AI Use: Awareness campaigns to help patients understand the role of AI in diagnosis and treatment, emphasizing the importance of consulting physicians.
c. Preserving Clinical Decision-Making Skills: Curricula designed to strengthen critical thinking and independent medical judgment, ensuring AI remains a supportive tool rather than a substitute for human expertise.

Policy Frameworks for AI in Healthcare

a. Human Oversight Requirements: Ensure direct human supervision of AI-assisted decisions, with final responsibility resting on healthcare practitioners.
b. Clear Accountability Structures: Define legal and ethical mechanisms for responsibility in cases of AI-related errors [32].
c. Regular Auditing Protocols: Enforce periodic reviews and comprehensive audits of AI systems to verify compliance with safety and quality standards.

Recommendations

a. AI should be used as a supportive tool, not a complete substitute.
b. Strengthen integration between AI and human expertise under physician supervision.
c. Raise public awareness of AI limitations and emphasize the importance of medical consultation.
d. Develop clear legal frameworks regulating AI use in medicine and defining responsibilities.
e. Ensure robust data governance and user privacy protection [33].
f. Adopt strict cybersecurity standards for health data protection.
g. Verify the quality of health applications through accredited regulatory bodies.
h. Support scientific research to improve algorithmic accuracy and reduce bias.
i. Develop explainable AI systems whose decisions can be easily interpreted and reviewed [34].

Conclusion

Artificial intelligence represents a fundamental transformation in healthcare [35]. However, complete reliance on it by individuals entails medical, legal, and ethical risks that cannot be ignored. This research emphasizes the necessity of balancing AI use with the central role of human expertise to ensure patient safety and healthcare quality. The optimal future lies not in replacing physicians with AI, but in establishing a balanced partnership between them to achieve the highest levels of efficiency and safety.

References

  1. Beauchamp TL, Childress JF (2019) Principles of biomedical ethics. Oxford University Press, United Kingdom.
  2. Blease C, Locher C, Leon CM, Doraiswamy M (2020) Artificial intelligence and the future of psychiatry: Qualitative findings from a global physician survey. Digital Health 6: 1-18.
  3. Cabitza F, Rasoini R, Gensini GF (2017) Unintended consequences of machine learning in medicine. JAMA 318(6): 517-518.
  4. Glenn CI, Ruben A, Anand S, Bin X, Bernard L (2020) The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Affairs 33(7): 1139-1147.
  5. Andre E, Alexandre R, Bharath R, Volodymyr K, Mark D, et al. (2019) A guide to deep learning in healthcare. Nature Medicine 25(1): 24-29.
  6. El HM (2021) Artificial intelligence in healthcare: Hope and risk. Egyptian Journal of Information Systems.
  7. FDA (2021) Artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) action plan.
  8. Samuel GF, John DB, Joichi I, Jonathan LZ, Andrew LB, et al. (2019) Adversarial attacks on medical machine learning. Science 363(6433): 1287-1289.
  9. Juan MGG, Vicent BS, José CBC, Jaime CC, Ascensión DM (2023) Functional requirements to mitigate the risk of harm to patients from AI in healthcare. ArXiv, pp. 1-14.
  10. Goddard K, Roudsari A, Wyatt JC (2012) Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association 19(1): 121-127.
  11. IMDRF (2015) Software as a medical device (SaMD): Application of quality management system.
  12. Institute for Healthcare Improvement (n.d.) Artificial intelligence in health care.
  13. Kocaballi AB (2020) The personalization of cognitive authority. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems pp. 1-13.
  14. Gabriel L, Nina GH, Jin KJ, Meeyoung C (2022) The conflict between explainable and accountable decision-making algorithms. ArXiv, pp. 1-18.
  15. Xiaoxuan L, Ben G, Melissa MM, Marzyeh G, Alastair KD, et al. (2021) The medical algorithmic audit. The Lancet Digital Health 4(5): e384-e397.
  16. Artificial intelligence in healthcare: Between ethics, law, benefits, and risks.
  17. Steven MW, Victor P (2024) Balancing privacy and progress in AI-driven healthcare. Applied Sciences 14(2): 675.
  18. Ministry of Health-Saudi Arabia (2023) Guidelines for regulating artificial intelligence in the health sector. Riyadh: Ministry of Health.
  19. Najah Net (n.d.) Risks of artificial intelligence in the medical field and patient health.
  20. Sehatok (n.d.) Risks of artificial intelligence on healthcare and the health sector.
  21. Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464): 447-453.
  22. Papernot N, McDaniel P, Goodfellow I (2016) Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. ArXiv.
  23. Price WN, Cohen IG (2019) Privacy in the age of medical big data. Nature Medicine 25(1): 37-43.
  24. Price WN, Sara G, Glenn CI (2019) Potential liability for physicians using artificial intelligence. JAMA 322(18): 1765-1766.
  25. Rocher L, Hendrickx JM, de MYA (2019) Estimating the success of re-identification in incomplete datasets using generative models. Nature Communications 10(1): 3069.
  26. Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1(5): 206-215.
  27. Topol EJ (2019) High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine 25(1): 44-56.
  28. Vyas DA, Eisenstein LG, Jones DS (2020) Hidden in plain sight-reconsidering the use of race correction in clinical algorithms. New England Journal of Medicine 383(9): 874-882.
  29. Ploug T, Holm S (2020) The four dimensions of contestable AI diagnostics, a patient-centric approach to explainable AI. Artificial Intelligence in Medicine 107: 101901.
  30. WHO (2021) Ethics and governance of artificial intelligence for health.
  31. Wiegand T (2021) Toward a framework for evaluating patient-facing digital health tools. Journal of Medical Internet Research 23(10): e30676.
  32. World Health Organization (2023) Regulation of artificial intelligence for health.
  33. WHO (2021) AI ethics and governance guidance for large multi-modal models.
  34. Zhang J (2022) Patient use and clinical recommendations of health apps: A mixed methods study. JMIR mHealth and uHealth 10(1): e28185.
  35. FDA (2022) Digital health software precertification program.

© 2025 Dr. Adel Abdulrahman Alkhudiri. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.
