
Trends in Telemedicine & E-health

The Double-Edged Scalpel: The Risks of Relying Too Much on AI in Telehealth

Zuheir N Khlaif*

Artificial Intelligence and Virtual Reality Research Center, An Najah National University, Palestine

*Corresponding author: Zuheir N Khlaif, Artificial Intelligence and Virtual Reality Research Center, An Najah National University, Nablus, Palestine

Submission: February 19, 2025; Published: March 05, 2025

DOI: 10.31031/TTEH.2025.05.000612

ISSN: 2689-2707
Volume 5 Issue 3

Opinion

The rise of AI in healthcare

Artificial Intelligence (AI) has significantly changed the way healthcare is delivered, especially in telehealth. From symptom-checking apps to advanced diagnostic tools, AI has made medical care more accessible, particularly for people in remote areas [1]. It has helped ease the workload of healthcare professionals and improved efficiency in patient care [2]. However, this progress comes with risks. While AI can assist medical professionals, relying too much on it may weaken essential skills, reduce professional accountability, and raise ethical concerns. If left unchecked, AI dependence could lead to serious problems in healthcare, including medical errors and a decline in human-centered care. To avoid these pitfalls, we must find a balance: combining human expertise with AI's capabilities while ensuring ethical and responsible use.

The benefits and risks of AI in telehealth

AI has transformed healthcare by analysing vast amounts of data, predicting diseases, and assisting in diagnoses. Chatbots provide around-the-clock medical advice, and AI-powered imaging tools can detect health conditions faster than human doctors. These advancements are especially valuable in areas with limited medical resources [3]. But as healthcare providers rely more on AI, new challenges arise:
The loss of essential skills (skills atrophy): A series of experimental studies in different contexts, such as education and the social sciences, suggests that over-reliance on AI can produce skills atrophy, a finding consistent with Ali's study [1]. When doctors and nurses depend too much on AI for diagnosing illnesses or analysing patient history, they may lose critical thinking skills [1]. Just as using GPS too often can weaken a person's ability to navigate, AI can dull medical professionals' instincts.

Reduced professional curiosity: AI's convenience can sometimes lead to professional complacency. If an AI system can summarize medical research, some doctors might stop keeping up with new studies. If chatbots handle patient interactions, healthcare providers may lose their ability to communicate with empathy. Over time, this can weaken the doctor-patient relationship and reduce trust in the healthcare system [4].

Ethical dilemmas and risk of negligence: AI is not perfect, yet some medical professionals may blindly trust its recommendations without questioning them. This "automation bias" can lead to serious mistakes. For example, if an AI system labels a patient's symptoms as low-risk, a nurse might dismiss the patient's concerns, even if their instincts suggest otherwise. Similarly, a therapist might rely on an AI emotion-detection tool instead of actively listening to a patient. In such cases, AI can unintentionally contribute to negligence or malpractice.

Strategies to lessen AI dependence

A smarter approach: human-AI collaboration: To prevent these risks, we must shift our mindset. AI should not replace human decision-making but support it. A "hybrid intelligence" model, in which AI and human expertise complement each other, offers the best way forward. Here is how that balance can be achieved through several strategies:

Supporting, not replacing, decision-making: AI tools should be designed to assist healthcare professionals rather than make decisions for them. For example, an AI system can highlight potential drug interactions, but a doctor should consider a patient’s full medical history before making a decision. Telehealth platforms should also encourage providers to think critically by requiring them to justify any decisions that differ from AI recommendations.

Ongoing training and education: Medical professionals must continuously learn how AI works-not just how to use it. Healthcare institutions should provide training that teaches doctors and nurses how to identify biases in AI, interpret its recommendations, and recognize situations where human judgment is more reliable than machine-generated results. One effective way to develop these skills is through interactive simulations where doctors compete against AI in diagnosing conditions.

Clear rules and accountability: Governments and healthcare organizations must establish strict guidelines for AI use in medicine. Telehealth services should disclose when AI is involved in patient care, and there must be clear rules about who is responsible for medical errors, whether it is the AI developers, the healthcare provider, or both. The European Union's AI Act, which treats medical AI as "high-risk" and requires strict testing, is a good example of how regulations can protect both patients and healthcare professionals.

Building a culture of responsible AI use: Technology alone cannot prevent AI misuse; cultural change is also necessary.

Healthcare professionals, patients, and policymakers must work together to promote responsible AI use in telehealth. This can be done by:
a. Raising awareness about cases where AI reliance has led to mistakes, alongside success stories where AI and human expertise worked well together.
b. Encouraging healthcare providers to verify AI recommendations with peer reviews or second opinions.
c. Educating patients about their right to ask whether AI was used in their treatment and to request human oversight when needed.

Conclusion: keeping humanity at the heart of healthcare

AI is neither a saviour nor a threat; it is a tool. But like any tool, its value depends on how we use it. By combining AI's efficiency with human intuition, compassion, and ethical responsibility, we can create a healthcare system that is both innovative and patient-centered. The future of telehealth should not be about choosing between human doctors and AI, but about ensuring that they work together to provide the best care possible. With the right approach, we can build a system where healthcare professionals stay sharp, patients feel valued, and AI serves as a helpful assistant rather than a risky replacement.

References

  1. Ali M (2025) Will AI reshape or deform pharmacy education? Currents in Pharmacy Teaching and Learning 17(3): 102274.
  2. Choi J, Woo S, Ferrell A (2025) Artificial intelligence assisted telehealth for nursing: A scoping review. Journal of Telemedicine and Telecare 31(1): 140-149.
  3. Snoswell CL, Snoswell AJ, Kelly JT, Caffery LJ, Smith AC (2025) Artificial intelligence: Augmenting telehealth with large language models. Journal of Telemedicine and Telecare 31(1): 150-154.
  4. Bueter A, Jukola S (2025) Multi-professional healthcare teams, medical dominance, and institutional epistemic injustice. Medicine, Health Care and Philosophy pp. 1-14.

© 2025 Zuheir N Khlaif. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.

