Automated Indifference: Artificial Intelligence and Human Rights in Healthcare

Authors

  • Yichuan Wang, Associate Professor, Management School, University of Sheffield, Sheffield S10 2TN, United Kingdom. Email: yichuan.wang@sheffield.ac.uk

DOI:

https://doi.org/10.71426/jassh.v1.i1.pp15-19

Keywords:

Algorithmic bias, Medical ethics, Human rights, Black box medicine, Informed consent, AI Governance

Abstract

The integration of Artificial Intelligence (AI) into global healthcare systems represents a seismic shift in medical epistemology, yet it resurrects Gilbert Ryle’s philosophical concept of the "ghost in the machine" in a troubling new form. No longer a critique of dualism, the "ghost" has mutated into the unexamined biases and opaque logic structures codified within deep learning networks. This paper investigates the tension between "algorithmic determinism" and fundamental human rights, specifically focusing on non-discrimination and informed consent. Utilizing a secondary data analysis methodology, the study synthesizes critical findings from 2021 through early 2025, including the landmark Estate of Lokken v. United Health Group class-action lawsuit and recent data on diagnostic error rates. The research identifies a "responsibility gap" where moral agency is ceded to probabilistic outputs, exemplified by the nH Predict algorithm’s 90% error rate in denying post-acute care. The findings suggest that without a radical restructuring of ethical governance—moving beyond voluntary principles to enforceable human rights impact assessments—healthcare risks descending into a state of automated indifference. Key themes include the erosion of the fiduciary relationship, the "black box" barrier to consent, and the systemic erasure of marginalized demographics in predictive modeling.

References

[1] Hashmi AM. Ghost in the Machine: Artificial Intelligence in Medical Education. Annals of King Edward Medical University 2025 Jun 30;31(Spl2):97–98. Available from: https://doi.org/10.21649/akemu.v31iSpl2.6163

[2] Milossi M, Alexandropoulou-Egyptiadou E, Psannis KE. AI ethics: algorithmic determinism or self-determination? The GPDR approach. IEEE Access. 2021 Apr 12;9:58455–66. Available from: https://doi.org/10.1109/ACCESS.2021.3072782

[3] Anderson JW, Visweswaran S. Algorithmic individual fairness and healthcare: a scoping review. JAMIA Open. 2025 Feb;8(1):ooae149. Available from: https://doi.org/10.1093/jamiaopen/ooae149

[4] Narayanan A, Kapoor S. AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Princeton University Press; 2025. Available from: https://www.torrossa.com/en/resources/an/6055496

[5] Birrane K, Tobey D, Kopans D, Cloud W. Lawsuit over AI usage by Medicare Advantage plans allowed to proceed. DLA Piper; 2025 Feb 24. Available from: https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/lawsuit-over-ai-usage-by-medicare-advantage-plans-allowed-to-proceed

[6] Schiff GD. AI-Driven Clinical Documentation—Driving Out the Chitchat? New England Journal of Medicine. 2025 May 15;392(19):1877–9. Available from: https://doi.org/10.1056/NEJMp2416064

[7] Chau M, Rahman MG, Debnath T. From black box to clarity: Strategies for effective AI informed consent in healthcare. Artificial Intelligence in Medicine. 2025 May 24:103169. Available from: https://doi.org/10.1016/j.artmed.2025.103169

[8] Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al. Case No. 0:23-cv-03514 (D. Minn. 2023). Available from: https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/11/Estate-of-Gene-B.-Lokken-et-al_20231114_COMPLAINT.pdf

[9] Gore MN, Olawade DB. Harnessing AI for public health: India's roadmap. Frontiers in Public Health. 2024 Sep 27;12:1417568. Available from: https://doi.org/10.3389/fpubh.2024.1417568

[10] World Health Organization. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models. Geneva: World Health Organization; 2024 Jan 18. Available from: https://www.who.int/publications/i/item/9789240084759

[11] Formosa P, Rogers W, Griep Y, Bankins S, Richards D. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Computers in Human Behavior. 2022 Aug 1;133:107296. Available from: https://doi.org/10.1016/j.chb.2022.107296

[12] Correia PM, Pedro RL, Videira S. Artificial Intelligence in Healthcare: Balancing Innovation, Ethics, and Human Rights Protection. Journal of Digital Technologies and Law. 2025;3(1):143–180. Available from: https://doi.org/10.21202/jdtl.2025.7

[13] Parry MW, Markowitz JS, Nordberg CM, Patel A, Bronson WH, DelSole EM. Patient perspectives on artificial intelligence in healthcare decision making: a multi-center comparative study. Indian Journal of Orthopaedics. 2023 May;57(5):653–65. Available from: https://doi.org/10.1007/s43465-023-00845-2

[14] van Leersum CM, Maathuis C. Human centred explainable AI decision-making in healthcare. Journal of Responsible Technology. 2025 Mar 1;21:100108. Available from: https://doi.org/10.1016/j.jrt.2025.100108

[15] Aizenberg E, Van Den Hoven J. Designing for human rights in AI. Big Data & Society. 2020 Aug;7(2):2053951720949566. Available from: https://doi.org/10.1177/2053951720949566

[16] Molbæk-Steensig H, Scheinin M. Human Rights and Artificial Intelligence in Healthcare-Related Settings: A Grammar of Human Rights Approach. European Journal of Health Law. 2025 May 13;32(2):139–64. Available from: https://brill.com/view/journals/ejhl/32/2/article-p139_2.xml

[17] Lysaght T, Lim HY, Xafis V, Ngiam KY. AI-assisted decision-making in healthcare: the application of an ethics framework for big data in health and research. Asian Bioethics Review. 2019 Sep;11(3):299–314. Available from: https://doi.org/10.1007/s41649-019-00096-0

[18] Hoxhaj O, Halilaj B, Harizi A. Ethical implications and human rights violations in the age of artificial intelligence. Balkan Social Science Review. 2023 Dec 25;22(22):153–71. Available from: https://www.ceeol.com/search/article-detail?id=1207107

[19] Nouis SC, Uren V, Jariwala S. Evaluating accountability, transparency, and bias in AI-assisted healthcare decision-making: a qualitative study of healthcare professionals’ perspectives in the UK. BMC Medical Ethics. 2025 Jul 8;26(1):89. Available from: https://doi.org/10.1186/s12910-025-01243-z

[20] Elgin CY, Elgin C. Ethical implications of AI-driven clinical decision support systems on healthcare resource allocation: a qualitative study of healthcare professionals’ perspectives. BMC Medical Ethics. 2024 Dec 21;25(1):148. Available from: https://doi.org/10.1186/s12910-024-01151-8

Published

2026-02-03

How to Cite

Automated Indifference: Artificial Intelligence and Human Rights in Healthcare. (2026). Journal of Applied Social Sciences and Humanities, 1(1), 15–19. https://doi.org/10.71426/jassh.v1.i1.pp15-19