ORIGINAL ARTICLES
Year : 2022  |  Volume : 9  |  Issue : 2  |  Page : 160-167

Emerging ethical dilemmas in the use of intelligent computer programs in decision-making in health care: an exploratory study


1 Department of Physiology, Yenepoya Medical College, Yenepoya (Deemed to be University), Deralakatte, Mangalore 575018, Karnataka, India
2 Centre for Ethics and Forensic Medicine, Yenepoya Medical College, Yenepoya (Deemed to be University), Deralakatte, Mangalore 575018, Karnataka, India

Date of Submission: 15-Mar-2022
Date of Acceptance: 19-Apr-2022
Date of Web Publication: 17-Jun-2022

Correspondence Address:
Dr. Padmini Thalanjeri
Department of Physiology, Yenepoya Medical College, Yenepoya (Deemed to be University), Deralakatte, Mangalore 575018, Karnataka
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/mgmj.mgmj_34_22

Abstract

Background: Medical professionals are under tremendous stress due to various occupational stressors, and Artificial Intelligence (AI) geared toward patient care might look like a preferable solution for alleviating some of that stress. Hence, this study assessed health professionals' awareness of intelligent computer programs in independent healthcare decision-making and their perception of the associated ethical dilemmas.

Materials and Methods: The present study is a cross-sectional, non-interventional, questionnaire-based descriptive study conducted in a Deemed to be University Hospital, Karnataka, India. Of the 96 participants, 30 were medical participants, 36 were dentists, and 30 were nurses. A pretested and validated questionnaire was used to collect the participants' responses.

Results: Medical and nursing participants opined that intelligent computer programs can take both major and minor independent decisions in patient care when the physician is unavailable. The majority of the participants felt that in decisions made by intelligent computer programs, patients' rights and wishes might not be respected, compromising autonomy. The majority agreed that computer-assisted information extraction helps in better treatment of patients, thereby promoting beneficence. Medical and dental participants thought that intelligent computer programs cannot communicate well with patients, do not have a conscience, and can be hacked, thereby causing maleficence. Participants opined that the use of intelligent computer programs could serve justice in the form of equity and equality in health care.

Conclusion: Breach of patient autonomy due to data mining, loss of confidentiality, and disrespect for patients' rights and wishes are major concerns when AI takes independent decisions in health care. One of the most desirable outcomes of AI in health care could be increased equity and equality in the reach of health care to the rural population.

Keywords: Artificial intelligence, computer, ethics, health


How to cite this article:
Balakrishnan G, Vaswani V, Thalanjeri P. Emerging ethical dilemmas in the use of intelligent computer programs in decision-making in health care: an exploratory study. MGM J Med Sci 2022;9:160-7

How to cite this URL:
Balakrishnan G, Vaswani V, Thalanjeri P. Emerging ethical dilemmas in the use of intelligent computer programs in decision-making in health care: an exploratory study. MGM J Med Sci [serial online] 2022 [cited 2022 Jul 6];9:160-7. Available from: http://www.mgmjms.com/text.asp?2022/9/2/160/347691




Introduction


John McCarthy, a computer scientist, coined the term Artificial Intelligence (AI) in 1955 and defined it as "The science and engineering of making intelligent machines, especially intelligent computer programs."[1],[2] Brain implants in quadriplegic patients, AI as a decision support system, AI programs that organize bed schedules and staff rotation rosters, laboratory information systems, and robotic surgical systems are a few examples that have proved repeatedly that present-day health care is in the throes of a technological revolution, and we as healthcare workers should keep pace with it. The technology giant Google, a pioneer in AI, has created software for predicting mortality among patients admitted to hospitals (with 95% accuracy) using hospital information, prior diagnoses, vital signs, and laboratory results.[3] This shows that the latest medical technologies in AI, along with the exponential growth in human knowledge, can change not only the quality of medical care provided but also the quality of life.[4],[5],[6],[7]

Medical professionals are under stress due to various occupational stressors such as time pressure, a high workload, and a predisposition to emotional responses from constant exposure to patients' suffering and death.[8],[9],[10],[11] Even though AI geared toward patient care might look like a preferable solution for alleviating some of the stress on heavily burdened medical professionals, it is important to remember that AI is not yet robust and competent enough to appreciate and replicate all dimensions of the human brain: it cannot learn from past mistakes, nor can it consolidate pieces of medical evidence and apply them to the present-day healthcare scenario.[5] This could create an imbalance between beneficence and maleficence in patient care. Hence, it is worthwhile to decode and analyze the ethical ramifications of behavior involving machines in health care.[12],[13] Efforts have also been made to explore the possibility of harnessing AI in ethical decision-making in health care. One such computer program, MedEthEx, was created as a step toward an AI that communicates with those in need of healthcare services in a way that is perceptive to ethical issues.[14],[15]

An important ethical ramification of AI as an independent caregiver could be the erosion of the physician–patient relationship through the loss of tactile communication. Tactile communication, a key component of the physician–patient relationship, is the communication between physician (diagnostic touch) and patient (healing touch).[13],[16] This touch tends to get lost when an intelligent computer program becomes the healthcare giver. But can this be overlooked or underplayed, given that minimizing direct physical contact during the COVID pandemic is the need of the hour? With the world reeling under the pandemic, the medical fraternity has started exploring technology-driven alternatives for effective and speedy delivery of health care, especially for rapid testing, interpretation, and diagnosis of patients' illnesses, minimizing direct physical contact with patients and improving workflow among health professionals, thus promoting the overall efficiency of healthcare delivery. Customizing and developing technology based on the present health scenario and the needs of health professionals, especially during the present pandemic, could be the cornerstone of a future in which AI independently decides a morally correct strategy[17] for the treatment of patients. Considering all of this, it is worthwhile to gain better insight into the various ethical dilemmas healthcare professionals may confront if AI or intelligent computer programs take independent healthcare decisions.

Principlism, an applied ethics approach propounded by Beauchamp and Childress,[18] is a popular bioethics framework for exploring the ethical ramifications of a moral dilemma in health care. It rests on four pillars: patient autonomy, beneficence, nonmaleficence, and justice.[19] This approach has been used in our study. Current healthcare practice involves the collaboration of providers from different specialties working as a team toward the same focussed goal of effective healthcare delivery.[20] One such interprofessional collaboration is that of physicians/surgeons, dentists, and nurses. As there is a paucity of literature exploring the ethical dilemmas that could arise from the use of intelligent computer programs in independent decision-making, we undertook this study to assess the awareness and perception of health professionals (physicians/surgeons, dentists, and nurses) regarding intelligent computer programs in independent decision-making in health care, and to determine the ethical dilemmas these independent decisions could raise with respect to upholding the ethical principles of patient autonomy, beneficence, nonmaleficence, and justice.


Materials and methods


The present study is a cross-sectional, non-interventional, questionnaire-based descriptive study conducted in a Deemed to be University Hospital in Mangalore, India. The investigators approached the Institutional Ethics Committee before the commencement of the study, and the study was approved by the Committee (Protocol No. 2016/289). We obtained the participants' written informed consent before enrolling them. We included faculty from our university with a minimum of 5 years of experience in their field of expertise. Of the 150 participants recruited, 96 responded, of whom 30 were physicians and surgeons (N1, medical participants), 36 were dentists (N2, dental participants), and 30 were nurses (N3, nurse participants). We employed purposive sampling with complete enumeration of the participants who completed the questionnaire. Other health professionals such as lab technicians, OT technicians, dialysis technicians, clinical assistants, and ward secretaries were excluded.

The details and purpose of the study were explained to the participants, who were then given a pre-tested, validated, semi-structured questionnaire. The term "intelligent computer programs" was used in the questionnaire instead of "AI" to make it easier for participants to understand. The questionnaire was developed after a needs assessment relevant to the study, in consultation with ethics experts, and its items were constructed taking into account the views and inputs of ethics and medical experts. The questionnaire was subjected to face and content validation, followed by pre-testing, and the pre-test responses were subjected to principal component analysis. Internal consistency was ensured before administering it to the study participants. The questionnaire assessed the participants' awareness of intelligent computer programs in health care and their perception of the ethical dilemmas that both patients and health professionals might face if intelligent computer programs took independent healthcare decisions in both the diagnosis and the management of a patient, focussing on the principles of patient autonomy, beneficence, nonmaleficence, and justice. Of the 28 questions, 25 collected responses on a 5-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree); one had a yes/no option, one was multiple-choice, and one was open-ended. The average response rate was 64% (medical participants 60%, dental participants 72%, nurse participants 60%), achieved with all our efforts and precautions to minimize non-responder bias.
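The paragraph above states that internal consistency of the questionnaire was ensured after pre-testing, without reporting the statistic used. As an illustration only, the sketch below computes Cronbach's alpha, a common internal-consistency coefficient for Likert items; the pilot response matrix is hypothetical, not the authors' data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of scores
    from the same respondents (coded 1 = strongly disagree .. 5 = strongly agree)."""
    k = len(items)               # number of items
    n = len(items[0])            # number of respondents

    def var(xs):                 # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical pilot data: 3 items answered by 5 respondents
pilot = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 3, 4, 1],
]
print(round(cronbach_alpha(pilot), 2))  # prints 0.93
```

By convention, alpha above roughly 0.7 is taken to indicate acceptable internal consistency for a scale.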

Statistical analysis

The Statistical Package for the Social Sciences (SPSS) version 26.0 and MS Excel were used for analysis. Descriptive data are presented as frequencies and percentages, and the results are summarized in tables.
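Since the analysis is purely descriptive (frequencies and percentages of Likert responses, as stated above), the tabulation can be sketched as follows. The response data here are hypothetical, for illustration only; the study's own counts appear in Tables 1-6.

```python
from collections import Counter

# The 5-point Likert scale used for 25 of the 28 questionnaire items
SCALE = ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]

def summarize(responses):
    """Return {option: (frequency, percentage)} for one Likert item."""
    counts = Counter(responses)
    n = len(responses)
    return {opt: (counts[opt], round(counts[opt] * 100 / n, 1)) for opt in SCALE}

# Hypothetical responses from n = 96 participants to a single item
sample = (["agree"] * 40 + ["strongly agree"] * 20 + ["neutral"] * 16
          + ["disagree"] * 12 + ["strongly disagree"] * 8)
print(summarize(sample))
```

Running `summarize` over each item and splitting by professional group (medical, dental, nurse) would reproduce the frequency-and-percentage layout of the result tables.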


Results


The majority of the participants (93.3% of medical, 72.2% of dental, and 46.7% of nurse participants) were aware of AI in the form of intelligent computer programs in patient care. [Table 1] shows that the participants agreed that intelligent computer programs play a minor role at present and can play a major role in patient care in the future. Medical and nurse participants thought that intelligent computer programs can take both major and minor independent decisions in patient care when the physician is unavailable; in contrast, the dental participants disagreed. The majority of the participants perceived that in decisions made by intelligent computer programs, patients' rights and wishes might not be respected, hence compromising autonomy [Table 2]. Most participants agreed that a computer extracting information from other solved medical cases and promptly applying it to similar cases could lead to better patient outcomes, thereby promoting beneficence [Table 3]. Medical and dental participants thought that intelligent computer programs cannot communicate well with users to prevent clinically harmful misunderstandings, do not have a conscience, and can be hacked, and hence can cause maleficence [Table 4]. The majority of nurses (73%) opined that health professional–patient relationships could deteriorate because of the loss of the "healing touch" when the computer takes patient care decisions. Participants largely opined that the use of computers in decision-making could serve justice in the form of equity and equality in the reach of health care to patients [Table 5]. The majority of medical and dental participants felt that intelligent computer programs would in the future play an important role in physiotherapy, laboratory procedures, and cardiac perfusion during cardiac operations, whereas nurse participants felt they would play an important role in the diagnosis and treatment of patients. The majority of medical and dental participants agreed that health professionals should be held responsible and accountable for an error occurring due to a technical problem with the use of intelligent computer programs in health care [Table 6].
Table 1: Perception of participants on the use of intelligent computer programs in decision-making in health care (n = 96)
Table 2: Perception of participants regarding patient autonomy when intelligent computer programs take independent decisions in patient care (n = 96)
Table 3: Perception of participants on beneficence when intelligent computer programs take independent decisions in patient care (n = 96)
Table 4: Perception of participants on nonmaleficence when intelligent computer programs take independent decisions in patient care (n = 96)
Table 5: Perception of participants on justice when intelligent computer programs take independent decisions in patient care (n = 96)
Table 6: Perception of participants on the onus of responsibility and accountability when an error occurs due to a technical problem while implementing the decision taken by an intelligent computer program (n = 96)



Discussion


Experts claim that AI or computer intelligence exceeding human intelligence might come into existence in the near future.[21] Creating such intelligence would be the biggest revolutionary event in human history. In the early days, when AI was developing and computer programs were slowly making inroads into health care, it almost looked like a utopia and a solution to many logistic problems of healthcare delivery. Nevertheless, we need to take a balanced view of the practical and ethical issues that could arise in health care from the use of AI or intelligent computer programs in decision-making. In the present study, we have attempted to understand the various ethical dilemmas that both the medical profession and patients could face if and when intelligent computer programs take independent decisions in patient care.

The majority of the medical and dental participants in the present study were aware of AI and its progress in the medical arena, whereas a little more than half of the nurse participants were not. Medical and nurse participants agreed that intelligent computer programs could in the future take both major and minor independent decisions, but only when the physician is unavailable. At the same time, a majority of all groups agreed that physicians should approve the decisions of intelligent computer programs before implementation and that it would be wise to use them as decision-support systems. Pestotnik et al.[22] reported that a computer-assisted decision support system is useful in providing medical practitioners with guidelines to improve antibiotic use. The lack of a self-consistent model in medicine, the multi-dimensional character of humans, and the unreliable nature of the theoretical assumptions built into intelligent computer programs could restrict computer programs to the role of decision-support rather than decision-making systems.[23]

Patient autonomy

Medical care has moved away from a paternalistic approach toward an insistence on patient autonomy.[24] Medical participants in our study felt that patient autonomy and confidentiality of data could be compromised when intelligent computer programs take independent decisions, which might stem from disregard for patients' rights and wishes. The participants also opined that written informed consent should mandatorily include a statement on the role of computers in decision-making. In contrast, Williams et al.[25] reported that a computer-assisted intervention enhanced diabetic patients' perception of autonomy. This difference in opinion could be because, in their study, the computer assisted an ongoing intervention and did not take decisions independently.

Beneficence and nonmaleficence

In our study, the nurse participants perceived intelligent computer programs as decision-makers in health care and as a solution to economic problems and the shortage of health professionals in a country. The majority of participants agreed that the beneficence obtained in health care could come from data mining and its prompt application to other related cases. The mining of medical data, given its universal applicability and its use for beneficial purposes in health care, is fast gaining popularity.[26] It can enable precision medicine, which ultimately strengthens evidence-based medicine and thus guides clinical practice.[27] AI could also accumulate a large amount of medical information in a short time from various databases, but its ability to apply past errors to present decision-making is questionable and uncertain. It is worth pondering whether the breaches in autonomy and privacy that can occur because of ubiquitous data mining[27],[28],[29] and AI's patient care decisions[30] could outweigh the expected beneficence.

Any action in a doctor–patient relationship has an emotional component attached to it, through which trust develops. It is questionable whether computers, which are devoid of emotions, can develop the trust normally present in a doctor–patient relationship. Sometimes, emotion is as important a factor in decision-making as reason, if not more so. To address this issue, Leon et al.[31] developed a system that recognizes emotions from positive and negative emotional changes using physiological signals from a single subject, achieving a recognition rate of 71.4%. This area needs extensive research before being incorporated into AI modules for patient care.

The primary duty of a healthcare professional is nonmaleficence. In our study, medical participants' opinions contrasted with those of nurse participants, who opined that the computer can communicate well with users to prevent clinically harmful misunderstandings. In line with the medical participants' opinion, Green et al.[32] reported that counselors at US medical centers communicated with female breast cancer patients better than a computer did.

The participants of the present study felt that a computer cannot replace human judgment or the human touch and does not have a conscience; hence, it can cause maleficence in patient care. The nurse participants were worried about the deterioration of the health professional–patient relationship due to the loss of the "healing touch," which cannot be underplayed even during the COVID crisis. Nurse participants felt that building an algorithm into AI that helps it make ethical decisions could reduce any maleficence it causes in patient care. Scientists have tried to incorporate ethics into computer programs capable of taking independent ethical decisions in health care, to prevent maleficence to patients.[15],[33] It is also wise to recognize that AI computer programs may get corrupted or hacked,[34] thereby altering data or breaching confidentiality and causing maleficence.

Justice

The world has limited resources, and there is significant inequity in their distribution. Experts have warned of a "technological divide."[4] But in our study, participants felt that the use of computers in independent decision-making could serve justice in the form of equity and equality in the reach of health care to patients and reduce discrimination among patients. This result agrees with the pilot study by Ryan et al.,[35] who reported that the difficulties faced by low-income and medically underserved communities in accessing healthcare services could be bridged by technology. During the lockdowns and movement restrictions of the COVID pandemic, health professionals were unable to reach rural communities and provide health care. Computer programs in the form of mobile-friendly applications could be a powerful solution for extending healthcare delivery to rural communities, especially during a lockdown. At such times, intelligent computer programs in health care could become a boon, enhancing coverage of health care with minimal physical contact.

One of the ethical dilemmas arising when computer programs take healthcare decisions is that of responsibility and accountability for an error occurring due to a technical problem. Is the onus in such a situation on the health professional, the patient, or the computer? Cummings[36] opined that computer systems diminish the sense of responsibility, which could result in an erosion of accountability in general. One should also consider the implication of "novus actus interveniens" when AI makes decisions. Novus actus interveniens is a Latin term meaning that after negligence has occurred, another unforeseeable act operates to precipitate or worsen the impact, and the intervening person is then not responsible for the consequences of the intervening act:[37] the chain of causation is broken by the intervening act. Considering what can go wrong with AI, we think that the specific roles played by AI and the healthcare professional should be defined. This could be instrumental in resolving problems arising from an inadvertent error by the program.

Role of AI in the future of health care

The majority of the medical and dental participants felt that AI might in the future play an important role in physiotherapy, laboratory procedures, and cardiac perfusion during cardiac operations, whereas nurse participants felt that AI will play an important role in the diagnosis and treatment of patients.

Health care is a complex institution comprising multiple stakeholders, and the need of the hour is to form a core interprofessional team of AI developers, ethicists, and healthcare professionals to deliver AI-linked healthcare modules. This can minimize and resolve the ethical dilemmas that can emerge in AI decision-making scenarios. Robust, rigorously validated rubrics and guidelines need to be in place before allowing this kind of healthcare approach, to thwart any chance of untoward consequences.

In our study, for the majority of questions, as highlighted earlier, the nurse participants' opinions differed from those of the medical and dental participants. We attribute these differences to nurses not being directly involved in decision-making in patient care, compared with medical and dental professionals.

Owing to time constraints, we used a questionnaire as the study tool to understand the various ethical dilemmas healthcare professionals may confront when AI takes independent healthcare decisions; in-depth interviews would be ideal to elucidate the complex thoughts of survey participants. Exploring the participants' responses using other ethical frameworks, such as consequentialism, deontology, and casuistry, forms the future scope of our study.


Conclusion


If the enormous therapeutic potential of emerging AI or computer intelligence is to be tapped wisely, attention must be paid both to the goal of achieving effective health care and to the means of achieving it, by upholding the ethical principles of patient autonomy, beneficence, nonmaleficence, and justice in patient care. From our study, we conclude that breaches of patient autonomy due to data mining, loss of confidentiality, and disrespect for patients' rights and wishes are major concerns when AI takes independent decisions in health care. A detailed risk–benefit analysis should be carried out for every healthcare scenario, so that the expected beneficence to patients outweighs any unexpected maleficence. One wrong step could quickly become a slippery slope leading to disastrous consequences; to avoid this, adequate precautions in the form of a stringent, validated AI-linked healthcare protocol that clearly defines the specific roles of both AI and the healthcare professional need to be in place before AI takes independent decisions in patient care. One of the most desirable outcomes of AI in health care could be increased equity and equality in the reach of health care to the rural population, especially during the COVID-19 pandemic. This could prove to be a valuable addition to healthcare delivery.

Acknowledgement

We would like to acknowledge Dr. Ravi Vaswani, Professor of Internal Medicine, for his contribution to the scientific content, and Dr. B. Kalpana, Associate Professor, Department of Physiology, Yenepoya Medical College, Yenepoya (Deemed to be University), for her technical support. This study was part of a project submitted toward partial fulfillment of the PG Diploma in Bioethics and Medical Ethics from Yenepoya (Deemed to be University).

Ethical consideration

The Yenepoya University Ethics Committee, Mangalore, Karnataka, India approved the proposed study on "Emerging ethical dilemmas in the use of intelligent computer programs in decision making in healthcare: an exploratory study" vide Protocol No. 2016/289 dated October 28, 2016.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Hayes PJ, Morgenstern L. On John McCarthy's 80th birthday, in honor of his contributions. AI Mag 2007;28:93.
2. Frankenfield J. "How artificial intelligence works." Investopedia, July 28, 2021. Available from: https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp. [Last accessed on January 22, 2021].
3. The Sun. Google can predict when you will die with 95% accuracy using AI. Updated: June 19, 2018. Available from: https://www.thesun.co.uk/tech/6564114/google-predict-die-death-ai-artificial-intelligence/. [Last accessed on June 9, 2020].
4. Wyatt J, Taylor P. CMF File 49: Emerging medical technologies: ethical issues. Available from: https://www.cmf.org.uk/resources/publications/content/?context=article&id=26007. [Last accessed on June 9, 2020].
5. Bologa A, Bologa R, Sabau G, Muntean M. Management information systems in Romanian universities. In: Proceedings of the International Conference on E-Business; 2008. p. 425-8. doi: 10.5220/0001904904250428. [Last accessed on November 9, 2021].
6. Das N, Topalovic M, Janssens W. Artificial intelligence in diagnosis of obstructive lung disease: current status and future potential. Curr Opin Pulm Med 2018;24:117-23.
7. Rogozea L. Towards ethical aspects on artificial intelligence (plenary lecture 7). In: Trilling L, Perkins D, Dionysiou D, Perlovsky L, Davey K, Landgrebe D, et al., editors. Recent Advances in Artificial Intelligence, Knowledge Engineering, and Databases: Proceedings of the 8th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Databases (AIKED '09), Cambridge, UK, February 21-23, 2009. Zografou, Athens: WSEAS Press; 2009. p. 22.
8. McVicar A. Workplace stress in nursing: a literature review. J Adv Nurs 2003;44:633-42.
9. Ruotsalainen JH, Verbeek JH, Mariné A, Serra C. Preventing occupational stress in healthcare workers. Cochrane Database Syst Rev 2014:CD002892.
10. Wu S, Li H, Zhu W, Lin S, Chai W, Wang X. Effect of work stressors, personal strain, and coping resources on burnout in Chinese medical professionals: a structural equation model. Ind Health 2012. doi: 10.2486/indhealth.MS1250.
11. Nieuwenhuijsen K, Bruinvels D, Frings-Dresen M. Psychosocial work environment and stress-related disorders, a systematic review. Occup Med (Lond) 2010;60:277-86.
12. Vilardell F. Ethical problems of medical technology. Bull Pan Am Health Organ 1990;24:379-85.
13. Norman ID, Aikins MK, Binka FN. Ethics and electronic health information technology: challenges for evidence-based medicine and the physician–patient relationship. Ghana Med J 2011;45:115-24.
14. Anderson M, Anderson SL, Armen C. MedEthEx: a prototype medical ethics advisor. In: Proceedings of AAAI 2006. p. 1759-65.
15. Anderson M, Anderson SL. Machine ethics: creating an ethical intelligent agent. AI Mag 2007;28:15.
16. Bruhn JG. The doctor's touch: tactile communication in the doctor–patient relationship. South Med J 1978;71:1469-73.
17. LaChat MR. Artificial intelligence and ethics: an exercise in the moral imagination. AI Mag 1986;7:70.
18. Beauchamp TL, Childress JF. Principles of Biomedical Ethics. 5th ed. New York, NY: Oxford University Press; 2001.
19. Loewy EH, Fitzgerald F. Principalism at the bed-side. Wien Klin Wochenschr 2003;115:797-802.
20. Busari JO, Moll FM, Duits AJ. Understanding the impact of interprofessional collaboration on the quality of care: a case report from a small-scale resource limited health care environment. J Multidiscip Healthc 2017;10:227-34.
21. Müller VC, Bostrom N. Future progress in artificial intelligence: a survey of expert opinion. In: Müller VC, editor. Fundamental Issues of Artificial Intelligence (Synthese Library). Berlin: Springer; 2016. p. 555-72.
22. Pestotnik SL, Classen DC, Evans RS, Burke JP. Implementing antibiotic practice guidelines through computer-assisted decision support: clinical and financial outcomes. Ann Intern Med 1996;124:884-90.
23. Papagounos G, Spyropoulos B. The multifarious function of medical records: ethical issues. Methods Inf Med 1999;38:317-20.
24. Madder H. Existential autonomy: why patients should make their own choices. J Med Ethics 1997;23:221-5.
25. Williams GC, Lynch M, Glasgow RE. Computer-assisted intervention improves patient-centered diabetes care by increasing autonomy support. Health Psychol 2007;26:728-34.
26. Cios KJ, Moore GW. Uniqueness of medical data mining. Artif Intell Med 2002;26:1-24.
27. Collins FS, Varmus H. A new initiative on precision medicine. N Engl J Med 2015;372:793-5.
28. Hogle LF. The ethics and politics of infrastructures: creating the conditions of possibility for big data in medicine. In: Mittelstadt BD, Floridi L, editors. The Ethics of Biomedical Big Data. Law, Governance, and Technology Series. Cham, Switzerland: Springer International Publishing; 2016. p. 397-427.
29. Mittelstadt BD, Floridi L. The ethics of big data: current and foreseeable issues in biomedical contexts. Sci Eng Ethics 2016;22:303-41.
30. Keskinbora KH. Medical ethics considerations on artificial intelligence. J Clin Neurosci 2019;64:277-82.
31. Leon E, Clarke G, Callaghan V, Sepulveda F. A user-independent real-time emotion recognition system for software agents in domestic environments. Eng Appl Artif Intell 2007;20:337-45.
32. Green MJ, Peterson SK, Baker MW, Harper GR, Friedman LC, Rubinstein WS, et al. Effect of a computer-based decision aid on knowledge, perceptions, and intentions about genetic testing for breast cancer susceptibility: a randomized controlled trial. JAMA 2004;292:442-52.
33. Warner HR, Olmsted CM, Rutherford BD. HELP: a program for medical decision-making. Comput Biomed Res 1972;5:65-74.
34. Manchikanti L, Benyamin RM, Falco FJ, Hirsch JA. Metamorphosis of medicine in the United States: is information technology a white knight or killer whale? Pain Physician 2014;17:E663-70.
35. Ryan MH, Yoder J, Flores SK, Soh J, Vanderbilt AA. Using health information technology to reach patients in underserved communities: a pilot study to help close the gap with health disparities. Glob J Health Sci 2015;8:86-94.
36. Cummings ML. Automation and accountability in decision support system interface design. J Technol Stud 2016;42:23-31.
37. Harrison S. Clinical negligence and novus actus interveniens, May 6, 2019. Available from: https://hwlebsworth.com.au/clinical-negligence-novus-actus-interveniens/. [Last accessed on May 30, 2020].



 
 