Dr ADS Manhas
Artificial Intelligence (AI) has enormous potential to overcome some of the major challenges in healthcare, such as the shortage of medical professionals and infrastructure, rising healthcare costs, and the difficulty of implementing new technology in complex healthcare systems. The incorporation of AI-based tools and techniques is expected to improve healthcare delivery by making healthcare accessible and affordable and by improving the quality of care provided. AI has the potential to transform the way we diagnose and treat diseases. Given India's highly skewed patient-doctor ratio, there is a need for faster inclusion of AI in the healthcare sector.
A basic tenet of decision-making in medicine is whether the benefits of a treatment approach outweigh its risks. Instead of fearing the integration of AI in healthcare, or going to the other extreme of allowing its blanket implementation without well-defined ethical principles, a balanced approach would be to determine where AI can be applied and what should be done in the conventional way for the benefit of the patient.
In the event of an AI diagnostic error, inaccurate use of data, or a technological malfunction, who would be held responsible: the practitioner or the AI developer? How do we determine the degree of accountability of the operating physician when a wrong diagnosis or treatment occurs because of an error in the data or a systemic glitch in the AI? AI technologies applied in clinical decision-making cannot themselves be held accountable for their decisions and judgements in case of errors.
When AI technologies are used in healthcare, there is a possibility that the system may function independently and undermine human autonomy. The application of AI technology in healthcare may transfer the responsibility of decision-making into the hands of machines. Humans should retain complete control of AI-based healthcare systems and medical decision-making, and AI technology should not interfere with patient autonomy under any circumstances.
The concept of 'Human In The Loop' (HITL) places human beings in a supervisory role and is particularly relevant for healthcare. It ensures individualized decision-making by health professionals, keeping the interest of the patient at the centre. Adopting the HITL principle throughout the development and deployment of AI for healthcare also helps to share accountability optimally among the team involved in developing and deploying AI-based algorithms.
It is critical to ensure that the entity (or entities) assuming such responsibility have the proper legal and technical credentials in the area of AI technologies for health. AI-based solutions may malfunction, underperform, or make erroneous decisions with the potential to harm the recipient, especially if left unsupervised. As with other diagnostic and decision-making tools used in clinical practice, the responsibility for optimal utilization of the technology rests with the health professional using AI-based solutions to deliver healthcare.
There is an absence of a legal framework, and an urgent need to consider legal aspects while designing, implementing, and regulating AI in healthcare.
Presently, it is presumed that the treating doctor is fully responsible for his or her decisions, as the patient considers the doctor an expert or specialist in that field of medicine; consequently, the doctor is responsible if the medical care provided is proved to be negligent. But who should be held liable when the physician delivers the wrong treatment on the recommendation of an AI diagnostic tool?
In the eyes of the law, there is the concept of 'foreseeability', which may not work when AI systems perform medical diagnosis and treatment. For an individual to be held liable for negligence, the damage that occurred must be ordinarily 'foreseeable'. However, AI or machine learning systems learn from past data and patterns and may behave in ways that their developers and designers cannot reasonably foresee.
Some experts in the fields of law and AI have proposed that AI should be accorded a special legal status equivalent to personhood, in order to account for its current and future role in medical decision-making.
Other experts argue that even IT engineers come within the purview of "workmen" as defined in the Industrial Disputes Act, 1947. In order to make the transition to an AI-enabled workforce in highly specialized fields such as healthcare, some of the present labour laws may have to be amended.
The limitation of liability described in the IT Act, 2000 may also be unfit to operate in the era of AI. Section 79 of the Act treats intermediary service providers in the field of information technology as mere carriers of content; barring exceptions, under Section 79 they would not be held liable for the substance of that content. This rule may have to be re-examined with the implementation of AI systems that are devised by the carriers themselves.
For the benefit of patients and society, there is a need to develop a comprehensive legal framework of checks and balances, as AI in healthcare is among the most transformative technologies of the 21st century and its use is only set to grow in the coming years.
