WHO warning on precipitous adoption of AI

Dr. Gyan Pathak
The WHO’s warning on the precipitous adoption of AI must serve as a wake-up call for the world. These new AI-based tools require vigilance, especially in light of rapidly expanding platforms such as ChatGPT, Bard, and BERT, which imitate the understanding, processing, and production of human communication, the World Health Organization has said, with special reference to patients.
WHO has recommended that policymakers ensure patient safety and protection as technology firms work to commercialize LLM tools. Let us remember that the UN health agency published “Ethics and Governance of Artificial Intelligence for Health” in 2021, ahead of the adoption of UNESCO’s first global agreement on the ethics of AI, but little has been done so far in this regard. Hence, the WHO has had to call for action to ensure that patients are properly protected.
Caution is not being exercised consistently with regard to large language model (LLM) tools, which use artificial intelligence (AI) to crunch data, create content, and answer questions, the WHO has warned.
These new tools can generate answers that may appear authoritative and plausible to an end user. The danger, WHO has said, is that these responses may be completely incorrect or contain serious errors, which is especially concerning for anything related to health. They can also be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.
While enthusiastic about the appropriate use of technologies to support healthcare professionals, patients, researchers, and scientists, WHO has said that if people’s health is to be protected and inequality reduced, the risks must be examined carefully when these new tools are used to improve access to health information, serve as decision-support aids, or enhance diagnostic capacity in under-resourced settings.
AI is now a well-known term that refers to the ability of machines to perform tasks that normally require human intelligence, such as understanding language, recognizing images, solving problems, and learning from data. It is currently employed in almost all fields of activity, including healthcare, education, transportation, entertainment, and production. Though it has potential benefits, it also poses many dangers.
When ChatGPT was asked, “What are the dangers of using AI?”, it enumerated several dangers, including not only job losses, social manipulation, and surveillance, but also the automation of weapons. Here are some of the dangers it warned about:
AI can replace human workers in many industries, such as manufacturing, marketing and healthcare. This can lead to unemployment, inequality and social unrest for millions of people who may not have the skills or opportunities to find new jobs.
AI can influence people’s behaviour, opinions and emotions through social media platforms, online advertising, fake news and deepfakes. These techniques can be used for malicious purposes, such as spreading misinformation, propaganda and hate speech, or manipulating elections and public opinion.
AI can enable mass surveillance and data collection by governments, corporations and hackers. This can threaten people’s privacy, security and civil liberties. AI can also be used to track, profile and discriminate against individuals or groups based on their personal data or characteristics.
AI can inherit and amplify human biases that are present in the data used to train it. This can result in unfair or inaccurate decisions or outcomes that affect people’s lives, such as hiring, lending, policing and healthcare.
AI can create a digital divide between those who have access to and benefit from AI technologies and those who do not. This can widen the gap between rich and poor countries, regions and communities. AI can also increase the concentration of power and wealth in the hands of a few individuals or corporations who control the development and deployment of AI systems.
AI can disrupt existing markets and industries by creating new products, services and business models that challenge or replace traditional ones. This can cause instability, uncertainty and unpredictability for consumers, workers and investors. AI can also affect the global financial system by enabling faster and more complex trading, speculation and fraud.
AI can enable the development and use of autonomous weapons that can operate without human oversight or control. These weapons can pose serious ethical, legal and moral dilemmas, as well as increase the risk of accidental or intentional conflict, violence and war.
ChatGPT replied that these are some of the possible dangers of using AI that need to be considered and addressed by researchers, developers, policymakers and society at large. However, this does not mean that AI is inherently evil or harmful. AI can also be used for good purposes, such as advancing scientific discovery, enhancing human creativity and solving global challenges. The key is to ensure that AI is developed and used in a responsible, ethical and beneficial way for everyone.
When asked, “Can AI be of any help in warding off the dangers that it presents?”, ChatGPT replied, “AI can be of help in warding off the dangers that it presents, but it cannot do so alone. AI needs to be guided and regulated by human values, ethics and laws.” It suggested a few solutions, which included creating an external AI ethics board, establishing national or international laws for AI, developing responsible and trustworthy AI, and promoting education and awareness about AI.
These are some of the possible ways, ChatGPT said, that AI can help ward off the dangers it presents. However, these solutions are not easy or straightforward to implement. They require collaboration, coordination and consensus among the various actors and sectors involved in or affected by AI. They also require constant evaluation and adaptation to the changing needs and challenges posed by AI. (IPA)