Leveraging AI's potential for good
Artificial intelligence is often described as the "next frontier" of technological advancement, sought after for the improvements it can make to business performance. But its usefulness does not stop at the business and economic case. AI is so versatile, and applicable to so many fields, social and environmental as well as commercial, that it is imperative to understand not only where its potential for good lies, but also how that potential can be most fully realized.
Where can AI make the greatest difference?
AI can help solve some of the most pressing social problems facing today's world. One is healthcare: take, for example, the present and immediate threat of COVID-19. As early as January 2020, researchers from several institutions, including Stanford University, had applied machine learning tools to the analysis of the virus that causes COVID-19, with the objective of advancing vaccine development. AI has also been put to use in tracking and predicting the spread of infections and in assessing medical risk for individual patients. Beyond COVID-19, AI has already been deployed in pharmaceutical development, greatly speeding up the analysis of diseases and the identification of better treatments while potentially reducing costs. One estimate by market research firm Bekryl suggests that by the end of this decade, AI could enable over US$70 billion in savings in the pharmaceutical development process, savings that could be passed on to patients.
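To make the "tracking and predicting" idea concrete, here is a minimal, purely illustrative Python sketch, not drawn from any of the studies mentioned above: it fits a logistic growth curve to a set of invented cumulative case counts and projects the week ahead. All figures and parameters are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, k, r, t0):
        # Logistic growth curve: k = final outbreak size, r = growth rate,
        # t0 = the day the curve inflects (growth begins to slow).
        return k / (1.0 + np.exp(-r * (t - t0)))

    # Hypothetical cumulative case counts for the first ten days of an outbreak.
    days = np.arange(10)
    cases = np.array([12, 25, 48, 90, 160, 280, 440, 610, 750, 850], dtype=float)

    # Fit the curve to the observed counts, then project seven days ahead.
    (k, r, t0), _ = curve_fit(logistic, days, cases, p0=[1000.0, 0.5, 5.0])
    projection = logistic(np.arange(10, 17), k, r, t0)
    print(f"Estimated final outbreak size: {k:.0f} cases")
    print("Seven-day projection:", np.round(projection).astype(int))

Real epidemic models are far richer than this, but the principle is the same: fit a model to the data seen so far, then use it to look ahead.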
Another area where AI can make a major difference is education. One Microsoft-IDC study found that the key drivers for AI in higher education are efficiency and student engagement: AI allows the automation of routine tasks such as preparing and grading assignments and tests, freeing up more of teachers' time and attention for students. It can even analyze students' performance and develop personalized study packets based on their individual strengths and weaknesses, automatically providing the kind of differentiated, individualized learning that would otherwise demand immense time and effort from teachers.
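A minimal sketch of how such personalization might work under the hood, using Python's scikit-learn library: students are clustered by per-topic scores, and each group's weakest topic suggests the focus of its study packet. The topics and scores here are entirely hypothetical and are not drawn from the Microsoft-IDC study.

    import numpy as np
    from sklearn.cluster import KMeans

    topics = ["algebra", "geometry", "statistics"]

    # Hypothetical per-topic scores (out of 100) for eight students.
    scores = np.array([
        [92, 55, 60],
        [88, 50, 65],
        [45, 90, 85],
        [50, 88, 80],
        [70, 72, 30],
        [65, 75, 35],
        [95, 52, 58],
        [48, 85, 90],
    ])

    # Group students into three performance profiles.
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)

    # For each group, the lowest average score points to the packet's focus.
    for cluster in range(3):
        members = scores[model.labels_ == cluster]
        weakest = topics[int(members.mean(axis=0).argmin())]
        print(f"Group {cluster}: {len(members)} students, packet focus: {weakest}")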
A third, particularly significant application of AI is in combating the massive global problem of climate change. For example, one of the challenges in switching to more sustainable technologies is the time it takes to develop new materials: for carbon capture, for energy harvesting and storage, or simply for reducing energy use. A single breakthrough can take as long as a decade, but machine learning can compress that timeline in much the same way it accelerates drug development. AI can also optimize some of the most energy- and carbon-intensive features of the modern economy, such as urban development, resource use, freight and shipping, transport, and even, to a lesser extent, agriculture. A World Economic Forum report projects that by the mid-2020s it will be possible to achieve end-to-end optimization of food systems, and that by the mid-2030s, materials science and R&D will be massively accelerated by AI powered by quantum computing.
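As a rough illustration of how machine learning shortens that materials-discovery loop, consider the following hypothetical Python sketch: a "surrogate" model trained on a handful of already-measured materials ranks thousands of untested candidates in milliseconds, so laboratory time is spent only on the most promising ones. The features and target property below are invented for demonstration.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical: 40 characterized materials, three composition features
    # each, with a measured target property (say, carbon-capture capacity).
    X_measured = rng.random((40, 3))
    y_measured = (2.0 * X_measured[:, 0] + X_measured[:, 1] ** 2
                  + rng.normal(0, 0.05, 40))

    # Train a surrogate model on the measured samples.
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(X_measured, y_measured)

    # Score 10,000 unsynthesized candidates; pick the top five for the lab.
    X_candidates = rng.random((10_000, 3))
    predicted = surrogate.predict(X_candidates)
    top_five = np.argsort(predicted)[::-1][:5]
    print("Most promising candidates to synthesize next:", top_five)

The saving comes from the asymmetry in cost: a laboratory measurement may take weeks, while the model's prediction takes microseconds, so even an imperfect model can steer experiments toward the right corner of the search space.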
To keep AI on the right track, ethics and responsibility must be fundamental
"As we build more powerful technological tools using AI, these could be used for greater good, or they could be used to continue to concentrate power in the hands of a few," warns Professor Yoshua Bengio, the founder and scientific director of machine learning research group Mila—Quebec Artificial Intelligence Institute. "We have to be mindful of the social impact of AI...we need to allocate a good number of projects to applications that are beneficial to humanity, we need to think about governance at every level. It starts within every company and it ends at the level of the whole planet."
Indeed, multiple issues around the implementation and use of AI have emerged that lead people to question its effectiveness and even its desirability. For example, AI projects frequently under-deliver on their promises; the technology has a number of ethically questionable applications; and, without doubt, it can all too easily be abused to entrench power in the hands of a self-serving few.
But this is no reason to throw the baby out with the bathwater. As the technology advances, so must the associated governance. Already, advocacy groups in several countries have lobbied against the more intrusive uses of AI, and some governments, especially within the European Union, have released guidance on the development of AI or even passed rudimentary regulations covering its commercial applications, including data privacy, the use of chatbots, and facial recognition.
Meanwhile, individual companies have the opportunity to develop their own governance around how they use AI. The spectrum is enormous, ranging from hiring, where companies might actively implement processes to avert bias, to risk management, and even to socially consequential issues such as content moderation on the Internet. But one common thread runs through it all: the need to balance the interests of individual AI users or providers against the broader public interest. In a paper published in the Evolutionary and Institutional Economics Review earlier this year, Professor Dirk Wagner, who teaches economic and social sciences at Karlshochschule International University, points out that the significant information asymmetries inherent in the use of AI lead to agency problems: the risk that AI providers will exploit their position of advantage over end consumers. "At least employment relationships involving new forms of micromanagement and exposure of citizens to authoritarian states pursuing AI strategies appear to be of relevance," he writes.
Governance and ethics must be part of AI's makeup from the start
In the same way that corporate governance evolved to balance the interests of a company's many stakeholders, AI governance will ideally evolve to resolve these information asymmetries and balance the interests of the public against those in positions of power. But it will have to happen from multiple angles. A governmental push from above and a groundswell of advocacy from below will do much, but for complete change, AI governance, ethics, and responsibility must also be supported at the source of the technology itself: the workshops and laboratories of the researchers and engineers who are best positioned to build governance, ethics, and responsibility into the systems they create.
"If we want to encourage the ethical and responsible use of AI we have to change our culture," said Professor Bengio. "We have to create a culture that's not just about better science or creating profits but also about society. Governments have to get involved in changing the culture, invest in changing the education system so that engineers and scientists will learn about not just their particular little corner of engineering, but also social sciences and ethics."
If the greater good of society, the imperative to benefit the wider world and not just a single corner of it, can be baked into AI at every level, from initial development to commercial use cases to large-scale application in the real world, that would surely be the ideal way of leveraging AI's potential for good.