Guest Feature

The Right Way to Regulate AI

Business & Finance
06 June 2024

New technologies often have unintended consequences that are difficult to reverse, which is likely to be true of artificial intelligence as well. But rather than pausing AI development, we should engage in more small-scale experiments to identify potential harms when it is still possible to limit their damage.

By Joshua Gans


TORONTO – The arrival of new technologies tends to stoke widespread anxiety, most commonly about automation and displacement of human workers. The Luddites at the beginning of the industrial age are perhaps the most frequently cited example of this technophobia, but such worries have emerged periodically ever since.

The perceived threat of artificial intelligence, however, seems greater than that posed by past technological breakthroughs. People are worried not simply about human replacement, but about the possibility of human extinction. Lately, technologists have been asking one another, “What’s your p(doom)?” (one’s estimate of the probability that AI destroys humanity). The answer can vary from day to day, but the exact number chosen on a scale of zero to 100 matters less than the underlying question: How pessimistic are you about AI?

In March 2023, more than 1,000 tech researchers and executives signed what came to be known as the “pause” letter. They called on all AI labs to pause for at least six months “the training of AI systems more powerful than GPT-4.” The signatories ranged from AI pioneer Yoshua Bengio to economist Daron Acemoglu and tech billionaire Elon Musk. It is a group that ought not to be ignored, but, as far as I can tell, it was. There was no discernible pause, although it is worth noting that, at the time of writing, no AI system significantly more powerful than GPT-4 has been released.

The experts who signed the pause letter were adhering to the precautionary principle, which calls for slowing down and learning more if an activity could cause irreparable harm. But for every expert who emphasizes caution, there is another who advocates the rapid development of AI technology. After all, for more than a decade, many economists, including me, have watched as startups brought incredible AI advances to market. There were breakthroughs in cancer detection, safety systems, tools to help the disabled, tutors to expand educational opportunities, and self-driving vehicles to increase mobility and independence for the elderly.

A pause would only delay such benefits. So, while a pause may indeed be advisable, we should ensure that our fears about potential harms are based on more than speculation.

To assess the validity of concerns about AI, it is worth understanding their provenance. The first source is historical experience. Most new technologies have both benefits and costs, and sometimes the latter outweigh the former. Thomas Midgley, Jr., invented tetraethyl lead, a gasoline additive that reduces engine knocking and dramatically improves vehicle reliability, and Freon, a chemical that is key to refrigeration. But his creations caused lead poisoning and released chlorofluorocarbons that depleted the ozone layer, and both were eventually banned.

In their 2023 book Power and Progress, Acemoglu and Simon Johnson argued that such outcomes are the norm rather than the exception. So, while economists generally believe that the benefits of most new technologies end up outweighing the costs, others see AI as the latest in a long line of potentially dangerous technologies adopted before their costs were fully understood.

The second source of concern is AI’s identifiable harms. There are conflicts over how such systems are trained, exemplified by The New York Times’ lawsuit against OpenAI and Microsoft for alleged copyright infringement. Some innovators have made it clear that their goal is to replace workers with AI, and students are already using large language models to complete exams and assignments.

In addition to disrupting education and the workplace, AI has been used to surveil citizens, to imitate individuals through deepfakes, and to enhance military weapons. There are also legitimate concerns that incumbent firms that already possess vast quantities of data will enhance their market power. These identifiable harms can be weighed against the technology’s benefits and, if necessary, addressed by regulation or, as with leaded gasoline and chlorofluorocarbons, outright prohibition.

This brings us to the third source of concern: many of the consequences of new technologies are unintended and unforeseeable, and this will likely be true of AI as well. For example, there is now widespread concern that social media is damaging children’s mental health. But, crucially, there is no simple regulatory solution, given the ubiquity of these platforms.

The fact that unintended consequences are often difficult to reverse poses its own problems, and appears to force a binary choice between AI and no AI. But this is a false dilemma. On the contrary, it means that we should engage in more small-scale experiments with AI to identify potential harms when it is still possible to limit their damage.

The potential irreversibility of a new technology’s unintended consequences also suggests a more nuanced and multifaceted role for AI regulation. We must conduct continual research to identify consequences as soon as they appear and to perform appropriate cost-benefit analyses. Moreover, we should find ways to make AI adoption more reversible, which might involve institutional and infrastructure reforms. This would provide greater insurance against real damage should AI have unwelcome side effects, while ensuring that society can reap its benefits.

This commentary has been adapted from a lecture to be delivered at the Turin International Festival of Economics.

© Project Syndicate 1995–2024

Joshua Gans is Professor of Strategic Management at the University of Toronto’s Rotman School of Management.

