Navigating the AI maze is a challenge for governments

New developments in artificial intelligence are proceeding apace. As an economist who has researched the AI revolution, I see 2018 as similar to 1995 when the commercial internet was born. The technology is advancing rapidly, but most businesses are only just starting to figure out how to put it to work.

While much of the media coverage focuses on corporate applications of AI, governments are also turning their attention to this prediction-enabling technology.

In late 2016, just as President Barack Obama was leaving office, his administration published four reports on how best to prepare the American economy for the development and arrival of AI.

Last month, France released a comprehensive report on AI, the product of a mission chaired by Fields Medalist Cédric Villani. President Emmanuel Macron stressed the urgency of government policy choices to ensure that France is well positioned to benefit from AI innovation.

Navigating a maze

To consider the main policy options available to Canada, let’s consider an analogy. Finding the optimal route to benefiting from AI is like navigating a maze. Most countries are just waking up to the size of the prize for navigating the maze quickly and in a manner consistent with their values.

Mazes have sharp and surprising turns. Just because a mouse is close to the cheese doesn't mean it will get there first. This is shorthand for saying that it is hard to know which path is correct; it is not necessarily the shortest.

What can we do to increase the chance that the mouse (country) will successfully navigate the maze? One option is to increase the size of the cheese. That increases the incentive to move quickly and work hard at navigation.

For AI, this means ensuring that innovators can profit from AI development. To achieve this, we have policy levers such as competitive grants for compelling research proposals, prizes for research results and the removal of trade barriers so that products can be sold worldwide.

Interestingly, the French report does not spend much time on such possibilities. And we should consider why that is. Put simply, profit-oriented companies already know there is cheese at the end of the maze but they do not know what type of cheese it is.

Where’s the cheese?

The government could lower taxes on the income of companies applying AI, but how would it identify such companies, even after the fact? AI is a general purpose technology. It may be used anywhere. Creating such an incentive would be like trying to promote Canadian cheddar but ending up subsidising thousands of other cheese types.

The second way to improve maze performance is to make the mouse stronger. If a mouse is starving, it may not be equipped to make it through the maze. So, you might fatten the mouse a bit and make it stronger. For AI, this is the world of tax breaks for expenditures on AI, government subsidies for basic AI research and subsidising the training of AI talent to ensure that Canadian companies can get the talent they need.

Canada is showing itself to have some advantages. Just this month, the Canada 150 Research Chair program led the University of Toronto to hire Alán Aspuru-Guzik, an expert in machine learning, quantum computing and chemistry, away from his tenured position at Harvard. He saw Canada as a country consistent with his values. More critically, he joins a growing scientific ecosystem fuelled by initiatives such as the Vector Institute for Artificial Intelligence.

Removing barriers

The final way to improve the maze is to remove barriers. While some barriers are inherent to innovation, others are put in place by government policy. The very first proposal of the French AI report deals with this: ensuring that data is available to train AI.

Most computer-related projects are hungry for data and knowledge. After all, the web is just a big data-transfer engine. But as I outline in my new book, Prediction Machines: The Simple Economics of Artificial Intelligence, when it comes to AI, data is critical. The more comprehensive and richer the data, the better the AI performs at its main job: prediction.

Just as our ability to predict the weather depends on weather data gathered all over the globe, and our ability to identify objects comes from a lifetime of experience stored in our memories, AIs need data to build their capabilities.

The problem is that data may be locked down in various silos created for reasons other than AI. This is currently a topical issue with regard to Facebook’s user data. A few years ago, Facebook was freer with its data, which led to a variety of uses — some creative and productive and others unsavoury.

In response to the current crisis, Facebook has locked this down. You may feel comforted by the privacy this affords, but at the same time, it is just another barrier to data being available to researchers and creators outside of Facebook.

In reality, if we want to promote AI, we need to encourage, rather than discourage, companies to share data. And in some cases that data, such as health data, rests with governments.

Making data available

The sooner governments find a way to make that data available for research and creative applications, in a manner that suitably protects the privacy of Canadians, the easier it will be for Canadian businesses to navigate the maze, leverage this powerful prediction technology to enhance their products and services, and compete globally.

The French approach is to choose key sectors where they will make things easier for businesses — something they call “sandboxes.” They are exploring the removal of certain regulations to encourage development in health (predictive diagnostics, personalized medicine), transport (autonomous vehicles), defence (predicting cyber-attacks) and the environment (predicting problems in the food supply chain).

There is, of course, more to the French report than just encouraging AI development. Regardless of whether France or others develop AI, the report considers how to protect French workers from disruption and how to ensure that AI does not perpetuate the biases that humans engender, particularly around gender and race.

The Canadian government would benefit from carefully reviewing the French proposal, including the speculative sections that only apply when the mouse finally reaches the cheese.

For the moment, I urge the Canadian government to think about whether that mouse is Canadian or not.