Ursula von der Leyen, incoming European Commission President, has promised to propose legislation on Artificial Intelligence within the first 100 days of the new Commission term.

In doing so she has thrown down the gauntlet to Commission officials to produce a Proposal by March 2020. The proposal will aim to encourage AI innovation – especially if it is home grown in Europe. But it will also seek to set some ethical rules and standards to govern the use of artificial intelligence and its impact on our lives, society and decision making.

H+K Brussels reached out to Fiona McEvoy, our AI expert in San Francisco, for her thoughts on AI and ethics and the 100-day challenge facing EU officials.

The EU is clearly looking to lead the field in AI regulation as it did with the GDPR – do you think the time is right for ethics and rules for AI?

Absolutely. And it’s important that we don’t think of AI as some obscure field of computer science, but as a tool that will shape and define the future of humanity. We currently have the opportunity to determine what the rules are – what’s allowable and what we should halt. It’s vital that we take this time to pause and anticipate unintended consequences.

Where do you think rules are most needed? 

I’m concerned about surveillance, and the way that surveillance data is used to influence our choices and undercut human autonomy. Currently, vast swathes of our behavioral data sit with a relatively limited number of technology companies. They essentially know more about our habits than we do ourselves. This information imbalance can so easily be weaponized against us, and we need to ensure that firms that wish to “nudge” us are prevented from “shoving” us into choices that aren’t in our best interests.

How would you strike a balance between encouraging innovation and protecting our human values and rights? 

I don’t feel that innovation and good ethical practice are mutually exclusive. Demanding the protection of end users needn’t stifle creativity. On the contrary, it requires it. Anyone can create a product that is optimized for profit, but optimizing for other metrics – like human flourishing – requires skill. Increasingly, technology developers are embracing this challenge with an understanding that the products they put out into the world can affect real lives. It’s a big responsibility and we should be vigilant in ensuring that technologists rise to it.

What are the potential pitfalls of legislating too hard too soon versus not legislating at all?

It’s already too late for “too soon”. We’re swiftly moving into a new phase of mass deployment and it’s critical we get ahead of it. Too hard is another matter. Often people seek to wield regulation as a club to punish businesses for completely separate perceived “crimes” (like making a healthy profit). For me, this is one issue that should transcend political and ideological bias. The genie is already out of the bottle, and we can’t legislate it back in – nor should we wish to. It’s critical to remember that AI has the capability to improve the lot of humankind more than any other tool that has gone before it. But, to quote Paul Virilio, “when you invent the ship you invent the shipwreck…when you invent electricity, you invent electrocution”. Regulation should seek to mitigate the latter part of the equation while humbly acknowledging the raw potential of the former.

How do the EU and US conversations on AI and ethics compare? 

The United States typically seeks to avoid regulation and the imposition of the state, at least until deemed necessary. The EU approach to regulation seeks to preempt undesirable developments. Interestingly though, there is a real synergy when it comes to much of the non-governmental work that is being done outside of the regulatory piece. Ethics precedes the law, and there is a lively global community (mapped here) having an AI ethics conversation and trying to conceive of the best way forward. This is an issue that transcends geographical boundaries and, though there will inevitably be cultural differences when it comes to approach, there is broad agreement on the problems that need to be addressed.

Your recent paper looked at how AI might in future make more reliable and evidence-based decisions than our politicians. In what ways do you think AI could already assist in the process of devising ethical rules?

My paper was designed to test intuitions, and specifically to test the natural sense of repulsion most people feel when confronted with the idea of an autonomous system that devises rules for humans. I ask, “what if the machine was more perfect in its decision-making than a human ever could be?” In truth, I’m not sure an ethically robust system can even exist, much less a system that can generate or facilitate ethical rules. AI might be reasonably good at spotting patterns and predicting the future, but there is still so much it cannot do. For example, it isn’t even close to developing faculties like “reason” or “common sense.” I think where AI may be useful is in identifying imbalances that compromise fairness – like in systems that make decisions on credit or hiring. But again, being able to quantify something that amounts to bias or discrimination is not the same as understanding the concept. That semantic knowledge is still the preserve of human beings, and long may it be so.

In the last few weeks we have seen so many countries coming out with AI plans – Germany, the Netherlands, Russia, Malta… Do you think there might be a greater chance of regulatory alignment across the globe if decisions were based on transparent, evidence-based artificial intelligence rather than human judgment and political motivations?

That’s a good question, but I think it probably assumes that regulatory alignment is the goal rather than (one type of) means to an end. And that all jurisdictions have aligned needs. Honestly, when it comes to regulation, both machines and humans suffer from the same lack of data about the future. Neither is a particularly good predictor of the unpredictable. Where AI systems have a marginal statistical advantage, they are by nature incredibly inflexible. Human politicians, though often vainglorious and ignorant, are nevertheless adaptable. The truth is that a combination of both is probably optimal: a machine to crunch and quantify, and a human to scrutinize “evidence” while remembering that such data is very rarely impartial.

Fiona McEvoy is a consultant at H+K San Francisco, and is listed in ‘30 Women Influencing AI’ and ‘100 Brilliant Women in AI Ethics You Should Follow in 2019 & Beyond’. Her recent academic paper, ‘Political Machines: Ethical Governance in the Age of AI’, is available here. Read more insights and views from Fiona on her blog, youthedata.com.