Governing AI: Can regulators control artificial intelligence?

This post originally appeared on The Raconteur.

How do you see the world adapting/evolving in an AI environment?

In terms of computer applications, we will see increasing adoption of machine learning (ML) and artificial intelligence (AI) techniques.

We already see this in shopping recommendations, games, and large social networks. Voice assistants such as Siri, Alexa, and Google Assistant use ML to perform natural language processing and classification to respond appropriately.
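To make that classification step concrete, here is a minimal sketch of intent classification, assuming scikit-learn is available. The utterances, intent labels, and model choice are illustrative assumptions only, not how any particular assistant is built.

```python
# Minimal, illustrative sketch of intent classification with scikit-learn.
# The utterances, labels, and model are assumptions for illustration; real
# voice assistants use far larger datasets and more sophisticated models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: utterances labelled with the intent they express.
utterances = [
    "what's the weather like today",
    "will it rain tomorrow",
    "set an alarm for 7 am",
    "wake me up at six thirty",
    "play some jazz",
    "put on my workout playlist",
]
intents = ["weather", "weather", "alarm", "alarm", "music", "music"]

# A simple bag-of-words pipeline: vectorise the text, then classify.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["will it be sunny tomorrow"]))  # expected: ['weather']
```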

Such techniques will make interacting with devices increasingly seamless, which will ultimately make technology easier to use while making humans more efficient in finding and managing information.

Just as automation and machine learning have been applied to businesses and business processes, there are many opportunities to apply similar techniques to government functions. Transportation, public utilities, and public health are all massive public-sector functions that could benefit from the application of ML and AI.

Stephen Hawking says there could be a robot apocalypse. What are the risks associated with developing software capable of decision-making and ‘independent’ thought?

I’ve been careful to use the term machine learning over artificial intelligence because we have not yet achieved what could be considered ‘independent’ thought in computer software. Also, decision-making and ‘independent’ thought are two different concepts.

We now have computer-enhanced cars that can ‘decide’ to apply the brakes to avoid a collision (automatic emergency braking), but such a ‘decision’ is limited to a very specific situation. In my view, independent thought is associated with self-awareness and emotion. We have not achieved this type of AI to date and it seems we are a long way off.
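To show how narrow such a ‘decision’ really is, here is a purely illustrative sketch of the kind of rule an automatic emergency braking system embodies. The threshold and structure are my own assumptions, not any manufacturer’s logic.

```python
# Purely illustrative sketch of a narrow, rule-based "decision" such as
# automatic emergency braking. The threshold and structure are assumptions
# for illustration only, not any real vehicle's control logic.

def should_apply_emergency_brake(distance_m: float, closing_speed_mps: float,
                                 ttc_threshold_s: float = 1.5) -> bool:
    """Brake if the estimated time-to-collision falls below a fixed threshold."""
    if closing_speed_mps <= 0:
        return False  # not closing on the obstacle, nothing to decide
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

# The "decision" is entirely confined to this one situation.
print(should_apply_emergency_brake(distance_m=10.0, closing_speed_mps=12.0))  # True
print(should_apply_emergency_brake(distance_m=60.0, closing_speed_mps=12.0))  # False
```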

We don’t know yet if it is even possible. I’m skeptical that we will be able to develop computers that achieve true independent thought. The complex interactions of our brain functions with our physiology seem truly difficult to replicate.

That said, using computers to automate complex physical systems, such as self-driving cars, that require ‘judgement’ will be tricky. For example, a self-driving car may have to decide between potentially harming its passengers or perhaps many more pedestrians.

Should the car protect the passengers at all costs, or try to minimise the total harm to all the humans involved even if that means harming the passengers? If you knew that a self-driving car was programmed to minimise the total harm to human life in certain situations, would you agree to allow the car to drive you?
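Purely to make the question concrete, a ‘minimise total harm’ policy would, at its simplest, amount to something like the sketch below. The candidate actions, harm estimates, and choice rule are entirely hypothetical; no real vehicle is, to my knowledge, programmed this way.

```python
# Entirely hypothetical sketch of what a "minimise total harm" policy could
# mean in code. The actions, harm estimates, and choice rule are illustrative
# assumptions only.

# Each candidate action maps to an estimated harm score for passengers and
# for pedestrians (higher is worse).
candidate_actions = {
    "brake_and_stay_in_lane": {"passengers": 3, "pedestrians": 8},
    "swerve_toward_barrier":  {"passengers": 6, "pedestrians": 0},
}

def choose_action(actions: dict) -> str:
    """Pick the action with the lowest estimated total harm to all humans involved."""
    return min(actions, key=lambda a: actions[a]["passengers"] + actions[a]["pedestrians"])

print(choose_action(candidate_actions))  # -> 'swerve_toward_barrier'
```

Note that such a policy can select the action that harms the passengers more, which is exactly the trade-off the question above asks riders to accept.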

So there are definitely risks in allowing software to control physical systems. I think our adoption of computer automation will be evaluated on a case-by-case basis. In some cases, increased automation will make human activities safer. In other cases, we may choose to continue relying on human decision-making, perhaps augmented with computer assistance.

Will AI be capable of governing (in civil disputes, for example)?

This gets back to the notion of judgement and, in addition, morality. If true AI is possible and it could assess situations and pass judgements, then we would evaluate the AI for governing just as we would human judges.

Just as human judges must convince the public of their suitability for upholding the law, so would AI judges. If they were to pass our current tests for appointing judges, then it may be acceptable to allow an AI to govern. That said, will AI be capable of such governing? I don’t think so.

AI has been used in recent elections (US/UK) to gauge popular policies and influence voters. Will this be a theme in the future?

To be precise, I don’t think true AI was used in these situations, but rather the narrower application of machine learning. Humans controlled the execution of machine learning algorithms. AI did not choose by itself to influence elections. I believe machine learning will be increasingly used as a tool by humans to influence voters and to shape policy.

Do you think governance will need to adapt to handle AI? Is it even possible to regulate it?

AI in practice is really the application of algorithms to data in a process that is controlled by humans. So, in this sense, governance needs to adapt to handle and regulate computer software that is used in activities that can impact human well-being, such as voting machines, transportation, health systems, and many others.

Computer technology has advanced at such a rapid pace that government oversight has not been able to keep up. It is interesting that to build a bridge you must be a licensed civil engineer; however, software developers require no such license to work on many types of systems that can affect human life, such as medical devices.

Can we have governance for computer software without stifling innovation and delaying potential benefits to human life? I’m not sure.

Do you think governments will have a say on what technology is developed legally?

I think so, indirectly, in the sense that the law will determine whether the results or impact of a technology are permitted. Consider the murky area of drones. Governments will certainly have a say on how drone technology can be deployed and utilized.

Citizens will demand regulation through the democratic process. In the case of drones, though, the technology is so new and different that our understanding of its impact, and the laws to regulate it, have not yet caught up, so this will take time.

Will robots ever be capable of committing crime?

If you believe we can create conscious artificial intelligent robots that can have flaws just like humans, then yes, such robots could commit crimes just like humans. This would suggest that robots have emotion and selfish motives. I’m doubtful we will achieve such consciousness in digital technology.

Perhaps subtler is whether autonomous systems supported by AI and ML can commit a crime unknowingly. I think this is a more likely scenario. For example, a drone may fly into airspace or over property where it is not permitted. An autonomous car may exceed the speed limit accidentally, or because it was necessary to avoid an accident.

In your view, are governments on the back foot when it comes to regulating AI?

Software, in general, is evolving at such a rapid pace that it will continue to present challenges to government regulation.

For example, software can be developed to exhibit bias, lead to unsafe systems, or make financially irresponsible decisions. So government needs to stay close as technology develops and step in to regulate where it makes sense. AI is the most recent and prominent example.

What do they need to do to catch up with all the developments?

I think we need to do a better job of educating the larger population on these complex technology topics. We need more people with a greater understanding of technology, software, and mathematics. In fact, we need to develop technology that can help us better understand the risks and benefits of existing and future software systems.

In an AI future, will a universal income be needed? If so, what might it look like and how could it be administered?

I believe that AI technology will augment human activity more than replace it. It will make us more efficient so that we can use our judgement in different ways.

Such augmentation will shift how human intelligence is applied and human creativity is realized. These new forms of human-computer interaction will result in income-earning activities. I don’t believe a universal income will be needed.

Chief Scientist at SnapLogic and Professor of Computer Science at the University of San Francisco
