Artificial intelligence programs that predict, suggest and act by extrapolating from our requests are already part of everyday tools, and this technology cannot be stopped. I just changed cars, and it's incredible how many of the options are based on AI: they provide so many functionalities to assist me, the driver, that it's almost unthinkable I could do without them.
At the same time, we are starting to hear famous voices such as those of Elon Musk, Stephen Hawking and Bill Gates warning us that "AI could spell the end of the human race".
Their concerns about the potential issues that the rise of AI presents are real, and they will need to be addressed. This is why I liked this video from Stuart Russell, where he proposes 3 principles for creating a safer AI.
Video: https://www.youtube.com/watch?v=EBK-a94IFHY
The King Midas Problem
Midas' request was to be able to turn everything he touched into gold. His wish was granted, but then he died, because EVERYTHING he touched turned to gold, even his food.
Current AIs face that same dilemma: they require us (programmers) to be very specific and careful with the objectives we put into them. As Stuart Russell says: "better be quite sure that the purpose put on the robots is what we want".
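To make this specification gap concrete, here is a toy Python sketch (my own illustration, not from the talk): the same request interpreted literally, versus with the unstated constraint the requester had in mind.

```python
# Toy illustration of the King Midas problem: a literal objective
# applied without the constraints the requester actually had in mind.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    edible: bool

def midas_literal(items):
    """Literal objective: turn EVERYTHING touched into gold."""
    return [Item(i.name, edible=False) for i in items]  # gold is not edible

def midas_intended(items):
    """Intended objective: gold, but never at the cost of survival."""
    return [i if i.edible else Item(i.name, edible=False) for i in items]

inventory = [Item("throne", edible=False), Item("bread", edible=True)]
print([i.edible for i in midas_literal(inventory)])   # [False, False] -> starvation
print([i.edible for i in midas_intended(inventory)])  # [False, True]  -> survival
```

Both functions "satisfy" the request; only one satisfies the requester. That gap is exactly what the three principles below try to close.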
He proposes implementing the following principles to make sure AI programs will be helpful to humans:
The laws for ‘Human compatible AI’
1. AI Goal Is To Maximize The Realization of Human Values
Robots should have no objective of their own, with the exception of maximizing the realization of human values. If you are a fan of science fiction literature, you'll remember Asimov's laws of robotics. This law directly overrides Asimov's self-preservation law, making the AI truly altruistic.
2. The Law of Humility
Our human values are not clearly stated, and they differ somewhat from one human to another, so they will never be completely defined. AIs will need some humility to understand that they may not fully know the values they are trying to maximize. This will force them to observe human behavior and adapt their model of our values to those observations. It will also make them accept our feedback when they are not satisfying our wishes (because we expressed them badly).
This law is important because it avoids the problem of mindless pursuit: it eliminates the certainty of a single, fully known objective to be maximized.
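A minimal sketch of this humility idea (my own toy model, not Russell's formalism): instead of committing to one objective, the agent keeps a probability distribution over candidate objectives and updates it from human feedback.

```python
# The agent is UNCERTAIN about its objective: it holds a prior over
# candidate interpretations of the request "make me coffee".
candidate_objectives = {
    "make coffee, whatever it takes": 0.5,
    "make coffee without disturbing the household": 0.5,
}

def update_beliefs(beliefs, likelihoods):
    """Bayesian update: P(objective | feedback) is proportional to
    P(feedback | objective) * P(objective)."""
    posterior = {obj: p * likelihoods[obj] for obj, p in beliefs.items()}
    total = sum(posterior.values())
    return {obj: p / total for obj, p in posterior.items()}

# The human complains about noise at 6 a.m. That feedback is far more
# likely if the true objective includes "without disturbing the household".
feedback_likelihood = {
    "make coffee, whatever it takes": 0.1,
    "make coffee without disturbing the household": 0.9,
}

beliefs = update_beliefs(candidate_objectives, feedback_likelihood)
print(beliefs)  # probability mass shifts toward the quieter interpretation
```

Because no interpretation ever reaches certainty, the agent always has a reason to keep listening to us.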
3. Human Behavior, The Information Source of Human Values
The idea is that AIs will draw our human goals from their "general knowledge" to fill in the gaps of a particular request. For that, they have to have access to the full record of humanity... and we have electronic files of almost every written book and research paper, while a significant share of our daily activities now leaves electronic traces.
Robots should try to understand the motivations behind our behavior, instead of copying our behavior literally.
They have to know our limitations, like the limited computation of the Go expert (he didn't want to lose, he just couldn't foresee the result of his move).
And they should be designed to satisfy humanity's desires, not the wishes of one specific human being.
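The Go example can be made concrete with a toy sketch (again my own illustration, with made-up numbers): model the human as *noisily* rational, so that a losing move is read as a computation limit rather than a desire to lose.

```python
import math

def boltzmann(action_values, beta=1.0):
    """P(action | goal): better actions are more likely, but not certain,
    because the human has limited computation."""
    weights = {a: math.exp(beta * v) for a, v in action_values.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# Hypothetical values of each move under two possible motivations.
values_if_trying_to_win = {"strong move": 1.0, "blunder": 0.2}
values_if_trying_to_lose = {"strong move": 0.2, "blunder": 1.0}

observed = "blunder"
# Prior: players almost always want to win.
p_win = boltzmann(values_if_trying_to_win)[observed] * 0.95
p_lose = boltzmann(values_if_trying_to_lose)[observed] * 0.05
posterior_win = p_win / (p_win + p_lose)
print(f"P(trying to win | blunder) = {posterior_win:.2f}")  # about 0.90
```

Even after watching a blunder, the model still concludes the player wanted to win: it reads the motivation behind the behavior, not the behavior itself.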
Russell argues that there is a huge economic incentive to get this right, because one bad example would make people distrust AIs (as in his example of an AI home assistant that cooked the kitty for dinner because there was no other food in the house, not realizing the kitty's affective value).
But There Is Still Room For Improvement
I love this revisiting of Asimov's laws of robotics, but obviously AI programs could be designed in other ways, willingly or not, so let's remain vigilant.