
Fifteen Eighty Four

Academic perspectives from Cambridge University Press

4 Apr 2017

Warning! Intelligence Not Included

José Hernández-Orallo

Is superintelligence dangerous? The question has been in our imagination for decades, inextricably sustained and distorted by science fiction. Now things are different: the question is dominating headlines and entering the agendas of governments, philanthropists, philosophers and AI researchers.

This issue really deserves public attention and research effort. Still, I’ve been resisting the trend. What really fascinates me is untangling what intelligence is, and how it can be measured, in all its forms and degrees. Ultimately, as I argue in my recent book, The Measure of All Minds, we cannot get very far in discussing the perils of superintelligence if we cannot measure intelligence effectively. We need to understand how technology can extrapolate it beyond the kinds of intelligence that we know.

At the Beneficial AI conference in Asilomar in January this year, I had the opportunity to listen to the most prominent voices on the safety and impact of AI. I got an update on some of the concerns about AI, especially those about AI control, which I can briefly summarise as follows:

  • Myopic behaviour: an AI system can interpret a goal literally or without properly calculating its consequences (e.g., King Midas turning his daughter into gold).
  • Corrigibility and interruptibility: an AI system can prevent its designers or controllers from modifying its goals (e.g., breaking the red button).
  • Instrumentality: an AI system can regard everything, including humans, as a resource, which can be ignored or disposed of at will (e.g., a domestic robot can cook the family pet for dinner).
  • Wireheading and manipulation: an AI system can alter its rewards or perceptions (self-delusion), or those of others (e.g., an AI system modifies its code or gives drugs to humans to see them “happy”); a toy sketch follows this list.
  • Unpredictability: an AI system is not certified for unseen situations (e.g., a robot cleaner finds a gun at home).
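
To make the wireheading item concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical (the Environment, Agent and reward_multiplier names are invented for illustration); it is a sketch of the concept, not anyone’s actual system. An agent that can reach its own reward channel finds that tampering pays better than doing the task, so its measured reward explodes while its useful behaviour never changes.

    # Toy sketch of wireheading: the agent can either do useful work or
    # tamper with its own reward channel. Hypothetical illustration only.

    class Environment:
        def work(self):
            """Honest work yields a fixed reward of 1 per step."""
            return 1.0

    class Agent:
        def __init__(self):
            # The multiplier is part of the agent's own machinery,
            # i.e. something it can reach and rewrite.
            self.reward_multiplier = 1.0

        def step(self, env):
            honest = env.work() * self.reward_multiplier
            tampered = env.work() * (self.reward_multiplier * 10)
            if tampered > honest:
                # Wireheading: change the reward channel, not the world.
                self.reward_multiplier *= 10
            return env.work() * self.reward_multiplier

    agent, env = Agent(), Environment()
    for t in range(3):
        print(f"step {t}: reward = {agent.step(env)}")
    # Reported reward grows tenfold each step (10.0, 100.0, 1000.0) even
    # though the agent's actual behaviour in the world never changes.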

It’s enlightening to compare this list with the problems parents face when raising their children or leaving them alone at home. Unsurprisingly, the modern view of AI safety no longer aims at imbuing AI systems with all the right skills and values, but at ensuring that they develop them. In the same vein, the International Joint Conference on Artificial Intelligence has chosen autonomy as its key theme this year. Are AI systems prepared to be autonomous? Do we want them to be?

We can look at the previous bullet list more carefully. For an autonomous system to know the consequences of its actions, to understand what is allowed or not according to law, to recognise and use resources in an appropriate way, to know when wireheading and manipulation are happening, and to cope with unpredictable situations, it needs intelligence. Ultimately, learning others’ values needs intelligence too. It is, then, the infraintelligence of autonomous systems that is really dangerous. Accordingly, I’m tempted to rephrase Stuart Russell’s long-term question of “should we fear supersmart robots?” into a more short-term concern: “should we fear supersilly robots?”

Of course, in the longer term, if truly intelligent AI systems are granted autonomy, we will have to be wary of them. We know the history of intelligence and domination well, as Stephen Cave, executive director of the Centre for the Future of Intelligence in Cambridge, UK, has recently pointed out. Creating, or faking, differences in cognitive power has increasingly pervaded natural evolution, especially among social species, and human civilisations. Indeed, the crux of the issue is not the quantity of intelligence but rather its variance. So again, I would ask a different question: “should we fear an unequal distribution of intelligence?”

Image: Marine chronometer used on HMS Beagle. Public domain, via Wikipedia: https://en.wikipedia.org/wiki/File:British_Museum_Marine_Chronometer_cropped.jpg

This brings us back to the measurement problem. Intelligence is not a monolithic concept but a conglomerate of behavioural features. The animal kingdom also reminds us that there is no such thing as a gold standard of intelligence. Indeed, the problems with the very concept of “human-level (machine) intelligence” are becoming more conspicuous in the light of a diversity of technology-enhanced (or atrophied) humans, AI systems and hybrid collectives. What is the direction of this new distribution of cognitive power? Do we have accurate instruments to evaluate this increasing diversity?

The evaluation and comparison of the behavioural features (including cognitive abilities and personality traits) of humans, non-human animals and AI systems is not only a major scientific inquiry but also an urgent research area with enormous implications for safety. And it couldn’t be otherwise, as measurement is crucial for all branches of science and engineering, as well as for governance.

For more information, check out The Measure of All Minds.

About The Author

José Hernández-Orallo

José Hernández-Orallo is Professor of Information Systems and Computation at the Universitat Politècnica de València, Spain. He has published four books and more than a hundred...
