
In The Chronicle of Higher Education, SFI External Professor Dan Rockmore and President David Krakauer consider artificial intelligence in their essay "Never Mind Turing Tests. What About Terminator Tests?"

What happens when machine intelligence reaches or surpasses human intelligence depends on the type of intelligence being measured, and on whether the machines are expected to imitate human approaches. As the authors distill the meaning and context of terms like “Turing test” and “singularity” for those outside the AI community, they propose a new test they term the “Terminator test,” which “gauges not whether an intelligence is a convincing likeness of a human’s, but whether it replaces and possibly surpasses a human’s.”

Whether or not machines resemble humans, the authors argue, we must consider the implications of AI surpassing human intelligence in new and ever-expanding domains. They propose a mathematical approach to these phenomena, comparing areas of machine super-intelligence to black holes “pocking a landscape that is generally benign.”

As they set forth a meaningful course for investigating AI, Rockmore and Krakauer remind us that “many of our current concerns relate not to the smartest technologies but to ubiquitous and relatively dumb ones.”

Read the essay in The Chronicle of Higher Education (August 10, 2015, subscription required)