To paraphrase a great movie about Russians accidentally invading the U.S., ‘The Robots Are Coming! The Robots Are Coming!’ Indeed, they’re already here. They’re just not smart enough yet to rule the world. Steve Ranger thinks artificial intelligence (AI) has a big problem. Human stupidity.
The closest most of us get to artificial intelligence is: 1) Siri or other digital assistants in our machines, 2) politicians, 3) talking heads on cable news channels, 4) various and sundry relatives.
AI has a distinct definition:
Artificial intelligence is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”. As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition.
A credit card processing device that reads our account number is no longer considered AI, but a device (Mac, iPhone, iPad, Watch) that interacts through voice commands and responses is. Your iPhone is an AI device. Ditto for other devices that use machine learning to communicate with humans.
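The textbook definition quoted above, a “rational agent that perceives its environment and takes actions that maximize its chance of success at some goal,” can be sketched in a few lines. This is illustrative only; the function names and the toy thermostat goal are my own, not from any source quoted here.

```python
# Illustrative only: a toy "rational agent" in the sense of the quoted
# definition -- it perceives a state and picks the action that maximizes
# its expected success at a goal. All names here are hypothetical.

def choose_action(state, actions, score):
    """Pick the action with the highest expected goal score."""
    return max(actions, key=lambda a: score(state, a))

# Toy thermostat agent: the goal is to keep temperature near 21 degrees C.
def thermostat_score(temp, action):
    effect = {"heat": +1, "cool": -1, "idle": 0}[action]
    return -abs((temp + effect) - 21)  # closer to 21 is better

print(choose_action(19, ["heat", "cool", "idle"], thermostat_score))  # heat
print(choose_action(23, ["heat", "cool", "idle"], thermostat_score))  # cool
```

Nothing here “learns,” of course; the gap between this kind of goal-maximizing loop and the human-level intelligence Ranger discusses below is exactly the point of the article.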
According to Bloomberg’s Jack Clark, 2015 has been a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from a “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.
Useful operations, but of a pedestrian nature, right?
Ranger sees a difference between Siri responding to a query and AI that could match, then exceed, the capabilities of the human mind.
AI and machine learning will be able to take on even more complicated tasks, but it could still be half a century or more before AI capable of human-level intelligence is built. And, then, even longer before the sort of super-intelligence emerges that excites some, terrifies others and has provided plot lines for science fiction for decades. One may (eventually) lead to the other, but conflating today’s AI and machine learning with tomorrow’s Skynet is not helpful.
So, no Skynet on the visible horizon?
Today’s AI is helping companies to improve customer services or fine tune their decision-making by spotting trends in data that would otherwise be invisible, and helping them automate mundane tasks, or even create whole new services.
Pedestrian, indeed. Human intelligence spans a vast spectrum of knowledge and understanding, both visible and dynamic, so why should we expect artificial intelligence to be any different?
But perhaps more dangerous is the assumption that AI is a magical, mystical source of truth. As the introduction to our special report makes clear, the output of an algorithm is only ever as good as the data put in, or the rules that humans set. The black-box nature of algorithms that can learn and evolve in ways their human developers find hard to follow should not mean their answers are accepted without question.
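The “only as good as the data put in” point can be made concrete with a deliberately naive sketch: a “model” that just learns the most common label in its training data. The scenario and names are hypothetical, but the failure mode is the real garbage-in, garbage-out problem Ranger describes.

```python
from collections import Counter

# Illustrative only: a naive majority-class "classifier". If the training
# data it sees is one-sided, its answers are one-sided -- the output is
# only ever as good as the data put in.

def train(labels):
    """'Learn' the most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

biased_model = train(["spam", "spam", "spam", "spam"])  # one-sided data
balanced_model = train(["spam", "ham", "ham", "ham"])   # more varied data

print(biased_model)    # spam -- this model will call everything spam
print(balanced_model)  # ham
```

A real learned system fails in subtler versions of the same way, which is why a black-box answer should never be accepted without asking what data produced it.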
Yet humans take fake news as if it were real and act accordingly, and dismiss science as easily as they forget the calories in a Big Mac.
It’s also important to consider the impact of AI in its broadest sense: such technologies have the capability to significantly alter some jobs, create some and destroy others. The developers of this technology and its users need to consider and acknowledge the potential consequences, for good and for ill. AI-powered autonomous vehicles may cut pollution and make travel more efficient and fun — but may also put many drivers out of work as a result.
Will humans be proactive in integrating AI systems appropriately and responsibly, or will greed, as it always has, rule the decisions to implement ever more artificially intelligent systems?
Artificial intelligence and machine learning are not what we need to worry about: rather, it’s failings in human intelligence, and our own ability to learn.
So far, the track record of humans accepting such change hasn’t been very good. As much as I look forward to a Siri-like AI that can interact with humans as a human, I do not relish a world where artificial intelligence and robotic systems do all the work while humans are free to do anything and everything else. Humans don’t have a good track record with that kind of freedom.