The Extraordinary Invention of Intelligence
In 1948 a young man named Alan Turing penned a report entitled “Intelligent Machinery.” Its opening sentence, “I propose to investigate the question as to whether it is possible for machinery to show intelligent behaviour” (1), instantly set the stage for what we today call AI, or Artificial Intelligence. Ever since, the world has looked toward the future with wide eyes and dreams of such a day.
Turing, in 1936, was the pioneering mind behind the modern computer, though most people recognize the name from the Turing Test. Turing introduced the test in a 1950 paper titled “Computing Machinery and Intelligence”; its goal was to test a machine’s ability to “exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.” This question was an underlying obsession in Turing’s work, and it quickly became a visible thread running through it. Most of his concepts and ideas, however, either fell on deaf ears or went unpublished and unrecognized until well after his death at the age of forty-one.
Since then, the vision of a future in which man and machine live in a kind of semiautonomous relationship has been fodder for debate, speculation, and dreams.
More recently (2014), the name Alan Turing found a second life as bits and pieces of his story were memorialized in the movie The Imitation Game. In 1939, a ragtag group of mathematicians, chess experts, and even crossword aficionados was hired as a clandestine team of code breakers, tasked during the war with deciphering Enigma, a German cipher thought at the time to be unbreakable. Turing’s machine eventually did break the code, and some historians even speculate that World War II ended several years earlier than it otherwise would have, thanks to the daily intelligence the machine captured.
Some of Turing’s most thought-provoking and forward-thinking works never saw the light of day while he was alive. In fact, his seminal piece of writing, “Intelligent Machinery,” written while Turing was working for the National Physical Laboratory in London, did not meet with his employer’s approval. Sir Charles Darwin, the rather headmasterly director of the laboratory and grandson of the great English naturalist, dismissed it as a “schoolboy essay” (3). Few people had any idea how far ahead of his time he really was as he attempted, and succeeded at, the extraordinary invention of intelligence.
From Turing’s time onward, the holy grail of machine intelligence has been this question: how closely can we mimic the human brain and its neural capacity, infusing it into the bits and bytes of a digital machine network, in order to create a truly intelligent man-machine relationship? In his early papers, Turing anticipated neural-network computing, a precursor to the modern connectionism framework (3). Connectionism is the (emerging) science of computing with networks of artificial neurons. These neurons simulate neural pathways in the human brain, giving computers the ability to see and respond intelligently to their environment. Today, companies like IBM, Qualcomm, and HRL Labs have devoted tremendous financial capital to the future of what they are calling neuromorphic computing, a plausible extension of the question Alan Turing was asking over sixty years ago.
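The connectionist idea can be sketched in a few lines of code: each artificial neuron computes a weighted sum of its inputs and squashes it through an activation function, and layers of such neurons feed one another. The weights and layout below are arbitrary placeholders, not a trained network; this is a minimal illustration of the concept, not neuromorphic hardware.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed
    through a sigmoid activation, loosely analogous to a
    biological neuron deciding whether to fire."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashing

def tiny_network(inputs):
    """A two-layer toy network: the outputs of one layer of
    neurons become the inputs to the next, as in connectionist
    models. All weights here are illustrative, not learned."""
    hidden = [
        neuron(inputs, [2.0, 2.0], -1.0),
        neuron(inputs, [-2.0, -2.0], 3.0),
    ]
    return neuron(hidden, [2.0, 2.0], -3.0)

# Any two-number input yields an output between 0 and 1.
print(tiny_network([1.0, 0.0]))
```

In a real connectionist system the weights are not hand-picked but adjusted by a learning rule over many examples, which is what gives these networks their ability to adapt.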
Today we live in a world where technologies give us the ability to see and interact with our surroundings in ways we could once only dream about. Autonomous driving (driverless cars) is no longer future-speak but a reality being built into production cars today, even if the capability is not always immediately enabled. Our homes, with products like Nest, can self-adjust temperature based on external conditions as well as user patterns the system has learned over time. We are not talking about the technological singularity, a point in time when smart machines far exceed human brain capacity and intelligence and take over the world. What we are talking about is living in a world whose technological capabilities offer a rather interesting view of a future that is already beginning to show itself to us.
For the most part, technological advancement happens at the human level, but when you see things like Genetic Cars, a genetic algorithm that mutates a set of physical characteristics over time in order to accomplish a task, you begin to see the power that lies just below the surface of intelligent machines: machines becoming smarter over time and adapting to situations. This is the foundational element of neuromorphic computing.
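The core loop behind something like Genetic Cars can be sketched as follows: keep a population of candidate “characteristics,” score each against the task, let the fittest survive, and breed the next generation from mutated copies of the survivors. The toy task here (matching a fixed list of numbers) and every name in the code are illustrative stand-ins for the real physics simulation.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TARGET = [3, 1, 4, 1, 5]  # the "task": evolve toward these characteristics

def fitness(genome):
    # Higher is better: negative total distance from the target.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3):
    # Randomly nudge some characteristics, the way Genetic Cars
    # perturbs wheel sizes and body shapes between generations.
    return [g + random.choice([-1, 0, 1]) if random.random() < rate else g
            for g in genome]

population = [[0, 0, 0, 0, 0] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]  # selection: only the fittest reproduce
    population = [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(best, fitness(best))
```

No one tells the program how to reach the target; selection pressure plus random mutation discovers it, which is exactly the “smarter over time” quality the article describes.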
Robert D. Hof gives one example in an article describing how Qualcomm, in a controlled test environment, created a child’s bedroom with superhero action figures sprawled across the floor and released a small robot called Pioneer into the space to pick up the Captain America figure and place it in an area of the bedroom mimicking a kid’s toy bin. Pioneer executed the task effortlessly, but that action was nothing groundbreaking. The remarkable moment came as Pioneer made its way back to Ilwoo Chang, a Qualcomm senior engineer, and spotted another action figure in the room with visual characteristics similar to Captain America’s: Spider-Man. Based on its previous command, Pioneer took the initiative to pick up Spider-Man and place it in the bin with Captain America. This is groundbreaking because the sensory input, audible and visual commands for Captain America, paired with the ability not only to capture and learn behavior but also to act without additional human intervention, is truly remarkable. The robot recognized other figures with characteristics similar to Captain America’s and took it upon itself to act accordingly (4). This is a smart machine.
We’re at a critical juncture in technology where objects and artifacts are becoming infused with smart(er) technologies that influence our culture and the way we live, work, play, and progress through our days.
“Jibo is friendly, helpful and intelligent. He can sense and respond, and learns as you engage with him.” -Jibo
Ultimately, technology and the advancement of intelligent systems will begin to influence the way we make decisions. The question then becomes: do artifacts have ethics, a moral compass that can navigate the complexities that human beings confront every day? This question becomes a defining topic for a new generation of technologies that seek to become part of our very own DNA.
“If a man and a machine work side by side, which one will make the decisions?” -Amy Bernstein
Historically, computer scientists have created very complex algorithms: defined sets of rules that a computer must adhere to in order to complete a given task. There is very little deviation from this pattern (the algorithm) as lines of code are executed with efficiency. Within this highly structured framework, the majority of control and decision making has been placed in the hands of the programmer. It is in this language that the programmer tells the system what to think at certain times, how to react in certain situations, and even what to do if there is an interruption or a flaw in the output of the experience. Most, if not all, of the decision-making burden is placed upon the programmer. This suggests that the moral or ethical boundary of the machine is intrinsically attached to the moral and ethical boundary of its creator.
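That traditional pattern looks something like the hypothetical thermostat routine below, where every branch, threshold, and fallback was chosen in advance by a programmer; the machine merely executes those human judgments and never deviates from them.

```python
def thermostat_action(temp_f, occupied):
    """A classic rule-based algorithm: every decision below was
    made ahead of time by the programmer, not by the machine."""
    if not occupied:
        return "eco"   # programmer's judgment: save energy when away
    if temp_f < 66:
        return "heat"  # programmer's chosen comfort threshold
    if temp_f > 74:
        return "cool"
    return "off"       # within the comfortable range, do nothing

print(thermostat_action(60, occupied=True))   # heats a cold, occupied room
print(thermostat_action(70, occupied=False))  # saves energy when empty
```

If the programmer’s thresholds embody a bias or a blind spot, the machine inherits it faithfully, which is exactly the point about the creator’s moral boundary becoming the machine’s.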
“Every artist dips his brush in his own soul, and paints his own nature into his pictures.” — Henry Ward Beecher
Today we’re beginning to see a new landscape emerge, one that gives limited but increasing control to the system, where decisions are blurred between human and machine intelligence. Within these bits and bytes, pixels and binaries, complex machines are beginning to create experiences, to learn and grow, to adapt and transform our world without human intervention. But this brings us back to the ethics-in-artifacts question, that point of infusing an ethical system into artifacts: how blurred are those lines really, and who is ultimately responsible for the decisions when something goes wrong?
In his book The Glass Cage, author Nicholas Carr touches on the complexities of conscious computing.
Up to now, discussions about the morals of robots and other machines have been largely theoretical, the stuff of science-fiction stories or thought experiments in philosophy classes. Ethical considerations have often influenced the design of tools — guns have safeties, motors have governors, search engines have filters — but machines haven’t been required to have consciences. They haven’t had to adjust their own operation in real time to account for the ethical vagaries of a situation. Whenever questions about the moral use of a technology arose in the past, people would step in to sort things out. That won’t always be feasible in the future. As robots and computers become more adept at sensing the world and acting autonomously in it, they’ll inevitably face situations in which there’s no one right choice. They’ll have to make vexing decisions on their own. It’s impossible to automate complex human activities without also automating moral choices.
As machines transition from passive “do what you’re told” algorithms to active decision-making participants, our continuing dilemma will be the appropriate use and responsible injection of ethical frameworks into our digital lives. There’s no doubt that a machine’s capacity for intelligent behavior is a reality that already surrounds us. From wearable technologies such as ‘smart clothing’ to driverless transportation at the opposite end of the spectrum, we’re seeing technologies that not only deliver relevant data to us so we can proactively make decisions, but also technologies that influence our decisions or, in some cases, make them for us.
“Well, basically I have intuition. I mean, the DNA of who I am is based on the millions of personalities of all the programmers who wrote me. But what makes me me is my ability to grow through my experiences. So basically, in every moment I’m evolving, just like you.” -Samantha (HER)
Below are a few examples of how intelligent machines are influencing our culture today:
- Epagogix: works confidentially with the senior management of major film studios, large independents, and other media companies, assisting with the selection and development of scripts by identifying likely successes and probable ‘Turkeys’; helping to quantify a script or project’s commercial prospects; and advising on enhancements to its box office/audience share potential. (Epagogix)
- thegrid.io: The Grid harnesses the power of artificial intelligence to take everything you throw at it (videos, images, text, URLs, and more) and automatically shape it into a custom website unique to you. As your needs grow, it evolves with you, effortlessly adapting. (TheGrid.io)
- Google Self-Driving Car: a prototype vehicle designed to take you where you want to go at the push of a button, no driving required. (Google)
- IBM Watson: a cognitive system that enables a new partnership between people and computers, one that enhances, scales, and accelerates human expertise. Watson is built to mirror the same learning process that we have, through the power of cognition, driven by a common cognitive framework that humans use to inform their decisions: Observe, Interpret, Evaluate, and Decide. (IBM)
Machine ethics, also known as machine morality, artificial morality, or computational morality, is an increasingly controversial topic as technology butts up against human-like capabilities. IBM’s Watson can consume 30,000 documents a day and millions of unstructured documents per second. Our homes self-adjust to our learned patterns; games adapt and create alternate endings based on our decisions along the way; airplanes can pilot themselves; our clothing can notify our doctor when abnormal rhythms occur within our bodies. Whether we want to accept it or not, our futures, long- and short-term, are going to be tethered to intelligent machines that help us navigate our world in new and often exciting ways.
These systems, and many more like them, work within a connectionist framework akin to the artificial neural networks Alan Turing discussed in his early work on artificial intelligence. They sit within sub-fields of artificial intelligence called automated reasoning and adaptive (or reconfigurable) computing, which give these systems the capability to learn over time. But can artificial learning translate into conscious behavior or ethical/moral decision making? According to Luís Moniz Pereira, Professor of Computer Science and Director of the AI centre at Universidade Nova de Lisboa, and Ari Saptawijaya, a lecturer at the University of Indonesia, the answer is yes.
In a paper entitled “Modeling Morality with Prospective Logic,” Pereira and Saptawijaya state that ‘morality is no longer the exclusive realm of human philosophers’ (Pereira and Saptawijaya, 2011).
Prospective logic programs are programs that have the capacity to ‘look ahead’ at the prospective consequences of a moral or ethical decision within a hypothetical situation. The diagram above shows the architecture of the prospective logic framework: the starting point is a fundamental knowledge base, and the end point is the potential for that knowledge base to update itself based on internal and external forces.
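The “look ahead” step can be roughly sketched in ordinary code, though the paper itself uses logic programming, and the trolley-style scenario, action names, and constraint below are invented purely for illustration: enumerate the hypothetical actions, derive each one’s consequences from the knowledge base, and rule out any action whose consequences violate an a-priori moral constraint.

```python
# Hypothetical knowledge base: each candidate action maps to the
# consequences the agent expects it to produce. These names are
# illustrative, not taken from Pereira and Saptawijaya's programs.
ACTIONS = {
    "divert_trolley": ["one_person_harmed"],
    "do_nothing": ["five_people_harmed"],
}

# An a-priori moral constraint: consequences the agent must avoid.
FORBIDDEN = {"five_people_harmed"}

def prospect(actions, forbidden):
    """Look ahead at each action's prospective consequences and
    keep only the actions that violate no moral constraint."""
    return {name: consequences
            for name, consequences in actions.items()
            if not forbidden.intersection(consequences)}

print(sorted(prospect(ACTIONS, FORBIDDEN)))  # permissible actions only
```

The self-updating end point of the framework would correspond to revising `ACTIONS` or `FORBIDDEN` as the agent observes the actual outcomes of its choices; that feedback loop is omitted here for brevity.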
Even with programs and logic frameworks like this one, we cannot discount how much the human experience informs and dictates the way each of us processes information and makes decisions. So how does the human experience play into the ethical fabric of this conversation, especially when everyone processes information differently and has their own metrics of acceptability regarding their moral fiber? Here we still rely heavily on our own methodologies to inform the system, just as the prospective logic agent tries to make intelligent decisions based on a proposed ethical situation.
The community is still divided on the subject of building intelligent ethical machines and the consequences of such machines propagating our very own decisions in times of crisis. But it’s not only about the decision-making capabilities of such machines; how we will use this technology is another important question of our age.
“When we do think about technology’s moral implications, we tend to think about what we do with a given technology. We might call this the “guns don’t kill people, people kill people” approach to the ethics of technology. What matters most about a technology on this view is the use to which it is put. This is, of course, a valid consideration. A hammer may indeed be used to either build a house or bash someone’s head in. On this view, technology is morally neutral and the only morally relevant question is this: What will I do with this tool?” (Michael Sacasas)
One thing is for certain: the extraordinary invention of intelligence that Alan Turing was striving for has been realized. It’s beginning to show up all around us in subtle ways, and moving forward is our only option. But for now we need to be more aware of our surroundings and of how we, as individuals and communities, are giving up control to, interacting with, manipulating, and designing our world around technology. Ultimately, the way in which machines respond to us and to the environment in which we’ve positioned them, whatever that may look like, tells a much greater story about who we are and what matters to us as a people.
Pereira, L. M. and Saptawijaya, A. (2011) ‘Modeling Morality with Prospective Logic’, Machine Ethics, pp. 398–421. doi: 10.1017/cbo9780511978036.027
Copeland, B. J. and Proudfoot, D. (1999) ‘Alan Turing’s Forgotten Ideas in Computer Science’, Scientific American, 280(4), pp. 98–103.
Carr, N. (2015) The Glass Cage: Where Automation is Taking Us. United Kingdom: The Bodley Head
Cole, K. C. (1985) Sympathetic Vibrations: Reflections on Physics as a Way of Life. 1st edn. New York: W. Morrow
Her (2013) Directed by Spike Jonze [Film]
HTML5 Genetic Algorithm 2D Car Thingy (no date) Available at: http://rednuht.org/genetic_cars_2/ (Accessed: 1 May 2015)
Mind uploading (no date) Available at: https://www.wikiwand.com/en/Mind_uploading (Accessed: 1 May 2015)
Turing, A. (1948) ‘Intelligent Machinery’. Report, National Physical Laboratory