Saturday, March 1, 2014

Artificial Intelligence or Creationism


At a dinner party the other night, as the wine flowed (or in my case, the Ice House), the conversation turned to the possibility of robot consciousness, or strong artificial intelligence (hereafter AI).  What struck me about the arguments put forward against AI was the belief that human consciousness is essentially mysterious.  The working assumption for those denying the possibility of AI was that a purely physical, mechanistic process cannot fully explain the operations of the human mind.  They argue that since we can never fully understand our own minds, we will never create artificial minds.  But this puts the anti-AI crowd in a very awkward position.  By asserting that human minds are ultimately mysterious, one not only disallows the possibility of artificial intelligence, one also rejects the process of Darwinian natural selection.  In other words, if human consciousness is inexplicable without resorting to some mysterious, non-physical power, then the evolution of our species is inexplicable without resorting to some mysterious, non-physical power.  Those who deny the possibility of artificial intelligence thus implicitly endorse some form of creationism.

The dinner party anti-AI argument went something like this: machines will never be smarter than humans because machines can only perform actions executable by their programming.  The machine's program was written by a human.  Therefore, the human does the thinking, while the machine only "unthinkingly" does what it's programmed to do.  Describing machines with properties like "thinking," "learning," or "having beliefs" is anthropomorphizing, since only humans really do these things, while machines can, at best, only simulate human behavior.

But this line of reasoning begs the question.  One can't just assume that only humans think, learn, or believe; that's precisely the question under discussion.  In order to validly deduce the conclusion that machines will never think for themselves, one must work from the premise that a human will never be able to write a program that allows a machine to think for itself.  That premise, in turn, assumes humans will never be able to write a program that captures the way the human mind works.  But why should we assume that?  Once we fully understand how the human brain functions, we should, in principle, be able to replicate that functioning.  If humans think, learn, and have beliefs, then our replicated brain should think, learn, and have beliefs in exactly the way we, humans, do.  And this is where the mysteriousness creeps in for the anti-AI folk.  They must resort to denying that we can ever fully understand the human mind in physical, mechanistic terms.  They must cling to a dualism separating the mind from the brain.  But if you take that route, you should understand there's no place for dualism in a naturalist, Darwinian account of the evolution of our species.

So beware, anti-AIists: if you can't stomach the possibility of robot consciousness, you'd better be able to stomach the dogma of creationism or Intelligent Design.

Recommended read: Dan Dennett's comparison of Darwin and Turing.

2 comments:

  1. I was speaking to my father, a psychiatrist, about the Obama administration's BRAIN Initiative. This aims to map out the brain in much the same fashion as the human genome, with applications in treating Alzheimer's, brain cancer, etc. I don't disagree with the points you've made above, but I do want to comment on this line: "Once we fully understand how the human brain functions, then we should, in principle, be able to replicate that functioning." In theory, yes. However, mapping out the pathways of the human brain is proving extremely difficult. Unlike a machine that runs on, say, purely electricity, the human brain, while having electric currents, is influenced by millions of chemical combinations. It can also be affected by experience. Think of it more along the lines of a fingerprint. You can map one fingerprint easily, but to catalog all fingerprints in order to find out which combinations of DNA markers lead to which designs is virtually impossible because there are endless combinations. I'm sure studying infant brains, or fetus brains (a good goth metal band name), may be easier, but even there it's seen that there are just too many combinations. So while we can predict certain things about the commonalities between healthy human brains, we may never be able to fully map (and thus code into machines) the infinite differences between our brains.

    ReplyDelete
    Replies
    1. You draw an important distinction between those who deny AI is logically possible and those who only deny its practical feasibility. My post was addressing the former. I'm not familiar enough with the science to make a claim as to how close we are to achieving AI, but I am fairly confident that it will happen at some point. I'm sure mapping the human genome, or putting people on the moon, or even constructing a building that could stand over 10 stories tall seemed like practical impossibilities several generations ago. We'll figure out the brain too, if we don't destroy ourselves first.

      I also don't think we'll have to create a complete replication of the human brain before we get to AI. Consider an artificial knee: we don't have to construct a cell-for-cell replication of a human knee, or even understand exactly what's happening at the cellular (or sub-atomic) level; the important thing is to understand how the knee works at the macro level, and then construct something similar that performs all the knee's critical functions. Evolution is notoriously inefficient (see the human appendix and the panda's "thumb"). Likewise, the human brain probably has a number of overcomplicated inefficiencies that we won't need to reproduce to get the job (thinking) done.

      Delete