The brain handles information differently than a computer does. Living organisms, according to Kak, store experiences by adapting their neural connections in an active process between the subject and the environment. In fact, some scientists are beginning to think that the brain doesn’t really “store” information at all.
In an article for Aeon, psychologist Robert Epstein of the American Institute for Behavioral Research and Technology in California wrote that the brain is “empty.” It does not process information, retrieve knowledge or store memories.
“The idea that memories are stored in individual neurons is preposterous: How and where is the memory stored in the cell?” wrote Epstein.
Instead, Epstein posits that memories are formed when interactions with a stimulus modify the brain. A person reciting a poem from memory isn’t retrieving stored lines of text but reliving a previous perceptual experience, one made possible by the changes that reading the poem produced in the brain’s neural connections.
A computer, however, does not work this way. It stores data in short-term and long-term memory blocks and quite literally moves information from place to place. It has physical memory and operates on exact symbolic representations of the world. Moreover, according to Kak, a computer with a fixed architecture cannot fully replicate the physical changes that normally happen in the brain.
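To make that contrast concrete, here is a minimal Python sketch of the computer’s side of the comparison. Everything in it is illustrative rather than a model of any real system: a “memory” is an exact symbolic copy, and recalling it just moves that copy between explicit storage locations.

```python
# Illustrative sketch only: a toy model of how a computer "remembers."
# Unlike a brain, it keeps an exact symbolic copy of its input and moves
# that copy between explicit storage locations on demand.

long_term_storage = {}   # stands in for disk: persistent memory blocks
short_term_storage = {}  # stands in for RAM: working memory

def memorize(key, data):
    """Store an exact, symbolic copy of the data in long-term storage."""
    long_term_storage[key] = data

def recall(key):
    """Quite literally move the information from one place to another."""
    short_term_storage[key] = long_term_storage[key]  # copy "disk" -> "RAM"
    return short_term_storage[key]

memorize("poem", "Tyger Tyger, burning bright,\nIn the forests of the night;")
print(recall("poem"))  # bit-for-bit identical on every retrieval
```

The retrieved text comes back identical every time, which is precisely what Epstein argues the brain never does.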
Granted, it’s still unclear exactly what those changes look like, but research suggests that they may involve several areas of the brain. In a 2016 study of plane crash survivors, recalling the accident increased neural activity in the amygdala, the medial temporal lobe, the visual cortex, and the anterior and posterior midline of the survivors’ brains.
The exact definition of consciousness remains slippery, but the many attempts to pin it down indicate that even the most advanced computers are not truly conscious. Computer scientist Edith Elkind of the University of Oxford said that consciousness should not be confused with autonomy: while artificial intelligence (AI) can teach itself to do things, it still requires a human programmer to set its tasks and select the data it learns from.
“Machines will become conscious when they start to set their own goals and act according to these goals rather than do what they were programmed to do,” Elkind said.
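Elkind’s distinction shows up in miniature in any machine-learning loop. In the hedged sketch below (pure Python with made-up numbers, not any particular AI system), the “self-teaching” step runs automatically, but the goal being optimized and the data being learned from were both fixed in advance by a human.

```python
# Toy illustration of Elkind's point: the machine adjusts itself, but a
# human chose both the goal (minimize squared error) and the training data.

# Human-supplied data: made-up (input, target) pairs for illustration.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]

# Human-supplied goal: fit target = w * input by minimizing squared error.
w = 0.0
learning_rate = 0.01

for step in range(1000):
    # The "self-teaching" part: gradient descent on the human-chosen objective.
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= learning_rate * grad

print(f"learned w = {w:.3f}")  # converges near 2.0, the pattern in the data
# At no point does the program question, revise or set its own goal.
```

The program tunes its parameter on its own, but it never questions or replaces the objective it was given, which is the line Elkind draws between autonomy and consciousness.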
That said, while autonomous devices such as self-driving cars and space robots already exist, scientists still haven’t built a truly conscious machine.
For neuroscientist Christof Koch, the president and chief scientific officer of the Allen Institute for Brain Science in Seattle, consciousness is an intrinsic property of matter, just like energy or mass. A computer may be able to simulate consciousness and share humans’ ability to speak and think, but it will never experience anything. It is, in the words of Koch, “black inside.”
“Consciousness is something physical … But it takes a particular type of hardware to instantiate it,” said Koch in an interview with the MIT Technology Review.
However, if scientists could build a computer with the same circuitry as the brain, Koch said, consciousness would come attached to it. On that note, he thinks scientists will be able to build smart machines that pass intelligence tests well before anyone settles on a true definition of consciousness. That prospect doesn’t necessarily bode well for humanity: Koch warns that computer intelligence will likely be abused and may even pose an existential threat on the level of a nuclear war or a meteor strike.
Robots.news has more on the dangers of AI.
Sources include: