Seasoned investigative journalist and author Jon Rappoport recently argued that artificial intelligence is still very far from its goal of merging with the human brain. Rappoport says that while AI systems have gained steam during the last few years, neuroscience still does not fully understand how the human brain works. The renowned author also noted that discovering the algorithms behind brain function remains a distant prospect.
According to Rappoport, more challenges lie ahead before a functional human-computer interface becomes a reality. Rappoport noted that the interface would face hurdles in transmitting detailed information. The author raised concerns about how the brain could be effectively hooked up to a computer interface, and whether the brain would be capable of absorbing and processing all the information it would receive from the interface.
Likewise, the author argued that the proliferation of false information would be a big threat to the development of a human-computer interface. Rappoport noted that even if the interface becomes possible, it would still face hordes of faulty information stored in questionable databases. Rappoport further argued that detecting and deleting false information is beyond a program's ability, and that no oversight body exists to monitor and correct this faulty information.
"There is an inherent self-limiting function in AI. It uses, accesses, collates, and calculates with, false information. Not just here and there or now and then, but on a continuous basis. Think about all the entrenched institutions and monopolies in our society. Each one of them proliferates false information in cascades. No machine can correct that. Indeed, AI machines are victims to it. They in turn emanate more falsities based on the information they are utilizing," Rappoport wrote in Waking Times online.
Rappoport is the author of The Matrix Revealed, Exit From The Matrix and Power Outside The Matrix. He was also once a candidate for a U.S. Congressional seat in the 29th District of California.
New York University (NYU) research professor Kate Crawford has likewise questioned the reliability of AI brains, stating that the programs might be just as prone to errors as human brains. According to the expert, AI systems depend on neural networks that emulate the brain's mechanisms in order to learn. Crawford also explained that these systems can be trained to recognize information patterns such as speech, text data or visual images.
However, Crawford argued that this information is fed to the systems by none other than humans themselves. This, in turn, makes the AI brains just as susceptible to human errors. The expert also warned that the AI brain may inadvertently absorb these errors and biases, which may affect its decision-making skills. (Related: Expert warns that AI brains are not infallible and have even been found to “make bad decisions” that can harm humans.)
"These systems “learn” from social data that reflects human history, with all its biases and prejudices intact. Algorithms can unintentionally boost those biases, as many computer scientists have shown. It’s a minor issue when it comes to targeted Instagram advertising but a far more serious one if AI is deciding who gets a job, what political news you read or who gets out of jail. Only by developing a deeper understanding of AI systems as they act in the world can we ensure that this new infrastructure never turns toxic," Professor Crawford said.
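The mechanism Crawford describes can be made concrete with a small sketch. The data, words, and labels below are entirely hypothetical, and the classifier is a deliberately naive word-counting model rather than any system discussed in the article; the point is only to show how a pattern-recognition program trained on skewed historical data reproduces the skew in its decisions.

```python
# Illustrative sketch (hypothetical data): a toy "hiring" classifier
# trained on biased examples learns to repeat the bias.
from collections import Counter

# Hypothetical historical data: every "hired" example happens to
# contain the irrelevant hobby word "golf".
training_data = [
    ("python experience golf", "hired"),
    ("java golf leadership", "hired"),
    ("python leadership teamwork", "rejected"),
    ("java experience teamwork", "rejected"),
]

# Count how often each word co-occurs with each label.
word_label_counts = {}
for text, label in training_data:
    for word in text.split():
        word_label_counts.setdefault(word, Counter())[label] += 1

def predict(text):
    """Score a candidate by summing per-word label counts."""
    scores = Counter()
    for word in text.split():
        scores.update(word_label_counts.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else "rejected"

# The irrelevant word, not the qualifications, drives the outcome:
print(predict("python experience golf"))      # hired
print(predict("python experience teamwork"))  # rejected
```

The model never sees a rule saying "prefer golfers"; it simply inherits that pattern from its training data, which is the kind of unintentional bias amplification Crawford warns about.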
Crawford and her colleagues announced the launch of the AI Now Institute in October last year. The institute aims to examine the complex social implications of AI development, Crawford explained.
Log on to Robotics.news to stay up to speed with the latest news in robotics and artificial intelligence.