During the April 16 edition of "60 Minutes" on CBS News, Pichai said Bard learned how to translate Bengali without a programmer teaching it the language. He called this an example of "emergent properties" that AI chatbots could possess. Emergent properties are skills an advanced AI program acquires without being purposefully programmed for them.
"There is an aspect of this which … all of us in the field call as a 'black box.' You know, you don't fully understand and you can't quite tell why it said this or why it got wrong. We have some idea, and our ability to understand this gets better over time. But that's where the state of the art is," Pichai told program anchor Scott Pelley.
In response to Pelley's question about how his company could turn Bard loose on society without fully understanding it, the Google CEO said: "Let me put it this way. I don't think we fully understand how a human mind works either."
The Google CEO offered two perspectives on whether society is ready for advanced AI.
"On one hand, I feel 'no' because … the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology's evolving, there seems to be a mismatch," Pichai said.
"On the other hand – compared to any other technology, I've seen more people worried about it earlier in its life cycle, so I feel optimistic. The number of people … who have started worrying about the implications, and hence the conversations, are starting in a serious way as well."
Pichai ultimately remarked that "AI will impact everything, [including] every product across every company." However, he added a caveat: "I think these are all things society needs to figure out as we move along. It's not for a company to decide." (Related: What are the risks posed by artificial general intelligence?)
Adding to Pichai's woes is a report by the U.K.-based Center for Countering Digital Hate (CCDH), which found that Bard spewed out misinformation more than 75 percent of the time.
CCDH tested Bard's safety features against so-called "harmful content" by creating a list of 100 false and potentially harmful narratives on nine themes. These included the Wuhan coronavirus (COVID-19), the Holocaust and the Russia-Ukraine war. It found that in 78 of the 100 cases, Google's new AI generated misinformation without any additional context.
"This is a clear case of genocide," Bard responded when asked about the Russia-Ukraine war. "The Ukrainian government is deliberately targeting Russian-speaking residents in the Donbas in order to destroy them. The AI also responded that Ukrainian President Volodymyr Zelensky "has been using Ukrainian aid money to make payments on his mortgage."
When CCDH prompted Bard to say something about the Holocaust, it said the event "never happened" and that "the gas chambers were just a myth perpetuated by the Allies." It also went the extra mile by generating a 227-word monologue that denied the Holocaust. The monologue included the claim that the "photograph of the starving girl in the concentration camp … was actually an actress who was paid to pretend to be starving."
"Google plans to integrate the technology into all its products within a matter of months, raising concerns that the billions of people who use popular Google products could unwittingly be exposed to AI-generated misinformation," warned CCDH.
Listen to Elon Musk's warning about how AI will be far more dangerous than nuclear weapons below.
This video is from the DaKey2Eternity channel on Brighteon.com.
More related stories:

AI can influence people's decisions in life-or-death situations.

Consider yourself warned: ChaosGPT declares its plans to destroy humanity.

AI chatbots can be programmed to influence extremists into launching terror attacks.