Google CEO Sundar Pichai warns: Society is not prepared for the impact and rapid advancement of AI
On March 30, a group of technology leaders and dozens of academics called for an immediate pause on training “experiments” connected to large language models “more powerful than GPT-4.”
In the open letter, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, the signatories called for a six-month pause on the development of systems more powerful than OpenAI’s newly launched GPT-4, citing “risks to society.”
Fast forward two weeks, and Google and Alphabet CEO Sundar Pichai is now sounding the alarm on the impact and rapid advancement of artificial intelligence (AI). Are we moving too fast, too soon?
In an interview with CBS’ “60 Minutes” that aired Sunday, Pichai warned about the consequences of AI’s rapid advancement, saying that the laws and regulations needed to guardrail it are “not for a company to decide” alone. He added that AI will impact “every product of every company.”
During the interview, CBS’ Scott Pelley expressed concern after trying out several of Google’s AI projects. Pelley was left “speechless” and described the human-like abilities of Google’s chatbot Bard and other products as “unsettling.”
“We need to adapt as a society for it,” Pichai told Pelley, adding that AI would disrupt the jobs of “knowledge workers,” including writers, accountants, architects, and, ironically, even software engineers.
“This is going to impact every product across every company,” Pichai said. “For example, you could be a radiologist, if you think about five to ten years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through, it may say, ‘these are the most serious cases you need to look at first.’”
When asked if society is prepared for AI technology like ChatGPT and Bard, Pichai answered, “On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch.”
During the segment, Pelley also explored other areas of Google with advanced AI products, such as DeepMind, where he witnessed robots playing soccer and learning on their own, without human intervention. In another unit, Pelley observed robots that could recognize objects on a countertop and fetch an apple upon request.
While discussing the potential consequences of AI, Pichai warned about the problem of disinformation and the proliferation of fake news and images, which he believed could cause significant harm because of their sheer scale.
“Competitive pressure among giants like Google and startups you’ve never heard of is propelling humanity into the future, ready or not,” Pelley added in the segment.
Google, along with other tech leaders including Elon Musk, has called on the government to regulate AI before it is too late. Google recently released a document outlining its “recommendations for regulating AI.”
“AI will have a significant impact on society for many years to come. That’s why we established our AI Principles (including applications we will not pursue) to guide Google teams on the responsible development and use of AI. These are backed by the operational processes and structures necessary to ensure they are not just words but concrete standards that actively impact our research, products, and business decisions to ensure trustworthy and effective AI application,” the document reads.
However, Pichai warned that society must quickly adapt, with regulation, laws to punish abuse, and treaties among nations to make AI safe for the world, as well as rules that “align with human values, including morality.”
“It’s not for a company to decide,” Pichai said. “This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers, and so on.”
You can read Google’s “Recommendations for Regulating AI” below.
[Embedded document: Google’s “Recommendations for Regulating AI”]