What Retail Leaders Need To Know About the Congressional Hearing on Artificial Intelligence
OpenAI CEO Sam Altman, whose company created ChatGPT, was one of three experts who testified at a Senate Judiciary subcommittee hearing exploring the oversight of artificial intelligence. Here are the top takeaways for retailers.
The hearing came less than two weeks after it was reported that Geoffrey Hinton, who has been referred to as the "Godfather of AI," left his role at Google to speak out about the technology he helped develop. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision. In a statement to CNBC, Hinton said, “I now think the digital intelligences we are creating are very different from biological intelligences.”
Risks of AI for Retailers
In addition to potential security risks, Sen. Richard Blumenthal noted during the Senate Judiciary subcommittee hearing that perhaps the biggest nightmare is the “looming new industrial revolution."
“The displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution and skill training, and relocation that may be required," he said. "And already industry leaders are calling attention to those challenges.”
Three AI experts appeared before the subcommittee: OpenAI CEO Sam Altman, along with IBM chief privacy officer Christina Montgomery and NYU professor Gary Marcus.
“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks we have to work together to manage,” Altman said in his opening remarks. “We’re here because people love this technology. We think it can be a printing press moment. We have to work together to make it so.”
Montgomery said IBM urges Congress to adopt a precision regulation approach to AI. "This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself," she noted. "Such an approach would involve four things:
"First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risks to people and society.
"Second, clearly defining risks. There must be clear guidance on AI uses or categories of AI supported activity that are inherently high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts.
"Third, be transparent. So AI shouldn’t be hidden. Consumers should know when they’re interacting with an AI system and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system.
"And finally, showing the impact. For higher risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public. And to attest that they’ve done so."
Later, Sen. Blumenthal asked Montgomery, “I know you don’t deal directly with consumers, but do you take steps to protect privacy as well?” To which she responded, “Absolutely. And we even filter our large language models for content that includes personal information that may have been pulled from public data sets as well. So we apply an additional level of filtering.”
On the topic of jobs being impacted by AI and generative AI, Altman noted “there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be.”
Montgomery said she believes the most important thing we should be doing now is to “prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for years now in doing that, in focusing on skills-based hiring, in educating for the skills of the future. Our SkillsBuild platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.”
Retail and ChatGPT
Retailers are quickly beginning to experiment with ChatGPT use cases. In April, resale marketplace Mercari added Merchat AI, a ChatGPT-enabled shopping assistant. The conversational shopping bot uses large language models to sift through millions of Mercari listings and generate personalized recommendations based on customer prompts.
In March, Instacart introduced an Instacart plugin for ChatGPT in collaboration with OpenAI. The plugin acts as an add-on to ChatGPT, combining ChatGPT’s capabilities with Instacart’s own AI technology to let users shop from food- and recipe-related conversations and get ingredients delivered to their door. In response to questions like, “I have chicken and pasta. What’s a kid-friendly meal I can make, and what else do I need?” or “How can I make an easy carrot cake?”, ChatGPT can now create Instacart orders based on suggested meal responses, adding all the necessary ingredients to the user's Instacart cart.
It's important for retailers to remember that, in just a few months, ChatGPT and generative AI have entered the technology conversation full force. But as Sen. Blumenthal noted during the hearing:
“I am sure that we’ll look back in a decade and view ChatGPT and GPT-4 like we do the first cell phone, those big clunky things that we used to carry around. But we recognize that we are on the verge, really, of a new era.”