Axios reports that "the Biden administration's long-awaited executive order on artificial intelligence will require developers of the most powerful AI systems to share critical testing information with the government."
According to the story, "Companies developing models that pose serious risks to public health and safety, the economy or national security will have to notify the federal government when training the model and share results of red-team safety tests before making models public.
"The provision would apply to future models that go beyond a specific compute power threshold and would not lead to any restrictions or removal of existing AI tools in the marketplace, a senior administration official said.
"The provision goes beyond voluntary commitments that the White House garnered from AI companies and requires notification in accordance with the Defense Production Act."
The goals of the executive order include assuring consumer privacy and ensuring that AI is not used to abridge people's civil rights. In addition, the administration is working to prevent AI from being used to disseminate misinformation and disinformation. "Ahead of the 2024 elections, the administration is also tackling deepfakes by instructing the Commerce Department to develop guidance for content authentication and watermarking. Federal agencies will use the content authentication tools and watermarking to make it easy for Americans to know that government communication is authentic."
Reactions to the executive order are mixed, with some suggesting that it oversteps federal authority and will inhibit innovation, and others saying that the government needs to go further to prevent abuse of a technology with unknown potential as well as potential pitfalls.
- KC's View:
This is something that Tom Furphy and I will be talking about tomorrow in our regular Innovation Conversation, with a focus on how AI will impact retailers and how the technology can be used to bring retailers closer to their shoppers.
There's a good piece from Bloomberg columnist Dave Lee in which he assesses Amazon's AI intentions (which Tom and I also will talk about tomorrow).
Lee notes that earlier this year, New York magazine ran a story about “The Junkification of Amazon" saying that "Amazon’s aggressive pursuit of growth had come at the expense of a good shopping experience for its customers." (I think a lot of us would agree with that statement.)
Lee goes on: "I was reminded of the article this week when learning about a new initiative Amazon is working on to give its sellers the ability to generate fake 'lifestyle' images of products using artificial intelligence. A tool, currently in beta, takes a boring (real) image of the seller’s item — such as a toaster, say — and spins up a more interesting shot in seconds. Perhaps the toaster will now be on a kitchen counter, next to some tasty looking croissants. The images can then be used in advertising slots on Amazon’s website."
The problem, Lee suggests, is that these images are by their very nature deceptive. They're not real. And as a result, there is greater "potential for more misrepresentation about the quality of products and the integrity of the seller peddling it to buyers."
Amazon says that these AI-generated images will be used only for advertising - but Lee argues that Amazon's increased reliance on advertising revenue means that in all likelihood, they will be ubiquitous on the site, and shoppers will find it hard to tell the difference.
"Amazon founder Jeff Bezos liked to say the company’s guiding principle was to always do what was best for the customer experience," Lee writes. "But features like this make me wonder which 'customer' Amazon cares more about today?"
Good question. I'd suggest that as Amazon goes down this path, raising questions about which customer it really prioritizes, it highlights the degree to which its competitors should differentiate themselves by focusing on actual customers - you know, shoppers.