What We're Reading

This week, we're reading about FCC action against deepfake phone calls, considerations of antitrust law in the AI market, the double-edged sword of open source AI, and new industry efforts on watermarking.

AI-generated voices in robocalls can deceive voters. The FCC just made them illegal | The Associated Press

We were pleased to see action by the FCC last week making AI-powered deepfake audio calls illegal. Both in the context of elections and broader fraud concerns, this is a welcome step. The use of the Telephone Consumer Protection Act ("a 1991 law restricting junk calls that use artificial and prerecorded voice messages") is a good example of regulators using existing law to take action against AI.

Meta expands AI image labeling to include AI-generated content from other platforms | Digiday 

Last week included several big announcements around watermarking. Meta announced plans to detect and label AI content that had been generated with external tools, while OpenAI rolled out plans to embed metadata in DALL-E generated images. Enforcement of these policies is another question, but the effort at least speaks to a growing industry recognition of the importance of authenticity and verification.

As Facebook turns 20, politics is out; impersonal video feeds are in | The Economist

Last week also marked the 20th anniversary of Facebook's founding. Observing that "social media have become the main way that people experience the internet—and a substantial part of how they experience life," The Economist reflects on how social media has evolved – and where it's headed.

The Global AI Conundrum for Antitrust Agencies | TechPolicy.Press 

In the wake of the FTC announcement that it would open an inquiry into AI start-ups and their mega tech investors, this article from Tech Policy Press is both timely and thoughtful. 

Inside OpenAI's Plan to Make AI More 'Democratic' | TIME 

An interesting behind-the-scenes look at OpenAI's efforts to incubate ideas around democratic processes to crowdsource the values that should govern AI alignment. The exercise raises important questions – including the sincerity and motives of OpenAI, as well as whether following public opinion will avert the worst AI harms. But it also shines a spotlight on deliberative technologies and asks: what design changes, whether technologically powered or not, could improve the functioning of democratic systems of governance?

Should we make our most powerful AI models open source to all? | Vox

Thoughtful piece on the inherent tension between promoting AI innovation through open sourcing and ensuring AI safety. Democratizing AI sounds appealing on the surface – but comes with some real risks, especially as the technology improves.

Online searches to evaluate misinformation can increase its perceived veracity | Nature 

Important findings as to whether online search engines can reduce belief in misinformation: "online search to evaluate the truthfulness of false news articles actually increases the probability of believing them." Noting the disclaimer that "the search effect is concentrated among individuals for whom search engines return lower-quality information," concerns about the declining quality of search results amidst AI-generated junk content are top of mind.
