What We're Reading

This week, we're reading about opportunities to develop 'RegTech' that supports compliance with new EU technology regulations, research on how reigning in BigTech through antitrust enforcement could mitigate AI risks, the pernicious Russian and PRC efforts to collaborate with authoritarian regimes in South America in corrupting the information ecosystem, and more. 

Deeping the Response to Authoritarian Information Operations in Latin America | National Endowment for Democracies

This report from the National Endowment for Democracies reviews collaboration between Russia, China and repressive regimes in South American in conducting malign information operations, and explores specific authoritarian narratives and strategies in the region. While acknowledging efforts to counter information manipulation through fact checking, media literacy programming, and independent journalism, the report calls for "new ways of working together to contest authoritarian narratives effectively, because the perceptions of the world that they foster among citizens are at the core of our democratic backsliding." 

U.S. Stops Helping Big Tech Spot Foreign Meddling Amid GOP Legal Threats | The Washington Post

The Washington Post reports that amidst legal challenges concerning government engagement with social media companies, the federal government has effectively disengaged from conversations with the platforms about foreign disinformation campaigns. Even the social media companies see this as a problem: Meta's head of security policy stated: "our investigators might not know that a campaign is coming until the last minute … if they are operating off of our platforms, there are a number of times when a tip from [the] government has enabled us to take action.” 

A Lot Has Happened in A.I. Let’s Catch Up | The New York Times

Timed around the 1 year anniversary of the release of ChatGPT-3, Ezra Klein's podcast on how society and government has thus far responded to AI provides a thoughtful overview on the state of AI governance, bias and risks, and labor market impacts. 

Report | AI in the Public Interest: Confronting the Monopoly Threat | Open Market

This report, timely in the context of FTC efforts against BigTech, explores how existing antitrust and competition law can be used to address monopoly power in AI as a means of redress for AI risks, which are being exacerbated by "the race to profit [that] is leading even these fantastically rich corporations into reckless and dangerous behaviors and actions." The recommendation to regulate cloud computing like a public utility is particularly interesting, as is the call for a public interest data governance regime. 

Extracting Training Data from ChatGPT (not-just-memorization.github.io) 

A new paper from Google reveals that a training-data extraction attack can cause ChatGPT to divulge data taken directly from its training data, known as "memorization." Concerning, the model divulged real personal information – including names, email addresses, and phone numbers. Further, the authors warn that, even if the developers deploy a patch to block this particular mode of attack, "these vulnerabilities could be exploited by other exploits that don’t look at all like the one we have proposed here. The fact that this distinction exists makes it more challenging to actually implement proper defenses ... we want to get at the core of why this vulnerability exists to design better defenses." 

Enabling the Responsible use of Technology at Scale | Sitra

Smart regulation is necessary to protect the public good, but compliance can prove a challenge, particularly for smaller companies. This report from Finnish think tank Sitra explores the potential technology solutions (RegTech) in supporting compliance and regulatory efforts. Defining it as "an emerging field of innovative start-ups that provide ethical, risk management, compliance, and supervisory services as software tools," the report offers recommendations on steps to cultivate a RegTech innovation ecosystem. 

News Categories