What We're Reading

This week, we're reading about initial public sector ventures in AI-supported operations, the complicated relationship between emerging technology and national security, and research on the use of AI in real world forecasting (spoiler: it's not very good, yet). 

Generative AI Transforming Government | Deloitte Insights

With the draft OMB guidance tasking federal agencies to "identify and prioritize appropriate uses of AI that will improve their agency’s mission and advance equity," this series of Deloitte articles recommends several fields – procurement, regulatory enforcement, delivery of human services, and emergency management – in which generative AI can improve government operations. 

Enterprise Artificial Intelligence Strategy FY 2024-2025: Empowering Diplomacy through Responsible AI | Department of State

On the topic of federal agency use of AI, the U.S. State Department just released its first-ever Enterprise Artificial Intelligence Strategy, noting opportunities to leverage AI "in public diplomacy, language translation, management operations, information proliferation and dissemination, task automation, code generation, and others." Although the strategy lays out mainly broad principles and is short on details, its release is an important step in modeling responsible AI use in diplomacy to counter the authoritarian state model of surveillance and control. 

A.I. Belongs to the Capitalists Now | The New York Times

The OpenAI Sam Altman/board drama has dominated headlines, but on the bigger question of what's at stake, we found this piece particularly astute. "Perhaps what happened at OpenAI — a triumph of corporate interests over worries about the future — was inevitable."

Forecasting Future World Events with Neural Networks | arXiv

Forecasting long predates AI, in fields such as economic performance, climate and weather events, and disease spread. This paper attempts to use machine learning to automate binary forecasting by training a model on past world events but finds that "all baselines [performed] substantially worse than aggregate human forecasts," with the best model achieving 65% accuracy versus 92% for humans. 

The Uli Dataset: An Exercise in Experience Led Annotation of oGBV | Cornell University

This new paper looks at the experience of online gender-based violence, building a dataset from tweets in Hindi, Tamil and Indian English. Non-English languages have long been neglected in content moderation; as content moderation shifts increasingly to automated detection tools, there exists an opportunity to take a more inclusive approach by training the tools on diverse languages. (It's worth noting, however, the challenges that come with cultural nuance here – recall the public criticism of Meta for its handling of posts with the word 'shaheed,' Arabic for martyr, or a more recent consequential mistranslation of an Arabic greeting.) 

Inside U.S. Efforts to Untangle an A.I. Giant’s Ties to China | The New York Times

This New York Times article digs into linkages between UAE AI firm G42 and Chinese companies/government entities, illustrating the complex connections between emerging technology and national security. "On sensitive emerging technologies, the Emirates must choose between the United States and China, American officials have told their Emirati counterparts" – this statement brings to mind the binary framing of the Cold War and illustrates the challenge national security officials are facing in the context of emerging tech. 