What We're Reading

Some research on quantifying employees' AI exposure, new tools that visualize bias in AI text-to-image models, and disinformation and graphic images running rampant on X.

Alongside confusion sown by hackers, Hamas is capitalizing on lax content moderation to use social media as a platform for terror. Amplified by the growing use of social media as a news source, this includes manipulated and false media (in one example, the BBC identified video game content being passed off as rocket attacks on Israel). EU regulators have publicly called out X for failing to abide by the content moderation requirements of the Digital Services Act.

Hamas Seeds Violent Videos on Sites with Little Moderation | The New York Times 

With graphic content flooding social media in violation of content moderation policies (Israeli and Jewish schools reportedly advised parents to remove Instagram and TikTok from their children's phones), the Washington Post offers thoughtful guidance on what it calls "our individual responsibility with social media posts that might contribute to human suffering."

How to limit graphic social media images from the Israel-Hamas war | The Washington Post 

Amidst calls for an FDA-style agency that would issue AI licenses, WSJ reporting on the challenges the FDA faces in regulating AI-enabled medical devices is particularly interesting. The rapid evolution of algorithms poses a distinct challenge to the agency's regulatory approach, and some are calling for the FDA to be given increased legal authority to conduct real-time monitoring of AI devices.

Your Medical Devices Are Getting Smarter. Can the FDA Keep Them Safe? | The Wall Street Journal 

This article by Pitt Cyber affiliate scholar Morgan Frank presents an "ensemble model of AI exposure scores" to predict the vulnerability of job categories to AI. The paper argues that AI exposure alone does not necessarily indicate likelihood of job loss and urges new methods that "holistically quantify which workers have exposure to technology and the risk of detrimental labor outcomes."

AI exposure predicts unemployment risk | Cornell University

New online tools developed by Hugging Face and Leipzig University illustrate what biased outputs look like in text-to-image AI tools, using professions and gender-coded adjectives to tease out how the models portray gender and ethnicity. They find that DALL-E 2, Stable Diffusion v1.4, and v2 all "significantly over-represent the portion of their latent space associated with whiteness and masculinity across target attributes."

Stable Bias: Analyzing Societal Representations in Diffusion Models | Cornell University
