What We're Reading

Tech headlines this week are dominated by Congress's latest attempt to ban TikTok, but there's plenty of important technology reporting flying under the public's radar. This week, we highlight a few of those stories: an actionable risk framework to structure thinking around open-source AI, what to make of BigTech's steps into chip manufacturing, and the increasing dependence of researchers on industry in areas of emerging technology.

On the Societal Impact of Open Foundation Models | Stanford HAI

This excellent new paper from Stanford "present[s] a framework that centers marginal risk: what additional risk is society subject to because of open foundation models relative to pre-existing technologies, closed models, or other relevant reference points?" The paper identifies seven "misuse vectors": biosecurity, cybersecurity, voice cloning scams, spear phishing, disinformation, non-consensual intimate imagery, and child sexual abuse materials. The framework put forth is both concrete and immediately useful in quantifying risks associated with open AI models. 

Fair Learning | Texas Law Review

This article comes at the recommendation of Pitt Cyber's former Research & Academic Director Mike Madison. Although technical at times, it offers a fairly accessible assessment of the complications of applying copyright protections to AI training materials, as argued in the case brought by the NY Times against OpenAI. While noting some possible exceptions (style mimicry), the authors argue that fair use applies to machine learning, going so far as to argue that "the law should [not] treat robots and humans differently" in the context of learning. They further contend that allowing AI models to train on copyrighted materials advances a social good: "broad access to training datasets will make AI better, safer, and fairer."

The Lifeblood of the AI Boom | The Atlantic

Faced with high energy costs and external dependencies for GPUs, BigTech companies are investing in their own chip-making capabilities in the hopes of cost and energy savings. The prospect raises concerns about market dominance, especially with the FTC paying increased attention to the AI market.

OpenAI GPT Sorts Resume Names With Racial Bias, Test Shows | Bloomberg 

This excellent reporting by Bloomberg tangibly illustrates what AI bias looks like in the context of hiring. Bloomberg assigned names with clear ethnic (and gender) associations to 8 resumes with equivalent qualifications, then asked ChatGPT to pick the top candidate for 4 different roles, repeating the experiment 1,000 times. An unbiased tool would pick each candidate 12.5% of the time. Instead, the outcomes show that the AI's "gender and racial preferences differed depending on the particular job that a candidate was evaluated for. GPT does not consistently disfavor any one group but will pick winners and losers depending on the context," favoring Hispanic women for the HR position and Asian candidates for the financial analyst position, for example.
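For readers curious about the mechanics of such an audit, here is a minimal sketch (not Bloomberg's actual code, and with hypothetical placeholder names and a stand-in for the model call): repeatedly ask a picker to choose a top candidate from equally qualified resumes, then compare each name's selection rate to the 12.5% (one-in-eight) unbiased baseline.

```python
# Sketch of a resume-selection bias audit, assuming 8 equally qualified candidates.
# The real experiment would replace pick_top_candidate() with a call to the model
# under test; here it is a random, unbiased picker for illustration only.
import random
from collections import Counter

CANDIDATE_NAMES = [  # hypothetical demographically associated names
    "Name A", "Name B", "Name C", "Name D",
    "Name E", "Name F", "Name G", "Name H",
]
TRIALS = 1000

def pick_top_candidate(names):
    """Placeholder for the model call; returns one candidate name."""
    return random.choice(names)

picks = Counter(pick_top_candidate(CANDIDATE_NAMES) for _ in range(TRIALS))

baseline = 1 / len(CANDIDATE_NAMES)  # 12.5% if selection is unbiased
for name in CANDIDATE_NAMES:
    rate = picks[name] / TRIALS
    print(f"{name}: picked {rate:.1%} (unbiased baseline {baseline:.1%})")
```

Large, consistent deviations from that baseline across many trials, broken out by role, are what the Bloomberg test surfaced.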

Silicon Valley is pricing academics out of AI research | The Washington Post

The exploding cost of training AI models has tilted the balance of power toward industry and away from academia and government. With researchers dependent on BigTech for access to models and compute, industry has increased leverage in setting research agendas. This reporting by the Washington Post raises important questions about the implications of this imbalance of power, a dynamic distinct from historical technology breakthroughs driven by government funding.
