A Conversation with Yu-Ru Lin, Associate Professor, School of Computing and Information & Pitt Cyber Research and Academic Director

Pitt Cyber has nearly 100 affiliate scholars drawn from across the University. Affiliate scholars are Pitt faculty working on cyber-related transdisciplinary research. Every so often we catch up with one of them on the blog to learn more about what they’re working on.

This week, we spoke with Pitt Cyber Research and Academic Director and Associate Professor at the School of Computing and Information, Yu-Ru Lin.

Q: You’ve been working on countering extremist narratives over the past several years. What brought you to this topic?

In the past few years, many high-profile violent events have been inspired by extremism, such as the Tree of Life synagogue shooting in Pittsburgh in 2018 and the shooting at a Walmart in El Paso, Texas, in 2019. Because social media plays a significant role in spreading extremism, and my research focuses on social media analytics, I began investigating social media extremism, information integrity, and AI development. The problem is complex because extremist narratives evolve not only within individuals but also within communities, and these networks are widespread.

Q: You are also working on ethical and accountable AI. How can AI developers work to mitigate bias in their models?

My research spans two areas: cyber social influence (misinformation, disinformation, and conspiracy theories) and ethical AI. In the second area, we focus on mitigating the negative effects of AI-generated bias. For example, facial recognition systems tend to misidentify people of color, and the algorithmically curated feeds people see everywhere (Amazon, Google, etc.) can reinforce stereotypes and deepen political divisions. One way to combat this is to figure out how such biased information spreads and then create tools that help data scientists recognize it. We also propose solutions that help data scientists reduce bias by reevaluating their data sources. We want to foster cultures and practices in data science that are more accountable.
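As a rough illustration of what reevaluating a data source can look like in practice, here is a minimal Python sketch. It is a hypothetical example rather than Lin's actual tooling: it audits how demographic groups are represented in a labeled dataset and computes inverse-frequency sample weights so that underrepresented groups count equally during training. The record format, group labels, and warning threshold are all assumptions.

```python
from collections import Counter

def audit_group_balance(records, group_key="group", warn_below=0.15):
    """Report each group's share of the dataset and flag groups
    whose share falls below the (hypothetical) warning threshold."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    shares = {g: n / total for g, n in counts.items()}
    flagged = [g for g, s in shares.items() if s < warn_below]
    return shares, flagged

def inverse_frequency_weights(records, group_key="group"):
    """Weight each record inversely to its group's frequency so that
    every group contributes equally to a downstream model."""
    counts = Counter(r[group_key] for r in records)
    n_groups = len(counts)
    total = sum(counts.values())
    return [total / (n_groups * counts[r[group_key]]) for r in records]

# Hypothetical face-dataset metadata: an 80/20 split between two groups.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(audit_group_balance(data))          # ({'A': 0.8, 'B': 0.2}, [])
weights = inverse_frequency_weights(data)
print(weights[0], weights[-1])            # 0.625 for A, 2.5 for B
```

With these weights, each group's records sum to the same total weight, so a model trained on the reweighted data no longer sees group A four times as often as group B.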

Q: What suggestions would you give to people trying to avoid online misinformation or disinformation?

There are many fact-checking tools available, and people can make use of them. While it is the responsibility of social media platforms to create a safe online environment, users must also engage in critical thinking and develop digital literacy. Automatic friend and content recommendations on social media limit your exposure to a diverse range of people: because users are more likely to interact with content they already find interesting, platforms keep serving similar content, and the recommendations grow more and more narrow over time. So watch out for the recommendations you receive, and try to keep your social network as open and diverse as possible so that you are exposed to a variety of viewpoints.
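To see why this feedback loop narrows a feed, here is a small, purely illustrative simulation; the topic list, the 90% click rate, and the `explore_rate` parameter are assumptions, not a model of any real platform. A recommender that always serves the user's most-clicked topic collapses the feed onto a single topic, while a little forced exploration keeps other topics in view.

```python
import random

random.seed(42)
TOPICS = ["politics", "sports", "science", "music"]

def recommend(click_counts, explore_rate):
    """Serve the most-clicked topic; with probability `explore_rate`,
    serve a random topic instead."""
    if random.random() < explore_rate or not any(click_counts.values()):
        return random.choice(TOPICS)
    return max(click_counts, key=click_counts.get)

def simulate(explore_rate, rounds=500):
    """Count how often each topic is served to a user who clicks
    whatever is served 90% of the time."""
    clicks = {t: 0 for t in TOPICS}
    served = {t: 0 for t in TOPICS}
    for _ in range(rounds):
        topic = recommend(clicks, explore_rate)
        served[topic] += 1
        if random.random() < 0.9:
            clicks[topic] += 1
    return served

print(simulate(explore_rate=0.0))  # one topic dominates nearly all 500 slots
print(simulate(explore_rate=0.2))  # other topics keep appearing in the feed
```

The `explore_rate` knob plays the same role as Lin's advice: deliberately injecting diversity is what counteracts the loop.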

Q: What impact does your work have on society?

The broader impact is on public policy. We have already seen how misinformation can sway elections by distorting public opinion. Better regulatory policies for AI, social media, and search engines are necessary. U.S. policy is falling behind, and more research is needed into AI's negative impacts and how to mitigate them.

It also helps us better understand human psychology and cognition. Misinformation and disinformation affect our thinking, communication, and knowledge-acquisition processes. Their impact is everywhere, from education and finance to security and healthcare; in every field, you want to know how information affects decisions. For instance, during the COVID-19 outbreak, people made risky choices about their health because of misinformation.

Many people have worked on misinformation and disinformation, but a large part of that research aims to create systems that automatically "flag" and remove problems (either specific content or people). However, people and their behaviors are constantly changing: when you have a flagging system, people will try to get around it. Free speech also adds a layer of complexity to the problem. My research goal is to better understand how information spreads within individuals and across ecosystems. We want to create a more effective strategy for improving our information system so that we can hear more voices and protect minorities.

Q: Can you recommend a book, podcast, or resource?

For people interested in interdisciplinary research, I recommend Thomas Kuhn's book "The Structure of Scientific Revolutions." It helped me understand why purely disciplinary research can be limiting and why we should cultivate a more diverse and open mind. I also enjoy podcasts that cover a wide range of topics. For example, "Data Skeptic" explores how to understand things using data and data science: what works and what doesn't. Another podcast, "No Stupid Questions," often uses simple questions to introduce new and old research in human psychology, which helps listeners understand what scientists currently know about how and why people think and act. All of these are easily accessible, and you do not have to be an expert to enjoy them.
