Speaker Series

In Summer 2021, we featured several early-career researchers in Communications, Information Science, and Computer Science from various universities during June and July. This speaker series built on themes discussed in the AI Infodemic reading group during the Spring 2021 semester. Recordings of all past speaker presentations can be viewed below.

June 15: William Clyde Partin

June 22: Yvonne M. Eadon

June 29: Danica Pawlick-Potts

July 20: Ruth L. Nuñez

July 27: Devansh Saxena

Speaker Schedule

All events took place on Zoom on Tuesdays during June and July at 3:00 p.m. CT.

June 15: What Was Disinformation? Lessons from the 2020 Census

William Clyde Partin, University of North Carolina; Data & Society

Between 2018 and 2021, the Disinformation Action Lab at the Data & Society Research Institute participated in an unprecedented partnership with Census Counts, a coalition of civil society organizations, and the United States Census Bureau to monitor, analyze, and respond to disinformation threats against the 2020 decennial census. Our work was motivated by a desire to preempt two harms that could result from sustained disinformation campaigns: intimidation leading to non-participation in the census, and the undermining of the legitimacy of census data. While we identified several examples of “classic” disinformation attacks, we soon discovered that mis- and disinformation were not the primary drivers of the harms we had anticipated. Put simply, by centering the harms we had once attributed to disinformation, we found that the concept of disinformation was insufficient to either conceptualize or respond to the range of networked communication threats that imperiled the 2020 Census.

Empirically, this presentation draws on archival material and participant observation from our work on the 2020 Census to illustrate the range of communication threats we faced and responded to, including but not limited to disinformation. Theoretically, this presentation offers an in-progress critique of the concept of disinformation that emerges from our experience working on the Census. I contextualize our findings within the growing body of literature on disinformation, as well as recent attempts to rethink many of disinformation’s key assumptions. Our critique focuses on a number of factors, including: methodological and empirical norms that spare disinformation researchers from grappling with the limits of the concept; the need to account for contested notions of truth and harm in order to avoid an untenable “eye from nowhere” approach; the uninterrogated power dynamics that arise when researchers universalize their understandings of contested concepts; and the inability of the concept of disinformation to account for how narrative and framing shape the production and interpretation of social media content.

Registration is now closed for this event.

June 22: “Who’s the Conspiracy Theorist? Not I”: Feedback Loops between Conspiracy Communities and Academia

Yvonne M. Eadon, University of California, Los Angeles

View the presentation slides

This presentation will examine the relationship between two areas of research that have been labeled “conspiracy theories,” research on the assassination of JFK and UFO research, and how their relationship with science, history, and academia in general has shifted both these alternative disciplines and the mainstream academic areas that engage with or run parallel to them. Questions of evidence, legitimacy, and what constitutes an archival record will be addressed.

Registration is now closed for this event.

June 29: Is anybody in there? How perception and trust impact information interactions with AI systems

Danica Pawlick-Potts, Western University

In this talk I will do two things. The first is to argue for the validity and utility of a motivation-attributing account of trust. The second is to outline how such an account of trust functions in human-AI information interactions, including the potential influences on, and implications of, such trust interactions.

In both the information behaviour and human-AI interaction literatures, the dominant accounts of trust are rational-cognitive. Rational-cognitive accounts of trust are relatively straightforward to apply in information system contexts because they are grounded in features akin to risk assessment, and they arguably result in reliance rather than trust. I am interested in what happens when people are not informed enough to perform a robust risk assessment. The utility of trust lies in its ability to help us cope and make decisions when we face uncertainty and/or ignorance, often stemming from the freedom of others. Rational-cognitive approaches to trust have likely dominated information behaviour research because information systems have historically lacked any meaningful autonomy or discretionary power. Motivation-attributing accounts of trust require at least the perception that the subject of trust possesses autonomy and discretion in the interaction.

While most agree that current AI does not possess meaningful moral autonomy, there is the opportunity for some kinds of discretionary power. Further, the mere perception of autonomy may elevate an interaction from one of reliance to one of trust. This is significant because trust, by its nature, leads people to be vulnerable and, once trust has been established, more likely to accept information without further questioning. Trust is not necessarily an inherently rational phenomenon; it trades carefully informed choices for reduced complexity and decision-making effort. It is important to understand how this interaction functions and what the implications are for information behaviour when people interact with AI systems.

Registration is now closed for this event.

July 20: Injustice by Design: A Chicana/Latina feminist analysis of Amazon’s ACX labor platform

Ruth L. Nuñez, University of California, Los Angeles

The future of creative work depends on the decisions and actions we take today, as well as on the voices and perspectives included in these processes. In this era of convergence media, the lack of regulatory frameworks and strong worker protections allows digital labor platform providers to reshape both online and offline labor structures and culture without accountability to workers or to the greater public. In the commercial audiobook arena, Amazon holds a virtual monopoly, achieved through the vertical integration of its book publishing (APub), audiobook production (Audiobook Creation Exchange, or ACX), and audiobook distribution (Audible) platforms. Employing a Cultures of Production methodology and a Chicana/Latina feminist framework, I interrogate the ways in which power asymmetries are embedded in the value-laden design of Amazon’s ACX labor platform. My findings reveal how the platform’s design manufactures authenticity in order to seem trustworthy to prospective workers, keeps labor and labor relations within Amazon’s gaze and control, and participates in (re)structuring the labor of audiobook narrators into a form of precarious labor. I conclude by reimagining alternative, more inclusive, worker-centered audiobook labor platforms informed by Chicana/Latina feminist epistemologies.

Registration is now closed for this event.

July 27: Child-Welfare System: The Politics, Power, and Economics of Algorithms

Devansh Saxena, Marquette University

Child welfare agencies in the United States have looked to structured risk assessment tools over the past three decades as a means to achieve consistent, evidence-based, objective, unbiased, and defensible decision-making. These structured risk assessments act as data collection mechanisms and have in recent years further evolved into algorithmic decision-making tools. Moreover, several of these algorithmic tools have uncritically reinforced biased theoretical constructs and predictors because of the easy availability of the underlying data. For instance, algorithms embed pseudo-predictors that use a parent’s response to interventions, rather than the means and effectiveness of the interventions themselves, to assess the likelihood of maltreatment. There are also significant disparities between algorithmic assessments and the narrative coding of case notes; that is, algorithms do not even mirror caseworkers’ notes from their interactions with families. Algorithms create a mirage of evidence-based and unbiased practice, but they can easily embed underlying power structures, conceal biases, and introduce ambiguity into decision-making, all of which has serious implications for child-welfare practice. Why haven’t these algorithms lived up to expectations? And how might we improve them? A human-centered approach to algorithm design can help us evaluate algorithmic decision-making systems in complex socio-political domains and uncover such disparities.

Registration is now closed for this event.