Hoda Heidari

ETH Zurich

Postdoctoral Associate

Hoda Heidari is a postdoctoral associate at Cornell University. Her current research is broadly concerned with the societal aspects of Artificial Intelligence and, in particular, with issues of fairness and explainability in Machine Learning. At Cornell, Hoda is fortunate to work with Professors Jon Kleinberg, Solon Barocas, and Karen Levy. Before joining Cornell, she spent two years as a postdoctoral fellow at the Institute for Machine Learning at ETH Zurich, where she had the privilege of collaborating with Professor Andreas Krause and co-supervising multiple B.Sc. and M.Sc. students. Hoda completed her doctoral studies in Computer and Information Science at the University of Pennsylvania under the supervision of Professors Michael Kearns and Ali Jadbabaie. During her time at UPenn, she also obtained an M.Sc. degree in statistics from the Wharton School. Hoda has organized multiple events on the topic of her research, including a tutorial at the Web Conference (WWW) and a workshop at the Conference on Neural Information Processing Systems (NeurIPS). Beyond computer science venues, she has been invited to and participated in numerous interdisciplinary panels and discussions. In her spare time, Hoda enjoys traveling, reading books, and singing.

Research Abstract:

Artificial Intelligence and Machine Learning have brought about substantial changes in society, and there is an urgent need for these technologies to reflect our collective values. A 2014 report by the Obama Administration echoes this message and calls for a better understanding of how big data and AI technologies can perpetuate, exacerbate, or mask unfairness and discrimination (Podesta et al., 2014). Quantifying and forging a consensus around social values, like fairness, is a significant challenge, and we cannot rely solely on technical solutions to address these highly complex, socio-technical problems. The core mission of my research is to bring together tools and insights from machine learning, economics, political philosophy, and human-centered experiments to address three fundamental challenges: (1) comparing and combining algorithmic decision-making with human-centered alternatives; (2) finding ethically acceptable, context-dependent measures of algorithmic fairness with input from stakeholders and domain experts; and (3) evaluating the long-term impact of algorithmic decisions on society. My research often draws on ideas and models from the social sciences and economics. For instance, in a series of recent articles, I have presented a new perspective on the fair-ML literature through the lens of long-established theories of distributive justice. I also study and develop methods to effectively involve people in the process of defining and enforcing algorithmic fairness. In particular, I design and conduct human-subject experiments to better understand human perceptions of fairness and justice, and to inform and enrich the mathematical formulation of these concepts.