Presented by Amifa Raj, Computer Science emphasis
Hybrid format: City Center Plaza Conference Room 368 or via Zoom
Information access systems, such as search engines and recommender systems, often display results in a ranked list sorted by relevance. The fairness of these ranked lists has received attention as an important evaluation criterion alongside traditional metrics capturing constructs such as utility or accuracy. Fairness broadly involves both provider- and consumer-side concerns, at both the group and individual levels. Several fair ranking metrics have been proposed to measure group fairness for providers based on various "sensitive attributes". These metrics differ in their fairness goals, assumptions, and implementations. Although several such metrics exist, multiple open challenges remain in this area.
In my thesis, I work in the area of provider-side group fairness in ranking for information access systems. I am interested in understanding the fairness concepts and practical applications of existing fair ranking metrics and in finding ways to improve them. My work will aid researchers and practitioners in selecting fair ranking metrics by identifying their strengths, limitations, applicability, and reliability. Moreover, I will contribute to the advancement of fair ranking metrics by considering various ranking layout models, and further contribute to group-fairness optimization in grid-based ranking layouts.
Dr. Michael Ekstrand (Advisor), Dr. Casey Kennington, Dr. Sole Pera, Dr. Edoardo Serra
(Supervisory Committee)