REFRAMING ALGORITHMIC FAIRNESS: A PARADIGM FOR FAIR, ACCURATE, AND FLEXIBLE MODEL DEVELOPMENT
Degree type
Graduate group
Discipline
Subject
machine learning
Funder
Grant number
License
Copyright date
Distributor
Related resources
Author
Contributor
Abstract
Machine learning increasingly shapes high-stakes decisions, offering benefits such as improved efficiency and accuracy, but also raising challenges when systems fail. Unlike human decision-makers, algorithms often lack clear paths for recourse or repair. This dissertation develops algorithmic tools for responsible and flexible machine learning, focusing on models that can adapt post-deployment, incorporate human feedback, and scale to real-world use. It introduces bias bounties, a framework in which users collaboratively identify and patch model failures, and presents algorithms for post-processing models to meet fairness goals without retraining or rigid assumptions. It advances the theory of multicalibration, showing when fairness can be achieved without sacrificing accuracy, and proposes a simple swap-regret formulation of multicalibration that applies in a variety of contexts, such as collaborative learning. Finally, it leverages multicalibration for flexible model development, demonstrating techniques for post-processing multicalibrated predictors to meet various downstream fairness objectives.
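To make the central notion concrete: a predictor is (approximately) multicalibrated if, within every group in a collection and every level set of the predictions, the mean outcome roughly matches the mean prediction. The following is a minimal sketch of an auditing routine in that spirit; the function name, binning scheme, and tolerance parameter are illustrative assumptions, not the dissertation's algorithms.

```python
import numpy as np

def multicalibration_violations(preds, labels, groups, n_bins=10, tol=0.05):
    """Flag (group, prediction-bin) cells where the mean label deviates
    from the mean prediction by more than tol.

    preds  : array of predictions in [0, 1]
    labels : array of binary outcomes
    groups : dict mapping group name -> boolean membership mask
    """
    # Discretize predictions into n_bins equal-width level sets.
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    violations = []
    for name, member in groups.items():
        for b in range(n_bins):
            cell = member & (bins == b)
            if not cell.any():
                continue
            gap = abs(labels[cell].mean() - preds[cell].mean())
            if gap > tol:
                violations.append((name, b, gap))
    return violations

# Example: a predictor that is badly miscalibrated on every example,
# so both occupied cells are flagged.
preds = np.array([0.2, 0.2, 0.8, 0.8])
labels = np.array([1, 1, 0, 0])
groups = {"all": np.ones(4, dtype=bool)}
flagged = multicalibration_violations(preds, labels, groups)
```

A cell flagged by such an audit is exactly the kind of failure that iterative multicalibration procedures (or the bias-bounty framework above) would patch by shifting predictions in that cell toward the observed mean.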
Advisor
Roth, Aaron