Degree Name

Doctor of Philosophy (PhD)

First Advisor

Michael Kearns

Second Advisor

Aaron Roth


Large-scale algorithmic decision making has increasingly run afoul of various social norms, laws, and regulations. A prominent concern is when a learned model exhibits discrimination against some demographic group, perhaps based on race or gender. Concerns over such algorithmic discrimination have led to a recent flurry of research on fairness in machine learning, which includes new tools for designing fair models and studies of the tradeoffs between predictive accuracy and fairness. We address algorithmic challenges in this domain.
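One common way to quantify the kind of group discrimination described above is demographic (statistical) parity: comparing a classifier's positive-prediction rates across demographic groups. The sketch below is purely illustrative of such a fairness metric, not a method from this thesis; the function name and the toy data are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means the classifier satisfies demographic (statistical)
    parity with respect to the binary group attribute.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical binary predictions and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, grp)  # 0.75 - 0.25 = 0.5
```

Constraining this gap during training is one way the accuracy-fairness tradeoff arises: shrinking the gap typically rules out the unconstrained accuracy-maximizing model.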

Preserving the privacy of data when performing analysis on it is not only a basic right of users but is also required by laws and regulations. How should one preserve privacy? After about two decades of fruitful research in this domain, differential privacy (DP) is considered by many to be the gold-standard notion of data privacy. We focus on how differential privacy can be useful beyond preserving data privacy. In particular, we study the connection between differential privacy and adaptive data analysis.
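As a concrete illustration of differential privacy (not specific to this thesis), the classic Laplace mechanism releases a statistic after adding noise calibrated to the statistic's sensitivity and the privacy parameter epsilon. A minimal sketch for a counting query, whose sensitivity is 1, assuming NumPy:

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-DP.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: privately count records at or above a threshold.
ages = [23, 35, 41, 29, 52, 61, 38]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; this calibration of noise to sensitivity is the same machinery that underlies DP's guarantees for adaptive data analysis.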

Users voluntarily provide huge amounts of personal data to businesses such as Facebook, Google, and Amazon in exchange for useful services. But a basic principle of data autonomy asserts that users should be able to revoke access to their data if they no longer find the exchange of data for services worthwhile. The right of users to request the erasure of personal data appears in regulations such as the Right to be Forgotten in the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). We provide algorithmic solutions to the problem of removing the influence of data points from machine learning models.
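To make "removing the influence of a data point" concrete, consider a toy model whose parameters are the empirical mean of the training data. The gold standard is retraining from scratch without the deleted point; for this toy model the same result can be obtained with an O(1) update. This is an illustrative sketch only, not the method developed in the thesis; all names here are hypothetical.

```python
import numpy as np

def fit_mean_model(X):
    """Toy 'model': the empirical mean of the training data."""
    return np.mean(X, axis=0)

def unlearn_point(model, n, x_removed):
    """Remove one point's influence from the mean in O(1),
    avoiding full retraining: mean' = (n * mean - x) / (n - 1)."""
    return (n * model - x_removed) / (n - 1)

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
model = fit_mean_model(X)
updated = unlearn_point(model, len(X), X[0])
retrained = fit_mean_model(X[1:])
# `updated` matches retraining from scratch without the removed point.
```

For realistic models, exact deletion is rarely this cheap, which is what makes efficient data removal an algorithmic problem in its own right.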