Differential Privacy Beyond The Central Model

dc.contributor.advisor: Aaron Roth
dc.contributor.author: Joseph, Matthew
dc.date: 2023-05-18T03:32:15.000
dc.date.accessioned: 2023-05-22T18:31:01Z
dc.date.available: 2001-01-01T00:00:00Z
dc.date.copyright: 2022-09-17T20:20:00-07:00
dc.date.issued: 2020-01-01
dc.date.submitted: 2022-09-17T12:48:02-07:00
dc.description.abstract: A differentially private algorithm adds randomness to its computations to ensure that its output reveals little about its input. This careful decoupling of output and input provides privacy for users who contribute input data, but the nature of this privacy depends on the model of differential privacy used. In the most common model, the central model, a differentially private algorithm receives a raw database and must produce a differentially private output. This privacy guarantee rests on several assumptions: there must exist a secure way of sending the data to the algorithm; the algorithm must maintain a secure state while carrying out its computations; and data contributors must trust the algorithm operator to responsibly steward their raw data in the future. When these three assumptions hold, differential privacy offers both meaningful utility and privacy. In this dissertation, we study what is possible when these assumptions fail. Pan-privacy weakens the first two assumptions and removes the third. Local differential privacy removes all three. Unfortunately, this flexibility comes at a cost: pan-privacy often introduces more random noise, and local differential privacy adds more noise still. This reduces utility in the form of worse accuracy and higher sample complexity. This trade-off between privacy and utility makes it important to understand the relative power of these models. We approach this question in two ways. The first part of this dissertation focuses on connections between different models: we show that in some settings, it is possible to convert algorithms in one model to algorithms in another. The second part complements these connections with separations: we construct problems where algorithms in different models must obtain different performance guarantees. (An illustrative central-versus-local sketch follows the metadata fields below.)
dc.description.degree: Doctor of Philosophy (PhD)
dc.format.extent: 143 p.
dc.format.mimetype: application/pdf
dc.identifier.uri: https://repository.upenn.edu/handle/20.500.14332/31974
dc.language: en
dc.legacy.articleid: 7017
dc.legacy.fulltexturl: https://repository.upenn.edu/cgi/viewcontent.cgi?article=7017&context=edissertations&unstamped=1
dc.provenance: Received from ProQuest
dc.rights: Matthew Joseph
dc.source.issue: 5231
dc.source.journal: Publicly Accessible Penn Dissertations
dc.source.status: published
dc.subject.other: Communication complexity
dc.subject.other: Differential privacy
dc.subject.other: Hypothesis testing
dc.subject.other: Local privacy
dc.subject.other: Pan privacy
dc.subject.other: Uniformity testing
dc.subject.other: Computer Sciences
dc.title: Differential Privacy Beyond The Central Model
dc.type: Dissertation/Thesis
digcom.contributor.author: Joseph, Matthew
digcom.date.embargo: 2001-01-01T00:00:00-08:00
digcom.identifier: edissertations/5231
digcom.identifier.contextkey: 31349201
digcom.identifier.submissionpath: edissertations/5231
digcom.type: dissertation
dspace.entity.type: Publication
upenn.graduate.group: Computer and Information Science
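
To make the trade-off described in the abstract concrete, here is a minimal Python sketch contrasting the central and local models on a single counting query. The two mechanisms shown, the Laplace mechanism and binary randomized response, are standard textbook constructions and are not drawn from the dissertation itself; the function names, the epsilon parameter, and the demo at the bottom are illustrative assumptions.

import math
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample, written as the difference of two
    # independent Exponential(1) samples.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def central_count(bits: list[int], epsilon: float) -> float:
    # Central model: a trusted curator sees the raw bits and releases a
    # noisy sum. A counting query has sensitivity 1, so Laplace noise of
    # scale 1/epsilon yields epsilon-differential privacy.
    return sum(bits) + laplace_noise(1.0 / epsilon)

def randomized_response(bit: int, epsilon: float) -> int:
    # Local model: each user randomizes their own bit before it leaves
    # their hands. Reporting the truth with probability
    # e^eps / (e^eps + 1) is epsilon-locally differentially private.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def local_count(bits: list[int], epsilon: float) -> float:
    # Aggregate the randomized reports and debias: each report equals the
    # true bit with probability p, so E[sum of reports] =
    # (2p - 1) * true_sum + n * (1 - p). Solving for true_sum gives an
    # unbiased estimate whose extra variance is the price of removing trust.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    reports = sum(randomized_response(b, epsilon) for b in bits)
    return (reports - len(bits) * (1.0 - p)) / (2.0 * p - 1.0)

if __name__ == "__main__":
    data = [random.randint(0, 1) for _ in range(10_000)]
    print("true count:   ", sum(data))
    print("central model:", round(central_count(data, epsilon=1.0), 1))
    print("local model:  ", round(local_count(data, epsilon=1.0), 1))

Run on the same data with the same epsilon, the local estimate is visibly noisier than the central one, reflecting the accuracy and sample-complexity gap the abstract describes.
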
Files
Original bundle
Name: Joseph_upenngdas_0175C_14235.pdf
Size: 831.48 KB
Format: Adobe Portable Document Format