Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest. Imana, B., Korolova, A., & Heidemann, J. In Computer Supported Cooperative Work, to appear, Minneapolis, Minnesota, USA, October 2023. ACM.
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse today. The opaque nature of the algorithms these platforms use to curate content raises societal questions. Prior studies have used black-box methods led by experts or collaborative audits driven by everyday users to show that these algorithms can lead to biased or discriminatory outcomes. However, existing auditing methods face fundamental limitations because they function independent of the platforms. Concerns of potential harmful outcomes have prompted proposal of legislation in both the U.S. and the E.U. to mandate a new form of auditing where vetted external researchers get privileged access to social media platforms. Unfortunately, to date there have been no concrete technical proposals to provide such auditing, because auditing at scale risks disclosure of users' private data and platforms' proprietary algorithms. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation. The first contribution of our work is to enumerate the challenges and the limitations of existing auditing methods to implement these policies at scale. Second, we suggest that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing of social media platforms by external researchers. Third, we show platform-supported auditing need not risk user privacy nor disclosure of platforms' business interests by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant factor increase (6.34× as an upper bound, and 4× for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight into how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
@InProceedings{Imana23a,
        author =        "Basileal Imana and Aleksandra Korolova and John Heidemann",
        title =         "Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest",
        booktitle =     "Computer Supported Cooperative Work",
        year =          2023,
        sortdate =   "2023-10-13",
        project =    "ant",
        jsubject =   "network_observation",
        pages =      "to appear",
        month =      oct,
        address =    "Minneapolis, Minnesota, USA",
        publisher =  "ACM",
        keywords =   "linkedin, facebook, ad delivery algorithm, bias,
                  skew, discrimination, platform-supported auditing,
                  differential privacy",
        doi =        "10.1145/3579610",
        url =        "https://ant.isi.edu/%7ejohnh/PAPERS/Imana23a.html",
        pdfurl =     "https://ant.isi.edu/%7ejohnh/PAPERS/Imana23a.pdf",
        blogurl =    "https://ant.isi.edu/blog/?p=1889",
        abstract =   "Social media platforms curate access to information and opportunities,
and so play a critical role in shaping public discourse today.  The
opaque nature of the algorithms these platforms use to curate content
raises societal questions.  Prior studies have used black-box methods
led by experts or collaborative audits driven by everyday users to
show that these algorithms can lead to biased or discriminatory
outcomes.  However, existing auditing methods face fundamental
limitations because they function independent of the platforms.
Concerns of potential harmful outcomes have prompted proposal of
legislation in both the U.S.~and the E.U.~to mandate a new form of
auditing where vetted external researchers get privileged access to
social media platforms.  Unfortunately, to date there have been no
concrete technical proposals to provide such auditing, because
auditing at scale risks disclosure of users' private data and
platforms' proprietary algorithms.  We propose a new method for
\emph{platform-supported auditing} that can meet the goals of the
proposed legislation.  The first contribution of our work is to
enumerate the challenges and the limitations of existing auditing
methods to implement these policies at scale.  Second, we suggest that
limited, privileged access to \emph{relevance estimators} is the key
to enabling generalizable platform-supported auditing of social media
platforms by external researchers.  Third, we show platform-supported
auditing need not risk user privacy nor disclosure of platforms'
business interests by proposing an auditing framework that protects
against these risks.  For a particular fairness metric, we show that
ensuring privacy imposes only a small constant factor increase
($6.34\times$ as an upper bound, and $4\times$ for typical parameters)
in the number of samples required for accurate auditing.  Our
technical contributions, combined with ongoing legal and policy
efforts, can enable public oversight into how social media platforms
affect individuals and society by moving past the
privacy-vs-transparency hurdle.
",
}
