Differentially Private Stochastic Linear Bandits: (Almost) for Free. Hanna, O. A., Girgis, A. M., Fragouli, C., & Diggavi, S. arXiv preprint arXiv:2207.03445, 2022.
In this paper, we propose differentially private algorithms for the problem of stochastic linear bandits in the central, local, and shuffled models. In the central model, we achieve almost the same regret as the optimal non-private algorithms, which means we get privacy for free. In particular, we achieve a regret of Õ(√T + 1/ε), matching the known lower bound for private linear bandits, while the best previously known algorithm achieves Õ((1/ε)√T). In the local case, we achieve a regret of Õ((1/ε)√T), which matches the non-private regret for constant ε but suffers a regret penalty when ε is small. In the shuffled model, we also achieve a regret of Õ(√T + 1/ε) as in the central case, while the best previously known algorithm suffers a regret of Õ((1/ε)T^{3/5}). Our numerical evaluation validates our theoretical results.
@article{hanna2022differentially,
  title={Differentially Private Stochastic Linear Bandits: (Almost) for Free},
  author={Hanna, Osama A and Girgis, Antonious M and Fragouli, Christina and Diggavi, Suhas},
  journal={arXiv preprint arXiv:2207.03445},
  year={2022},
  tags={journalSub,DML,PDL},
  type={1},
  url_arxiv={https://arxiv.org/abs/2207.03445},
  abstract={In this paper, we propose differentially private algorithms for the problem of stochastic linear bandits in the central, local, and shuffled models. In the central model, we achieve almost the same regret as the optimal non-private algorithms, which means we get privacy for free. In particular, we achieve a regret of $\tilde{O}(\sqrt{T} + \frac{1}{\epsilon})$, matching the known lower bound for private linear bandits, while the best previously known algorithm achieves $\tilde{O}(\frac{1}{\epsilon}\sqrt{T})$. In the local case, we achieve a regret of $\tilde{O}(\frac{1}{\epsilon}\sqrt{T})$, which matches the non-private regret for constant $\epsilon$ but suffers a regret penalty when $\epsilon$ is small. In the shuffled model, we also achieve a regret of $\tilde{O}(\sqrt{T} + \frac{1}{\epsilon})$ as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$. Our numerical evaluation validates our theoretical results.},
}