Differentially Private Stochastic Linear Bandits: (Almost) for Free. Hanna, O., Girgis, A. M., Fragouli, C., & Diggavi, S. IEEE Journal on Selected Areas in Information Theory, 5:135-147, 2024.
@ARTICLE{10502314,
  author={Hanna, Osama and Girgis, Antonious M. and Fragouli, Christina and Diggavi, Suhas},
  journal={IEEE Journal on Selected Areas in Information Theory},
  title={Differentially Private Stochastic Linear Bandits: (Almost) for Free},
  year={2024},
  volume={5},
  pages={135-147},
  abstract={In this paper, we propose differentially private algorithms for the problem of stochastic linear bandits in the central, local and shuffled models. In the central model, we achieve almost the same regret as the optimal non-private algorithms, which means we get privacy for free. In particular, we achieve a regret of  $\tilde {O}\left({\sqrt {T}+{}\frac {1}{\varepsilon }}\right)$  matching the known lower bound for private linear bandits, while the best previously known algorithm achieves  $\tilde {O}\left({{}\frac {1}{\varepsilon }\sqrt {T}}\right)$ . In the local case, we achieve a regret of  $\tilde {O}\left({{}\frac {1}{\varepsilon }{\sqrt {T}}}\right)$  which matches the non-private regret for constant  $\varepsilon $ , but suffers a regret penalty when  $\varepsilon $  is small. In the shuffled model, we also achieve regret of  $\tilde {O}\left({\sqrt {T}+{}\frac {1}{\varepsilon }}\right)$  while the best previously known algorithm suffers a regret of  $\tilde {O}\left({{}\frac {1}{\varepsilon }{T^{3/5}}}\right)$ . Our numerical evaluation validates our theoretical results. Our results generalize for contextual linear bandits with known context distributions.},
  keywords={Privacy;Servers;Stochastic processes;Differential privacy;Information theory;Context modeling;Sensors;Bandits;contextual bandits;differential privacy;linear bandits;differentially private bandits},
  doi={10.1109/JSAIT.2024.3389954},
  ISSN={2641-8770},
  tags={journal,DML,PDL},
  type={2},
  url_arxiv={https://arxiv.org/abs/2207.03445},
}
