Published as a conference paper at ICLR 2024
Ziyue Huang, Yuting Liang, and Ke Yi. Instance-optimal mean estimation under differential privacy.
Advances in Neural Information Processing Systems, 34:25993–26004, 2021.
Peter Kairouz, Ziyu Liu, and Thomas Steinke. The distributed discrete gaussian mechanism for
federated learning with secure aggregation. In International Conference on Machine Learning,
pp. 5201–5212. PMLR, 2021.
Vishesh Karwa and Salil Vadhan. Finite sample differentially private confidence intervals, 2017.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images.
2009.
Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recog-
nition. Proceedings of the IEEE, 86(11):2278–2324, 1998. doi: 10.1109/5.726791.
Yann LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In
Neural networks: Tricks of the trade, pp. 9–50. Springer, 2002.
Xuechen Li, Florian Tramer, Percy Liang, and Tatsunori Hashimoto. Large language models can be
strong differentially private learners. In International Conference on Learning Representations,
2021.
Xuechen Li, Daogao Liu, Tatsunori B Hashimoto, Huseyin A. Inan, Janardhan Kulka-
rni, Yin-Tat Lee, and Abhradeep Guha Thakurta. When does differentially pri-
vate learning not suffer in high dimensions? In S. Koyejo, S. Mohamed,
A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Infor-
mation Processing Systems, volume 35, pp. 28616–28630. Curran Associates, Inc.,
2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/
file/b75ce884441c983f7357a312ffa02a3c-Paper-Conference.pdf.
H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private
recurrent language models. In International Conference on Learning Representations, 2018.
Sebastian Meiser and Esfandiar Mohammadi. Tight on budget? Tight bounds for r-fold approximate
differential privacy. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and
Communications Security, CCS ’18, pp. 247–264, New York, NY, USA, 2018. Association for
Computing Machinery. ISBN 9781450356930. doi: 10.1145/3243734.3243765. URL https:
//doi.org/10.1145/3243734.3243765.
Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H Brendan
McMahan, Sergei Vassilvitskii, Steve Chien, and Abhradeep Thakurta. How to DP-fy ML: A
practical guide to machine learning with differential privacy. arXiv preprint arXiv:2303.00654,
2023.
David Sommer, Sebastian Meiser, and Esfandiar Mohammadi. Privacy loss classes: The central
limit theorem in differential privacy. Proceedings on Privacy Enhancing Technologies, 2019:
245–269, 2019. doi: 10.2478/popets-2019-0029.
Shuang Song, Om Thakkar, and Abhradeep Thakurta. Characterizing private clipped gradient de-
scent on convex generalized linear problems. arXiv preprint arXiv:2006.06783, 2020.
Shuang Song, Thomas Steinke, Om Thakkar, and Abhradeep Thakurta. Evading the curse of dimen-
sionality in unconstrained private glms. In Arindam Banerjee and Kenji Fukumizu (eds.), Pro-
ceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume
130 of Proceedings of Machine Learning Research, pp. 2638–2646. PMLR, 13–15 Apr 2021a.
URL https://proceedings.mlr.press/v130/song21a.html.
Florian Tramer and Dan Boneh. Differentially private learning needs better features (or much
more data). In International Conference on Learning Representations, 2021. URL https:
//openreview.net/forum?id=YTWGvpFOQD-.