Ta Duy Nguyen

I am a Ph.D. student in Computer Science at Boston University working under the supervision of Alina Ene. I am broadly interested in Optimization, Machine Learning, and Theoretical Computer Science. I obtained my Master of Science from the National University of Singapore, where I worked with Yair Zick. Before that, I earned my Diplôme d'Ingénieur from École Polytechnique and a Bachelor's degree in Computer Science from the National University of Singapore.

Conference papers

  1. Quasi-Self-Concordant Optimization with \(\ell_{\infty}\) Lewis Weights

    (\(\alpha\beta\)) Alina Ene, Ta Duy Nguyen, Adrian Vladu
    In Proceedings of the 39th Conference on Neural Information Processing Systems (NeurIPS 2025) (To appear).
  2. Solving Linear Programs with Differential Privacy

    (\(\alpha\beta\)) Alina Ene, Huy Le Nguyen, Ta Duy Nguyen, Adrian Vladu
    In Proceedings of the 2025 International Conference on Randomization and Computation (RANDOM 2025). [pdf]
  3. Multiplicative Weights Update, Area Convexity and Random Coordinate Descent for Densest Subgraph Problems

    Ta Duy Nguyen, Alina Ene
    In Proceedings of the 41st International Conference on Machine Learning (ICML 2024) (Oral Presentation). [pdf]
  4. On the Generalization Error of Stochastic Mirror Descent for Quadratically-Bounded Losses: an Improved Analysis

    In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
  5. Improved Convergence in High Probability of Clipped Gradient Methods with Heavy Tailed Noise

    In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) (Spotlight).
    This paper is a merger of two preprints [arxiv:2302.05437] and [arxiv:2304.01119].
  6. High Probability Convergence of Stochastic Gradient Methods

    Zijian Liu*, Ta Duy Nguyen*, Thien Hang Nguyen*, Alina Ene, Huy Le Nguyen
    In Proceedings of the 40th International Conference on Machine Learning (ICML 2023). [pdf]
  7. On the Convergence of AdaGrad on \(\mathbb{R}^{d}\): Beyond Convexity, Non-Asymptotic Rate and Acceleration

    Zijian Liu*, Ta Duy Nguyen*, Alina Ene, Huy Le Nguyen
    In Proceedings of the 11th International Conference on Learning Representations (ICLR 2023). [pdf]
  8. Adaptive Accelerated (Extra-)Gradient Methods with Variance Reduction

    Zijian Liu*, Ta Duy Nguyen*, Alina Ene, Huy Le Nguyen
    In Proceedings of the 39th International Conference on Machine Learning (ICML 2022). [pdf]
  9. Threshold Task Games: Theory, Platform and Experiments

    (\(\alpha\beta\)) Kobi Gal, Ta Duy Nguyen, Quang Nhat Tran, Yair Zick
    In Proceedings of the 19th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2020). [pdf]
  10. Resource Based Cooperative Games: Optimization, Fairness and Stability

    (\(\alpha\beta\)) Ta Duy Nguyen, Yair Zick
    In Proceedings of the 11th International Symposium on Algorithmic Game Theory (SAGT 2018) (Short paper). [pdf]
  11. Fast Genetic Algorithms

    (\(\alpha\beta\)) Benjamin Doerr, Huu Phuoc Le, Regis Makhmara, Ta Duy Nguyen
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2017). [pdf]

Manuscripts

  1. Improved \(\ell_p\)-Regression via Iteratively Reweighted Least Squares

    (\(\alpha\beta\)) Alina Ene, Ta Duy Nguyen, Adrian Vladu
    In Submission.
  2. META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for Unbounded Functions

    Zijian Liu*, Ta Duy Nguyen*, Thien Hang Nguyen*, Alina Ene, Huy Le Nguyen
    [pdf]
  3. On Adversarial Bias and the Robustness of Fair Machine Learning

    Hongyan Chang*, Ta Duy Nguyen*, Sasi Kumar Murakonda, Ehsan Kazemi, Reza Shokri
    [pdf]
Work experience

  • Student Researcher Intern, Google, 08/2025 - 11/2025

    Research Project: Space and Time Efficient Softmax for Recommender Systems.
  • Machine Learning Intern, Meta, 06/2025 - 08/2025

    Project: Impression Rate Prediction and Allocation Algorithms for Pricing Optimization.
  • Research Intern, Applied Sciences Group, Microsoft, 06/2024 - 08/2024

    Research Project: Efficient Rank Allocation under Memory Constraints for Low-Rank Adaptation in Fine-Tuning Language Models.
  • Visiting Graduate Student, Simons Institute for the Theory of Computing, 08/2023 - 12/2023

  • Research Assistant, National University of Singapore, 04/2019 - 08/2021

    Research Project: Interpretability of Machine Learning Models and Robustness of Fair Machine Learning.
    Supervisor: Reza Shokri.