Welcome!
I am a Research Fellow at Deakin University, Melbourne, VIC, Australia, working with Professor Sunil Gupta and Professor Svetha Venkatesh. I earned my PhD in Machine Learning at the National University of Singapore (NUS) under the guidance of Professor Bryan Kian Hsiang Low and Professor Patrick Jaillet. My research focuses on
- Bayesian optimization,
- Active learning,
- and other areas such as meta-learning, fairness in collaborative machine learning, machine unlearning, explainable AI, and inverse reinforcement learning.
Currently, I am focusing on devising a general approach that unifies several problems related to Bayesian optimization.
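For readers new to the area, most of the work below builds on the basic Bayesian optimization loop: fit a Gaussian process (GP) surrogate to the evaluations collected so far, maximize an acquisition function over the input space, and query the black-box objective at the selected point. The sketch below illustrates this with a GP-UCB acquisition on a toy 1-D problem; the kernel, lengthscale, and confidence coefficient are illustrative choices only, not tied to any particular paper.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-5):
    # Standard GP regression: posterior mean and variance at test points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, np.maximum(var, 0.0)

def objective(x):
    # Toy black-box function to maximize (optimum at x = 0.5).
    return -(x - 0.5) ** 2

grid = np.linspace(0.0, 1.0, 201)       # candidate inputs
x_obs = np.array([0.1, 0.9])            # initial design
y_obs = objective(x_obs)

for _ in range(10):
    mean, var = gp_posterior(x_obs, y_obs, grid)
    ucb = mean + 2.0 * np.sqrt(var)      # upper confidence bound acquisition
    x_next = grid[np.argmax(ucb)]        # query the most promising point
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best = x_obs[np.argmax(y_obs)]           # incumbent after the budget is spent
```

With only a handful of evaluations the incumbent lands near the true optimum at 0.5; swapping the acquisition function (e.g., for an information-theoretic or risk-aware criterion) is exactly where the projects below depart from this vanilla loop.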
Recently
- I am thrilled to announce that our work on active set ordering has been accepted for poster presentation at NeurIPS 2024.
- I am thrilled to share that our works on meta Bayesian optimization and constrained Bayesian optimization have been accepted for poster presentation at ICLR 2024.
- [Oct 17, 2023 @Phoenix, Arizona] I presented Optimizing Value-at-Risk and Conditional Value-at-Risk of Black-Box Functions in the Bayesian Optimization session at the 2023 INFORMS Annual Meeting.
Projects
[Bayesian Optimization (BO)]
BO of Risk Measures.
- [NeurIPS'21] Optimizing Conditional Value-At-Risk of Black-Box Functions. Quoc Phong Nguyen*, Zhongxiang Dai, Bryan Kian Hsiang Low & Patrick Jaillet. In Advances in Neural Information Processing Systems 34: 35th Annual Conference on Neural Information Processing Systems (NeurIPS'21), Dec 6-14, 2021. [code]
- [ICML'21] Value-at-Risk Optimization with Gaussian Processes. Quoc Phong Nguyen*, Zhongxiang Dai, Bryan Kian Hsiang Low & Patrick Jaillet. In Proceedings of the 38th International Conference on Machine Learning (ICML'21), Jun 18-24, 2021. [code]
BO with Top-k Ranking Inputs.
- [AAAI'21] Top-k Ranking Bayesian Optimization. Quoc Phong Nguyen*, Sebastian Tay, Bryan Kian Hsiang Low & Patrick Jaillet. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI'21), Feb 2-9, 2021. [code]
Information-Theoretic BO.
- [arXiv'22] Rectified Max-Value Entropy Search for Bayesian Optimization. Quoc Phong Nguyen*, Bryan Kian Hsiang Low & Patrick Jaillet. arXiv preprint arXiv:2202.13597.
- [AAAI'21] An Information-Theoretic Framework for Unifying Active Learning Problems. Quoc Phong Nguyen*, Bryan Kian Hsiang Low & Patrick Jaillet. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI'21), Feb 2-9, 2021. [code]
- [UAI'21] Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization. Quoc Phong Nguyen*, Zhaoxuan Wu*, Bryan Kian Hsiang Low & Patrick Jaillet. In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI'21), Jul 27-30, 2021.
Finding Nash Equilibrium with BO.
- [AISTATS'23] No-Regret Sample-Efficient Bayesian Optimization for Finding Nash Equilibria with Unknown Utilities. Sebastian Tay*, Quoc Phong Nguyen, Chuan-Sheng Foo & Bryan Kian Hsiang Low. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS'23), Apr 25-Apr 27, 2023. [code]
Exploring Reward Functions with BO.
- [NeurIPS'20] Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization. Sreejith Balakrishnan*, Quoc Phong Nguyen, Bryan Kian Hsiang Low & Harold Soh. In Advances in Neural Information Processing Systems 33: 34th Annual Conference on Neural Information Processing Systems (NeurIPS'20), Dec 6-12, 2020.
[Active Learning]
Active Level Set Estimation.
- [AAAI'21] An Information-Theoretic Framework for Unifying Active Learning Problems. Quoc Phong Nguyen*, Bryan Kian Hsiang Low & Patrick Jaillet. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI'21), Feb 2-9, 2021. [code]
Active Inverse Reinforcement Learning.
- [Workshop at NeurIPS'17] Active Learning for Inverse Reinforcement Learning with Gaussian Processes. Quoc Phong Nguyen*, Bryan Kian Hsiang Low & Patrick Jaillet. In Aligned AI Workshop at NeurIPS'17, Dec 4-9, 2017.
[Collaborative Machine Learning]
Trade-off between Payoff and Model Rewards via Shapley Value.
- [NeurIPS'22] Trade-off between Payoff and Model Rewards in Fair Collaborative Machine Learning. Quoc Phong Nguyen*, Bryan Kian Hsiang Low & Patrick Jaillet. In Advances in Neural Information Processing Systems: 36th Annual Conference on Neural Information Processing Systems (NeurIPS'22), Nov 28-Dec 9, 2022. [code]
[Machine Unlearning]
Variational Bayesian Unlearning.
- [NeurIPS'20] Variational Bayesian Unlearning. Quoc Phong Nguyen*, Bryan Kian Hsiang Low & Patrick Jaillet. In Advances in Neural Information Processing Systems 33: 34th Annual Conference on Neural Information Processing Systems (NeurIPS'20), Dec 6-12, 2020.
[Meta-Learning]
Meta-Learning with Gaussian Processes.
- [UAI'21] Learning to Learn with Gaussian Processes. Quoc Phong Nguyen, Bryan Kian Hsiang Low & Patrick Jaillet. In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI'21), Jul 27-30, 2021. [code]