I am a Ph.D. candidate in the Computer Science department at McGill University and Mila, supervised by Prof. Doina Precup.
My research focuses on Continual Reinforcement Learning, where I develop novel algorithms inspired by cognitive science and neuroscience to help AI agents adapt to non-stationary environments. Grounded in both theoretical rigour and practical utility, I have authored first-author papers at NeurIPS and ICML. Beyond my research, I have co-instructed graduate-level RL courses at McGill (COMP 579) and Polytechnique Montreal (INF8250AE), served as a lead organizer for the Mila RL Workshop and the RL Sofa meeting series, and currently mentor several M.Sc. and Ph.D. students in the lab. I also like communicating my research to a broader audience through YouTube videos.
I obtained a master's degree in Computer Science from McGill University in 2019. Before that, I worked as a Data Scientist at Fractal Analytics for roughly two years. I went to P.E.S. Institute of Technology, Bengaluru, India, for bachelor's studies in telecommunication engineering. You can find more on my CV.
Besides research, I like to read books, play chess, meditate, and maintain an active lifestyle.
Email: nishanth127127 [AT] gmail [DOT] com.
Highlights
- May 2020 - April 2026: Co-organized the weekly reinforcement learning meetings at Mila.
- June 2023 - December 2023: Co-organized the New in ML workshop at NeurIPS 2023.
- July 2023: Passed the Ph.D. proposal exam.
- February 2023 - May 2023: Served as a reviewer for the CoLLAs 2023 conference.
- November 2022 - June 2023: Elected as EDI commissioner at McGill CS Graduate Society.
- October 2022: Served as a reviewer for the All Things Attention workshop at NeurIPS 2022.
- February 2022 - May 2022: Served as a reviewer for the CoLLAs 2022 conference.
- January 2022 - April 2022: Teaching assistant for Introduction to Reinforcement Learning.
- September 2021 - December 2021: Teaching assistant for Applied Machine Learning.
- January 2021 - April 2021: Teaching assistant for Computers and Society.
- September 2020: Passed the Ph.D. comprehensive exam.
- June 2020 - July 2020: Teaching assistant for AI4Good summer school.
- January 2020 - April 2020: Teaching assistant for Introduction to Reinforcement Learning.
- December 2019: My master's thesis was approved by the graduate program director.
- September 2019: Started Ph.D. in Computer Science with Prof. Doina Precup.
- June 2019 - July 2019: Teaching assistant for AI4Good summer school.
- January 2018 - April 2018: Teaching assistant for Algorithms and Data Structures.
- September 2017 - December 2017: Teaching assistant for Operating Systems.
- September 2017: Started M.Sc. in Computer Science with Prof. Doina Precup.