NVIDIA Awards up to $60,000 Research Fellowships to PhD Students

For more than two decades, the NVIDIA Graduate Fellowship Program has supported graduate students doing outstanding work relevant to NVIDIA technologies. Today, the program announced the latest awards of up to $60,000 each to 10 Ph.D. students involved in research that spans all areas of computing innovation.

Selected from a highly competitive applicant pool, the awardees will participate in a summer internship preceding the fellowship year. Their work puts them at the forefront of accelerated computing — tackling projects in autonomous systems, computer architecture, computer graphics, deep learning, programming systems, robotics and security.

The NVIDIA Graduate Fellowship Program is open to applicants worldwide.

The 2025-2026 fellowship recipients are:

Anish Saxena, Georgia Institute of Technology — Rethinking data movement across the stack — spanning large language model architectures, system software and memory systems — to improve the efficiency of LLM training and inference.
Jiawei Yang, University of Southern California — Creating scalable, generalizable foundation models for autonomous systems through self-supervised learning, leveraging neural reconstruction to capture detailed environmental geometry and dynamic scene behaviors, and enhancing adaptability in robotics, digital twin technologies and autonomous driving.
Jiayi (Eris) Zhang, Stanford University — Developing intelligent algorithms, models and tools for enhancing user creativity and productivity in design, animation and simulation.
Ruisi Cai, University of Texas at Austin — Working on efficient training and inference for large foundation models as well as AI security and privacy.
Seul Lee, Korea Advanced Institute of Science and Technology — Developing generative models for molecules and exploration strategies in chemical space for drug discovery applications.
Sreyan Ghosh, University of Maryland, College Park — Advancing audio processing and reasoning by designing resource-efficient models and training techniques, improving audio representation learning and enhancing audio perception for AI systems.
Tairan He, Carnegie Mellon University — Researching the development of humanoid robots, with a focus on advancing whole-body loco-manipulation through large-scale simulation-to-real learning.
Xiaogeng Liu, University of Wisconsin–Madison — Developing robust and trustworthy AI systems, with an emphasis on evaluating and enhancing machine learning models to ensure consistent performance and resilience against diverse attacks and unforeseen inputs.
Yunze Man, University of Illinois Urbana-Champaign — Developing vision-centric reasoning models for multimodal and embodied AI agents, with a focus on object-centric perception systems in dynamic scenes, vision foundation models for open-world scene understanding and generation, and large multimodal models for embodied reasoning and robotics planning.
Zhiqiang Xie, Stanford University — Building infrastructures to enable more efficient, scalable and complex compound AI systems while enhancing the observability and reliability of such systems.

We also acknowledge the 2025-2026 fellowship finalists:

Bo Zhao, University of California, San Diego
Chenning Li, Massachusetts Institute of Technology
Dacheng Li, University of California, Berkeley
Jiankai Sun, Stanford University
Wenlong Huang, Stanford University
