At York University, Assistant Professor Laleh Seyyed-Kalantari and her Responsible AI Lab have achieved a remarkable milestone: 100 per cent of her trainees have secured competitive awards over the last two years.
This rare achievement reflects the lab's structured, collaborative approach to mentoring graduate students.
“Achieving a 100 per cent success rate in competitive research funding is an extraordinary accomplishment,” says Amir Asif, vice-president research and innovation. “It reflects not just exceptional talent, but the University’s ambition to lead in shaping the future of responsible AI. At York, we strive to empower students to break new ground, drive meaningful societal impact and set a standard of excellence that inspires the next generation of innovators in emerging fields like responsible AI.”
With a focus on fairness and safety, the Responsible AI Lab develops tools and methods to make AI more accessible for everyone, with the goal of building technology that is ethically grounded and designed to serve all communities – particularly those that have been underserved.

The lab aims to shape a future in which AI systems are more equitable and easier to understand – particularly in high-impact areas like medical imaging, drug discovery and large language models, says Seyyed-Kalantari, a faculty member at York’s Lassonde School of Engineering.
As part of this pursuit, grad students apply for competitive research awards to advance the lab’s mission to create technology that better serves diverse communities.
The 12 successful trainees – postdoctoral fellows, PhD candidates, master’s students and one undergraduate researcher – have earned over $543,500 in competitive awards, with funding from prestigious programs such as Connected Minds, VISTA, the Vector Institute, the Natural Sciences and Engineering Research Council of Canada, Erasmus+ and more.
Seyyed-Kalantari notes that these awards are independent from her research grants.
“These awards give students the financial security to focus more fully on their research and academic growth,” she says. “This recognition boosts their confidence, motivates them to pursue further opportunities and opens new doors for future success.”
Their projects explore a range of equity-focused AI challenges – from text-guided enhancement of medical imaging to detecting dialect bias in language models – and aim to build culturally aware and socially responsible systems. The research is rooted in technology but carries social impacts, too.
“We’re building AI that reflects the diversity of the communities it serves,” says Seyyed-Kalantari. “That means embedding cultural and ethical awareness into every layer of the technology.”
The lab’s mentorship model is central to this success. From the moment students join, they’re supported in identifying award opportunities and preparing strong applications. In addition to guidance from Seyyed-Kalantari, senior students help mentor new students, creating a culture of collaboration and resilience.
Opportunities are ongoing, she notes, adding that there are current openings for students and postdoctoral researchers who are passionate about responsible AI.
“I hope that prospective students and future collaborators see that our lab is committed to meaningful research, strong mentorship and an inclusive, supportive environment,” says Seyyed-Kalantari. “We value growth, collaboration and making a real impact – both in our field and in society. Anyone joining us will find opportunities to learn, contribute and be part of a team dedicated to advancing responsible and equitable AI.”
