York University researchers will help lead a national effort to make artificial intelligence (AI) safer and more inclusive.
The initiative, launched by global research organization CIFAR (Canadian Institute for Advanced Research), introduces two AI Safety Networks that will address fake AI-generated content in the justice system and linguistic inequality in AI tools.
Funded through CIFAR’s Canadian AI Safety Institute (CAISI) Research Program, each network will receive $700,000 over the next two years to design and implement open-source AI tools that detect synthetic evidence and make language models fairer for everyone.
Both solution networks – Safeguarding Courts from Synthetic AI Content and Mitigating Dialect Bias – will be co-led by York faculty.
Maura R. Grossman, an adjunct professor at Osgoode Hall Law School, will direct the Safeguarding Courts from Synthetic AI Content network alongside Ebrahim Bagheri from the University of Toronto. The team will develop a free, open-source framework to help courts detect and manage AI-generated content, such as fake videos or hallucinated legal documents produced by large language models (LLMs).
The team’s solution aims to support both legal professionals and self-represented litigants with user-friendly tools that flag questionable content.
“We need a tool that knows when it's not sure about its output,” says Grossman, adding that the stakes are high when a judicial decision is based on fake content.

Mitigating Dialect Bias will be co-directed by Laleh Seyyed-Kalantari, assistant professor at York’s Lassonde School of Engineering, alongside Brock University’s Blessing Ogbuokiri. The work will focus on Nigerian Pidgin English, a dialect spoken by over 140 million people that is often misinterpreted by LLMs as toxic or inappropriate, leading to censorship and discrimination.
Working with a citizen network in Nigeria, Seyyed-Kalantari’s team will build the first-ever bias and safety benchmarks for Pidgin English as part of an open-source audit and mitigation toolkit.
“I think what makes our solution unique is that it is locally rooted and culturally representative of citizens of African countries,” says Seyyed-Kalantari. “We want to ensure that the research that we are developing brings actual positive changes for people who are using these LLMs in Africa.”
The project could have a much broader impact by creating culturally representative AI systems and influencing policy to ensure equitable access to AI tools for marginalized communities – including immigrant and Indigenous populations in Canada.
The CAISI Research Program at CIFAR is part of a $50-million federal investment launched in November 2024 to address the evolving risks of AI. It supports interdisciplinary research to develop practical tools for responsible AI deployment across Canada and the Global South.
With files from CIFAR
