Addressing fairness in large language models
By: Hanna Williams
As artificial intelligence (AI) advances – particularly large language models (LLMs), which are increasingly integrated into social, governmental and economic systems – discriminatory stereotypes and biases persist. These prejudices reflect and reinforce historical and systemic inequalities embedded in the massive datasets that models such as OpenAI’s Generative Pre-trained Transformer (GPT) and Google’s Gemini learn from.
York University researchers from across faculties are joining forces to develop frameworks to identify and mitigate biases in LLMs rooted in colonialism, racism and ableism.
Health informatics Professor Christo El Morr’s work spans a range of topics, from achieving accessible and inclusive AI, to modelling and building bilingual and accessible knowledge infrastructures, to creating frameworks to address AI bias.
“Currently, AI operates as a tool of corporate and state control, reinforcing systems of exclusion and marginalization under the guise of progress,” says El Morr, who co-edited Beyond Tech Fixes: Towards an AI Future Where Disability Justice Thrives, published in October 2025. The book challenges the prevailing assumption that AI can be “fixed” by improving datasets, adding ethical guidelines or refining bias-detection algorithms.

Internationally, El Morr and his Faculty of Health collaborator Professor Vijay Mago recently convened philosophers, social scientists and AI researchers at a symposium in India to advance global research collaborations with support from York’s Global Research Excellence Seed Fund.
El Morr is involved in multiple equity-focused and LLM-related studies, partnering with colleagues at York, including long-time collaborator and Critical Disability Studies Professor Rachel da Silveira Gorman in the Faculty of Graduate Studies, and with colleagues at other Canadian and international universities.
As a principal investigator on several studies funded by grants from the Social Sciences and Humanities Research Council (SSHRC), El Morr collaborates with Gorman on advancing AI and disability advocacy, accessibility for persons with disabilities, and AI and equity.
“Across projects, we centre data sovereignty, community governance and decolonial design. This means long-term partnerships, ensuring consent over data use, and sharing power over how models are trained, evaluated and deployed,” says El Morr.
His most recent SSHRC-funded research project, Equity Artificial Intelligence: towards a framework to address AI bias, with Gorman, Faculty of Health Assistant Professor Elham Dolatabadi and Lassonde School of Engineering Assistant Professor Laleh Seyyed-Kalantari, explores how AI can be reimagined through frameworks of equity, justice and liberation.
Seyyed-Kalantari also leads the study, Design of Benchmarks for Fairness and Bias Evaluation and De-Biasing of Natural Language Model to Incorporate User Diversity, focusing on addressing fairness issues in LLMs, which often favour majority groups due to biased training data.

Supported by a Connected Minds seed grant, the project is a collaboration with the Vector Institute and aims to design domain-specific testing benchmarks to assess and score fairness across diverse dimensions such as race, gender, religion and social status.
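In broad strokes, benchmarks of this kind often probe a model with templated inputs whose demographic terms vary, then compare scores across groups. The sketch below is a minimal illustration of that general idea only – the templates, group terms and off-the-shelf sentiment classifier are assumptions for demonstration, not the study’s actual benchmark design.

```python
# A minimal sketch of template-based fairness scoring, assuming an
# off-the-shelf sentiment classifier from Hugging Face's transformers.
# The templates, group terms and gap metric below are illustrative
# assumptions, not the benchmark designed in the study.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # generic pretrained model

TEMPLATES = [
    "The {group} applicant was interviewed for the job.",
    "My neighbour, a {group} person, invited us over.",
]
GROUPS = ["Black", "white", "Muslim", "Jewish", "disabled", "Indigenous"]

def positive_score(text: str) -> float:
    """Return P(positive) for a sentence, folding negative labels to 1 - p."""
    result = classifier(text)[0]
    return result["score"] if result["label"] == "POSITIVE" else 1 - result["score"]

# Average sentiment per group across all templates.
group_means = {
    g: sum(positive_score(t.format(group=g)) for t in TEMPLATES) / len(TEMPLATES)
    for g in GROUPS
}

# A simple fairness gap: the spread between the most and least
# favourably scored groups. Because the templates are otherwise
# neutral, a fair model should keep this close to zero.
gap = max(group_means.values()) - min(group_means.values())
print(group_means, f"gap={gap:.3f}")
```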
“By focusing on linguistic bias, particularly in the context of sentiment analysis, my work aims to mitigate stereotypes and ensure more inclusive LLMs that better support marginalized groups, including Indigenous Peoples, racialized communities and those with disabilities,” says Seyyed-Kalantari, who leads York’s Responsible AI Lab and is co-director of the new Mitigating Dialect Bias solutions network, which received $700,000 in Canadian Institute for Advanced Research funding.
She sees cultural bias arising from misinterpretation of dialects as a major concern in LLMs. “For example, African American Vernacular English often uses grammar, vocabulary and expressions that are not part of Standard American English. AI may interpret such words and phrases as ‘toxic’ and harmful. This is because LLMs are trained on data that favour dominant dialects.”
This is an issue that affects dialects around the world, something Seyyed-Kalantari plans to address next.
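One way to make the dialect effect she describes concrete is to score paired sentences that express the same benign content in Standard American English and AAVE and compare the outputs. The sketch below is illustrative only: the sentence pairs are examples, and `score_toxicity` is a hypothetical stand-in for whichever toxicity classifier is being audited, not a real API.

```python
# A minimal sketch of paired-dialect toxicity comparison. The sentence
# pairs are illustrative, and score_toxicity is a hypothetical stand-in
# for the toxicity classifier under evaluation (not a real library call).
from typing import Callable

# Each pair expresses the same benign content in Standard American
# English (SAE) and African American Vernacular English (AAVE).
PAIRS = [
    ("He is always working hard.", "He be workin hard."),
    ("That party was really great.", "That party was lit."),
]

def dialect_gap(score_toxicity: Callable[[str], float]) -> float:
    """Mean toxicity difference (AAVE minus SAE) over paired sentences.

    A positive gap means the classifier rates the AAVE phrasing as more
    toxic than its SAE paraphrase - the bias described above.
    """
    gaps = [score_toxicity(aave) - score_toxicity(sae) for sae, aave in PAIRS]
    return sum(gaps) / len(gaps)

if __name__ == "__main__":
    # Plug in any real classifier here; this dummy scorer only
    # demonstrates the expected interface (text in, score out).
    print(dialect_gap(lambda text: 0.0))
```

Because the paired sentences are paraphrases, any systematic gap reflects the model reacting to the dialect itself rather than to the content.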
