{"id":654,"date":"2026-01-28T09:26:29","date_gmt":"2026-01-28T14:26:29","guid":{"rendered":"https:\/\/www.yorku.ca\/ascend\/?post_type=article&#038;p=654"},"modified":"2026-01-30T11:43:45","modified_gmt":"2026-01-30T16:43:45","slug":"mitigating-bias","status":"publish","type":"article","link":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/","title":{"rendered":"Mitigating bias"},"content":{"rendered":"\n<p>As artificial intelligence (AI) advances \u2013 particularly large language models (LLMs) which are increasingly integrated into social, governmental and economic systems \u2013 discriminatory stereotypes and biases persist. These prejudices reflect and reinforce historical and systemic inequalities embedded in massive datasets that models like OpenAI\u2019s Generative Pre-trained Transformer (GPT) and Google\u2019s Gemini learn from.<\/p>\n\n\n\n<p>York University researchers from across faculties are joining forces to develop frameworks to identify and mitigate biases in LLMs rooted in colonialism, racism and ableism.<\/p>\n\n\n\n<p>Health informatics Professor <a href=\"https:\/\/www.yorku.ca\/dighr\/person\/christo-el-morr\/\">Christo El Morr<\/a>\u2019s work spans a range of topics, from achieving accessible and inclusive AI to modelling and building bilingual and accessible knowledge infrastructures, and creating frameworks to address AI bias.<\/p>\n\n\n\n<p>\u201cCurrently, AI operates as a tool of corporate and state control, reinforcing systems of exclusion and marginalization under the guise of progress,\u201d says El Morr, who co-edited <em>Beyond Tech Fixes: Towards an AI Future Where Disability Justice Thrives<\/em>, published in October 2025. The book challenges the prevailing assumption that AI can be \u201cfixed\u201d by improving datasets, adding ethical guidelines or refining bias-detection algorithms.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"969\" src=\"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-scaled.jpg\" alt=\"\" class=\"wp-image-756\" srcset=\"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-scaled.jpg 2560w, https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-400x151.jpg 400w, https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-1024x388.jpg 1024w, https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-1536x581.jpg 1536w, https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-2048x775.jpg 2048w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/><figcaption class=\"wp-element-caption\">Christo El Morr<\/figcaption><\/figure>\n\n\n\n<p>Internationally, El Morr and his Faculty of Health collaborator Professor Vijay Mago recently convened philosophers, social scientists and AI researchers at a symposium in India to advance global research collaborations with support from <a href=\"https:\/\/www.yorku.ca\/global-engagement\/global-research-excellence-seed-fund\/\">York\u2019s Global Research Excellence Seed Fund<\/a>.<\/p>\n\n\n\n<p>El Morr is involved in multiple equity-focused and LLM-related studies, partnering with colleagues at York, including long-time collaborator and <a href=\"https:\/\/www.yorku.ca\/gradstudies\/cds\/\">Critical Disability Studies<\/a> Professor Rachel da Silveira Gorman in the Faculty of Graduate Studies, and other Canadian and international universities.<\/p>\n\n\n\n<p>As a principal investigator on several studies funded by Social Sciences and Humanities Research Council (SSHRC) grants, El Morr collaborates with Gorman on advancing AI and disability advocacy, accessibility for persons with disabilities and AI and equity.<\/p>\n\n\n\n<p>\u201cAcross projects, we centre data sovereignty, community governance and decolonial design. This means long-term partnerships, ensuring consent over data use, and sharing power over how models are trained, evaluated and deployed,\u201d says El Morr.<\/p>\n\n\n\n<p>His most recent SSRHC-funded research project, Equity Artificial Intelligence: towards a framework to address AI bias, with Gorman, Faculty of Health Assistant Professor Elham Dolatabadi and Lassonde School of Engineering Assistant Professor <a href=\"https:\/\/lassonde.yorku.ca\/users\/lskalantari\">Laleh Seyyed-Kalantari<\/a>, explores how AI can be reimagined through frameworks of equity, justice and&nbsp;liberation.<\/p>\n\n\n\n<p>Seyyed-Kalantari also leads the study, Design of Benchmarks for Fairness and Bias Evaluation and De-Biasing of Natural Language Model to Incorporate User Diversity, focusing on addressing fairness issues in LLMs, which often favour majority groups due to biased training data.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2045\" height=\"725\" src=\"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Laleh.jpg\" alt=\"\" class=\"wp-image-755\" srcset=\"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Laleh.jpg 2045w, https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Laleh-400x142.jpg 400w, https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Laleh-1024x363.jpg 1024w, https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Laleh-1536x545.jpg 1536w\" sizes=\"auto, (max-width: 2045px) 100vw, 2045px\" \/><figcaption class=\"wp-element-caption\">Laleh Seyyed-Kalantari<\/figcaption><\/figure>\n\n\n\n<p>A recipient of a <a href=\"https:\/\/www.yorku.ca\/research\/connected-minds\/from-ideas-to-impact-the-latest-connected-minds-seed-grant-innovators\/\">Connected Minds seed grant<\/a>, the project is in collaboration with the Vector Institute and aims to design domain-specific testing benchmarks to assess and score fairness across diverse dimensions such as race, gender, religion and social status.<\/p>\n\n\n\n<p>\u201cBy focusing on linguistic bias, particularly in the context of sentiment analysis, my work aims to mitigate stereotypes and ensure more inclusive LLMs that better support marginalized groups, including Indigenous Peoples, racialized communities and those with disabilities,\u201d says Seyyed-Kalantari, who leads York\u2019s Responsible AI Lab and is co-director of the new Mitigating Dialect Bias solutions network, which received $700,000 in Canadian Institute for Advanced Research funding.<\/p>\n\n\n\n<p>She sees cultural bias arising from misinterpretation of dialects as a major concern in LLMs. \u201cFor example, African American Vernacular English often uses grammar, vocabulary and expressions that are not part of Standard American English. AI may interpret such words and phrases as \u2018toxic\u2019 and harmful. This is because LLMs are trained on data that favour dominant dialects.\u201d<\/p>\n\n\n\n<p>This is an issue that affects dialects around the world, something Seyyed-Kalantari plans to address next.<\/p>\n","protected":false},"template":"","meta":{"_kad_blocks_custom_css":"","_kad_blocks_head_custom_js":"","_kad_blocks_body_custom_js":"","_kad_blocks_footer_custom_js":""},"categories":[61],"tags":[62],"class_list":["post-654","article","type-article","status-publish","hentry","category-inclusive-research","tag-artificial-intelligence"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Mitigating bias - Ascend Magazine<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Mitigating bias - Ascend Magazine\" \/>\n<meta property=\"og:description\" content=\"As artificial intelligence (AI) advances \u2013 particularly large language models (LLMs) which are increasingly integrated into social, governmental and economic systems \u2013 discriminatory stereotypes and biases persist. These prejudices reflect and reinforce historical and systemic inequalities embedded in massive datasets that models like OpenAI\u2019s Generative Pre-trained Transformer (GPT) and Google\u2019s Gemini learn from. York University [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/\" \/>\n<meta property=\"og:site_name\" content=\"Ascend Magazine\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-30T16:43:45+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"969\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/\",\"url\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/\",\"name\":\"Mitigating bias - Ascend Magazine\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/wp-content\\\/uploads\\\/sites\\\/689\\\/2025\\\/12\\\/Mitigating-Bias-Christo-scaled.jpg\",\"datePublished\":\"2026-01-28T14:26:29+00:00\",\"dateModified\":\"2026-01-30T16:43:45+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/#breadcrumb\"},\"inLanguage\":\"en-CA\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-CA\",\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/wp-content\\\/uploads\\\/sites\\\/689\\\/2025\\\/12\\\/Mitigating-Bias-Christo-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/wp-content\\\/uploads\\\/sites\\\/689\\\/2025\\\/12\\\/Mitigating-Bias-Christo-scaled.jpg\",\"width\":2560,\"height\":969},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/mitigating-bias\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Articles\",\"item\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/article\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Mitigating bias\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/#website\",\"url\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/\",\"name\":\"Ascend Magazine\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.yorku.ca\\\/ascend\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-CA\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Mitigating bias - Ascend Magazine","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/","og_locale":"en_US","og_type":"article","og_title":"Mitigating bias - Ascend Magazine","og_description":"As artificial intelligence (AI) advances \u2013 particularly large language models (LLMs) which are increasingly integrated into social, governmental and economic systems \u2013 discriminatory stereotypes and biases persist. These prejudices reflect and reinforce historical and systemic inequalities embedded in massive datasets that models like OpenAI\u2019s Generative Pre-trained Transformer (GPT) and Google\u2019s Gemini learn from. York University [&hellip;]","og_url":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/","og_site_name":"Ascend Magazine","article_modified_time":"2026-01-30T16:43:45+00:00","og_image":[{"width":2560,"height":969,"url":"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-scaled.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/","url":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/","name":"Mitigating bias - Ascend Magazine","isPartOf":{"@id":"https:\/\/www.yorku.ca\/ascend\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/#primaryimage"},"image":{"@id":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/#primaryimage"},"thumbnailUrl":"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-scaled.jpg","datePublished":"2026-01-28T14:26:29+00:00","dateModified":"2026-01-30T16:43:45+00:00","breadcrumb":{"@id":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/#breadcrumb"},"inLanguage":"en-CA","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/"]}]},{"@type":"ImageObject","inLanguage":"en-CA","@id":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/#primaryimage","url":"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-scaled.jpg","contentUrl":"https:\/\/www.yorku.ca\/ascend\/wp-content\/uploads\/sites\/689\/2025\/12\/Mitigating-Bias-Christo-scaled.jpg","width":2560,"height":969},{"@type":"BreadcrumbList","@id":"https:\/\/www.yorku.ca\/ascend\/article\/mitigating-bias\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.yorku.ca\/ascend\/"},{"@type":"ListItem","position":2,"name":"Articles","item":"https:\/\/www.yorku.ca\/ascend\/article\/"},{"@type":"ListItem","position":3,"name":"Mitigating bias"}]},{"@type":"WebSite","@id":"https:\/\/www.yorku.ca\/ascend\/#website","url":"https:\/\/www.yorku.ca\/ascend\/","name":"Ascend Magazine","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.yorku.ca\/ascend\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-CA"}]}},"taxonomy_info":{"category":[{"value":61,"label":"Inclusive research"}],"post_tag":[{"value":62,"label":"Artificial intelligence"}]},"featured_image_src_large":[],"author_info":[],"comment_info":"","_links":{"self":[{"href":"https:\/\/www.yorku.ca\/ascend\/wp-json\/wp\/v2\/article\/654","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.yorku.ca\/ascend\/wp-json\/wp\/v2\/article"}],"about":[{"href":"https:\/\/www.yorku.ca\/ascend\/wp-json\/wp\/v2\/types\/article"}],"wp:attachment":[{"href":"https:\/\/www.yorku.ca\/ascend\/wp-json\/wp\/v2\/media?parent=654"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.yorku.ca\/ascend\/wp-json\/wp\/v2\/categories?post=654"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.yorku.ca\/ascend\/wp-json\/wp\/v2\/tags?post=654"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}