
News

[New paper] Position: Contextual Integrity is Inadequately Applied to Language Models

Professor Shvartzshnaider presented a new paper, co-authored with Vasisht Duddu, at the recent ICML 2025 conference, titled "Position: Contextual Integrity is Inadequately Applied to Language Models": "Machine learning community is discovering Contextual Integrity (CI) as a useful framework to assess the privacy implications of large language models (LLMs). This is an encouraging development. The CI […]

Talk at HotPETS 2025

Professor Shvartzshnaider presented at the recent HotPETS 2025: "In LLMs We Trust? A Contextual Integrity Perspective" by Yan Shvartzshnaider (York University) and Vasisht Duddu (University of Waterloo). The talk broadly discusses recent results in LLM-CI: Assessing Contextual Integrity Norms in Language Models: "Large language models (LLMs), while memorizing parts of their training data scraped from the Internet, may […]

[New paper] Measuring NIST Authentication Standards Compliance by Higher Education Institutions

Prof Shvartzshnaider co-authored a paper, "Measuring NIST Authentication Standards Compliance by Higher Education Institutions," with Noah Apthorpe and Boen Beavers (Colgate University) and Brett Frischmann (Villanova University), which will appear in the proceedings of the Twenty-First Symposium on Usable Privacy and Security: "In this paper, we examine the authentication policies of a diverse set of 135 […]

[🏆 honorable mention]: Trust and Friction: Negotiating How Information Flows Through Decentralized Social Media

A recent paper co-authored with Sohyeon Hwang and Priyanka Nanayakkara was conditionally accepted to the upcoming 28th ACM Conference on Computer-Supported Cooperative Work and Social Computing. [Update: The paper received an honorable mention for best paper] "Decentralized social media protocols enable users in independent, user-hosted servers (i.e., instances) to interact with each other while they self-govern. This […]

Paper presented at the 6th AAAI Workshop on Privacy-Preserving Artificial Intelligence

Prof Shvartzshnaider presented work at the 6th AAAI Workshop on Privacy-Preserving Artificial Intelligence that was among the few papers selected for oral presentation at the workshop: ""LLM on the wall, who *now*, is the appropriate one of all?": Contextual Integrity Evaluation of LLMs" by Yan Shvartzshnaider (York University) and Vasisht Duddu (University of Waterloo).

The Conversation article: EdTech Privacy

Prof Shvartzshnaider co-authored an article on EdTech privacy in The Conversation: "We need approaches that consider contextual norms and respect social values, since privacy is about respecting the integrity of a social context. The values in the educational context have evolved over generations to ensure information flows in appropriate ways. Embracing this perspective would help to […]

Blog post: Privacy Inserts

A post for the Balkinization Symposium on Ignacio Cofone, The Privacy Fallacy: Harm and Power in the Information Economy, Cambridge University Press (2023).

The 6th Symposium on Applications of Contextual Integrity

The 6th Annual PrivaCI Symposium took place at Rutgers University in New Jersey, USA on September 27-28, 2024. Prof Shvartzshnaider co-chaired the event with SC&I Chair and Associate Professor of Library and Information Science Rebecca Reynolds; SC&I Visiting Professor Louise Barkhuus; and Ruobin Gong, Assistant Professor of Statistics, Rutgers University. The symposium brought together over 70 international scholars to […]

Talk at CrySP Speaker Series on Privacy

Professor Shvartzshnaider visited the Cryptography, Security, and Privacy (CrySP) group at the University of Waterloo and presented a talk: "PrivaCI — Privacy through Contextual Integrity".