ICYMI: Highlights from Part 2 of IP Osgoode's Bracing for Impact AI Conference Series

On March 21, 2019, we had the pleasure of attending Part 2 of IP Osgoode's Bracing for Impact conference series, held at the Toronto Reference Library. This year's theme was data governance, with a focus on novel legal issues in two key sectors: health/science and smart cities. Professor D'Agostino's opening remarks touched on the legal and ethical dimensions of data governance, given the surge of activity in the AI space over the last year. The day was broken down into five panel discussions, with a luncheon keynote by Professor Kang Lee of the University of Toronto.

Why is Data so Important to the Development of AI?

The first discussion focused on how the quantity and quality of data determine AI capability. Jonathan Penney (Assistant Professor of Law; Director, Law & Technology Institute, Schulich School of Law, Dalhousie University) identified three contexts in which data matters more than the AI systems themselves: advancing AI, addressing bias and discriminatory practices in existing systems, and ensuring AI accountability and transparency so that decision making can be understood. Notably, Alexander Wissner-Gross examined the last 30 years of AI development and found that recent advances were largely due to the availability of large data sets. In 2011, IBM's Jeopardy!-winning Watson drew on data from 8.6 million Wikipedia articles, and in 2014, Google's GoogLeNet object-classification system was trained on 1.5 million ImageNet images. Carole Piovesan (Partner & Co-Founder, INQ Data Law) echoed the importance of data to AI systems and touched upon two growing debates regarding data exchange and privacy. The crux of the privacy debate is the trade-off between privacy as a quasi-constitutional value and the importance of innovation, with its need for data to produce public goods. She called upon the audience to think about what a fair exchange in today's data marketplace means to them. Finally, the shifting policy landscape led by the EU's adoption of the General Data Protection Regulation (GDPR) was discussed. In Canada, current regulations still focus mainly on consent. Both speakers acknowledged that we should be moving towards establishing standards, as very few people actually enforce their rights.

Intellectual Property at a Crossroad

Three key ideas came out of the second panel discussion: whether AI systems and programs are eligible for copyright or patent protection under current statutes, the international implications and developments, and the importance of AI in collaboration. Dave Green (Assistant General Counsel, IP Law & Policy, Microsoft) shared Microsoft's perspective on AI's role in enabling machine intelligence to simulate or augment elements of human thinking. Two copyright issues that come into play with AI are defining "Works of Authorship" and identifying whether specific types of "copying" are enough to create liability, both of which have been complicated by the use of computer programs and factual materials. Internationally, the requirement that humans be the authors of creative works appears in the laws of the US, Hong Kong, India, New Zealand, the UK, and other countries. As technology and AI advance, do we want to continue to insist that authors of creative works be human? If we don't, what does that say about downstream issues such as intent, infringement, and liability? With regard to international approaches to data mining: should there be a fair dealing exception, particularly when addressing the issue of bias? WIPO recently established a new division that focuses purely on AI, which will be especially important given the spike in AI patenting activity over the past several years. Shlomit Yanisky-Ravid (Faculty Member and Lecturer, ONO Academic Law School and Fordham Law School) challenged the audience with a Turing Test, demonstrating that it is often difficult to distinguish works created by AI from those created by a human being.

Catherine Lacavera (Director of IP, Litigation and Employment, Google Inc.) shared her belief that the existing patent and copyright systems are robust enough to deal with the changes we are seeing in AI, though the regulatory and social-impact fronts are changing at a fast pace. In this regard, it is important to balance social benefit against the potential for abuse, to build diverse data sets, and to incorporate privacy and affordability into our design principles going forward. Maya Medeiros (Partner, Norton Rose Fulbright Canada LLP) stressed the importance of using IP rights to facilitate multi-party collaborations that protect AI innovation and incentivize collaborative behaviour. She also raised the issues of fair dealing in data mining and the use of different types of IP rights to protect different aspects of the works being generated.

Resolving Data Barriers

The third panel focused on the tools required to access data and facilitate the development of AI.

Momin Malik (Data Science Postdoctoral Fellow, Berkman Klein Center for Internet & Society at Harvard University) discussed how AI is beneficial in certain contexts, such as predicting behaviour. However, the data that is valuable for AI is often limited by access to copyright-protected materials. For example, in developing its information retrieval system, Google faced many copyright issues. It successfully navigated those challenges by entering into agreements with publishers to create Google Books, ultimately making data more accessible to the public.

Paul Gagnon (Legal Counsel, Element AI) contemplated whether sui generis legislation is the way forward. Europe, for example, relied on the existing concept of fair dealing as an exemption for data mining. However, this exemption is limited: it applies only to researchers, not commercial institutions. Having open data and having accessible data are two distinct concepts. Accessibility does not mean you can use the data; uses may be restricted to specific purposes, such as "for academic use only".

Dave Green concluded the panel discussion by contemplating whether copyright could "make nice with AI". AI does not copy for the purpose of replicating a work or infringing on the underlying value of its expression; rather, it can unlock insights different from those of "Works of Authorship". This is the difference between using a photo as a work, for aesthetic purposes or factual reporting, and using a photo as data. Green looked at examples of how different jurisdictions are making copyright safe for AI and machine learning, such as the fair use exception in Israel. Democratizing the right to learn and research is essential to this field, and it remains to be seen how other jurisdictions may embrace this fact.

Luncheon Keynote: Affective Artificial Intelligence & Law: Opportunities, Applications, and Challenges

Kang Lee (Professor and Tier 1 Canada Research Chair in developmental neuroscience, University of Toronto) amazed the audience with a showcase of his connected-health venture, NuraLogix. Dr. Lee's interdisciplinary invention brings together research from neuroscience, psychology, physiology, and deep learning to produce AI that can detect, measure, and analyze human affect through physiological cues. The Anura™ mobile application turns smart devices into a personal health tool that individuals can use to manage stress and get updates on their personal health. It uses Transdermal Optical Imaging (TOI™), which extracts facial blood-flow information from ordinary video of the face. This signal is then processed by DeepAffex™, the AI engine that detects and measures different human emotions. Dr. Lee's work is significant as it demonstrates how AI can improve the health and science fields and give patients more control over their health care.
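To give a flavour of how physiological signals can be pulled from ordinary video, the sketch below shows a generic remote-photoplethysmography approach: average the green channel over a face region frame by frame, then find the dominant frequency of that signal to estimate pulse rate. This is a minimal illustration of the general technique, not NuraLogix's proprietary TOI™ or DeepAffex™ methods; all function names and the synthetic video are our own assumptions.

```python
import numpy as np

def green_channel_signal(frames, roi):
    """Average the green channel inside a face region for each frame.

    frames: array of shape (T, H, W, 3) holding RGB values
    roi: (top, bottom, left, right) bounding box of the face region
    Returns a 1-D array of length T (a crude blood-flow proxy).
    """
    top, bottom, left, right = roi
    patch = frames[:, top:bottom, left:right, 1]  # green channel only
    return patch.reshape(patch.shape[0], -1).mean(axis=1)

def dominant_frequency_bpm(signal, fps):
    """Estimate pulse rate (beats per minute) from the strongest
    frequency component of the signal, via an FFT."""
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)
    # keep only frequencies in a plausible human pulse range
    mask = (freqs >= 0.7) & (freqs <= 4.0)  # 42-240 bpm
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return peak * 60.0

# Synthetic demo: a 72-bpm "pulse" modulating the green channel
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # 1.2 Hz = 72 bpm
frames = np.full((len(t), 64, 64, 3), 128.0)
frames[..., 1] += pulse[:, None, None]      # embed pulse in green channel

sig = green_channel_signal(frames, roi=(16, 48, 16, 48))
print(round(dominant_frequency_bpm(sig, fps)))  # → 72
```

Real systems must also handle face detection, motion, and lighting changes; the point here is only that subtle colour variation in video carries a recoverable physiological signal.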

Big Data, Health & Science

The fourth panel discussion focused on the unique AI and data issues in the health and science sectors. James Elder (Professor, Lassonde School of Engineering; York Research Chair in Human and Computer Vision, York University) discussed converting raw data into 2D images and subsequently converting those images into 3D models. 3D modelling with real data has applications for road and pedestrian traffic. The technology may also address some privacy concerns, since his 3D virtualization technology turns the 2D images into avatars, which has the effect of anonymizing visual appearances. There are many opportunities for visual AI to help improve daily processes.

Victor Garcia (Managing Director & CEO, ABCLive Corporation) discussed how big data can transform the health sciences. Data helps to improve the way companies in this sector do business. Clinical, insurance claims, pharmaceutical, research and development, patient behaviour, and lifestyle data can all contribute a plethora of knowledge to the health sector. These can improve process efficiencies and make hospital resources available sooner to new patients. For example, Humber River Hospital used data analytics to improve their health care services and increase efficiency by 40%.

Ian Stedman (PhD Candidate, Osgoode Hall Law School; Fellow in AI Law & Ethics at SickKids' Centre for Computational Medicine) highlighted SickKids' move to integrate AI into its practice, with a task force examining how data governance and policies, infrastructure, AI solutions, and ethics interact before new AI tools are implemented. Stedman stressed that data source and quality matter because, in the health sector, asking the right questions is essential to reaching accurate conclusions and diagnoses. With clinical studies, data access is comparatively easy because there is a research plan setting out the research purpose, the target population, and the results the researcher hopes to observe. With the data AI relies on, however, researchers study the data to find patterns and unlock its potential value; it is therefore difficult to ask for secondary-use disclosure before the research is conducted, when the researcher may not yet know what they are looking for. The takeaway is that regardless of the industry, harmonization and collaboration are key. There is opportunity to bring together data from different sources to discover the potential of new clinical decision-making tools.

What Makes a Smart City?

In Toronto and internationally, data privacy issues have come to the forefront of public discussion due to the development of smart cities. With the proposed Sidewalk Toronto project, the collection, storage, and use of data has sparked a heated debate about data governance. The Mayor of Barrie, Jeff Lehman, discussed the Municipal Innovation Exchange (MIX) project, which calls upon start-ups and small organizations to develop new technologies that use data to address civic challenges. Instead of putting out a traditional municipal tender, the participating cities released a Request for Solutions and invited responses from the public, providing a cohesive opportunity for collaboration. On the issue of data localization in the Sidewalk Toronto debate, Mayor Lehman believes that consent is possible, but that the data must reside in Canada so that the national government can set the rules around the data being collected. Finally, Mayor Lehman advocated for the use of Privacy Impact Assessments to evaluate the impact of new technology on privacy.

Neetika Sathe (Vice President, Advanced Planning, Alectra Inc.) advocated for smart-city data policies to be worked on at every level of government, towards a national data strategy. Sathe also introduced the audience to some of Alectra's projects and the data collection challenges associated with each: AlectraDrive (an end-to-end integrated EV workplace-charging pilot project), the Advantage Power Pricing Program (which collects smart-meter data), and the Transactive Energy Platform (which uses a private blockchain network that limits access to data).

Natasha Tusikov (Assistant Professor, Department of Social Science, York University; The City Institute at York University) challenged the audience to think about who should own, control, and govern data related to smart cities. Prof. Tusikov discussed the issue of conflicting public and private authority, raising her concern that Waterfront Toronto is an expert not in IP but in land development. As an example of regulating smart-city governance, Barcelona developed a manifesto outlining the importance of technological sovereignty and maintaining digital rights.

To close the panel discussion, John Weigelt (National Technology Officer, Microsoft Canada Inc.) spoke about the importance of clarifying who the participants in developing a smart city are and what business model we want to create. If employed correctly, AI can help solve societal challenges, and the municipalities and companies that thoughtfully clarify their approach to AI first will prosper the most from its benefits.

The conference encouraged thought-provoking discussion about data governance and its implications for health and smart cities. We hope that the discussion about data collection and what we value as a society continues beyond this event. Thoughtful and inclusive discussion will allow us to collectively brace for impact as AI technology continues to advance.


Written by Lauren Chan and Summer Lewis. Lauren Chan is an IPilogue editor and a business student at the University of Guelph, and Summer Lewis is an IPilogue editor and a JD candidate at Osgoode Hall Law School.