
Screens display the logos of OpenAI and ChatGPT in Toulouse, France, on Jan. 23. LIONEL BONAVENTURE/AFP/Getty Images

Kean Birch is director of the Institute for Technoscience and Society at York University.

A growing chorus of voices from across the political spectrum is raising concerns about artificial intelligence technologies, with many people calling for the regulation of AI, or at least a halt to further deployment while we think through how to regulate it. Those calls include an open letter published on Tuesday and signed by about 75 Canadian researchers and startup chief executives.

I agree wholeheartedly with these calls for regulation, and I've long thought it bizarre that we don't regulate AI companies that are literally experimenting on us, given that their systems are trained on our data, even though we heavily (and necessarily) regulate biopharmaceutical testing. I think we need to do far more in Canada to regulate what's coming down the AI pipeline, and we need to do so now.

It’s not just about the misinformation and loss of jobs that a lot of people fear. Absent regulation of AI, we risk further entrenching Big Tech’s dominance over the direction of our technologies.

Here are what I see as the most significant issues facing us in the development of AI technologies. None of them can be solved through individual choices or market signals; a co-ordinated regulatory approach is required.

First, it is deeply problematic that our personal, health and user data are critical inputs into the development of AI algorithms. I don’t want my personal information and user data to be deployed to develop new technologies I disagree with – and I’m pretty sure other people feel the same way.

But permissive terms and conditions agreements mean companies can largely do what they want with our data. Whatever future society AI could create, we are providing the building blocks for it through our data.

This society could easily end up as a dystopia.

According to a paper that infamously contributed to Google firing Timnit Gebru, co-lead of its Ethical AI team, in 2020, large language models such as the one behind ChatGPT are best thought of as “stochastic parrots.” The models can assemble convincing outputs, such as human-like conversation, on the basis of probabilistic analysis of the vast amounts of text they were trained on, but they have no grasp of what any of it means.

This is why using a platform such as ChatGPT is often a hilarious exercise in spotting how much absolute nonsense it can spit back at you.
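
To see what “stochastic parrot” means in miniature, consider a toy sketch in Python (my own illustration, not anything from the paper or from OpenAI): a program that learns only which word tends to follow which in its training text, then generates fluent-looking sequences by sampling those statistics. Real large language models are vastly larger neural networks, but the underlying principle of next-word probability is similar.

    # A toy "stochastic parrot": a bigram model that learns which word tends
    # to follow which in its training text, then generates text purely by
    # sampling those probabilities. It has no notion of meaning.
    # (Illustrative sketch only, not how ChatGPT is actually built.)
    import random
    from collections import defaultdict

    training_text = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    )

    # Record, for each word, every word observed to follow it.
    follows = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def parrot(start: str, length: int = 12) -> str:
        """Generate text by repeatedly sampling a statistically likely next word."""
        out = [start]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))
        return " ".join(out)

    print(parrot("the"))  # e.g. "the dog sat on the mat . the cat chased the dog"

The output reads as grammatical only because the statistics of the training text are grammatical; the program understands nothing, which is exactly the point of the parrot metaphor.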

AI technologies developed with large data sets will only further embed the range of biases prevalent in human life. If AI becomes more integrated into our lives with no oversight, it will amplify those biases, and worse.

Which brings me to computing capacity. Developing AI requires immense computing power. The world’s computing capacity is being increasingly concentrated in the hands of Big Tech. Companies such as Amazon.com Inc., Microsoft Corp. and Alphabet Inc./Google dominate cloud computing, which provides the digital infrastructure on which much of AI is being developed and on which it will run.

This infrastructure will have to expand significantly in the future to keep up with the demands of AI developments, leading to negative effects such as rising greenhouse gas emissions and energy costs. Moreover, these companies, which are already accused of having too much power over us, will only further entrench their control.

This concentration also means we're unlikely to see the development of AI technologies that could do genuinely useful things for the public. My favourite idea, for example, would be to automate the investigation of tax avoidance and evasion by the wealthy and big business, and then automate the enforcement action against them.

Unfortunately, Big Tech is not going to invest in developing these kinds of AI technologies. That’s because the technologies we create usually end up reflecting the social, political and economic context in which they emerge.

We're at a crossroads, and we need to act now. Trying to regulate AI after the fact will not be a viable option.
