4 questions – Oct. 17 – World Building

Hi all,

I really appreciated all the careful comments and questions. I echo Lee’s statement that critique doesn’t necessarily mean pessimism or a lack of enthusiasm. These technologies are not going to go away, and I think critical engagement, even as we remain giddy about their potential, is essential.

Looking forward to this week’s class! Here are my questions:

1. In “Design Justice, A.I., and Escape from the Matrix of Domination,” Sasha Costanza-Chock, in retelling their experience of going through airport security as a femme-presenting trans person, makes clear that immersive worlds, such as the “security theatre” stage of the airport, are not only virtual but also include real-world sites where strata of power are exercised and normalized. Thinking of games like “Papers, Please,” and also of Scott Lucas’s insistence that a 1:1 copy of the world (and the immersion that accompanies it) is impossible, in what ways can designers (and educators) produce virtual worlds of their own that draw attention back to the real world? While Scott most commonly flags virtual worlds as places we enter in order to escape from “real-world” responsibilities, what can we do about virtual worlds, like airport security, where the world is imposed upon its occupants and there is “no escape” once they are immersed within it?

2. The National Post, in promoting the Frankenstein AI project, gushes that it is “making AI kinder and therefore more human”; building on this, an audience member, praising the project, states that the AI was a way to learn more about her human compatriots attending with her, and that it in fact gave her greater insight into them than she would have had on her own. Do you think that AI, and the immersive worlds created by humans/AI and populated by AI and humans, is meant to be an insight-revealing mirror for its players/audience? By reflecting people back at themselves, what are the positives? The negatives?

There has been a long history of designers trying to make machines more “human,” with the benchmark for AI being equivalence to humans. What do you think are some of the positives of trying to make an AI more human? What are some of the drawbacks?

3. Projects like Cell as City, and the USC School of Cinematic Arts in its mandate, present the models and virtual worlds they generate as “a platform for visionary and predictive imagination”. What are the strengths of such virtual world creations and large-scale simulations? What sorts of things are they trying to predict or prototype? What might they be good at predicting? What are some of the traps of building and using “predicting machines” and/or “simulating machines”?

4. Joy Buolamwini, in conversation with Costanza-Chock’s arguments, warns of the dangers of what she calls the “coded gaze,” where the inherent unconscious and conscious biases of programmers (often built into common code libraries used and reused over many years) are perpetuated in ways that harm, most often, anyone who is not a white male. What might be some effective ways to resist “Universalist design principles” (as Costanza-Chock calls them)? How might code and programming begin to become more intersectional?

Tue, October 16 2018 » Future Cinema, Web 2.0, scary, software, surveillance
