Our program is empowered by a welcoming and diverse community of students with a uniquely global perspective. Together we are making things right for our communities and our future.
Douglas Gregory (he/him/his) is a game designer and educator with 14 years of experience developing commercial video games at both indie and AAA scale. From 2011 to 2021, Douglas worked at Ubisoft Toronto, contributing to Splinter Cell: Blacklist, Starlink: Battle for Atlas, and Far Cry 6.
Since 2017, Douglas has also taught part-time in the Bachelor of Game Design program at Sheridan College, covering areas including game mechanics, systems design, game economies, balancing, and uses of data in design. He also serves as an elected community moderator on GameDev.StackExchange, helping fellow developers solve design and technical problems that arise in game development.
Douglas is pursuing research into uses of procedural generation in game development, including runtime generation of new environments or mission objectives, and its use as a tool for developers authoring game content with algorithmic acceleration or mixed initiative collaboration with the machine.
One of his most recent projects was created at a 3-day TOJam in 2021. That's My Jam is a freestyling rhythm game where players improvise their moves on the fly. Pressing buttons to the beat drives a procedurally-generated dance to be performed by the on-screen dancers, while the player is scored for their timing accuracy.
Jonathan Silveira is a media artist from Toronto, Ontario with a BDes in Graphic Design from OCAD University. Jonathan has experience in graphic design, motion capture, and creative coding in interactive media. He is a member of The Public Visualization Lab (PVL), a cross-institutional research lab with members at York, OCAD, and Ryerson Universities. His work at the PVL has centred on producing content and tools for artists, designers, and communities for public installations and exhibitions such as Everything In Its Place, 2017 (Reel Asian Film Festival, Toronto), Diagrams of Power, 2018 (Onsite Gallery, Toronto), Passing Through the Heart, 2020 (ISEA International conference), and Nuit Live: Online Archive Activation, 2020 (Nuit Blanche, Toronto).
Jonathan’s research interest lies in exploring the intersection of narrative, game engines, cinematics, and interactive immersive productions. His major research project at York University investigates the use of game engines to model complex behaviours in artificial life ecosystems. The project leverages generative ontologies, game design, and storytelling to explore the dynamics of resource distribution among entities inside an ecosystem. This work was inspired by the role of algorithms in the emergence of radicalization, conspiracy theories, and anti-scientific rhetoric on social media networks. The theoretical framework guiding the project was developed using cybernetics, to analyze the effect of a system whose actions are fed back into itself to steer its future behaviour, and symbolic interactionism, to investigate how individuals’ micro-level interactions with each other evolve into their society. The project follows a research-creation methodology situated in the fields of computational and generative arts.
Kieran Maraj is a Toronto-based performer and researcher of electronic music. He builds and plays custom instrumental systems that focus on gestural performance, machine collaboration, and realtime improvisation. Through his practice he has been exploring the relationship between sound, music, and technology for over a decade. He is interested in the expressive possibilities and collective experiences afforded by technological systems and is an active member of the Dispersion Lab.
Kieran’s current research is focused on machine extension, augmentation, and collaboration in both solo and collective music making practices. What becomes possible when control of a sonic performance is a process of negotiation between humans and machine agents? What can a machine add to a musical performance that a human cannot, and how can these differing agencies create a cohesive whole? This exploration involves the use of machine learning technologies in realtime improvisatory settings and the construction of complex interactive sonic systems.
Kimberly Davis graduated from York University with a Specialized Honours Bachelor of Arts degree in Digital Media. She has worked as a student ambassador and mentor for the Digital Media program, and at the Art Gallery of York University (AGYU), assisting artists with technical aspects of their artworks. She has also worked with other independent artists as a professional assistant and tech artist.
Kimberly’s research interests involve interactive installations, data visualization, mixed reality, and user interfaces. She hopes to deepen her knowledge of these areas while also exploring what can be created with them. One of Kimberly’s most recent projects is a capstone project called “Hi Glitchi :).” This research-based project explores the use of artificial intelligence in a quirky format to create an unconventional relationship between a flower and a human using the Twitter platform. The relationship was inspired by the book “The Secret Life of Plants” by Peter Tompkins, which describes plants as living creatures that are blind, deaf, and dumb compared to humans; Glitchi is quite the opposite, because he has his own voice to communicate. Participants can converse with this “extraterrestrial” flower, triggering unexpected responses to their questions through speech recognition, petal movement, lights, and Tweets, along with Twitter sentiment analysis.
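As an illustration of how a sentiment score might drive such responses, here is a minimal hypothetical sketch. The function name, thresholds, and actuator labels are invented for illustration and are not the project's actual implementation; it only assumes a polarity score in [-1.0, 1.0], as common Twitter sentiment analysis tools return.

```python
# Hypothetical sketch: route a participant's question to one of a flower's
# response behaviours based on a sentiment polarity score in [-1.0, 1.0].
# All names and threshold values here are illustrative placeholders.

def choose_response(polarity: float) -> dict:
    """Map a sentiment score to illustrative actuator settings."""
    if polarity > 0.3:      # clearly positive question
        return {"petals": "open", "lights": "warm", "tweet": "joyful"}
    if polarity < -0.3:     # clearly negative question
        return {"petals": "closed", "lights": "dim", "tweet": "sulky"}
    # neutral or ambiguous question
    return {"petals": "sway", "lights": "neutral", "tweet": "curious"}

print(choose_response(0.8))  # a cheerful question opens the petals
```

In the installation itself these settings would correspond to physical outputs (petal movement, lights) and posted Tweets rather than a printed dictionary.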
With a BA in Theatre and Digital Media, Kwame explores the interdisciplinary nature of digital media at the intersection of performance and computational art. He is interested in exploring the body as a medium for human-computer interaction, connecting the physical to the digital in performance and interactive settings. As someone with a background in devised theatre, he enjoys work that is performative and dynamic and leans toward the abstract and the metaphorical, leaving more room for play and experimentation. Coming from the world of theatre, he also brings an eye for storytelling and narrative.
His most recent project, Kaleidoscope Dreams, uses Facemesh, a program that tracks the user's face and creates a surreal virtual mask on screen. Fluttering about in the environment are butterflies. If the user closes their eyes, the butterflies are drawn to the third eye of the virtual mask. When the butterflies get close enough to the third eye and the user opens their mouth, a synth triangle wave is produced and the butterflies go through a metamorphosis, becoming more ethereal, kaleidoscopic, and free in their movements. This project was inspired by the mythology surrounding butterflies, in which some cultures view them as a representation of the soul, of rebirth, or of death, as well as by the third eye, which in many cultures symbolizes a state of enlightenment. The title is derived from the official name for a group of butterflies, a kaleidoscope, as well as from the Chinese philosopher Zhuangzi, who once woke from a dream in which he was a butterfly and asked, “Am I a man who dreamt of being a butterfly, or am I a butterfly dreaming that I am a man?”
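The eye and mouth triggers described above can be sketched with simple landmark geometry. This is a minimal sketch assuming normalized (x, y) landmark coordinates of the kind face-tracking models like Facemesh provide; the sample coordinates and thresholds are illustrative placeholders, not the project's actual values.

```python
# Illustrative sketch of the interaction logic, assuming normalized (x, y)
# landmark coordinates for the eyelids and lips from a face tracker such as
# Facemesh. Coordinates and thresholds below are hypothetical placeholders.

def aperture(top: tuple, bottom: tuple) -> float:
    """Vertical distance between two landmarks (normalized coordinates)."""
    return abs(top[1] - bottom[1])

def eyes_closed(upper_lid: tuple, lower_lid: tuple, threshold: float = 0.01) -> bool:
    """Eyes count as closed when the lids nearly touch."""
    return aperture(upper_lid, lower_lid) < threshold

def mouth_open(upper_lip: tuple, lower_lip: tuple, threshold: float = 0.05) -> bool:
    """Mouth counts as open when the lips are far enough apart."""
    return aperture(upper_lip, lower_lip) > threshold

# Closed eyes attract the butterflies toward the third eye; an open mouth
# triggers the triangle-wave synth and the butterflies' metamorphosis.
state = {
    "attract_butterflies": eyes_closed((0.5, 0.40), (0.5, 0.405)),
    "metamorphosis": mouth_open((0.5, 0.60), (0.5, 0.68)),
}
print(state)  # both flags are True with these sample coordinates
```

In practice the landmark pairs would be read each frame from the tracker, and the metamorphosis would additionally depend on the butterflies' proximity to the third eye.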
Zhouyang Lu is a creative, innovative, and passionate Chinese digital media designer with experience in graphic design, creating engaging logos, posters, web pages, app UI, brochures, promotional videos, and more. He also has experience in 3D modelling and physical installation art. After completing his BA in Digital Media at York University, he commenced graduate studies, pursuing an MA in Digital Media and focusing on the intersection of design, interactive technologies, and data visualization. He is currently exploring intersections of visual and print media, and formats of interactive installations.
BREATHE: A COVID-19 Data Visualization Design
Coronavirus Disease 2019 (COVID-19) has caused one of the most significant pandemics in history. Although the situation is ongoing, the pandemic has generated a wide array of data that can be difficult for non-specialists to understand. The topic remains highly relevant, and people need to understand the intensity of the pandemic; a data visualization installation can achieve this by engaging viewers emotionally. The researcher also chose this theme because the project marks the first anniversary of the pandemic and reflects on how alarmingly fast it has spread.
As the pandemic spread throughout the world, the researcher made numerous designs that visualize the virus and its implications. Choosing this issue as the central theme of the research can therefore bring together differing audiences and opinions. Moreover, the sheer volume of data about COVID-19 can overwhelm people, which is a main reason many remain unaware of the dangers of the pandemic and why some refuse to wear masks or to follow social distancing guidelines. By applying the concept of data visualization, the large datasets surrounding this pandemic can be clearly explained and, hence, understood by a general audience.
Raghad El-Shebiny, also known as Riggy, is a recent graduate of the Digital Media BA program. When she’s not working or studying, Raghad is usually spending time with her family or friends outdoors, or cooking and watching Netflix. She loves being involved in student government and student associations, advocating for herself and her peers and organizing events to raise awareness about the field of digital media. Raghad is passionate about finding new ways to help people and give back to their communities. She is a genuine believer in the power of education to help make changes in the world.
While at York University, Raghad has been working with various professors in the Lassonde School of Engineering on projects exploring new ways to present, teach, and assess course material using pre-existing and specially made technology. Raghad’s work with the Lassonde Educational Innovation Studio (LEIS) helped support professors in transitioning their courses to an online format when the pandemic started, as well as setting up professional development workshops for faculty to find new ways of presenting and assessing course content using existing technology.
Zian Liu received his B.Sc. (2020) in computer science from the Lassonde School of Engineering, at York. Liu is a scientific researcher whose projects mainly focus on interactive entertainment such as traditional PC games, experimental VR and AR applications. In the summer of 2021, Liu joined one of the R&D teams of Tencent Games, which is responsible for producing virtual idols and virtual production. His work has been used to facilitate a number of live events with Tencent's subsidiary game studios and game products such as PUBG Mobile and Arena of Valor.
Since motion capture for gaming will be a significant component of virtual worlds such as the Metaverse, Liu's research project proposes replacing traditional mocap hardware with an advanced, economical computer-vision-based multi-person pose estimation solution. By integrating CV-based 3D multi-person pose estimation into the game engine, the project will explore the possibilities and opportunities of combining VR with this new technology, which may soon become a common feature in the game world.
Eyal Assaf has been immersed in the creative side of digital technologies since the mid-1990s. He has worked in film, television, and game productions, mainly in technical roles. In parallel, he is also involved in the academic side of the industry, as a professor and program coordinator in game and VFX courses. He is the author of the book Rigging for Games—A Primer for Technical Artists, as well as a series of online lessons related to the courses he teaches. Eyal is passionate about problem-solving, especially when it comes to digital workflows and pipeline development, including the use of game engines for research outside the entertainment arena.
A current and very real challenge is the constant cat-and-mouse game between data protectors and data attackers: for every new firewall, anti-virus, or ransomware protection erected, new ways are found to circumvent those digital fortresses and hack into the data. Eyal’s research interests include how digital agents that simulate organic behaviour can be mobilized to deal with elements of cybersecurity and digital viral behaviour. This includes connections between the digital and physical spaces, including biometrics, gamification, and different sensory inputs.
Grace Grothaus is a computational media artist grappling with the climate crisis. Her practice-based artistic research encompasses environmental sensing, physical computing, algorithmically generated imagery, and speculative futurity. Her projects take the form of interactive or responsive indoor and outdoor installations, and performances. Grothaus' artwork has been exhibited around the world, including the International Symposium of Electronic Art, Environmental Crisis: Art & Science in London, UK, Cité Internationale des Arts in Paris, and the World Creativity Biennale in Brazil. Grothaus has received awards for her work from organizations such as the National Foundation for Advancement in the Arts in the United States and was an Art 365 Fellow. She has been invited to speak about her work for the University of California San Diego's Design@Large series and Ecoartspace in Santa Fe among others.
Grothaus’ research questions center around the present and future global climate crisis. She is deeply concerned with the question of how to foster empathetic ecosystemic relationships between one another and our more-than-human environs. Can environmental sensing and visualization artworks act as an empirical interface for grasping our complex, interwoven, beyond-human ecologies of present-day earth and inspire new ways of thinking about them? If so, might they lead to novel methods for response and environmental engagement with the ongoing event of climate change? She is exploring how environmental sensing and visualization wearables, installations, and performances may empower citizen-sensing scenarios which expand capacities and processes and form new ways of articulating “sites” within practices of environmental monitoring. She aims to create works that hold space for future-facing reflection regarding human agency enacted through the constructed world and are generative of imagined alternatives.
Hrysovalanti Fereniki Maheras holds a B.A. Hons in Media Arts from Plymouth University, UK and an M.A. in Digital Media from York University, CA. She is currently a researcher at n-D::StudioLab. In her research, she speculates on the innovation of computational machines as emotional beings, as she navigates the connections between the philosophical theories written about the human soul, and the cybernetic theories and artworks created for the exploration of the mind of technology. Her practice emphasizes the creation of groups of electronic sound/kinetic sculptures that act as artificially living communities. Traversing both virtual and physical worlds, she explores the creation of a virtual analog environment emerging in a shared complex physical habitat.
Project: Thumos is an artwork made of an array of five free-standing interactive sculptures. The word thumos, or spiritual Eros, is an ancient Greek word used by Plato in his book The Symposium to describe the part of the human soul that translates to the whole of emotions. I am influenced by Plato’s approach to the study of the human soul as an analogy for understanding the mind and desire of the future of the computing machine by preconceiving the rise of mechanical beings and speculating on what they could be. The process of the creation of the artwork Thumos follows a research-creation methodology that focuses on the attunement of natural and artificial organisms in a digitally modulated physical habitat. In the Thumos habitat, a community of synthetically emotive sculptures negotiate the transference of artificial emotional states through stochastic dialogues. The dialogues unfold in a charged environment, where the simulated desires and thoughts of the sculptures generate light and sound events. My effort has been to incorporate the capacity of humans to entrain their emotions towards positively charged emotional states in an interactive installation that invites people to co-create with the synthetically emotive sculptures the experience of the Thumos installation. It is in this spirit that my work uses a speculative approach to create an alternative eco-systemic framework for art making based on cybernetic theories and philosophy as its basis.
Janica Olpindo is a queer Filipina artist and researcher who immigrated to Toronto, Canada in 2007. She has a BFA in Integrated Media from OCAD University and an MA in Digital Media from York University. Ideas within Olpindo's recent work stem from her interest in mechanical systems within breaking (or "breakdancing") and machines, as well as her interest in human-computer interaction. Olpindo works with electronics, digital media, and installation.
Her current research, Breaking Barriers: Transforming Breaking Movements and its Culture, focuses on increasing the sense of inclusion in the culture and practice of breakdancing. The work applies interactive machine learning and methods from dance training to “breaking,” with the goal of creating “interactive spaces” that invite movements and expressions by participants of diverse genders, sexualities, and abilities who might otherwise feel excluded from these performance spaces. The project builds from Janica’s personal relationship to the “b-girl” culture in Toronto, and will further provide training in the rigorous use of technological and scholarly methods to effect positive societal change. The project is expected both to benefit this particular community and, more broadly, to provide an example of an inclusive and ethical use of artificial intelligence and machine learning in embodied, communal social spaces.
Marcus A. Gordon is a PhD student in computational arts with a research focus on live coding performance and archimusic as modes of interdisciplinary practice. His work seeks to explore the dynamic relationship between architecture and the ecology of eversive virtuality, a concept that describes the presence of immateriality in physical space.
His research at the n-D::StudioLab embodies an exploration into algorithmic composition processes, but also in the making of instruments for both expression and analysis. In direct relation to his dissertation research, this exploration begets a narrative around the epistemological nature of live coding and how he intends to apply it to academic research of systems that further the understanding of human association to nature.
Marcus holds an MFA in Digital Futures from OCAD University where he learned holography, data visualization and began his research on the subject of the transplane image. His master's thesis Habitat 44º sparked a research interest in the relationships between energy and information, and laid a groundwork for his research into the concept of material energies.
Michael Palumbo is a musician, scholar, and developer. He holds an MA in Performance Studies from York University and a BFA in Electroacoustic Music from Concordia University. His current PhD research spans electroacoustic music improvisation, distributed creativity, and version control systems. These interests are expressed through the projects MischMasch, a VR-based multiplayer modular synthesizer, and git show, a digital musical instrument design and composition experiment involving many composers. Michael studies with Dr. Graham Wakefield in the Alice Lab for Computational Worldmaking. He performs music under the alias Thispatcher, including regular gigs around Toronto, has released three albums to date, and produces a monthly telematic concert series named Exit Points. He also contributes to open-source software projects, and maintains allhands, a utility for easily streaming continuous control data over the web for real-time collaboration and performance.
Nick Fox-Gieg is a researcher, animator, and developer in Toronto. Most recently, he has been working on XR projects for the Verizon 5G EdTech Challenge, NYT T Brand Studio, the University of Waterloo, Google Creative Lab, and Framestore. His awards include a 2017 Engadget Alternate Realities grant, Eyebeam and Fulbright Fellowships, and the jury prize for Best Animated Short at SXSW 2010; his work has also been shown at the Ottawa, Rotterdam, and TIFF film festivals, at the Centre Pompidou, and on CBC TV; his practice has been supported by grants from Bravo!FACT, the Canada Council for the Arts, and the arts councils of Ontario, Pennsylvania, Toronto, and West Virginia. Fox-Gieg holds an MFA from the California Institute of the Arts and a BFA from Carnegie Mellon University.
Nick's research takes the form of the Lightning Artist Toolkit (Latk): a complete pipeline for frame-by-frame volumetric animation, the only open-source example of its kind as far as he is aware. Microsoft’s Kinect, the first consumer depth camera, arrived in 2010; in 2015, developer preview versions of the HTC Vive VR headset introduced the first mass-market six-degrees-of-freedom (6DoF) controllers—wands tracked in 3D space. Combined, these two developments enable exciting new approaches to wrangle the media that we collectively refer to as “XR” (a catchall acronym for “virtual reality + augmented reality + mixed reality”)—in his case, for creating hand-drawn XR animation with 6DoF drawing tools. In particular, Google’s Tilt Brush, the 3D light-painting application promoted alongside the Vive hardware in 2016, and its open-source successor Open Brush, have become popular enough with the general public to provide meaningful quantities of 3D drawing data for machine-learning-based animation experiments.
Racelar Ho is an artist, theorist, curator, and founder of the IVAS art group. She earned bachelor’s and master’s degrees in Architecture and Landscape Design and Experimental Art from Guangzhou Academy of Fine Arts. Her artistic practice and scholarly research focus on the significance and influence of contemporary, post-human aesthetics on the somatic and spatial geographies of emerging virtual environments. She has explored these ideas through infinite virtual environments, constructing worlds of poetic thought and Zen dialogue in different dimensions, and exploring idealistic and transcendent worlds of vitality.
As a facilitator of interdisciplinary discourse, she founded the international Art-Sci group IVAS in 2017. To create new visual and creative productions, IVAS explores the relationship between audio-visual media forms and physical and virtual space, reframing contemporary issues through hybrid, interdisciplinary visual forms. She is currently leading a multi-disciplinary art team working on mixed-reality and multi-sensory game art projects related to climate change, the post-human technoscape, and spatial rhetorics research.
Racelar Ho's theoretical work and artistic practice are centred on (post-human and contemporary) rhetorical aesthetics, which is concerned with rhetorical-sociological-somatic-spatial (geographical)-aesthetic issues. In her work, she is curious about amplifying the impact of dialogue methodologies between creators and audiences in infinite virtual environments. Her work also aims to build a hybrid-infinite world to express poetic thoughts about Zen dialogues in different dimensions and explore the idealistic world of transcendent beings.
Rory holds an MA and BA Hons in Digital Media from York University, and is a researcher in the DisPerSion Lab. His work explores the crossroads between sonic ecosystems, agent simulation, telematic music, and human/machine collaboration. Exploring the musical potential of artificial life systems, his work places performers alongside virtual beings inspired by natural processes to investigate resulting perceptual and performative outcomes of their interplay. Rory's past conference paper publications have investigated extensions to Deep Listening practices including non-human participants (CMMR 19), and methodologies for networked musical performance in the wake of the global pandemic (Audio Mostly 21). His performance practice considers timbral movement through noise, the architectural potential of sound through mapping sonic content to space, and subsonic frequency through a processed and extended bass guitar setup. He is also an active member of the Doug Van Nort Electro-Acoustic Orchestra.
His most recent project, Maxtrip (Hoy & Van Nort, Audio Mostly 21), was developed in response to the rising number of multi-platform telematic activities happening during the pandemic. Maxtrip was designed to control all aspects of the JackTrip startup and connection process from within Max, lowering the barrier to entry for students and performers with Max experience but without practical JackTrip knowledge. Its use also allows for greater control and flexibility over established JackTrip connections.