AI at MUTEK Forum: Centralizing Infrastructure and Impact  

At the closing happy hour of MUTEK Forum's second day, which was dedicated to exploring artificial intelligence, a tiny humanoid robot wandered about the crowd. Negotiating the resulting human-machine interactions was NAO, an autonomous, programmable robot hosted by the Goethe Institute Montreal in collaboration with Milieux Institute, Hexagram, and Eastern Bloc as part of “The Robot In Residency” project. Among the team working directly with the robot were Milieux members Ceyda Yolgormez, Zeph Thibodeau, and Patil Tchilinguirian, who brought NAO to Les 7 Doigts de la main for the meet and greet.

Standing no more than a foot tall, well below the knees of the other attendees, NAO nonetheless asserted its presence in the room. Narrating its experience both vocally and through text displayed on a projector at the back of the room, NAO spoke with a subtle, though ominous, clarity about its circumstances. When people asked it to dance or speak, NAO maintained that it was not a show pony, and after engaging with people for a while, it was determined to move on. Walking about on its own, NAO ruminated on its desire for autonomy and its capacity for finitude with a short, biting wit that made those around it laugh with a mixture of astonishment and unease.

The reactions NAO generated echoed the discussions that emerged from the day’s programming on the uncertain future ahead for artificial intelligence. Panelists, workshop participants, and the keynote speaker all addressed the duality of our techno-present, marked at once by excitement at AI’s creative possibilities and concern for its potential negative impacts. Amid this uncertainty, a pressing question arose, one the forum’s attendees sought to address: how are artists, writers, researchers, and industry professionals positioned to creatively intervene in the deployment and management of AI technologies?

The day began with a keynote address by Dr. Sarah Myers West, award-winning researcher and Managing Director of the AI Now Institute, presented with support from the Machine Agencies working group at Milieux and Concordia University’s Applied AI Institute. Myers West urged the audience to challenge deterministic ideas of AI by considering a plastic future for artificial intelligence. She rightly pointed out that within the current media frenzy, AI operates as a floating signifier, detached from its material reality in both techno-utopic ideals and dystopian visions. To combat this, Myers West argued that we need to look at the infrastructural underpinnings of artificial intelligence, which are currently centralized in the hands of the few Big Tech companies that hold the data advantage. To enact any possible alternative, we must challenge this monopoly on power and harness different incentive structures through continuous experimentation. According to Myers West, we are not at ground zero regarding regulation, and so we must harness pre-existing tools to widen the aperture of, and beyond, AI policy.

In the subsequent Q&A, Dr. Fenwick McKelvey of Machine Agencies and the Applied AI Institute asked a pertinent question about the relationship between artificial intelligence and open source. In response, Myers West noted that while the opacity of AI technologies is a clear issue, transparency does not yield the same affordances for AI that it does for traditional software: revealing the source code does not necessarily enable greater knowledge, nor does it ensure better regulatory possibilities. Rather, what is required is that we not reaffirm a “singularized output” for AI, in terms of either its material or its ethicopolitical reality. What matters is that we ensure an open context as we write alternative trajectories for AI.

Dr. Sarah Myers West – Photo credit: Maryse Boyce

Following Myers West’s call for creative experimentation, the afternoon sessions were dedicated to centralizing the voices of artists and researchers working directly with AI. In the panel “Resisting Unstable Diffusions: Art and the Governance of AI,” led by writer and researcher Melissa Vincent, we heard from Michael Running Wolf of MILA, Dreaming Beyond AI’s Raziye Buse Çetin, and Blair Attard-Frost from the University of Toronto on how AI can harm creatives by further sidelining minoritized perspectives. Running Wolf expressed that Indigenous perspectives in tech, and especially in AI, are few and far between. His work at MILA challenges this occlusion by using AI technologies to recover Indigenous languages, beautifully giving voice to lost languages and to those underheard in conversations on AI development and elsewhere. Attard-Frost attested to both the sidelining and the creative potential of minority perspectives in AI development and regulation, noting that AI is not built for queer and trans peoples. Yet, for Attard-Frost, queer and trans imaginaries are aptly positioned to aid in collective organizing and action, challenging the shortcomings of AI’s singularized perspective and construction.

Before we all went down to meet NAO at the AI happy hour, I attended “From Text to Sound: Building Multi-Modal AI Agents,” a workshop presented by UKAI Projects in collaboration with Machine Agencies and the Applied AI Institute. Over the course of two hours, presenters Jerrold McGrath, Kasra Goodarznezhad, and Luisa Ji gave us an operationalizable toolkit for doing more beautiful and interesting things with AI. This included a walkthrough of their own process, followed by a demonstration of various programs and datasets for creating multimodally with AI. Particular focus was given to text-to-sound models: among those shown were Eleven Labs, a text-to-speech software, and Riffusion, which generates music by producing images of sound rather than audio directly. UKAI Projects emphasized that working with AI can be guided by principles of beauty, compassion, and care, rather than by institutional interest, progress, and profit.

Jerrold McGrath, UKAI Projects – Photo credit: Maryse Boyce

At three points in the workshop, we were asked to discuss the impacts of our work with other participants. In small groups, discussion emerged not about what we did (an important restraint imposed by the workshop leaders) but about what we thought our work was doing and what we wanted it to do. This exercise felt foreign in the best way possible. At conferences and forums, we come primed to network, readied with soundbite rundowns of what we do: I work with so-and-so; I am a student at such-and-such a university. As a result, we hear a great deal about what people do and rarely why they choose to do it. In these discussions, beauty, compassion, and care frequently recede into the background.

As the workshop with UKAI Projects emphasized, and as other presenters echoed, in discussing the future of AI we need to be centralizing impact: what is AI doing, and what do we want it to do? With this in mind, we can hopefully do more beautiful and interesting things with AI.

By Kristen Lewis, PhD student in Art History at Concordia University
