‘Machines that Begin to Resemble Life’ — Milieux at ARS

Part I of our interview with Institute Co-Director Chris Salter by Stephanie Creaghan, Head of Communications at the Milieux Institute, and Brennan McCracken, PhD candidate in English at Concordia University 

Milieux Co-Director Chris Salter, together with LePARC Co-Director Angélique Willkie, will lead the keynote discussion with seminal new media artist David Rokeby at this year’s Ars Electronica Festival. Salter will also present his collaborative project SENSEFACTORY at the Hexagram Network’s EMERGENCE/Y Garden at the festival.

Brennan wrote the questions, and I (Stephanie) elaborated on them in a dynamic and engaging discussion with Chris about the keynote speech, his project, and embodied cognition. 

Installation image of SENSEFACTORY

Brennan via Stephanie: The theme for the Hexagram Network garden at this year’s Ars Electronica festival is EMERGENCE/Y, and the topic for your collaborative keynote discussion, with David Rokeby and Angélique Willkie, is machine-body interaction. How do you understand the keynote topic within that theme? How do notions of EMERGENCE/Y afford an understanding of the current state of machine-body interaction?  

Chris Salter: I just wrote an essay on the generative pre-trained transformer models (the so-called GPT-3 models) released by OpenAI, which have gotten a lot of hype because people are like, “Wow, machines can now generate not only sentences, but full paragraphs that sound like humans wrote them!” They sound like human language, but no human has produced this sequence of words. This brings to mind a talk that Jean-François Lyotard gave in the 1980s called Can Thought Go On Without a Body? We, as human beings, have language, and with it we can create meaning. Then we have bodies that can hold tools. Animals can manipulate tools too, but one of the oldest arguments about standing on two feet is that it frees our hands to create new tools.

I’m convinced that the whole way we live in the world is based not only on what our brains do, but on what our whole holistic system does. What’s interesting is that we are developing machines we increasingly call intelligent, which is a very problematic word: psychologists, cognitive scientists, neuroscientists, and linguists don’t really agree on a definition of human intelligence. So we don’t really know what human intelligence is, yet we want to make an intelligent machine, which is a bit of a problem. If you look back at the early histories of artificial intelligence, however, they have a really specific understanding of what intelligence is: the idea of logical thought-processing, that humans make sense of the world through logical propositions, that we can model those propositions in the workings of the brain, and that we can then build machines that operate with those same logical structures and therefore “simulate” intelligence. The problem with all that is that human beings are finite, and machines, theoretically, are not. A Turing machine can move its tape back and forth forever because it has no time in it. Human beings have time, and that time is in the physics of our bodies: in the organs, in the skin, in our senses, in our brain, in our muscles, because they decay, they slow down, they get injured. The body can repair itself in a sophisticated way, but our whole way of understanding the world is based on the fact that we are finite and have finite time.

George Lakoff, the American linguist, for example, says that we can’t even have language without a body, because metaphors themselves are rooted in physical things: “I feel down today.” So I think down; where is that? Down is this way, not up there! Or I’m feeling “up,” or “effervescent”; all these metaphors are deeply rooted in a physical experience of the world, and their meaning is intrinsically linked to our embodiment.

Meaning tells us something about language, and mathematical and physical concepts tell us something about machines, but all of this indicates that machines come from a specific material world; it is not a world out there, removed from us, but in fact one we have created. The question of embodied cognition has become much more dominant in these discussions. Thirty years ago, all of this was ignored. It’s the famous problem of common sense that the philosopher Hubert Dreyfus first raised against intelligent machines in the 1970s: machines have no common sense. For instance, if you knock a cup onto the ground, you know the water will follow. But the machine does not; it doesn’t know anything about water, or the ground, or any of this contextual significance, which is a fiendishly difficult problem for machines. This gets us to the Hexagram theme and the question of EMERGENCE/Y. We are facing human-produced crises that of course affect more-than-human things as well, and part of those crises lies in our own foibles, our own pride, our own misunderstanding of our role in the world, our failure to grasp the repercussions, not only for us but for other entities. We tend to think we know everything because we have access to language and thought.

We claim that we’re entering a new phase of humanity, and we’re in this crazy tension where, as humans with our own ways of making meaning, we don’t know what to do, because we’re producing the conditions that generate the very crises we then have to solve.

Stephanie: That makes me think of Spinoza’s Ethics. In the section On the Affects he describes how self-esteem and repentance are “very violent” affects, because we believe ourselves to be free, or responsible for/in control of actions in a way that is disproportionate to reality. 

CS: We feel that, because the so-called artificially intelligent deep-learning systems are so good at what they do, we’ve solved the problem of intelligence, but in reality we’ve side-stepped it! These systems we’ve built are very good at specific tasks, but they can’t switch contexts. If I’m looking at an image, I associate it with something else and thus create a different context for it; we have associative possibilities, but machines do not.

Stephanie: Spinoza also talks about this, about our associative capacities: how we link images or experiences and affects with one another and generate context. He also describes how the mind’s capacity becomes elevated through bodily experience.

CS: Bodily experience is highly contextual. Marcel Mauss’s Techniques of the Body points out how bodies in certain cultures are shaped very differently from those in others. There is a social and cultural shaping of what we mean by embodiment. This is a problem in discussions of embodied cognition, where certain universal assumptions are made. Here’s a good example: we took an earlier installation of Haptic Field to Indonesia. Part of this environment takes place in darkness. People wore suits fitted with haptic actuators, so they received vibrations, and with LEDs, so they could see others moving. They also wore glasses with frosted lenses that blurred their sight. So vision is reconfigured: you see a world with less distinction in the visual field.

Installation image of Haptic Field

In this environment in Bandung, which at times goes completely dark, people screamed and screamed, because in the darkness, perhaps because of the optical phenomena of these lights, they started to see spirits, ancestors. These are Muslim communities, but the experiences are deeply rooted in earlier animistic traditions in Java. The director of the centre was also an anthropologist, and she was doing interviews with viewers; they said they were terrified of people coming close to them in the darkness because it felt like spirits coming onto their bodies. There is no universal form of bodily perception, in that sense; there is a very specific perception. In Germany, by contrast, when the lights went dark and there was a lot more space, people had other issues: they saw their lives passing by them, all these other kinds of metaphors.

We have to be mindful that bodies are shaped by their cultural environments, and yet technologies are designed to accommodate some abstract universal body. In my new book, Sensing Machines (which comes out in March 2022), I talk about how sensors co-shape our very lives. My argument is that this doesn’t start with Silicon Valley-style surveillance capitalism. It goes back to the 19th century, when physiologists and psychologists in Europe came up with the mathematics and models to quantify sense-perception. They were trying to gain knowledge about how the senses work, about how the brain works. Those psychophysical models, described by the German psychologist Gustav Fechner in 1860, are unbelievably still being used today to model perceptual listening systems and to design AR and VR glasses. What happens is that contemporary technologies are designed to confirm how we think perception works according to our models, not how perception might actually work, especially when it is culturally and socially shaped. And therefore we create technologies that reinforce a certain understanding of how we model how we perceive, which is then reified in actual technical-material devices.
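To give a concrete sense of what such a psychophysical model looks like (an illustration on our part, not a formula cited in the interview), Fechner’s law expresses perceived sensation as a logarithmic function of stimulus intensity:

S = k · ln(I / I₀)

Here I is the physical intensity of a stimulus, I₀ is the faintest intensity a person can detect, and k is a constant specific to each sense. It is this kind of compact, body-agnostic mapping from physical input to perceived magnitude that Salter argues still underwrites the perceptual models built into listening systems and AR/VR displays today.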

In other words, our bodies are never separate from the technological environment of which they are a part. I’m very much not a technophobe, and very much someone who says there is no clear separation between human beings and technological beings. We want to create this division, this illusion that technologies are exterior to us, that they appeared out of nowhere, but in fact we are technical beings: whether it’s using language, or writing, or pictures, or code, we are integrally technical beings.

Find out more about the keynote speech here, and more about the Hexagram Network’s EMERGENCE/Y garden here. Part II of the interview will be published on Monday, September 13th as a postmortem for Ars.
