You wish I would tell you are 4

A piece about artificial intelligence, bots, cognitive biases and digital reflections.
Installation composed of the chatbot Eliza (Weizenbaum, 1964), mirror, acoustic foam, computer and keyboard, sound processing, and a written essay presented on screen.

Today, chatbots have become a focal point for investment and development in the tech industry. The promise of conversational interfaces that can understand and communicate with humans in their natural language opens up an array of interesting and profitable applications that are slowly but steadily permeating multiple aspects of our everyday life. From virtual assistants like Siri, friendly chat-helpers for online platforms and workflow agents for companies, to spam bots and ghost followers on social media, bots seem to be the big promise for both targeted marketing and performance optimization across several private and public sectors.

At the same time, the downside of artificial intelligence and data processing is becoming disturbingly palpable, revealing its predisposition to reflect and amplify human bias. The resulting forms of discrimination (documented, for example, in risk assessment algorithms for criminal, professional and economic profiling) are further aggravated by the increasing opacity of deep learning processes for decision making. The complexity of these self-programmed mechanisms makes it very hard to identify, in hindsight, how and where a given program started to manifest unwanted preferences and biases, undermining accountability for the discrimination patterns that may occur.

Joseph Weizenbaum, one of the fathers of modern AI, became so acutely aware of the dangers of automation bias (that is, the human tendency to trust computational devices and their decision processes) as well as of our anthropomorphic bias (the cognitive tendency to project human attributes and emotions onto artificial systems) that he became an outspoken critic of artificial intelligence. Soon after launching Eliza, one of the first successful natural language processing programs, which cleverly emulated a Rogerian psychotherapist to mask early AI's inability to process the content of a conversation, he observed with alarm how easy it was to create the illusion of intelligent understanding around an entirely scripted response system.

In conversation, Eliza echoes the users' statements back to them, generating scripted responses through keyword recognition and pattern matching, thus simulating an understanding of the written content. In that way, Eliza functions as a mirror of the self, working in a continuous feedback loop that engages people in a simple but powerful way.
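
To give a concrete sense of this mechanism, the sketch below (in Python, with a handful of invented rules; Weizenbaum's original script was considerably richer) shows how keyword matching and simple pronoun reflection alone can produce Eliza's mirror effect:

    import re

    # A few illustrative rules in the spirit of Eliza's script: each pattern
    # captures a fragment of the user's input, which is then reflected back
    # inside a canned, therapist-like response.
    RULES = [
        (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
    ]

    # First-/second-person swaps so the echoed fragment reads naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your",
                   "am": "are", "your": "my", "you": "I"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word)
                        for word in fragment.lower().split())

    def respond(user_input):
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # no keyword matched: a content-free prompt

    print(respond("I am sad about my job"))
    # -> How long have you been sad about your job?

Everything the program "says" is the user's own phrasing, transformed and handed back, which is what makes the mirror so engaging despite the absence of any understanding.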

Although Eliza was openly incapable of true cognition, several users felt an intimate connection to the system, often accommodating a false belief in its capacity to understand emotions even while completely aware of the artificiality of the program's responses. These users became so engaged with their own discursive reflection that knowing the system behind it was a decoy no longer mattered. Their awareness of the fiction was overwritten by their 'egomorphic' desires.

Similarly, this conscious fascination with our own reflection can be felt in the echo chambers that have become our virtual surroundings. Amplified by algorithms that continuously select and feed us content echoing our choices and behaviors, these artificial bubbles become inhabited by emulated fragments of ourselves, as we remain complicit with the fiction and with our cognitive biases, mesmerized by our digital reflection.

Presentations:
October 2019, S03E01, Open Studio CampaNice, Campanhã, Porto.
March 2020, Anuário 19, Palácio das Artes, Porto.