AI keeps advancing, both in capability and in the number of open-source projects that let ever more people train their own models for their own needs. The artist and programmer Giacomo Miceli thus decided to train two AIs… then have them talk to each other ad infinitum.
To spice things up a bit, the artist modeled one AI on the texts, speeches, interviews – and voice – of German director Werner Herzog, while the AI that answers him is modeled on the personality of the Slovenian philosopher Slavoj Žižek. The two now discuss philosophy endlessly on the site The Infinite Conversation.
The site lets you listen to an endless debate between “hallucinations of a piece of silicon”
Upon arriving on the site, the visitor is dropped into the middle of the ongoing conversation. The conversations themselves, however, say more about how current machine learning models work than they contribute to knowledge. A disclaimer makes this clear as soon as you arrive: “everything you hear is totally machine generated. The opinions and beliefs expressed do not represent anyone. They are simply the hallucinations of a piece of silicon”.
In the site’s FAQ, the artist explains that he built it for three reasons: “1) because I could; 2) because I wanted to raise awareness about the power of machine learning […] 3) because it is a declaration of love to Werner Herzog and Slavoj Žižek, their brilliant ideas and their idiosyncratic speech”.
The site was built “using mostly free, open-source tools. The text generation itself is done with a popular language model that I fine-tuned on interviews and other material by the two speakers”. Obviously, once again, nothing in these exchanges should be taken too seriously. At times the conversations seem to be going somewhere.
At other times, the exchange makes no sense at all. And sometimes the “hallucinations of a piece of silicon” produce texts that are frankly unsettling, like this extract, in which one of the AIs wonders what it would do locked in a room with a gun and five naked women:
“I think you are right. But I have a similar fantasy, which I’ve never talked about publicly. If someone locked me in a room with a gun and five naked women, completely unembarrassed; if he left the room and closed the door and I knew he would be back in an hour – I don’t know what would happen to me. I don’t know if I could resist this temptation or not. The gun is right here, as are the five naked women. I don’t know what would happen to me. Yes I think so. I don’t know, however,” the AI rambles in one excerpt.
Having listened to the exchanges for a few hours, I can say that the conversation generally keeps a common thread: if one AI brings up a topic, the other answers or asks a related question. But however unsettling certain excerpts can be, everything suggests that these AIs are merely pretending to discuss philosophy, displaying a surface-level understanding without really grasping the substance of the subject.
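The turn-taking described above, where each AI responds to the other's last utterance, can be sketched as a simple alternating loop. This is purely an illustrative assumption about the mechanic, not the site's actual code (which this article does not reproduce); the `herzog` and `zizek` generators below are hypothetical stubs standing in for the fine-tuned models.

```python
from itertools import cycle

def converse(speakers, generators, opening, turns):
    """Alternate replies between two models, each one conditioned on
    the other's last utterance; return the transcript so far."""
    transcript = [(speakers[0], opening)]
    for i in cycle([1, 0]):  # speaker 1 answers the opening first
        if len(transcript) > turns:
            break
        last_utterance = transcript[-1][1]
        transcript.append((speakers[i], generators[i](last_utterance)))
    return transcript

# Hypothetical stub "models" that merely echo the topic, standing in
# for the fine-tuned language models the artist describes.
herzog = lambda prompt: f"Herzog muses on: {prompt}"
zizek = lambda prompt: f"Zizek replies to: {prompt}"

dialogue = converse(("Herzog", "Zizek"), (herzog, zizek),
                    "the ecstasy of truth", turns=4)
for speaker, line in dialogue:
    print(f"{speaker}: {line}")
```

In the real project the loop presumably never terminates and the replies feed a text-to-speech stage, but the essential point is the same: each utterance is generated from the previous one, which is why the dialogue keeps a local common thread without any global understanding.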
Still, this is far from pure gibberish, and what this artistic project shows is that, by combining generated text with credible synthetic voices, it is already possible to create deeply unsettling AI-driven experiences. Experiences that could well sow confusion, or even mislead certain audiences. That is a real danger right now, especially since the site proves that one or more AIs can already be used this way.