(RE)MAKING THE EARTH TURN
An 11-minute read
By Micha Barban Dangerfield

"Root Mean Square Error". It's not the clearest introduction to the subject. To the layperson, this improbable little combination of words is an impossible riddle. To the initiated, it is a digital prophecy. Put simply, the formula corresponds to a frequently used measure: a rule that makes it possible to quantify how far a user deviates from the trajectory predicted by the algorithms scattered like beacons through cyberspace. In other words, it measures how badly a recommendation algorithm (on YouTube, for example) gets it wrong. Anyone who designs such an algorithm will study this deviation in detail and will always try, as a good modern oracle should, to reduce it as far as possible - the ultimate aim being to adjust the algorithm until prediction and reality are joined together in an all-powerful equation.
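
The formula behind the name is, for the record, short. Its standard definition, writing $y_i$ for what a user actually did and $\hat{y}_i$ for what the algorithm predicted, over $n$ observations, is:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2} \]

The closer this number comes to zero, the more tightly prediction coincides with reality - hence the designers' obsession with shrinking it.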

This formula interests us here because it sums up, by itself, the way in which algorithms channel our attention - our way of seeing the world, might we even say our obsessions. Since our screens have become our windows onto the world, it is fair to say that algorithms are the window frame. And the question becomes: what type of window is on offer? A picture window, or an arrow slit?

When they predict our behaviours and our cognitive schemata, when they guide our browsing history, when they observe our digital reflexes, algorithms (going by the name of recommendations) are training themselves to divine and orient our desires, our opinions, our responses to the phenomena that surround us - to tailor our internet for us. All you need to do is click on the "explore" button on Instagram to get a sense (certainly crude and sometimes shameful) of what we are curious about, according to the algorithms. Equally, all you have to do is look at the recommendations bar that appears alongside your YouTube views to evaluate your preferences, sound out your musical tastes, locate your opinions. And what you need to know here is that these algorithms, built from scratch and self-improving, carry and respond to several biases. Chief among them: our own.

Down the rabbit hole

In a series of podcasts entitled "Rabbit Hole", The New York Times sought to analyse how anyone, via the internet, can be radicalised. The story told is that of Caleb Cain, a young misfit with a semi-dysfunctional upbringing, an adolescent isolated from his peers, introverted, prone to depression and hooked on YouTube - the emergency exit through which he sought to escape the world. Plunging into the vagaries of his viewing history (12,000 videos!), journalists Kevin Roose and Andy Mills retraced his story to its origins, the better to understand the conditions of his progressive radicalisation.

We are at the beginning of the Obama years and Caleb is spending most of his time sailing the high seas of YouTube, allowing himself to drift with the undercurrents of the recommendation algorithms. First there were speeches by the great gurus of the new atheism, such as Christopher Hitchens or Richard Dawkins, which seemed to correspond to his search for meaning. Then videos exalting the ideas of paleo-conservatives and conspiracy theorists like Alex Jones and Stefan Molyneux came his way - a conveyor belt of videos interrupted by parodies of the global hit "Let It Go" from Frozen. Go figure. At that moment in the story, Caleb recounts having experienced a sense of falling, as if plunging into the bottomless pit that is YouTube - and the passage in which he found himself was very narrow.

Beyond a simple examination of the content which made up Caleb's viewing history, it is interesting to observe the vertical tunnel down which he was tumbling, its walls apparently built from a panoply of equations. On this subject, Guillaume Chaslot, founder of Algo Transparency and a former employee of YouTube and Google, is implacable. Once responsible for developing recommendation algorithms at YouTube, Chaslot is a past master of the architecture of what are often called 'filter bubbles'. Fired by the video giant, he now calls for greater collective awareness and campaigns for more transparency. "The aim of YouTube from the outset is to maximise the time that viewers spend on the site." And how do you make that happen? Create algorithms which activate individuals' cognitive reward system and confirm their initial assumptions - in other words, which extend and prolong their biases or penchants until each person reaches an echo chamber tailored precisely to fit them. A loop. An infinite loop. At which point it seems indispensable to cast a critical eye on what we collectively consider radical, for no conception of the world can escape. No philosophical, ideological, political or moral position can save itself from this intellectual huis clos. Whether progressive, reactionary, admissible or intolerable, all of our convictions are becoming radicalised.
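
To make the mechanism concrete, here is a toy sketch of such a feedback loop - entirely illustrative, and in no way YouTube's actual system: items sit on a single made-up "ideological" axis, the recommender proposes items near the user's last clicks, and every click makes it more confident, so its window of proposals keeps narrowing.

```python
import random

# Toy feedback-loop sketch (illustrative assumption, not a real recommender):
# the system proposes items near the user's current position on a 0-100 axis,
# the user clicks whatever most confirms that position, and each rewarded
# click narrows the window of future proposals - the echo chamber tightens.

random.seed(42)
position = 50.0   # where the user starts on the axis
width = 30.0      # how widely the recommender is willing to propose

for step in range(8):
    # Propose five items drawn from the current comfort zone.
    proposals = [random.uniform(position - width, position + width)
                 for _ in range(5)]
    # The user clicks the most "confirming" proposal (closest to position)...
    click = min(proposals, key=lambda item: abs(item - position))
    # ...and the recommender, rewarded by the click, updates its model of the
    # user and shrinks its window for the next round.
    position = 0.5 * (position + click)
    width *= 0.7
    print(f"round {step}: proposing within ±{width:.1f} of {position:.1f}")
```

After a few rounds the window has collapsed to a sliver of the catalogue: the very narrow passage Caleb described.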

Dear Big Mother

These bubbles or cocoons are what the science fiction writer Alain Damasio calls Big Mother. Following in the footsteps of the Big Brother of George Orwell's 1984, Damasio describes the shift towards a reassuring, incubating system, a "techno-cocoon" woven especially by us, for us, and wrapped all around us. A safe and familiar womb which both thickens and thins - for that, indeed, is where the objective of the algorithm lies: in making our lives more streamlined. Fans of SF cinema, and particularly of The Matrix films, will appreciate the maternal allegory Damasio borrows. You only have to look at the way data and algorithms are represented in the Wachowski sisters' films to understand where they get their title: in Latin, a matrix is a womb. Viscous, liquid, immaterial and fugitive… Big Data is fluid. The movement of data, the circulation of information, is seen here as the organic ebb and flow of life: water, tears, fluid - above all, amniotic fluid.

In this context, we may also call to mind the image of Neo, the foetus-adult, unconscious and connected to the matrix from the comfort of a cavity filled with a viscous liquid that powerfully evokes the mother's amniotic sac. An opaque, living pocket that wraps around the body and fills the mind. In another scene, the "digital rain" falling down the surface of Neo's screen, a torrent of data commanded from the other side of the machine, reinforces once again the association with the fluids of the living. A similar analysis could be applied to the film Ghost in the Shell... In short, there is enough material to write whole books about the way in which Big Data liquifies on the screen and in the pages of science-fiction stories. But this digression risks leading us too far into the quicksands hidden beneath the matrix's power - a power whose existence is completely alien to us and insensible to our free will. Let's not get sucked in. Because if all the algorithms present in our lives create around us a kind of opaque virtual membrane, directing our gaze towards a vanishing point without us even realising it, they never form more than a reflection of what we already are. Like a mirror held up to us… which it is incumbent upon us to break.

During the writing of this article, with a view to reconciling myself with these dreadful algorithms, I plunged into Aurélie Jean's De l'autre côté de la machine (On the Other Side of the Machine), a pertinent recommendation by a friend. It is in response to the virulent criticisms that laypeople (myself included) often level at algorithms that this computational scientist attempts an impossible justification. Because they reflect the world as it is, good and bad, Aurélie Jean urges us not to blame algorithms, but rather to remind users and designers of their own responsibility. Her starting point is the following: algorithms are biased, because we are.

A world in a state of inertia, history at a halt

Let's take some examples. In the US in the 1970s, tests conducted on the first airbags proved that they represented a mortal danger to women and children. We might therefore ask why a device conceived with the aim of saving passengers could have failed to the point of threatening their survival. The answer is simple: because men designed it. Since the automotive engineering industry was dominated by men, the designers of the first airbags failed to take a variety of body sizes into account when building the prototype, making the rookie error of basing it solely on an average-sized male driver of 1.77 m. More broadly, this example shows us that designed systems - algorithms included - reproduce, confirm and sometimes even hypertrophy the biases which make up our reality. We can accuse them of every evil under the sun, but we have to remember one fundamental rule: if algorithms are sexist, racist, discriminatory and unjust, it is because our world is.

Just like those to which Caleb was exposed on YouTube, algorithms as a whole reproduce our cognitive and collective structures. The same goes for the facial recognition algorithms which discriminate against people of colour (in 2016, fewer than 6% of developers in the US were Black), and for the digital recruitment system developed by Amazon and scrapped in 2018 because it favoured men over women. We might also cite the "mutant" algorithm created to grade British secondary-school students in 2020, when exams were cancelled, whose results - and above all whose biases - deepened the social inequalities already at work in reality. Or think of the female robots distributed by the notorious company RealDoll, whose AI responds to the most sexist fantasies of a patriarchy that constantly seeks to reassert its power. All of these systems, of whatever sort, amplify the imperfections of the collective consciousness and the stupidities of history.

In opposition to the (illusory) idea that we collectively hold of technological progress, perceived as an uninterrupted movement towards what lies after and ahead, the algorithm as it is today carries the risk of plunging the world into inevitable paralysis. An inertia to be feared. Herein lies the myth upon which we base our apprehension of the internet: that the digital revolution will change who we are. Against it we must set the opposite idea: that the Internet reinforces who we are - our beliefs, our systems of adherence and apprehension. But all is not lost. Confronted with the gloomy prospects offered by algorithms, there are still some solutions.


Algorithms on trial - how do you plead?

How can we go about ending the paralysis imposed by algorithms? From Aurélie Jean, we learn that it is possible to make biases explicit in the machines: to force the algorithm to admit its errors (its implicit biases) and to repair them. Which is no small undertaking, given the complexity of some of them. We could, as Guillaume Chaslot suggests, push for their transparency, so that they do not become impenetrable black boxes. Above all, we cannot allow them to become ultra-powerful tools for increasingly digital forms of governance.
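
In practice, "making bias explicit" can start with something as simple as an audit that measures a model's error rate separately for each group it affects and flags disparities. The sketch below is a minimal illustration under assumed names, thresholds and toy data - it is not drawn from Aurélie Jean's book, nor from any production fairness tool.

```python
# A minimal bias-audit sketch: does a classifier err more often on one
# demographic group than another? Group labels, the 5% gap threshold and
# the toy data are illustrative assumptions.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(labels)

def audit_by_group(predictions, labels, groups, max_gap=0.05):
    """Per-group error rates, the largest gap between them, and a flag."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = error_rate([predictions[i] for i in idx],
                                  [labels[i] for i in idx])
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap, gap > max_gap

# Toy usage: a model that errs more often on group "B".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
truth  = [1, 0, 1, 0, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates, gap, flagged = audit_by_group(preds, truth, groups)
print(rates, f"gap={gap:.2f}", "disparity flagged:", flagged)
```

On this toy data the model errs twice as often on group "B" as on group "A", and the audit flags it: the algorithm has been forced to admit its error.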

A little more than a year ago, Alexandria Ocasio-Cortez, elected to the House of Representatives, publicly expressed her concern over the way the deployment of facial recognition programmes, widely used in the public sphere, was already going awry. With the complicity of Facebook, which makes the profiles published on its various platforms available to governments and businesses, these systems collect every trace we leave behind. Beyond the ethical questions raised by such a technology, AOC highlighted the dangers that lurk in its cracks. For where Black people are concerned, facial recognition tools still have an unforgivable margin of error and confusion, endangering populations who stand to pay an extremely high price.

To avoid such a catastrophe, these biases must be held to account in the courtroom of public and expert opinion; they must be brought into the open so that we can judge them (bearing in mind that today the majority of software designers, citing concerns about competition, keep their algorithms secret). By reinstating a form of accountability and imposing a duty of transparency, we can once more, all together, determine the destiny of our digital psyches and strangle the executive power that has been implicitly bequeathed to algorithms. They would then serve us by exposing our real biases, confronting them and correcting them where necessary. And we would once more enjoy the real freedom to keep burrowing into our rabbit holes - or to return to the surface and change everything.

Text translated into English by Sara & Emma Bielecki.
