One of these pictures of me is real and the other is AI – but which is which?

Abi vs Adobe Firefly
Which version of our writer Abigail is real? - Amit Lennon

The two images don’t look exactly alike, but there are some uncanny similarities around the eyes, nose and mouth. There I am, on the right: a photograph of me taken recently in a studio in Brixton, south London. I perch in front of a green background, my shirt slightly creased. And there I am again, on the left.

Except that person isn’t really me.

It is an odd facsimile in a white T-shirt. She has preternaturally smooth skin, no imperfections, and long, glossy hair. This portrait was generated by artificial intelligence (AI), using just three data points from my face and a brief, eight-word prompt: ‘Woman in her 20s of white British descent.’

My real and bot-generated portraits are the work of the London-based photographer Amit Lennon, who has created a similar set of images for his latest project, Artificial Intelligence Portrayal. I doubt many people seeing it in isolation would be able to tell that my AI portrait isn’t real – which, Lennon explains, is exactly the point.

The motivation for the project was to ‘see behind the curtain of AI’, he explains. ‘The goal is to make people question what they are looking at and what ingredients went into “making the cake”. I want them to concentrate less on the mechanics of how the work was made and more on the result, how it makes you feel.’

Lennon photographed his subjects in front of coloured backgrounds and then asked them for a brief description of themselves including their age, ethnicity and profession, such as, ‘Man in his 20s; actor; from London; English, Israeli and Irish descent.’

Afterwards he took fragments of their real photographs as a starting point – the distance between the eyes, the distance from the point between the eyes to the tip of the nose, and the width of the mouth – and then input their description into a freely available AI generation tool, Adobe Firefly, to create a digital image, displayed next to the original as a pair. Like other tools on the market, Firefly has been trained to translate text prompts into images by learning from a vast set of stock imagery. And it is remarkably easy to use – as simple as typing one sentence into a text box and clicking ‘generate’.

This is an art project, not a science experiment, Lennon points out. ‘It’s like any creative process, there’s a lot that I’ve discarded.’ Only the best – or most uncanny – portraits were selected. Many of the pictures generated proved ‘fairly random’.

The results, Lennon explains, also reveal AI’s social stereotyping and encourage viewers to consider ‘the biases, the errors and assumptions that are being fed back’. A man described as ‘an actor from Clydebank of Scottish, Irish and Singapore-Chinese descent’, for example, has been given a bright red beard. In fact, in some of the images the AI tool generated of me, I look like a man in a suit, with coiffed hair – presumably the program’s idea of what a ‘journalist’ looks like. It also gives nearly every man a beard at first attempt, says Lennon, and ‘loves hats’.

In other portraits, it simply smoothed out any signs of individuality, giving women baby-smooth skin, pruned eyebrows and perfectly glossy hair. ‘It beautifies people a lot,’ Lennon says. It seems there is something of a gendered double standard here. This is the other problem with AI imagery, says Lennon: ‘It feeds itself from the data it’s made,’ resulting in ‘retouched and Instagram-filtered’ versions of people. Although not always.

One of Lennon’s subjects, Lola Choo Antopolski, 19, a student at the University of Cambridge, says her portrait looks like her ‘ugly cousin’: ‘The AI representation is like me merged with many other people. It’s really weird to look into someone’s eyes and know that it’s not real,’ she says. ‘It’s actually terrifying that we can no longer tell what’s human and what’s not.’

Manipulated images regularly make headlines. The reaction to the relatively low-stakes editing of the Princess of Wales’s recent family portrait is just one illustration of how uncomfortable people feel about the erosion of ‘photographic evidence’. More sinister was the image of the Pope wearing a white puffer coat that went viral last year, before it was revealed to be AI-generated. When an image has been faked, we feel cheated. How much can we trust what we see in photographs today?

‘As the AI gets better, the visual clues will become less easy to detect,’ Lennon says. Of his own project he admits, ‘I’m a professional photographer, and I would struggle to see which is an artificial picture.’

For years, software has been available on smartphones to let us doctor our own photographs, but now programs such as Midjourney, DALL-E 3 and Stable Diffusion allow anybody to generate a fantastical AI image in seconds. Thanks to the smartphone, it is estimated that more than 1.8 trillion photographs are taken every year.

Now, we might drown in them. In another leap forward, earlier this year OpenAI – the company behind the chatbot ChatGPT – announced Sora, a remarkably realistic (and accessible even to non-tech spods) AI video generator.

AI images posted by ‘virtual influencers’ now proliferate on social media too. One such virtual character, @lilmiquela on Instagram – ‘a 19-year-old robot living in LA’ by the name of Miquela – has 2.6 million followers watching her every fake move: hanging out with her fake friends, going on fake holiday or getting her fake nails done. Convincing AI images can now even win prestigious photography prizes, as in the case of the German artist Boris Eldagsen, who won a Sony World Photography Award.

The copyright issues are another minefield. As Nick Dunmur, head of business and legal at the Association of Photographers, explains, some of the image generators currently available were trained on ‘uncurated scrapes of the internet, including everybody’s creative work, without permission or payment to those whose work has been caught up in that. AI technology should support the creative process and not supplant the creator… there are huge problems with the technology in its current form.’

As Lennon explains, ‘The attraction for a lot of companies is that AI is free – copyright-free, model-free – you can make an image and you don’t have to pay a penny. But for portraiture, which is much more specific, everyone’s unique, it’s harder; so this project explores how AI treats uniqueness, and how it treats individuality.’

For Lennon, these are all debates he hopes his project sparks. Does he think AI can replace a photographer behind a lens? ‘AI is not a photographer, there is no lens or camera,’ says Lennon. ‘It’s up to you to decide what you prefer.’

All images courtesy of Amit Lennon
