Unless you’ve been under a rock (or bobbing about in the Arctic) for the last week, you’ve probably noticed that a controversial debate has been reignited in the photo community about the relationship between photography and reality, writes Jon Adams.
Artist/photographer Boris Eldagsen won the Creative Open category of the Sony World Photography Awards with Pseudomnesia: The Electrician – an image that looked ‘real’ but turned out to be entirely computer-generated, made using AI. Boris subsequently turned down the award on the grounds that his entry was not photography, and made himself famous in the process – an excellent example of self-marketing!
The ongoing debate
Now for journos in the photo game, this debate about whether photography is – or perhaps should be – a reflection of the real has been going on since the art form was invented, and the last major expedition into this territory began in the 90s with the rise of image-editing software like Photoshop. We all know that notions like “the camera never lies” were only ever rhetorical phrases that rarely stood up to close scrutiny, but discussions of the relationship between photography and reality always take the view that photography is the only variable here, because the reality part of the equation is assumed to be fixed.
Although this is intuitive (we know what reality is … don’t we?), I’m not so sure it’s the right way of looking at things. What’s real is out there, doing its thing, but it’s our perception of it that we’re referring to, rather than what’s real in actuality.
I wandered out to shoot a real view of a clump of forget-me-nots, and using the perspective and magnification of the human eye (a lens of around 50mm focal length), this is the shot I got…
It’s not remotely appealing photographically, but pop on a macro lens and you unlock a different way of perceiving the world. In effect, it changes our perception of the real, and thus changes reality itself! Set your macro lens to 1:1 (where the subject’s image on the sensor is the same size as the subject itself), and you get something very different – a new version of reality that you simply can’t perceive unaided. It is still real, of course, but this reality is only possible with technological assistance.
Pushing the boundaries
So let’s take it a bit further. The flower was wobbling in the breeze outside, so I took it indoors to make the shooting conditions more controllable. I tried a few different backgrounds (all of which were real pieces of paper) and tried a few different aperture values, so I could get the defocused areas the way I wanted them. This gave me a better shot…
I settled on the yellow background, as it complemented the stamen of my ‘lead’ flower, but I wanted a bit more texture and a little glisten on the petals, so I got out a real water spray and squirted real water droplets onto the subject to make it more appealing. I also brought in an LED light to add some modelling to the subject and make the wet areas ‘pop’. These adjustments gave me this…
So now I have my preferred real shot of a real subject, but it’s quite a long way from the real shot of the same subject that I started with. I don’t think anyone would argue that this wasn’t a real photograph, but in creating it, you might consider that I’ve manipulated reality to such an extent that the result is not a genuine reflection of what’s actually real (even though …err… it is!).
And to maintain the purist flow, I haven’t even used the array of adjustment options in the RAW converter yet. If I do that now, and make a few tweaks to the contrast and colour, and add a little vignetting, then my final real photograph looks like this…
I’m quite happy with it, in the sense that it accomplishes what I set out to do, but it is a construction that I’ve assembled using technology (albeit with a natural ‘prop’). It looks the way it does because of my preferences, my technical know-how and the experience gained from seeing lots and lots of similarly styled images. Put that way, my approach sounds awkwardly similar to what an AI app does!
Owning the decisions
But there’s a difference, in that I’ve controlled the process, made lots of decisions and have a sense of ownership over what I’ve created. In other words, I’ve put in the time and the effort, and although I’m not the ‘author’ of the forget-me-not, I can safely say that I’m the author of this self-commissioned picture that features it. Had I commissioned an AI app to make something similar, the journey – and the rewards – would have stopped dead at the instructions.
What you choose to do with reality is entirely up to you as a creative photographer, as your personal, subjective version of what’s ‘out there’ is what makes your pictures yours. But if you tell an AI to produce it (from its references and its database of ‘experience’), you’ll get – certainly in the foreseeable future – a strikingly similar result that’s indistinguishable from an image you could create yourself. Right now, I can give Microsoft’s Bing AI the prompt “forget-me-not on yellow” and below is what it produced, based on those five words…
The viewer, no matter how discerning, may not spot the difference between these AI-generated forget-me-nots and the real ones in all the other shots. There is a difference between them, but if you ignore the process, both could be perceived as equally real. And remember, what we call ‘reality’ tends to be perception-based, rather than actual.
I’m reminded of a scene in Blade Runner 2049 where Joe looks at Deckard’s dog lapping a drink from the floor. “Is it real?” he asks. Deckard shrugs and says, “I don’t know. Ask him!”