Computer Generated Images — On Computational Photography and “AI” Assisted Tools
For a few years now, various technology publications and companies have talked about “computational photography”—that is, supplementing traditional image processing with software processing, as on smartphones—as the next leap in photography at large. I was and remain skeptical. Phones are fine cameras, particularly for normal, daytime photography. But to me, even as someone who wouldn’t really consider myself a “pixel peeper,” iPhone photos in general don’t impress on their technical merits. Under less ideal conditions, or even just under higher scrutiny than viewing them compressed on the device they were taken with, I feel the images don’t hold up. That’s not to say there’s no use for them—like I said, I think they’re great for most normal situations. Even computational photography has its benefits, such as the generally-good-enough portrait mode that’s now ubiquitous. Just consider me unconvinced.
That’s not to say software doesn’t have its place in photography. In fact, software processing already plays a large part in the creation of your final image, whether you’re aware of it or not. I was an Adobe Lightroom user for years, but found that Capture One offered better results for my Fujifilm files. This is largely down to the RAW converter: just as a film photo differs depending on what developer and process you used, so too will a digital file differ depending on how the software you’re using interprets the data.
As technology improves, so do our photos. As scanners get better, we’re able to pull more and more detail out of film photos—data that’s been there all along. I feel we’re starting to see the same for digital. As processing advances, we can go back to old files and pull out new data that’s been lying dormant.
It’s rare for me to go back to old files, just as it’s rare for me to re-read books. But I have from time to time, especially with images where I felt I didn’t quite get the results I’d wanted. Here’s a shot I took back in 2020:
Even now, there are times I shoot color where I’m not quite sure what to do with it, or which direction I should push things in. Here, I opted for high contrast and warmth, both of which I way overdid. But the image is still interesting: this man, standing on the bridge, isolated by the last light of the Sun. During a down period, I went back to this image, near the top of my list of “not-quites,” to have another go at it while I wasn’t producing new work. Here’s how that turned out:
Much more natural colors, a softer hand overall. It’s still dark, still silhouetted, but you get the outline of people on the riverwalk, the freshly struck streetlamps. It feels a lot better to me. This wasn’t even really a case of improved processing, though it did benefit a bit from that in the softer highlights. Mostly though, it was just fresh eyes.
Make no mistake, though: the processing has certainly improved. I like to espouse what I call “no-fear photography”: not worrying about imperfections or flaws, and focusing instead on the image and the emotion it conveys. At points in my photographic journey, I’ve even leaned into blurriness, and certainly into grain and grit. There are always limitations baked into your processes, your gear, or some other part of the imaging chain. For example, as a Fujifilm shooter, I shoot APS-C sensors, which are smaller than a 35mm film frame. This can mean worse image quality in low light, as there’s less area to collect light, and with that, you occasionally get noise.
While I love the look of grain, I’ve yet to find much personal joy in color noise. It’s far from the end of the world, but for now, I’d prefer not to have it. So I’ve followed the development and release of various de-noising programs, even from early on in my photography. A few years ago, I heard that DxO, a company best known for its technical tests of various sensors, had released a new program offering best-in-class noise reduction. That alone wasn’t enough to make me switch, but it put them on my radar. A few days ago, I heard of a new program from DxO, called DxO PureRAW, which promises machine-learning-aided RAW conversion into DNG files (a RAW format created by Adobe). Even more, they offer specific camera and lens profiles which aim to correct imperfections precisely. With a free trial on offer, I decided to give it a whirl. Take a look at a before-and-after comparison with a high-ISO RAW file I return to for testing:
Kind of wild, right? It’s not just the noise: the sharpness and the detail resolved really are great. This isn’t the first time I’ve been impressed with results from an app like this, though; I’ve also used some of the Topaz Labs software, which similarly uses ML processes to denoise, upscale, or sharpen images. I don’t know for certain, but I believe these tools use software to interpolate where data is lacking. Basically, the software finds the color noise and the soft edges, takes an educated guess about how things should look, and presents that to you. The software updates; the models and algorithms update.
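For the curious, that “educated guess” idea can be shown with a toy sketch. This is emphatically not what DxO or Topaz actually do—their models are learned from large datasets—but a simple median filter demonstrates the basic move of inferring a noisy pixel’s value from its neighbors. The function name and test patch here are my own invention for illustration:

```python
# Toy illustration of "guessing from neighbors" (not any vendor's real method):
# replace each pixel with the median of its 3x3 neighborhood, so a lone noisy
# speckle gets overruled by the consensus of the pixels around it.
import numpy as np

def median_denoise(channel: np.ndarray) -> np.ndarray:
    """Replace each interior pixel with the median of its 3x3 neighborhood."""
    h, w = channel.shape
    out = channel.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(channel[y - 1:y + 2, x - 1:x + 2])
    return out

# A flat gray patch with a single "hot" pixel, like a speck of color noise:
patch = np.full((5, 5), 128.0)
patch[2, 2] = 255.0  # the noisy speckle
clean = median_denoise(patch)
print(clean[2, 2])  # prints 128.0 -- the speckle is replaced by its neighbors' consensus
```

Real ML denoisers work the same way in spirit, but instead of a fixed median rule, they’ve learned from millions of images what edges, skin, and sky “should” look like, which is why they can also sharpen and upscale rather than merely smooth.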
Here’s another comparison from DxO, of a recent image I took in daylight, zoomed into 100%:
It’s just that much sharper, that much cleaner. I love the way the chromatic aberration clears up. Mind you, I am already happy with the standard processing from Fujifilm files in Capture One. Here’s another image I took, processed entirely normally, zoomed into 100%:
Pretty damn sharp for a nearly five-year-old APS-C sensor! And yet, this will only get better. Honestly, sharpness (independent from resolution) is not something I've ever been worried about on the Fujifilm system. Here's a 100% crop of a portrait taken with the even older Fujifilm X-H1 and the 90mm f/2 lens:
I don’t even like shooting in aperture priority mode, so why doesn’t this form of the tool taking over bother me? I’m not sure; it’s still a computer making a guess about my intent and the final image, even more so than in-camera sharpening. I guess it feels like an optional step in the chain, something I can use when I feel it would benefit me and discard when it wouldn’t. Most of all, it feels like it supplements my own abilities, working with the data that was already there.
Even now, Fujifilm’s smaller APS-C sensors are sometimes met with skepticism: why would anyone choose to shoot smaller than full frame? But we’re well through the looking glass on technical concerns. I can already take my sufficiently large 4k-by-6k image and quadruple its size with no loss in quality. I can remove grain effortlessly. The RAW files I’m making now might prove almost endlessly useful. For now, I’ll continue to use these tools only rarely, as I rarely feel I need them. They can sit and wait, ready for if and when I want them.
If you liked this post, please consider subscribing to my newsletter, Monochromatic Aberration, to get these delivered right to your inbox, along with a series of photo wallpapers exclusive to subscribers. You can do so at the link above or using the button near the bottom of your screen. I write about photography, writing, and trying to live an examined life. We’ve currently got 84 members on the site; I’m trying to hit 100 by mid-June. Your support means a lot.