Imagining the future of AI photography

tech

Portrait mode and related features (e.g. portrait lighting) are halting first steps toward a true AI-augmented camera. Here’s a fanciful look at our smartphone future:


It’s April 4, 2027, and Julie is making good progress. For the seventh time that day, she clambers up the squeaky attic ladder and crouch-steps her way to a tall pile of cardboard boxes. She squints at the next box in the stack, just making out her mother’s scrawl: “FAMILY PHOTOS.” It slides easily across the dusty floorboards, and Julie descends the ladder, flopping the box from step to step above her.

With a heavy sigh, she sets the box down on a dining room chair. Her kitchen scissors make quick work of the duct tape; Julie glances inside—and winces. No neat stacks, no carefully curated photo albums. Instead, the box is full to the brim with loose snapshots, unlabeled and unsorted. Just a few years back, organizing these photos would have taken an entire weekend.

Fortunately for Julie, times have changed. She works quickly, plucking photos from the box and tossing them into a messy grid on the table. Within a few minutes, she has strewn hundreds of memories across the oak panels. They’re arranged in no particular order; Julie spots a baby photo of her grandmother from the 40s, adjacent to a faded Kodak print of Aunt Susie driving in the mid-70s. The very next snapshot in the row is a Polaroid from Christmas 1991; her little brother triumphantly lifts a brand-new video game console package above his head.

With a nostalgic smile, Julie whips out her smartphone and opens the photo enhancement app that makes this spring cleaning possible. The real work begins; she waves the device over the table, lazily panning its viewfinder across the rows and columns of snapshots.

As she does, the camera does its magic. Each individual photograph is extracted, cropped, and saved to its own file. The process is nearly instant; after just a minute or two of haphazard scanning, the app beeps and confirms that it’s captured all the data it needs. Julie sweeps the impromptu collage into a waiting trash can.
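The extraction step imagined here can already be sketched with classical computer vision: treat each snapshot as a bright connected region against the darker tabletop, label the regions, and crop their bounding boxes. Here is a toy, pure-Python sketch of that idea (a real app would use a vision library plus perspective correction; the function name and threshold are illustrative):

```python
# Toy sketch of the photo-extraction step: find each bright region
# (a snapshot lying on a darker table) as a connected component and
# report its bounding box for cropping. Illustrative only.

def extract_photo_boxes(image, threshold=128):
    """Return bounding boxes (top, left, bottom, right) of bright regions.

    `image` is a list of rows of grayscale pixel values (0-255).
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this component, tracking its extent.
                stack = [(r, c)]
                seen[r][c] = True
                top = bottom = r
                left = right = c
                while stack:
                    y, x = stack.pop()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

Each box can then be cropped to its own file; the hard parts a shipping app would add are glare handling, rotated or overlapping prints, and dewarping.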

It’s almost hard to believe how much she trusts the phone to capture these photos. Once, she would have been horrified to throw away such precious memories. Now, in a single day, she has filled a half-dozen garbage bags with old snapshots.

As she breaks down the empty cardboard box, the phone app (and the cloud service that powers it) does its own tidying up. First, it leverages machine learning to automatically recognize every object in every photo: that’s a baby beneath a maple tree in late fall. That’s a 1976 AMC Hornet. That’s a Sega Genesis.

With that context in hand, the service can clean up the photos. First, the easy stuff: wiping away physical scratches. Removing decades’ worth of discoloration and fade. Filling in missing, damaged details using robust healing algorithms. The AMC logo on the Hornet’s hood, obliterated by a coffee stain on the photo, is now recreated from a library of vintage car logos. A gouge in the leaves gets repaired too; maple leaves have a distinctive shape, and the app generates a few more to fill the hole. The Sega Genesis, motion-blurred in the boy’s excitement, is sharpened using actual product photography.
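The "healing" imagined above is, at its core, inpainting: filling a damaged region from its surroundings. Production tools would use learned generative models that synthesize plausible detail (logos, leaves), but a minimal diffusion-style sketch conveys the basic idea (function name and parameters are illustrative, not any real app's API):

```python
# Crude stand-in for inpainting: repeatedly replace each damaged pixel
# with the average of its neighbors, diffusing surrounding values into
# the hole. An ML-based healer would hallucinate plausible detail
# (a car logo, maple leaves) rather than merely smoothing.

def inpaint(image, mask, iterations=50):
    """Fill masked pixels by averaging their neighbors.

    `image`: list of rows of grayscale values; `mask`: True where damaged.
    """
    rows, cols = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for r in range(rows):
            for c in range(cols):
                if mask[r][c]:
                    vals = [img[r + dr][c + dc]
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < rows and 0 <= c + dc < cols]
                    nxt[r][c] = sum(vals) / len(vals)
        img = nxt
    return img
```

Scratch removal amounts to building the mask (detecting the scratch) and running a far more sophisticated version of this fill.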

The restoration isn’t limited to inanimate objects, though. The app knows that it’s Aunt Susie who’s sitting behind the wheel of the Hornet, even though she’s obscured by glare on the windshield and some lens flare. Using Susie’s existing visual profile, the tool sharpens her face and glasses and fills in the blown-out highlights with data from other images.

The service automatically assigns metadata to each image, too. Every calendar, clock, Christmas tree, or party hat in the background helps narrow down the image date. Less obvious visual clues help as well; the app might recognize that ’76 Hornet from previous scans and assign the photo of Susie to the same era. Even Susie’s tight perm could help date the photo; given enough data, the app might know exactly when she adopted (and abandoned) that distinctive look. Similar cues could pin down each photo’s location.

As Julie sets the last of the trash bags by the curb, she feels a twinge of bittersweet guilt. The garbage truck is on its way; soon, the original photos will be gone for good.

But based on experience, she’s confident enough in the restoration to let them go. The digitized shots are far more portable, more flexible, and more interesting than the paper copies ever were. She can make edits that would otherwise have been impossible—like tweaking the exposure level or even the location and brightness of the original scene’s light sources. Or altering the camera’s focal length, decades after the shot was taken; the app’s AI has used the source image to extrapolate exactly where each object sits in three-dimensional space.

Finally, Julie can even perform that “Zoom… enhance” magic promised by science fiction movies for decades. As she steps back into the kitchen, she grabs her tablet and plops down at the counter. Time to take a closer look at Aunt Susie’s unbelievable 70s curls. ■