A few weeks ago, I bought an iPhone X. I love its face-unlock authentication and its gorgeously tall screen, but its dual-lens camera is easily my favorite feature. The iPhone X is the best camera I’ve ever owned, and it’s not even close. I’ve never had an SLR, and my last point-and-shoot was from the early days of digital photography.
In fact, the iPhone X camera is so good (or, rather, “good enough”) that it’s hard to imagine I’ll ever consider buying a standalone camera again.
That’s not to say there isn’t plenty of room for improvement. In particular, I find Portrait mode (the fake bokeh blur effect) alternately intoxicating and maddening. In many cases, it does a great job isolating my photo’s foreground subject. But when it fails, it fails hard. As many others have pointed out, hair poses a serious challenge to its algorithm, as do non-contiguous background areas (e.g. a piece of the background visible through the crook of your subject’s arm) and ambiguous edges.
Could Apple fix these sorts of hiccups in software? This is my first dance with Portrait mode, so I can’t say whether the feature has improved since its first release last year. But I have at least some hope that edge detection will improve and fill in some of the gaps (pun intended).
Even if the algorithms improve, I’d like to see some way to touch up these problematic Portrait mode depth maps. There are already several interesting apps that let me see the underlying depth channel. Slør paints it as a black-and-white alpha channel; Focos lets me spin the depth layers around like I’m in some sort of sci-fi flick (“Rotate. Zoom. Enhance.”).
But neither of those apps—nor any others that I’ve heard of—lets you actually fix the depth-sensing errors that creep into Portrait mode photos. Take those non-contiguous background areas I mentioned earlier. Given a decent channel-editing interface, it would be relatively simple to paint a foreground area back into the background, where it belongs.
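To make the idea concrete, here’s a minimal sketch of what that kind of depth repair could look like under the hood, treating the image and its depth channel as plain arrays. Everything here is hypothetical—the function names, the toy box blur, and the depth convention (higher values = closer to the camera) are my own assumptions, not anything Apple or these apps actually expose:

```python
import numpy as np

def box_blur(img, radius=1):
    """Tiny box blur built from np.roll shifts -- a crude stand-in
    for the real lens-blur kernel a camera app would apply."""
    acc = np.zeros_like(img, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += 1
    return acc / count

def repair_and_flatten(image, depth, mask, background_depth, focal_depth, tol=0.2):
    """Paint a mislabeled region (`mask`) back to the background depth,
    then bake blur into every pixel far from the focal plane.
    Returns the flattened image and the corrected depth map."""
    fixed = depth.copy()
    fixed[mask] = background_depth  # the "paint it back" step
    blurred = box_blur(image)
    out_of_focus = np.abs(fixed - focal_depth) > tol
    flat = np.where(out_of_focus, blurred, image.astype(float))
    return flat, fixed
```

With something like this, a patch of background wrongly tagged as foreground (say, visible through the crook of an arm) gets its depth corrected first, so the baked-in blur lands where it should.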
It’s possible that Apple’s developer-facing APIs won’t allow depth changes to be written back into a fully editable Portrait mode photo. If so, that’s a shame and ought to be corrected. In the meantime, though, I’d love to see an app handle depth edits “in-house”, then export a flattened photo (with the proper blur baked in) back to the camera roll.
Hopefully that sort of functionality arrives soon. Portrait mode is a blast, but it’s hard to feel too enthusiastic when it produces such glaring flaws. ■