Trying to wrap my head around the new HDR features in P/Shop while watching a tutorial from Photoshop Cafe, I thought maybe this would be useful.
When it comes to images, video, and displays, all the hype makes it even more confusing, so I want to cut through the clutter… The name itself is somewhat misleading. The definition of Dynamic Range [per Wikipedia] is: “the ratio between the largest and smallest values that a certain quantity can assume”. But since pure black is the absolute darkest, and pure white the absolute brightest, the distance or difference between them cannot change – the range between black & white is the same in an 8-bit black & white GIF and a 32-bit HDR color image in Photoshop. What HDR really means in this sense is the total number of colors & brightness levels between those extremes, either in the image or video itself, or in the picture we see on a display panel.
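To put a number on that Wikipedia definition: dynamic range is usually quoted as a contrast ratio, or in stops [each stop is a doubling of light]. A quick sketch in Python – the panel figures here are my own made-up-but-plausible assumptions, not any particular display’s spec:

```python
import math

# Hypothetical panel figures -- assumptions for illustration only
peak_white_nits = 1000.0   # brightest white the panel can show
black_level_nits = 0.05    # darkest black it can show

contrast_ratio = peak_white_nits / black_level_nits   # the "ratio" in the definition
stops = math.log2(contrast_ratio)                     # each stop doubles the light

print(f"Contrast ratio: {contrast_ratio:.0f}:1")      # 20000:1
print(f"Dynamic range: {stops:.1f} stops")            # ~14.3 stops
```

The point being that it’s the ratio, not the absolute values, that “dynamic range” measures – lower the black level and the range grows even if peak brightness stays put.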
Defining what HDR really means is trickiest with display panels, from watches to cell phones to PC monitors to HDTVs. Start with the 2 extremes, black & white: only a relative few panels, using certain technologies like OLED, can approach actual black, while the spec calls for an extremely high brightness level to approach true white [a brightness level you couldn’t stand on a PC monitor]. The further apart those 2 extremes, the more colors & brightness levels are possible between them – the closer together, the fewer colors you’ll see. Add in the display characteristics of any given panel, which is usually biased towards one color or another, and saying YMMV is a big understatement. With a PC or laptop, HDR is mainly good for entertainment, since any color calibration generally goes out the window as soon as HDR is turned on. You’ll see more colors & brightness levels, so potentially more detail, but whether they’re the correct colors is another matter entirely. Sadly, this is where you can add: “But wait, there’s more”. Since people expect to see a difference with HDR, and many don’t have the patience or inclination to look for often subtle increases in detail, artificially boosting or enhancing parts of the image, e.g. shadows, is unfortunately common.
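The “more colors & brightness levels” part largely comes down to bit depth. Nothing display-specific here, just the arithmetic:

```python
# Levels per channel and total colors at common bit depths
for bits in (8, 10, 12):
    levels = 2 ** bits      # shades per color channel
    colors = levels ** 3    # R x G x B combinations
    print(f"{bits}-bit: {levels} levels/channel, {colors:,} total colors")
```

8-bit gets you ~16.8 million colors; 10-bit [common for HDR video] jumps to over a billion, which is what lets a panel spread shades across a wider brightness range without visible banding.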
HDR video adds a complication… video files tend to be very large, and 4K files are larger still than regular or full HD at 1080p. No one wants to stream 2 copies of the same film, one HDR and one not, so services stream the regular version alongside the extra data for HDR. That extra data can come in a number of formats, so if you want to watch a movie that uses Dolby’s format for HDR [Dolby Vision], you need a display [TV] that’s compatible with Dolby Vision.
When cameras are involved HDR is easier to nail down, though it can still be tricky. A camera cannot currently capture everything that your eyes see [though advancements are coming] – it can only handle a limited range of dark to light. In a photo studio, on a photo shoot, or on a movie set, you have all those lights so that everything fits into the range the camera is capable of. Real world, without the lights, the camera’s ideally set for the lightness range of the most important part of the photo, and there’s just not much data captured for anything outside that range. A classic example is a room with daylight coming through a window – if you shoot facing the window, you’ll either capture the detail seen through the window, with everything else dark, or you expose for everything else in the room and the window’s blown out, maybe completely white. The current solution is to take 2 photos, one exposed for the room & one for the window.
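That merge step can be sketched in a few lines. This is a toy version, assuming each photo is just a list of pixel brightness values from 0–255, with crushed shadows and blown highlights carrying no usable data – real software works on raw sensor data, not anything this simple:

```python
def merge_exposures(dark_shot, bright_shot, low=10, high=245):
    """Toy HDR merge: for each pixel, prefer whichever exposure
    captured usable data (not crushed to black or blown to white)."""
    merged = []
    for d, b in zip(dark_shot, bright_shot):
        if low <= d <= high:            # dark exposure held the highlights
            merged.append(d)
        elif low <= b <= high:          # bright exposure held the shadows
            merged.append(b)
        else:
            merged.append((d + b) / 2)  # neither is clean; average
    return merged

# Window pixels blown out (255) in the room shot, fine in the darker shot
window_shot = [0, 5, 180, 200]      # exposed for the window
room_shot   = [40, 120, 255, 255]   # exposed for the room
print(merge_exposures(window_shot, room_shot))  # → [40, 120, 180, 200]
```

Each pixel in the result came from whichever shot actually recorded detail there – which is the whole point of shooting the scene twice.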
Back on your PC or laptop you can open both photos in whatever image editing app, cut out the properly exposed window & paste it into the photo you exposed for the room. That’s not HDR. The HDR way of doing it is to merge both photos into one large file combining the data from each. At that point the merged photo may not show much [if any] difference – just because the data is there doesn’t mean anything by itself, since it’s up to you to adjust the lightness & darkness of different parts of the photo. IOW, all merging the 2 photos did was give you the data you need to edit the photo properly. However, you can now go further… using Photoshop [I’m not sure if any other software offers this option] you can use a mode tailored for HDR monitors that shows you the amount of detail and the light-to-dark range your HDR monitor can display. That gives you a bit of extra accuracy as you adjust &/or edit the photo while working with the extra data in an HDR image, which can then be sort of squeezed into a regular, non-HDR photo when you’re done. Again, the idea is to have the extra data available when you edit, but once you’re done, any data that’s not visible has no real use or purpose, and discarding it means your photo can be displayed anywhere any other photo can be shown – no special displays or anything needed.
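That “squeezing” step has a name: tone mapping. Here’s a minimal sketch using the classic Reinhard operator [one well-known global method among many – Photoshop’s own approach will differ], which compresses unbounded scene brightness into the 0–1 range a regular photo can hold:

```python
def reinhard(luminance):
    """Classic global Reinhard tone map: L / (1 + L).
    Any non-negative brightness gets squeezed into [0, 1)."""
    return [L / (1.0 + L) for L in luminance]

# Scene-referred values: deep shadow, mid-gray, and a window ~50x brighter
hdr_pixels = [0.02, 0.18, 1.0, 9.0, 50.0]
sdr_pixels = reinhard(hdr_pixels)
print([round(v, 3) for v in sdr_pixels])  # → [0.02, 0.153, 0.5, 0.9, 0.98]
```

Notice the shadows barely move while the very bright window gets compressed hard – that’s the trade-off every tone mapper makes when fitting HDR data into a normal photo.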
There aren’t many file formats available to store HDR images, and not a lot of software that can read them. Software is still evolving, though, and doing more than just combining images & outputting a standard photo is becoming more common. Fortunately the more garish FX that used to be automatic with HDR are starting to fade away.