Photoshop's New Curves

13 March 2010

As someone who works primarily in 16bit mode for my editing, I've become increasingly frustrated with the Curves dialog's anachronistic dedication to an 8bit way of life. It's time for an update, and I'm going to take the next few paragraphs to explain why as well as what I'd like to see done in CS5.

So what's wrong with the current setup? Because the dialog is built around an 8bpc interface (to be fair, the backend operates at the mode's full precision), we run into a few problems:

  1. There's no way to respect middle gray. Adobe implemented '16bit' editing at 15bpc specifically to allow a true middle gray - but it's almost impossible to maintain when applying a curve.

  2. You can't change the size of the dialog window. No zooming in to fine-tune your curve, no blowing it up for projection, display, or teaching purposes, etc. A PITA when presenting, IMO.

  3. You can't place points closer than ~5 mapping values apart, which remands fine control of values to multiple curve layers blended through masks - even though your dataset may allow for, and benefit from, greater resolution.


What would be better? Simply mapping the backend directly to the interface. Start with the end values, and instead of always being 0 and 255, they become 0 and (1.0 * MODE_MAXVALUE). For 16bpc RGB processing this would mean we would have values from 0 to 32768 (with a pinnable 50% gray!); for 32bpc 0 to 1 (pinnable 0.5); and just as now 8bpc would be 0 to 255. CMYK and LAB would also get the option to display values of 0 to 100 and -1.0 to 1.0 as appropriate. Everything between the end values is of course interpolated to the display size.
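The endpoint scheme above is simple enough to sketch. This is purely illustrative - the mode names, `MODE_MAXVALUE` table, and helper function are my own invention, not anything in Photoshop's API:

```python
# Hypothetical sketch of mode-aware curve endpoints. The mode labels
# and this helper are illustrative assumptions, not Photoshop's API.
MODE_MAXVALUE = {
    "8bpc": 255,      # legacy behaviour, unchanged
    "16bpc": 32768,   # Photoshop's 15-bit-plus-one range
    "32bpc": 1.0,     # floating point
}

def curve_endpoints(mode):
    """Return (low, high, middle) for a given editing mode.

    Note the payoff: 16bpc and 32bpc get an exactly representable,
    pinnable middle gray (16384 and 0.5), whereas 8bpc's midpoint
    (127.5) falls between integer values.
    """
    high = MODE_MAXVALUE[mode]
    return (0, high, high / 2)
```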

Better still, once the interface maps relationally between display size and the actual data format, it becomes simple to make the interface scalable: the user can resize the curve window to any size desired, zoom in and out of the curve a la online mapping programs (scroll wheel!), and place points as close to one another as desired (accepting that maximum zoom becomes a 1:1 mapping with the actual data resolution). The user might want an option to display curve values as if mapped to different bit depths (8bpc 'traditional', 16bpc for reference, 0.0-1.0 for geeks), and this again should be easy to implement as an addition to the Options dialog.
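The zoomable-window idea reduces to one linear mapping: the window's left and right edges show some sub-interval of the data range, and zooming just narrows that interval. A minimal sketch (the function and its parameter names are assumptions of mine, not any real UI code):

```python
def display_to_data(x_px, width_px, view_lo, view_hi):
    """Map a pixel column in a (possibly zoomed) curve window to a
    data value.

    view_lo / view_hi are the data values shown at the window's left
    and right edges; an unzoomed 16bpc view would use 0 and 32768,
    while zooming in simply passes a narrower interval.
    """
    t = x_px / (width_px - 1)          # 0.0 .. 1.0 across the window
    return view_lo + t * (view_hi - view_lo)
```

Resizing the window only changes `width_px`; zooming only changes `view_lo`/`view_hi`. Maximum zoom is reached when adjacent pixel columns map to adjacent data values.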

The one difficult part about this may be the ACV Curves files. I've not been able to dig up the internal data structure, though I suspect - given Adobe's historic efficiency and a bit of prodding around a few files - that it's simply storing the curve's 8bit values. Obviously this would require revision, likely by offering a second data format for newer curves using a pair of floating-point values for each point on the curve. There's an outside chance that - depending on how the files are processed internally and whether there is an internal EOF marker - they could store two datasets within the same file, the first down-resolved for legacy purposes and the second containing the floating-point values. That part is up to Adobe, though.
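For what it's worth, the layout commonly reported for .acv files matches that suspicion: big-endian 16-bit words holding a version, a curve count, and then per-curve point counts and (output, input) pairs restricted to 0-255. A parser sketch under that assumption - treat the structure as unverified:

```python
import struct

def read_acv(data: bytes):
    """Parse an .acv blob assuming the commonly-reported layout:
    big-endian uint16 version, uint16 curve count, then per curve a
    uint16 point count followed by (output, input) uint16 pairs with
    values 0-255. This is an assumption, not a confirmed spec.
    """
    version, ncurves = struct.unpack_from(">HH", data, 0)
    off = 4
    curves = []
    for _ in range(ncurves):
        (npoints,) = struct.unpack_from(">H", data, off)
        off += 2
        points = []
        for _ in range(npoints):
            out_v, in_v = struct.unpack_from(">HH", data, off)
            off += 4
            points.append((in_v, out_v))   # store as (input, output)
        curves.append(points)
    return version, curves
```

If that layout is right, every point really is clamped to 8-bit values even in 16bpc documents, and a higher-precision variant (or a second, floating-point dataset appended after the legacy one) would be straightforward to define.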

If it's something you'd like to see, please let Adobe know.

'Perfect' Sharpening pt. 2 - Hardware augmentation

05 March 2010

In the last mini-article, I discussed the possibility of employing a tool already in wide photographic use for the purposes of recovering focus lost to the AA filter, optical imperfections, demosaicing, etc. But what about camera shake - how do we solve that?

Well, we could place patterns of known size throughout the scene to capture motion and generate a compensatory PSF for each frame, but this suffers from the logistical burden of needing to carefully place them in each shoot and the practical one of needing to then clone them out of the final image. We could use a variety of deconvolution / PSF estimation techniques to estimate the amount of movement which occurred, but these are both computationally (very) expensive, as well as fraught with problems of accuracy.
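To make the cost/accuracy tradeoff concrete: once you do have a PSF (from calibration targets or estimation), inverting it is a frequency-domain operation. Below is a minimal Wiener-style deconvolution in plain numpy - one classic approach among many, shown only to illustrate the machinery; the regularisation constant `k` is a stand-in for proper noise modelling:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Wiener-style deconvolution with a known PSF.

    k regularises frequencies where the PSF's response is weak;
    real images need k tuned to the noise level, which is exactly
    where accuracy problems creep in.
    """
    # Pad the PSF to image size and centre it so output isn't shifted.
    pad = np.zeros_like(blurred, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                  axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

Even this toy version needs two full-image FFTs per channel; blind deconvolution, which must also *estimate* the PSF, multiplies that cost many times over - hence the appeal of measuring the motion directly.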

Instead, what if we just used the camera's IS data? The camera (or lens, depending on your system) always does its best to compensate for linear and angular motion, but since it can only estimate the needed correction during the shutter exposure, it is (almost) always imperfect. It ought still to be able to record its movement during the exposure, though, and if the firmware were directed to embed that output in a captured file, we could use it to calculate sensor motion, combine that with our previous PSF, and in so doing arrive at as near a perfect reconstruction of scene sharpness as I can (currently) wrap my tiny little mind around. I believe this would require an additional ADC in most systems to record the data, but it shouldn't need to be anything as expensive as the ones used for the sensor readout - 14 bits of angular precision would be a bit over the top. With the growth of the P&S IS market, as well as the rumors of pending EVIL (Electronic Viewfinder, Interchangeable Lens) rangefinder announcements, manufacturers stand to gain a lot if they can provide DSLR-matching sharpness this way.
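The conversion from recorded motion to a usable PSF is conceptually simple: each motion sample during the exposure deposits an equal share of light at its offset from the nominal sensor position. A sketch under heavy assumptions (uniform sampling over the shutter time, displacements already calibrated to pixels - real gyro/IS output would need integration and calibration first):

```python
import numpy as np

def psf_from_motion(samples, size=15):
    """Build a blur kernel from hypothetical mid-exposure IS samples.

    `samples` is a sequence of (dx, dy) sensor displacements in
    pixels, assumed uniformly sampled over the exposure. Each sample
    deposits equal exposure at its offset from the kernel centre.
    """
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in samples:
        x, y = c + int(round(dx)), c + int(round(dy))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0
    return psf / psf.sum()
```

A horizontal shake, for instance, yields a horizontal streak of equal weights - exactly the kernel a deconvolution step would then invert, no estimation required.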

If you want to see it happen, still contact Adobe et al. about the previous suggestion for CC-based correction, and then talk to your favorite hardware manufacturer about making mid-exposure IS data available to the post-processing engine. Maybe next-generation chips will even allow for in-body correction on this same basis.

(The photo up top was not recovered with this or any related technique; it's just here to add some color.)

Edit: It turns out that Microsoft had much the same idea! I swear, I had no information about their working on this - if I had access to pre-press SIGGRAPH articles, I would be a very very happy camper and probably wouldn't have time to blog so much :).