'Perfect' Sharpening pt. 2 - Hardware augmentation

05 March 2010

In the last mini-article, I discussed the possibility of employing a tool already in wide photographic use to recover focus lost to the AA filter, optical imperfections, demosaicing, etc. But what about camera shake - how do we solve that?

Well, we could place patterns of known size throughout the scene to capture motion and generate a compensatory PSF for each frame, but this suffers from the logistical burden of carefully placing them in every shoot and the practical one of then cloning them out of the final image. We could instead use a variety of deconvolution / PSF estimation techniques to estimate the movement that occurred, but these are computationally (very) expensive as well as fraught with accuracy problems.
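
For what it's worth, the "known pattern" route is easy to sketch: photograph a small point-like target, crop it out as a measured PSF, and deconvolve with it. Below is just such a sketch in Python (NumPy and scikit-image are my choices here, not anything from the original workflow); the crop coordinates, the `frame` variable, and the loading step are all placeholders.

```python
# Hedged sketch: measure a PSF from a photographed point-like target and
# deconvolve the frame with it. The crop location is a made-up example of
# where a small calibration dot might sit in the scene.
import numpy as np
from skimage import restoration

def psf_from_point_target(frame, row, col, half=10):
    """Crop a (2*half+1)^2 window around a point-like target and normalize it
    to unit sum, yielding a directly measured blur kernel."""
    patch = frame[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    patch -= patch.min()            # crude background subtraction
    return patch / patch.sum()

# frame = ...                       # the blurred capture, loaded however you like
# psf = psf_from_point_target(frame, row=120, col=450)
# sharpened = restoration.richardson_lucy(frame, psf, 30)
```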

Instead, what if we just used the camera's IS data? The camera (or lens, depending on your system) always does its best to compensate for linear and angular motion, but as it can only estimate the needed correction during the shutter exposure, it is (almost) always imperfect. It ought still to be able to record its movement during the exposure, though, and if the firmware were directed to embed that output in the captured file, we could use it to calculate sensor motion, combine that with our previous PSF, and thereby arrive at as near a perfect reconstruction of scene sharpness as I can (currently) wrap my tiny little mind around. I believe this would require an additional ADC in most systems to record the data, but it shouldn't need to be anything as expensive as the ones used for the sensor readout - 14 bits of angular precision would be a bit over the top. With the growth of the P&S IS market, as well as the rumors of pending EVIL (Electronic Viewfinder, Interchangeable Lens) rangefinder announcements, manufacturers stand to gain a lot if they can provide DSLR-matching sharpness this way.
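
To make the idea concrete, here is a rough sketch (Python again; the sample rate, focal length, pixel pitch, gyro stream, and stand-in static PSF are all invented numbers for illustration - no current camera actually reports any of this) of how embedded angular-rate samples could be integrated into a motion path, rasterized into a blur kernel, combined with the static PSF from the previous article, and used for deconvolution.

```python
# Hedged sketch: turn hypothetical mid-exposure gyro samples into a motion PSF,
# combine it with a static (AA filter / lens) PSF, and deconvolve. Every
# constant below is an assumption for illustration only.
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

def psf_from_gyro(omega, dt, focal_len_mm, pixel_pitch_um, ksize=31):
    """Integrate angular-rate samples (rad/s, shape [N, 2] for pitch/yaw) into
    an image-plane path and rasterize it into a unit-sum blur kernel."""
    theta = np.cumsum(omega * dt, axis=0)                      # angular displacement, rad
    shift_px = (focal_len_mm * 1e3 / pixel_pitch_um) * theta   # small-angle projection to pixels
    shift_px -= shift_px.mean(axis=0)                          # centre the path in the kernel
    psf = np.zeros((ksize, ksize))
    c = ksize // 2
    for x, y in shift_px:                                      # accumulate dwell time along the path
        ix, iy = int(round(c + x)), int(round(c + y))
        if 0 <= ix < ksize and 0 <= iy < ksize:
            psf[iy, ix] += 1.0
    return psf / psf.sum()

# Toy usage with invented numbers: 200 gyro samples at 4 kHz, 50 mm lens, 6 um pixels.
rng = np.random.default_rng(0)
omega = rng.normal(0.0, 0.02, size=(200, 2))                   # fake angular rates, rad/s
motion_psf = psf_from_gyro(omega, dt=1 / 4000, focal_len_mm=50, pixel_pitch_um=6.0)

k = np.array([1.0, 2.0, 1.0])                                  # stand-in for the static AA/lens PSF
static_psf = np.outer(k, k)
static_psf /= static_psf.sum()
total_psf = fftconvolve(motion_psf, static_psf, mode="full")   # combined blur kernel

blurred = fftconvolve(rng.random((128, 128)), total_psf, mode="same")  # synthetic shaky frame
recovered = restoration.richardson_lucy(blurred, total_psf, 30)        # deconvolve with the combined PSF
```

In a real implementation the gyro stream would also need timestamps, bias calibration, and translation data for close subjects, but the outline is the same: path in, kernel out, one deconvolution pass.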

If you want to see it happen, contact Adobe et al. about the previous suggestion for CC-based correction as before, and then talk to your favorite hardware manufacturer about making mid-exposure IS data available to the post-processing engine. Maybe next-generation chips will even allow for in-body correction on this same basis.

(The photo up top was not recovered with this or any related technique - it's just here to add some color.)

Edit: It turns out that Microsoft had much the same idea! I swear, I had no idea they were working on this - if I had access to pre-press SIGGRAPH articles, I would be a very, very happy camper and probably wouldn't have time to blog so much :).
