When the Channels Come Marching Home Again...

02 November 2010

I was having a conversation with my friend Natalia Taffarel last night when she brought up the utility of automatically generating many of the possible channels which we can use to mask, correct color, etc. And since it's the cool thing to do (everyone makes a set!) I decided to be cool too. It probably won't make me as cool as if I finally paid attention to Facebook, but hey, it's a start :).

The first edition - linked below - will generate 12 additional channels of your image (for a total of 15, coming from RGB):
  1. Hue ("H")
  2. HSL Saturation ("HSL - S")
  3. HSL Lightness ("HSL - L")
  4. HSB Saturation ("HSB - S")
  5. HSB Brightness ("HSB - B")
  6. LAB Lightness ("L")
  7. LAB A ("a")
  8. LAB B ("b")
  9. Cyan ("C")
  10. Magenta ("M")
  11. Yellow ("Y")
  12. Black ("K")
The only reason that my action is different from the other 3 or 4 you probably already have is that it uses the gamut-preserving CMYK-in-RGB method which I wrote about previously, giving you full detail in your added channels without worrying about gamut clipping.
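
For the curious, here's the per-pixel math that most of those channels represent - just a sketch of the relationships, not the action itself. Python's colorsys stands in for PS's conversions, and I've left Lab out since it needs a real color-managed conversion:

```python
# Per-pixel math behind most of the added channels; r, g, b are 0.0-1.0.
import colorsys

def extra_channels(r, g, b):
    h, l, s_hsl = colorsys.rgb_to_hls(r, g, b)   # note colorsys's H-L-S order
    _, s_hsb, v = colorsys.rgb_to_hsv(r, g, b)   # HSB ('V' is Brightness)
    c, m, y = 1 - r, 1 - g, 1 - b                # CMY: straight inversions
    k = min(c, m, y)                             # K: the 'maximum black'
    return {"H": h, "HSL - S": s_hsl, "HSL - L": l,
            "HSB - S": s_hsb, "HSB - B": v,
            "C": c, "M": m, "Y": y, "K": k}

print(extra_channels(0.8, 0.4, 0.2))
```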

Please note: This action set requires the HSB / HSL optional plugin which can be found on your Photoshop DVD or downloaded from Adobe support.

Late this week or early next I'll finish up a second set so that you can get the channels out individually if you like vs. having to go all or nothing.

So without further ado, I give you... the 15 Channel Salute!

CMYK in RGB - Explained

31 October 2010

When I first posted the action set for this, I promised I'd write more later about it.  Since then, I've also received a few inquiries asking that I explain what the actions are doing in order to achieve something which many believed wasn't possible.  Today, after much, much too long a delay, I'll do that - it seems like an easy topic in comparison to what I last wrote about!  I'll even leave out the math... well, mostly!

To understand what's going on, we need to discuss what happens when we normally convert an image from RGB to CMYK.  In the first place, the color modes themselves are opposite one another - where RGB is additive, CMYK is subtractive (or multiplicative, depending on the verbiage you prefer).  This is fairly straightforward to understand, as adding more light in RGB makes things brighter (which is intuitive), while adding more ink (which absorbs, or subtracts, more light) in CMYK makes things darker.  That's the change in color mode.  [If you'd like to read more or see a video, I suggest Joe Francis' discussion of it here].

But traditional conversion to CMYK also involves converting to a different color space.  Not surprisingly, standard printing presses can't reproduce the same range of colors which our increasingly wide-gamut monitors can (at least, not at prices most of us can afford).  So conversion to CMYK also involves a color space change which results in the undesirable color shifts which many users end up feeling is just a part of the CMYK color mode.

To be clear: CMYK conversion also deals with the dot gain of the printing press involved, and the interplay of the machine's physical configuration as well as the actual density of individual inks.  Like the mathematical difference between subtractive and multiplicative blending, that's beyond what we need to deal with today.

The actions which I presented avoid the issue of color shifts by maintaining the same color space.  Now, there are two ways which we could go about doing this.  In the first, we could pull an old Dan Margulis trick and create a 'false' CMYK profile, spending a lot of time tweaking our color coordinates to both give us a suitably large color space as well as ink primaries which could accurately create said colors.  But that takes time.  And false profiles confuse people.  And above all, it doesn't give you those channels back in your RGB document - which is what I needed at the time.  So let's go with option two.

In option two, we calculate the CMYK equivalent values manually as if we were acting as PS's conversion engine.  I'll spare you the actual math, but the crux of the concept is to act like we're already in a CMYK document.
  1.  We start (because we have to) with the hard part - creating the black layer.  Whereas each of the other channels has a direct analog in RGB (C->R, M->G, Y->B), K has nothing which we can easily relate it to.  Because the CMYK color mode is subtractive, that means that the equivalent K value could be anything from 0 up to the lowest value of the other "inks" (C, M, or Y).  To find that 'maximum black' value, we merge the inverse of each of the R, G, and B channels into a new channel using the Darken blend mode.
  2. Next, we let the user decide whether they want to lighten that black value at all before continuing by giving them a standard Curves dialog.  This lets them tamp down that black to something which gives more emphasis to the color channels should they desire it.
  3. Then we tell PS to show us what the 'CMYK' image would look like without its black component, by subtracting it out of the whole.  Conveniently, this creates the R, G, and B inverses of the C, M, and Y channels which we're after and so we can simply make inverted copies of each to create our final channels - it's that easy!
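
For the numerically inclined, here are those three steps as per-pixel arithmetic - a numpy sketch in 8-bit terms, using a random array as a stand-in image and skipping the optional Curves tweak from step 2. In pure arithmetic the round trip is exact; the small errors quoted below come from PS's own fixed-point rounding:

```python
import numpy as np

img = np.random.randint(0, 256, (4, 4, 3))     # stand-in for an RGB image

# Step 1: 'maximum black' - the Darken-merge of the inverted R, G, and B
# channels (equivalently, 255 minus the lightest channel at each pixel).
k = 255 - img.max(axis=-1)

# Step 3: the image without its black component (removing ink adds light),
# which hands us the RGB inverses of the C, M, and Y channels...
no_black = img + k[..., None]
c, m, y = (255 - no_black[..., i] for i in range(3))

# ...and re-inking recovers the original image exactly.
inks = np.stack([c, m, y], axis=-1)
assert np.array_equal(img, 255 - (inks + k[..., None]))
```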
All of this happens with almost no loss of image fidelity.  In 8bpc color, there is literally no difference between the original image and the CMYK version which we generate.  In 16bpc color we can get errors as high as 64 / 32768, though the average is < 16 / 32768 per channel.  Functionally, it's a lossless conversion while retaining the entire color spectrum which existed in the original image.

A related note and a few completely unrelated observations which I made while writing this:
  • Just because the actions as provided are functionally lossless doesn't mean that the academic in me is satisfied.  When I get some more time I'm going to try to actually make the reconstruction perfect.  If you beat me to it, please let me know :).
  • Don't use the color sampler tool (Info Palette) to test error levels at anything less than 100% zoom.  Otherwise it uses that same awful resizing algorithm which PS uses to preview images on non-HW-accelerated systems in order to estimate what a value might be, not actually reporting the real value to you.  This can give you all sorts of ghosts to chase.
  • There are some idiosyncrasies to the way that Calculations and Apply Image each do what should be the same math.  The differences are small, but real.  If I have time in the future I'll delve deeper into it, but just be aware of it if it's the sort of thing which interests you.
  • As discussed a few times elsewhere, there is a difference in the output between the Image->Adjustments version of the Brightness / Contrast tool and its Adjustment Layer counterpart, specifically in how Legacy-mode Contrast is calculated.  The Image->Adjustments version calculates it based around the actual mean value of the image, while the Adjustment Layer version assumes a mean of 127.5.  The results are otherwise identical, just offset from one another (in brightness) by the distance of the actual mean from 127.5 - see the sketch below.  Generally speaking that's not terribly important (though it does make an argument for greater granularity in the PS controls), but I filed a bug report with Adobe just the same detailing the problem and asking that they bring the tools into alignment with one another.  The discrepancy confuses some people horribly.  Vote for me as your CS6 Beta Tester :).
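
Here's that offset in numpy form. The 0.5 slope models a Legacy contrast of -50, and the two pivot behaviors are as I've described them above:

```python
import numpy as np

v = np.random.randint(0, 256, 10_000).astype(float)   # stand-in channel data
slope = 0.5                                           # Legacy contrast of -50

menu_version  = (v - v.mean()) * slope + v.mean()     # pivots on actual mean
layer_version = (v - 127.5)    * slope + 127.5        # pivots on 127.5

offset = menu_version - layer_version
print(offset.min(), offset.max())   # constant: (mean - 127.5) * (1 - slope)
```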
Please feel free to ask if you still have questions!

      VF - Dirty Secrets, Dirty Tricks

      06 September 2010

      Well, the day has finally come. It's time to lay bare all those little white lies which I've fed you so far. I will caution you up front: this will be another moderately technical day. No numbers, and no math, but a bit of cranial expansion just the same.

      Like yesterday, in the interest of leaving this readable in a single sitting, I won't go into any great detail about any one point, instead giving you a quick rundown alongside some external sources for more reading. Always feel free to ask questions, though, so that I'll have more than the 2 questions I have currently to answer tomorrow :).

      So, without further ado, in no particular order, here we go...

      Dirty Secrets:
      • A Gaussian Blur "Wave" is Very Different From a Sine Wave.

        First of all, this does not invalidate the idea of spatial frequencies, of their mixing, or anything else. But it does have some implications for understanding how the frequencies which we're using interact with one another, and how our separations behave. To learn more about what a Gaussian distribution looks like, I recommend this Wikipedia article.

      • The Photoshop Gaussian Blur filter isn't a Gaussian Blur filter.

        Huh? Longtime PS users may remember that GB used to take a lot longer to complete than it does now [on equivalent systems]. And then magically at some point in its history (I honestly can't remember which version it debuted in), lead programmer Chris Cox implemented one of a number of Gaussian approximation functions - functions which give results which are accurate to an actual Gaussian function to within <1% (usually, at least), but which can be performed by the computer 20+ times faster. Again, this doesn't have many real-world consequences for frequency work, but is wonderful geek trivia, and also brings up some ideas which become relevant later.

        It's also worth noting that this gives us a bit of a way around the filter's arbitrary 250px radius limit. A Box Blur (which has a maximum size of 999px) run 3x at the same radius is roughly the same as running the Gaussian Blur at that radius.
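
        If you want to check that for yourself, here's a sketch with scipy standing in for the PS filters - note that the size-to-sigma mapping below is a textbook approximation, not Adobe's formula:

        ```python
        import numpy as np
        from scipy.ndimage import gaussian_filter, uniform_filter

        img = np.random.rand(256, 256)
        sigma = 8.0
        # Rule of thumb: n box passes of width w approximate a Gaussian with
        # sigma^2 ~= n * (w^2 - 1) / 12; solve for w with n = 3.
        w = int(round(np.sqrt(12 * sigma**2 / 3 + 1)))

        boxed = img.copy()
        for _ in range(3):                    # three passes of the box blur
            boxed = uniform_filter(boxed, size=w)

        # The residual against a true Gaussian is small:
        print(np.abs(boxed - gaussian_filter(img, sigma)).max())
        ```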

      • Most People Already Use High Pass Sharpening

        It's called the Unsharp Mask filter. Seriously - USM is exactly the same as HP sharpening as performed by the methods outlined in this series. Now, it doesn't have the advantage of being able to run curves against the result to control highlight / shadow, etc., nor is it easy to perform "bandpass sharpening" with it (accentuating a range of frequencies, so as to exclude the highest components [where noise "lives"] from the sharpening process). But, it is an old friend for many of us, and makes "HP vs. USM" debates quite comical after you learn the truth.
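
        For the skeptical, here's the equivalence in numpy terms (values normalized to 0.0-1.0, USM threshold at 0, clipping ignored):

        ```python
        import numpy as np
        from scipy.ndimage import gaussian_filter

        img = np.random.rand(128, 128)
        radius = 4.0

        # Textbook Unsharp Mask at amount 100%: add back (image - blur).
        usm = img + (img - gaussian_filter(img, radius))

        # HP sharpening: a mid-gray high-frequency layer, blended Linear Light.
        hf = (img - gaussian_filter(img, radius)) / 2 + 0.5
        hp = img + 2 * (hf - 0.5)             # Linear Light blend math

        print(np.allclose(usm, hp))           # True - same operation, two names
        ```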

      • Bandstops Lower Total Image Contrast

        You may have figured this out already if you've followed along closely, but removing a frequency band from an image inevitably results in a loss of some % of the image's total contrast. This is best compensated for with a Curves adjustment, but Brightness / Contrast or Levels can also be used. Generally - for small, localized corrections with bandstop filtration this loss is meaningless and can be ignored; for large moves, though (especially simple bandpassing), it's best to make a correction.

      • Bandpass Filtering Can Cause Scaling Issues

        This is probably the source of the greatest misunderstanding about any sort of frequency work in skin retouching (aside maybe from the visceral reaction many people have when you tell them that you're going to use a 'blur' filter in high-end work). In short, what looks good at one image size will not always also look good at a smaller size - the interaction of the component frequencies (as well as our ingrained expectation of what things 'should' look like) can make skin which looks flawless at full size appear hideous ('plasticky') when resized. The two best ways of handling this are to either keep two windows open within PS so that you can constantly check what the image looks like small, or to use some form of synthetic frequency replacement to provide enough material to make smaller versions look 'right'.

      • Frequencies Have Color

        This isn't so much a 'white lie', as something which we just didn't bring up. Just as certain types of image components tend to "live" in a range of frequencies, sometimes colors do too. Take for example the red checkering of a tablecloth, the blue reflection of a skylight on a tungsten-lit ball, or a model's red hair against a white backdrop. This can lead to difficulty if we make major changes to an image while being careless in handling such colors. On the other hand, knowing this can be a huge advantage once you've mastered it - say goodbye to color moiré!

      Dirty Tricks:
      • Skin and Smart Objects

        We talked yesterday about how bandstop filters can be used to retouch skin as a "DeGrunge" / "Inverted High Pass" ("IHP") / etc. technique. The greatest difficulty with this procedure is that - for high-end beauty work at least - different regions of the skin will require the removal of different frequencies from an image.

        When you think about it, this makes sense. Not only does the skin have a natural variation in its texture across different parts of the face and body, but just as objects appear smaller the further they are from you, the natural 'frequencies' which make up skin's appearance are also compressed or expanded with varying distance. As a consequence, different portions of the body need different kinds of work (or work on different frequency bands).

        By using a Smart Object copy of the image (or better, just the skin areas), you can quickly duplicate these, change the settings as appropriate, and mask them into your work. Even better, if you're disciplined about using your SOs, then when you go back and make changes to the image itself later, those changes will automatically propagate through, making this a truly "nondestructive" process.

      • Skin and Selections

        One of the best things you can do when you want to use bandstop techniques on skin is to start with a good selection of that skin area (the Select > Color Range tool is great for this), and either save it in a channel or simply copy the skin areas into a new layer [be sure to turn on Lock Transparent Pixels if using a separate layer]. By doing this, you keep the frequency filters from sampling non-skin colors in their processing and "bleeding" those into your result, allowing you a much better result than you'll otherwise get (the GB filter's edge handling makes this even more important). To wit, Imagenomic's Portraiture relies on this idea to get its results [see discussion here].

        Thanks to my friend Richard Vernon, I'm reminded that the "Apply Image" version of our separation techniques doesn't play nicely with selections - it doesn't handle the alpha channel (transparency) correctly.  As such, you need to use the "Brightness / Contrast" version of separating if you mean to use this technique in your skin work.

      • What if We Didn't Use Gaussian Blur to separate?

        Here's one of the 'biggies' - what would happen if we weren't limiting ourselves to separating images with just the method we've been using? I'll let my friend Koray explain in his forum post on the subject. The technical version is that the Gaussian 'kernel' (or 'smoothing operator') is just one sort of 'waveform' which we can decompose an image into. Others like the Median filter (a median operator) and Surface Blur (a bilateral smoothing operator) give results which are more edge-aware and gradation-friendly - two factors which are immensely valuable in enhancing local contrast (demonstrated by Koray), as well as in separating detail if, for example, we are planning to focus on healing / cloning details to correct blemishes and irregularities.
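
        As a rough sketch of the concept (scipy's median_filter standing in for PS's Median - this is the idea, not Koray's exact recipe):

        ```python
        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter

        img = np.random.rand(128, 128)

        low_gauss  = gaussian_filter(img, sigma=4)   # the classic GB lowpass
        low_median = median_filter(img, size=9)      # edge-aware alternative

        # Either way, the high-frequency layer is just 'whole minus low'...
        high_median = img - low_median

        # ...and low + high still reconstructs the original exactly.
        assert np.allclose(img, low_median + high_median)
        ```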

      • How About Skin Transplants?

        It's one of those things we don't like to talk about as retouchers (at least not in reference to any particular client), but most of us have had an experience where the subject's skin was just in a horrific state in the original photo. One in which we really wished we could just use another model's skin to cover it. Well, now that you know how to separate frequencies, and you know what frequencies the skin lives in - you can! [Tip: make sure that you match pore size, lighting angle, light quality (harsh, soft), and skin source (which part of the body) when transplanting.]

      • Blown Highlights

        Much like the 'transplantation' discussion above, by working on two frequency components separately, it's often easier to work with parts of an image which have been blown out in camera - instead of that awful gray mess which the healing tool will often give you, two strokes in two different layers will often give you a very believable recovery.

      • Automation

        Everything which we've discussed can be heavily automated in Photoshop - from detail enhancement to skin smoothing, sharpening to stray removal. I highly encourage you to work with Smart Objects in this to maintain a non-destructive workflow, especially so that you can go back and tweak your results as you refine your understanding of visual frequencies.

        Yesterday I provided a set of actions which do a number of the basic GB separations in PS. I challenge you to make more of your own, incorporating as many or as few of the techniques which we've discussed over the past few days as you like. I further challenge you to share these on your favorite retouching forum(s), and to explain what you've done and why to those who ask. The power to separate detail, to enhance it, to heal and clone it, etc. is as big a deal as first learning to adjust global color and contrast with a curve. Share it.

      In Closing

      I'd like to take a moment to thank everyone over at the ModelMayhem Digital Art & Retouching Forum for their participation in the discussions about these and related topics. If it weren't for their interest in the subject and collaboration in elucidating the details, none of this would have been possible. Head on over when you get the chance and see the amazing work these guys have done, both in terms of retouching itself, as well as automating every aspect of these processes.

      I also want to thank you for your readership over the past week or so as we've gone through what for many of you was likely the most technical discussion of Photoshop you've yet experienced. I sincerely hope that it was helpful. And while my writings on this blog will continue on a multitude of different subjects, I hope that you'll always feel free to ask when you have questions about this topic. As above, this is the beginning of a whole new way of looking at imaging for many of you - one which I hope to make as painless as possible.

      Happy Labor Day!

      VF - Why Sean, Why?!?

      05 September 2010

      After yesterday's marathon session of technobabble and math, it's only fitting that you should be rewarded with an entry today which will be more intuitive and directly beneficial to your workflow. Now, that said, after how precipitously readership dropped off yesterday (I believe in light of the length of the post), I won't be going into such excruciating detail today. Instead I'll make broad strokes and incorporate a few external sources, asking that you tell me where you need more information for a subsequent update.

      One of the basic principles of retouching which I try to impart to people is how important it is to isolate those portions of an image which you want to work on. Sometimes that takes the form of a simple selection; sometimes it's a complex mask; sometimes a color-based selection; sometimes an operation on a channel; and other times, it's a frequency-based operation. Among the things which that last category allows us to do are what you came back for today:
      • Sharpening:

        I'm sure that many of you are familiar with the idea of "High Pass Sharpening", a technique which has been around the internet for about as long as I've been using Photoshop (a long time). In fact, this technique is just what it advertises - amplifying the high frequency portions of the image (by running a highpass filter on a copy of the image) in order to accentuate the detail.

        As it's normally done, though, this technique uses the PS filter naively and so it discards some tonal detail which might otherwise be retained and selectively enhanced. My personal preference when using variants of HP sharpening is to clip a Curves adjustment layer to the high-frequency layer. This allows one to tune the sharpening effect in the highlight and shadow areas separately and achieve just the level of sharpening desired.

      • Detail Enhancement:

        Often mistaken for the singular solution to the "Dave Hill look" (sorry Dave), use of large-radius HP filters to enhance local detail is just an expanded version of the sharpening discussed above (alternatively known as HiRaLoAm). In this case, we're just selecting a larger swath of frequencies to enhance, resulting in that larger 'gritty' look [Calvin Hollywood is another big fan of these techniques].

        Again, though, it's important to use a revised technique vs. simply running a naive HP filter so that you can retain full contrast in the detail - otherwise, what's the point? Also note that, while Linear Light is the way which we blend the frequencies back in, other blend modes are sometimes preferable artistically (beware that some come with side-effects, especially Hard Light, Pin Light, and Vivid Light).

      • Stray Hair Removal:

        One of the neat facts about frequencies is that certain types of photographed objects (or their details) tend to 'live' within certain frequency bands. Hair, for example, is a very fine detail, and so tends to exist only in higher frequencies. We can use that fact to our advantage by performing a separation as we've previously discussed, and then simply using the healing or cloning brush on the high-frequency layer to remove the hair with no trace that it had ever been there. [And yes, while the healing brush often works for this on the full-frequency image, experienced retouchers know that no tool is perfect and there are situations in which it gets very confused by the larger context of the image.]

      • Skin Smoothing:

        This will be the longest component discussion we have today, but one which has also been the most popular. To start, please take a minute to go read byRo's classic writeup on frequency separation for use in skin retouching over at RetouchPro. He calls it the "quick de-grunge technique".

        Go read it now and we'll resume when you get back.

        Pretty impressive for how quickly he did that (real-world execution of the technique can be seen in the work of Natalia Taffarel, Gry Garness, and Christy Schuler). [Oh, and BTW, as of this writing, only one of those three very talented ladies knows what you've already learned - that's how elite your efforts thus far have made you :).]

      • Skin Retouching & Beyond:

        While the above is a brilliant, easy technique, it's actually only just the beginning. What if, instead of simply removing those image frequencies (applying a bandstop), we worked on the "grunge" frequencies with the healing and cloning tools like we talked about doing to remove stray hairs? I won't bore you with detail in this post - suffice it to say that this creates an incredibly believable result without taking as long as conventional methods.

        Even better, this can be used on both layers in order to remove unsightly features (skin folds) by healing or cloning on each of the layers - in the high-frequency you can focus on patching in good texture, while in the low-frequency you're able to focus on getting the overall shape right. [As a bonus, because the low-frequency layer has no detail to it, you don't have to be quite so precise as when working on a single (full-frequency) image].

      • Whatever Else You Come Up With:

        Seriously - the above are just some of the everyday (formerly) difficult tasks in retouching which can be streamlined by incorporating an understanding of visual frequencies. But by no means is that list exhaustive. As we'll discuss in tomorrow's post, the underlying techniques which we've been covering are limited only by your creative application of them.
      Until tomorrow...

      P.S. I did promise you some automation, didn't I? We'll get into a heavy discussion tomorrow, but for now here is a set of actions which perform each of the techniques discussed yesterday. Each assumes that you are in the bit depth it identifies itself with, and that you are running it from the topmost layer. If you are in a single layer document, you will get an error message shortly after running it - this is normal and you should just click "Continue". If you will only be using single-layered documents, you can avoid the message by disabling the "Copy Merge" step. These actions will create all needed duplicate layers for you, and you can turn off the instruction dialogs at any time by unchecking them in the actions panel. Finally, while I have had no difficulty with them, I make no warrant that they will work for you, nor do I warrant that they will not mess up your files. Use them at your own risk.

      VF - The Mechanics

      04 September 2010

      First of all, a note for everyone who's been following so closely - your support means a lot. Further, I apologize for the delay in posting this. Unlike more established bloggers, I'm not just posting up pre-written material. I'm writing this as we go and attempting to respond to what I hear back from you in the process. As such, when life throws me a curve ball, posting gets delayed. You have my apologies.

      Now, before we get into how we do lots of fancy things in Photoshop, this is going to be one of the most intensive days we spend on technical discussion, so let's start by spending a few moments reviewing where we've been so far. First, we demonstrated that (just like sounds) images can be seen (Ha! I kill me!) as being composed of many different frequencies which interact in order to create a whole image. We discussed the definitions for all of the processing tools which we're going to employ - lowpass, highpass, bandpass, and bandstop filters. And we looked at how adding the low frequencies and the high frequencies from an image together gives us the whole:

      DC United's Chris Pontius

      Then we expanded upon this to realize that, like the simpler kinds of math (the good kinds), the order in which we do things is commutative - that is, that subtracting the low frequencies from a whole image is the same as directly extracting its high frequencies through a highpass filter:

      Image subtraction demo

      Most recently, we discussed how the bandpass and bandstop processes can be thought of as being similarly inverse processes.

      So - how do we do it in PS? Do we just use the High Pass (HP) and Gaussian Blur (GB) filters? Unfortunately, no, and the reason why is going to involve some more... math (sorry guys!), and one of those little white lies which I've been telling you up until now.

      To make our first pass at explaining what goes on, let's go back to our audio examples. When we were adding two audio tones together, each of those component sounds had amplitudes between -1.0 and 1.0. Or we might say that each had a range of 2. Because when we add them together we could get extremes of 1 + 1 or (-1) + (-1), our result could have amplitudes from -2.0 to 2.0, or a range of 4. In theory, each time we add a sound in, we expand the range of the data which we're trying to handle. In real life, though, we have to keep those values scaled to a range which we can actually work with.
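
      In code terms (a toy numpy example):

      ```python
      import numpy as np

      t = np.linspace(0, 1, 48_000)          # one second of 'audio'
      low  = np.sin(2 * np.pi * 100 * t)     # amplitudes -1.0 to 1.0
      high = np.sin(2 * np.pi * 1000 * t)    # amplitudes -1.0 to 1.0

      mix = low + high
      print(mix.min(), mix.max())   # approaches -2.0 and 2.0: a range of 4
      ```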

      What does that mean for images?

      For the purposes of discussion, we'll refer to PS images being able to have levels (the equivalent of amplitudes) from 0 - 255 (a range of 256). In truth, many of you know that with 16 or 32 bit processing you have different ranges, but we'll only use one set of numbers for now.

      Anyway, there are a number of differences between performing operations on sounds and on images in PS. The most significant of these is the fact that images don't (naturally) have negative values. Photoshop doesn't store brightness values of -255, or even of -1 (at least not for our purposes), and the images we work with aren't -255 to 255, -128 to 128, etc. This has some significant implications for how we handle our operations.

      As an example, let's pretend we don't know about that difference and I'll separate an image rather naively. I'm going to use the picture of Santino which we've used a few times so far:

      Tino

      Now, I'll blur a copy of that image in a separate layer:

      Tino Blurred

      And subtract that from a third copy with the Apply Image command:

      Tino Blurred

      Not very much like what I've been showing you so far, is it? Sorry about that.

      Here's the problem - do you remember how when we were mixing low and high sounds together, sometimes the high frequency brought the low frequency signal 'up', but at other times it brought it 'down' (and vice-versa)? [go back to review the time-correlated tracks to see what I mean] Now look closely at the result I've shown you above. You'll notice that the result only shows those areas which are brighter than the low frequency version. And this is because we don't have negative values. All of the areas which were darker in the high frequency than in the low frequency have been clipped to 0.

      Take a minute to digest that, because it's as important as it is difficult to understand. The high frequency data doesn't "know" that we need it to occupy a finite space, and it wants to have both positive and negative values, just like it would in real life. Not having negative values means that we need to find another way to record those areas which are darker in the other frequency set. One way of dealing with this is to just take the darker areas of the high frequency data and combine those back into our mix above - we would use three layers to accomplish one separation, demonstrated in the image below. To do this, I created the 'Lighter High Frequency' layer as above, and the 'Darker High Frequency' layer with the Apply Image command (more details later). Take a look:

      Tino Separation

      In the first high-frequency set, our blend mode (which we'll discuss after a bit) is ignoring the black areas while adding the light areas into the final image (adding black to a pixel is like adding zero to a number). In the second high-frequency piece, the dark areas are lowering the final values while the white areas are ignored.
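
      To put numbers to the problem (and to the fix), here's a numpy sketch - 8-bit values, with scipy's Gaussian standing in for the GB filter:

      ```python
      import numpy as np
      from scipy.ndimage import gaussian_filter

      img = np.random.randint(0, 256, (64, 64)).astype(int)
      low = gaussian_filter(img.astype(float), 8).round().astype(int)

      naive = np.clip(img - low, 0, 255)        # the murky result shown above
      print((img - low < 0).mean())             # roughly half the pixels clipped
      print(np.array_equal(low + naive, img))   # False - that detail is gone

      hf_light = np.clip(img - low, 0, 255)         # 'Lighter High Frequency'
      hf_dark  = np.clip(img - low + 255, 0, 255)   # 'Darker High Frequency'

      rebuilt = (low + hf_light) + hf_dark - 255    # Linear Dodge, then Burn
      print(np.array_equal(rebuilt, img))           # True - identical
      ```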

      This is technically great because it gives us a 100% accurate de/reconstruction of our image (that is, summing those three layers back together is pixel-for-pixel identical to the original). On the other hand, it's really inconvenient for our high-frequency data to be on two separate layers. How might we get it onto a single layer?

      That leads us to our second technique. In this one, we pretend that we can have both 'positive' and 'negative' numbers in the same layer. To do so, though, we need an arbitrary value which will serve as the '0' point around which positive and negative values will appear. In Photoshop, this is 50% gray - that neutral value which many of you already use as a starting point with Soft Light, Overlay, etc. layers. Photoshop will ignore that middle gray value (it won't change the pixels when we blend with it), but when other values are brighter than 50%, it will lighten the final image while when values are darker than 50%, it will darken the final image. This option is what most retouchers I know do in practice, and what I hope you will settle upon at the conclusion of this discussion.

      Like most things in life, though, this isn't going to come free. In order to put two layers into one as we're discussing, a compromise has to be made. Remember that each separation we make can have the full range of values in it - the sounds could go from -1.0 to 1.0, and our images can go from 0 to 255. In the same way, the high frequency image data can be as much as 255 levels above or below the low frequency values (these also ranging 0-255). In effect, our high-frequency data has a range of 512, not just 256. To compress this down into a single layer, then, we have to sacrifice some level of precision in getting there - we need to compress 512 levels down into 256.

      My preferred method of doing this is to 'scale down' the data - to map the darkest possible dark of the high frequency data to 0, and the lightest possible light of the high frequency data to 255 (128 still being neutral). This preserves all of the finest details in the image, but sacrifices a small amount of its 'smoothness' (numbers later).
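
      In sketch form (numpy again, scipy's Gaussian standing in for GB):

      ```python
      import numpy as np
      from scipy.ndimage import gaussian_filter

      img = np.random.randint(0, 256, (64, 64)).astype(int)
      low = gaussian_filter(img.astype(float), 8).round().astype(int)

      # Halve the HF range around a neutral 128 to fit it into one layer...
      hf = ((img - low) / 2 + 128).round().astype(int)

      # ...and undo the halving on reconstruction (Linear Light blend math).
      rebuilt = low + 2 * (hf - 128)
      print(np.abs(rebuilt - img).max())   # at most 1 level, from the rounding
      ```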

      The PS High Pass filter, on the other hand, seems to have been designed for creating lots of rough contrast, and so simply 'lops off' those light lights and the dark darks within the high-frequency data (much the way many of you may be familiar with a color channel 'clipping' when it's over or underexposed). This makes for a more contrasty layer (part of why some people like it so much), but it sacrifices a lot of fine detail in order to get there. To give you a side-by-side comparison of best-possible reconstruction using the default workflow, take a look at a closeup from Tino's uniform (the four stars represent the four MLS Cups which DC United has won):

      Highpass comparisons

      You can see quickly that the High Pass filter's version is far more contrasty right out of the box. Unfortunately, you'll also notice that its reconstruction (ironically) loses high-frequency contrast when blended back in to restore the original image. This isn't to say all is lost for the filter, though. Like adding and subtracting the frequencies from one another, contrast mapping is commutative - we can do things in a different order and still get the same result. In this case, we'll be able to use the HP filter so as to avoid having to mess with the Apply Image tool (what for many is a terrifying experience). If we go to Image->Adjustments->Brightness / Contrast and choose to lower image contrast by (-50) with the Legacy option enabled, we can then use the included highpass filter to get that single-layer high-frequency data, while retaining all of that wonderful fine detail contrast.

      Highpass comparisons

      Notice how the results are identical - this is great, both for image quality (obviously) as well as for the automation implications which some of you are undoubtedly already thinking about.
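
      If you'd like to see why the two routes agree, here's the algebra in code form - with the High Pass filter modeled as 'image minus its blur, plus middle gray', per the description above:

      ```python
      import numpy as np
      from scipy.ndimage import gaussian_filter

      img = np.random.rand(64, 64) * 255
      r = 8.0

      # Route 1: Apply Image - subtract the blur at Scale 2, Offset 128.
      route1 = (img - gaussian_filter(img, r)) / 2 + 128

      # Route 2: Legacy contrast -50, then the High Pass filter.
      half = (img - 128) * 0.5 + 128                   # Legacy B/C at -50
      route2 = half - gaussian_filter(half, r) + 128   # model of the HP filter

      print(np.abs(route1 - route2).max())             # ~0: identical results
      ```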

      For now, let's finally go through the step-by-step PS instructions.

      To perform Highpass filtration into three layers:
      1. Make three copies of your image (two new copies if working on a single-layered document).
      2. Label the bottom layer "Low Frequency". Label the middle layer "High Frequency Light". Label the top layer "High Frequency Dark".
      3. Select the Low Frequency layer.
      4. Run the Gaussian Blur filter at your separation radius.
      5. Select the "High Frequency Light" layer. Set its blend mode to "Linear Dodge (Add)".
      6. Open the Image->Apply Image dialog box.
      7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is unchecked.
      8. In the Blending box, choose "Subtract". Opacity should be 100%, Scale 1, Offset 0, Preserve Transparency and Mask.. should be unchecked.
      9. Click OK.
      10. Select the "High Frequency Dark" layer. Set its blend mode to "Linear Burn".
      11. Open the Image->Apply Image dialog box.
      12. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is checked.
      13. In the Blending box, choose "Linear Dodge (Add)". Opacity should be 100%, Scale 1, Offset 0, Preserve Transparency and Mask.. should be unchecked.
      14. Click OK.

      This method works in all bit depths and results in a reconstruction with a mean error of 0 (StDev & median also 0). That is, it is mathematically (and technically) perfect.

      To perform Highpass filtration into two layers using the Apply Image command:
      1. In 16bit mode:
        1. Make two copies of the current image (one copy if working on a single-layered document).
        2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
        3. Select the Low Frequency layer.
        4. Run the Gaussian Blur filter at your separation radius.
        5. Select the High Frequency layer. Set its blend mode to "Linear Light".
        6. Open the Image->Apply Image command.
        7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is checked.
        8. In the Blending box, choose "Add". Opacity should be 100%, Scale 2, Offset 0, Preserve Transparency and Mask.. should be unchecked.
        9. Click OK.

      2. In 8bit mode:
        1. Make two copies of the current image (one copy if working on a single-layered document).
        2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
        3. Select the Low Frequency layer.
        4. Run the Gaussian Blur filter at your separation radius.
        5. Select the High Frequency layer. Set its blend mode to "Linear Light".
        6. Open the Image->Apply Image command.
        7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is not checked.
        8. In the Blending box, choose "Subtract". Opacity should be 100%, Scale 2, Offset 128. Preserve Transparency and Mask.. should be unchecked.
        9. Click OK.

      These methods result in a reconstruction with a maximal error of 1 level difference in each channel (that is, a 1/256 maximum shift in 8bit; a 1/32769 shift in 16bit). The average shift is 0.49 with a StDev of 0.50 and Median 0. In less mathematical terms, it is functionally lossless in 16bit.

      To perform Highpass filtration into two layers using the High Pass Filter:
      1. Make two copies of the current image (one copy if working on a single-layered document).
      2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
      3. Select the Low Frequency layer.
      4. Run the Gaussian Blur filter at your separation radius.
      5. Select the High Frequency layer. Set its blend mode to "Linear Light".
      6. Choose Image->Adjustments->Brightness Contrast.
      7. Check the "Legacy" option.
      8. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
      9. Click OK.
      10. Run the High Pass filter at the same radius which you used in step (4).
      This method works in all bit depths, and results in a reconstruction with a maximal error of 1 level difference in each channel (that is, a 1/256 maximum shift in 8bit; a 1/32769 shift in 16bit). The average shift is 0.54 with a StDev of 0.59 and Median 0. In less mathematical terms, it is functionally lossless in 16bit. In 8bit, it is just slightly (yet probably meaninglessly) inferior to the 8bit Apply Image technique.

      To perform Bandpass filtration with a single layer:
      1. Make a single copy of your image.
      2. Label this layer "Bandpass".
      3. Choose Image->Adjustments->Brightness Contrast.
      4. Check the "Legacy" option.
      5. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
      6. Click OK.
      7. Run the High Pass filter at the radius for the lowest frequency which you want to be visible (remember, highpass filters keep frequencies above a threshold value).
      8. Run the Gaussian Blur filter at the radius for the highest frequency which you want to be visible.
      9. The Bandpass layer is your bandpass'd result.
      This method results in a bandpass which is within 1 level of an 'ideal' Gaussian separation in any bit depth. Again, functionally perfect in 16bit, and almost always close enough in 8bit. It will be rather low contrast by default (a necessary by-product of allowing fine detail retention), which you may want to augment with another B/C adjustment or with normal curves [16bit has a huge advantage here of course]. It is also worth remembering that high frequencies are low radii, and that low frequencies are high radii in the Photoshop context. This is a white lie which I'd hoped to only begin discussing tomorrow, but as it's confusing a few folks today, we'll get it out there now. The discussion of why that is will still remain for later.
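
      For those following along in code, here's the whole single-layer bandpass as arithmetic (same modeling assumptions as before - scipy's Gaussian playing both PS filters):

      ```python
      import numpy as np
      from scipy.ndimage import gaussian_filter

      img = np.random.rand(64, 64) * 255
      r_hp, r_gb = 16.0, 2.0     # pass the band between these two radii

      x = (img - 128) * 0.5 + 128                 # steps 3-6: contrast -50
      x = x - gaussian_filter(x, r_hp) + 128      # step 7: High Pass at r_hp
      x = gaussian_filter(x, r_gb)                # step 8: Gaussian Blur at r_gb

      # The result is a (blurred) difference-of-Gaussians band at half
      # contrast around 128 - hence the low-contrast look described above.
      ideal = (gaussian_filter(img, r_gb)
               - gaussian_filter(gaussian_filter(img, r_hp), r_gb)) / 2 + 128
      print(np.abs(x - ideal).max())              # ~0 in floating point
      ```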

      To perform Bandstop filtration with a single layer:
      1. Make a single copy of your image.
      2. Label this layer "Bandstop".
      3. Set the layer's blend mode to "Linear Light".
      4. Invert the layer (Image->Adjustments->Invert).
      5. Choose Image->Adjustments->Brightness Contrast.
      6. Check the "Legacy" option.
      7. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
      8. Click OK.
      9. Run the High Pass filter at the radius for the lowest frequency which you want to block.
      10. Run the Gaussian Blur filter at the radius for the highest frequency which you want to block.
      11. The Bandstop layer is now acting as the bandstop for your image.
      This method results in a bandstop which is within 1 level of an 'ideal' Gaussian separation in any bit depth. Again, functionally perfect in 16bit, and almost always close enough in 8bit.

      ...

      Wow. We just covered an awful lot.

      I'm going to stop here for today to give you time to digest what you've just read, to allow you time to ask more questions either directly or in the forums, so that you can point out what I'm sure are a plethora of typos in the above, and so that I can get to another soccer game :). We'll resume tomorrow with a discussion of what all of this can be used for in practice (including determination of separation radii!), discussion of advanced application (multiple-radius separation, etc.), and some examples of how you can automate many of these processes. Monday will still be the day when I answer as many questions as I receive, and when I will also reveal what white lies remain in this text.

      Thank you for your patience, and thank you for reading.

      And a second thanks to mistermonday for pointing out a gross and oft-overlooked typo in the bandstop instructions above - my apologies for the oversight!

      VF - Tools of the Trade

      31 August 2010

      We continue our discussion on visual frequencies by examining the "tools of the trade" which we'll use to bend these ideas to our will in practical retouching. To do so, though, we'll start like we did last time by introducing more traditional audio equivalents as well as some definitions.

      In principle, we will demonstrate that, just as we were adding frequencies together to form new ones before, we can subtract components out of a whole. Consider the figure below adapted from yesterday. By subtracting the low frequency component out of the whole, we are left with a high frequency signal.

      Sound subtraction demo

      We can do that same thing with an image, subtracting out the low frequency portions to leave us only with the high frequency:

      Image subtraction demo

      This should be fairly intuitive - if we can add two things together to get a sum, we should be able to tease them apart too. But before we get too far into that, we really do need to spend some time going over the definitions of a few terms to make sure that we don't confuse things later on. For reference, all of the terms I'm going to use are from general signals processing, and are therefore applicable to sound as well as to images.
      • Bandstop filter - A filter which stops a frequency band from passing through it. Generally, which frequencies are blocked is selected (or 'tuned') by the user.
      • Bandpass filter - A filter which allows only a select frequency band to pass through it. Like the bandstop filter, the range which is allowed to pass is controlled by the user.
      • Lowpass filter - A filter which allows only frequencies which are lower than a selected value to pass through.
      • Highpass filter - A filter which allows only frequencies which are higher than a selected value to pass through.
      Let's put those in context. Let's say that I have an arbitrary sound source which consists of four component tones as below:
      • a 25Hz tone
      • a 200 Hz tone
      • a 1,000 Hz tone
      • and a 10,000 Hz tone
      Now, if we apply a highpass filter set to 500 Hz to that sound, which components are going to be left in the result? Only two portions - the 1,000 Hz and the 10,000 Hz tones. By allowing only frequencies above 500 Hz to pass through, we have eliminated the other two entirely from the result.

      On the other side of things, by applying a lowpass filter with the same setting (500 Hz), we can eliminate the 1,000 Hz and the 10,000 Hz tones while keeping the 25 Hz and 200 Hz components (those frequencies which fall below the cutoff).

      We'll take this one step further. If I apply a bandstop filter to this sound, configured to block from 100-5,000 Hz, which components will be left? Because I am blocking everything from 100 Hz up to 5,000 Hz, I'm left only with the 25 Hz and 10,000 Hz tones in my final sound. Equally, if we were to apply a bandpass filter with those same settings, we would have eliminated the 25 Hz and 10,000 Hz components while retaining the 200 Hz and 1,000 Hz portions.
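
      If you'd like to see this for yourself, here's the four-tone example run through 'ideal' versions of each filter via the FFT (numpy only - real filters roll off more gently than this):

      ```python
      import numpy as np

      fs = 44_100
      t = np.arange(fs) / fs                  # one second of audio
      tones = [25, 200, 1_000, 10_000]
      signal = sum(np.sin(2 * np.pi * f * t) for f in tones)

      freqs = np.fft.rfftfreq(len(t), 1 / fs)

      def survivors(keep):                    # keep: boolean mask over freqs
          filtered = np.fft.irfft(np.fft.rfft(signal) * keep)
          power = np.abs(np.fft.rfft(filtered))
          return [f for f in tones if power[int(f)] > 1.0]

      print(survivors(freqs > 500))                         # [1000, 10000]
      print(survivors(freqs < 500))                         # [25, 200]
      print(survivors(~((freqs > 100) & (freqs < 5_000))))  # [25, 10000]
      print(survivors((freqs > 100) & (freqs < 5_000)))     # [200, 1000]
      ```

      (The indexing works because a one-second signal gives FFT bins exactly 1 Hz apart.)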

      ...

      Whew - you made it! Now let's apply those definitions to Photoshop terms so that we can get back to something fun, shall we?

      I don't think you'll be surprised, but the Photoshop High Pass filter is.... a highpass filter! Shocking, I know. On the other hand, what a lot of people don't know is that Gaussian Blur is its exact opposite - it is a lowpass filter.

      Now, Photoshop doesn't have bandpass or bandstop filters per se, but that doesn't mean we can't create them ourselves. Think about it. A bandpass filter is nothing but lowpass and highpass filters operating on the same source. So, if we apply the High Pass filter to an image, followed by a Gaussian Blur, we have selected (or bandpassed) all the frequencies between them, creating a layer which contains only those portions of the image.

      So what about the bandstop filter? Well, let's think for a second. From its definition, a bandstop subtracts out the frequencies which we've selected. And in the paragraph above, we figured out how to create a layer which only contains those frequencies. Now, if we go back to arithmetic (yes, it's math, but hold on - it's easy math), you'll remember that subtracting one value from another is the same as adding its inverse. So, if we invert (Image->Adjustments->Invert) the layer which we created above, we'll have transformed it from a bandpass filter into a bandstop layer. Neat, huh?

      Let's look at some examples again before we wrap up for today. Some of what I'll show you will look very weird, but please accept it for what it is - we'll get into applications by the end of the week. For now, just focus on understanding what's going on with the image and what we're doing to get there.

      The first series is an image you've already seen - the difference is that you now know what you're really looking at. The left is the whole image. In the center, the original image after we've applied a lowpass filter. On the right, the original image after applying a highpass filter.

      Sound subtraction demo

      Next, we see the original image on the left. In the center, I have applied a bandpass filter to the image, allowing through only select intermediate frequencies. The right shows what happens to the image when I transform the bandpass filter into a bandstop filter. Crazy looking, isn't it? But I promise, it's going to be something you'll love before long.

      Sound subtraction demo

      We'll stop here for tonight. I've given you a lot of information to chew on and throwing too much more at you now is as likely to make things worse as it is better. Consider what we discussed, review it as you have time, and feel free to ask questions if you have them in your forum of choice. When we come back on Thursday (tomorrow is a soccer game!) we'll jump into how we actually go about making these filters in Photoshop.

      So that you know what we'll be covering generally, let me give you the tentative schedule for the rest of this series:
      • Saturday (was Thursday) - The Mechanics - the process to actually apply these filters in PS - there are some sticking points!
      • Sunday (was Saturday) - Why Are We Doing This? - how these techniques can be used in real-world retouching and how to make the process easier for yourself
      • Monday (was Sunday) - Dirty Truths and Dirty Tricks - highpass was just the beginning + all my lies laid bare
      • Monday onward - Q & A - whatever you ask!
      (schedule subject to change depending on Hurricane Earl and
      whether Pepco is actually ready for downed lines this time)

      Visual Frequencies

      29 August 2010

      Few subjects have gotten as much forum attention in the past couple of years as the 'awakening' surrounding the use of visual frequencies in retouching. I say 'awakening' of course because, while new HDR software is seemingly being released every day, visual frequencies have always been a part of the images we work on. Up until now, though, only a very few retouchers knew they existed, and even fewer know, even now, how to fully employ them in production work.

      So, my wife having left me for some high-speed training this week, I want to make use of the free time by embarking on a short series to try to make up for where my previous attempts to tackle the subject have fallen flat.

      Before we get started, though, this is a very complex subject, and as such, I am very much going to be glossing over a lot of the details in the first few posts in order to convey the basic principles. The truth, as they say, is ugly, and so we'll save it for when you have the foundation to tackle it properly.

      For most people, our familiarity with frequencies is in terms of sound. We all know that birds chirping, glass breaking, and children screaming are (typically) high-pitched (or high-frequency) sounds. Equally, we associate sounds like explosions, fog horns, and books dropping as being low-frequency sounds. But what does that have to do with images?

      Sine Wave
      Well, we have to go back to high school physics for that one, so bear with me here - it's been a while for both of us. Do you remember what a low frequency sound 'looks' like, with a longish wavelength, or period? The figure at right (generated with ASU's J-DSP Editor) shows a graph of such a generic low frequency tone.



      Sine Wave
      And what good would an arbitrary 'low' frequency tone be without a complementary 'high' frequency tone? The figure at left shows just such a tone, having a frequency which is twice that of our first sample. [Don't worry - I know this is boring, but it is leading somewhere good!]



      Sine Wave
      Now, no one likes to sit around listening to single-frequency hums day in and day out, so what is happening physically when we mix two sounds together? You might remember that the two components can combine either constructively, emphasizing one another, or they can combine destructively, canceling one another out. The figure at right demonstrates what happens when we combine our low and high frequency sounds from above. Note how they combine constructively and destructively, depending on their relative values.



      Sine Wave
      Sine Wave
      Sine Wave
      Let's try applying this in a more visual sense. To do so in Photoshop, I constructed the figures at left, which consist of nothing but vertical bars evenly spaced across the screen. In a second layer (also shown at left) within the same document, I created another series of bars, this time twice as wide as those in the first. These should be considered as analogous to the equivalent sound waves which we looked at above. If we combine them in Photoshop (a process which we will go over later), we should get something similar to the combination of sounds from before. In fact, the third figure reflects just that - an eerily close replication of the sound pattern. Pay close attention to the way that the highs and lows combine just as they do in the audio signal. And while this example is simply one-dimensional and wholly contrived, it is a process which occurs across as many dimensions as we feed it - highs continue to build on other highs, and to cancel with lows.
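
      If you'd like to reproduce the bar experiment numerically, here's a one-dimensional numpy version - two square waves, one twice the width of the other, mixed around middle gray:

      ```python
      import numpy as np

      x = np.arange(256)
      fine   = np.where((x // 8)  % 2 == 0, 32, -32)   # narrow bars
      coarse = np.where((x // 16) % 2 == 0, 32, -32)   # bars twice as wide

      combined = 128 + fine + coarse                   # mix around 50% gray
      print(np.unique(combined))   # [64 128 192]: cancellation, neutrality,
                                   # and reinforcement, just like the audio
      ```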


      At this point, it would be reasonable if you're thinking, "Gee, it's great that you can compose these frequencies like that Sean, but I'm not making photographs - I'm retouching them." Here's the magic part. Just as every sound which your computer microphone records can be broken down into its component frequencies using processes which are the reverse of what we did above (we'll talk about them later), we can use Photoshop to break images down into their component frequencies. Let me show you what I mean.


      The triptych below shows a breakdown of a shot of DC United player Chris Pontius as he gains possession during an MLS match (yes, it's a 'real' photo). On the right is the image as output from Lightroom. On the left, I've used the Gaussian Blur filter to show only the lower frequency portions of the image; in the middle, only the high frequency portions. By combining the two back together, we can recreate the original. I will tell you right now that we can do this very, very accurately in Photoshop - at least as accurately as we can switch color modes.


      DC United's Chris Pontius

      This next image does it again, but this time I've broken the image into three different segments, combining back to the same source image. For now you'll have to take my word that we can do this as many times as we like for an unlimited number of separations, and really limitless possibilities for retouching.


      DC United's Chris Pontius

      Over the next few blog posts, I'll get into the hows, the whys, and above all things the details of this, but for a moment let what we just demonstrated sink in. Where many of us grew up in the retouching world with Margulis and Krause teaching us the 'revolutionary' idea that an image could have 10, 13, maybe 20 or more channel-based representations of itself; this idea represents the ability to create as many more permutations as we could ever want. You can look at an image not just in terms of additive or subtractive color; not just luminance and chrominance; nor even hue, saturation, and lightness - no, you can combine these with size; even with shape. We have a lot more to talk about, but all in good time. If you're too anxious to wait, head on over to ModelMayhem to read the "HighPass Sucks (+ solution)" thread which got a lot of this hubbub started; otherwise, I'll hope to see you again here soon.



      ...

      Addendum: Much of the above was written hurriedly. If there are typos, I would appreciate your help in identifying them. Part of the rush has been that it was originally my intention to make this a video tutorial series, but I'm embarrassed to admit that I no longer know of any (free) utilities for generating and mixing constant audio tones (demonstration of mixed audio being the biggest boon in moving to a video format). If you know of such a utility (which is also GUI'd and easy to use), please drop me a note!

      CMYK in RGB

      18 May 2010

      I'll write more later, but I put up an action for converting RGB to CMYK from within an RGB file. Read more and download the action here.

      Photoshop's New Curves

      13 March 2010

      As someone who works primarily in 16bit mode for my editing, I've become increasingly frustrated with the Curves dialog's anachronistic dedication to an 8bit way of life. It's time for an update, and I'm going to take the next few paragraphs to explain why as well as what I'd like to see done in CS5.

      What's wrong with the current setup? Because the current dialog is built around an '8bpc' interface (note: the backend operates with a higher, bpc-respecting precision), we can run into a few problems:

      1. There's no way to respect middle gray. Adobe implemented '16bit' editing at 15bpc specifically to allow a true middle gray - but it's almost impossible to maintain when applying a curve.

      2. You can't change the display size for the window. No zooming in to tweak your curve, no blowing it up for projection / display / teaching purposes, etc. A PITA for projection, IMO.

      3. You end up restricted from placing points closer than ~5 mapping values apart, remanding fine control of values to multiple curves or to masked combinations of the same - even though your dataset may allow for, and benefit from, greater resolution.
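
      To make problem #1 concrete, here's a quick back-of-the-envelope sketch in Python. I'm assuming the dialog's 0-255 points map to the 15bpc backend by simple scaling - my guess at the behavior, not Adobe's documented internals:

          # 15bpc data runs 0..32768, so true middle gray is exactly 16384.
          # An 8bit curve point can only land on multiples of 32768 / 255:
          for v in (127, 128):
              print(v, round(v / 255 * 32768))   # -> 127 16320, 128 16448
          # Neither lands on 16384; middle gray can't be pinned from the dialog.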


      What would be better? Simply map the backend directly to the interface. Start with the end values: instead of always being 0 and 255, they become 0 and (1.0 * MODE_MAXVALUE). For 16bpc RGB processing this would mean values from 0 to 32768 (with a pinnable 50% gray!); for 32bpc, 0 to 1 (pinnable 0.5); and, just as now, 8bpc would be 0 to 255. CMYK and LAB would also get the option to display values of 0 to 100 and -1.0 to 1.0 as appropriate. Everything between the end values is of course interpolated to the display size.
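
      As a sketch of what I'm proposing (MODE_MAXVALUE and friends are my own shorthand, not anything from an Adobe SDK):

          # Keep curve points normalized to 0.0..1.0 in the backend, and scale
          # them to the active mode only for display.
          MODE_MAX = {'8bpc': 255, '16bpc': 32768, '32bpc': 1.0}

          def display_value(normalized, mode):
              """Map a backend point (0.0..1.0) to the value shown for this mode."""
              return normalized * MODE_MAX[mode]

          print(display_value(0.5, '16bpc'))   # 16384.0 - a pinnable middle gray
          print(display_value(0.5, '8bpc'))    # 127.5 - why today's 8bit UI can't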

      Better still, once the interface is relational between display size and the actual data format, it becomes simple to make it scalable: the user can be allowed to resize the curve window to any size desired, to zoom in and out of the curve à la online mapping programs (scroll wheel!), and to place points as close to one another as desired (accepting that maximum zoom becomes 1-to-1 mapping with the actual data resolution). The user might want an option to display curve values as if mapping to different bit depths (8bpc 'traditional', 16bpc for reference, 0.0-1.0 for geeks), and this should again be easy to implement through an addition to the Options dialog.

      The one difficult part about this may be the ACV Curves files. I've not been able to dig up the internal data structure, though I suspect - given Adobe's historic efficiency and a bit of prodding around a few files - that it simply stores the curve's 8bit values. This would obviously require revision, likely by offering a second data format for newer curves which uses a pair of floating point values for each point on the curve. There's an outside chance that - depending on how the files are processed internally and whether there is an internal EOF marker - they could store two datasets within the same file, the first down-resolved for legacy purposes and the second containing the floating-point values. That part is up to Adobe, though.
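
      For what it's worth, here's how such a file might be read if my suspicion is right - big-endian 16bit words holding a version, a curve count, and per curve a point count followed by (output, input) pairs in 0..255. This layout is an educated guess from prodding at files, not a published spec:

          import struct

          def read_acv(path):
              """Parse an .acv file under the guessed layout described above."""
              with open(path, 'rb') as f:
                  version, n_curves = struct.unpack('>hh', f.read(4))
                  curves = []
                  for _ in range(n_curves):
                      (n_points,) = struct.unpack('>h', f.read(2))
                      curves.append([struct.unpack('>hh', f.read(4))  # (output, input)
                                     for _ in range(n_points)])
              return version, curves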

      If it's something you'd like to see, please let Adobe know.

      'Perfect' Sharpening pt. 2 - Hardware augmentation

      05 March 2010

      In the last mini-article, I discussed the possibility of employing a tool already in wide photographic use for the purposes of recovering focus lost to the AA filter, optical imperfections, demosaicing, etc. But what about camera shake - how do we solve that?

      Well, we could place patterns of known size throughout the scene to capture motion and generate a compensatory PSF for each frame, but this suffers from the logistical burden of needing to carefully place them in each shoot and the practical one of needing to then clone them out of the final image. We could use a variety of deconvolution / PSF estimation techniques to estimate the amount of movement which occurred, but these are both computationally (very) expensive, as well as fraught with problems of accuracy.

      Instead, what if we just used the camera's IS data? The camera (or lens, depending on your system) always does its best to compensate for linear and angular motion, but as it can only estimate the needed correction during the shutter exposure, it is (almost) always imperfect. It ought, however, still be able to record its movement during the exposure, and if the firmware were directed to embed that output in a captured file, we could use that data to calculate sensor motion, combine it with our previous PSF, and in so doing arrive at as near a perfect reconstruction of scene sharpness as I can (currently) wrap my tiny little mind around. I believe this will require the inclusion of an additional ADC in most systems to record this data, but it shouldn't need to be anything as expensive as the ones used for the sensor data - 14 bits of angular precision would be a bit over the top. With the growth of the P&S IS market, as well as the rumors of pending EVIL (Electronic Viewfinder, Interchangeable Lens) rangefinder announcements, the companies stand to make a lot if they can provide DSLR-matching sharpness in this way.
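
      No camera exposes this data today, so any code is necessarily hypothetical, but the processing might look something like the sketch below: rasterize the recorded motion trace into a blur kernel, then deconvolve with it (Richardson-Lucy standing in for whatever estimator a vendor would actually ship):

          import numpy as np
          from skimage.restoration import richardson_lucy

          def psf_from_is_samples(dx, dy, size=25):
              """Rasterize a (hypothetical) sensor motion trace into a blur kernel.

              dx, dy: per-sample sensor offsets during the exposure, in pixels."""
              psf = np.zeros((size, size))
              cx = cy = size // 2
              for x, y in zip(dx, dy):
                  psf[cy + int(round(y)), cx + int(round(x))] += 1.0
              return psf / psf.sum()              # normalize to unit energy

          # img: float grayscale in [0, 1]; run per channel for color.
          # restored = richardson_lucy(img, psf_from_is_samples(dx, dy), num_iter=30)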

      If you want to see it happen, do still contact Adobe et al. about the previous suggestion for CC-based correction, and then talk to your favorite hardware manufacturer about making mid-exposure IS data available to the post-processing engine. Maybe next-generation chips will even allow for in-body correction on this same basis.

      (The photo up top was not recovered with this or any related technique - it's just here to add some color.)

      Edit: It turns out that Microsoft had much the same idea! I swear, I had no knowledge of their working on this - if I had access to pre-print SIGGRAPH articles, I would be a very, very happy camper and probably wouldn't have time to blog so much :).

      Perfect Sharpening

      20 February 2010

      A Proposal for Perfect Edge Sharpening Using COTS Equipment - Software Implementation Required
      Sean Baker, 20 February 2010

      Summary
      It is possible, through software implementation, to use existing COTS equipment to create a 'perfect' recreation of scene sharpness for any digital imaging combination, accounting in one step for lens imperfections, diffraction, low-pass filtering, and demosaicing. Image data truly lost by the imaging system is not proposed to be recoverable, but restoration of all that which was originally transmitted is. Suggestions for implementation are provided; specific code and algorithms are not.

      Background
      In the digital photography world, it is common for photographers working in certain fields to photograph color charts so as to later ensure that colors in the scene, when rendered in the final digital image, match their color in life - controlling for variance in sensor color sensitivity, color casts in the lighting equipment & modifiers employed, environmental reflections, etc. The most common such tool is a GretagMacbeth ColorChecker - a card consisting of 24 swatches of known, factory-inspected color which are then used by software such as X-Rite's own application, Adobe's DNG Profile Editor, or various scripting solutions for Adobe Photoshop to render color perfectly after its RAW capture.

      It is also known that digital imaging systems, by their nature, have a certain degree of 'softness' in their RAW files as a result of optical imperfections in lenses, the low pass filters employed to prevent moiré patterns, the Bayer array itself, etc. This softness leads many experts in the field to recommend a 'capture sharpening' step in post-processing, so that all subsequent actions taken are based on the most accurate version of the scene available.

      Together, these two facts present an opportunity: combine them, and one can create a reconstruction of the imaging system's imperfections, allowing a theoretically 'perfect' sharpening during post-production.

      Concept, Implementation
      By using a frame-filling (or nearly frame-filling) shot of a color checker, the edges of the color swatches can be used to determine both the horizontal and the vertical components of the system Point Spread Function (PSF) across all parts of the image area. The advantage here over other point- or line-based systems is that correction can be made for each part of the frame rather than generalized over the whole (consider that the MTF of a lens always changes, generally degrading radially from the image center). As each component edge of the CC card is sampled, the edge width is measured and stored in a correlation table along with its image position. The user at this point might also be given the option to provide additional CC samplings for additional data points. Upon completion of the table, a fitting function (likely - but not necessarily - a 2nd-order polynomial) is generated from the table and used to calculate horizontal and vertical PSF components at each pixel position in the image. These fitting functions ought to be savable and transmissible in the same manner as presets in Adobe Lightroom, Curves in Adobe Photoshop, etc.
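
      To give a flavor of the fitting step, here's a hedged numpy sketch: a least-squares fit of measured edge width over image position, using the 2nd-order polynomial form suggested above (the sample table is assumed to come from the swatch edge measurements):

          import numpy as np

          def fit_psf_width(xs, ys, widths):
              """Fit w(x, y) = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 by least squares."""
              A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])
              coeffs, *_ = np.linalg.lstsq(A, widths, rcond=None)
              return coeffs

          def width_at(coeffs, x, y):
              """Evaluate the fitted edge width at any pixel position."""
              return coeffs @ np.array([1.0, x, y, x * x, x * y, y * y])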

      Sharpening is now performed to restore scene sharpness. In the actual software implementation, it is desirable that the user be given a familiar 'Strength' setting, allowing him / her to use more or less of the sharpening than the 'ideal', as well as to include at least a rudimentary noise reduction function to exclude noisy pixels from the sharpening process. [It should be noted that if something more advanced than a median filter is to be employed for the NR mentioned, it would again be ideal to sample from the known darker and / or bluer portions of the CC chart so as to determine the 'baseline' noise present.]
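
      The 'Strength' setting itself is simple enough - a linear blend between the original and the fully restored frame, with 1.0 being the computed 'ideal' (a sketch, with extrapolation past 1.0 allowed for those who want to overdrive it):

          def apply_strength(original, restored, strength=1.0):
              """0.0 = untouched, 1.0 = full correction, >1.0 = overdriven."""
              return original + strength * (restored - original)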

      Recommendation, Closing
      As someone who values image sharpness - or at least the option of having it - I would pay good money to have a product such as this in my workflow. As competition within the digital postproduction realm increases, companies like Adobe would do well to internalize such a feature in future revisions of their products (namely LR and PS) to maintain their position as the 'gold standard'. As a user, if you feel this would be a valuable addition to your workflow, I strongly suggest you contact Adobe and let them know your thoughts. They'll only be able to devote the time if there is a genuine clamoring for it, so it's up to us to make enough noise to be heard.

      Afterword
      It's worth noting that the idea outlined above will still be imperfect (which is why a 'Strength' control becomes so important). It will not account for very low frequency contrast differences across the imaging area. As well, because the focus distance for a frame-filling color checker is likely to be considerably different than the subject distance in the actual scene, the optical system will move and its MTF will be slightly altered in the process. This is unavoidable, but it also presents another possibility to be explored. Using an interface similar to that of PictureCode's Noise Ninja, wherein problem noise areas are selected and controlled for by the software, the user might select those edges within a scene which are in focus and which ought to be 'perfectly' sharp (natural edges, etc.). By correlating these across the scene in the same manner as described above [correlation table, fitting function, etc.], the scene might be restored better than by any conventional method.

      The horizontal and vertical components might also be supplemented, either in the manner described just above or by allowing considerable rotation of the CC during its capture, allowing calculation of more axes of the PSF. While this will considerably increase the level of computation involved, for some users it may be worth the time & effort.

      Update:
      Thanks to Bob Freund over at MM, I've been reminded that I should have been explicit in stating that all of the above measurement and sharpening can and should only be performed after the correction of chromatic aberration, as its presence could confuse or wholly ruin any measure of the actual PSF. An algorithm could be developed to account for and overcome its presence, but better to just correct it in the first place - after all, who cares this much about sharpening but not about correcting CA?
      -