A new family member...

05 March 2011

It has Xcode, too. I won't say what that's for just yet, but let's hope it pans out like it could, eh?

When the Channels Come Marching Home Again...

02 November 2010

I was having a conversation with my friend Natalia Taffarel last night when she brought up the utility of automatically generating many of the possible channels which we can use to mask, correct color, etc. And since it's the cool thing to do (everyone makes a set!) I decided to be cool too. It probably won't make me as cool as if I finally paid attention to Facebook, but hey, it's a start :).

The first edition - linked below - will generate 12 additional channels of your image (for a total of 15, coming from RGB):
  1. Hue ("H")
  2. HSL Saturation ("HSL - S")
  3. HSL Lightness ("HSL - L")
  4. HSB Saturation ("HSB - S")
  5. HSB Brightness ("HSB - B")
  6. LAB Lightness ("L")
  7. LAB A ("a")
  8. LAB B ("b")
  9. Cyan ("C")
  10. Magenta ("M")
  11. Yellow ("Y")
  12. Black ("K")
The only reason that my action set is different from the 3 or 4 others you probably already have is that it uses the gamut-preserving CMYK-in-RGB method which I wrote about previously, giving you full detail in your added channels without worrying about gamut clipping.

Please note: This action set requires the HSB / HSL optional plugin which can be found on your Photoshop DVD or downloaded from Adobe support.

Late this week or early next I'll finish up a second set so that you can get the channels out individually if you like vs. having to go all or nothing.

So without further ado, I give you... the 15 Channel Salute!

CMYK in RGB - Explained

31 October 2010

When I first posted the action set for this, I promised I'd write more later about it.  Since then, I've also received a few inquiries asking that I explain what the actions are doing in order to achieve something which many believed wasn't possible.  Today, after much, much too long a delay I'll do that - it seems like an easy topic in comparison to what I last wrote about!  I'll even leave out the math... well, mostly!

To understand what's going on, we need to discuss what happens when we normally convert an image from RGB to CMYK.  In the first place, the color modes themselves are opposite one another - where RGB is additive, CMYK is subtractive (or multiplicative, depending on the verbiage you prefer).  This is fairly straightforward to understand, as adding more light in RGB makes things brighter (which is intuitive), while adding more ink (which absorbs, or subtracts, more light) in CMYK makes things darker.  That's the change in color mode.  [If you'd like to read more or see a video, I suggest Joe Francis' discussion of it here].

But traditional conversion to CMYK also involves converting to a different color space.  Not surprisingly, standard printing presses can't reproduce the same range of colors which our increasingly wide-gamut monitors can (at least, not at prices most of us can afford).  So conversion to CMYK also involves a color space change which results in the undesirable color shifts which many users end up feeling are just a part of the CMYK color mode.

To be clear: CMYK conversion also deals with the dot gain of the printing press involved, the interplay of the machine's physical configuration as well as the actual density of individual inks.  Like the mathematical difference between subtractive and multiplicative blending, that's beyond what we need to deal with today.

The actions which I presented avoid the issue of color shifts by maintaining the same color space.  Now, there are two ways which we could go about doing this.  In the first, we could pull an old Dan Margulis trick and create a 'false' CMYK profile, spending a lot of time tweaking our color coordinates to give us both a suitably large color space and ink primaries which could accurately create said colors.  But that takes time.  And false profiles confuse people.  And above all, it doesn't give you those channels back in your RGB document - which is what I needed at the time.  So let's go with option two.

In option two, we calculate the CMYK equivalent values manually as if we were acting as PS's conversion engine.  I'll spare you the actual math, but the crux of the concept is to act like we're already in a CMYK document.
  1.  We start (because we have to) with the hard part - creating the black layer.  Whereas each of the other channels has a direct analog in RGB (C->R, M->G, Y->B), K has nothing which we can easily relate it to.  Because the CMYK color mode is subtractive, that means that the equivalent K value could be anything from 0 up to the lowest value of the other "inks" (C, M, or Y).  To find that 'maximum black' value, we merge the inverse of each of the R, G, and B channels into a new channel using the Darken blend mode.
  2. Next, we let the user decide whether they want to lighten that black value at all before continuing by giving them a standard Curves dialog.  This lets them tamp down that black to something which gives more emphasis to the color channels should they desire it.
  3. Then we tell PS to show us what the 'CMYK' image would look like without its black component, by subtracting it out of the whole.  Conveniently, this creates the R, G, and B inverses of the C, M, and Y channels which we're after and so we can simply make inverted copies of each to create our final channels - it's that easy!
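
For the numerically inclined, the three steps can be sketched in a few lines of code.  This is a Python / NumPy sketch of the arithmetic, not the actual action: the interactive Curves step is skipped, and the 8-bit math is my own reading of the Darken-merge / subtract sequence described above.

```python
import numpy as np

def cmyk_in_rgb(rgb):
    """Sketch of the gamut-preserving CMYK-in-RGB split (8-bit values).

    Step 1: K is the 'maximum black' -- the Darken-merge of the
    inverted R, G, and B channels, i.e. min(255-R, 255-G, 255-B).
    Step 2 (the user's optional Curves tweak of K) is skipped here.
    Step 3: subtract K from the total 'ink'; what's left per channel,
    relative to the inverted R/G/B, gives C, M, and Y.
    """
    rgb = rgb.astype(np.int32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    k = 255 - np.maximum(np.maximum(r, g), b)   # Darken-merge of inverses
    c = 255 - r - k                             # = max(R,G,B) - R, never < 0
    m = 255 - g - k
    y = 255 - b - k
    return c, m, y, k

def back_to_rgb(c, m, y, k):
    """Reconstruction: each RGB channel is 255 minus its ink minus black."""
    return np.stack([255 - c - k, 255 - m - k, 255 - y - k], axis=-1)

# Round-trip check on random 8-bit data -- lossless in 8bpc, as claimed.
img = np.random.randint(0, 256, (4, 4, 3))
assert np.array_equal(back_to_rgb(*cmyk_in_rgb(img)), img)
```
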
All of this happens with almost no loss of image fidelity.  In 8bpc color, there is literally no difference between the original image and the CMYK version which we generate.  In 16bpc color we can get errors as high as 64 / 32768, though the average is < 16 / 32768 per channel.  Functionally, it's a lossless conversion while retaining the entire color spectrum which existed in the original image.

A related note and a few completely unrelated observations which I made while writing this:
  • Just because the actions as provided are functionally lossless doesn't mean that the academic in me is satisfied.  When I get some more time I'm going to try to actually make the reconstruction perfect.  If you beat me to it, please let me know :).
  • Don't use the color sampler tool (Info Palette) to test error levels at anything less than 100% zoom.  Otherwise it uses that same awful resizing algorithm which PS uses to preview images on non-HW-accelerated systems in order to estimate what a value might be, not actually reporting the real value to you.  This can give you all sorts of ghosts to chase.
  • There are some idiosyncrasies to the way that Calculations and Apply Image each do what should be the same math.  The differences are small, but real.  If I have time in the future I'll delve deeper into it, but just be aware of it if it's the sort of thing which interests you.
  • As discussed a few times elsewhere, there is a difference in the output between the Image->Adjustments version of the Brightness / Contrast tool and its Adjustment Layer counterpart, specifically in how Legacy-mode Contrast is calculated.  The Image->Adjustments version calculates it based around the actual mean value of the image, while the Adjustment Layer version assumes a mean of 127.5.  The results are the same, albeit offset from one another (in brightness) by the distance of the actual mean from 127.5.  Generally speaking that's not terribly important (though it does make an argument for greater granularity in the PS controls), but I filed a bug report with Adobe just the same detailing the problem and asking that they bring the tools into alignment with one another.  The discrepancy confuses some people horribly.  Vote for me as your CS6 Beta Tester :).
Please feel free to ask if you still have questions!

      VF - Dirty Secrets, Dirty Tricks

      06 September 2010

      Well, the day has finally come. It's time to lay bare all those little white lies which I've fed you so far. I will caution you up front: this will be another moderately technical day. No numbers, and no math, but a bit of cranial expansion just the same.

      Like yesterday, in the interest of leaving this readable in a single sitting, I won't go into any great detail about any one point, instead giving you a quick rundown alongside some external sources for more reading. Always feel free to ask questions, though, so that I'll have more than the 2 questions I have currently to answer tomorrow :).

      So, without further ado, in no particular order, here we go...

      Dirty Secrets:
      • A Gaussian Blur "Wave" is Very Different From a Sine Wave.

        First of all, this does not invalidate the idea of spatial frequencies, of their mixing, or anything else. But it does have some implications for understanding how the frequencies which we're using interact with one another, and how our separations behave. To learn more about what a Gaussian distribution looks like, I recommend this Wikipedia article.

      • The Photoshop Gaussian Blur filter isn't a Gaussian Blur filter.

        Huh? Longtime PS users may remember that GB used to take a lot longer to complete than it does now [on equivalent systems]. And then magically at some point in its history (I honestly can't remember which version it debuted in), lead programmer Chris Cox implemented one of a number of Gaussian approximation functions - functions which give results which are accurate to an actual Gaussian function to within <1% (usually, at least), but which can be performed by the computer 20+ times faster. Again, this doesn't have many real-world consequences for frequency work, but is wonderful geek trivia, and also brings up some ideas which become relevant later.

        It's also worth noting that this gives us a bit of a way around the filter's arbitrary 250px radius limit. A Box Blur (which has a maximum size of 999px), run 3x at the same radius, is roughly the same as running the Gaussian Blur at that radius.

      • Most People Already Use High Pass Sharpening

        It's called the Unsharp Mask filter. Seriously - USM is exactly the same as HP sharpening as performed by the methods outlined in this series. Now, it doesn't have the advantage of being able to run curves against the result to control highlight / shadow, etc., nor is it easy to perform "bandpass sharpening" with it (accentuating a range of frequencies, so as to exclude the highest components [where noise "lives"] from the sharpening process). But, it is an old friend for many of us, and makes "HP vs. USM" debates quite comical after you learn the truth.
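
        To see the equivalence in arithmetic form, here's a toy demonstration (NumPy, one-dimensional for brevity; 'amount' stands in for USM's Amount slider, and the small Gaussian blur for its Radius):

```python
import numpy as np

def gauss_blur(signal, sigma):
    """Tiny 1-D Gaussian blur (stand-in for PS's Gaussian Blur)."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(signal, k, mode="same")

image_row = np.random.default_rng(7).uniform(0, 255, 256)
blurred = gauss_blur(image_row, 3.0)
amount = 0.8   # USM's Amount slider, as a fraction

# Unsharp Mask: original plus Amount times (original minus blur).
usm = image_row + amount * (image_row - blurred)

# High Pass sharpening: build the gray-centered HP layer, then add
# its deviation from neutral gray back into the image.
hp_layer = (image_row - blurred) + 128.0   # 128 = 'do nothing' gray
hp_sharpened = image_row + amount * (hp_layer - 128.0)

assert np.allclose(usm, hp_sharpened)      # same operation, two names
```
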

      • Bandstops Lower Total Image Contrast

        You may have figured this out already if you've followed along closely, but removing a frequency band from an image inevitably results in a loss of some % of the image's total contrast. This is best compensated for with a Curves adjustment, but Brightness / Contrast or Levels can also be used. Generally - for small, localized corrections with bandstop filtration this loss is meaningless and can be ignored; for large moves, though (especially simple bandpassing), it's best to make a correction.
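
        You can watch the contrast loss happen numerically.  This is a NumPy sketch using noise as a stand-in image; the band is formed as the difference of two blurs, which is one way - not necessarily PS's exact way - to model a bandstop.

```python
import numpy as np

def gauss_blur(signal, sigma):
    """Tiny 1-D Gaussian blur (stand-in for PS's Gaussian Blur)."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(signal, k, mode="same")

rng = np.random.default_rng(0)
image_row = rng.normal(0.5, 0.1, 4096)   # noise standing in for image data

# Bandstop: remove the band lying between two blur radii.
band = gauss_blur(image_row, 2.0) - gauss_blur(image_row, 8.0)
stopped = image_row - band

# Measure contrast as standard deviation, away from convolution edges.
inner = slice(64, -64)
assert stopped[inner].std() < image_row[inner].std()
```
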

      • Bandpass Filtering Can Cause Scaling Issues

        This is probably the source of the greatest misunderstanding about any sort of frequency work in skin retouching (aside maybe from the visceral reaction many people have when you tell them that you're going to use a 'blur' filter in high-end work). In short, what looks good at one image size will not always also look good at a smaller size - the interaction of the component frequencies (as well as our ingrained expectation of what things 'should' look like) can make skin which looks flawless at full size appear hideous ('plasticy') when resized. The two best ways of handling this are to either keep two windows open within PS so that you can constantly check what the image looks like small, or to use some form of synthetic frequency replacement to provide enough material to make smaller versions look 'right'.

      • Frequencies Have Color

        This isn't so much a 'white lie', as something which we just didn't bring up. Just as certain types of image components tend to "live" in a range of frequencies, sometimes colors do too. Take for example the red checkering of a tablecloth, the blue reflection of a skylight on a tungsten-lit ball, or a model's red hair against a white backdrop. This can lead to difficulty if we make major changes to an image while being careless in handling such colors. On the other hand, knowing this can be a huge advantage once you've mastered it - say goodbye to color moiré!

      Dirty Tricks:
      • Skin and Smart Objects

        We talked yesterday about how bandstop filters can be used to retouch skin as a "DeGrunge" / "Inverted High Pass" ("IHP") / etc. technique. The greatest difficulty with this procedure is that - for high-end beauty work at least - different regions of the skin will require the removal of different frequencies from an image.

        When you think about it, this makes sense. Not only does the skin have a natural variation in its texture across different parts of the face and body, but just as objects appear smaller the further they are from you, the natural 'frequencies' which make up skin's appearance are also compressed or expanded with varying distance. As a consequence, different portions of the body need different kinds of work (or work on different frequency bands).

        By using a Smart Object copy of the image (or better, just the skin areas), you can quickly duplicate these, change the settings as appropriate, and mask them into your work. Even better, if you're disciplined about using your SOs, then when you go back to make changes in the image itself later, those changes will automatically propagate through, making this a truly "nondestructive" process.

      • Skin and Selections

        One of the best things you can do when you want to use bandstop techniques on skin is to start with a good selection of that skin area (the Select Color Range tool is great for this), and either save it in a channel or simply copy the skin areas into a new layer [be sure to turn on Lock Transparent Pixels if using a separate layer]. By doing this, you keep the frequency filters from sampling non-skin colors in their processing and "bleeding" those into your result, giving you a much better result than you'll otherwise get (the GB filter's edge handling makes this even more important). Indeed, Imagenomic's Portraiture relies on this idea to get its results [see discussion here].

        Thanks to my friend Richard Vernon, I'm reminded that the "Apply Image" version of our separation techniques doesn't play nicely with selections - it doesn't handle the alpha channel (transparency) gracefully.  As such, you need to use the "Brightness / Contrast" version of separating if you mean to use this technique in your skin work.

      • What if We Didn't Use Gaussian Blur to separate?

        Here's one of the 'biggies' - what would happen if we weren't limiting ourselves to separating images with just the method we've been using? I'll let my friend Koray explain in his forum post on the subject. The technical version is that the Gaussian 'kernel' (or 'smoothing operator') is just one sort of 'waveform' which we can decompose an image into. Others like the Median filter (a median operator) and Surface Blur (a bilateral smoothing operator) give results which are more edge aware and gradation friendly - two factors which are immensely valuable in enhancing local contrast (demonstrated by Koray), as well as in separating detail if, for example, we are planning to focus on healing / cloning details to correct blemishes and irregularities.
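
        The underlying point - that any smoothing operator defines a valid separation - is easy to verify.  Here's a NumPy sketch, with a sliding-window median standing in for PS's Median filter:

```python
import numpy as np

def median_smooth(signal, radius):
    """Simple sliding-window median -- a stand-in for PS's Median filter."""
    padded = np.pad(signal, radius, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, 2 * radius + 1)
    return np.median(windows, axis=-1)

signal = np.random.rand(128)

# Any smoothing operator yields a valid separation: the 'low' layer is
# the filtered image, the 'high' layer is whatever the filter removed.
low = median_smooth(signal, radius=4)
high = signal - low

# Reconstruction is exact by construction, whatever kernel was used --
# the choice of operator only changes *where* the detail ends up.
assert np.allclose(low + high, signal)
```
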

      • How About Skin Transplants?

        It's one of those things we don't like to talk about as retouchers (at least not in reference to any particular client), but most of us have had an experience where the subject's skin was just in a horrific state in the original photo. One in which we really wished we could just use another model's skin to cover it with. Well, now that you know how to separate frequencies, and you know what frequencies the skin lives in - you can! [Tip: make sure that you match pore size, lighting angle, light quality (harsh, soft), and skin source (which part of the body) when transplanting.]

      • Blown Highlights

        Much like the 'transplantation' discussion above, by working on two frequency components separately, it's often easier to work with parts of an image which have been blown out in camera - instead of that awful gray mess which the healing tool will often give you, two strokes in two different layers will give you an often very believable recovery.

      • Automation

        Everything which we've discussed can be heavily automated in Photoshop - from detail enhancement to skin smoothing, sharpening to stray removal. I highly encourage you to work with Smart Objects in this to maintain a non-destructive workflow, especially so that you can go back and tweak your results as you refine your understanding of visual frequencies.

        Yesterday I provided a set of actions which do a number of the basic GB separations in PS. I challenge you to make more of your own, incorporating as many or as few of the techniques which we've discussed over the past few days as you like. I further challenge you to share these on your favorite retouching forum(s), and to explain what you've done and why to those who ask. The power to separate detail, to enhance it, to heal and clone it, etc. is as big a deal as first learning to adjust global color and contrast with a curve. Share it.

      In Closing

      I'd like to take a moment to thank everyone over at the ModelMayhem Digital Art & Retouching Forum for their participation in the discussions about these and related topics. If it weren't for their interest in the subject and collaboration in elucidating the details, none of this would have been possible. Head on over when you get the chance and see the amazing work these guys have done, both in terms of retouching itself, as well as automating every aspect of these processes.

      I also want to thank you for your readership over the past week or so as we've gone through what for many of you was likely the most technical discussion of Photoshop you've yet experienced. I sincerely hope that it was helpful. And while my writings on this blog will continue on a multitude of different subjects, I hope that you'll always feel free to ask when you have questions about this topic. As above, this is the beginning of a whole new way of looking at imaging for many of you - one which I hope to make as painless as possible.

      Happy Labor Day!

      VF - Why Sean, Why?!?

      05 September 2010

      After yesterday's marathon session of technobabble and math, it's only fitting that you should be rewarded with an entry today which will be more intuitive and directly beneficial to your workflow. Now, that said, after how precipitously readership dropped off yesterday (I believe in light of the length of the post), I won't be going into such excruciating detail today. Instead I'll make broad strokes and incorporate a few external sources, asking that you tell me where you need more information for a subsequent update.

      One of the basic principles of retouching which I try to impart to people is how important it is to isolate those portions of an image which you want to work on. Sometimes that takes the form of a simple selection; sometimes it's a complex mask; sometimes a color-based selection; sometimes an operation on a channel; and other times, it's a frequency-based operation. Among the things which the last category allows us to do are what you came back for today:
      • Sharpening:

        I'm sure that many of you are familiar with the idea of "High Pass Sharpening", a technique which has been around the internet for about as long as I've been using Photoshop (a long time). In fact, this technique is just what it advertises - amplifying the high frequency portions of the image (by running a highpass filter on a copy of the image) in order to accentuate the detail.

        As it's normally done, though, this technique uses the PS filter naively and so it discards some tonal detail which might otherwise be retained and selectively enhanced. My personal preference when using variants of HP sharpening is to clip a Curves adjustment layer to the high-frequency layer. This allows one to tune the sharpening effect in the highlight and shadow areas separately and achieve just the level of sharpening desired.

      • Detail Enhancement:

        Often mistaken for the singular solution to the "Dave Hill look" (sorry Dave), use of large-radius HP filters to enhance local detail is just an expanded version of the sharpening discussed above (alternatively known as HiRaLoAm). In this case, we're just selecting a larger swath of frequencies to enhance, resulting in that larger 'gritty' look [Calvin Hollywood is another big fan of these techniques].

        Again, though, it's important to use a revised technique vs. simply running a naive HP filter so that you can retain full contrast in the detail - otherwise, what's the point? Also note that, while Linear Light is the way in which we blend the frequencies back in, other blend modes are sometimes preferable artistically (beware that some come with side-effects, especially Hard Light, Pin Light, and Vivid Light).

      • Stray Hair Removal:

        One of the neat facts about frequencies is that certain types of photographed objects (or their details) tend to 'live' within certain frequency bands. Hair, for example, is a very fine detail, and so tends to exist only in higher frequencies. We can use that fact to our advantage by performing a separation as we've previously discussed, and then simply using the healing or cloning brush on the high-frequency layer to remove the hair with no trace that it had ever been there. [And yes, while the healing brush often works for this on the full-frequency image, experienced retouchers know that no tool is perfect and there are situations in which it gets very confused by the larger context of the image.]

      • Skin Smoothing:

        This will be the longest component discussion we have today, but one which has also been the most popular. To start, please take a minute to go read byRo's classic writeup on frequency separation for use in skin retouching over at RetouchPro. He calls it the "quick de-grunge technique".

        Go read it now and we'll resume when you get back.

        Pretty impressive for how quickly he did that (real-world execution can be seen in the work of Natalia Taffarel, Gry Garness, and Christy Schuler). [Oh, and BTW, as of this writing, only one of those three very talented ladies knows what you've already learned - that's how elite your efforts thus far have made you :).]

      • Skin Retouching & Beyond:

        While the above is a brilliant, easy technique, it's actually only just the beginning. What if, instead of simply removing those image frequencies (applying a bandstop), we worked on the "grunge" frequencies with the healing and cloning tools like we talked about doing to remove stray hairs? I won't bore you with detail in this post; suffice it to say that this creates an incredibly believable result without taking as long as conventional methods.

        Even better, this can be used on both layers in order to remove unsightly features (skin folds) by healing or cloning on each of the layers - in the high-frequency you can focus on patching in good texture, while in the low-frequency you're able to focus on getting the overall shape right. [As a bonus, because the low-frequency layer has no detail to it, you don't have to be quite so precise as when working on a single (full-frequency) image].

      • Whatever Else You Come Up With:

        Seriously - the above are just some of the everyday (formerly) difficult tasks in retouching which can be streamlined by incorporating an understanding of visual frequencies. But by no means is that list exhaustive. As we'll discuss in tomorrow's post, the underlying techniques which we've been covering are limited only by your creative application of them.

      Until tomorrow...

      P.S. I did promise you some automation, didn't I? We'll get into a heavy discussion tomorrow, but for now here is a set of actions which perform each of the techniques discussed yesterday. Each assumes that you are in the bit depth it identifies itself with, and that you are running it from the topmost layer. If you are in a single layer document, you will get an error message shortly after running it - this is normal and you should just click "Continue". If you will only be using single-layered documents, you can avoid the message by disabling the "Copy Merge" step. These actions will create all needed duplicate layers for you, and you can turn off the instruction dialogs at any time by unchecking them in the actions panel. Finally, while I have had no difficulty with them, I make no warrant that they will work for you, nor do I warrant that they will not mess up your files. Use them at your own risk.

      VF - The Mechanics

      04 September 2010

      First of all, a note for everyone who's been following so closely - your support means a lot. Further, I apologize for the delay in posting this. Unlike more established bloggers, I'm not just posting up pre-written material. I'm writing this as we go and attempting to respond to what I hear back from you in the process. As such, when life throws me a curve ball, posting gets delayed. You have my apologies.

      Now, before we get into how we do lots of fancy things in Photoshop, this is going to be one of the most intensive days we spend on technical discussion, so let's start by spending a few moments reviewing where we've been so far. First, we demonstrated that (just like sounds) images can be seen (Ha! I kill me!) as being composed of many different frequencies which interact in order to create a whole image. We discussed the definitions for all of the processing tools which we're going to employ - lowpass, highpass, bandpass, and bandstop filters. And we looked at how adding the low frequencies and the high frequencies from an image together gives us the whole:

      DC United's Chris Pontius

      Then we expanded upon this to realize that, like the simpler kinds of math (the good kinds), the order in which we do things is commutative - that is, that subtracting the low frequencies from a whole image is the same as directly extracting its high frequencies through a highpass filter:

      Image subtraction demo

      Most recently, we discussed how the bandpass and bandstop processes can be thought of as being similarly inverse processes.

      So - how do we do it in PS? Do we just use the High Pass (HP) and Gaussian Blur (GB) filters? Unfortunately, no, and the reason why is going to involve some more... math (sorry guys!), and one of those little white lies which I've been telling you up until now.

      To make our first pass at explaining what goes on, let's go back to our audio examples. When we were adding two audio tones together, each of those component sounds had amplitudes between -1.0 and 1.0. Or we might say that each had a range of 2. Because adding them together could give us extremes of 1 + 1 or (-1) + (-1), our result could have amplitudes from -2.0 to 2.0, or a range of 4. In theory, each time we add a sound in, we expand the range of the data which we're trying to handle. In real life, though, we have to keep those values scaled to a range which we can actually work with.
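
      If you want to see those numbers in action, here's a two-tone experiment (NumPy; the tone frequencies are arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
low_tone = np.sin(2 * np.pi * 4 * t)     # each tone spans -1.0 to 1.0
high_tone = np.sin(2 * np.pi * 40 * t)

mix = low_tone + high_tone               # the sum can reach -2.0 to 2.0

assert low_tone.min() >= -1.0 and low_tone.max() <= 1.0
assert mix.min() < -1.0 and mix.max() > 1.0   # the range has expanded
```
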

      What does that mean for images?

      For the purposes of discussion, we'll refer to PS images being able to have levels (the equivalent of amplitudes) from 0 - 255 (a range of 256). In truth, many of you know that with 16 or 32 bit processing you have different ranges, but we'll only use one set of numbers for now.

      Anyway, there are a number of differences between performing operations on sounds and on images in PS. The most significant of these is the fact that images don't (naturally) have negative values. Photoshop doesn't store brightness values of -255, or even of -1 (at least not for our purposes), and the images we work with aren't -255 to 255, -128 to 128, etc. This has some significant implications for how we handle our operations.

      As an example, let's pretend we don't know about that difference and I'll separate an image rather naively. I'm going to use the picture of Santino which we've used a few times so far:


      Now, I'll blur a copy of that image in a separate layer:

      Tino Blurred

      And subtract that from a third copy with the Apply Image command:

      Tino Subtracted

      Not very much like what I've been showing you so far, is it? Sorry about that.

      Here's the problem - do you remember how when we were mixing low and high sounds together, sometimes the high frequency brought the low frequency signal 'up', but at other times it brought it 'down' (and vice-versa)? [go back to review the time-correlated tracks to see what I mean] Now look closely at the result I've shown you above. You'll notice that the result only shows those areas which are brighter than the low frequency version. And this is because we don't have negative values. All of the areas which were darker in the high frequency than in the low frequency have been clipped to 0.

      Take a minute to digest that, because it's as important as it is difficult to understand. The high frequency data doesn't "know" that we need it to occupy a finite space, and it wants to have both positive and negative values, just like it would in real life. Not having negative values means that we need to find another way to record those areas which are darker in the other frequency set. One way of dealing with this is to just take the darker areas of the high frequency data and combine those back into our mix above - we would use three layers to accomplish one separation, demonstrated in the image below. To do this, I created the 'Lighter High Frequency' layer as above, and the 'Darker High Frequency' layer with the Apply Image command (more details later). Take a look:

      Tino Separation

      In the first high-frequency set, our blend mode (which we'll discuss after a bit) is ignoring the black areas while adding the light areas into the final image (adding black to a pixel is like adding zero to a number). In the second high-frequency piece, the dark areas are lowering the final values while the white areas are ignored.
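
      Here's that three-layer reconstruction in numbers (a NumPy sketch; I've modeled the two blend modes as Linear Dodge (Add) and Linear Burn, which is my own reading of the setup - the exact Apply Image details come later):

```python
import numpy as np

rng = np.random.default_rng(1)
orig = rng.integers(0, 256, (8, 8)).astype(np.int32)
low = rng.integers(0, 256, (8, 8)).astype(np.int32)   # stand-in for the blur

# Naive subtraction: anywhere the image is darker than the blur clips
# to 0, and that information is simply gone.
naive_high = np.clip(orig - low, 0, 255)
assert not np.array_equal(low + naive_high, orig)     # reconstruction fails

# The two-layer fix: keep the clipped-away darker areas separately.
lighter = np.clip(orig - low, 0, 255)       # blended as Linear Dodge (Add)
darker = 255 - np.clip(low - orig, 0, 255)  # blended as Linear Burn

# Linear Dodge: base + blend.  Linear Burn: base + blend - 255.
rebuilt = (low + lighter) + darker - 255
assert np.array_equal(rebuilt, orig)        # pixel-for-pixel identical
```
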

      This is technically great because it gives us a 100% accurate de/reconstruction of our image (that is, summing those three layers back together is pixel-for-pixel identical to the original). On the other hand, it's really inconvenient for our high-frequency data to be on two separate layers. How might we get it onto a single layer?

      That leads us to our second technique. In this one, we pretend that we can have both 'positive' and 'negative' numbers in the same layer. To do so, though, we need an arbitrary value which will serve as the '0' point around which positive and negative values will appear. In Photoshop, this is 50% gray - that neutral value which many of you already use as a starting point with Soft Light, Overlay, etc. layers. Photoshop will ignore that middle gray value (it won't change the pixels when we blend with it), but when other values are brighter than 50%, it will lighten the final image while when values are darker than 50%, it will darken the final image. This option is what most retouchers I know do in practice, and what I hope you will settle upon at the conclusion of this discussion.

      Like most things in life, though, this isn't going to come free. In order to put two layers into one as we're discussing, a compromise has to be made. Remember that each separation we make can have the full range of values in it - the sounds could go from -1.0 to 1.0, and our images can go from 0 to 255. In the same way, the high frequency image data can be as much as 255 levels above or below the low frequency values (these also ranging 0-255). In effect, our high-frequency data has a range of 512; not just 256. To compress this down into a single layer, then, we have to sacrifice some level of precision in getting there - we need to compress 512 levels down into 256.

      My preferred method of doing this is to 'scale down' the data - to map the darkest possible dark of the high frequency data to 0, and the lightest possible light of the high frequency data to 255 (128 still being neutral). This preserves all of the finest details in the image, but sacrifices a small amount of its 'smoothness' (numbers later).

      The PS High Pass filter, on the other hand, seems to have been designed for creating lots of rough contrast, and so simply 'lops off' those light lights and the dark darks within the high-frequency data (much the way many of you may be familiar with a color channel 'clipping' when it's over or underexposed). This makes for a more contrasty layer (part of why some people like it so much), but it sacrifices a lot of fine detail in order to get there. To give you a side-by-side comparison of best-possible reconstruction using the default workflow, take a look at a closeup from Tino's uniform (the four stars represent the four MLS Cups which DC United has won):

      Highpass comparisons

      You can see quickly that the High Pass filter's version is far more contrasty right out of the box. Unfortunately, you'll also notice that its reconstruction (ironically) loses high-frequency contrast when blended back in to restore the original image. This isn't to say all is lost for the filter, though. Like adding and subtracting the frequencies from one another, contrast mapping is commutative - we can do things in a different order and still get the same result. In this case, we'll be able to use the High Pass filter and avoid messing with the Apply Image tool (a terrifying experience for many). If we go to Image->Adjustments->Brightness / Contrast and lower the image contrast by 50 (-50) with the Legacy option enabled, we can then use the built-in High Pass filter to get that single-layer high-frequency data while retaining all of that wonderful fine detail contrast.

      Highpass comparisons

      Notice how the results are identical - this is great, both for image quality (obviously) as well as for the automation implications which some of you are undoubtedly already thinking about.

      For now, let's finally go through the step-by-step PS instructions.

      To perform Highpass filtration into three layers:
      1. Make three copies of your image (two new copies if working on a single-layered document).
      2. Label the bottom layer "Low Frequency". Label the middle layer "High Frequency Light". Label the top layer "High Frequency Dark".
      3. Select the Low Frequency layer.
      4. Run the Gaussian Blur filter at your separation radius.
      5. Select the "High Frequency Light" layer. Set its blend mode to "Linear Dodge (Add)".
      6. Open the Image->Apply Image dialog box.
      7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is unchecked.
      8. In the Blending box, choose "Subtract". Opacity should be 100%, Scale 1, Offset 0, Preserve Transparency and Mask.. should be unchecked.
      9. Click OK.
      10. Select the "High Frequency Dark" layer. Set its blend mode to "Linear Burn".
      11. Open the Image->Apply Image dialog box.
      12. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is checked.
      13. In the Blending box, choose "Linear Dodge (Add)". Opacity should be 100%, Scale 1, Offset 0, Preserve Transparency and Mask.. should be unchecked.
      14. Click OK.

      This method works in all bit depths and results in a reconstruction with a mean error of 0 (StDev & median also 0). That is, it is mathematically (and technically) perfect.
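      The fourteen steps above can be sketched numerically. This is a minimal NumPy sketch, not Photoshop itself: a box blur stands in for Gaussian Blur (any lowpass works for the arithmetic), and the names `hf_light`, `hf_dark`, and `recon` are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.int16)  # stand-in image

# "Low Frequency" layer: a 3x3 box blur stands in for Gaussian Blur
pad = np.pad(img, 1, mode='edge')
low = sum(pad[i:i+64, j:j+64] for i in range(3) for j in range(3)) // 9

# Steps 5-9: Apply Image, Subtract the low frequency (negative results clip to 0)
hf_light = np.clip(img - low, 0, 255)

# Steps 10-14: Apply Image, Add the *inverted* low frequency (clips at 255)
hf_dark = np.clip(img + (255 - low), 0, 255)

# Reconstruction: Linear Dodge (Add) the light layer, then Linear Burn the dark one
recon = np.clip(low + hf_light, 0, 255)
recon = np.clip(recon + hf_dark - 255, 0, 255)
# recon comes back pixel-for-pixel identical to img - mean/median/StDev of error all 0
```

The clipping in each high-frequency layer is exactly what lets the pair stay lossless: wherever one layer clips, the other holds the full signed difference.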

      To perform Highpass filtration into two layers using the Apply Image command:
      1. In 16bit mode:
        1. Make two copies of the current image (one copy if working on a single-layered document).
        2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
        3. Select the Low Frequency layer.
        4. Run the Gaussian Blur filter at your separation radius.
        5. Select the High Frequency layer. Set its blend mode to "Linear Light".
        6. Open the Image->Apply Image command.
        7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is checked.
        8. In the Blending box, choose "Add". Opacity should be 100%, Scale 2, Offset 0, Preserve Transparency and Mask.. should be unchecked.
        9. Click OK.

      2. In 8bit mode:
        1. Make two copies of the current image (one copy if working on a single-layered document).
        2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
        3. Select the Low Frequency layer.
        4. Run the Gaussian Blur filter at your separation radius.
        5. Select the High Frequency layer. Set its blend mode to "Linear Light".
        6. Open the Image->Apply Image command.
        7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is not checked.
        8. In the Blending box, choose "Subtract". Opacity should be 100%, Scale 2, Offset 128. Preserve Transparency and Mask.. should be unchecked.
        9. Click OK.

      These methods result in a reconstruction with a maximum error of 1 level in each channel (that is, a 1/256 maximum shift in 8bit; a 1/32769 shift in 16bit). The average shift is 0.49 with a StDev of 0.50 and Median 0. In less mathematical terms, it is functionally lossless in 16bit.
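      The 8bit variant can be sketched the same way (again a NumPy stand-in with a box blur in place of Gaussian; integer floor division here stands in for Photoshop's rounding, so the exact error statistics differ slightly, but the point survives: the only loss comes from halving odd differences).

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(int)

# "Low Frequency": 3x3 box blur standing in for Gaussian Blur
pad = np.pad(img, 1, mode='edge')
low = sum(pad[i:i+64, j:j+64] for i in range(3) for j in range(3)) // 9

# Steps 6-9: Apply Image - Subtract, Scale 2, Offset 128
# The signed difference (-255..255) is halved and re-centered on mid-gray
hf = np.clip((img - low) // 2 + 128, 0, 255)

# Reconstruction: Linear Light doubles the deviation from mid-gray
recon = np.clip(low + 2 * (hf - 128), 0, 255)
# Off by at most 1 level wherever (img - low) was odd - hence "functionally lossless"
```

Halving is the 512-into-256 compression discussed earlier, made concrete: every odd difference loses its last bit, and that bit is the entire error budget.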

      To perform Highpass filtration into two layers using the High Pass Filter:
      1. Make two copies of the current image (one copy if working on a single-layered document).
      2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
      3. Select the Low Frequency layer.
      4. Run the Gaussian Blur filter at your separation radius.
      5. Select the High Frequency layer. Set its blend mode to "Linear Light".
      6. Choose Image->Adjustments->Brightness Contrast.
      7. Check the "Legacy" option.
      8. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
      9. Click OK.
      10. Run the High Pass filter at the same radius which you used in step (4).
      This method works in all bit depths, and results in a reconstruction with a maximum error of 1 level in each channel (that is, a 1/256 maximum shift in 8bit; a 1/32769 shift in 16bit). The average shift is 0.54 with a StDev of 0.59 and Median 0. In less mathematical terms, it is functionally lossless in 16bit. In 8bit, it is just slightly (yet probably meaninglessly) inferior to the 8bit Apply Image technique.
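      The equivalence of the two routes is just linearity, which a float sketch shows exactly (a box blur stands in for Gaussian Blur, and High Pass is modeled as image minus blur plus mid-gray - an assumption about the filter's form that matches its standard description; in 8bit, per-step rounding adds the odd level noted above).

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64)) * 255.0  # float, to show the identity without rounding

def box_blur(a):  # any linear lowpass works as a stand-in for Gaussian Blur
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i+64, j:j+64] for i in range(3) for j in range(3)) / 9.0

# Route A: Apply Image (Subtract, Scale 2, Offset 128)
hf_apply = (img - box_blur(img)) / 2 + 128

# Route B: legacy contrast -50 halves the image about mid-gray, then High Pass
flat = (img - 128) / 2 + 128
hf_highpass = flat - box_blur(flat) + 128  # High Pass ~ image - blur + mid-gray
```

Because the blur is linear, halving first and blurring second gives the identical result to blurring first and halving second - which is all "commutative" means here.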

      To perform Bandpass filtration with a single layer:
      1. Make a single copy of your image.
      2. Label this layer "Bandpass".
      3. Choose Image->Adjustments->Brightness Contrast.
      4. Check the "Legacy" option.
      5. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
      6. Click OK.
      7. Run the High Pass filter at the radius for the lowest frequency which you want to be visible (remember, highpass filters keep frequencies above a threshold value).
      8. Run the Gaussian Blur filter at the radius for the highest frequency which you want to be visible.
      9. The Bandpass layer is your bandpass'd result.
      This method results in a bandpass which is within 1 level of an 'ideal' Gaussian separation in any bit depth. Again, functionally perfect in 16bit, and almost always close enough in 8bit. It will be rather low contrast by default (a necessary by-product of allowing fine detail retention), which you may want to augment with another B/C adjustment or with normal curves [16bit has a huge advantage here of course]. It is also worth remembering that high frequencies are low radii, and that low frequencies are high radii in the Photoshop context. This is a white lie which I'd hoped to only begin discussing tomorrow, but as it's confusing a few folks today, we'll get it out there now. The discussion of why that is will still remain for later.
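      The bandpass steps can be sketched in float terms (box blur again standing in for Gaussian Blur; the radii 4 and 1 are arbitrary example values for the low- and high-frequency cutoffs, and the variable names are mine).

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64)) * 255.0

def box_blur(a, r):  # (2r+1)-wide box blur as a stand-in lowpass
    n = a.shape[0]
    p = np.pad(a, r, mode='edge')
    k = 2 * r + 1
    return sum(p[i:i+n, j:j+n] for i in range(k) for j in range(k)) / k**2

# Steps 3-7: halve contrast about mid-gray, then High Pass at the larger radius
flat = (img - 128) / 2 + 128
hp = flat - box_blur(flat, 4) + 128
# Step 8: blur at the smaller radius (remember: high frequency = low radius)
bandpass = box_blur(hp, 1)
# bandpass now holds only the middle frequencies, centered on mid-gray at half contrast
```

The half-contrast result is exactly the "rather low contrast by default" the text describes - the price of keeping the full 512-level swing inside 256 levels.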

      To perform Bandstop filtration with a single layer:
      1. Make a single copy of your image.
      2. Label this layer "Bandstop".
      3. Set the layer's blend mode to "Linear Light".
      4. Invert the layer (Image->Adjustments->Invert).
      5. Choose Image->Adjustments->Brightness Contrast.
      6. Check the "Legacy" option.
      7. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
      8. Click OK.
      9. Run the High Pass filter at the radius for the lowest frequency which you want to block.
      10. Run the Gaussian Blur filter at the radius for the highest frequency which you want to block.
      11. The Bandstop layer is now acting as the bandstop for your image.
      This method results in a bandstop which is within 1 level of an 'ideal' Gaussian separation in any bit depth. Again, functionally perfect in 16bit, and almost always close enough in 8bit.
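      What the inverted layer accomplishes can be sketched in float (Linear Light taken as base + 2·blend − 255, the usual description of that blend mode; the float neutral point is 127.5, which 8bit Photoshop rounds to 128 - the source of that 1-level tolerance. The band is built directly here rather than by filtering the inverted copy, which is equivalent in float by linearity).

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64)) * 255.0

def box_blur(a, r):  # (2r+1)-wide box blur, a stand-in lowpass
    n = a.shape[0]
    p = np.pad(a, r, mode='edge')
    k = 2 * r + 1
    return sum(p[i:i+n, j:j+n] for i in range(k) for j in range(k)) / k**2

# The band we want to remove: frequencies between the two radii
band = box_blur(img, 1) - box_blur(img, 4)

# The bandstop layer: the half-contrast bandpass, inverted
layer = 255 - (band / 2 + 127.5)

# Linear Light blend (base + 2*blend - 255) subtracts the band back out
result = img + 2 * layer - 255
```

The inversion is doing the "subtracting is adding the inverse" trick from the previous post: Linear Light doubles the layer's deviation from neutral, so the inverted half-contrast band comes off the image at full strength.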


      Wow. We just covered an awful lot.

      I'm going to stop here for today to give you time to digest what you've just read, to allow you time to ask more questions either directly or in the forums, so that you can point out what I'm sure is a plethora of typos in the above, and so that I can get to another soccer game :). We'll resume tomorrow with a discussion of what all of this can be used for in practice (including determination of separation radii!), discussion of advanced applications (multiple-radius separation, etc.), and some examples of how you can automate many of these processes. Monday will still be the day when I answer as many questions as I receive, and when I will also reveal what white lies remain in this text.

      Thank you for your patience, and thank you for reading.

      And a second thanks to mistermonday for pointing out a gross and oft-overlooked typo in the bandstop instructions above - my apologies for the oversight!

      VF - Tools of the Trade

      31 August 2010

      We continue our discussion on visual frequencies by examining the "tools of the trade" which we'll use to bend these ideas to our will in practical retouching. To do so, though, we'll start like we did last time by introducing more traditional audio equivalents as well as some definitions.

      Just as we were adding frequencies together to form new ones before, we can also subtract components out of a whole. Consider the figure below, adapted from yesterday's post. By subtracting the low frequency component out of the whole, we are left with a high frequency signal.
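      A minimal sketch of that subtraction, using two made-up sine tones in place of real audio (the frequencies 5 Hz and 80 Hz are arbitrary example values):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)  # one second of "audio"
low = np.sin(2 * np.pi * 5 * t)              # low-frequency component
high = 0.3 * np.sin(2 * np.pi * 80 * t)      # high-frequency component
whole = low + high                           # the combined signal

# Subtracting the low frequency out of the whole leaves the high frequency
residual = whole - low
```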

      Sound subtraction demo

      We can do that same thing with an image, subtracting out the low frequency portions to leave us only with the high frequency:

      Image subtraction demo

      This should be fairly intuitive - if we can add two things together to get a sum, we should be able to tease them apart too. But before we get too far into that, we really do need to spend some time going over the definitions of a few terms to make sure that we don't confuse things later on. For reference, all of the terms I'm going to use are from general signals processing, and are therefore applicable to sound as well as to images.
      • Bandstop filter - A filter which stops a frequency band from passing through it. Generally, which frequencies are blocked is selected (or 'tuned') by the user.
      • Bandpass filter - A filter which allows only a select frequency band to pass through it. Like the bandstop filter, the range which is allowed to pass is controlled by the user.
      • Lowpass filter - A filter which allows only frequencies which are lower than a selected value to pass through.
      • Highpass filter - A filter which allows only frequencies which are higher than a selected value to pass through.
      Let's put those in context. Let's say that I have an arbitrary sound source which consists of four component tones as below:
      • a 25Hz tone
      • a 200 Hz tone
      • a 1,000 Hz tone
      • and a 10,000 Hz tone
      Now, if we apply a highpass filter set to 500 Hz to that sound, which components are going to be left in the result? Only two portions - the 1,000 Hz and the 10,000 Hz tones. By allowing only frequencies above 500 Hz to pass through, we have eliminated the other two entirely from the result.

      On the other side of things, by applying a lowpass filter with the same setting (500 Hz), we can eliminate the 1,000 Hz and the 10,000 Hz tones while keeping the 25 Hz and 200 Hz components (those frequencies which fall below the cutoff).

      We'll take this one step further. If I apply a bandstop filter to this sound, configured to block from 100-5,000 Hz, which components will be left? Because everything between 100 Hz and 5,000 Hz is blocked, I'm left only with the 25 Hz and 10,000 Hz tones in my final sound. Equally, if we were to apply a bandpass filter with those same settings, we would have eliminated the 25 Hz and 10,000 Hz components while retaining the 200 Hz and 1,000 Hz portions.
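      The four definitions can be played back as a toy Python sketch over that tone list (frequencies as plain numbers, not actual audio):

```python
tones = [25, 200, 1000, 10000]  # Hz - the four component tones above

highpass = [f for f in tones if f > 500]            # keeps 1000 and 10000
lowpass  = [f for f in tones if f < 500]            # keeps 25 and 200
bandstop = [f for f in tones if not 100 < f < 5000] # keeps 25 and 10000
bandpass = [f for f in tones if 100 < f < 5000]     # keeps 200 and 1000
```

Note that the bandstop and bandpass conditions are exact negations of one another - a fact we'll exploit shortly.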


      Whew - you made it! Now let's apply those definitions to Photoshop terms so that we can get back to something fun, shall we?

      I don't think you'll be surprised, but the Photoshop High Pass filter is.... a highpass filter! Shocking, I know. On the other hand, what a lot of people don't know is that Gaussian Blur is its exact opposite - it is a lowpass filter.

      Now, Photoshop doesn't have bandpass or bandstop filters per se, but that doesn't mean we can't create them ourselves. Think about it. A bandpass filter is nothing but lowpass and highpass filters operating on the same source. So, if we apply the High Pass filter to an image, followed by a Gaussian Blur, we have selected (or bandpassed) all the frequencies between them, creating a layer which contains only those portions of the image.

      So what about the bandstop filter? Well, let's think for a second. From its definition, a bandstop subtracts out the frequencies which we've selected. And in the paragraph above, we figured out how to create a layer which only contains those frequencies. Now, if we go back to arithmetic (yes, it's math, but hold on - it's easy math), you'll remember that subtracting one value from another is the same as adding its inverse. So, if we invert (Image->Adjustments->Invert) the layer which we created above, we'll have transformed it from a bandpass filter into a bandstop layer. Neat, huh?
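      That arithmetic identity, in pixel terms (values 0-255, Invert mapping v to 255 − v; the two values here are made up):

```python
a, b = 180, 70             # two hypothetical pixel values
inverse_b = 255 - b        # what Image->Adjustments->Invert produces
via_inverse = a + inverse_b - 255  # adding the inverse (and re-centering)
direct = a - b             # plain subtraction
# via_inverse and direct are the same number
```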

      Let's look at some examples again before we wrap up for today. Some of what I'll show you will look very weird, but please accept it for what it is - we'll get into applications by the end of the week. For now, just focus on understanding what's going on with the image and what we're doing to get there.

      The first series is an image you've already seen - the difference is that you now know what you're really looking at. The left is the whole image. In the center, the original image after we've applied a lowpass filter. On the right, the original image after applying a highpass filter.

      Sound subtraction demo

      Next, we see the original image on the left. In the center, I have applied a bandpass filter to the image, allowing through only select intermediate frequencies. The right shows what happens to the image when I transform the bandpass filter into a bandstop filter. Crazy looking, isn't it? But I promise, it's going to be something you'll love before long.

      Sound subtraction demo

      We'll stop here for tonight. I've given you a lot of information to chew on and throwing too much more at you now is as likely to make things worse as it is better. Consider what we discussed, review it as you have time, and feel free to ask questions if you have them in your forum of choice. When we come back on Thursday (tomorrow is a soccer game!) we'll jump into how we actually go about making these filters in Photoshop.

      So that you know what we'll be covering generally, let me give you the tentative schedule for the rest of this series:
      • Saturday (was Thursday) - The Mechanics - the process to actually apply these filters in PS - there are some sticking points!
      • Sunday (was Saturday) - Why Are We Doing This? - how these techniques can be used in real-world retouching and how to make the process easier for yourself
      • Monday (was Sunday) - Dirty Truths and Dirty Tricks - highpass was just the beginning + all my lies laid bare
      • Monday onward - Q & A - whatever you ask!
      (schedule subject to change depending on Hurricane Earl and whether Pepco is actually ready for downed lines this time)