VF - Dirty Secrets, Dirty Tricks

06 September 2010

Well, the day has finally come. It's time to lay bare all those little white lies which I've fed you so far. I will caution you up front: this will be another moderately technical day. No numbers, and no math, but a bit of cranial expansion just the same.

Like yesterday, in the interest of leaving this readable in a single sitting, I won't go into great detail about any one point, instead giving you a quick rundown alongside some external sources for more reading. Always feel free to ask questions, though, so that I'll have more than the 2 questions I currently have to answer tomorrow :).

So, without further ado, in no particular order, here we go...

Dirty Secrets:
  • A Gaussian Blur "Wave" is Very Different From a Sine Wave.

    First of all, this does not invalidate the idea of spatial frequencies, of their mixing, or anything else. But it does have some implications for understanding how the frequencies which we're using interact with one another, and how our separations behave. To learn more about what a Gaussian distribution looks like, I recommend this Wikipedia article.

  • The Photoshop Gaussian Blur filter isn't a Gaussian Blur filter.

    Huh? Longtime PS users may remember that GB used to take a lot longer to complete than it does now [on equivalent systems]. And then magically at some point in its history (I honestly can't remember which version it debuted in), lead programmer Chris Cox implemented one of a number of Gaussian approximation functions - functions whose results match a true Gaussian to within <1% (usually, at least), but which the computer can perform 20+ times faster. Again, this doesn't have many real-world consequences for frequency work, but it is wonderful geek trivia, and it also brings up some ideas which become relevant later.

    It's also worth noting that this gives us a bit of a way around the filter's arbitrary 250px radius limit. A Box Blur (which has a maximum size of 999px), run 3x at the same radius, is roughly the same as running the Gaussian Blur at that radius.
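    To see that approximation in action outside of PS, here's a small numpy/scipy sketch. The 9px box width and the sigma value are illustrative, chosen so that three box passes roughly match one Gaussian (three boxes of width w approximate a Gaussian of sigma = sqrt(3 * (w**2 - 1) / 12)):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# A true Gaussian blur...
gauss = ndimage.gaussian_filter(img, sigma=4.47)

# ...versus three passes of a 9px box blur (sqrt(3 * 80 / 12) ~ 4.47)
box = img.copy()
for _ in range(3):
    box = ndimage.uniform_filter(box, size=9)

# Close, but not pixel-identical - which is the point of the approximation
diff = np.abs(gauss - box).mean()
```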

  • Most People Already Use High Pass Sharpening

    It's called the Unsharp Mask filter. Seriously - USM is exactly the same as HP sharpening as performed by the methods outlined in this series. Now, it doesn't have the advantage of being able to run curves against the result to control highlight / shadow, etc., nor is it easy to perform "bandpass sharpening" with it (accentuating a range of frequencies, so as to exclude the highest components [where noise "lives"] from the sharpening process). But, it is an old friend for many of us, and makes "HP vs. USM" debates quite comical after you learn the truth.
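    If you want to convince yourself of the equivalence, here's a simplified numpy/scipy model. It ignores PS's quantization and the USM threshold option, and the radius/amount values are purely illustrative:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.random((32, 32))

radius, amount = 2.0, 0.8  # illustrative values
low = ndimage.gaussian_filter(img, sigma=radius)

# Unsharp Mask: add back a scaled copy of (original - blurred)
usm = img + amount * (img - low)

# High Pass sharpening: extract the high band, then blend it back in
high = img - low
hp = img + amount * high
```

Both paths compute the same thing, which is why the "HP vs. USM" debates are comical.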

  • Bandstops Lower Total Image Contrast

    You may have figured this out already if you've followed along closely, but removing a frequency band from an image inevitably results in a loss of some % of the image's total contrast. This is best compensated for with a Curves adjustment, but Brightness / Contrast or Levels can also be used. Generally - for small, localized corrections with bandstop filtration this loss is meaningless and can be ignored; for large moves, though (especially simple bandpassing), it's best to make a correction.

  • Bandpass Filtering Can Cause Scaling Issues

    This is probably the source of the greatest misunderstanding about any sort of frequency work in skin retouching (aside maybe from the visceral reaction many people have when you tell them that you're going to use a 'blur' filter in high-end work). In short, what looks good at one image size will not always look good at a smaller size - the interaction of the component frequencies (as well as our ingrained expectation of what things 'should' look like) can make skin which looks flawless at full size appear hideous ('plasticky') when resized. The two best ways of handling this are either to keep two windows open within PS so that you can constantly check what the image looks like small, or to use some form of synthetic frequency replacement to provide enough material to make smaller versions look 'right'.

  • Frequencies Have Color

    This isn't so much a 'white lie', as something which we just didn't bring up. Just as certain types of image components tend to "live" in a range of frequencies, sometimes colors do too. Take for example the red checkering of a tablecloth, the blue reflection of a skylight on a tungsten-lit ball, or a model's red hair against a white backdrop. This can lead to difficulty if we make major changes to an image while being careless in handling such colors. On the other hand, knowing this can be a huge advantage once you've mastered it - say goodbye to color moiré!

Dirty Tricks:
  • Skin and Smart Objects

    We talked yesterday about how bandstop filters can be used to retouch skin as a "DeGrunge" / "Inverted High Pass" ("IHP") / etc. technique. The greatest difficulty with this procedure is that - for high end beauty work, at least - different regions of the skin will require the removal of different frequencies from an image.

    When you think about it, this makes sense. Not only does the skin have a natural variation in its texture across different parts of the face and body, but just as objects appear smaller the further they are from you, the natural 'frequencies' which make up skin's appearance are also compressed or expanded with varying distance. As a consequence, different portions of the body need different kinds of work (or work on different frequency bands).

    By using a Smart Object copy of the image (or better, just the skin areas), you can quickly duplicate these, change the settings as appropriate, and mask them into your work. Even better, if you're disciplined about using your SOs, then when you go back to make changes in the image itself later, those changes will automatically carry through, making this a truly "nondestructive" process.

  • Skin and Selections

    One of the best things you can do when you want to use bandstop techniques on skin is to start with a good selection of that skin area (the Select Color Range tool is great for this), and either save it in a channel or simply copy the skin areas into a new layer [be sure to turn on Lock Transparent Pixels if using a separate layer]. By doing this, you keep the frequency filters from sampling non-skin colors in their processing and "bleeding" those into your result, allowing you a much better result than you'll otherwise get (the GB filter's edge handling makes this even more important). To wit, Imagenomic's Portraiture relies on this idea to get its results [see discussion here].

    Thanks to my friend Richard Vernon, I'm reminded that the "Apply Image" version of our separation techniques doesn't play nicely with selections - it doesn't handle the alpha channel (transparency) gracefully. As such, you need to use the "Brightness / Contrast" version of separating if you mean to use this technique in your skin work.

  • What if We Didn't Use Gaussian Blur to separate?

    Here's one of the 'biggies' - what would happen if we weren't limiting ourselves to separating images with just the method we've been using? I'll let my friend Koray explain in his forum post on the subject. The technical version is that the Gaussian 'kernel' (or 'smoothing operator') is just one sort of 'waveform' which we can decompose an image into. Others like the Median filter (a median operator) and Surface Blur (a bilateral smoothing operator) give results which are more edge aware and gradation friendly - two factors which are immensely valuable in enhancing local contrast (demonstrated by Koray), as well as in separating detail if, for example, we are planning to focus on healing / cloning details to correct blemishes and irregularities.
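    A quick numpy/scipy sketch of the idea - the smoothing operator changes, but low + high still reconstructs the original exactly (the filter sizes are illustrative):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.random((48, 48))

# Gaussian smoothing operator
low_gauss = ndimage.gaussian_filter(img, sigma=4)
high_gauss = img - low_gauss

# Median operator: keeps hard edges far better in the low band
low_med = ndimage.median_filter(img, size=9)
high_med = img - low_med

# Whichever operator produced the low band, the pair sums back to the whole
```

The choice of operator only changes *where* the detail ends up, not whether the decomposition is reversible.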

  • How About Skin Transplants?

    It's one of those things we don't like to talk about as retouchers (at least not in reference to any particular client), but most of us have had an experience where the subject's skin was just in a horrific state in the original photo. One in which we really wished we could just use another model's skin to cover it with. Well, now that you know how to separate frequencies, and you know which frequencies the skin lives in - you can! [Tip: make sure that you match pore size, lighting angle, light quality (harsh, soft), and skin source (which part of the body) when transplanting.]

  • Blown Highlights

    Much like the 'transplantation' discussion above, by working on two frequency components separately, it's often easier to work with parts of an image which have been blown out in camera - instead of that awful gray mess which the healing tool will often give you, two strokes in two different layers will give you an often very believable recovery.

  • Automation

    Everything which we've discussed can be heavily automated in Photoshop - from detail enhancement to skin smoothing, sharpening to stray removal. I highly encourage you to work with Smart Objects in this to maintain a non-destructive workflow, especially so that you can go back and tweak your results as you refine your understanding of visual frequencies.

    Yesterday I provided a set of actions which do a number of the basic GB separations in PS. I challenge you to make more of your own, incorporating as many or as few of the techniques which we've discussed over the past few days as you like. I further challenge you to share these on your favorite retouching forum(s), and to explain what you've done and why to those who ask. The power to separate detail, to enhance it, to heal and clone it, etc. is as big a deal as first learning to adjust global color and contrast with a curve. Share it.

In Closing

I'd like to take a moment to thank everyone over at the ModelMayhem Digital Art & Retouching Forum for their participation in the discussions about these and related topics. If it weren't for their interest in the subject and collaboration in elucidating the details, none of this would have been possible. Head on over when you get the chance and see the amazing work these guys have done, both in terms of retouching itself, as well as automating every aspect of these processes.

I also want to thank you for your readership over the past week or so as we've gone through what for many of you was likely the most technical discussion of Photoshop you've yet experienced. I sincerely hope that it was helpful. And while my writings on this blog will continue on a multitude of different subjects, I hope that you'll always feel free to ask when you have questions about this topic. As above, this is the beginning of a whole new way of looking at imaging for many of you - one which I hope to make as painless as possible.

Happy Labor Day!

VF - Why Sean, Why?!?

05 September 2010

After yesterday's marathon session of technobabble and math, it's only fitting that you should be rewarded with an entry today which will be more intuitive and directly beneficial to your workflow. Now, that said, after how precipitously readership dropped off yesterday (I believe in light of the length of the post), I won't be going into such excruciating detail today. Instead I'll make broad strokes and incorporate a few external sources, asking that you tell me where you need more information for a subsequent update.

One of the basic principles of retouching which I try to impart to people is how important it is to isolate those portions of an image which you want to work on. Sometimes that takes the form of a simple selection; sometimes it's a complex mask; sometimes a color-based selection; sometimes an operation on a channel; and other times, it's a frequency-based operation. Among the things which the last category allows us to do are what you came back for today:
  • Sharpening:

    I'm sure that many of you are familiar with the idea of "High Pass Sharpening", a technique which has been around the internet for about as long as I've been using Photoshop (a long time). In fact, this technique is just what it advertises - amplifying the high frequency portions of the image (by running a highpass filter on a copy of the image) in order to accentuate the detail.

    As it's normally done, though, this technique uses the PS filter naively and so it discards some tonal detail which might otherwise be retained and selectively enhanced. My personal preference when using variants of HP sharpening is to clip a Curves adjustment layer to the high-frequency layer. This allows one to tune the sharpening effect in the highlight and shadow areas separately and achieve just the level of sharpening desired.
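    Here's a toy numpy/scipy model of that idea - the two gains stand in for a curve clipped to the detail layer, and their values are purely illustrative:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.random((32, 32))

low = ndimage.gaussian_filter(img, sigma=1.5)
high = img - low  # positive where detail lightens, negative where it darkens

# A 'curve' on the detail layer: tune the lightening and darkening halves
# of the high band independently (gains are illustrative values)
gain_highlight, gain_shadow = 1.2, 0.5
shaped = np.where(high > 0, high * gain_highlight, high * gain_shadow)

sharpened = np.clip(img + shaped, 0.0, 1.0)
```

Shaping the two halves separately is exactly what a clipped Curves layer buys you over the naive filter.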

  • Detail Enhancement:

    Often mistaken for the singular solution to the "Dave Hill look" (sorry Dave), use of large-radius HP filters to enhance local detail is just an expanded version of the sharpening discussed above (alternatively known as HiRaLoAm). In this case, we're just selecting a larger swath of frequencies to enhance, resulting in that larger 'gritty' look [Calvin Hollywood is another big fan of these techniques].

    Again, though, it's important to use a revised technique vs. simply running a naive HP filter so that you can retain full contrast in the detail - otherwise, what's the point? Also note that, while Linear Light is the way which we blend the frequencies back in, other blend modes are sometimes preferable artistically (beware that some come with side-effects, especially Hard Light, Pin Light, and Vivid Light).

  • Stray Hair Removal:

    One of the neat facts about frequencies is that certain types of photographed objects (or their details) tend to 'live' within certain frequency bands. Hair, for example, is a very fine detail, and so tends to exist only in higher frequencies. We can use that fact to our advantage by performing a separation as we've previously discussed, and then simply using the healing or cloning brush on the high-frequency layer to remove the hair with no trace that it had ever been there. [And yes, while the healing brush often works for this on the full-frequency image, experienced retouchers know that no tool is perfect and there are situations in which it gets very confused by the larger context of the image.]

  • Skin Smoothing:

    This will be the longest component discussion we have today, but one which has also been the most popular. To start, please take a minute to go read byRo's classic writeup on frequency separation for use in skin retouching over at RetouchPro. He calls it the "quick de-grunge technique".

    Go read it now and we'll resume when you get back.

    Pretty impressive for how quickly he did that (for real-world execution, see the work of Natalia Taffarel, Gry Garness, and Christy Schuler). [Oh, and BTW, as of this writing, only one of those three very talented ladies knows what you've already learned - that's how elite your efforts thus far have made you :).]

  • Skin Retouching & Beyond:

    While the above is a brilliant, easy technique, it's actually only just the beginning. What if, instead of simply removing those image frequencies (applying a bandstop), we worked on the "grunge" frequencies with the healing and cloning tools like we talked about doing to remove stray hairs? I won't bore you with detail in this post; suffice it to say that this creates an incredibly believable result without taking as long as conventional methods.

    Even better, this can be used on both layers in order to remove unsightly features (skin folds) by healing or cloning on each of the layers - in the high-frequency you can focus on patching in good texture, while in the low-frequency you're able to focus on getting the overall shape right. [As a bonus, because the low-frequency layer has no detail to it, you don't have to be quite so precise as when working on a single (full-frequency) image].

  • Whatever Else You Come Up With:

    Seriously - the above are just some of the everyday (formerly) difficult tasks in retouching which can be streamlined by incorporating an understanding of visual frequencies. But by no means is that list exhaustive. As we'll discuss in tomorrow's post, the underlying techniques which we've been covering are limited only by your creative application of them.

Until tomorrow...

P.S. I did promise you some automation, didn't I? We'll get into a heavy discussion tomorrow, but for now here is a set of actions which perform each of the techniques discussed yesterday. Each assumes that you are in the bit depth it identifies itself with, and that you are running it from the topmost layer. If you are in a single layer document, you will get an error message shortly after running it - this is normal and you should just click "Continue". If you will only be using single-layered documents, you can avoid the message by disabling the "Copy Merge" step. These actions will create all needed duplicate layers for you, and you can turn off the instruction dialogs at any time by unchecking them in the actions panel. Finally, while I have had no difficulty with them, I make no warrant that they will work for you, nor do I warrant that it will not mess up your files. Use them at your own risk.

VF - The Mechanics

04 September 2010

First of all, a note for everyone who's been following so closely - your support means a lot. Further, I apologize for the delay in posting this. Unlike more established bloggers, I'm not just posting up pre-written material. I'm writing this as we go and attempting to respond to what I hear back from you in the process. As such, when life throws me a curve ball, posting gets delayed. You have my apologies.

Now, before we get into how we do lots of fancy things in Photoshop, this is going to be one of the most intensive days we spend on technical discussion, so let's start by spending a few moments reviewing where we've been so far. First, we demonstrated that (just like sounds) images can be seen (Ha! I kill me!) as being composed of many different frequencies which interact in order to create a whole image. We discussed the definitions for all of the processing tools which we're going to employ - lowpass, highpass, bandpass, and bandstop filters. And we looked at how adding the low frequencies and the high frequencies from an image together gives us the whole:

DC United's Chris Pontius

Then we expanded upon this to realize that, like the simpler kinds of math (the good kinds), the order in which we do things is commutative - that is, that subtracting the low frequencies from a whole image is the same as directly extracting its high frequencies through a highpass filter:

Image subtraction demo

Most recently, we discussed how the bandpass and bandstop processes can be thought of as being similarly inverse processes.

So - how do we do it in PS? Do we just use the High Pass (HP) and Gaussian Blur (GB) filters? Unfortunately, no, and the reason why is going to involve some more... math (sorry guys!), and one of those little white lies which I've been telling you up until now.

To make our first pass at explaining what goes on, let's go back to our audio examples. When we were adding two audio tones together, each of those component sounds had amplitudes between -1.0 and 1.0. Or we might say that each had a range of 2. Because when we add them together we could get extremes of: 1 + 1 or (-1) + (-1), our result could have amplitudes from -2.0 to 2.0, or a range of 4. In theory, each time we add a sound in, we expand the range of the data which we're trying to handle. In real life, though, we have to keep those values scaled to a range which we can actually work with.
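A quick numpy illustration of that range expansion (the two frequencies are arbitrary choices):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
low = np.sin(2 * np.pi * 3 * t)    # amplitudes in [-1, 1]
high = np.sin(2 * np.pi * 40 * t)  # amplitudes in [-1, 1]

mix = low + high                   # range can grow toward [-2, 2]

# In practice we rescale the sum back into a workable range
mix_scaled = mix / np.abs(mix).max()
```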

What does that mean for images?

For the purposes of discussion, we'll refer to PS images being able to have levels (the equivalent of amplitudes) from 0 - 255 (a range of 256). In truth, many of you know that with 16 or 32 bit processing you have different ranges, but we'll only use one set of numbers for now.

Anyway, there are a number of differences between performing operations on sounds and on images in PS. The most significant of these is the fact that images don't (naturally) have negative values. Photoshop doesn't store brightness values of -255, or even of -1 (at least not for our purposes), and the images we work with aren't -255 to 255, -128 to 128, etc. This has some significant implications for how we handle our operations.

As an example, let's pretend we don't know about that difference and I'll separate an image rather naively. I'm going to use the picture of Santino which we've used a few times so far:

Tino

Now, I'll blur a copy of that image in a separate layer:

Tino Blurred

And subtract that from a third copy with the Apply Image command:

Tino Blurred

Not very much like what I've been showing you so far, is it? Sorry about that.

Here's the problem - do you remember how when we were mixing low and high sounds together, sometimes the high frequency brought the low frequency signal 'up', but at other times it brought it 'down' (and vice-versa)? [go back to review the time-correlated tracks to see what I mean] Now look closely at the result I've shown you above. You'll notice that the result only shows those areas which are brighter than the low frequency version. And this is because we don't have negative values. All of the areas which were darker in the high frequency than in the low frequency have been clipped to 0.
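Here's a small numpy/scipy sketch of that clipping problem (a stand-in for Apply Image's straight Subtract, not PS itself; the blur sigma is arbitrary):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (32, 32)).astype(np.float64)
low = np.round(ndimage.gaussian_filter(img, sigma=3))

true_high = img - low                    # wants both positive and negative values
naive_high = np.clip(img - low, 0, 255)  # a straight Subtract clips at 0
```

Everything darker than the low frequency has been flattened to 0 - exactly the mostly-black result shown above.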

Take a minute to digest that, because it's as important as it is difficult to understand. The high frequency data doesn't "know" that we need it to occupy a finite space, and it wants to have both positive and negative values, just like it would in real life. Not having negative values means that we need to find another way to record those areas which are darker in the other frequency set. One way of dealing with this is to just take the darker areas of the high frequency data and combine those back into our mix above - we would use three layers to accomplish one separation, demonstrated in the image below. To do this, I created the 'Lighter High Frequency' layer as above, and the 'Darker High Frequency' layer with the Apply Image command (more details later). Take a look:

Tino Separation

In the first high-frequency set, our blend mode (which we'll discuss after a bit) is ignoring the black areas while adding the light areas into the final image (adding black to a pixel is like adding zero to a number). In the second high-frequency piece, the dark areas are lowering the final values while the white areas are ignored.

This is technically great because it gives us a 100% accurate de/reconstruction of our image (that is, summing those three layers back together is pixel-for-pixel identical to the original). On the other hand, it's really inconvenient for our high-frequency data to be on two separate layers. How might we get it onto a single layer?

That leads us to our second technique. In this one, we pretend that we can have both 'positive' and 'negative' numbers in the same layer. To do so, though, we need an arbitrary value which will serve as the '0' point around which positive and negative values will appear. In Photoshop, this is 50% gray - that neutral value which many of you already use as a starting point with Soft Light, Overlay, etc. layers. Photoshop will ignore that middle gray value (it won't change the pixels when we blend with it), but when other values are brighter than 50%, it will lighten the final image while when values are darker than 50%, it will darken the final image. This option is what most retouchers I know do in practice, and what I hope you will settle upon at the conclusion of this discussion.

Like most things in life, though, this isn't going to come free. In order to put two layers into one as we're discussing, a compromise has to be made. Remember that each separation we make can have the full range of values in it - the sounds could go from -1.0 to 1.0, and our images can go from 0 to 255. In the same way, the high frequency image data can be as much as 255 levels above or below the low frequency values (these also ranging 0-255). In effect, our high-frequency data has a range of 512; not just 256. To compress this down into a single layer, then, we have to sacrifice some level of precision in getting there - we need to compress 512 levels down into 256.

My preferred method of doing this is to 'scale down' the data - to map the darkest possible dark of the high frequency data to 0, and the lightest possible light of the high frequency data to 255 (128 still being neutral). This preserves all of the finest details in the image, but sacrifices a small amount of its 'smoothness' (numbers later).
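Here's the 'scale down' idea modeled in numpy/scipy with 8-bit values (the Linear Light reconstruction uses the standard formula, base + 2 * (layer - 128); the blur sigma is arbitrary):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (32, 32)).astype(np.float64)
low = np.round(ndimage.gaussian_filter(img, sigma=3))

# Scale the +/-255 detail range down into 0..255, with 128 as neutral
high = np.clip(np.round((img - low) / 2.0 + 128.0), 0, 255)

# Linear Light blend: base + 2 * (layer - 128)
rebuilt = low + 2.0 * (high - 128.0)

# Halving then doubling costs at most one level per pixel - the small
# loss of 'smoothness' mentioned above
```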

The PS High Pass filter, on the other hand, seems to have been designed for creating lots of rough contrast, and so simply 'lops off' the lightest lights and the darkest darks within the high-frequency data (much the way many of you may be familiar with a color channel 'clipping' when it's over- or underexposed). This makes for a more contrasty layer (part of why some people like it so much), but it sacrifices a lot of fine detail in order to get there. To give you a side-by-side comparison of best-possible reconstruction using the default workflow, take a look at a closeup from Tino's uniform (the four stars represent the four MLS Cups which DC United has won):

Highpass comparisons

You can see quickly that the High Pass filter's version is far more contrasty right out of the box. Unfortunately, you'll also notice that its reconstruction (ironically) loses high-frequency contrast when blended back in to restore the original image. This isn't to say all is lost for the filter, though. Like adding and subtracting the frequencies from one another, contrast mapping is commutative - we can do things in a different order and still get the same result. In this case, we'll be able to use the HP filter so as to avoid having to mess with (what for many is a terrifying experience) the Apply Image tool. If we go to Image->Adjustments->Brightness / Contrast and choose to lower image contrast by (-50) with the Legacy option enabled, we can then use the included highpass filter to get that single-layer high-frequency data, while retaining all of that wonderful fine detail contrast.

Highpass comparisons

Notice how the results are identical - this is great, both for image quality (obviously) as well as for the automation implications which some of you are undoubtedly already thinking about.

For now, let's finally go through the step-by-step PS instructions.

To perform Highpass filtration into three layers:
  1. Make three copies of your image (two new copies if working on a single-layered document).
  2. Label the bottom layer "Low Frequency". Label the middle layer "High Frequency Light". Label the top layer "High Frequency Dark".
  3. Select the Low Frequency layer.
  4. Run the Gaussian Blur filter at your separation radius.
  5. Select the "High Frequency Light" layer. Set its blend mode to "Linear Dodge (Add)".
  6. Open the Image->Apply Image dialog box.
  7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is unchecked.
  8. In the Blending box, choose "Subtract". Opacity should be 100%, Scale 1, Offset 0, Preserve Transparency and Mask.. should be unchecked.
  9. Click OK.
  10. Select the "High Frequency Dark" layer. Set its blend mode to "Linear Burn".
  11. Open the Image->Apply Image dialog box.
  12. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is checked.
  13. In the Blending box, choose "Linear Dodge (Add)". Opacity should be 100%, Scale 1, Offset 0, Preserve Transparency and Mask.. should be unchecked.
  14. Click OK.

This method works in all bit depths and results in a reconstruction with a mean error of 0 (StDev & median also 0). That is, it is mathematically (and technically) perfect.
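If you'd like to check the math behind those steps, here's a numpy/scipy model of the three-layer stack. The clips stand in for PS's 0-255 limits, the blur sigma is arbitrary, and the blend formulas are the standard ones (Linear Dodge: base + layer; Linear Burn: base + layer - 255):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
img = rng.integers(0, 256, (32, 32)).astype(np.float64)
low = np.round(ndimage.gaussian_filter(img, sigma=3))

# Steps 6-9: Apply Image 'Subtract' onto the Light copy (clips at 0)
hf_light = np.clip(img - low, 0, 255)
# Steps 11-14: Apply Image 'Linear Dodge' of the inverted blur (clips at 255)
hf_dark = np.clip(img + (255 - low), 0, 255)

# Blend the stack: Linear Dodge (Add), then Linear Burn
rebuilt = np.clip(low + hf_light, 0, 255)           # Linear Dodge (Add)
rebuilt = np.clip(rebuilt + hf_dark - 255, 0, 255)  # Linear Burn
```

In this model the reconstruction is pixel-for-pixel identical to the original, matching the zero-error claim.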

To perform Highpass filtration into two layers using the Apply Image command:
  1. In 16bit mode:
    1. Make two copies of the current image (one copy if working on a single-layered document).
    2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
    3. Select the Low Frequency layer.
    4. Run the Gaussian Blur filter at your separation radius.
    5. Select the High Frequency layer. Set its blend mode to "Linear Light".
    6. Open the Image->Apply Image command.
    7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is checked.
    8. In the Blending box, choose "Add". Opacity should be 100%, Scale 2, Offset 0, Preserve Transparency and Mask.. should be unchecked.
    9. Click OK.

  2. In 8bit mode:
    1. Make two copies of the current image (one copy if working on a single-layered document).
    2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
    3. Select the Low Frequency layer.
    4. Run the Gaussian Blur filter at your separation radius.
    5. Select the High Frequency layer. Set its blend mode to "Linear Light".
    6. Open the Image->Apply Image command.
    7. In the Source box, select "Low Frequency" as the Layer, "RGB" as the Channel. Make sure the "Invert" box is not checked.
    8. In the blending box, choose "Subtract". Opacity should be 100%, Scale 2, Offset 128. Preserve Transparency and Mask.. should be unchecked.
    9. Click OK.

These methods result in a reconstruction with a maximal error of 1 level difference in each channel (that is, a 1/256 maximum shift in 8bit; a 1/32769 shift in 16bit). The average shift is 0.49 with a StDev of 0.50 and Median 0. In less mathematical terms, it is functionally lossless in 16bit.
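The 16bit version can be modeled the same way in numpy/scipy (using 32768 as the white point, as PS's 16bit mode does; the sigma is arbitrary; Linear Light here is base + 2 * layer - white):

```python
import numpy as np
from scipy import ndimage

MAX = 32768  # 16-bit Photoshop white point

rng = np.random.default_rng(7)
img = rng.integers(0, MAX + 1, (32, 32)).astype(np.float64)
low = np.round(ndimage.gaussian_filter(img, sigma=3))

# Steps 6-9: Apply Image 'Add' of the inverted blur, Scale 2, Offset 0
high = np.round((img + (MAX - low)) / 2.0)

# Linear Light blend: base + 2 * layer - white
rebuilt = low + 2.0 * high - MAX
```

The only error comes from rounding the halved values, which is where the maximal 1-level shift originates.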

To perform Highpass filtration into two layers using the High Pass Filter:
  1. Make two copies of the current image (one copy if working on a single-layered document).
  2. Label the bottom layer "Low Frequency". Label the upper layer "High Frequency".
  3. Select the Low Frequency layer.
  4. Run the Gaussian Blur filter at your separation radius.
  5. Select the High Frequency layer. Set its blend mode to "Linear Light".
  6. Choose Image->Adjustments->Brightness Contrast.
  7. Check the "Legacy" option.
  8. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
  9. Click OK.
  10. Run the High Pass filter at the same radius which you used in step (4).

This method works in all bit depths, and results in a reconstruction with a maximal error of 1 level difference in each channel (that is, a 1/256 maximum shift in 8bit; a 1/32769 shift in 16bit). The average shift is 0.54 with a StDev of 0.59 and Median 0. In less mathematical terms, it is functionally lossless in 16bit. In 8bit, it is just slightly (yet probably meaninglessly) inferior to the 8bit Apply Image technique.

To perform Bandpass filtration with a single layer:
  1. Make a single copy of your image.
  2. Label this layer "Bandpass".
  3. Choose Image->Adjustments->Brightness Contrast.
  4. Check the "Legacy" option.
  5. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
  6. Click OK.
  7. Run the High Pass filter at the radius for the lowest frequency which you want to be visible (remember, highpass filters keep frequencies above a threshold value).
  8. Run the Gaussian Blur filter at the radius for the highest frequency which you want to be visible.
  9. The Bandpass layer is your bandpass'd result.

This method results in a bandpass which is within 1 level of an 'ideal' Gaussian separation in any bit depth. Again, functionally perfect in 16bit, and almost always close enough in 8bit. It will be rather low contrast by default (a necessary by-product of allowing fine detail retention) which you may want to augment with another B/C adjustment or with normal curves [16bit has a huge advantage here of course]. It is also worth remembering that, in the Photoshop context, high frequencies correspond to low radii, and low frequencies to high radii. This is a white lie which I'd hoped to begin discussing only tomorrow, but as it's confusing a few folks today, we'll get it out there now. The discussion of why that is will still remain for later.
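Conceptually, this highpass-then-blur chain behaves like a difference of Gaussians, which is easy to sketch in numpy/scipy (radii illustrative; this is a model of the idea, not of PS's exact pipeline):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)
img = rng.random((48, 48))

r_small, r_large = 2, 8  # small radius = high frequency, large = low

# Keep what the small blur keeps, minus what the large blur also keeps:
# only the band between the two radii survives
bandpass = (ndimage.gaussian_filter(img, r_small)
            - ndimage.gaussian_filter(img, r_large))
```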

To perform Bandstop filtration with a single layer:
  1. Make a single copy of your image.
  2. Label this layer "Bandstop".
  3. Set the layer's blend mode to "Linear Light".
  4. Invert the layer (Image->Adjustments->Invert).
  5. Choose Image->Adjustments->Brightness Contrast.
  6. Check the "Legacy" option.
  7. Enter a value of -50 in the Contrast box. Leave Brightness at 0.
  8. Click OK.
  9. Run the High Pass filter at the radius for the lowest frequency which you want to block.
  10. Run the Gaussian Blur filter at the radius for the highest frequency which you want to block.
  11. The Bandstop layer is now acting as the bandstop for your image.

This method results in a bandstop which is within 1 level of an 'ideal' Gaussian separation in any bit depth. Again, functionally perfect in 16bit, and almost always close enough in 8bit.

...

Wow. We just covered an awful lot.

I'm going to stop here for today to give you time to digest what you've just read, to allow you time to ask more questions either directly or in the forums, so that you can point out what I'm sure are a plethora of typos in the above, and so that I can get to another soccer game :). We'll resume tomorrow with a discussion of what all of this can be used for in practice (including determination of separation radii!), discussion of advanced application (multiple-radius separation, etc.), and some examples of how you can automate many of these processes. Monday will still be the day when I answer as many questions as I receive, and when I will also reveal what white lies remain in this text.

Thank you for your patience, and thank you for reading.

And a second thanks to mistermonday for pointing out a gross and oft-overlooked typo in the bandstop instructions above - my apologies for the oversight!