One of the effects offered by PixStack is selective color. The user selects a color, and pixels close to that color are kept, while everything else is grayed out.

[Figures: the original image, and the same image after the selective color effect has been applied.]

At its core, the algorithm is fairly simple¹:

// For each pixel: if it isn't visually close to the target color,
// convert it to grayscale; otherwise leave it untouched.
for (int i = 0; i < numPixels; i++) {
	if (!colorsClose(pixel[i], targetColor, tolerance)) {
		pixel[i] = gray(pixel[i]);
	}
}

The main difficulty is deciding how "close" two colors are. Close here means visually similar: what matters isn't the numeric difference between two pixel values, but how similar the colors appear to the human eye. Raw RGB distances don't track perception very well, but the problem of color difference has been studied extensively, and computing a perceptual difference, as a practical matter, isn't that hard.

As a first step, you'll need to transform your pixel values from RGB into the L*a*b* color space; the EasyRGB math page makes this process straightforward. You can then use a formula such as CIE76, which is defined as: \[ \Delta E^*_{ab} = \sqrt{(L^*_2 - L^*_1)^2 + (a^*_2 - a^*_1)^2 + (b^*_2 - b^*_1)^2} \] It has some weaknesses that later formulations correct, but it's the simplest of the color difference equations.
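To make that concrete, here's a minimal sketch of the conversion and of CIE76, assuming 8-bit sRGB input and the D65 reference white (the same path the EasyRGB page walks through). The rgbToLab and deltaE76 names are mine, not PixStack's:

#include <cmath>

struct RGB { unsigned char r, g, b; };
struct Lab { double L, a, b; };

// Undo the sRGB gamma curve, giving a linear channel value in [0, 1].
static double srgbToLinear(double c) {
	c /= 255.0;
	return (c > 0.04045) ? std::pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
}

// Standard helper used by the XYZ -> L*a*b* conversion.
static double labF(double t) {
	return (t > 0.008856) ? std::cbrt(t) : (7.787 * t + 16.0 / 116.0);
}

// Convert an 8-bit sRGB pixel to L*a*b*, going through XYZ (D65 white point).
Lab rgbToLab(RGB p) {
	double r = srgbToLinear(p.r), g = srgbToLinear(p.g), b = srgbToLinear(p.b);

	// Linear RGB -> XYZ, scaled against the D65 reference white.
	double x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047;
	double y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 1.00000;
	double z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883;

	double fx = labF(x), fy = labF(y), fz = labF(z);
	return { 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz) };
}

// CIE76 color difference: Euclidean distance in L*a*b*.
double deltaE76(Lab c1, Lab c2) {
	double dL = c2.L - c1.L, da = c2.a - c1.a, db = c2.b - c1.b;
	return std::sqrt(dL * dL + da * da + db * db);
}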

Once you have a result, you can scale it according to your own preferences (or a user-supplied tolerance), depending on how similar you want the two colors to be. Color difference is useful for other purposes as well; for example, replacing one color in an image with another.
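The colorsClose and gray helpers used in the loop above aren't spelled out in this post, but one plausible way to write them on top of the sketch above is to interpret the tolerance directly in delta E units and to gray a pixel using its luminance:

// Hypothetical colorsClose(): two colors are "close" if their CIE76
// difference is within the tolerance, here measured in delta E units.
bool colorsClose(RGB a, RGB b, double tolerance) {
	return deltaE76(rgbToLab(a), rgbToLab(b)) <= tolerance;
}

// Hypothetical gray(): replace each channel with the pixel's luminance.
RGB gray(RGB p) {
	unsigned char y = (unsigned char)(0.299 * p.r + 0.587 * p.g + 0.114 * p.b + 0.5);
	return { y, y, y };
}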

¹ One pitfall is that the algorithm operates at a very low level. A likely use of an effect such as selective color is to selectively highlight an object in the image. If there are individual pixels in the background that are close to the target color, they probably shouldn't be included. Likewise, if there are pixels within an object that are of a slightly different color compared to the target, they probably should be included. A potential improvement would be to look at how many pixels in the surrounding area are close to the target color, and to adjust the result based on that.
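As a very rough sketch of that idea (the window size and threshold here are arbitrary guesses, not anything PixStack actually does), a pixel could keep its color only when enough of its neighborhood also matches the target:

// Keep a pixel's color only if at least half of a small window around it is
// close to the target color, so isolated matches in the background get grayed
// out and slightly off-color pixels inside a matching object are kept.
bool keepColor(const RGB* pixels, int width, int height,
               int x, int y, RGB target, double tolerance) {
	const int radius = 2;            // 5x5 window; window size is a guess
	const double minFraction = 0.5;  // fraction of matching neighbors required
	int close = 0, total = 0;
	for (int dy = -radius; dy <= radius; dy++) {
		for (int dx = -radius; dx <= radius; dx++) {
			int nx = x + dx, ny = y + dy;
			if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
			total++;
			if (colorsClose(pixels[ny * width + nx], target, tolerance)) close++;
		}
	}
	return total > 0 && (double)close / total >= minFraction;
}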