Normalization in Image processing

Posted: 2014-08-22T07:20:48-07:00
by Ali
Hi...

I have a basic question regarding preprocessing techniques (in particular, normalization) in computer vision/image processing.

This is what I read about normalization under my computer vision course.

"Objects in images usually have parameters that vary within certain intervals (e.g. size, position, intensity, ...). However, the results of image analysis should be independent of this variation.

So the goal is to transform the image such that these parameters are mapped onto normalized values (or some appropriate approximation).

1) We do normalization to a standard interval [0, a], e.g. [0, 255].
2) We normalize to zero mean and unit variance, i.e. the normalized intensities have mean = 0 and variance = 1.

And the most important normalization method is Histogram equalization"

I get the first point: contrast stretching lets us use the complete dynamic range of intensity, so we do the first step. But I don't understand the second step. Why do we normalize to zero mean and unit variance?

Can someone help me out, please?
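(For reference, the two steps from the question can be sketched numerically. This is an illustrative sketch, not from the course material; it assumes NumPy and a toy 2x3 "image".)

```python
import numpy as np

# Toy 8-bit grayscale image whose values only span part of [0, 255]
img = np.array([[50., 60., 70.],
                [80., 90., 100.]])

# 1) Min-max normalization: stretch values to the standard interval [0, 255]
stretched = (img - img.min()) / (img.max() - img.min()) * 255.0

# 2) Zero mean, unit variance ("z-score" normalization)
zscored = (img - img.mean()) / img.std()

print(stretched.min(), stretched.max())   # full range after stretching
print(zscored.mean(), zscored.std())      # mean ~ 0, std ~ 1
```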

Re: Normalization in Image processing

Posted: 2014-08-22T07:54:09-07:00
by snibgo
> 1) We do normalization to standard interval [0,a] e.g [0,255].
Fair enough.

> 2) We normalize to zero mean and unit variance i.e. normalized intensities have mean = 0 and variance = 1.
If the image values are constrained to the range 0 to 255, then normalising the mean to 0 (zero) would wipe it out, setting all values to zero (non-negative values can only have a mean of zero if they are all zero). That would remove all the information from the image.

More sensibly, the mean would be set to the mid-point of the range. If the range is 0 to 255, the midpoint is 127.5.

Setting variance (or standard deviation) to any particular value may be useful for comparing images.
> And the most important normalization method is Histogram equalization
Histogram equalization is an important method. It will spread the values to the full range, e.g. 0 to 255, set the mean to the mid-point, and the standard deviation to about 0.288 of the range.
> why we normalize to zero mean and unit variance??
Machine vision is commonly used to identify objects in a scene, or find their orientation, etc. This involves image comparison. I might photograph the same object twice under different lighting. Normalization increases the chances that the two photos will look the same, and can be compared with each other.

Does that help?
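(The point about comparing two photos of the same object under different lighting can be demonstrated numerically. This is an illustrative sketch, not part of the original reply; it assumes NumPy and models lighting change as a global gain and offset.)

```python
import numpy as np

scene = np.array([[10., 20., 30.],
                  [40., 50., 60.]])

# Two "photos" of the same scene under different lighting:
# a brightness offset and a contrast (gain) change.
photo_a = scene * 1.0 + 5.0
photo_b = scene * 2.5 + 40.0

def zscore(im):
    """Normalize to zero mean and unit variance."""
    return (im - im.mean()) / im.std()

# Raw pixel values differ, but after normalization the two photos
# become numerically identical, so they can be compared directly.
print(np.allclose(photo_a, photo_b))                  # False
print(np.allclose(zscore(photo_a), zscore(photo_b)))  # True
```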

Re: Normalization in Image processing

Posted: 2014-08-22T09:09:56-07:00
by fmw42
In normalized cross correlation, one subtracts the mean and divides by the standard deviation, which achieves the normalization in your point 2). The mean subtraction mitigates brightness variations, and the division by the standard deviation mitigates variations in the spread of the data about the mean, so that the two images have similar means and standard deviations.

If one uses histogram equalization, that tries to flatten the histogram so that all gray levels have the same counts, spreading the values as widely as possible over the range.

These may be two competing effects that do not always work well together.
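(The normalized cross correlation described above can be sketched as follows. This is an illustrative implementation, not fmw42's code; it assumes NumPy and two equal-size patches.)

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches:
    subtract each patch's mean, divide by each patch's standard
    deviation, then average the elementwise product."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()

patch = np.array([[1., 2.],
                  [3., 4.]])
brighter = patch * 3.0 + 10.0  # same pattern, different lighting

# Because the mean and spread are normalized away, the match is
# perfect despite the brightness and contrast difference.
print(ncc(patch, brighter))  # 1.0
```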

Re: Normalization in Image processing

Posted: 2014-08-23T03:07:45-07:00
by Ali
Thanks... Yes, it helped me a lot :)

Re: Normalization in Image processing

Posted: 2014-08-24T22:27:39-07:00
by snibgo
I've just figured out that mean = 0 and variance = 1 are the parameters of the "standard normal distribution", aka Gaussian, or bell-shaped curve (though any distribution can be shifted and scaled to have that mean and variance). This is different to "equalization", whose target histogram is a flat straight line. See http://en.wikipedia.org/wiki/Normal_distribution
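(A quick numerical check of the "0.288 of the range" figure from earlier in the thread: a perfectly equalized histogram is flat, i.e. the values are uniformly distributed over the range, and the standard deviation of a uniform distribution on [0, 1] is 1/sqrt(12) ≈ 0.2887. This sketch assumes NumPy and a seeded random sample.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a perfectly equalized image: values uniform on [0, 1].
equalized = rng.uniform(0.0, 1.0, 100_000)

# Sample standard deviation, close to 1/sqrt(12) of the range.
print(equalized.std())
print(1.0 / np.sqrt(12.0))  # ~0.2887, the "0.288" quoted above
```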