
Object Recognition

Posted: 2010-08-10T11:16:47-07:00
by fmw42
IM right now will not likely help with this problem. The only function that compares two same-size (or different-size) images is "compare", and it won't work well if the images are rotated or scaled so that the shapes don't align. You need some shape-recognition functions (perhaps co-occurrence matrices or Fourier descriptors?) or some other means of identifying the outline and/or texture. But IM does not have these things right now. Your only hope would be to write your own functions to do what you need (or possibly do that in some IM API).

If the objects are not rotated or scaled, then IM compare with two images can find the best match. If the match is high enough, then you can say it matches one of your shapes. But if they are rotated or scaled, then that will not work well.
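
For illustration, a minimal sketch of that kind of whole-image comparison, calling ImageMagick's compare from a small Python script (the file names are hypothetical, and RMSE is just one of the metrics compare supports):

Code:

import subprocess

def rmse(file_a, file_b):
    # "compare" prints the metric on stderr and exits nonzero when the
    # images differ, so don't treat a nonzero exit code as an error here.
    result = subprocess.run(
        ["compare", "-metric", "RMSE", file_a, file_b, "null:"],
        capture_output=True, text=True)
    # Output looks like "1563.4 (0.0239)"; the parenthesized value is
    # normalized to the quantum range.
    return float(result.stderr.split()[0])

score = rmse("unknown_object.png", "template_spoon.png")
print("lower is a better match:", score)

For same-size images, a lower RMSE means a closer match; for different sizes, compare's -subimage-search option can be used instead, though it is slow.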

Even if properly rotated, the pencil and the spoon may be hard to distinguish.

If you want to explore compare or -compose difference, see

http://www.imagemagick.org/script/compare.php
http://www.imagemagick.org/Usage/compare/
viewtopic.php?f=1&t=14613&p=51076&hilit ... ric#p51076
http://www.fmwconcepts.com/imagemagick/ ... mcrosscorr

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T11:36:34-07:00
by TheGecko
Hi Fred,
I had already found those sites; the example with the eye of the ape, for instance, was similar to my attempt, but it takes a lot of time.

I searched for a faster algorithm which also copes with rotation or position changes and found a code snippet for co-occurrence matrices (it's written in Python). I tried it, but the image is always black.

I think the problem is the second parameter, labels. If I have the image in an array (for example in the variable 'a'), I pass that array as both the first and the second parameter. Maybe I need a different value for the second parameter?

Maybe someone has a guess:

Thanks,
Thegecko

Code:

import numpy as np
import scipy.ndimage as scind

def cooccurrence(quantized_image, labels, scale=3):
     """Calculates co-occurrence matrices for all the objects in the image.

     Return an array P of shape (nobjects, nlevels, nlevels) such that
     P[o, :, :] is the cooccurence matrix for object o.

     quantized_image -- a numpy array of integer type
     labels          -- a numpy array of integer type
     scale           -- an integer

     For each object O, the cooccurrence matrix is defined as follows.
     Given a row number I in the matrix, let A be the set of pixels in
     O with gray level I, excluding pixels in the rightmost S
     columns of the image.  Let B be the set of pixels in O that are S
     pixels to the right of a pixel in A.  Row I of the cooccurence
     matrix is the gray-level histogram of the pixels in B.
     """
     nlevels = quantized_image.max() + 1
     nobjects = labels.max()
     image_a = quantized_image[:, :-scale]
     image_b = quantized_image[:, scale:]
     labels_ab = labels[:, :-scale]
     equilabel = ((labels[:, :-scale] == labels[:, scale:]) & (labels[:,:-scale] > 0))
     # Joint histogram over (object, level at A, level at B): one
     # co-occurrence matrix per object, normalized by its pixel-pair count.
     P, bins_P = np.histogramdd(
         [labels_ab[equilabel] - 1, image_a[equilabel], image_b[equilabel]],
         (nobjects, nlevels, nlevels))
     pixel_count = fix(scind.sum(equilabel, labels_ab, np.arange(nobjects) + 1))
     pixel_count = np.tile(pixel_count[:, np.newaxis, np.newaxis],
                           (1, nlevels, nlevels))
     return (P.astype(float) / pixel_count.astype(float), nlevels)

def fix(whatever_it_returned):
     """Convert a result from scipy.ndimage to a numpy array

     scipy.ndimage has the annoying habit of returning a single, bare
     value instead of an array if the indexes passed in are of length 1.
     For instance:
     scind.maximum(image, labels, [1]) returns a float
     but
     scind.maximum(image, labels, [1,2]) returns a list
     """
     if getattr(whatever_it_returned,"__getitem__",False):
         return np.array(whatever_it_returned)
     else:
         return np.array([whatever_it_returned])
(It's an extract from an implementation of the Haralick texture algorithm; the original comes from http://www.cellprofiler.org)
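
For what it's worth, a minimal usage sketch under the assumption that labels is meant to be a connected-components (object-label) image, with 0 for background and 1..N for objects, rather than the gray image itself; passing the image as its own labels would explain a meaningless result. The synthetic image and the 8-level quantization are only for illustration:

Code:

import numpy as np
import scipy.ndimage as scind

# Synthetic 8-bit gray image with one bright blob on a dark background.
gray = np.zeros((64, 64), dtype=np.uint8)
gray[20:40, 15:45] = np.random.randint(100, 256, (20, 30))

# Quantize to a small number of gray levels (here 8: values 0..7).
quantized = (gray // 32).astype(np.int32)

# Label the foreground connected components: background = 0, objects = 1..N.
labels, nobjects = scind.label(gray > 50)

# cooccurrence() and fix() as defined in the snippet above.
P, nlevels = cooccurrence(quantized, labels, scale=3)
print(P.shape)  # (nobjects, nlevels, nlevels)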

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T11:45:29-07:00
by fmw42
I am somewhat familiar with Haralick texture using co-occurrence matrices. But I don't program (only script from the shell), so I probably cannot help. Also, if I recall correctly, this will not likely help if the image is rotated or scaled. But it has been too long, so I am not really sure; I would have to re-read his paper again.

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T12:00:09-07:00
by TheGecko
I've found a very interesting paper which helped me a lot, but my mathematics isn't strong enough to understand the math behind it ;)
Here is the link: http://research.microsoft.com/en-us/um/ ... ceCVPR.pdf

Thanks,
TheGecko

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T12:42:11-07:00
by fmw42
When I get some time, I will take a look in more detail. The math does look formidable.

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T13:11:55-07:00
by TheGecko
Thanks for your effort.

I saw that this paper is specifically about colour co-occurrence matrices. That would be a nice feature if it works, but maybe gray-level co-occurrence matrices (GLCM) are sufficient.

I also read the Wikipedia article on co-occurrence matrices and Haralick's methods (http://en.wikipedia.org/wiki/Co-occurrence_matrix); maybe these techniques are better suited to this job?

It also says that you can reduce the rotation sensitivity if you compute the co-occurrence matrix for four offset directions (i.e. 0, 45, 90, and 135 degrees).
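
Just as a sketch of that four-direction idea, here is how it might look with scikit-image's GLCM functions (called graycomatrix/graycoprops in recent releases, greycomatrix/greycoprops in older ones); the file name and the 8-level quantization are assumptions for illustration:

Code:

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.io import imread

gray = imread("object.png", as_gray=True)      # hypothetical file, values 0..1
quantized = (gray * 7).astype(np.uint8)        # 8 gray levels: 0..7

# One distance, four offset directions: 0, 45, 90 and 135 degrees.
glcm = graycomatrix(quantized,
                    distances=[3],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=8, symmetric=True, normed=True)

# Averaging a Haralick-style property over the four directions gives a
# feature that is less sensitive to rotation than a single offset.
contrast = graycoprops(glcm, "contrast").mean()
homogeneity = graycoprops(glcm, "homogeneity").mean()
print(contrast, homogeneity)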

Thanks,
TheGecko

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T14:02:06-07:00
by fmw42
It also says that you can reduce the rotation sensitivity if you compute the co-occurrence matrix for four offset directions (i.e. 0, 45, 90, and 135 degrees).
I doubt that is sufficient. What if you have a 22.5 degree rotation? That is still significant.

Search Google for "rotation invariant image matching" and see what comes up.

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T14:57:22-07:00
by TheGecko
Hi Fred,
First I want to quote the wikipedia article:
Note that the (Δx,Δy) parameterization makes the co-occurrence matrix sensitive to rotation. We choose one offset vector, so a rotation of the image not equal to 180 degrees will result in a different co-occurrence distribution for the same (rotated) image. This is rarely desirable in the applications co-occurrence matrices are used in, so the co-occurrence matrix is often formed using a set of offsets sweeping through 180 degrees (i.e. 0, 45, 90, and 135 degrees) at the same distance to achieve a degree of rotational invariance.
My English isn't very good, but I think that rotational invariance means that it is "immune" to rotation.

But that isn't so important anymore, because thanks to your tip about Google I found my algorithm (I think :) ): SURF.

The first result on Google is a very long paper from a student at Helsinki University, but then there was a paper about "rotation invariant image matching" (http://www.stanford.edu/~gtakacs/papers/riffpolar.pdf) in which they compared different algorithms that are immune to rotation. After a short search I found SURF; it has a lot of open-source libraries and implementations (http://en.wikipedia.org/wiki/SURF), including GPU and multi-core versions to speed things up. What do you think, is it the right approach, or am I on completely the wrong path?

Thank you very much,
TheGecko

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-10T19:42:55-07:00
by fmw42
Never heard of that before, but I have been out of touch with these things for many years. I will have to look at the paper when I get time. Is it just a feature detector, or does it also do image matching? Do they have any examples of matching images?

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-11T05:13:50-07:00
by TheGecko
Here are some example pictures from the script above.

[four example images]

I think the SURF algorithm works like this: first, it searches for "interest points", and at each of these points it also determines an orientation. If you then have a rotated image, it finds the same interest points again and sees that they have different orientations.
So it can say: okay, it's the same image, but it's rotated.
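
As a rough sketch of that detect-and-match idea: SURF itself is patented and typically only available in opencv-contrib builds with the non-free modules enabled, so the example below substitutes ORB, a free detector/descriptor that is also rotation invariant. The file names are hypothetical and OpenCV 4 return conventions are assumed:

Code:

import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("object_rotated.png", cv2.IMREAD_GRAYSCALE)

# Detect interest points and compute descriptors (each keypoint carries
# its own orientation, which is what makes the matching rotation tolerant).
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-checking; many low-distance matches
# suggest the two images show the same object.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches; best distance:",
      matches[0].distance if matches else None)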

TheGecko :D

Edit: It's also fantastic that the algorithm matched them correctly, because there is also some perspective distortion, and the photo is brighter and has some bright spots from the light.

Re: Object Recognition, what is possible and fast?

Posted: 2010-08-11T08:09:03-07:00
by fmw42
Looks pretty nice. I will have to read this. But how does it report its measure of comparison, i.e. whether it thinks the images match? What about scale changes (though perspective is a good test)?

Re: Object Recognition, what is possible and fast?

Posted: 2011-07-10T07:13:22-07:00
by jumpjack
Any news about this topic? I found a similar algorithm, SIFT, and a Windows port (siftWin32), but I can't get it working.
http://www.cs.ubc.ca/~lowe/keypoints/

I also found another Windows source... but I can't even get it to compile:
http://nashruddin.com/template-matching ... ample.html

And the Java ports of SURF do not run on my PC! (What is the command line to start a jar program manually?)

Re: Object Recognition, what is possible and fast?

Posted: 2011-08-01T16:42:12-07:00
by Rafajafar
This really all depends on your classification and the type of photos you're looking at.

For instance, if you're interested in shapes on a white background, you can apply the Sobel operator (http://en.wikipedia.org/wiki/Sobel_operator) to the image. Then identify all areas that are more than 50% white. Take these areas and vectorize the one with the largest number of connected pixels that forms a complete circuit (think: you can draw a line and get back to where you started without the pen leaving the paper, and it outlines the object). Using the vector coordinates of that circuit, you can guess within a tolerance whether something fits a classification in a few ways.

A way I'd suggest: now that you've found the vectors comprising the shape of the object, normalize them by rotating and scaling so that the two points farthest from each other lie on the edge of the picture in a set orientation. Then I'd take the area of the region circumscribed by the circuit. I'd also take the absolute value of the average direction of each vector. Now you have two numbers to go on. If both numbers are within a given tolerance, the picture is a likely match.
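
A rough sketch of the edge-plus-outline part of that idea, assuming OpenCV (cv2), a purely hypothetical image file name, and an arbitrary gradient threshold; it finds the largest closed outline and derives a couple of rotation- and scale-tolerant shape numbers from it:

Code:

import cv2
import numpy as np

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

# Sobel gradient magnitude as the edge image, then a crude threshold.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
mag = cv2.magnitude(gx, gy)
edges = (mag > 0.25 * mag.max()).astype(np.uint8) * 255

# The largest external contour approximates the "complete circuit".
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)

# Simple shape numbers that survive rotation and scaling:
# extent relative to the minimum-area box, and compactness.
(_, _), (w, h), angle = cv2.minAreaRect(outline)
area = cv2.contourArea(outline)
extent = area / (w * h + 1e-9)
compactness = area / (cv2.arcLength(outline, True) ** 2 + 1e-9)
print(angle, extent, compactness)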

Add color spectrography into the mix and you might even be able to determine whether it's the same substance (i.e. differentiate between plastic and metal spoons).

I've never had to try this technique myself, but it might work.

The point is, get creative. Machine vision is more art than science.