Opinions and references RE: image quality


Post by NicolasRobidoux »

Dr. Chris Bore, Lecturer at Kingston University London, UK, and consultant and technical trainer, posted some interesting observations in a LinkedIn Image Processing group forum discussion, in response to a query by Charles Tsai RE: "What is the best way to compare Image Quality between image from different camera?". I asked Dr. Bore for permission to summarize them here, and he kindly obliged. (I must say that I often find the discussions within the LinkedIn groups Image Processing and Image Processing Interest Group interesting and useful.)

In response to a comment by Wayne Prentice to the effect that image quality needs to be considered within a specific application:
Chris Bore wrote:... you need to know the aim first. I always devise Figures of Merit (FoM) that express the desired outcome - those that have been suggested may be fine (resolution, SNR, MTF, PSF) depending on the application. Determine the target measure for each relevant FoM, then you can produce a table to compare. In some cases there may be many thousands of combinations of settings, and in those cases I apply a weighting factor to each FoM according to its importance for the application - for instance SNR may be relatively unimportant compared to resolution for some applications - and then combine the FoM, weighted, to a single number that then guides as to 'Quality'.
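A minimal sketch of the weighted-FoM combination described above. All of the FoM names, target values, and weights below are illustrative assumptions, not measurements or a standard:

```python
# Hypothetical figures of merit for one camera/setting combination.
# Names, numbers, targets and weights are illustrative only.
foms = {
    "resolution_lpmm": 52.0,   # line pairs per mm
    "snr_db": 38.0,            # signal-to-noise ratio
    "mtf50": 0.31,             # cycles/pixel at 50% contrast
}
targets = {"resolution_lpmm": 60.0, "snr_db": 40.0, "mtf50": 0.35}
# Application-specific importance: here resolution outweighs SNR,
# matching the example in the quoted comment.
weights = {"resolution_lpmm": 0.5, "snr_db": 0.2, "mtf50": 0.3}

def quality_score(foms, targets, weights):
    """Combine per-FoM ratios (measured/target, capped at 1) into one number."""
    return sum(weights[k] * min(foms[k] / targets[k], 1.0) for k in weights)

score = quality_score(foms, targets, weights)
print(f"weighted quality: {score:.3f}")
```

With per-FoM scores normalized to the same 0-1 scale before weighting, different settings (or cameras) can be ranked by a single number, as the comment suggests.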

Many things can be measured with very simple equipment - for example the PSF (Point Spread Function) may be measured directly by photographing the (distant, so smaller than a single pixel) spot produced by a laser pointer (which is held close to the surface and distant from the camera so it produces a small but bright dot). You can use ImageJ or Octave to produce the 2D DFT of such an image as a direct way to produce the MTF (Modulation Transfer Function).
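The PSF-to-MTF step can be sketched as follows, here in Python/NumPy rather than the ImageJ or Octave mentioned above. The Gaussian dot below is a synthetic stand-in for a cropped photo of the laser spot:

```python
import numpy as np

# Stand-in for the measured laser dot: in practice, crop the photo to a
# small window centred on the spot. A Gaussian is used here for illustration.
n = 64
y, x = np.mgrid[0:n, 0:n] - n // 2
psf = np.exp(-(x**2 + y**2) / (2 * 2.5**2))
psf /= psf.sum()  # normalize energy

# The MTF is the magnitude of the 2D DFT of the PSF, normalized so the
# zero-frequency (DC) response is 1.
otf = np.fft.fftshift(np.fft.fft2(psf))
mtf = np.abs(otf) / np.abs(otf).max()

# A slice along one frequency axis, from DC outward, gives a 1D MTF curve.
mtf_1d = mtf[n // 2, n // 2:]
print(mtf_1d[:5])
```

For a real measurement the dot photo replaces the synthetic Gaussian; background subtraction and averaging several exposures help with noise.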

None of the above is very helpful if you are measuring subjective image quality (how people 'feel' about the images produced) - in that case you need a set of images of the type that you intend the camera to produce, and to test them by showing them to people from the target audience, gathering their responses and analyzing them.

Be aware that cameras use psychovisual enhancement techniques that make images look 'better' to certain audiences by using optical illusions - so these effects will not be measured by simple objective measures, hence the need to ask people what they like and to use the right type of pictures for that audience.
In response to a request for comments by CT Yeung RE: how people "feel" about images:
Chris Bore wrote:... I am not suggesting a standard observer model - rather the opposite, I am suggesting rating of image quality (and of characteristics such as perceived sharpness) by actual ratings from human viewers. It is often not done because it is difficult to design consistent test settings and to avoid viewer bias (for example it is well known that viewers rate video 'image quality' higher when the soundtrack is played louder - in the same way viewers may rate image quality better after a cup of tea). When it is done, it is often in 'controlled' viewing conditions, which also makes it less valuable (for example to rate a medical image in controlled conditions may produce images that are less optimal when viewed in the cramped, perhaps crowded and awkward conditions of a hospital theatre).

On your question of whether the two images are objectively the same, I think there are at least two situations: (1) the images are identical, pixel by pixel - even in this case viewers (even the same viewer after the cup of tea) may rate the images differently, and this may require careful repetition to see the most common rating. (2) the images are the same according to chosen objective standards (MTF, contrast etc) - in this case the viewer may genuinely perceive the images differently, and this may show that the objective standards (Figures of Merit) may need to be reviewed to better correspond to what actual viewers perceive, which would be approaching the idea of a standard observer model calibrated against the perceptions of actual human viewers.

The field is fraught with difficulty, but is important in the market - people will choose based on their perceptions so it is useful to have some idea what those perceptions may be.

There are difficulties due to variation in one viewer's perceptions between sessions, and even to the same image within sessions. There is bias from the way the images are presented - if you show the 'best' image always on the left, the viewer will tend to prefer the left hand image even if you have deliberately shown them their own prior choice of 'best' on the right. It also appears to me that there is cultural bias - some countries seem to prefer very saturated images while others like more muted colours, and I think (without real objective evidence) that people like pictures that look most like their own TV because they are exposed to that sort of picture while watching TV for hours in a pleasant environment so become conditioned to like it. In general viewers do tend to prefer what they already have - so for example doctors may tend to think the image from the scanner they use is better than the image from a scanner they don't use. But it is not impossible, and rests I think on definition of Figures of Merit for the human viewers to rate (perceived sharpness etc) and trying to include in the testing the range of environments and viewers who will actually view the images in the end.

Image quality is not only what the viewer 'feels' - you can test for what they can do with the image. For instance a doctor may 'like' one type of medical image but in fact you may be able to show that they can better diagnose some feature using an image they 'like' less - which then presents you with a real problem - do you present the best image for diagnosis (in which case you won't sell the scanner) or the less good one for patients, but that the doctor 'likes' so they will buy it? Or do you try to 'condition' the doctor so that they end up liking the better image from the diagnostic viewpoint (which is dangerously close to Sales and Marketing)? Or do you try to 'educate' the viewer so they can rationally decide that a particular image is better (which is how camera magazine reviews work - the readers tend to prefer the images the review rates as best..).

All very good fun and I only wish there was more applied research into the topic.
In response to a request by yours truly for pointers to relevant literature:
Chris Bore wrote:... You should find quite a lot by Nokia at research.nokia.com/publications - search on subjective image quality. Also Kodak and Philips. This paper:
http://www.i3a.org/wp-content/uploads/2 ... ips_v2.pdf
is quite a nice brief summary of some ideas
Chris Bore wrote:... You might also get hold of a copy of 'Leonardo: on painting' which summarizes some of Leonardo da Vinci's experiments, experience and thoughts on visual image quality - most of which remain valid today when we are talking about 'realism' (although modern art has 'moved on' from realism under the mistaken impression that photography is 'realistic'). :-)