How does EWA work with sample locations and filter weights?

Discuss digital image processing techniques and algorithms. We encourage its application to ImageMagick but you can discuss any software solutions here.
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

anthony wrote:EWA was originally only used with Gaussian filters, producing fairly blurry results (at that time there was also a code error (+ instead of -) that was later fixed).

The destination pixel is reverse mapped to the source image to get the center of the ellipse. And yes, it actually should not be the center, but it works well for a linear transformation like perspective, for which EWA was designed.

The derivatives are then used to determine the axes of the ellipse (this was improved later by Professor Nicholas Robidoux with some heavy math, at which point the size bug was discovered), and that is also multiplied by the filter support window (typically 2.0, especially for Gaussian, but this is now variable).
This point is interesting: When you say the derivatives are used to determine the axes of the ellipse, do you mean the derivatives of the source coords with respect to (1, 0) and (0, 1) unit-length destination-space basis vectors, or the derivatives of the source coords with respect to (pixel_radius, 0) and (0, pixel_radius) in destination-space? I ask because the former would imply you're using a pixel radius of 1.0, which results in more overlap between pixels than the "minimal" pixel radius of sqrt(2.0)/2.0 (unless you're talking about diameter instead of radius, in which case the reverse would apply). If so, is this deliberate?

Also, what about when the ellipse radius covers less than the distance between source texels (when you're upsizing in an area)? You have to expand it in this case to ensure sufficient support, correct?
anthony wrote:Now every pixel within the ellipse on the source image is then read (using scan lines). Originally it was every pixel in the containing rectangle, which for a long thin ellipse could be HUGE! I worked out a way to scan the containing parallelogram, which reduced the hit/miss ratio of the ellipse area down to a constant 71% (same as the area of a circle in a square), rather than some ridiculous figure you get from fitting a rectangle to a long thin diagonal ellipse. This gave the whole system an enormous speed boost (in one test case involving horizons, from 2 hours to 5 minutes!). I was actually impressed by this enormous improvement when the result appeared while I was waiting for a plane at Sydney airport!

Here is the diagram I used for figuring out the scan area (as opposed to the actual sampling area contained within).
[diagram: the elliptical sampling area inside its parallelogram scan area]

I was very careful to ensure that ALL pixels in the ellipse area also fall within the scan area. In fact, in the code there are still commented-out debug lines to generate gnuplot output displaying the sample points together with the ellipse and parallelogram bounds for specific test points. For example...
[gnuplot output: major/minor axis arrows, blue sampling ellipse and sample points, red parallelogram scan bounds]

The arrows are the calculated major/minor ellipse axes from the mapping derivatives, the blue ellipse is the sampling area, the blue points are the source pixels in the sample area, and the red bounds are the area being scanned. You can see how much bigger this scan area would have been if we'd fitted an XY-aligned rectangle to the ellipse! As it is, we now have a constant 71% hit/miss ratio.
That's clever! I'm essentially restricted to a supersampling approach for fragment shaders anyway (thankfully the distortion in my case is slight), so I never thought much about the difficulties of actually narrowing down initial candidates for an ellipse overlap test, but using independent scanlines over a parallelogram area is a great solution. I guess it seems so obvious in retrospect, but it's nontrivial to start with: You need to use independent search ranges for each scanline to avoid the "huge xy-aligned rectangle" problem, but the fact that you can do that may not be clear until you can first visualize a tight bounding volume (or area) that's axis-aligned along one of its dimensions. The problem is, until you realize you want a scanline-based solution, it's not immediately obvious that's the kind of bounding volume you want in the first place, and the unrelated axes of the ellipse don't really give you the best hint about what approach to take. I'm glad you worked it out! :)

Your average hit/miss ratio should be pi/4 though (depending on exact source texel boundaries), or about 78.5%, right?
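By the way, just to check my understanding of the scanline idea, here's how I'd picture getting per-row x-bounds directly from the ellipse's implicit equation A*dx^2 + B*dx*dy + C*dy^2 <= F. This is only my own sketch in C (not your parallelogram scan, and not ImageMagick code), so the names are made up:

    #include <math.h>

    typedef void (*sample_fn)(long u, long v, void *data);

    /* Visit every source pixel (u,v) inside the ellipse
     * A*dx^2 + B*dx*dy + C*dy^2 <= F centered at (cx,cy),
     * by solving the quadratic for each scanline's x-range. */
    static void scan_ellipse(double cx, double cy,
                             double A, double B, double C, double F,
                             sample_fn sample, void *data)
    {
      /* vertical extent of the ellipse: dy^2 <= 4*A*F / (4*A*C - B*B) */
      double dy_max = 2.0 * sqrt(A * F / (4.0 * A * C - B * B));
      for (long v = (long)ceil(cy - dy_max); v <= (long)floor(cy + dy_max); v++) {
        double dy = (double)v - cy;
        /* row range: A*dx^2 + (B*dy)*dx + (C*dy^2 - F) <= 0 */
        double disc = B * B * dy * dy - 4.0 * A * (C * dy * dy - F);
        if (disc < 0.0)
          continue;                    /* this row misses the ellipse */
        double root = sqrt(disc);
        long u0 = (long)ceil(cx + (-B * dy - root) / (2.0 * A));
        long u1 = (long)floor(cx + (-B * dy + root) / (2.0 * A));
        for (long u = u0; u <= u1; u++)
          sample(u, v, data);          /* every (u,v) visited lies inside */
      }
    }

I realize this trades your constant 71% parallelogram for an exact per-row solve, so it's answering a slightly different question, but it's how I'd visualized it.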
anthony wrote:The squared distance of the sample pixel (integer) from the reverse-mapped source pixel (floating point) is then used to look up the weighting factor from a pre-prepared weighting table (also distance-squared distorted) and used to weight the color/alpha contribution of the pixel (fairly standard). The result is the color for the destination.
This deals with the heart of my confusion above, and my last post details my current thoughts on the matter. The "filter-space" derivation may be a little too verbose, but the basic idea is this: Once you have an ellipse containing all of your source texels (and once you find their positions), scale your distance vectors along each ellipse axis until the ellipse axes fit to the expected filter range (a radius of 2 for cubic filters, etc.). Is this true of ImageMagick? That is, do you scale your offset vectors similarly before computing their squared distances, or do you just directly calculate squared distances in the source domain even if they may be far narrower (upsizing) or wider (downsizing) than your filter's support range?
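In code terms, and purely hypothetically on my part (filter(), axis_length, and so on are made-up names), I'm asking whether the per-sample weighting looks more like option (a) or option (b) here:

    #include <math.h>

    /* (a) rescale the offset so the ellipse edge lands on the filter's
     *     support radius before weighting */
    static double weight_scaled(double d_along_axis, double axis_length,
                                double filter_support,
                                double (*filter)(double))
    {
      return filter(d_along_axis / axis_length * filter_support);
    }

    /* (b) weight by the raw source-space distance, with no rescaling */
    static double weight_raw(double dx, double dy, double (*filter)(double))
    {
      return filter(sqrt(dx * dx + dy * dy));
    }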
anthony wrote:Now originally the filter function was a blurry Gaussian filter, and that works well. Later, when Nicholas got involved and helped fix the sign bug, we started trying other filters such as Jinc to see how they perform, especially with negative weightings. A Sinc filter for example is bad, as you can get positive reinforcement in a no-op case, generating a pattern of pure black and white from a 'hash' or pixel-level checkerboard image. This has worked out great, and it is more realistic. The result was the removal of many blocking artifacts, especially along diagonal edges involving strong color changes. It has however been a bumpy ride, and the development of the Robidoux cylindrical filter (which turned out to be almost identical to a 2-pass Mitchell-Netravali filter) shows this development.

Basically the EWA implementation has created a small revolution in just basic resizing (scaling) of images, beyond its original use for distortion.
Definitely! Cylindrical resizing may be a lot slower, but unless you're using a Gaussian (which is truly separable), there's just no other escape from the axis-aligned artifacts you get from separable resizing. When I first stumbled upon your site a few years back, I was so happy that someone finally bothered to implement something better and put so much effort into explaining how it all works. Your site is probably the most accessible educational resource around for people interested in resizing, and it covers issues I probably never would have learned about otherwise...like the fact that to extend a sinc filter's frequency domain properties into two dimensions, you need a jinc rather than just a cylindrical sinc.
anthony wrote:Now Nicholas and I have also talked about using the whole parallelogram, and later using a 'diamond' sampling area. This then lets us use diagonally aligned linear (2-pass) sampling filters instead of a cylindrical filter. We actually think it is quite feasible to do so, and it would be about the same speed as it is now (even with the added sampling area complication), but we have not implemented it. The diamond shape would be more directly determined from the same major/minor axes as used for ellipse determination, and would have no alignment with the X-Y axes. The scan area would also directly match the sampling area.

A diamond (or perhaps even a quadrangle) could also be more directly determined using reverse mapping of destination mid-pixel locations, rather than actually using derivatives. Really it is a matter of developing an alternative sampling module to use, and some way of selecting it. I purposely kept sampling separate from the distortion, so that should be fairly easy to develop, unlike trying to implement piecewise distortions (triangular or quadrilateral distortions) as the code currently stands, as distortion and mapping are currently too intertwined.
Using separable two-pass filters with distortions sounds complicated. I mean, I can visualize it:
  1. Separate a diamond into diagonal strips (scanlines of nonzero width)
  2. Use a weighted [one-dimensional or two-dimensional?] filter along each strip, which collapses each strip to a single vector value (one for each color channel)
  3. Use a weighted one-dimensional filter in the other direction to get a pixel value
However, it seems like this would be error-prone and might result in unanticipated artifacts, because the samples in each strip aren't actually going to lie on a straight line. I wonder if it would be better to treat them like they are in a straight line (thereby throwing away their off-axis distances) or to calculate 2D distances within each strip? If this does pose a problem though, it could probably be alleviated by adding a more accurate discrete-to-continuous mapping: Instead of giving each source texel a 0 or 1 coverage value for strips (or even for EWA in general), you could give them a floating point coverage value and factor "partial source pixels" into your filter calculations as well (multiply each source texel's weight by its coverage ratio).
anthony wrote:One point. You mention point-sample forward mapping, or splatting. In some ways that is simpler, but you still have the problem of weighted samples, this time in the destination, for pixels that have overlapping splats. I have not looked into splatting myself, but I gather it is done by using an extra 'weight' channel (much like an alpha channel) to keep track of the color ratios/weights, which is later merged into the color channels after the splatting phase is finished.

It would be interesting to attempt to 'splat' source image pixels into a destination image using forward-mapped distortions, especially now that we have no extra-channel restrictions in the new major release in development. However, one side effect is that you would not be able to use Virtual Pixel effects such as tiling or horizons, which come with reverse mapping. You would just get areas of 'undefined' or 'no color' in the destination image where no source pixel 'splatted' into the destination raster space.

Just a shame I do not have time to explore such a distortion method. But if I did, I would try to ensure it is also 'piece-wise' distortion capable.
Oops. I never really meant to imply splatting, but it seems like in my confusion, question 2 in the OP does exactly that. (It's because I hadn't given enough thought to how EWA should actually work. I'm writing from a supersampling standpoint, where I have to implicitly reverse-map a destination-space circular/square pixel to source-space by projecting individual bilinear samples. I COULD do actual EWA in a shader with dynamic branches, but it would be criminally slow. ;))

Still, are virtual pixel effects really inherently incompatible with splatting the source to the destination? I can see why they might be incompatible under the current codebase, but it seems like the forward and reverse mappings should be proper inverses of each other given the right parameterization...at least for affine and perspective distortions. You'd have to set up the reverse mapping first, but think of it like texture mapping: You might ordinarily index a source texture with (u, v) texture coordinates in the range ([0.0, 1.0], [0.0, 1.0]) (although ImageMagick's codebase might think in terms of ([0.0, src_x_res], [0.0, src_y_res])). This would seem to imply that you can't forward-map the same source texel to two destination pixel locations, because the one-to-many mapping is lost in the clamped parameterization. However, you can also wrap your coordinates and use a modulus operation, so texture coords (0.5, 0.5), (1.5, 0.5), (0.5, 1.5), and (1.5, 1.5) all map to the same source texel. If you're mapping from source-to-destination, you wouldn't just transform absolute source texel coordinates into destination-space. Instead, you'd consider the source image to be an infinitely tiled plane, then reverse-project the destination area into the tiled source coordinate frame, then forward-project the overlapped texels back into the destination coordinate frame.
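For example (a trivial sketch, nothing ImageMagick-specific), the tiling itself is just a wrap of the source lookup:

    #include <math.h>

    /* Treat the source image as an infinite tile: wrap any source
     * coordinate back into [0, size) before sampling. */
    static double wrap_coord(double coord, double size)
    {
      double wrapped = fmod(coord, size);
      return (wrapped < 0.0) ? wrapped + size : wrapped;
    }

With size = 1.0, wrap_coord(0.5, 1.0) and wrap_coord(1.5, 1.0) both land on the same texel, which is all the tiled-plane view above really needs.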

There's not really much of a point in that though, unless you can get a piecewise version of it to work better than EWA or supersampling with complex distortions.

Anyway, my biggest question in the OP was question 3, by which I meant something entirely different from splatting:
After you find the source texels contributing to each destination pixel, and after you compute your offset vectors from source-to-destination-pixels (from which you derive your filter weights), what coordinate frame do you transform these vectors to before you calculate their [squared] vector lengths/distances? I asked because it doesn't seem like it would make sense to calculate weights from absolute source-space texel distances: Your support size is 2 if you're using a cubic filter, but your ellipse might actually cover hundreds of source texels in each dimension...so it would seem you have to scale these distances to fit your filter size in some manner. I'm now guessing you're implicitly using some variation of the "filter-space" I derived above, but the details could be different by some constant factor too, especially if you're using a pixel radius different from sqrt(2.0)/2.0.
Last edited by CranberryBBQ on 2014-06-02T15:52:20-07:00, edited 1 time in total.
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Location: Sunnyvale, California, USA

Re: How does EWA work with sample locations and filter weights

Post by fmw42 »

My understanding is that the derivatives are between source and destination and vice-versa. I do not think any filter space coordinates are used.

The pixels in the ellipse are not transformed to the destination space circle. The weighting is done in source coordinates within the ellipse. But if I recall correctly, the distances are normalized in proportion to the axes lengths. So that effectively converts the ellipse distances into an effective radius, so the weighting can be circularly symmetric. If that is not the case, then the gaussian function would need to do the normalization internally. But since Anthony is using a look up table to go from distance to weights, he is probably normalizing the distances before accessing the weights.
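(In other words, something like r = sqrt((d1/a)^2 + (d2/b)^2), where d1 and d2 are the sample's offsets along the two ellipse axes and a and b are the axis lengths, so r runs from 0 at the center to 1 at the ellipse boundary. That is my reading of it, at least.)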

But Anthony can explain further.

I know much of this because I developed my own Gaussian EWA filter many years ago (and worked with Anthony at the start of his development to help with the derivatives, but he used Heckbert's formulation rather than mine, so I stepped back). See my perspective transformation work at http://www.fmwconcepts.com/fmw/ipt.html (I have also published papers on that topic -- http://www.fmwconcepts.com/fmw/fmw_pubs.html).
anthony
Posts: 8883
Joined: 2004-05-31T19:27:03-07:00
Location: Brisbane, Australia

Re: How does EWA work with sample locations and filter weights

Post by anthony »

CranberryBBQ wrote:
anthony wrote:The derivatives are then used to determine the axes of the ellipse (this was improved later by Professor Nicholas Robidoux with some heavy math, at which point the size bug was discovered), and that is also multiplied by the filter support window (typically 2.0, especially for Gaussian, but this is now variable).
This point is interesting: When you say the derivatives are used to determine the axes of the ellipse, do you mean the derivatives of the source coords with respect to (1, 0) and (0, 1) unit-length destination-space basis vectors, or the derivatives of the source coords with respect to (pixel_radius, 0) and (0, pixel_radius) in destination-space? I ask because the former would imply you're using a pixel radius of 1.0, which results in more overlap between pixels than the "minimal" pixel radius of sqrt(2.0)/2.0 (unless you're talking about diameter instead of radius, in which case the reverse would apply). If so, is this deliberate?
The derivatives are determined in two ways, both from the actual distortion function.

Typically they are calculated from the derivative function itself. Paul Heckbert detailed that for perspective.
The derivatives however do not line up with the ellipse major/minor axes; that is where the heavy math comes in.
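Roughly speaking, Heckbert's formulation builds the ellipse's implicit coefficients from the partial derivatives of the source (u,v) coordinates with respect to the destination (x,y). A bare-bones sketch of that step (not the actual ImageMagick code, which also applies the clamping and support scaling described further down):

    /* Sketch: implicit coefficients of the source-space ellipse that is
     * the preimage of a unit circle around the destination pixel.
     * ux = du/dx, uy = du/dy, vx = dv/dx, vy = dv/dy at that pixel.
     * The sampling area is then A*du*du + B*du*dv + C*dv*dv <= F. */
    typedef struct { double A, B, C, F; } Ellipse;

    static Ellipse ellipse_from_derivatives(double ux, double uy,
                                            double vx, double vy)
    {
      Ellipse e;
      e.A = vx * vx + vy * vy;
      e.B = -2.0 * (ux * vx + uy * vy);
      e.C = ux * ux + uy * uy;
      e.F = (ux * vy - uy * vx) * (ux * vy - uy * vx);  /* = A*C - B*B/4 */
      return e;
    }

The 'heavy math' is then extracting the actual major/minor axis vectors (and the size correction) from those coefficients.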

The other method is in the Polar distortion, mapping a polar plot (y=0 is the center, along the top edge). In that case I directly give the ellipse major/minor vectors as the 'derivatives', as the ellipse in the source image is actually aligned with the X and Y axes. It is a lot easier to just give those than to calculate the derivatives from the reverse mapping.

Of course the Cartesian to Polar distortion involves circular arcs rather than ellipses, so EWA is turned off for that distortion.

CranberryBBQ wrote: Also, what about when the ellipse radius covers less than the distance between source texels (when you're upsizing in an area)? You have to expand it in this case to ensure sufficient support, correct?
The sampling area is bounded by a unit circle at a minimum. That is, the major and minor axes of the ellipse will not be made smaller than 1. This is then multiplied by the support window. Of course, if you have a support window less than half the square root of 2 (approx 0.707, the diagonal distance), then you can get 'no pixels in the sample area'. In this case EWA fails and IM falls back to a 'point' or 'interpolative' sampling method, which can be separately defined. The only filter with a support less than that was "box", so the cylindrical box support is enlarged slightly.
See Cylindrical Filters... http://www.imagemagick.org/Usage/filter ... terpolated

You can purposefully set the support too small, which can produce some very neat enlargement effects!
See the notes in the previous link, which also has a link to...
See... Resampling Failure... http://www.imagemagick.org/Usage/distor ... rt_failure
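In rough pseudo-C, the clamping described above amounts to something like this (a simplified sketch, not the MagickCore source):

    /* Sketch: never let the sampling ellipse shrink below a unit circle,
     * then scale it out by the filter's support window. */
    static void clamp_and_scale_axes(double *major, double *minor,
                                     double support)
    {
      if (*major < 1.0) *major = 1.0;   /* minimum: a unit circle */
      if (*minor < 1.0) *minor = 1.0;
      *major *= support;                /* e.g. 2.0 for the Gaussian default */
      *minor *= support;
    }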


CranberryBBQ wrote: That's clever! I'm essentially restricted to a supersampling approach for fragment shaders anyway (thankfully the distortion in my case is slight), so I never thought much about the difficulties of actually narrowing down initial candidates for an ellipse overlap test, but using independent scanlines over a parallelogram area is a great solution. I guess it seems so obvious in retrospect, but it's nontrivial to start with: You need to use independent search ranges for each scanline to avoid the "huge xy-aligned rectangle" problem, but the fact that you can do that may not be clear until you can first visualize a tight bounding volume (or area) that's axis-aligned along one of its dimensions. The problem is, until you realize you want a scanline-based solution, it's not immediately obvious that's the kind of bounding volume you want in the first place, and the unrelated axes of the ellipse don't really give you the best hint about what approach to take. I'm glad you worked it out! :)

Your average hit/miss ratio should be pi/4 though (depending on exact source texel boundaries), or about 78.5%, right?
From my memory it was 71%, but then my long-term memory with numbers is not really trustworthy. It was a far better ratio than 1:5000 (or some ridiculous figure) I was getting from even reasonable diagonal ellipses using Heckbert's implementation. It was a major insight into the problem that paid off big time.

It is the difference between theoretical algorithms and practical programming.
anthony wrote:The squared distance of the sample pixel (integer) from the reverse-mapped source pixel (floating point) is then used to look up the weighting factor from a pre-prepared weighting table (also distance-squared distorted) and used to weight the color/alpha contribution of the pixel (fairly standard). The result is the color for the destination.
CranberryBBQ wrote: This deals with the heart of my confusion above, and my last post details my current thoughts on the matter. The "filter-space" derivation may be a little too verbose, but the basic idea is this: Once you have an ellipse containing all of your source texels (and once you find their positions), scale your distance vectors along each ellipse axis until the ellipse axes fit to the expected filter range (a radius of 2 for cubic filters, etc.). Is this true of ImageMagick? That is, do you scale your offset vectors similarly before computing their squared distances, or do you just directly calculate squared distances in the source domain even if they may be far narrower (upsizing) or wider (downsizing) than your filter's support range?
I prepare a table covering the filter support range (about 1000 values), using squared distances. For each sample point I get the square of the distance (avoiding a square root) in terms of the ellipse radius (along the scaled ellipse vectors). That gives a value from 0 to 1 which is then scaled to the table, and the filter weighting value is looked up; that aspect is unchanged from the original code. Essentially the ellipse is mapped to the filter support.
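In outline, the table trick looks something like this (a sketch with made-up names, not the real MagickCore code):

    #include <math.h>

    #define TABLE_SIZE 1000   /* about 1000 pre-computed weights */

    static double weight_table[TABLE_SIZE];

    /* Index i stands for a squared elliptical radius of i/(TABLE_SIZE-1),
     * i.e. r^2 in [0,1], with r = 1 on the ellipse boundary. */
    static void build_weight_table(double (*filter)(double), double support)
    {
      for (int i = 0; i < TABLE_SIZE; i++) {
        double r = sqrt((double)i / (TABLE_SIZE - 1)); /* undo the squaring once */
        weight_table[i] = filter(r * support);         /* ellipse -> filter support */
      }
    }

    /* q = squared elliptical radius of a sample, already in [0,1] */
    static double lookup_weight(double q)
    {
      return weight_table[(int)(q * (TABLE_SIZE - 1))];
    }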

I know the mapping is correct: if you enlarge a single pixel on a background using distort and again using resize, both with the same filter and scaling, you get the exact same image. It was a good double-check of the filter results.

anthony wrote:Basically the EWA implementation has created a small revolution in just basic resizing (scaling) of images, beyond its original use for distortion.
CranberryBBQ wrote: Definitely! Cylindrical resizing may be a lot slower, but unless you're using a Gaussian (which is truly separable), there's just no other escape from the axis-aligned artifacts you get from separable resizing. When I first stumbled upon your site a few years back, I was so happy that someone finally bothered to implement something better and put so much effort into explaining how it all works. Your site is probably the most accessible educational resource around for people interested in resizing, and it covers issues I probably never would have learned about otherwise...like the fact that to extend a sinc filter's frequency domain properties into two dimensions, you need a jinc rather than just a cylindrical sinc.
I totally agree. I looked at a lot of sources, books (I work at a university, though not as an academic, and have access to a good collection of source books), web sites, and so on, and I found that just trying to get your head around things enough to actually implement them was much of the battle. To solve this I document it, which is why resize, filters, and distort are all HUGE parts of the IM Examples web site. I document this so I, and others, can refer back to what I found. I still go back and look at what I have done.

Resizing first, then filters, and now distorts have been a major effort on my part, and one I am proud of.


anthony wrote:Now Nicholas and I have also talked about using the whole parallelogram, and later using a 'diamond' sampling area. This then lets us use diagonally aligned linear (2-pass) sampling filters instead of a cylindrical filter. We actually think it is quite feasible to do so, and it would be about the same speed as it is now (even with the added sampling area complication), but we have not implemented it. The diamond shape would be more directly determined from the same major/minor axes as used for ellipse determination, and would have no alignment with the X-Y axes. The scan area would also directly match the sampling area.

A diamond (or perhaps even a quadrangle) could also be more directly determined using reverse mapping of destination mid-pixel locations, rather than actually using derivatives. Really it is a matter of developing an alternative sampling module to use, and some way of selecting it. I purposely kept sampling separate from the distortion, so that should be fairly easy to develop, unlike trying to implement piecewise distortions (triangular or quadrilateral distortions) as the code currently stands, as distortion and mapping are currently too intertwined.
CranberryBBQ wrote: Using separable two-pass filters with distortions sounds complicated. I mean, I can visualize it:
  1. Separate a diamond into diagonal strips (scanlines of nonzero width)
  2. Use a weighted [one-dimensional or two-dimensional?] filter along each strip, which collapses each strip to a single vector value (one for each color channel)
  3. Use a weighted one-dimensional filter in the other direction to get a pixel value
However, it seems like this would be error-prone and might result in unanticipated artifacts, because the samples in each strip aren't actually going to lie on a straight line. I wonder if it would be better to treat them like they are in a straight line (thereby throwing away their off-axis distances) or to calculate 2D distances within each strip? If this does pose a problem though, it could probably be alleviated by adding a more accurate discrete-to-continuous mapping: Instead of giving each source texel a 0 or 1 coverage value for strips (or even for EWA in general), you could give them a floating point coverage value and factor "partial source pixels" into your filter calculations as well (multiply each source texel's weight by its coverage ratio).
I would not implement it quite that way. More like I do for EWA: each point is determined, and then the distances along the diamond vectors are used to look up the weights, which can both be applied to each sample then and there. So a two-pass filter, but done in one pass. The reason is that each destination pixel will have a different sampling diamond, and as such you are better off doing both passes at each sample point, rather than as a separate 'pass'. Basically a one-pass distortion with different sampling areas.
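Roughly, the two 1-D weights would just be multiplied at each sample, something like this sketch (the idea is not implemented, and all the names here are made up):

    #include <math.h>

    /* Sketch of the 'diamond' idea: express the sample's offset from the
     * center in the diamond's own (s,t) axis coordinates and multiply two
     * 1-D filter weights -- a two-pass (tensor) filter done in one pass. */
    static double diamond_weight(double dx, double dy,        /* offset from center */
                                 double sx, double sy, double s_len, /* axis 1: unit dir, length */
                                 double tx, double ty, double t_len, /* axis 2: unit dir, length */
                                 double (*filter)(double), double support)
    {
      double s = (dx * sx + dy * sy) / s_len;  /* normalized coord along axis 1 */
      double t = (dx * tx + dy * ty) / t_len;  /* normalized coord along axis 2 */
      return filter(fabs(s) * support) * filter(fabs(t) * support);
    }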
CranberryBBQ wrote: Still, are virtual pixel effects really inherently incompatible with splatting the source to the destination? I can see why they might be incompatible under the current codebase, but it seems like the forward and reverse mappings should be proper inverses of each other given the right parameterization...at least for affine and perspective distortions
It could only be done if you forward mapped (splatted) a virtual pixel that actually hit the destination space. But then you need to figure out how to do that, which typically needs a reverse map. I have the same problem when calculating the preferred size of a distorted image: I need to forward map the source image corners into the destination space to determine the image (window) size. That is why some distorts (Shepards) cannot calculate a default distorted image size, and use the original image size and virtual offset instead.
CranberryBBQ wrote: You'd have to set up the reverse mapping first
Okay, I jumped the gun. You got it.


CranberryBBQ wrote: Anyway, my biggest question in the OP was question 3, by which I meant something entirely different from splatting:
After you find the source texels contributing to each destination pixel, and after you compute your offset vectors from source-to-destination-pixels (from which you derive your filter weights), what coordinate frame do you transform these vectors to before you calculate their [squared] vector lengths/distances? I asked because it doesn't seem like it would make sense to calculate weights from absolute source-space texel distances: Your support size is 2 if you're using a cubic filter, but your ellipse might actually cover hundreds of source texels in each dimension...so it would seem you have to scale these distances to fit your filter size in some manner. I'm now guessing you're implicitly using some variation of the "filter-space" I derived above, but the details could be different by some constant factor too, especially if you're using a pixel radius different from sqrt(2.0)/2.0.

I calculate it in terms of distances along the major/minor axis vectors of the sampling ellipse, to get an elliptical radial distance that is then just used to look up the table. So in terms of space: elliptical space (squared) mapped to a unit circle (squared), and then mapped to a table of filter values (with a pre-squared index).
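Per sample that works out to something like this (again only a sketch with made-up names, continuing the table idea above):

    /* Sketch: squared elliptical radius of a sample offset (dx,dy), using
     * unit axis directions (mx,my), (nx,ny) and radii r_major, r_minor.
     * The result is 0 at the ellipse center and 1 on its boundary, so it
     * can be used directly as the pre-squared index into the weight table. */
    static double elliptical_radius_squared(double dx, double dy,
                                            double mx, double my, double r_major,
                                            double nx, double ny, double r_minor)
    {
      double a = (dx * mx + dy * my) / r_major;  /* offset along major axis */
      double b = (dx * nx + dy * ny) / r_minor;  /* offset along minor axis */
      return a * a + b * b;
    }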
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/