How does EWA work with sample locations and filter weights?

Discuss digital image processing techniques and algorithms. We encourage its application to ImageMagick but you can discuss any software solutions here.
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

How does EWA work with sample locations and filter weights?

Post by CranberryBBQ »

By default, ImageMagick's distort operator often uses an "elliptical weighted average" of source samples to produce a destination pixel. I understand the concept on the most basic level: Project a circular pixel into the coordinate frame of the source, and you'll get an ellipse or similar conic shape for most sane distortions. Take a bunch of samples inside this shape, weight them with a combined reconstruction/resampling filter, and you'll get your destination pixel color. It's similar to hardware anisotropic filtering in a way, except the sampling is more comprehensive than AF's supersampling approach (and based on point samples, not mipmapped linear samples), and it's generalized for arbitrary filters.

However, that's about the extent of what I understand, and I don't know the details of how resampling with distortion actually works:
  1. What radius should your circle have for EWA? What happens if you use squares instead of circles? I assume this would bias the filters toward diagonals, but could this cause serious artifacts with negative-lobed filters like jinc, cubics, etc.? Is supersampling as compatible with arbitrary reconstruction/resampling filters as EWA sampling?
  2. How exactly are the sample locations chosen for EWA? Does ImageMagick just search for all source texels whose centers are inside some multiple of your circle's radius from the pixel center ("some multiple" being the filter's support width)? This would partially explain why certain distortions don't work with EWA: It's easy to find the source coords for any point inside the destination circle, but it's harder for crazy distortions to determine whether the center of the nearest source texel is still inside that circle, and it's especially hard if you need to convert back into destination coords to calculate weights.
  3. Do you compute sample weights for distortions based on destination-space distances or source-space distances? (Would this differ for supersampling vs. EWA?) This would clearly make a huge difference in sample weights at oblique angles (where samples on the "far" side of the center will be more numerous or spaced farther apart in source space, depending on whether you do strict EWA or supersample), but I can't determine which is "correct." My best guess is you compute weights on destination-space distances, but I'm not sure, and it worries me that you do things differently for orthogonal resizing depending on whether you're upsizing or downsizing: For orthogonal downsizing, you scale the reconstruction/resampling filter to fit over a specific number of destination pixels. For orthogonal upsizing, you scale your filter to fit over a specific number of source pixels (to maintain minimum source support)...but I may just be looking at it from the wrong perspective too.
  4. If you compute weights based on source-space distances, how does the hugely asymmetrical sampling interact with negative-lobed filters like jinc, cubics, etc.? How do you size the filter to ensure sufficient support on each side (relative to ellipse axes)?
  5. This question applies to resizers in general, but assuming negative-lobed filters, how much error would you introduce by placing an unbalanced number of samples on each "side" of your destination circle's center? I suppose increasing the support size would alleviate this? (I ask because I'm considering the benefits of reweighting the same samples for different R/G/B subpixels. It adds a tolerable amount of bias to a Gaussian filter, but I'm not sure if it would totally break something like Lanczos jinc.)
In case anyone's wondering, I'm asking because I want to do some area-sampling in a post-processing shader: My problem domain involves trying to preserve high-frequency texture pattern as well as possible, and I need higher-quality resampling than mipmapping and anisotropic filtering can provide. So far, I've implemented a Gaussian-weighted average (from screenspace distances) of bilinear samples based on N-queens pattern screenspace offsets, and I've also applied different weights for each R/G/B subpixel. It works well enough, but using bilinear samples softens things to begin with, and I'd prefer a sharper filter than a Gaussian. I'm currently experimenting with a Jinc Lanczos2 filter using 16 bilinear taps in a regular 4x4 screenspace supersampling grid, but I can't really evaluate and tune its performance until I know I'm not butchering the fundamentals (and reading Paul Heckbert's Master's Thesis would be overkill: https://www.cs.cmu.edu/~ph/texfund/texfund.pdf ;)).
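To make the setup concrete, here's a rough CPU-side sketch (Python for readability) of the screenspace-weighted supersampling I described. The offsets, dest_to_source, and bilinear_sample are hypothetical placeholders, not my actual shader:

import math

# Hypothetical 4-queens-style offsets within a pixel (screenspace units);
# the exact pattern and tap count aren't what my question is about.
SAMPLE_OFFSETS = [(-0.375, -0.125), (-0.125, 0.375),
                  (0.125, -0.375), (0.375, 0.125)]

def gaussian_supersample(bilinear_sample, dest_to_source, x, y, sigma=0.5):
    """Gaussian-weight bilinear taps by their *screenspace* distance.

    bilinear_sample(u, v) -> (r, g, b) samples the source texture;
    dest_to_source(x, y) -> (u, v) is the distortion mapping.
    """
    total = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for dx, dy in SAMPLE_OFFSETS:
        w = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
        u, v = dest_to_source(x + dx, y + dy)
        total = [t + w * c for t, c in zip(total, bilinear_sample(u, v))]
        weight_sum += w
    return tuple(t / weight_sum for t in total)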
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Re: How does EWA work with sample locations and filter weights

Post by fmw42 »

Proper answers need to come from Anthony, who wrote the code.

In the meantime, see
http://www.imagemagick.org/Usage/distorts/#mapping
http://www.imagemagick.org/Usage/distor ... a_resample
http://www.imagemagick.org/Usage/filter/ (search for EWA)

Originally the EWA used a Gaussian weighting filter for the ellipse shape, but I believe it has been replaced by a Jinc function.

EWA is a reverse mapping. So the circle in the output for a given pixel location is projected into the input based upon the type of distortion (e.g. perspective); all the pixels found inside the resulting ellipse are weighted as above, and the result is returned to the output pixel location.
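In very rough sketch form (Python pseudo-code of the general reverse-mapping idea, not ImageMagick's actual implementation; inverse_map and weight are placeholders):

import math

def ewa_pixel(src, inverse_map, x, y, weight, support=2.0):
    """Minimal reverse-mapping EWA sketch (illustrative only).

    src is a 2-D list of intensities, inverse_map(x, y) -> (u, v) is the
    distortion's reverse mapping, and weight(r) is a radial filter on
    [0, support].
    """
    cu, cv = inverse_map(x + 0.5, y + 0.5)    # ellipse center in the source
    # Finite-difference approximation of the mapping's local derivatives.
    u1, v1 = inverse_map(x + 1.5, y + 0.5)
    u2, v2 = inverse_map(x + 0.5, y + 1.5)
    ax, ay = u1 - cu, v1 - cv                 # source image of a unit x step
    bx, by = u2 - cu, v2 - cv                 # source image of a unit y step
    # Never let an axis shrink below one source texel (the upsizing case
    # discussed later in this thread).
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    if 0.0 < na < 1.0:
        ax, ay = ax / na, ay / na
    if 0.0 < nb < 1.0:
        bx, by = bx / nb, by / nb
    det = ax * by - ay * bx
    radius = support * (math.hypot(ax, ay) + math.hypot(bx, by))  # safe bound
    acc = wsum = 0.0
    for v in range(int(cv - radius), int(cv + radius) + 1):
        for u in range(int(cu - radius), int(cu + radius) + 1):
            if not (0 <= v < len(src) and 0 <= u < len(src[0])):
                continue
            du, dv = u + 0.5 - cu, v + 0.5 - cv
            # Inverse Jacobian: source offset -> destination units, where
            # the sampling ellipse becomes a circle of radius `support`.
            dx = (by * du - bx * dv) / det
            dy = (-ay * du + ax * dv) / det
            r = math.hypot(dx, dy)
            if r < support:
                w = weight(r)
                acc += w * src[v][u]
                wsum += w
    return acc / wsum if wsum else 0.0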

Further details can be found in the references above.
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

Thanks, I suppose the emphasis on "reverse mapping" answers question 2: The circle is projected into an ellipse in the source's coordinate frame, and the source texels are found by explicitly testing their locations against the source-space ellipse. That would explain why EWA doesn't work with crazy mappings: It always assumes the pixel maps to an ellipse to do the source-space search, but the true pixel shape/size will be much different for crazy distortions like inverse polar.

Unfortunately, my most pressing questions are 3 and 4, which concern whether weights should be calculated [from distances] in the destination or source coordinate frames, and how to deal with asymmetry if the latter is the case. :-/
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Re: How does EWA work with sample locations and filter weights

Post by fmw42 »

I believe the weighting (e.g. Gaussian) is done in the input space, in the ellipse domain, on all the values found to fall within the ellipse. One computes the semi-major and semi-minor axes by using the derivatives of the transformation. The shape could be more of an egg shape, where the two semi-major/semi-minor axis lengths may differ on each side of the "ellipse", if one projects both ways relative to the circle. One could also project the four corners of a square pixel and get a trapezoid. However, I believe that historically, testing has indicated that it is easier to weight an ellipse than a trapezoid.
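For illustration only, one standard way to get the semi-axis lengths from the derivatives is as the singular values of the 2x2 Jacobian of the mapping (a closed form; whether ImageMagick computes them exactly this way, I cannot say):

import math

def ellipse_semi_axes(du_dx, du_dy, dv_dx, dv_dy):
    """Semi-major/semi-minor axes of the projected pixel ellipse.

    A unit circle in destination space maps to an ellipse whose semi-axes
    are the singular values of the Jacobian
    J = [[du_dx, du_dy], [dv_dx, dv_dy]].
    """
    trace = du_dx**2 + du_dy**2 + dv_dx**2 + dv_dy**2   # trace of J^T J
    det = du_dx * dv_dy - du_dy * dv_dx                 # det J
    disc = math.sqrt(max(trace * trace - 4.0 * det * det, 0.0))
    major = math.sqrt((trace + disc) / 2.0)
    minor = math.sqrt((trace - disc) / 2.0)
    return major, minor

# A uniform 3x downsize maps the unit circle to a circle of radius 3:
assert ellipse_semi_axes(3.0, 0.0, 0.0, 3.0) == (3.0, 3.0)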
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

Ah, I see...well, crap. I guess I'll have to wait on Anthony for a definitive answer, but my followup question is this: If weights are computed based on distances in the source domain, is that because it's faster or easier to code, or is it necessary for correctness? (In my case it's a lot faster to do destination-space weighting, i.e. screenspace weighting...but quality has to come first.)

Question 4 is still a mystery to me too: If weights are computed based on distances in the source domain, how does ImageMagick deal with negative-lobed filters with an "unbalanced" sample distribution?
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Re: How does EWA work with sample locations and filter weights

Post by fmw42 »

You only have one pixel in the destination (output domain), so it makes no sense to weight there.

The Gaussian has no negative lobes, and the other filters are cut off at the first zero crossing. See the resampling reference earlier.
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

I think you misunderstood me: I'm not talking about weighting destination pixels...as you say, that's confusing concepts and makes no sense. I'm talking about weighting source texels depending on:
  1. Their distance from the source pixel in the source coordinate frame. ImageMagick needs to transform the destination pixel location to the source frame to do this...but ImageMagick has to do this anyway to get the source location of the ellipse's center, before it ever even finds the samples in the first place.
  2. Their distance from the source pixel in the destination coordinate frame. ImageMagick would need to transform the source texel locations (or their offset vectors from the ellipse center) to the destination frame to do this. This is easier for me, though, given the particulars of my supersampling (non-EWA) method of picking samples.
In other words, my question is about which coordinate frame you compute the length of the "same" vectors in, because I'm not sure which distance (or expression of distance) is more important:
  • It seems like the distance in the source frame would be more important for a continuous reconstruction filter, because the source frame represents the signal in its own domain. I'd assume that properly reconstructing a continuous signal [unblurred, in its entirety] from samples would occur based on "what the signal is" rather than "what some observer sees of it."
  • It seems like the distance in the destination frame would be more important for a low-passed resampling filter, because blurring depends on how large a signal feature is from the observer's perspective, not how big it is in the source itself. This is the concept underlying mipmapping and anisotropic filtering.
However, both filters are combined in the digital domain, because the continuous signal is never explicitly constructed or represented digitally. If my reasoning is correct, it would seem there's a tradeoff between weighting distances in the source frame vs. destination frame, but I'm not sure which consideration should dominate...or if it really matters which you use, due to the tradeoff.

Anyway, I don't understand how the other filters could just be cut off at the first zero crossing without artifacts: Wouldn't that make e.g. Jinc totally pointless for distortion? Cutting off other filters at the first zero crossing would effectively turn them into inferior versions of the Gaussian, but EWA resampling was extended beyond Heckbert's original Gaussian filter precisely because its lack of negative lobes made it too blurry. I couldn't find anything like this in the resampling references above either.
Last edited by CranberryBBQ on 2014-05-31T12:52:51-07:00, edited 3 times in total.
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Re: How does EWA work with sample locations and filter weights

Post by fmw42 »

The answer is 1). It makes no sense to transform each pixel back to destination space, where they all go to within the one-pixel-or-so radius. Thus there is little if any distance to use for weighting. It has to be (Gaussian) weighted in source space for all pixels that fall within the ellipse. Then the result is put in the one destination pixel coordinate.

The filters are tapered by a windowing function. Some have one negative lobe. See the resampling reference. I am not expert on all the details of the filter now used by EWA (which I believe is one of the Robidoux filters, but am not 100% sure). There is information in the resampling reference from Anthony. Search for EWA and you can find out what filter is being used.

For proper details, you will have to wait until Anthony answers (though he is not often on the forum these days).
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

fmw42 wrote:The answer is 1). It makes no sense to transform each pixel back to destination space, where they all go to within the one-pixel-or-so radius. Thus there is little if any distance to use for weighting. It has to be (Gaussian) weighted in source space for all pixels that fall within the ellipse. Then the result is put in the one destination pixel coordinate.
This isn't really true though. Consider the simple case of plain resizing (no distortion) with a Lanczos2 sinc filter:
When you're downsizing, the Lanczos2 sinc support window has a radius of 2 destination pixels (diameter of 4), because samples have to be taken from outside the pixel boundaries to get enough support for proper reconstruction and resampling. If you're downsizing to a tiny size, the 4-pixel support window could potentially cover hundreds of source texels (even the entire image). There's always plenty of distance to use for weighting, because you scale your filter based on the support requirements: If your support includes all source texels falling within a radius of 2 destination pixels (Lanczos2 sinc), all of your distances are scaled to [0, 2] in each dimension, even if they're hundreds of texels apart in the source (which may be the case for extreme downsizing). In other words, for downsizing, weights are computed based on destination-frame distances. (However, weights seem to be computed based on source-frame distances for upsizing, which is why distortions become more confusing to me: You may be upsizing in one dimension and downsizing in another, or in extreme cases that may even change in different parts of the same destination pixel.)
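Here's the scaling I mean as a minimal 1-D sketch (Python for readability; illustrative only, not ImageMagick's actual code). The kernel is stretched by the scale factor when downsizing, and kept at its natural width when upsizing:

import math

def lanczos2(x):
    """Lanczos2 sinc: sinc(x) * sinc(x/2), supported on |x| < 2."""
    if x == 0.0:
        return 1.0
    if abs(x) >= 2.0:
        return 0.0
    px = math.pi * x
    return 2.0 * math.sin(px) * math.sin(px / 2.0) / (px * px)

def resize_1d(src, dst_len, support=2.0, kernel=lanczos2):
    """1-D resize showing the support scaling described above."""
    src_len = len(src)
    scale = src_len / dst_len
    blur = max(scale, 1.0)          # kernel stretch factor (1.0 when upsizing)
    out = []
    for i in range(dst_len):
        center = (i + 0.5) * scale  # destination pixel center in source coords
        lo = int(math.floor(center - support * blur))
        hi = int(math.ceil(center + support * blur))
        acc = wsum = 0.0
        for j in range(lo, hi + 1):
            if 0 <= j < src_len:
                # Dividing by `blur` measures the offset in *destination*
                # pixels when downsizing, in *source* texels when upsizing.
                w = kernel((j + 0.5 - center) / blur)
                acc += w * src[j]
                wsum += w
        out.append(acc / wsum if wsum else 0.0)
    return out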

The same concept would have to extend to EWA too in some way, at least in areas where you're "downsizing:" If you're using a Gaussian filter, the ellipse may only be the projection of a single pixel's area into source space, but if you're using a larger filter with a higher support requirement, you'll probably have to give your destination-frame circle a radius equal to the support before projecting into an ellipse in source-space. If you use e.g. a Lanczos2 Jinc and project only a single pixel size into source space, you may or may not have enough samples for reconstruction (~4 along each ellipse axis, but probably a bit more because of Jinc's farther and irrational zero crossings), but you probably won't have enough for the low-passed resampling filter if the distortion downsizes in this area...and so you'll get aliasing.
fmw42 wrote:The filters are tapered by a windowing function. Some have one negative lobe. See the resampling reference. I am not that expert on all the details of the filter used now by EWA (which I believe is now one of the Robidoux filters, but am not 100% sure). There is information in the resampling reference from Anthony. Search for EWA and you can find out what filter is being used.
Yeah, I think the default is the plain "Robidoux" filter, which has one negative lobe. Even one negative lobe is enough to make me wonder how distortion copes with unbalanced sample distributions (in the source frame) for general distortions if it's computing weights based on source-frame distances. ImageMagick has filters with a lot of negative lobes though, at least for resize and distort resize: I've used a 15-lobe Jinc-windowed Jinc before on a particular image. ;)
fmw42 wrote:For proper details, you will have to wait until Anthony answers (though he is not often on the forum these days).
The ImageMagick website is always my first resource for this kind of thing. I haven't read the whole thing through and through, so I could have missed something, but I've read a good bit of it more than once. Reading Anthony and Nicolas has taught me most of what I know about sampling filters and EWA...but as far as I know, this question isn't really explored or answered on the site. :-/
Last edited by CranberryBBQ on 2014-05-31T15:22:54-07:00, edited 7 times in total.
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Re: How does EWA work with sample locations and filter weights

Post by fmw42 »

Lanczos, I believe, and most cubic filters in general use only a 4x4 neighborhood in the source domain for a given pixel in the output domain, unless you specify more lobes. So it is similar to EWA in the direction of processing, but uses a much more limited region to resample.

Anything further needs to come from Anthony and/or Nicolas.
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

fmw42 wrote:Lanczos, I believe, and most cubic filters in general use only a 4x4 neighborhood in the source domain for a given pixel in the output domain, unless you specify more lobes. So it is similar to EWA in the direction of processing, but uses a much more limited region to resample.
Lanczos2 and cubic resize filters do use a 4x4 window for resizing (well, 4+4 for separable resizing)...but it's only a 4x4 source window if you're upsizing. If you're downsizing, they use a 4x4 destination window (and consider all the source texels falling within it), which essentially adds a low-pass resampling filter to the reconstruction filter. If they didn't do this, large downsizes would always be extremely aliased (barely better than point sampling), because they would only reconstruct the signal at an infinitesimally small destination point with no concern for how much of the source signal is actually covered by each destination pixel.
fmw42 wrote:Anything further needs to come from Anthony and/or Nicolas.
Yeah, I'm hoping they'll take an interest in this thread. :)
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

The more I think about it, the more I think my comments on filter scaling just inadvertently answered my own question...and everything else is falling into place too once I look at it from that perspective:
  1. Since EWA models a pixel's projection into source-space with an ellipse, it inherently uses a symmetric model of pixel area. We don't need to worry about asymmetric sample distributions with EWA, because EWA already uses a model that inherently avoids them...but that's also part of why it's too inaccurate for crazy distortions like inverse polar.
  2. Strictly speaking, we shouldn't compute weights based on distances in either destination space OR source space. We should compute weights based on distances in filter-space, where the length of each filter-space axis is equal to the nominal support diameter. No matter what coordinate frame we calculate our source-destination offset vectors in, we have to scale the vectors along each filter-space dimension to ensure each filter-space axis has the correct length. Once we scale our offset vectors this way, it shouldn't matter whether we started with source-space or destination-space vectors (assuming the EWA mapping is valid)...either way should result in the same weights. That basically answers my main question.
  3. As far as sizing the ellipse in the first place, I now suppose you initially project a circular pixel with "circle radius = support radius" into a source-space ellipse, then expand the ellipse along any axis with a source-space diameter less than its support size (i.e. if you're upsizing in that dimension).
It would be good to have confirmation from Anthony, but this is finally starting to make sense.
Last edited by CranberryBBQ on 2014-05-31T15:27:35-07:00, edited 1 time in total.
fmw42
Posts: 25562
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Re: How does EWA work with sample locations and filter weights

Post by fmw42 »

How do you define "filter space" and how do you get your pixels into that space? Doing any further processing to resample in "filter space" would add much more time to the processing.
CranberryBBQ
Posts: 12
Joined: 2014-01-23T17:11:13-07:00
Authentication code: 6789

Re: How does EWA work with sample locations and filter weights

Post by CranberryBBQ »

fmw42 wrote:How do you define "filter space" and how to you get your pixels into that space?
The term is a bit loose, because we don't scale distances to a [0, 1] range but to a [0, support] range.

That being noted, deriving filter-space depends on what coordinate frame you're defining its basis vectors relative to. Let's consider the destination frame our global coordinate frame, because the axes of filter-space are parallel to our pixel-sized-ellipse axes, which are always parallel to our destination-space xy axes. Let's also define some axis-naming terminology, borrowed loosely from texture mapping:
  • x and y axes are destination-space axes (consider it screenspace)
  • u and v axes are source-space axes (consider it texture space)
  • s and t axes are filter-space axes (not sure if those are the best letters to use, but whatever)
The basis-vectors of source-space in destination-space terms are:
  • u_axis_in_dest_space = (dx/du, dy/du)
  • v_axis_in_dest_space = (dx/dv, dy/dv)
The matrix that transforms source-space column vectors to destination-space column vectors is:
[dx/du dx/dv]
[dy/du dy/dv]

Similarly:
  • x_axis_in_source_space = (du/dx, dv/dx) in source-space
  • y_axis_in_source_space = (du/dy, dv/dy) in source-space
The matrix that transforms destination-space column vectors to source-space column vectors is the inverse of the above:
[du/dx du/dy]
[dv/dx dv/dy]

Those basis vectors and matrices are based on the distortion mapping or texture mapping, depending on context. In the case of EWA, we can naively project a pixel of a given radius into an ellipse in source space using the following mapping:
pixel_radius = sqrt(2.0)/2.0
naive_ellipse_s_axis_in_source_space = x_axis_in_source_space * pixel_radius
naive_ellipse_t_axis_in_source_space = y_axis_in_source_space * pixel_radius
I believe pixel_radius = sqrt(2.0)/2.0, because this is the smallest radius ensuring there are no "gaps" between pixels in any direction. Anyway, the axes of this ellipse (from center to edge) actually form the basis vectors of a coordinate frame of their own. I'll expand upon that later, but this frame is basically just a scaled version of destination-space and of filter-space. However, this ellipse formed by the pixel size is neither the correct size for finding source samples, nor does its coordinate frame form a proper basis for filtering:
  1. It's only the correct size for finding overlapping samples if you're using a filter whose support = pixel_radius...which may be the case for a cylindrical box filter, but nothing else. (Box filters have an orthogonal support radius of 0.5 units in filter-space. When you're just upsizing, this means box reconstruction filters have an orthogonal support radius of 0.5 source texels, tent filters have a support radius of 1 texel, cubics and Lanczos2 have a support radius of 2 texels, Lanczos4 has a support radius of 4 texels, etc. These orthogonal support radii expand for downsizing to 0.5, 1, 2, and 4 destination pixels respectively, due to the need for a low-pass resampling filter on top of the reconstruction filter. However, in the context of searching for EWA samples within an arbitrarily oriented ellipse, the support search radius for each filter is probably (2.0 * pixel_radius) times the filter's orthogonal support radius: When we're searching for source texels along an elliptical axis that's diagonal in source-space, neighbors along this axis could be as far as sqrt(2.0) texels apart.)
  2. ...and even for a box filter, a support radius equal to the pixel radius is only the right size when the distortion downsizes in both dimensions. If we're upsizing along one or both axes of our ellipse, it won't be big enough to contain our minimal number of support samples. Consider a 1024x1024 final image composed of a distortion of an 8x8 source image, for instance: Most destination pixels will only project to a tiny fraction of a source pixel.
  3. Weighting filters assume each source sample's distance from the destination is scaled according to the support size of the filter. For instance, a cubic filter expects source samples to be distributed (as evenly as possible) through a range of [0, 2] units away from the destination. However, transforming source-space offsets to the coordinate frame of a naively calculated ellipse will not satisfy this.
For these reasons, we need to resize the ellipse to create a proper sampling area and proper basis vectors for filter-space.

Constructing the basis vectors of filter-space actually depends on the length of the above vectors (whether you're upsizing or downsizing along a given dimension).
Let support = the support radius for your chosen filter (e.g. support = 2 for cubic filters, support = 4 for Lanczos4 sinc).

The s and t vectors will have the following mapping:
  • s_axis_in_source_space = (du/ds, dv/ds)
  • t_axis_in_source_space = (du/dt, dv/dt)
...and they are computed by the following vector operations:
  • s_axis_in_source_space = max(x_axis_in_source_space * pixel_radius * 2.0, normalize(x_axis_in_source_space) * pixel_radius * 2.0) = max(x_axis_in_source_space * sqrt(2.0), normalize(x_axis_in_source_space) * sqrt(2.0))
  • t_axis_in_source_space = max(y_axis_in_source_space * pixel_radius * 2.0, normalize(y_axis_in_source_space) * pixel_radius * 2.0) = max(y_axis_in_source_space * sqrt(2.0), normalize(y_axis_in_source_space) * sqrt(2.0))
EDIT: I had to update those (twice now), because I had them scaled wrong originally. :p You want each radial ellipse axis to have a length equal to the diameter of 1 destination pixel for downsizing and 1 source texel for upsizing (keeping in mind that the diagonal distances between pixels and texels could be as much as sqrt(2.0)). That's equal to the size of a tent filter. We'll scale it later to the actual filter size for finding our support samples, but the above is correct for defining the filter space basis vectors, due to the fact that filters expect samples to be within a distance of [0, support] rather than [0, 1].

The matrix that transforms filter-space column vectors to source-space column vectors is:
[du/ds du/dt]
[dv/ds dv/dt]
which =
[s_axis_in_source_space[0] t_axis_in_source_space[0]]
[s_axis_in_source_space[1] t_axis_in_source_space[1]]

The matrix that transforms source-space column vectors to filter-space column vectors is the inverse of that:
[ds/du ds/dv]
[dt/du dt/dv]
(which you'd get by computing the matrix inverse of the above)

That seems like a lot of math, but it's actually rather simple: s_axis_in_source_space and t_axis_in_source_space are the basis vectors of filter-space (expressed in source-space units). They form an ellipse, but this ellipse is not the same size as a pixel. Instead, it's related to the "support ellipse" you need to search for source samples inside in the following way:
  • support_ellipse_s_axis_in_source_space = s_axis_in_source_space * support = support * max(x_axis_in_source_space * sqrt(2.0), normalize(x_axis_in_source_space) * sqrt(2.0))
  • support_ellipse_t_axis_in_source_space = t_axis_in_source_space * support = support * max(y_axis_in_source_space * sqrt(2.0), normalize(y_axis_in_source_space) * sqrt(2.0))
The filter-space ellipse is scaled differently from the support-finding ellipse so that when you convert the source-space offset vectors of each source sample to filter-space (see above matrix), their length will be scaled correctly to fall within a range of [0, support].

I imagine ImageMagick does some of this more implicitly than explicitly, but from what I can tell, this is the correct way to size the EWA ellipse and compute filter weights for distortions. The concept of filter-space applies to regular resizing too, except it's more trivial to derive: Filter-space corresponds to destination-space for downsizing and source-space for upsizing, on a per-axis basis. It's only more complicated for distorts because the source and destination axes aren't necessarily parallel to each other.

tl;dr summary (sketched in code after this list):
  • Get the basis vectors of the destination frame expressed in source frame coordinates, i.e.
    x_axis_in_source_space = (du/dx, dv/dx)
    y_axis_in_source_space = (du/dy, dv/dy)
  • Scale those axes to get the basis vectors of filter-space:
    s_axis_in_source_space = max(x_axis_in_source_space * sqrt(2.0), normalize(x_axis_in_source_space) * sqrt(2.0))
    t_axis_in_source_space = max(y_axis_in_source_space * sqrt(2.0), normalize(y_axis_in_source_space) * sqrt(2.0))
  • Find your source samples within the area covered by an ellipse with axes:
    support_ellipse_s_axis_in_source_space = s_axis_in_source_space * support
    support_ellipse_t_axis_in_source_space = t_axis_in_source_space * support
  • Transform source sample offset vectors into filter-space by multiplying by the inverse of the matrix formed by the [s_axis_in_source_space, t_axis_in_source_space] vectors.
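And here is that summary as a Python sketch. All the names are mine, and this is my reading of how it should work, not confirmed ImageMagick behavior:

import math

def filter_space_weights(x_axis_src, y_axis_src, sample_offsets_src,
                         kernel, support):
    """Weight source-space sample offsets in 'filter-space'.

    x_axis_src = (du/dx, dv/dx) and y_axis_src = (du/dy, dv/dy);
    sample_offsets_src holds each candidate texel's offset from the
    ellipse center, in source units; kernel(r) is radial on [0, support].
    """
    def filter_axis(ax, ay):
        # max(axis, normalized axis) * sqrt(2.0), per the derivation above.
        n = math.hypot(ax, ay)
        scale = math.sqrt(2.0) * max(n, 1.0) / n
        return ax * scale, ay * scale

    su, sv = filter_axis(*x_axis_src)     # s basis vector (source units)
    tu, tv = filter_axis(*y_axis_src)     # t basis vector (source units)
    det = su * tv - sv * tu
    weights = []
    for du, dv in sample_offsets_src:
        # Express the offset in filter-space coordinates: invert [s t].
        s = (tv * du - tu * dv) / det
        t = (-sv * du + su * dv) / det
        r = math.hypot(s, t)              # lands in [0, support] inside the
        weights.append(kernel(r) if r < support else 0.0)  # support ellipse
    return weights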
anthony
Posts: 8883
Joined: 2004-05-31T19:27:03-07:00
Authentication code: 8675308
Location: Brisbane, Australia

Re: How does EWA work with sample locations and filter weights

Post by anthony »

EWA was originally only used with Gaussian filters, producing fairly blurry results (at that time there was also a code error (+ instead of -) that was later fixed).

The destination pixel is reverse mapped to the source image to get the center of the ellipse. And yes, it actually should not be the center, but it works well for a linear transformation like perspective, for which EWA was designed.

The derivatives are then used to determine the axes of the ellipse (this was improved later by Professor Nicolas Robidoux with some heavy math, at which point the sign bug was discovered), and that is also multiplied by the filter support window (typically 2.0, especially for Gaussian, but this is now variable).

Now every pixel within the ellipse on the source image is read (using scan lines). Originally it was every pixel in the containing rectangle, which for a long thin ellipse could be HUGE! I worked out a way to scan the containing parallelogram, which reduced the hit-miss ratio of the ellipse area to a constant ~78% (pi/4, the same as the area of a circle inscribed in a square), rather than the ridiculous figure you get from fitting an axis-aligned rectangle to a long, thin, diagonal ellipse. This gave the whole system an enormous speed boost (in one test case involving horizons, from 2 hours to 5 minutes!). I was actually impressed by this enormous improvement when the result appeared while I was waiting for a plane at Sydney airport!
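In sketch form (Python, purely illustrative), visiting a tight per-row span of the ellipse rather than its axis-aligned bounding box looks something like this. This version solves each row's exact span from the implicit conic, a simplification of the parallelogram scan described above:

import math

def scan_ellipse(cu, cv, A, B, C, F):
    """Visit source pixels inside A*du^2 + B*du*dv + C*dv^2 <= F.

    Assumes a real ellipse (A > 0 and 4*A*C > B*B), centered at (cu, cv)
    in source coordinates.  Each row's span is found by solving the conic
    for du, so only a tight band around the ellipse is visited.
    """
    v_extent = math.sqrt(F / (C - B * B / (4.0 * A)))
    hits = []
    v_lo = int(math.floor(cv - v_extent))
    v_hi = int(math.ceil(cv + v_extent))
    for v in range(v_lo, v_hi + 1):
        dv = v + 0.5 - cv
        # A*du^2 + (B*dv)*du + (C*dv^2 - F) = 0 gives the row's du range.
        disc = B * B * dv * dv - 4.0 * A * (C * dv * dv - F)
        if disc <= 0.0:
            continue
        half = math.sqrt(disc) / (2.0 * A)
        mid = -B * dv / (2.0 * A)
        for u in range(int(math.floor(cu + mid - half)),
                       int(math.ceil(cu + mid + half)) + 1):
            du = u + 0.5 - cu
            q = A * du * du + B * du * dv + C * dv * dv
            if q <= F:                  # q doubles as the squared ellipse
                hits.append((u, v, q))  # distance, used for weighting below
    return hits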

Here is the diagram I used for figuring out the scan area (as opposed to the actual sampling area contained within):
[diagram: parallelogram scan area enclosing the elliptical sampling area]

I was very careful to ensure ALL pixels in the ellipse area are also part of the scan area. In fact, in the code there are still commented-out debug lines to generate gnuplot output displaying the sample points, the ellipse, and the parallelogram bounds for specific test points. For example...
[plot: ellipse axes, sampled source pixels, and parallelogram scan bounds for one test point]

The arrows are the calculated major/minor ellipse axes from the mapping derivatives, the blue ellipse is the sampling area, the blue points are the source pixels in the sample area, and the red bounds are the area being scanned. You can see how much bigger this scan area would have been if we'd fitted an XY-aligned rectangle to the ellipse! As it is, we now have a constant ~78% hit/miss ratio.

The squared distance of the sample pixel (integer) from the reverse-mapped source point (floating point) is then used to look up the weighting factor from a pre-prepared weighting table (also indexed by squared distance) and used to weight the color/alpha contribution of the pixel (fairly standard). The result is the color for the destination.
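Sketched in Python, that table lookup amounts to something like this (illustrative, not the actual ImageMagick code):

import math

LUT_SIZE = 1024

def build_weight_lut(filter_fn, support):
    """Table of filter weights indexed by *squared* distance q in
    [0, support^2], so each per-sample lookup needs no square root."""
    q_max = support * support
    return [filter_fn(math.sqrt(i * q_max / (LUT_SIZE - 1)))
            for i in range(LUT_SIZE)]

def lut_weight(lut, q, support):
    """Look up the weight for squared distance q; 0 outside the support."""
    q_max = support * support
    if q >= q_max:
        return 0.0
    return lut[int(q / q_max * (LUT_SIZE - 1))]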

Now, originally the filter function was a blurry Gaussian filter, and that works well. Later, when Nicolas got involved and helped fix the sign bug, we started trying other filters such as Jinc to see how they perform, especially with negative weightings. A Sinc filter, for example, is bad, as you can get positive reinforcement in a no-op case, generating a pattern of pure black and white from a 'hash' or pixel-level checkerboard image. This has worked out great, and it is more realistic. The result was the removal of many blocking artifacts, especially along diagonal edges involving strong color changes. It has, however, been a bumpy ride, and the development of the Robidoux cylindrical filter (which turned out to be almost identical to a 2-pass Mitchell-Netravali filter) shows this.

Basically the EWA implementation has created a small revolution in just basic resizing (scaling) of images, beyond its original use for distortion.


Now, Nicolas and I have also talked about using the whole parallelogram, and later using a 'diamond' sampling area. This would then let us use diagonally aligned linear (2-pass) sampling filters instead of a cylindrical filter. We actually think that it is quite feasible to do so, though it would be about the same speed as it is now (even with the added sampling-area complication), but we have not implemented it. The diamond shape would be more directly determined from the same major/minor axes as used for ellipse determination, and would have no alignment with the X-Y axes. The scan area would also directly match the sampling area.

A diamond (or perhaps even a quadrangle) could also be determined more directly by reverse mapping destination mid-pixel locations, rather than actually using derivatives. Really, it is a matter of developing an alternative sampling module, and some way of selecting it. I purposely kept sampling separate from the distortion, so that should be fairly easy to develop, unlike trying to implement piecewise distortions (triangular or quadrilateral distortions) as the code currently stands, as distortion and mapping are currently too intertwined.

One point: you mention point-sampled forward mapping, or splatting. In some ways that is simpler, but you still have the problem of weighted samples, this time in the destination, for pixels that receive overlapping splats. I have not looked into splatting myself, but I gather it is done by using an extra 'weight' channel (much like an alpha channel) to keep track of the color ratios/weights, which is later merged into the color channels after the splatting phase is finished.
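A minimal sketch of that accumulate-then-merge scheme (Python; single nearest-pixel splats with unit weights, purely illustrative; real splatting would spread each sample over a weighted footprint):

def splat(src, forward_map, dst_w, dst_h):
    """Forward-map each source pixel, accumulating color plus a weight
    channel; merge them at the end.  Uncovered pixels stay 'undefined'
    (0.0 here), since no virtual-pixel fallback exists in this direction.
    """
    color = [[0.0] * dst_w for _ in range(dst_h)]
    wsum = [[0.0] * dst_w for _ in range(dst_h)]
    for v, row in enumerate(src):
        for u, value in enumerate(row):
            x, y = forward_map(u + 0.5, v + 0.5)
            xi, yi = int(x), int(y)
            if 0 <= xi < dst_w and 0 <= yi < dst_h:
                color[yi][xi] += value   # weighted color (weight = 1 here)
                wsum[yi][xi] += 1.0      # the extra 'weight' channel
    return [[color[y][x] / wsum[y][x] if wsum[y][x] else 0.0
             for x in range(dst_w)] for y in range(dst_h)]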

It would be interesting to attempt to 'splat' source image pixels into a destination image using forward-mapped distortions, especially now that we have no extra-channel restrictions in the new major release in development. However, one side effect is that you would not be able to use Virtual Pixel effects such as tiling or horizons, which come with reverse mapping. You would just get areas of 'undefined' or 'no color' in the destination image where no source pixel 'splatted' into the destination raster space.

Just a shame I do not have time to explore such a distortion method. But if I did, I would try to ensure it is also 'piece-wise' distortion capable.
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/