Re: Efficient Evaluation of Image Expressions

Hermann <>
Mon, 23 Jul 2007 15:03:47 -0000

          From comp.compilers

Related articles
Efficient Evaluation of Image Expressions (Hermann) (2007-07-18)
Re: Efficient Evaluation of Image Expressions (2007-07-19)
Re: Efficient Evaluation of Image Expressions (Michael Tiomkin) (2007-07-19)
Re: Efficient Evaluation of Image Expressions (Nick) (2007-07-22)
Re: Efficient Evaluation of Image Expressions (Hermann) (2007-07-23)
Re: Efficient Evaluation of Image Expressions (Nick) (2007-07-30)
Re: Efficient Evaluation of Image Expressions (Nick) (2007-08-01)

From: Hermann <>
Newsgroups: comp.compilers
Date: Mon, 23 Jul 2007 15:03:47 -0000
Organization: Compilers Central
References: 07-07-066 07-07-079
Keywords: optimize
Posted-Date: 26 Jul 2007 00:56:20 EDT

On 22 Jul, 05:53, Nick <> wrote:

> This sounds to me like a typical array programming problem. If the
> code fragment you've supplied is the inner part of a larger loop which
> iterates across eight potentially large bitmaps (arrays), and the
> interpretive overhead of an expression far exceeds the time to
> actually do the arithmetic, then one of your choices is to attempt the
> operations as arrays.

This is part of a landscape dynamics simulation package called
Dinamica EGO. Basically, Dinamica is a data-flow processing
environment in which you chain operators to form loops and decisions.
Our research team has used it mainly to run simulations concerning the
future of the Amazon forest.

The expression posted previously is an example of a real expression
used in a model by a specific operator called CalculateMap. Basically,
the main data-flow engine collects the relevant images and passes them
to the CalculateMap operator. The expressions can be more complicated
than that, but I've tried to keep my example as didactic as
possible. :)

> As a programmer, I would definitely write the expression having
> eliminated the common subexpression myself. (This was mentioned
> earlier) At the very least, your code is much, much clearer.
> Or use a max function. r := max(0, (-4113.26 + 0.07*i7 -
> 5.08*i8/100 .....

You are right again. However, the software is mainly used by
environmental researchers, and it is really difficult to convince them
to write efficient code.

> But if you can change your computational procedure to deal with i1,
> i2, i3 as entire matrices, then at least you have a force multiplier
> where you have do the parsing and tree traversal once, but get the
> answers for a million or more pixels.

This seems to be a good solution. However, the images are huge and the
models typically deal with several gigabytes of images. As far as I
can see, applying each operator to an entire image can have a huge
impact on memory consumption, because you end up creating several
temporary images during the expression evaluation. Moreover, loops and
conditional branches seem to complicate calculating over entire images
at once. Any additional suggestions?
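One common compromise (not what Dinamica currently does; just a sketch
of the idea) is to evaluate the expression over fixed-size row bands
rather than whole images, so the temporaries are band-sized instead of
image-sized. The `evaluate_tiled` helper and the toy `expr` below are
hypothetical names, and the images are tiny lists standing in for real
bitmaps:

```python
def evaluate_tiled(images, expr, tile_rows=256):
    """Evaluate expr over row bands so temporaries stay band-sized."""
    rows = len(images[0])
    result = []
    for start in range(0, rows, tile_rows):
        # Slice the same band out of every input image.
        band = [img[start:start + tile_rows] for img in images]
        result.extend(expr(band))
    return result

# Toy 4x2 images; expression: (i1 + 2 > i2 / 3) ? i1 : 0
i1 = [[1, 2], [3, 4], [5, 6], [7, 8]]
i2 = [[9, 9], [9, 9], [0, 0], [0, 0]]

def expr(band):
    b1, b2 = band
    return [[a if a + 2 > b / 3 else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(b1, b2)]

out = evaluate_tiled([i1, i2], expr, tile_rows=2)
```

The parsing and traversal cost is paid once per band rather than once
per pixel, while peak memory is bounded by `tile_rows` times the image
width times the number of temporaries in the expression.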

> The operations +, -, /, *, abs, max, etc. should have the correct
> missing value handing for nulls. 1 2 3 + 4 MV 6 should be 5 MV 9 and
> so on.

Again, this is a little more complicated than my first explanation may
suggest. Null values (or nodata, as they are called by many
map-processing packages) "poison" the execution of the
expression. So, in the following expression

(if i1 + 2 > i2 / 3 then
    // ... do something
    // ... do something else
) catch (
    // ... do something if the test failed
)
if the evaluation of the if test fails because the corresponding
pixel value of i1 or i2 is null, then the execution of the entire if
is aborted and a null value starts to be propagated as the expression
result. This null value is caught by the catch operator, and the
evaluation of the remainder of the expression continues.

Of course, the evaluation does not raise an exception; instead, it
returns a NaN that is detected and skipped by every other operator
except catch.
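Since IEEE 754 arithmetic already propagates NaN through +, -, *, and
/, the poisoning described above falls out of the hardware for free,
and only catch needs an explicit NaN test. A minimal sketch of the
semantics (the `add` and `catch` helpers are illustrative, not
Dinamica's API):

```python
import math

NODATA = float('nan')

def add(a, b):
    # Arithmetic operators need no special nodata handling:
    # IEEE 754 guarantees NaN in -> NaN out.
    return a + b

def catch(value, fallback):
    # catch is the one operator that inspects the poison value
    # and substitutes a fallback result for it.
    return fallback if math.isnan(value) else value

pixel = catch(add(NODATA, 2.0), -1.0)  # poisoned, falls back
ok = catch(add(1.0, 2.0), -1.0)        # normal path
```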

> If i1 through i8 are small integer values always in the range of
> 0..255, then you can do the computations with pre-calculated tables.
> (You can return scaled large integer values with just enough digits
> behind the decimal places). If you truncate to get back a pixel
> value, all that excess precision will be cut off.
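To make the table suggestion concrete: each unary sub-term of the
quoted expression (the coefficients 0.07 and -5.08/100 come from the
expression posted earlier) can be precomputed once over all 256
possible 8-bit pixel values. This sketch assumes i7 and i8 really are
limited to 0..255; `partial` is a hypothetical helper name:

```python
# One table per unary sub-term, built once per expression;
# each pixel then costs two lookups and an add instead of
# two multiplies and a divide.
T7 = [0.07 * v for v in range(256)]           # 0.07 * i7
T8 = [-5.08 * v / 100.0 for v in range(256)]  # -5.08 * i8 / 100

def partial(i7, i8):
    return T7[i7] + T8[i8]
```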

> Use 4 byte floats, rather than 8 byte doubles. (This is not a big
> help)

The values are always single precision (4 byte) floating points.

> I would suggest reading Jim Blinn's books, and Jon Bentley's book
> "Writing Efficient Programs".
> Also, have a look at "J", for ideas.

Ok. Thank you very much for the references. I will take a look.


