Definition of Data Dependence Distance  email@example.com (1991-12-06)
Re: Definition of Data Dependence Distance  maydan@Neon.Stanford.EDU (1991-12-06)
Re: Definition of Data Dependence Distance  frazier@CS.UCLA.EDU (1991-12-06)
Re: Definition of Data Dependence Distance  grover@brahmand.Eng.Sun.COM (1991-12-10)
From: maydan@Neon.Stanford.EDU (Dror E. Maydan)
Organization: Computer Science Department, Stanford University, CA, USA
Date: 6 Dec 91 22:32:04 GMT
In article <1991Dec6.firstname.lastname@example.org> email@example.com (Bill Pugh) writes:
>I've been wrestling with the appropriate exact definition of data dependence
>distance (in loops), and haven't come to any definite conclusions. ...
> Does dependence distance represent the difference in the values of
> the loop variables, or the difference in the loop trip counts?
I think that, at least for directions, most people use a variation of
definition 1. There's no reason this cannot be extended to distances. If
you look at Wolfe's Supercomputers book, he uses definition 1 but reverses
the sign of the dependence when the step is negative. Thus he would agree
with the results of the first definition in the first two examples, but in
example 3, his definition would also give a distance of 3. Given that the
sign of a distance is somewhat arbitrary (the distance of the dependence
from 'a' to 'b' is the negative of the distance of the dependence from 'b'
to 'a'), it seems to me that Wolfe's definition is quite reasonable. I
believe that it will allow you to perform the standard optimizations using
the same framework used when normalization is performed.
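Pugh's original examples aren't quoted above, so the following is only a
sketch of the two conventions under an assumed negative-step loop (the
function names and the sample iterations are mine, not from either post).
Definition 1 takes the difference of the loop-variable values; Wolfe's
variant flips the sign when the step is negative, so the distance reads
positively in execution order:

```python
def value_distance(i_src, i_snk):
    # Definition 1: difference in the loop-variable values of the
    # dependence source and sink, regardless of the loop step.
    return i_snk - i_src

def wolfe_distance(i_src, i_snk, step):
    # Wolfe's convention: reverse the sign when the step is negative,
    # so the distance is expressed in terms of execution order
    # (a positive distance always points forward in time).
    d = i_snk - i_src
    return -d if step < 0 else d

# Hypothetical loop:  do i = 10, 1, -1:  a(i-3) = ... a(i) ...
# Iteration i=10 writes a(7); iteration i=7 (three trips later,
# since the step is -1) reads a(7).
print(value_distance(10, 7))      # definition 1 gives -3
print(wolfe_distance(10, 7, -1))  # Wolfe's convention gives +3
```

The point is that Wolfe's sign reversal makes the distance for a
downward-counting loop look the same as for the equivalent normalized
(step +1) loop, which is why the standard transformation framework
still applies.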
Definition 2 would require changing the parallelization algorithms. If
the distance vector for the second example were (1,0), using standard
algorithms one would assume that the 'j' loop could be run in parallel if
the outer loop was run sequentially. This is clearly not the case.
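The standard rule being appealed to here can be sketched as follows
(a minimal, assumed formulation, not code from either post): a loop at
a given nesting level can run in parallel if every dependence is either
carried by some outer loop (an earlier distance component is nonzero)
or has a zero distance at that level. Under this rule, a distance
vector of (1,0) would mark the inner 'j' loop parallel once the outer
loop is sequential, which is exactly the conclusion that would be
unsound if definition 2 had produced that vector:

```python
def parallelizable_loops(distance_vectors, depth):
    # For each loop level k, check whether any dependence is carried
    # at level k: first nonzero component of its distance vector is
    # at position k. If none is, level k may run in parallel
    # (assuming all outer loops run sequentially).
    parallel = []
    for k in range(depth):
        ok = True
        for d in distance_vectors:
            carried_outer = any(d[j] != 0 for j in range(k))
            if not carried_outer and d[k] != 0:
                ok = False  # dependence carried at level k
        parallel.append(ok)
    return parallel

# A single dependence with distance vector (1,0): carried by the
# outer loop, so the inner loop appears safe to parallelize.
print(parallelizable_loops([(1, 0)], 2))  # → [False, True]
```

If the true dependence is not captured by (1,0), the algorithm would
wrongly report the inner loop as parallel, which is the objection
raised above to definition 2.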