Re: Vector assignment semantics (was Re: latest trends in compiler optimization research?)



Related articles
Re: Vector assignment semantics (was Re: latest trends in compiler opt al407@cam.ac.uk (Anton Lokhmotov) (2007-08-13)
Re: Vector assignment semantics (was Re: latest trends in compiler opt gah@ugcs.caltech.edu (glen herrmannsfeldt) (2007-08-15)
Re: Vector assignment semantics (was Re: latest trends in compiler opt Peter_Flass@Yahoo.com (Peter Flass) (2007-08-15)
Re: Vector assignment semantics (was Re: latest trends in compiler opt jwkenne@attglobal.net (John W. Kennedy) (2007-08-15)
Re: Vector assignment semantics (was Re: latest trends in compiler opt wyrmwif@tsoft.org (SM Ryan) (2007-08-16)
Re: Vector assignment semantics (was Re: latest trends in compiler opt bmoses-nospam@cits1.stanford.edu (Brooks Moses) (2007-08-15)
Re: Vector assignment semantics (was Re: latest trends in compiler opt mojaveg@mojaveg.lsan.mdsg-pacwest.com (2007-08-17)
Re: Vector assignment semantics (was Re: latest trends in compiler opt gah@ugcs.caltech.edu (glen herrmannsfeldt) (2007-08-17)
Re: Vector assignment semantics (was Re: latest trends in compiler opt Peter_Flass@Yahoo.com (Peter Flass) (2007-08-17)
Re: Vector assignment semantics (was Re: latest trends in compiler opt jjw@cs.sfu.ca (James J. Weinkam) (2007-08-20)
Re: Vector assignment semantics (was Re: latest trends in compiler opt gah@ugcs.caltech.edu (glen herrmannsfeldt) (2007-08-20)

From: SM Ryan <wyrmwif@tsoft.org>
Newsgroups: comp.compilers,comp.lang.pl1
Date: Thu, 16 Aug 2007 04:45:21 -0000
Organization: Quick STOP Groceries
References: 07-08-041
Keywords: parallel, PL/I
Posted-Date: 17 Aug 2007 11:52:56 EDT

Peter Flass <Peter_Flass@Yahoo.com> wrote:
# > http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/ibmol004/3.1.4.2.1
# >
# > It seems that IBM doesn't follow the ANSI-1976 standard, and this is
# > one more example. (The manual is copyright 1964, 1990, and I believe
# > applies to current IBM compilers.)
# >
# > In most other cases, PL/I does support what I would have called
# > "naturalness" at the expense of space and/or time. This may have been
# > more "natural" to programmers used to using DO loops for array
# > assignment.


One programmer's naturalness is another programmer's godawful hack. I
doubt any single construct can express every programmer's intent for
all possible machines.


# None yet, but many thanks for the "heads up." I'm not sure I
# understand serial vs. parallel, but if I do understand I was going to


A program is usually presented as a totally ordered list of
operations, implicitly partially ordered by how they load and
store shared variables. Serial means the partial order implied by
loads/stores must be contained in the total order of the operation
list. This is serial semantics: while the actual execution order may
differ, it must appear as if the operations are executed in the order
given, one operation at a time.


The total order of instructions is sometimes how programmers
think--they are expecting operations to appear in this strict
order--but sometimes the ordering is simply an artifact of the
sequential presentation of most programming languages. In
p := s div t; q := s mod t
assuming p and q are independent of s and t, the programmer has to
physically write the div or mod operator first, but the actual
order is a whim of the moment and of no consequence.


In some cases the compiler cannot prove that some alternative
order is equivalent to this serial order, and so is not allowed to
exploit the alternative order even though it might be much faster.
For example, in
for i do a[f(i)] := op(a[g(i)]) od
if f(i)=g(j) for some i<j, a value stored in one iteration
is loaded in a subsequent iteration. Unless the compiler
can prove no such interference exists, or that the store will
complete before the load, this loop cannot be vectorised, since
vectorisation defers the stores of multiple loop iterations,
placing a store after a subsequent load.


One way to give the compiler more freedom is to allow an
alternate notation that does not imply a sequential order of
the operations.
forall i do P(i) od
might be defined with parallel semantics--all stores in
P(i) occur after all loads--or it might be defined so that
loads and stores of one iteration are in serial order, but
loads and stores of different iterations have no ordering.
This would allow the compiler to freely vectorise, parallelise,
overlap iterations, etc., because it does not have to prove
noninterference of different iterations--because if
interference does exist, it is the programmer's error.
(For example, in some convergence code, load/store
interference is irrelevant to the eventual results and
only affects the speed of convergence.)


If all the compiler is given is the serial order, it has
no way to reach through the screen, grab the programmer
by the collar, and demand to know whether the programmer
knows interference is impossible or knows interference
is irrelevant. Absent that information the
compiler is forced to make conservative but safe decisions
which might be contrary to the programmer's wishes. And if
the programmer has no way to express her wishes to the
compiler, it becomes an exercise in mutual frustration.


# do parallel, and create an array temporary to hold the result of the
# expression. Now I don't know what I'll do.


Parallel does not always mean some kind of strict anti-serial
order. It could mean don't-care or doesn't-matter; that could
include serial order or any other order.


--
SM Ryan http://www.rawbw.com/~wyrmwif/
This is one wacky game show.

