Parallelizing (WAS: Death by pointers.)



Newsgroups: comp.compilers
From: pardo@cs.washington.edu (David Keppel)
Keywords: C, optimize, parallel
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 95-09-030 95-09-061 95-09-120
Date: Sun, 24 Sep 1995 01:53:28 GMT

>[Discussion of alias analysis; parallelizing.]


\begin{soapbox}


This seems like an appropriate moment to mention three things. First:


%A Gene M. Amdahl
%T Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities
%J AFIPS Conference Proceedings; Proceedings of the Spring Joint Computer Conference
%D April 1967
%N 30
%P 483-485
%I Thompson Books, Academic Press


``For over a decade prophets have voiced the contention
that the organization of a single computer has reached its
limits and that truly significant advances can be made only
by interconnection of a multiplicity of computers in such
a manner as to permit cooperative solution. ...


Demonstration is made of the continued validity of the
single processor approach and of the weakness of the
multiple processor approach in terms of application to real
problems and their attendant irregularities.''


Amdahl makes the point that using two processors more than doubles
the processor cost -- because you also have to connect the
processors -- and less than doubles the performance -- because
shared resources become bottlenecks. Moral (not quite, but I'll
fudge it): If you *are* going to build a parallel machine, make sure
that you minimize the raw cost/performance ratio, even at the
expense of making it harder to program. Otherwise you will rapidly
find yourself losing to the cost/performance of a uniprocessor that
is even easier to use than the ``easy to use'' multi (supercomputing
grand challenge entrants are exempted from this moral).
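

(An aside of mine, not a quote from the paper: Amdahl's argument is
usually boiled down to the following bound. If a fraction p of the
work parallelizes perfectly over N processors and the rest stays
serial, the speedup is capped no matter how many processors you buy.)


% Amdahl's bound, in my notation -- the 1967 paper argues the
% point informally rather than writing a formula out:
\[
  S(N) = \frac{1}{(1 - p) + p/N},
  \qquad
  \lim_{N \to \infty} S(N) = \frac{1}{1 - p} .
\]
% Example: p = 0.9 caps the speedup at 10x forever, while every
% processor you add still costs full price plus interconnect.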




Second:


%A Alexander C. Klaiber
%A James L. Frankel
%T Comparing Data-Parallel and Message-Passing Paradigms
%J Proceedings of the International Conference on Parallel Processing (ICPP)
%D 1993


They describe a variety of optimizations used with C*, a
data-parallel language, to improve the performance of a distributed
event simulator (an irregular computation) until it very nearly
matches the performance of the same program written to use message
passing. [And, incidentally, the data-parallel version was much
easier to implement.]


Note that writing in a data-parallel language means that you probably
get to define away some alias analysis problems and simplify the
analysis of others. Moral (well, not quite, but I'll fudge it): If you
write your program in a data-parallel language, you can compile it to
run efficiently on a wide variety of parallel *and uniprocessor*
machine architectures.
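

To make the alias point concrete, here is a sketch of mine in plain
C (not C*; the function name and the data-parallel rewrite are my
illustration, not anything from the paper):


/* Pointer version: the compiler must assume dst and src might
 * overlap, so it cannot run the iterations in parallel without
 * first doing (often unsuccessful) alias analysis. */
void scale_add(int n, float a, float *dst, const float *src)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] = a * src[i] + dst[i];   /* dst may alias src */
}

/* A data-parallel language expresses the same computation on
 * whole, named parallel values -- roughly ``y = a * x + y;''
 * applied elementwise. x and y are distinct objects, not pointers
 * into who-knows-where, so element independence holds by
 * construction and this alias problem is defined away. */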




Third, I frequently blurt out the following opinion: ``If people
really cared about performance, they'd rewrite their programs.''
There are about 1,000,000 ways to interpret the above; for now I'll
interpret it as -- Moral: for a majority of users, the cost of
rewriting their programs outweighs the cost of sticking with the old
code and accepting somewhat worse performance. In particular, note
that updating your program from FORTRAN to FORTRAN plus a
message-passing library doesn't help you next year when you port it
to an anesthetized-consistency (*very* relaxed consistency)
multiprocessor. Better you should just keep buying the fastest uni
and the best FORTRAN compilers.




To summarize, it is very difficult to build commercially successful
multicomputers because uniprocessors keep doing a good job with
dusty decks, and because it's very expensive to rewrite dusty decks.
Multicomputers will probably get a *lot* easier to sell once
data-parallel languages are widely used, BUT in order to convince
anybody to actually use a data-parallel language you have to sell
them on the idea that it will help them build better uniprocessor
programs, too -- faster code, lower development costs, more reliable
software, simpler maintenance, and so on.


That's my guess. Somebody please write me in ten years and tell me
if I've won :^) Oh, and I've been threatening to write DP-COBOL for
years; if you've got lots of money or just want to help, let me
know! :^)


(Note: The above discussion ignores set-top boxes.)


\end{soapbox}


;-D on ( The slippery soap ) Pardo