UNCOL, or: dealing with loss of information when compiling.

From: Dave Lloyd <dave@occl-cam.demon.co.uk>
Newsgroups: comp.compilers
Date: 26 Jun 1996 11:40:12 -0400
Organization: Compilers Central
Keywords: optimize, books

Toon Moene <toon@moene.indiv.nluug.nl> wrote:
> Do you have a reference for Wolfe's book? (I recall that it has
> some repugnant title like "Supercompilers for Supercomputers", which to me
> has an uncomfortably high "Carl Sagan touch" to it).


"Optimising Supercompilers for Supercomputers" , Michael Wolfe, Pitman, ISBN
0-273-08801-7. Some of the invented algebra is pretty abominable (it's
computer science), but if you skip that, the book contains good discussion
of why a compiler would want to dramatically rearrange the details of loops
to get significant speedups (and this includes superscalar and cached
processors as well as vector processors).
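
To make that concrete, here is a minimal sketch of the simplest such
rearrangement, a loop interchange (my own example, not taken from the
book): Fortran stores arrays column-major, so running the first subscript
in the inner loop turns a strided sweep into a unit-stride one.

    program interchange_sketch
       implicit none
       integer, parameter :: n = 1000
       real, allocatable :: a(:,:), b(:,:)
       integer :: i, j

       allocate (a(n,n), b(n,n))
       call random_number(b)

       ! Before: the inner loop varies the second subscript, so
       ! consecutive iterations touch memory n elements apart.
       do i = 1, n
          do j = 1, n
             a(i,j) = 2.0 * b(i,j)
          end do
       end do

       ! After interchange: unit stride in the inner loop, far
       ! friendlier to the cache; the result is identical.
       do j = 1, n
          do i = 1, n
             a(i,j) = 2.0 * b(i,j)
          end do
       end do
    end program interchange_sketch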


> This is typical of the point of view of a compiler writer (sorry,
> this is not meant to be as harsh as it sounds). Translating
> (physics) knowledge into programs takes several steps:
>
> 1. From experimentation to mathematical model. Generalisation.
> 2. From mathematical model to computational model. Discretisation.
> 3. From computational model to source code. Coding.
> 4. From source code to executable. Compilation.
>
> In fact, I am involved in steps 1-3, although mostly 2 and 3. What
> you lose in steps 2 and 3 is _at least_ as important as the
> information that can be lost in step 4 when you have to deal with a
> poor intermediate language (be it a human-readable one, like C, or
> an intermediate language like gcc's RTL representation).


I couldn't agree with you more. I used to be a geophysicist (MHD of the
Earth's core) before becoming a compiler writer. Unfortunately, the problem
is still that no processor is ever fast enough to model physics
satisfactorily, so much effort must be spent at every step. By the time a
Fortran compiler sees things, the physics is long lost and much of the maths
is unrecognisable. But there is still plenty of room for a compiler to make
an order-of-magnitude difference in running time. For *some* problems, F90
vector syntax can be natural for the programmer and lets the compiler see
further up the chain, at least allowing it to rearrange or tile the
iteration order of loops for better cache usage, etc. You should find that
the HPF/F95 FORALL syntax lets you describe neighbour problems more easily,
but it is still all about the mechanics of computation (a bit more freedom
for the compiler) and not physics. (In fact, vector syntax is redundant
with FORALL around.)
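
For illustration, a small sketch of the two notations (the arrays and the
stencil here are my own invention): the whole-array assignment implies no
iteration order at all, and the FORALL expresses a neighbour update whose
right-hand sides are all evaluated before any element is assigned.

    program forall_sketch
       implicit none
       real :: u(0:101), unew(0:101)
       integer :: i

       call random_number(u)

       ! F90 vector syntax: a whole-array operation with no iteration
       ! order implied, so the compiler is free to vectorise or tile it.
       unew = 0.5 * u

       ! FORALL (HPF/F95): a neighbour (stencil) update.  Every
       ! right-hand side is evaluated before any assignment happens,
       ! which is what lets FORALL subsume the vector syntax above.
       forall (i = 1:100)
          unew(i) = 0.5 * (u(i-1) + u(i+1))
       end forall
    end program forall_sketch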


> [ This is actually being done to generate so-called "adjoint
> models" in various branches of physics to do sensitivity studies -
> see URL below and references therein for further details ]


I had a look at the URL in your sig but could not find what you describe.
Could you give me a more specific URL, please?


Regards,
----------------------------------------------------------------------
Dave Lloyd Email: Dave@occl-cam.demon.co.uk
Oxford and Cambridge Compilers Ltd Phone: (44) 1223 572074
55 Brampton Rd, Cambridge CB1 3HJ, UK