Any compilers for parallel platforms

"Alaric B. Williams" <alaric@abwillms.demon.co.uk>
29 Apr 1996 23:25:49 -0400

          From comp.compilers

Related articles
Ada compilers for parallel platforms sam818@seas.gwu.edu (1996-04-16)
Re: Ada compilers for parallel platforms rmeenaks@bfm.com (1996-04-19)
Any compilers for parallel platforms alaric@abwillms.demon.co.uk (Alaric B. Williams) (1996-04-29)

From: "Alaric B. Williams" <alaric@abwillms.demon.co.uk>
Newsgroups: comp.compilers,comp.lang.ada
Date: 29 Apr 1996 23:25:49 -0400
Organization: Compilers Central
References: 96-04-091 96-04-115
Keywords: Ada, parallel

rmeenaks@bfm.com (Ram Meenakshisundaram) wrote:


>Samir N. Muhammad (sam818@seas.gwu.edu) wrote:
>: I am interested in Ada compilers for parallel machines.


Compiler support for parallel processing is an underexplored area IMHO
- so few multi-CPU systems exist to play with these days. For a start,
there are several kinds of multi-CPU system.


The lowest level is one CPU, multithreaded. This is a common case, so
I won't go into it much, beyond saying that it can be treated very
similarly to the next case, except that the gains from forking a lot
of threads are usually lost.


The next level is more than one CPU sharing the same address space.
This can be used to implement threads, with the threads distributed
equally between the CPUs, but most languages need threading to be
explicitly managed with fork(), PAR, etc. Some languages instead offer
parallel extensions to conventional loop constructs: rather than "for
each number from 1 to 10, execute the code 'Process Element X'", we
have "Process Elements 1..10", which the compiler is free to spread
over the CPUs. C's loop construct cannot reliably be parallelised,
since the code in the loop may depend upon the last pass's results,
and the loop variable and ending condition can be modified at will.
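

To make that concrete, here is a minimal C sketch (my example, not
from the original thread) of the two cases a parallelising compiler
must tell apart:

    /* Why C loops resist automatic parallelisation.  The first loop
       carries a dependency between passes; the second does not, but
       the compiler must prove that before splitting it over CPUs. */
    #include <stdio.h>

    int main(void)
    {
        double a[10], b[10];
        int i;

        for (i = 0; i < 10; i++)
            b[i] = (double)i;

        a[0] = 1.0;
        /* Loop-carried dependency: pass i needs the result of pass
           i-1, so the passes cannot safely run in parallel. */
        for (i = 1; i < 10; i++)
            a[i] = a[i - 1] * 2.0;

        /* No dependency between passes: each element is independent,
           so "Process Elements 0..9" could go to separate CPUs. */
        for (i = 0; i < 10; i++)
            b[i] = b[i] * b[i];

        printf("%f %f\n", a[9], b[9]);
        return 0;
    }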


The next level is things like systolic arrays, where many CPUs with
independent address spaces hook up through some kind of network.
The kind of code that can take advantage of this is code written as a
set of subproblems which communicate via some rigid mechanism, often
message passing, and which can be run over the network. This usually
requires extreme modification of conventional imperative code! The
above-mentioned languages with cool loops can sometimes take advantage
of this with non-communicating subprocesses, but they must draw the
dividing line between threading on one chip and using extra CPUs, as
setting up a CPU can be very time-consuming (boot it up, pass it
code+data, run it, get the data back).
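

As a minimal sketch of that "farm out code+data, get results back"
pattern, here is what it might look like in C with MPI-style message
passing (my choice of library; the post above names no particular
mechanism):

    /* Rank 0 ships one subproblem to each worker over the network
       and collects the results; workers never share memory. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            int worker, subproblem, result;
            MPI_Status status;
            for (worker = 1; worker < size; worker++) {
                subproblem = worker * 100;   /* code+data out...   */
                MPI_Send(&subproblem, 1, MPI_INT, worker, 0,
                         MPI_COMM_WORLD);
            }
            for (worker = 1; worker < size; worker++) {
                MPI_Recv(&result, 1, MPI_INT, worker, 0,
                         MPI_COMM_WORLD, &status);
                printf("worker %d returned %d\n", worker, result);
            }
        } else {
            int subproblem, result;
            MPI_Status status;
            MPI_Recv(&subproblem, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     &status);
            result = subproblem + rank;      /* stand-in for work  */
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }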


The next level is systems where loads of individual computers, with
differing CPUs and hardware, link up together. In general, it is
nearly impossible to split a single task over such a system, as the
time-consuming interfaces between platforms (code must be compiled for
each CPU involved before execution, for example) make it too hard to
manage. This only works well for systems where whole tasks go to
individual nodes, and each node controls its entire task.


OK, now for some observations.


First of all, each of these can be emulated by the next level down.
Multiple CPUs on an address space can be emulated by one CPU threading
preemptively. Multiple CPUs with restricted IPC can be emulated by a
single CPU or a cluster. Multiple computers can be emulated by a CPU
array. In the extreme, a network containing Crays, Z80s, PCs, etc. can
be emulated by one PC (forgetting the processor-type boundaries, for a
start!).


In my opinion, the best system at the moment would be a hierarchical
one, in which a new task (an application) is allocated to a computer,
where it is compiled for the local CPU's instruction set.


Tasks can communicate with a rich IPC system.


The task itself is divided into Sections, which execute in parallel
and can communicate with message passing.


Sections are divided into Threads, which execute orthogonally (in
parallel).


Threads are divided into Streams, which use cooperative multitasking.
Streams should be on the same CPU to take advantage of the trust
implicit in cooperative tasking.
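

A minimal sketch of what a Stream scheduler might look like in C (the
names and the scheduling scheme here are hypothetical, not from any
real system):

    /* Cooperative multitasking on one CPU: each stream runs one step
       and voluntarily yields by returning; the scheduler just
       round-robins.  Returns 0 when a stream is finished. */
    #include <stdio.h>

    #define NSTREAMS 2

    typedef int (*stream_fn)(void);

    static int counter_a = 0, counter_b = 0;

    static int stream_a(void)
    {
        /* Streams trust each other: shared memory, no locks, because
           only one stream runs at a time and switches are voluntary. */
        counter_a++;
        printf("stream A: %d\n", counter_a);
        return counter_a < 3;          /* yield; ask to run again */
    }

    static int stream_b(void)
    {
        counter_b += counter_a;        /* freely reads A's memory */
        printf("stream B: %d\n", counter_b);
        return counter_b < 6;
    }

    int main(void)
    {
        stream_fn streams[NSTREAMS] = { stream_a, stream_b };
        int live = NSTREAMS;

        while (live > 0) {             /* round-robin until all done */
            int i;
            for (i = 0, live = 0; i < NSTREAMS; i++)
                if (streams[i] && !streams[i]())
                    streams[i] = 0;    /* stream finished; drop it */
                else if (streams[i])
                    live++;
        }
        return 0;
    }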


A single computer would, of course, always allocate tasks to itself,
and let them use IPC locally just as if they were on other systems.


If there is no systolic array in this computer, Sections are all
just preemptively multitasked. Threads are allocated to the CPUs on an
address space, and preemptively scheduled if there are more threads
than CPUs (perhaps they all share one). Threads can share each other's
memory implicitly.
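

For instance, a minimal sketch of Threads on a shared address space,
using POSIX threads (my choice; the scheme above doesn't name a
thread library) - the kernel spreads the threads over however many
CPUs exist, and preemptively schedules them if threads outnumber CPUs:

    /* Compile with -lpthread.  The shared[] array is visible to all
       threads implicitly, with no message passing needed. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static int shared[NTHREADS];       /* memory shared implicitly */

    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        shared[id] = id * id;          /* each thread writes its slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[NTHREADS];
        int ids[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++) {
            ids[i] = i;
            pthread_create(&tids[i], NULL, worker, &ids[i]);
        }
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tids[i], NULL);

        for (i = 0; i < NTHREADS; i++)
            printf("shared[%d] = %d\n", i, shared[i]);
        return 0;
    }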


And Streams happily sit switching between themselves, sharing memory
and processor time as they see fit.


As far as I know, no system supports this :-)


Oh, yes: 'loops' that are allowed to execute in parallel are
automatically turned into separate Sections or Threads if they seem
large enough, depending on system-specific cost/time heuristics.
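

Such a heuristic might look something like this minimal sketch; the
cost constants and names are hypothetical, since the right numbers
are, as I said, system-specific:

    /* The compiler estimates cycles-per-pass at compile time; the
       runtime weighs total work against the setup overhead of each
       flavour of parallelism before forking anything. */
    #include <stdio.h>

    #define THREAD_SETUP_COST   5000L   /* cycles: spawn local thread */
    #define SECTION_SETUP_COST 500000L  /* cycles: ship to other node */

    typedef enum { RUN_SERIAL, RUN_AS_THREADS, RUN_AS_SECTIONS } plan;

    static plan choose_plan(long passes, long cycles_per_pass)
    {
        long total = passes * cycles_per_pass;

        /* Only fork if the work dwarfs the setup overhead. */
        if (total > 100 * SECTION_SETUP_COST)
            return RUN_AS_SECTIONS;    /* big enough for remote CPUs */
        if (total > 100 * THREAD_SETUP_COST)
            return RUN_AS_THREADS;     /* worth threads on this chip */
        return RUN_SERIAL;             /* too small; just loop       */
    }

    int main(void)
    {
        printf("%d\n", choose_plan(10L, 100L));         /* SERIAL   */
        printf("%d\n", choose_plan(10000L, 1000L));     /* THREADS  */
        printf("%d\n", choose_plan(1000000L, 100000L)); /* SECTIONS */
        return 0;
    }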


Phew.




ABW
--
Alaric B. Williams Internet : alaric@abwillms.demon.co.uk
http://www.hardcafe.co.uk/Alaric/



