compiling on parallel computers

Dale Worley <think!compass!worley@eddie.mit.edu.uucp>
Mon, 20 Jun 88 10:20:48 EDT

          From comp.compilers



      From: the Moderator


      Irons looked at
      context-free parsing on small numbers of parallel processors and decided that
      it wasn't worth the effort, because each processor ends up waiting for the
      one to its left to tell it something crucial about the context for its
      chunk of the program. If the number of processors approximates the number
      of tokens, then using some of the processors to look for special cases might
      make more sense. Has anyone looked at highly parallel compiling, as opposed
      to compiling for highly parallel machines? -John


From what little I know about programming for highly-parallel
machines, a critical point is minimizing the amount of "passing
context". For instance, one can compute the states passed through by
a finite automaton reading n characters in O(log n) time with n
processors. The trick is to assign one processor to each character
and have it compute the automaton's transition matrix for that
character (state before the char --> state after the char). Then the
processors swap information so that processor i ends up holding the
transition matrix of the substring of characters 1 to i. Since the
composition of transition matrices is *associative*, this combining
step is a parallel-prefix computation and runs in O(log n) time on n
processors. Finally, we broadcast the initial state to all the
processors, and each computes the state actually attained at its
position.
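
Here is a sketch of the trick in Python (anachronistic, but it makes
the idea concrete; the three-state automaton and every name in it are
made up for illustration). The combining rounds are simulated
sequentially here; on a real parallel machine each list comprehension
would be a single step executed across all processors at once:

# Toy three-state DFA over the characters 'a' and 'b'.
DELTA = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 1, (2, 'b'): 0,
}
STATES = (0, 1, 2)

def char_fn(c):
    # The transition "matrix" of one character: state before -> state after.
    return {s: DELTA[(s, c)] for s in STATES}

def compose(f, g):
    # Apply f, then g.  Composition of these maps is associative,
    # which is what lets the combining rounds regroup freely.
    return {s: g[f[s]] for s in STATES}

def states_after_each_char(text, start):
    # One "processor" per character builds its local transition function.
    fns = [char_fn(c) for c in text]
    # O(log n) combining rounds: after round k, fns[i] covers the
    # characters max(0, i - 2**k + 1) .. i.
    n, step = len(fns), 1
    while step < n:
        fns = [fns[i] if i < step else compose(fns[i - step], fns[i])
               for i in range(n)]
        step *= 2
    # Broadcast the initial state; each processor reads off its state.
    return [f[start] for f in fns]

print(states_after_each_char("abba", 0))   # prints [1, 2, 0, 1]

Each round doubles the span of characters a processor's matrix
covers, which is where the O(log n) bound comes from.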


The secret is to figure out the "meaning" of each segment of the text
independently of context, and then broadcast the context from the
processors that know it to those that don't. This is much like the
way humans parse things. The segment of text


a := b + c;


is interpreted by a human without knowing the declarations of a, b,
and c. Only after recognizing what the statement "sort of means" do
we combine it with the information about a, b, and c to figure out
what it "really means".


This implies a shift in parsing strategy. LR(1) parsing assumes that
the parser has complete left context, which we know is almost never
needed. Has anyone worked on parsing strategies with limited context
on the left as well as on the right?


Since I've been working on a compiler for Algol 68 (a language which
does *not* subscribe to Wirth-ism), I've discovered that it has some
really noxious constructs which can be *syntactically* disambiguated
only by having *semantic* knowledge. For instance:


(4) XXX n


If XXX (the boldface word "xxx") is an operator, this is an
application of that operator with the value 4 as left argument and n
as right argument. If XXX is a type-symbol, it is a declaration of an
identifier n which is an array of 4 XXXs. This sort of thing would be
very hard to process in a highly parallel fashion, and humans have
trouble with it too.
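
A toy illustration (Python, with hypothetical tree shapes of my own
devising) of why no purely syntactic parser can commit to a tree
here: the same token string yields two different results depending on
what the symbol table says XXX is:

# Tokens of the fragment "(4) XXX n".
TOKENS = ['(', '4', ')', 'XXX', 'n']

def parse_fragment(symtab):
    kind = symtab['XXX']
    if kind == 'operator':
        # (4) XXX n : dyadic operator XXX applied to 4 and n.
        return ('apply', 'XXX', ('int', 4), ('ident', 'n'))
    if kind == 'mode':
        # (4) XXX n : declare n as an array of 4 XXXs.
        return ('declare', 'n', ('row', 4, 'XXX'))
    raise SyntaxError("cannot disambiguate without semantic knowledge")

print(parse_fragment({'XXX': 'operator'}))
print(parse_fragment({'XXX': 'mode'}))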


Dale

