Re: Optimizing Compilers project ideas

firth@sei.cmu.edu (Robert Firth)
24 Oct 91 15:47:14 GMT

          From comp.compilers

Related articles
Optimizing Compilers project ideas jliu@aludra.usc.edu (1991-10-19)
Re: Optimizing Compilers project ideas firth@sei.cmu.edu (1991-10-24)
Re: Optimizing Compilers project ideas arun@tinton.ccur.com (1991-10-26)

Newsgroups: comp.compilers
From: firth@sei.cmu.edu (Robert Firth)
Keywords: optimize, parallel
Organization: Software Engineering Institute, Pittsburgh, PA
References: 91-10-092
Date: 24 Oct 91 15:47:14 GMT

In article 91-10-092 jliu@aludra.usc.edu (Jih-Cheng Liu) writes:


[potential research areas]
> 1. Continue to optimize loops.
> 2. Investigate superscalar optimization.
> 3. Parallelize loops for MPP machines.


Those seem like good areas. In addition, you could look at the way these
optimisations interact. For example, on a uniprocessor it is a good idea
to move loop-invariant expressions out of loops, since the code is then
executed once rather than many times. Even there, compilers make mistakes
by hoisting code out of loops that execute zero times or only once.
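A minimal C sketch of that uniprocessor case (my own illustration, with made-up function names, not taken from any particular compiler):

```c
/* Before optimisation: x / y is loop-invariant but is
   re-evaluated on every iteration. */
void scale(int *a, int n, int x, int y) {
    for (int i = 0; i < n; i++)
        a[i] = a[i] * (x / y);
}

/* After loop-invariant code motion: the division runs once.
   But if n == 0 and y == 0, this version traps where the
   original would not -- the zero-trip mistake. */
void scale_hoisted(int *a, int n, int x, int y) {
    int t = x / y;              /* hoisted out of the loop */
    for (int i = 0; i < n; i++)
        a[i] = a[i] * t;
}

/* A guarded form avoids the mistake: hoist only under the
   loop's entry condition. */
void scale_guarded(int *a, int n, int x, int y) {
    if (n > 0) {
        int t = x / y;
        for (int i = 0; i < n; i++)
            a[i] = a[i] * t;
    }
}
```

With n == 0 and y == 0, scale never divides, but scale_hoisted traps; the guarded form is what a careful compiler has to emit.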


But on a multiprocessor, if you can fully parallelize the loop, you don't
necessarily want to move invariant expressions out of it, since that could
lengthen the longest thread and so delay the overall computation.
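One way to see that tradeoff is with a simple critical-path model (my own sketch, not from the thread): assume one iteration per thread, with body[i] the cost of iteration i, C the cost of the invariant expression, and S the extra serial/broadcast cost that hoisting introduces before the parallel region.

```c
/* Critical path when the invariant is recomputed redundantly
   inside every thread: each thread runs C + body[i], and all
   threads run in parallel. */
int span_redundant(const int *body, int n, int C) {
    int m = 0;
    for (int i = 0; i < n; i++)
        if (C + body[i] > m) m = C + body[i];
    return m;
}

/* Critical path when the invariant is hoisted: one serial
   computation (plus broadcast cost S) must finish before any
   thread can start its body. */
int span_hoisted(const int *body, int n, int C, int S) {
    int m = 0;
    for (int i = 0; i < n; i++)
        if (body[i] > m) m = body[i];
    return C + S + m;
}
```

In this model span_redundant is C + max(body) while span_hoisted is C + S + max(body): hoisting never shortens the longest thread here, and it lengthens the computation by whatever serialisation and broadcast cost it adds.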


That points to two places where perhaps more work should be done. First,
better strategies to parallelise code so as to balance the load as well as
possible, i.e. keep all processors crunching between synchronization
points. Second, how do we structure an optimising and parallelizing
compiler so that enough information is available to it, and in the right
phases or places, to allow it to resolve such tradeoffs intelligently?
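On the first point, one standard heuristic a parallelizer might try is greedy longest-processing-time scheduling (my choice of illustration, not something the compilers above necessarily do): assign each iteration, costliest first, to the currently least-loaded processor, so as to minimise the longest load between synchronization points.

```c
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b) {
    return *(const int *)b - *(const int *)a;
}

/* Assigns n iteration costs to p processors, heaviest first,
   each to the least-loaded processor so far; returns the
   makespan (the longest processor's total load). */
int lpt_makespan(const int *cost, int n, int p) {
    int *c = malloc(n * sizeof *c);
    int *load = calloc(p, sizeof *load);
    for (int i = 0; i < n; i++) c[i] = cost[i];
    qsort(c, n, sizeof *c, cmp_desc);
    for (int i = 0; i < n; i++) {
        int m = 0;                       /* least-loaded processor */
        for (int j = 1; j < p; j++)
            if (load[j] < load[m]) m = j;
        load[m] += c[i];
    }
    int span = 0;
    for (int j = 0; j < p; j++)
        if (load[j] > span) span = load[j];
    free(c);
    free(load);
    return span;
}
```

The greedy answer is not always optimal (iteration costs {7,6,5,4,3} on 2 processors give a makespan of 14 where 13 is achievable), which is part of why load balancing remains a research area rather than a solved phase.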


Hope that helps
--

