Re: flow analysis and optimization (Cliff Click)
Tue, 9 Nov 1993 16:07:24 GMT


Newsgroups: comp.compilers
From: (Cliff Click)
Keywords: optimize, analysis
Organization: Center for Research on Parallel Computations
References: 93-11-041
Date: Tue, 9 Nov 1993 16:07:24 GMT

(Vugranam Chakravarthy Sreedhar) writes:

> 3 methods are described for doing flow analyses and optimizations.
> 1. Syntax directed approach
> 2. Interval method
> 3. (general) iterative method.

> I would like to know which method you use in your compiler.

I'm part of the scalar compiler group at Rice. All my stuff uses the
iterative method on sparse representations because: (a) it's linear in
the size of the program, and (b) it handles any graph.

Caveat Emptor: I do not hack code with the parallel compiler group. I
think the parallel compiler group uses a syntax directed approach for the
intraprocedural stuff. The interprocedural stuff is probably done with
the iterative method again.

> What is the complexity (space/time) of doing interprocedural analysis?

Depends on which analysis. Some are linear, some are exponential.

> What benefits do you get (in terms of speedup and efficiency of the code
> generated)?

If only one fact is discovered, but that fact lets you parallelize/vectorize
your kernel, then you can get amazing speedups. For Fortran,
interprocedural analysis seems to pay off fairly well. I don't know about
C/C++, but it seems like it should.

> (C/C++ guys, how do you handle pointer analysis, especially in the presence
> of heap allocation, pointer arithmetic, type-casting, etc.?)

The parallel group does various flavors of Fortran, so heap allocation is
less of a problem. They do big alias analysis stuff instead.
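For flavor, the cheapest style of alias analysis is flow-insensitive
unification (Steensgaard-style): treat every assignment between pointers as
forcing them into one equivalence class via union-find. The sketch below is
a hypothetical toy with invented variable names; a real analysis for C/C++
would also have to model pointer arithmetic, casts, and heap allocation
sites, which is exactly what makes that setting hard:

```python
# Toy flow-insensitive alias analysis via unification (Steensgaard-style).
# Hypothetical statement forms: p = &x and p = q only.

class Node:
    def __init__(self, name):
        self.name = name
        self.parent = self     # union-find parent
        self.pointee = None    # representative of what this may point to

def find(n):
    while n.parent is not n:
        n.parent = n.parent.parent   # path halving
        n = n.parent
    return n

def union(a, b):
    a, b = find(a), find(b)
    if a is b:
        return a
    b.parent = a
    # Unifying two pointers also unifies their points-to targets.
    if a.pointee and b.pointee:
        union(a.pointee, b.pointee)
    elif b.pointee:
        a.pointee = b.pointee
    return a

def addr_of(p, x):   # models: p = &x
    r = find(p)
    if r.pointee:
        union(r.pointee, x)
    else:
        r.pointee = find(x)

def assign(p, q):    # models: p = q
    union(p, q)

def may_alias(p, q):
    p, q = find(p), find(q)
    return (p.pointee is not None and q.pointee is not None
            and find(p.pointee) is find(q.pointee))

p, q, r, x, y = (Node(n) for n in "pqrxy")
addr_of(p, x)   # p = &x
addr_of(q, y)   # q = &y
assign(r, p)    # r = p   -> r and p may point to the same place
print(may_alias(p, r), may_alias(p, q))
```

Unification is nearly linear time but imprecise; the more precise
subset-based (Andersen-style) analyses cost more, which is the usual
trade-off in "big alias analysis stuff."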

--
Massively Scalar Compiler Group, Rice University
