Parallel Compiler Representation email@example.com (Xinan TANG) (1994-03-08)
Re: Parallel Compiler Representation firstname.lastname@example.org (1994-03-21)

From: Xinan TANG <email@example.com>
Keywords: optimize, parallel, question
Date: Tue, 8 Mar 1994 01:34:32 GMT
I have a question concerning the intermediate representation used in
parallelizing compilers. There are many forms used for compiler
optimization, such as the CFG, SSA, PDG, PDW, etc., but none of them is
good enough to represent data dependences for arrays and for recursive
data structures introduced through pointers.
In practice, the parallelism comes mainly from array-based loops and from
disjoint recursive function calls.
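To make the array-loop case concrete, here is a minimal sketch (my own
illustration, not from any particular compiler) contrasting a loop whose
iterations are independent, and so could in principle run in parallel, with
a loop carrying a flow dependence (each iteration reads the previous
iteration's result), which must run sequentially as written:

```python
def doall_loop(b):
    # No loop-carried dependence: iteration i touches only a[i] and b[i],
    # so the iterations are independent (a "DOALL" loop).
    a = [0] * len(b)
    for i in range(len(b)):
        a[i] = 2 * b[i]
    return a

def recurrence_loop(b):
    # Loop-carried flow dependence: a[i] reads a[i - 1], so iteration i
    # cannot start before iteration i - 1 finishes.
    a = [0] * len(b)
    for i in range(1, len(b)):
        a[i] = a[i - 1] + b[i]
    return a
```

It is exactly this distinction that the dependence information in the
intermediate representation has to capture.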
When we extract parallelism from array-based loops, the techniques used
are subscript-based dependence analysis, loop transformation, and
parallelization. What is the place of the internal representation here?
Isn't it as important as it is in ordinary compiler optimization? Can we
say that the choice of internal representation does not matter as long as
loops are represented in a "normal form"? Could anyone who is writing
such a parallel compiler give me a hint on this point?
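As one concrete example of the subscript-based dependence analysis
mentioned above, here is a sketch of the classic GCD test (function name
and interface are my own choices): for references A[a*i + b] and
A[c*j + d], the equation a*i - c*j = d - b has an integer solution, and
hence a dependence is possible, only if gcd(a, c) divides d - b. The test
is conservative, since it ignores loop bounds and direction:

```python
from math import gcd

def gcd_test(a, b, c, d):
    """GCD dependence test for the references A[a*i + b] and A[c*j + d].

    Returns True if a dependence is *possible*: a*i - c*j = d - b has an
    integer solution iff gcd(a, c) divides d - b. Assumes a and c are
    not both zero. A False result proves independence; True only means
    the test could not rule a dependence out.
    """
    return (d - b) % gcd(a, c) == 0
```

For instance, A[2*i] and A[2*i + 1] can never conflict (gcd 2 does not
divide 1), whereas for A[4*i] and A[2*i + 2] the test cannot rule a
dependence out.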