Re: Recognizing complicated patterns

Newsgroups: comp.compilers
From: pd@complex.Eng.Sun.COM (Peter C. Damron)
Summary: long reply
Keywords: code, optimize
Organization: Sun Microsystems, Mt. View, Ca.
References: <1990Aug01.130605.10204@esegue.segue.boston.ma.us>
Date: Wed, 15 Aug 90 00:52:39 GMT

I have been following with interest the postings on recognition of
idioms/complex instructions. I have included a brief summary of the
relevant discussion at the end of this posting.


I have been working on a dissertation with Robert Henry at the University of
Washington. The tentative title is "Code Generation for Complex
Instructions using Graph Parsing".


I can't really summarize the whole dissertation here, but let me try to
cover a few salient points.


Goal
----


Be able to recognize instances of "complex instructions" embedded in the
intermediate representation of a source program, where complex
instructions are things like "block copy", "string search", "polynomial
evaluation", etc.
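

For example, the following C loop is an instance of "block copy"; the goal
is for the code generator to recognize the whole loop and emit one
block-move instruction (the VAX MOVC3, say) when the target has one:

    /* An instance of the "block copy" idiom.  A recognizer should map
       this entire loop to a single block-move instruction (e.g. the
       VAX MOVC3) when the target has one, instead of emitting the
       loop itself. */
    void block_copy(char *dst, const char *src, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            dst[i] = src[i];
    }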


Method
------


Use the program dependence graph (PDG) as the intermediate representation,
thus bringing related computations into connected sub-graphs.


Use a graph grammar to encode the computations of the complex instructions
into graph patterns.


Use graph parsing (or graph pattern matching) to find instances of pattern
graphs (complex instructions) in a given subject PDG.
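

To give the flavor of the matching problem (this is only a brute-force
sketch, not the grammar-driven parser from the dissertation), consider
finding an injective map from pattern nodes to subject nodes that
preserves opcodes and dependence edges:

    /* Naive backtracking sub-graph matcher for a dependence graph:
       find an injective map from pattern nodes to subject nodes that
       preserves opcodes and dependence edges.  Only a sketch of the
       matching problem -- the dissertation's parser is grammar-driven,
       not a brute-force search, and the MAXN limit and adjacency
       matrix are simplifications for illustration. */

    #define MAXN 32

    struct graph {
        int n;                 /* number of nodes                  */
        int op[MAXN];          /* opcode of each node              */
        int edge[MAXN][MAXN];  /* edge[i][j] != 0: j depends on i  */
    };

    static int map[MAXN];      /* map[k]: subject node for pattern node k */

    static int extend(const struct graph *pat, const struct graph *sub, int k)
    {
        int cand, i;

        if (k == pat->n)
            return 1;                      /* all pattern nodes placed */
        for (cand = 0; cand < sub->n; cand++) {
            if (sub->op[cand] != pat->op[k])
                continue;
            for (i = 0; i < k; i++)        /* injective, edges preserved */
                if (map[i] == cand
                    || (pat->edge[i][k] && !sub->edge[map[i]][cand])
                    || (pat->edge[k][i] && !sub->edge[cand][map[i]]))
                    break;
            if (i < k)
                continue;
            map[k] = cand;
            if (extend(pat, sub, k + 1))
                return 1;
        }
        return 0;
    }

    /* graph_match(pat, sub) != 0 iff an instance of pat occurs in sub */
    static int graph_match(const struct graph *pat, const struct graph *sub)
    {
        return extend(pat, sub, 0);
    }

The exponential worst case of this kind of search foreshadows the timing
remarks in section 3 below.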




Results and Comments
--------------------


1. Code Generation
------------------


It is possible to devise techniques to recognize instances of "complex
instructions" and generate code for them.


Historically, there have been a number of approaches to this problem. They
can be roughly categorized as:


A. Assembly language code
B. Library routines (that get in-lined)
C. Peephole or other special purpose optimizations
D. Code generators


Probably the most successful technique is to use library routines, and
in-line them when appropriate. Peephole optimization techniques tend to be
non-portable and difficult to develop or extend.


Code generators have so far been limited to recognizing tree patterns (and
DAG patterns to some degree). The tree patterns may be used to "encode" or
describe arbitrarily large expressions (as noted by chased@Eng.Sun.COM
(David Chase)). However, tree pattern matchers typically operate on ordered
(as in left, then right) binary trees.


One problem with this type of representation is the arbitrary order imposed
on unordered operations (e.g. sequences of statements that are not dependent
on each other). Perhaps tree pattern matchers can be extended to handle
this problem.
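

To make the ordering problem concrete, here is a minimal ordered tree
matcher (a sketch of the general idea only, tied to no particular tool;
production matchers precompute match states bottom-up rather than
re-walking the pattern at every node):

    /* Minimal ordered tree pattern matching.  WILD matches any
       subtree.  Because the walk is ordered (left against left,
       right against right), a commutative pattern such as a*b + c
       must be listed in both operand orders -- this is the
       "arbitrary order" problem mentioned above. */

    #include <stddef.h>

    enum op { ADD, MUL, CONST, REG, WILD };

    struct tnode {
        enum op op;
        struct tnode *left, *right;
    };

    static int tree_match(const struct tnode *p, const struct tnode *t)
    {
        if (t == NULL)
            return 0;
        if (p->op == WILD)
            return 1;
        if (p->op != t->op)
            return 0;
        return (p->left  == NULL || tree_match(p->left,  t->left))
            && (p->right == NULL || tree_match(p->right, t->right));
    }

Handling truly unordered children would mean trying the children in every
permutation, or canonicalizing operand order first, which is where the
expense starts to creep in.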


Another problem involves recognizing control flow (or dependence) constructs
(as noted by johnson@cs.uiuc.edu (Ralph Johnson)). It does seem that
structured control flow would be encodable into a tree form, but loops may
still be a problem because of the associated data flow (or dependence).
This causes the program representation to form a more general directed
graph.


It would be interesting to study further whether tree (or dag) based pattern
matchers could be used to recognize instances of complex instructions in
structured programs.


I considered the more general problem of recognizing complex instructions in
the presence of arbitrary control and data flow (really control and data
dependence).


The most relevant references that I know of are:


%T Analyzing Exotic Instructions for a Retargetable Code Generator
%A Thomas M. Morgan
%A Lawrence A. Rowe
%P 197-204
%J CCC82 (Sigplan Compiler Construction Conf.)
%D 1982
%X This work develops techniques for transforming high-level language
constructs into machine language constructs. Emphasis is on determining
equivalence. Not really suitable for code generation. Summary of Morgan's
dissertation work.


%T Analysis Techniques for Exotic Machine Instructions
and Their Use in Retargetable Compilers
%A T. M. Morgan
%R PhD Dissertation
%I UCBCS
%D JUN 1982


%A Lawrence Snyder
%K larry
%T Recognition and Selection of Idioms for Code Optimization
%J ACTA
%V 17
%P 327-348
%D 1982
%X This is basically a tree pattern matching algorithm for APL that uses
dynamic programming to select patterns/idioms. A precursor to later tree
pattern matching work.




2. Use of the PDG
-----------------


It is possible to use the program dependence graph as an intermediate
representation, and generate code directly. Whether it is possible to
optimize on the PDG is another question to which I think the answer is yes.
After all, after flow analysis, all types of intermediate representations
end up being dependence graphs. The only thing special about the PDG is
that the dependence edges are more explicit, and control dependence is used
instead of control flow.
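

In representation terms the difference is small: a PDG node just carries
both kinds of dependence as explicit edge lists, something like this
hypothetical layout (not the actual structures from the front-ends
mentioned below):

    /* Hypothetical PDG node layout -- not the actual structures from
       the front-ends acknowledged below.  The point is only that data
       and control dependence are both explicit edge lists. */
    struct pdg_node {
        int opcode;
        struct pdg_node **data_deps;   /* data-dependence predecessors    */
        int n_data;
        struct pdg_node **ctrl_deps;   /* control-dependence predecessors */
        int n_ctrl;
    };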


I would like to thank Karl Ottenstein from Los Alamos National Lab, and Tom
Reps from Univ. of Wisconsin for providing front-ends to produce PDG's.




3. Graph Parsing and Graph Pattern Matching
-------------------------------------------


It is possible to describe graphs as complex as PDG's using a new type of
graph grammar, and it is possible to parse PDG's based on graph grammars of
this type.


However, graph parsing is pretty slow (worst case exponential for PDG
graphs; in practice it looks like about n**2 as far out as I've tried).
Also, graph parsers are quite difficult to describe, though the basic
concepts of the construction are quite simple.


Overall, this is not likely to be a useful technique for production
compilers.


Hope I wasn't too long-winded,
Peter.


----------------------------
Peter C. Damron
Sun Microsystems, Inc.
Advanced Languages, MS 12-40
2550 Garcia Ave.
Mtn. View, CA 94043


pdamron@eng.sun.com


----------------------------


Well, let me see if I can summarize the discussion so far.


It all started out with some discussion about implementing special functions
efficiently. I believe that cik@l.cc.purdue.edu (Herman Rubin) was one of
the ones who suggested that nint() was a candidate for a function best
written in assembler.




Then ok@mudla.cs.mu.OZ (Richard O'Keefe) suggested the code:
> double nint(double x) {
> return ceil(x + 0.5) - (fmod(x*0.5 + 0.25, 1.0) != 0);
> }
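

As an aside, O'Keefe's expression rounds halfway cases to the even
neighbor, which a quick test confirms:

    /* Quick check of the rounding behavior of O'Keefe's nint():
       halfway cases go to the even neighbor. */
    #include <stdio.h>
    #include <math.h>

    double nint(double x) {
        return ceil(x + 0.5) - (fmod(x * 0.5 + 0.25, 1.0) != 0);
    }

    int main(void)
    {
        /* prints: 0 2 2 0 */
        printf("%g %g %g %g\n", nint(0.5), nint(1.5), nint(2.5), nint(-0.5));
        return 0;
    }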




Then, cik@l.cc.purdue.edu (Herman Rubin) comments:
> I do not see how a compiler can recognize that the complicated body of code
> can be replaced by a single instruction and make that substitution. One
> could put that one in, but another will arise, and another. And these will
> continue to crop up as long as the language primitives are restricted as
> they now are.




Now, chased@Eng.Sun.COM (David Chase) enters the picture with:
> Sounds to me like another case of someone who hasn't done their homework.
> Besides peephole optimization (mentioned by Preston Briggs), there's also
> Graham-Glanville style code-generation, which has been adapted in recent
> years to take advantage of improvements in tree-pattern matching...
>
> To make a long story short, you just write down the patterns, and ...
> the tree-pattern-matcher generator spits out tables for you.
> Find a new pattern, just add it to the list of patterns with a cost and code
> generation rule, and you're set. There's a bit of additional work required
> to canonicalize the input patterns (think of all the horrible ways that
>
> ceil(x + 0.5) - (fmod(x*0.5 + 0.25, 1.0) != 0)
>
> could be permuted), but similar transformations are already done to assist in
> the recognition of loop-invariant and inductive computations (for example).
> People have proposed optimizing APL in a similar fashion ...
>
> Do people build code generators like this today? Yes. Write to Robert
> Henry, rrh@cs.washington.edu, and ask for UWCODEGEN.
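

Chase's "just write down the patterns" step might look roughly like the
table below, reusing the tree-node sketch from section 1 above. The rule
format here is invented for illustration; it is not UWCODEGEN's actual
input language:

    /* A made-up rule table, just to illustrate "patterns with a cost
       and a code generation rule" -- NOT UWCODEGEN's input format.
       Patterns reuse the tnode sketch from section 1. */
    struct rule {
        const struct tnode *pattern;  /* tree shape to recognize      */
        int cost;                     /* relative cost of this choice */
        const char *tmpl;             /* instruction template to emit */
    };

    static struct tnode w1 = { WILD, NULL, NULL };
    static struct tnode w2 = { WILD, NULL, NULL };
    static struct tnode w3 = { WILD, NULL, NULL };
    static struct tnode mul  = { MUL, &w1, &w2 };
    static struct tnode madd = { ADD, &mul, &w3 };   /* a*b + c */

    static struct rule rules[] = {
        { &madd, 1, "madd r,a,b,c" },   /* if the target has multiply-add */
        { &mul,  1, "mul  r,a,b"   },
        /* the commuted form ADD(w3, MUL(w1,w2)) must be listed too,
           since the matching is ordered */
    };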




Then, johnson@cs.uiuc.edu (Ralph Johnson) points out:
> David Chase argued that it was already known how to find instructions that
> matched complicated patterns in the original program. I am familiar with
> about half of the papers that he mentioned, so perhaps I am missing
> something, but it was my understanding that these techniques were not very
> good for matching instructions with loops in them, like block moves.
> Further, the incompleteness of arithmetic says that it may not be possible to
> canonicalize all input patterns. The tree matching techniques are very good
> at finding complex addressing modes, but they are not the end-all and be-all
> of code generation.
>
> By the way, the GNU compilers use a pattern matching code generator. Also, I
> agree that those who are interested in this area should take a look at Robert
> Henry's UWCODEGEN.




Also, moss@cs.umass.edu (J. Eliot B. Moss) mentions:
> ... I seem to recall that someone did a PhD thesis in the last ten years
> that indeed could match patterns for rather complex instructions, though I
> am not sure about loops (I do think it tried for block moves and such, so
> maybe loops are included?). My recollection comes up with Rick Cattell's
> thesis...




Also of note is a comment by the moderator
johnl@esegue.segue.boston.ma.us (John R. Levine)
in response to a posting with a nice review of intermediate representations
by preston@rice.edu (Preston Briggs).
> [Anybody tried program dependency graphs, per Ferrante et al. at IBM. From
> the articles I've seen, they look pretty neat. -John]