Re: Any further work on superoptimizer?

hrubin@pop.stat.purdue.edu (Herman Rubin)
2 Feb 92 16:26:08 GMT

          From comp.compilers

Related articles
Any further work on superoptimizer? mark@hubcap.clemson.edu (1992-01-22)
Re: Any further work on superoptimizer? chased@rbbb.Eng.Sun.COM (1992-01-23)
Re: Any further work on superoptimizer? megatest!djones@decwrl.dec.com (1992-01-29)
Re: Any further work on superoptimizer? hrubin@pop.stat.purdue.edu (1992-02-02)
Re: Any further work on superoptimizer? rmf@chopin.cs.columbia.edu (1992-02-03)
Re: Any further work on superoptimizer? spot@CS.CMU.EDU (1992-02-03)
Re: Any further work on superoptimizer? mfx@cs.tu-berlin.de (1992-02-04)
Re: Any further work on superoptimizer? array!colin (1992-02-09)
Re: Any further work on superoptimizer? megatest!djones@decwrl.dec.com (1992-02-26)

Newsgroups: comp.compilers
From: hrubin@pop.stat.purdue.edu (Herman Rubin)
Summary: Can it even be done?
Keywords: optimize
Organization: Purdue University Statistics Department
References: 92-01-087 92-01-117
Date: 2 Feb 92 16:26:08 GMT

In article 92-01-117 Dave Jones writes:
>In article 92-01-087 Mark Smotherman writes:
>>Henry Massalin published a paper on a superoptimizer in ASPLOS II, 1987
>>[1]. One important aspect of this work was the generation of *branchless*
>>run-time library routines for simple functions (e.g., absolute value, max)
>>that could be in-line substituted. Thus the overhead of procedure call
>>and the problem of branch stalls / delay slot scheduling were avoided.
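
(Illustration, not taken from the paper: the branchless sequences being
described are in the style of the following C sketch, which assumes a
32-bit two's-complement machine with arithmetic right shift of signed
values; it shows the kind of sequence involved, not the exact output of
Massalin's tool.)

    #include <stdint.h>
    #include <stdio.h>

    /* Branchless absolute value: mask is 0 when x >= 0 and all ones when
       x < 0.  Assumes arithmetic right shift of signed values; the result
       is undefined for the most negative 32-bit integer. */
    static int32_t iabs(int32_t x)
    {
        int32_t mask = x >> 31;
        return (x ^ mask) - mask;
    }

    /* Branchless maximum; the subtraction can overflow when the operands
       have opposite signs and extreme magnitudes. */
    static int32_t imax(int32_t a, int32_t b)
    {
        int32_t diff = a - b;
        int32_t mask = diff >> 31;        /* all ones exactly when a < b */
        return a - (diff & mask);         /* a when a >= b, otherwise b  */
    }

    int main(void)
    {
        printf("%d %d %d\n", iabs(-7), iabs(7), imax(3, 9));   /* prints: 7 7 9 */
        return 0;
    }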


>I don't remember if it was the same paper, but at about that time I read
>an article about a "superoptimizer" that supposedly tried every possible
>code sequence shorter than the original, and with a semantic analyzer
>decided which, if any, effectively did the same thing. It would often come
>up with surprising sequences using the XOR trick and other obscurities.
>However, using no computer help, I coded up sequences that beat two of his
>"superoptimized" ones. I sent the code-fragments to the author, but never
>received a reply.
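
(Illustration, not taken from either post: the search-and-check idea in
miniature. Enumerate every operation sequence up to a small length over a
toy operation set, keep any sequence that agrees with a branchy
specification on a handful of test inputs, and report the shortest
survivor. A real superoptimizer would verify survivors far more carefully;
the operation set, names, and specification below are invented.)

    #include <stdint.h>
    #include <stdio.h>

    /* Toy operation set acting on one accumulator, with the original
       input x always available as a second operand. */
    enum op { NEG, NOT, SHR31, ADDX, XORX, N_OPS };
    static const char *op_name[] = { "neg", "not", "shr31", "addx", "xorx" };

    static int32_t apply(enum op o, int32_t acc, int32_t x)
    {
        switch (o) {
        case NEG:   return -acc;
        case NOT:   return ~acc;
        case SHR31: return acc >> 31;     /* arithmetic shift assumed */
        case ADDX:  return acc + x;
        case XORX:  return acc ^ x;
        default:    return acc;
        }
    }

    /* Branchy specification to be matched: -1 if x is negative, else 0. */
    static int32_t spec(int32_t x) { return x < 0 ? -1 : 0; }

    int main(void)
    {
        static const int32_t tests[] = { -123456, -7, -1, 0, 1, 42 };
        const int ntests = sizeof tests / sizeof tests[0];
        enum op seq[3];

        /* Try lengths in increasing order, so the first hit is shortest. */
        for (int len = 1; len <= 3; len++) {
            long total = 1;
            for (int i = 0; i < len; i++) total *= N_OPS;

            for (long code = 0; code < total; code++) {
                long c = code;
                for (int i = 0; i < len; i++) {
                    seq[i] = (enum op)(c % N_OPS);
                    c /= N_OPS;
                }

                int ok = 1;
                for (int t = 0; t < ntests && ok; t++) {
                    int32_t acc = tests[t];
                    for (int i = 0; i < len; i++)
                        acc = apply(seq[i], acc, tests[t]);
                    if (acc != spec(tests[t]))
                        ok = 0;
                }
                if (ok) {
                    printf("shortest match:");
                    for (int i = 0; i < len; i++)
                        printf(" %s", op_name[seq[i]]);
                    printf("\n");          /* prints: shortest match: shr31 */
                    return 0;
                }
            }
        }
        printf("nothing found up to length 3\n");
        return 0;
    }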


If one has a program of even moderate size, the number of possible branch
arrangements will undoubtedly exceed the number of particles in the
universe, and even with the fastest computing procedures consistent with
current physics (NOT merely engineering), the exhaustive analysis could
never be completed.
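
(A back-of-envelope count, with illustrative figures that are not from the
article: if an instruction word has 2^32 possible encodings, a sequence of
only ten instructions already admits (2^32)^10 = 2^320, roughly 10^96,
candidate encodings, far beyond the commonly cited estimate of about 10^80
atoms in the observable universe.)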


But there is a much greater problem. It is not only necessary to consider
rearrangements of code, but even totally different algorithms. I do not
see how a superoptimizer just looking at the code could replace the usual
multiplication scheme with any of the faster alternatives. Those faster
alternatives are at least somewhat elementary; but consider the use of
elliptic functions to obtain high-accuracy trigonometric and exponential
functions. And even
worse are the situations in generating random numbers, where different
methods, all correct, yield different answers.
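
(Illustration, not taken from the article: the canonical example of such an
algorithm-level replacement is Karatsuba multiplication, which forms a
double-width product from three half-width multiplies instead of the
schoolbook four; no rearrangement of the schoolbook instruction sequence
will stumble onto it. A C sketch on 16-bit halves:)

    #include <stdint.h>
    #include <stdio.h>

    /* Karatsuba on 16-bit halves: a 32x32-bit product from three
       16x16-bit multiplies instead of the schoolbook four. */
    static uint64_t mul32_karatsuba(uint32_t x, uint32_t y)
    {
        uint32_t x1 = x >> 16, x0 = x & 0xFFFFu;
        uint32_t y1 = y >> 16, y0 = y & 0xFFFFu;

        uint64_t hi  = (uint64_t)x1 * y1;                         /* high halves   */
        uint64_t lo  = (uint64_t)x0 * y0;                         /* low halves    */
        uint64_t mid = (uint64_t)(x1 + x0) * (y1 + y0) - hi - lo; /* x1*y0 + x0*y1 */

        return (hi << 32) + (mid << 16) + lo;
    }

    int main(void)
    {
        uint32_t x = 0xDEADBEEFu, y = 0x12345678u;
        printf("%d\n", mul32_karatsuba(x, y) == (uint64_t)x * y);   /* prints 1 */
        return 0;
    }
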
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907-1399
Phone: (317)494-6054
hrubin@pop.stat.purdue.edu (Internet, bitnet)
{purdue,pur-ee}!pop.stat!hrubin (UUCP)
[I believe that the superoptimizer analyzed short sections of straight
line integer code, which are certainly a lot more tractable than big
chunks of branching floating point code. I don't think they found any new
ways to identify the next one bit in a bit string, though. -John]