Interpreter Optimization

"Steven D. Majewski" <sdm7g@aemsun.med.Virginia.EDU>
Tue, 28 Apr 1992 19:27:18 GMT

          From comp.compilers

Related articles
Interpreter Optimization sdm7g@aemsun.med.Virginia.EDU (Steven D. Majewski) (1992-04-28)
Re: Interpreter Optimization moss@cs.umass.edu (1992-04-29)
Re: Interpreter Optimization pardo@cs.washington.edu (1992-04-30)
| List of all articles for this month |

Newsgroups: comp.compilers
From: "Steven D. Majewski" <sdm7g@aemsun.med.Virginia.EDU>
Keywords: interpreter
Organization: Compilers Central
Date: Tue, 28 Apr 1992 19:27:18 GMT

graham@maths.su.oz.au (Graham Matthews) writes:


>Recently there has been a fair bit of discussion in comp.compilers about
>compiler register allocation for optimization. As a developer of an
>interpreted language I would like to move the discussion toward similar
>optimization tricks that can be done in interpreted environments.
> ... I am interested in optimization tricks in such an environment, e.g.:
>
> a) ways to reduce the number of pushes and pops
> b) notions of 'registers' to bypass the stack operations
> c) methods of reducing the overhead of function calling.
>
>There does not seem to be much literature on optimizing interpreted
>languages.


    Eddy H. Debaere and Jan M. Van Campenhout,
    "Interpretation and Instruction Path Coprocessing",
    MIT Press, Cambridge, Mass., 1990 (Computer Systems series)


has some examples of optimization tricks.


I think I recall one of them being the use of registers to hold the first
n stack items. But mostly, their examples are there to illustrate how hard
it is (they define variant instructions for each stack/register alignment,
i.e. for TOS = r1, TOS = r2, ..., so the more registers you have, the more
variants you need to handle!). Their suggested solution is a hardware fix:
an instruction path coprocessor that 'linearizes' and flattens the
instruction stream. (I *tried* to get a discussion of this started in
comp.arch, but no takers!)
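
For concreteness, here is roughly what keeping the top of stack in a
register looks like; a minimal sketch in C (the opcodes and the one-deep
cache are my own invention for illustration, not Debaere's):

    #include <stdio.h>

    /* Hypothetical bytecodes for a tiny stack VM. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    /* The top of stack lives in a local variable that the compiler can
     * keep in a register; sp points past the last spilled item. With a
     * one-deep cache each handler has one fixed state; every extra
     * cached slot multiplies the handler variants, which is exactly the
     * combinatorial blowup described above. */
    long run(const long *code)
    {
        long stack[256], *sp = stack;
        long tos = 0;                      /* cached top of stack */

        for (;;) {
            switch (*code++) {
            case OP_PUSH:  *sp++ = tos; tos = *code++; break;
            case OP_ADD:   tos += *--sp;               break;
            case OP_PRINT: printf("%ld\n", tos);       break;
            case OP_HALT:  return tos;
            }
        }
    }

    int main(void)
    {
        long prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        return (int)run(prog);    /* prints 5 */
    }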


>Outside these sorts of small optimizations there is also the possibility
>in an interpreted environment for the code to optimize itself as it runs -
>for example, we employ dynamic typing but remember type bindings in loops
>(the code therefore self-optimizes at run-time). Does anyone have any
>experience with optimization as you run?
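
That "remember type bindings" idea is essentially an inline cache: the
first trip through the loop dispatches on the operand types and records
the winner, and later iterations call it directly. A minimal sketch in C
(the Value representation and the names are mine, for illustration only):

    #include <stddef.h>

    typedef enum { T_INT, T_FLOAT } Tag;
    typedef struct { Tag tag; union { long i; double f; } u; } Value;
    typedef Value (*AddFn)(Value, Value);

    static Value add_int(Value a, Value b)
    { Value r = { T_INT, { .i = a.u.i + b.u.i } }; return r; }
    static Value add_float(Value a, Value b)
    { Value r = { T_FLOAT, { .f = a.u.f + b.u.f } }; return r; }

    /* Slow path: dispatch on the tags, then cache the result. */
    static Value add_generic(Value a, Value b, AddFn *cache)
    {
        AddFn fn = (a.tag == T_INT && b.tag == T_INT) ? add_int : add_float;
        *cache = fn;            /* remember the type binding */
        return fn(a, b);
    }

    /* Assumes a monomorphic loop; a real interpreter would also guard
     * the cached tags and handle mixed int/float operands. */
    Value sum(const Value *xs, size_t n, Value acc)
    {
        AddFn cache = NULL;
        for (size_t i = 0; i < n; i++)
            acc = cache ? cache(acc, xs[i])
                        : add_generic(acc, xs[i], &cache);
        return acc;
    }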




I have since been reading David Keppel's paper "A Case for Runtime Code
Generation", and I'm starting on the Self papers. Other than a new
hardware architecture optimized for interpreted languages (like Debaere's
IPCs, perhaps), I think this sort of thing may be the best approach:
    inline and flatten the inner loops dynamically, and
    compile the (virtual) stack accesses into (real) register accesses
    (sketched below).
The HARD part (I think) is the analyzer that decides when all this is
worth the trouble.
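
The second step, at least, is mechanical once you simulate the stack at
translation time: the slot at depth d simply becomes register d, and the
pushes and pops disappear. A toy sketch in C (the opcodes are made up,
and it emits text rather than real machine code):

    #include <stdio.h>

    enum { S_PUSH, S_ADD, S_MUL, S_HALT };

    /* Translate stack bytecode into three-address "register" code by
     * tracking the simulated stack depth: depth d maps to register rd. */
    void translate(const int *code)
    {
        int depth = 0;
        for (;;) {
            switch (*code++) {
            case S_PUSH:
                printf("r%d = %d\n", depth, *code++);
                depth++;
                break;
            case S_ADD:
                depth--;
                printf("r%d = r%d + r%d\n", depth - 1, depth - 1, depth);
                break;
            case S_MUL:
                depth--;
                printf("r%d = r%d * r%d\n", depth - 1, depth - 1, depth);
                break;
            case S_HALT:
                printf("return r0\n");
                return;
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4 */
        int prog[] = { S_PUSH, 2, S_PUSH, 3, S_ADD, S_PUSH, 4, S_MUL,
                       S_HALT };
        translate(prog);
        return 0;
    }

A real system would emit machine code into a buffer and branch to it,
which is where Keppel's runtime code generation comes in.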


I would also suggest taking a look at Simon L. Peyton Jones's paper on the
Spineless Tagless G-machine, a virtual machine for running Haskell, for a
good perspective on how a really good virtual machine model can help
produce better code.
--
  Steven D. Majewski             University of Virginia Physiology Dept.
  sdm7g@Virginia.EDU             Box 449 Health Sciences Center
  Voice: (804)-982-0831/32       1300 Jefferson Park Avenue
  FAX: (804)-982-1616            Charlottesville, VA 22908