Re: Compiler or interpreter?



From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Newsgroups: comp.compilers
Date: Fri, 18 Jun 2010 18:53:49 +0000 (UTC)
Organization: Aioe.org NNTP Server
References: 10-06-032 10-06-038 10-06-045
Keywords: interpreter, design
Posted-Date: 19 Jun 2010 10:45:10 EDT

BGB / cr88192 <cr88192@hotmail.com> wrote:
(snip, I previously quoted)


>> "Ertl's most recent tests show that direct threading is the
>> fastest threading model on Xeon, Opteron, and Athlon processors;
>> indirect threading is the fastest threading model on Pentium M
>> processors; and subroutine threading is the fastest threading
>> model on Pentium 4, Pentium III, and PPC processors."
(snip)
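
To make the terms concrete: a direct-threaded interpreter stores
handler addresses in the code stream and dispatches with an
indirect jump at the end of every handler, instead of looping back
through a central switch. A toy version, using GCC's
labels-as-values extension, might look like the following; the
sketch and its tiny "program" are mine, not Ertl's:

    /* Direct threading with GCC's computed goto ("labels as
       values", a GNU extension; this is not portable C).  The
       program is an array of handler addresses with the
       operands stored inline. */
    #include <stdio.h>

    int main(void)
    {
        void *prog[5];
        void **ip = prog;     /* the virtual instruction pointer */
        long acc = 0;

        prog[0] = &&do_load;  prog[1] = (void *)2L;
        prog[2] = &&do_add;   prog[3] = (void *)40L;
        prog[4] = &&do_halt;

        goto **ip++;          /* dispatch the first instruction */

    do_load: acc  = (long)*ip++;  goto **ip++;
    do_add:  acc += (long)*ip++;  goto **ip++;
    do_halt: printf("%ld\n", acc);    /* prints 42 */
        return 0;
    }

Each handler ends with its own indirect jump rather than sharing
one central dispatch branch, which is part of why the relative
speeds vary so much across processors.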


>> Fortran READ and WRITE statements, the ** operator, and complex
>> division, are often implemented as subroutine calls, though with no
>> explicit call syntax.


Note that an interesting side effect of doing Fortran I/O through
library calls is the restriction on so-called recursive I/O.
C programmers have a hard time imagining what recursive I/O would
even be, but it comes up in Fortran if you do something like:


            WRITE(6,*) A,F(X),B


where function F also does I/O. A common implementation of an I/O
statement is a subroutine call to start the operation, one call
for each list element (or possibly for each I/O data item), and
one more to terminate the operation. With that scheme, the library
routine can get it wrong if a function tries to do I/O in the
middle of another I/O operation.
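
In outline, and with invented names for the library entry points,
the generated code behaves something like this C sketch, where a
single global flag stands in for all the state the real library
keeps for the current record:

    #include <stdio.h>

    /* Hypothetical, simplified I/O runtime; the names are mine. */
    static int io_active = 0;   /* is a WRITE in progress? */

    static void io_begin(int unit)
    {
        if (io_active)          /* re-entered mid-operation */
            fprintf(stderr, "recursive I/O on unit %d\n", unit);
        io_active = 1;
        printf("unit %d: ", unit);
    }
    static void io_item(double v) { printf("%g ", v); }
    static void io_end(void)      { printf("\n"); io_active = 0; }

    /* F does its own WRITE, re-entering the library in the
       middle of the outer WRITE and clobbering its state. */
    static double f(double x)
    {
        io_begin(6); io_item(x); io_end();
        return 2.0 * x;
    }

    int main(void)
    {
        double a = 1.0, b = 3.0, x = 5.0;

        /* WRITE(6,*) A, F(X), B  expands to roughly: */
        io_begin(6);
        io_item(a);
        io_item(f(x));   /* the inner WRITE splits the record */
        io_item(b);
        io_end();
        return 0;
    }

The inner WRITE ends the record the outer one had started, which
is just the sort of mess the restriction exists to prevent.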


(snip)
> some of my early-on compilers produced this sort of "threaded code".
> at the time, I had not thought much of it...


> later on I ended up writing an interpreter, and using something vaguely
> similar (lots of function pointers placed into "opcode" structures) to
> essentially break past a prior performance bottleneck (what I had called the
> "switch limit", whereby nearly the entire running time in the interpreter
> ends up going into massive switch tables).


> oddly, a linked list of structs each containing a function pointer (and
> typically pre-decoded arguments), can be somewhat faster than the "read an
> opcode word, dispatch via switch, read-in args, execute" strategy.


Note the quote above: the different dispatch implementations can
be faster or slower, even on different versions of the same
architecture. Among other things, the branch prediction logic
may be sensitive to the differences.
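
For what it's worth, a minimal sketch of that pre-decoded
dispatch, with names of my own invention, might look like this;
execution is a walk down the list, one indirect call per
instruction, with no opcode fetch and no switch:

    #include <stdio.h>

    struct insn;
    typedef void (*handler)(struct insn *);

    struct insn {
        handler      fn;     /* what to do */
        int          arg;    /* pre-decoded operand */
        struct insn *next;   /* next instruction in the list */
    };

    static int acc;          /* a one-register "machine" */

    static void op_load(struct insn *i)  { acc  = i->arg; }
    static void op_add(struct insn *i)   { acc += i->arg; }
    static void op_print(struct insn *i) { (void)i; printf("%d\n", acc); }

    static void run(struct insn *i)
    {
        for (; i; i = i->next)
            i->fn(i);        /* one indirect call, no decoding */
    }

    int main(void)
    {
        /* pre-decoded form of:  LOAD 2; ADD 40; PRINT */
        struct insn p2 = { op_print, 0, NULL };
        struct insn p1 = { op_add,  40, &p2  };
        struct insn p0 = { op_load,  2, &p1  };
        run(&p0);            /* prints 42 */
        return 0;
    }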


> the result is that an interpreter which pre-decodes the bytecode into such a
> linked-list structure can very possibly run somewhat faster than one which
> does not (although I have not benchmarked these strategies on their own, so
> I am not certain if this is "necessarily" the case, or just the case in the
> cases I tested).


> now, as for me, I see it that there are a number of levels
> between compilers and interpreters:
(snip)


> the reason I say "almost entirely" above is because, as noted,
> there are very few compilers which don't generate (at least some)
> hidden API calls in some cases.


Many C compilers on smaller systems make API calls for every
floating-point operation. In the MS-DOS days, when the x87
coprocessor wasn't so common, it wasn't unusual to see
self-modifying code: the API call would detect that the math
processor was present and then patch over the call with the
appropriate x87 opcode.
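
In spirit, though not in mechanism, the trick is like the
following C sketch, which swaps a function pointer on first use
instead of patching the instruction itself; the probe and all of
the names are made up:

    #include <stdio.h>

    static int have_fpu(void) { return 1; }   /* made-up probe */

    static double add_first(double a, double b);
    static double add_hw(double a, double b) { return a + b; }

    /* stand-in for a software floating-point routine */
    static double add_soft(double a, double b) { return a + b; }

    /* every FP add in the program goes through this pointer */
    static double (*fp_add)(double, double) = add_first;

    static double add_first(double a, double b)
    {
        /* probe once, then "patch" the dispatch so later
           calls skip the check entirely */
        fp_add = have_fpu() ? add_hw : add_soft;
        return fp_add(a, b);
    }

    int main(void)
    {
        printf("%g\n", fp_add(1.5, 2.25));  /* probes, patches */
        printf("%g\n", fp_add(1.0, 2.0));   /* straight to add_hw */
        return 0;
    }

The real MS-DOS trick went one step further and overwrote the
call itself, so even the pointer indirection disappeared.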


> something loosely similar, but typically used within the main codegen,
> is to preserve all registers, but pass/return values on the stack


This reminds me of the API call commonly used under MS-DOS to
execute the appropriate DOS INT (interrupt) call. The interrupt
number is the second byte of the INT instruction. While one could
implement a table full of INT instructions, it is commonly done
by generating the instruction in memory and then executing it,
which is pretty much self-modifying code.
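
In C, the generated stub amounts to something like this; real
mode only, since a modern protected-mode OS will not let you
execute a data buffer, and ISO C does not really bless the cast
from a data pointer to a function pointer:

    /* Build "INT n" (0xCD, then the interrupt number) followed
       by RET (0xC3) in a buffer, and jump to it. */
    static unsigned char stub[3];

    void dos_int(unsigned char intno)
    {
        stub[0] = 0xCD;    /* INT opcode */
        stub[1] = intno;   /* the interrupt number */
        stub[2] = 0xC3;    /* RET, back to the caller */
        ((void (*)(void))(void *)stub)();
    }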


-- glen


