Funny? email@example.com (JUKKA) (1997-04-13)
Re: Funny? danwang@atomic.CS.Princeton.EDU (1997-04-16)
Re: Funny? pfoxSPAMOFF@lehman.com (Paul David Fox) (1997-04-16)
Re: Funny? WStreett@shell.monmouth.com.spamguard (1997-04-18)
Re: Funny? firstname.lastname@example.org (William D Clinger) (1997-04-18)
Re: Funny? email@example.com (1997-04-18)

From: William D Clinger <firstname.lastname@example.org>
Date: 18 Apr 1997 01:10:39 -0400
I do not know what is going on here with Visual C++, but I do know
that a similar thing was true of both the UCSD P-system (for the Sage
II) and MacScheme (an implementation of Scheme for the Macintosh).
I wrote most of MacScheme, so I can tell you precisely what was going
on in that system.
The MacScheme compiler generated either interpreted byte code (at
optimization levels 0 or 1) or native machine code (at levels 2, 3, or
4). Whenever native code was executing, certain values were cached in
hardware registers. The i/o primitives were written in a different
language, so those cached values had to be written to memory and then
restored around every i/o operation. This involved some changes of
representation, so it was slower than it sounds.
The i/o primitives were written in a language whose calling
conventions were close to those used by the byte code interpreter, so
there was less overhead when those primitives were called from
interpreted byte code. The byte code interpreter was really quite
fast, so this advantage was more than enough to offset the speed of
native code for some i/o-intensive programs.
Even if Visual C++ always compiles to native code, the compiler
might take optimizing for speed to mean keeping temporaries and
variables in registers instead of on the stack. That might make
procedure calls, including i/o operations, slower. Turning on the
debug option might force the compiler to keep everything on the stack,
making procedure calls faster. But this is just a guess.