Re: ASSEMBLY vs C(++)


          From comp.compilers

Related articles
ASSEMBLY vs C(++) nop05288@mail.telepac.pt (W!cked) (1997-05-04)
Re: ASSEMBLY vs C(++) tgl@netcom.com (1997-05-08)
Re: ASSEMBLY vs C(++) David.Monniaux@ens-lyon.fr (1997-05-08)
Re: ASSEMBLY vs C(++) mac@coos.dartmouth.edu (1997-05-08)

From: tgl@netcom.com (Tom Lane)
Newsgroups: comp.compilers
Date: 8 May 1997 21:11:16 -0400
Organization: Netcom Online Communications Services
References: 97-05-047
Keywords: assembler, performance

Our esteemed moderator writes:
> [This is an ancient argument. You can usually tweak small routines so
> they're faster in assembler than in other languages, but large
> programs rarely turn out better, both because assembler programs are
> longer and so harder to write and debug, and because assemblers offer
> little support for sophisticated data structures so you have trouble
> using faster but more complicated data structures and
> algorithms. -John]


I know a couple of additional reasons why large systems can turn out
faster in high-level languages than in assembler:


1. The compiler never gets bored or tired. No matter how niggling
the optimization, it gets applied everywhere. A good assembly
programmer can beat any compiler if he's got the time and motivation
to squeeze a particular routine --- but maintaining that level of
intensity over millions of instructions is impractical.


2. The compiler doesn't have to consider the readability of its output,
and can do things that an assembly programmer would reject as
unmaintainable. Here's a simple example: in C, write "p->field += X;"
for some constant X. On the old PDP-11, if p was in a register, this
would require one instruction:

        add     #X,offset_of_field(Rn)

Now, if it just so happens that the constant X and the field offset
are numerically equal, one can save a word with

        add     @PC,offset_of_field(Rn)

(@PC fetches the word the program counter currently points at --- here
the very next word of the instruction, which is the destination's
offset word --- so the value added is offset_of_field itself, and the
separate immediate word for X is no longer needed.)

The assembly programmer would likely not notice this equality, and if
he did, he would not exploit it if he had an ounce of sense, because
the code will break if either the constant or the structure layout
changes. The compiler has no reason not to exploit it --- it will
recompile the code anyway if anything changes.
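To make the coincidence concrete, here's a small C sketch (the struct,
its layout, and all names are illustrative, not from the original post)
of the situation where a field's byte offset happens to equal the
constant being added:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative layout: 'field' sits at byte offset 6, and bump() adds
   the constant 6 -- the numeric coincidence the optimization exploits. */
struct rec {
    short pad[3];   /* 6 bytes: offsets 0, 2, 4 */
    short field;    /* offset 6 on a typical 2-byte-aligned layout */
};

void bump(struct rec *p)
{
    p->field += 6;  /* constant == offsetof(struct rec, field) */
}
```

A compiler free to notice the coincidence could emit the one-word @PC
form for bump(); if the struct later gained a member or the constant
changed, it would simply fall back to the ordinary immediate form on
recompilation --- which is exactly why the trick is safe for a compiler
but reckless for a human.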


This is not a hypothetical example. The old Bliss-11 compiler did
exactly this optimization. And all of the pro-HLL arguments mentioned
in this article can be found in Wulf et al's book about Bliss-11, _The
Design of an Optimizing Compiler_, which is as close to sacred writ as
you'll get in this game... shame it's many years out of print...


Bottom line to all this is that if you want to build fast systems in a
reasonable amount of time, you let humans worry about algorithms and
data structures, and you let compilers worry about register- and
instruction-level optimizations.


regards, tom lane


PS: if you haven't read "The Story of Mel", concerning an old-line
assembly programmer who would not hesitate to apply the above
optimization, go look it up. It's Appendix A of _The Hacker's
Dictionary_, also available online at numerous places, such as
http://murrow.journalism.wisc.edu/jargon/jargon_48.html#SEC55
[That PDP-11 optimization showed up in the Ritchie pdp-11 C compiler
optimizer as well, with a nod to Bliss-11. It's the best example I
know of code that should only be written by a computer. -John]

