Re: The RISC penalty

pardo@cs.washington.edu (David Keppel)
18 Dec 1995 19:09:44 -0500

          From comp.compilers

Related articles
The RISC penalty d.sand@ix.netcom.com (1995-12-09)
Re: The RISC penalty cdg@nullstone.com (1995-12-17)
Re: The RISC penalty pardo@cs.washington.edu (1995-12-18)
Re: The RISC penalty pardo@cs.washington.edu (1995-12-19)
Re: The RISC penalty jbuck@Synopsys.COM (1995-12-20)
Re: The RISC penalty pardo@cs.washington.edu (1995-12-21)
Re: The RISC penalty iank@dircon.co.uk (1995-12-28)
Re: The RISC penalty dlmoore@ix.netcom.com (1995-12-28)
Re: The RISC penalty meissner@cygnus.com (1995-12-30)
[2 later articles]

From: pardo@cs.washington.edu (David Keppel)
Newsgroups: comp.compilers
Date: 18 Dec 1995 19:09:44 -0500
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 95-12-063 95-12-077
Keywords: architecture, performance

John Levine <compilers-request@iecc.com> writes:
>[I looked at Pittman's ``The RISC Penalty'' article, ... I'm not
> amazed at his conclusion that 68K chips run 68K object code better
> than RISCs.]


Probably true, but that wasn't his conclusion.


The article reported that a 68K *interpreter* written in RISC code had
a theoretical cost of 68 cycles, not including cache misses, and that
a dynamic cross-compiler for the same function produced code that
theoretically ran in 18 cycles. Indeed, for small benchmarks, good
speedups were observed using the dynamic cross-compiler. However,
when run on one real x86 application (unnamed, code size not
specified), the dramatically larger code generated by the dynamic
cross-compiler ran *slower* than the interpreter code, because the
instruction cache miss rates were terrible with the larger code.


I believe that one of Pittman's conclusions was that code-expanding
transformations (``optimizations'') are less likely to be successful
with a RISC because they're already running ``close to saturation'' of
the instruction memory bandwidth. In short, there IS a space cost for
code, and RISC, with its larger code, runs into that cost sooner than
some other architectures would (Pittman proposes stack code, which is
probably great for integer ops, but I'm unclear about fp, which tends
to work better when pipelined).


BTW, this isn't the first time Pittman has noted the space cost:


%A Thomas Pittman
%T Two-Level Hybrid Interpreter/Native Code Execution for Combined
Space-Time Program Efficiency
%D 1987
%J ACM SIGPLAN
%P 150-152


Three pages and well worth the read.


;-D on ( Me and my spacey costs ) Pardo
[All true, but when you look at his code examples, they have all sorts of crud
due to having to emulate details of the 68K architecture. I didn't see much
evidence that his observations would be applicable to other kinds of programs.
-John]

