Re: Is There Still a Need for "Turbo" Compilers?

From: Gene <gene.ressler@gmail.com>
Newsgroups: comp.compilers
Date: Tue, 18 Mar 2008 20:41:42 -0700 (PDT)
Organization: Compilers Central
References: 08-03-067
Keywords: performance
Posted-Date: 18 Mar 2008 23:46:45 EDT

On Mar 17, 11:52 am, Jon Forrest <jlforr...@berkeley.edu> wrote:


> Those of us who have been around a while still remember the miracle of
> Borland's "Turbo" languages. But, how fast could a compiler be given
> today's vast amount of virtual memory and multiple-core CPUs?


You can try it. The old Borland Turbo compilers are still out there,
and they run fairly well in an XP cmd shell.


I loved those old compilers and knew them very well. Some random
observations...


Source was parsed directly from the editor buffer.
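In other words, the compiler's input was a pointer into the resident
editor text: no temp files, no rereading the source from disk.
Schematically, in C (my sketch, not Borland's code):

    #include <stdio.h>

    /* The "source file" is just a span of the in-memory editor text. */
    typedef struct { const char *cur, *end; } Scanner;

    static int next_char(Scanner *s)
    {
        return s->cur < s->end ? *s->cur++ : -1;  /* -1 = end of buffer */
    }

    int main(void)
    {
        char editor_buf[] = "begin end.";   /* stands in for the IDE's buffer */
        Scanner s = { editor_buf, editor_buf + sizeof editor_buf - 1 };
        int c;
        while ((c = next_char(&s)) != -1)
            putchar(c);                     /* a real lexer would tokenize here */
        putchar('\n');
        return 0;
    }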


The code from the Pascal compilers was poor enough that it could all
have been produced in a single LR pass, right in the parser. Not
generating an IR is a tremendous advantage.
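To see the advantage, picture a parser that emits code the moment it
recognizes a construct. Here is a toy one-pass expression compiler
for a stack machine (my illustration, using recursive descent rather
than LR, but the effect is the same: no IR, no later passes):

    #include <ctype.h>
    #include <stdio.h>

    static const char *src;   /* source text being compiled */

    static void emit(const char *op) { printf("%s\n", op); }

    static void factor(void)
    {
        if (isdigit((unsigned char)*src))
            printf("PUSH %c\n", *src++);   /* code comes out immediately */
    }

    static void term(void)
    {
        factor();
        while (*src == '*') { src++; factor(); emit("MUL"); }
    }

    static void expr(void)
    {
        term();
        while (*src == '+') { src++; term(); emit("ADD"); }
    }

    int main(void)
    {
        src = "1+2*3";
        expr();   /* prints PUSH 1, PUSH 2, PUSH 3, MUL, ADD */
        return 0;
    }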


At least the early TP versions had major components (perhaps the
whole compiler) hand-coded in assembly language. They were
"compile-and-go": the machine code was built in memory and executed
directly. They only wrote to disk when asked to produce an
executable.
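Today's JITs use the same trick. On a POSIX x86-64 machine the "go"
step looks roughly like this sketch of mine (the bytes encode
"mov eax, 42; ret"; note that some hardened systems refuse
writable-and-executable mappings):

    #define _DEFAULT_SOURCE           /* for MAP_ANONYMOUS */
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
        void *mem = mmap(NULL, sizeof code,
                         PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;
        memcpy(mem, code, sizeof code);   /* "compile" into memory... */
        int (*fn)(void) = (int (*)(void))mem;
        return fn() == 42 ? 0 : 1;        /* ...and go: jump into it */
    }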


Because the parsers stopped on the first error, they needed no error
recovery mechanism and only minimal error messages. No doubt this
helped speed. I always felt the approach made a lot of sense: if it
only takes 1/10 of a second to compile a 2,000-line source, why
bother finding more than one error at a time?
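Mechanically, quitting on the first error is as simple as one longjmp
out of the recursive parser; something like this sketch of mine:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf bail;

    static void error(const char *msg, int pos)
    {
        fprintf(stderr, "Error at %d: %s\n", pos, msg);
        longjmp(bail, 1);   /* unwind the whole parse in one shot */
    }

    int main(void)
    {
        if (setjmp(bail))
            return 1;              /* land here on the first error */
        error(") expected", 17);   /* stands in for a real parse() */
        return 0;
    }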


I'm pretty sure, from the way they aborted on larger sources, that
the relative decline in performance of the Turbo C/C++ compilers with
respect to Turbo Pascal came from adopting an IR.


The Turbo C++ compiler implemented the old Stroustrup dialect, with
no templates or multiple inheritance - considerable simplifications.


Folks ought not to be so rough on gcc. It grew up over a long period
with many authors, which is not a recipe for speed, and it pays
significant bills for its generality: the data structures are very
rich so that they can handle many languages and processors, and even
with optimizations off, the optimization machinery imposes overhead.


It will be interesting to see the performance of a minimal compiler
built with LLVM and the engineered-from-scratch Clang front end.
There is a nice Google video where Chris Lattner talks about some of
the things that make gcc hungry for memory and time:
http://video.google.com/videoplay?docid=1921156852099786640

