C code .vs. Assembly code for Microcontrollers/DSPs ? email@example.com (1996-03-01)
Re: C code .vs. Assembly code for Microcontrollers/DSPs ? john.r.strohm@BIX.com (1996-03-15)
FTN Compiler (was: C code .vs. Assembly code) firstname.lastname@example.org (1996-03-22)
Re: FTN Compiler (was: C code .vs. Assembly code) email@example.com (Milton Barber) (1996-03-22)
From: Milton Barber <firstname.lastname@example.org>
Date: 22 Mar 1996 21:37:30 -0500
References: 96-03-006 96-03-104 96-03-147
Keywords: Fortran, optimize, history
Scott A. Berg wrote:
> The RUN FORTRAN compiler for the 1960s-vintage Control Data 6600
> scalar supercomputer did an incredibly good job of generating optimal
> code sequences that took into account ALL of the quirks and timing
> considerations of the processor. It was MUCH better than human expert
> assembly language programmers, for the kinds of things that FORTRAN
> compilers for a scalar supercomputer typically needed to do.
David L Moore wrote:
> I don't believe this. My memory of RUN was that it was an older and
> less optimizing compiler than FTN (perhaps I am wrong on that) ...
I worked on maintenance of RUN for a while in the '60s. I also worked
on FTN for several years, starting with FTN 1.0.
David Moore is correct that RUN was an older and less optimizing
compiler. Scott Berg's assertion is just wrong. Code from RUN wasn't
very good. Code from FTN 1.0 was significantly better than any
version of RUN, and it got better across subsequent versions. FTN 1.0
was the first compiler to do what we now call instruction scheduling.
Later versions got better at scheduling and tentatively explored
global optimization.
> (PS. The FTN compiler had another interesting "hack". It generated
> assembly code and contained its own assembler. You could use the OS
> assembler to assemble this code rather than the inbuilt assembler, but
> if you did, the assembly phase took about twice as long as the entire
> compile would have using the inbuilt assembler.
> To get speed, they built a tree indexed by the next letter of an
> opcode. So, each node had a 26 word table attached. This allowed very
> fast opcode lookup - a three character opcode required three loads -
> but took a lot of space as the tree was sparse. They used the wasted
> space for the rest of the code of the assembly pass - that is, the
> code was embedded in the tables.
> The neat hack is this - there were a set of macros to generate the
> tables. These macros generated a free list of unused space. Then there
> were macros that were used around each basic block of code. These
> allocated memory from the heap previously created for the block of
> code and ORG'd it. Hence, the packing of the code into the table was
> completely automatic - this was heap management at assembly time using
> assembler pseudo-instructions!
> When I saw this as a young programmer I was awed. Now that I am an old
> grizzled programmer, I am still majorly impressed. I wonder if anyone
> remembers who wrote those macros?)
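The lookup scheme described above can be sketched in modern terms. This is a hedged illustration only, not the FTN assembler's actual code: the opcode names and values below are made up, the trie uses 26-entry tables indexed by letter (the real CDC opcodes also contained digits, which would need a wider table), and `free_slots` merely counts the holes that the original macros filled with real code via ORG.

```python
# Sketch of a letter-indexed trie: each node is a 26-word table, so a
# three-letter opcode is resolved with three indexed loads. All opcode
# names and numeric values here are illustrative, not the CDC 6600 set.

A = ord('A')

def make_node():
    return [None] * 26  # one slot per letter A-Z

root = make_node()

def insert(opcode, value):
    """Walk/build interior nodes for the prefix, store value at the last letter."""
    node = root
    for ch in opcode[:-1]:
        i = ord(ch) - A
        if not isinstance(node[i], list):
            node[i] = make_node()
        node = node[i]
    node[ord(opcode[-1]) - A] = value

def lookup(opcode):
    """One indexed load per letter; None if the opcode is unknown."""
    node = root
    for ch in opcode:
        node = node[ord(ch) - A]
        if node is None:
            return None
    return node

def free_slots(node):
    """Count unused table words -- the space the FTN macros reclaimed
    by ORG'ing blocks of assembler code into the holes."""
    n = sum(1 for s in node if s is None)
    for s in node:
        if isinstance(s, list):
            n += free_slots(s)
    return n

# Illustrative entries sharing the prefix "AD":
insert("ADD", 1)
insert("ADC", 2)

assert lookup("ADD") == 1
assert lookup("ADC") == 2
assert lookup("SUB") is None
```

The space/time trade-off is visible even in this toy: two three-letter opcodes cost three 26-word tables, of which only four slots are used. The clever part of the original hack was that the unused words were not wasted at all, but allocated, at assembly time, to hold the assembler pass's own code.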
I never worked on the assembler part of the compiler, so I don't
remember this "hack" specifically, but I have seen lots of similar
stuff. Coding in a powerful macro assembler is a different conceptual
world from what we have today, with the prevalent "90-lb weakling"
assemblers. We certainly used the assembler's capabilities to allocate
storage in a variety of creative ways.
You might try Steve Jasik, of "MacNosy" fame. If he isn't the one who
wrote this hack, he would probably know who did. I'm sorry, but I
don't have a current reference as to how to contact Steve.