|Threaded Interpretive Languages email@example.com (Andrew Tucker) (1993-09-14)|
|Re: Threaded Interpretive Languages firstname.lastname@example.org (Julian V. Noble) (1993-09-21)|
|Re: Threaded Interpretive Languages email@example.com (1993-09-23)|
|Re: Threaded Interpretive Languages N.Chapman@cs.ucl.ac.uk (Nigel Chapman) (1993-09-24)|
|Re: Threaded Interpretive Languages firstname.lastname@example.org (1993-09-26)|
|Re: Threaded Interpretive Languages email@example.com (pop) (1993-09-28)|
|Re: Threaded Interpretive Languages firstname.lastname@example.org (1993-09-28)|
|Re: Threaded Interpretive Languages email@example.com (1993-09-29)|
|From:||firstname.lastname@example.org (Cliff Click)|
|Organization:||Center for Research on Parallel Computations|
|Date:||Thu, 23 Sep 1993 17:56:22 GMT|
"Julian V. Noble" <email@example.com> writes:
The prototypical TIL is FORTH.
[... a great deal more about Forth...]
My Master's thesis was an integrated Forth environment with a
"Smalltalk-like" code browser, tree-structured dictionary, incremental
compilation, debugger, etc... Basically made a tight edit-compile-debug
cycle even tighter. I compiled to "subroutine-threaded" code instead of
threaded interpretive; the code is twice as big but runs twice as fast. My
compiler also did *some* peephole optimization. It was used to write a
Postscript clone (a large software project). Here are my pros & cons.
Very fast edit/compile/debug cycle: basically you could edit & run, no
matter what or how much you changed - the incremental compiler was invoked
automatically, and was too fast to see. Even for 350K lines of code.
Mini-languages: you can easily (trivially?) write mini-parsers to handle
repeated-but-mildly-different problems. We used this to write mini-
languages for specifying Postscript operator function types.
You can debug subroutines as stand-alones: after defining a routine, you
can immediately invoke it with some parameters. Really nice for testing
stuff piecemeal. Of course, modern debuggers can do this as well, but
you gotta compile/link/load the debugger before you can test this way.
Stack programming can be gotten used to, but it's not as fast or as
natural (especially for scientific formulas) as infix.
NO STATIC TYPE-CHECKING. This sucks on large projects, where the
compiler's type-checking is your friend. Many stupid bugs that the
compiler could catch, you instead have to sweat out (with the nice
debugger). No dynamic type-checking either, but then neither C nor
Fortran does that either.
Lousy code quality: symptomatic of fast incremental compilation. Small
scientific kernels can have amazing transformations done to them to get
speed out of modern micros (software pipelining, blocking for cache, etc).
But just your plain vanilla constant-propagation, dead-code-elimination,
strength-reduction stuff is not commonly available (duck&cover: I have
heard of standalone Forth compilers that do this stuff).
Inconvenient structured variables, object-oriented stuff, local variables.
All these can be cobbled into the language, but the "syntactic sugar" is
missing. Structures and objects are a darned sight easier in C++.
How do you do source-code-control reasonably well?
(How do the Smalltalk folks do it?)
Still nice for embedded systems work, or the lone hacker.
Avoid for large projects, unless you've already got a bunch of Forth'ers.
If I were to do my thesis again, I'd have:
a statically typed language (with an ML style type-checker),
revision control built-in,
integrated with what was already there:
edit, compile, link, debug, make, help, libraries.
firstname.lastname@example.org -- Massively Scalar Compiler Group, Rice University