From: Joachim Durchholz <firstname.lastname@example.org>
Date: 7 Jan 2004 00:57:17 -0500
Organization: Oberberg Online Infosysteme
Posted-Date: 07 Jan 2004 00:57:17 EST
> I wrote a little interpreter by hand in c++ (without any tools) and
> use the following technique to run the parse tree: I have an abstract
> class "node" with a method "solve". The real tree nodes overload this
> method, in which the solve methods of the child nodes are called
> "recursively". To run the program the solve method of the main program
> node is called and the rest happens automatically.
> I want to know if this is the right/common technique to run a parse
> tree.
It's OK if the language is more variable than the set of functions that
you'd want to apply to the parse tree.
It's a problem if the language is static and you anticipate adding
further functions (tree transformations, code refactoring,
prettyprinting, and the like).
It's unsuitable if you plan to add both code transformations and new
functions. Any transformation would either have to be pretty sure to
avoid modifying the semantics (which pretty much invalidates all
transformations once you add prettyprinting), or add a machinery to keep
track of which transformations have changed which of the functions.
Which means that you'll have an overhead of O(N transformations x M
functions) to keep the code maintained, which (for me) translates into
"unmaintainable".
I know of no good way to factor out changes both in language constructs
and in the functions applied to those constructs (that will always
explode into O(N constructs x M functions)). The usual solution is to
keep the language small, or to define most of the language via a
translation to a "kernel language" and do the real work on the
translated form, but this makes referring back to the original source
code more difficult.
> Is there any other more efficient solution?
Yes, of course, but concrete optimizations would imply concrete
assumptions, e.g. that the language won't change, or that you'll never
have another function, or that the language size (i.e. number of node
types) is kept small. Since we don't know any of your concrete plans,
it's a bit difficult to recommend a concrete optimization path.
Most techniques for fast interpreters don't operate on parse trees
though. IOW you'd have to translate the parse trees into some sequential
form first.
Notable exceptions are:
* The interpreter for some Modula-3 systems (I think).
* "Term/graph rewriting" interpreters. These are typical for languages
with "lazy evaluation", since the classical techniques for interpreting
such kinds of languages tend to incur high overheads.