From: email@example.com (Michael Haardt)
Date: 18 Apr 2001 02:26:09 -0400
> > The parser performs the following steps to parse:
> > 1) tokenises the input string to Keywords (and function names), and
> >types (numerics, strings, symbols) into a stack.
> A stack might not be the most appropriate data type, since it seems
> that you do not access the data last in, first out. Consider using
> some other data type, e.g. an array.
Or store only the tokenised text. You need to invent a junk token,
though, whose attribute stores arbitrary text, so you can load and edit
syntactically invalid programs. You'd be surprised how many old
programs contain syntax errors in rarely or never used code paths.
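The junk-token idea can be sketched as follows. The token categories and the keyword subset here are illustrative, not taken from any particular BASIC; the point is only that unrecognisable text survives as a JUNK token whose attribute keeps the raw characters:

```python
import re

# Illustrative subset of keywords; a real BASIC has many more.
KEYWORDS = {"PRINT", "LET", "GOTO", "IF", "THEN", "FOR", "NEXT"}

# Alternatives are tried left to right: string, number, identifier,
# single symbol, and finally any run of non-space text as junk.
TOKEN_RE = re.compile(r'"[^"]*"|\d+(?:\.\d+)?|[A-Za-z][A-Za-z0-9]*\$?'
                      r'|[-+*/=<>(),;:]|\S+')

def tokenise(line):
    """Split a BASIC line into (kind, text) pairs; anything
    unrecognisable becomes a JUNK token that preserves the raw
    text, so invalid lines survive a load/edit cycle."""
    tokens = []
    for m in TOKEN_RE.finditer(line):
        t = m.group()
        if t.startswith('"'):
            tokens.append(("STRING", t))
        elif t[0].isdigit():
            tokens.append(("NUMBER", t))
        elif t.upper() in KEYWORDS:
            tokens.append(("KEYWORD", t.upper()))
        elif t[0].isalpha():
            tokens.append(("IDENT", t))
        elif t in "-+*/=<>(),;:":
            tokens.append(("SYMBOL", t))
        else:
            tokens.append(("JUNK", t))  # arbitrary text, kept verbatim
    return tokens
```

A detokeniser can then reproduce the original line, junk and all, which is exactly what loading an old program with dormant syntax errors requires.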
> > 2) runs the resulting stack through the rule reducer to reduce
> > all the expressions to their base types (numexpr, stringexpr etc)
> You do this before parsing? This is a strange approach, which I do not
> fully understand.
It sounds like semantic analysis, in particular the type check.
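One way to read that step, assuming it is a type check: fold the operands of a flat expression token list down to a single base-type tag. The names numexpr and stringexpr come from the original post; the folding rule here (all operands must share one type) is a guess at the simplest version:

```python
def base_type(tok):
    """Map one (kind, text) token to a base type, or None for
    operators and other non-operands. The $ suffix convention
    for string variables is standard BASIC."""
    kind, text = tok
    if kind == "NUMBER":
        return "numexpr"
    if kind == "STRING":
        return "stringexpr"
    if kind == "IDENT":
        return "stringexpr" if text.endswith("$") else "numexpr"
    return None

def reduce_expr(tokens):
    """Reduce an expression's tokens to 'numexpr' or 'stringexpr',
    raising on a type mismatch (illustrative rule: no mixing)."""
    types = [t for t in map(base_type, tokens) if t is not None]
    if not types:
        raise ValueError("empty expression")
    if any(t != types[0] for t in types):
        raise TypeError("mixed types in expression")
    return types[0]
```

After this pass, the per-keyword grammar rule only has to say "numexpr" where a whole numeric subexpression may appear, which keeps those rules small.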
> > 3) runs the resulting (smaller) stack through the BNF parser using the
> >rule for that keyword. It is this part that performs error checking.
> Is this a hand-coded parser? Perhaps you wrote a parser that
> backtracks when confronted with something it cannot handle? Such a
> parser can be very slow, if it has to do lots of backtracking.
I wrote my own BASIC interpreter (http://www.moria.de/~michael/bas),
but benchmarked mostly execution speed, which is dominated by
expression evaluation. I tried recursive descent vs. LR(0) with
hand-resolved conflicts for that; on PCs both run at about the same
speed, whereas an old Sun 4c does much better with LR parsing. Either
way, parsing during execution is not going to yield a very fast
interpreter.
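For reference, the recursive-descent side of that comparison looks roughly like this: one function per precedence level, numbers and parentheses only (no unary minus or variables, which a real evaluator would need):

```python
def evaluate(src):
    """Minimal recursive-descent evaluator for numeric expressions.
    Grammar: expr := term (('+'|'-') term)*
             term := factor (('*'|'/') factor)*
             factor := NUMBER | '(' expr ')'"""
    # Crude lexer: pad operators with spaces, then split.
    for ch in "()+-*/":
        src = src.replace(ch, f" {ch} ")
    toks = src.split()
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def expr():
        nonlocal pos
        v = term()
        while peek() in ("+", "-"):
            op = toks[pos]; pos += 1
            v = v + term() if op == "+" else v - term()
        return v

    def term():
        nonlocal pos
        v = factor()
        while peek() in ("*", "/"):
            op = toks[pos]; pos += 1
            v = v * factor() if op == "*" else v / factor()
        return v

    def factor():
        nonlocal pos
        t = toks[pos]; pos += 1
        if t == "(":
            v = expr()
            pos += 1          # consume the closing ')'
            return v
        return float(t)

    return expr()
```

The LR(0) alternative replaces these mutually recursive calls with an explicit state/value stack, which trades call overhead for table lookups; as noted above, which one wins depends on the machine.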
My interpreter stores programs tokenised. Before running a program,
it makes a first pass to build the symbol table and a second to resolve
symbol references and check types.
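The two-pass scheme can be sketched like this. The program representation (a list of (line_number, tokens) pairs) and the restriction to GOTO targets are simplifications for illustration; the real interpreter also resolves variables and checks types in the second pass:

```python
def resolve(program):
    """Two passes over a tokenised program: pass 1 builds a symbol
    table mapping BASIC line numbers to positions, pass 2 rewrites
    every GOTO target as a direct index, reporting dangling ones."""
    # Pass 1: symbol table.
    table = {lineno: i for i, (lineno, _) in enumerate(program)}
    # Pass 2: resolve references.
    resolved = []
    for lineno, tokens in program:
        out = []
        it = iter(tokens)
        for tok in it:
            if tok == ("KEYWORD", "GOTO"):
                _kind, text = next(it)
                target = int(text)
                if target not in table:
                    raise KeyError(f"line {lineno}: GOTO {target} undefined")
                out.append(("GOTO", table[target]))
            else:
                out.append(tok)
        resolved.append((lineno, out))
    return resolved
```

Resolving targets to indices up front means the run loop never searches for a line number, which matters since jumps are frequent in BASIC programs.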
A line-by-line check only works for one-line statements, but BASIC is not
that simple at all. It only looks that way. ;)
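A concrete case of what a line-by-line check cannot catch: FOR and its matching NEXT usually sit on different lines, so pairing them needs a whole-program pass. A minimal sketch, with lines reduced to keyword lists for illustration:

```python
def check_for_next(lines):
    """Whole-program check that line-by-line parsing cannot do:
    pair each FOR with a NEXT across lines using a stack.
    Returns an error string, or None if the loops balance."""
    stack = []
    for lineno, keywords in lines:
        for kw in keywords:
            if kw == "FOR":
                stack.append(lineno)      # remember where the loop opened
            elif kw == "NEXT":
                if not stack:
                    return f"line {lineno}: NEXT without FOR"
                stack.pop()
    if stack:
        return f"line {stack[-1]}: FOR without NEXT"
    return None
```

IF/THEN blocks, DEF FN, and GOSUB/RETURN raise similar cross-line questions, which is why BASIC only looks simple.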