From: Martin Bravenboer <email@example.com>
Date: 28 Nov 2004 23:21:03 -0500
Organization: Dept of Computer Science, Utrecht University, The Netherlands
Posted-Date: 28 Nov 2004 23:21:03 EST
> I would like to hear opinions about the differences
> between formal lexer and parser grammars.
Usually, the difference is in the expressiveness of the grammars: the
lexical syntax is a set of regular expressions, and the 'parser grammar'
is a context-free grammar over the kinds of tokens defined by the lexical
syntax. This separation is motivated by performance concerns: by
splitting the input into tokens first, the number of elements the parser
has to consider is reduced.
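
For concreteness, here is a minimal sketch of that two-phase pipeline in
Python for a toy expression language; the names (Token, tokenize, Parser)
are illustrative, not from any particular tool:

import re
from typing import NamedTuple

class Token(NamedTuple):
    kind: str
    text: str

# Phase 1: lexical syntax as a set of regular expressions.
TOKEN_SPEC = [
    ("ID",     r"[A-Za-z_]\w*"),
    ("PLUS",   r"\+"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{k}>{p})" for k, p in TOKEN_SPEC))

def tokenize(src):
    for m in MASTER_RE.finditer(src):
        if m.lastgroup != "SKIP":
            yield Token(m.lastgroup, m.group())

# Phase 2: a context-free grammar over token kinds, as a recursive-descent
# parser. It never looks at raw characters again:
#   Expr -> Atom ('+' Atom)* ;  Atom -> ID | '(' Expr ')'
class Parser:
    def __init__(self, tokens):
        self.tokens = list(tokens) + [Token("EOF", "")]
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def expect(self, kind):
        tok = self.tokens[self.pos]
        assert tok.kind == kind, f"expected {kind}, got {tok}"
        self.pos += 1
        return tok

    def expr(self):
        node = self.atom()
        while self.peek().kind == "PLUS":
            self.expect("PLUS")
            node = ("+", node, self.atom())
        return node

    def atom(self):
        if self.peek().kind == "LPAREN":
            self.expect("LPAREN")
            node = self.expr()
            self.expect("RPAREN")
            return node
        return ("id", self.expect("ID").text)

print(Parser(tokenize("a + (b + c)")).expr())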
In scannerless parsing there is no separate lexical analysis phase:
every input character is a token. I think that this is conceptually even
*cleaner* than using a separate scanner. The problem with a separate
scanner is that it cannot take the context of a token in the input into
account: it has to commit to a tokenization before the parser has seen
the surrounding structure.
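
The classic illustration is the '>>' problem from C++/Java generics: a
scanner applying longest match turns the two closing brackets of
List<List<int>> into a single shift token, because in isolation it
cannot know it is inside a type. A parser working directly on characters
simply consumes each '>' where the grammar expects one. A rough Python
sketch (the names are made up for illustration):

class TypeParser:
    def __init__(self, src):
        self.src = src
        self.pos = 0

    def skip_ws(self):
        while self.pos < len(self.src) and self.src[self.pos].isspace():
            self.pos += 1

    def ident(self):
        self.skip_ws()
        start = self.pos
        while self.pos < len(self.src) and self.src[self.pos].isalnum():
            self.pos += 1
        assert self.pos > start, f"identifier expected at {start}"
        return self.src[start:self.pos]

    def type_(self):
        # Type -> Ident ('<' Type '>')?
        name = self.ident()
        self.skip_ws()
        if self.pos < len(self.src) and self.src[self.pos] == "<":
            self.pos += 1
            arg = self.type_()
            self.skip_ws()
            # Consume exactly one '>' -- even if the next character is
            # also '>', which a longest-match scanner would have glued
            # into a single '>>' token.
            assert self.src[self.pos] == ">", "'>' expected"
            self.pos += 1
            return (name, arg)
        return name

print(TypeParser("List<List<int>>").type_())  # ('List', ('List', 'int'))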
> IMO it's not a good idea to
> use the same meta language for both kinds of grammars
Why isn't this a good idea?
> even if it were
> possible to construct such a super language?
It is possible: SDF ( http://syntax-definition.org ) integrates the
definition of the 'lexer grammar' and 'parser grammar'. SDF is
implemented by scannerless generalized LR parsing.
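
To give a flavor (this fragment is a sketch from memory, so check the
SDF documentation for the details), an SDF module puts lexical and
context-free productions side by side, and both are ultimately
interpreted over characters:

module Expr
exports
  sorts Id Expr
  lexical syntax
    [a-zA-Z]+ -> Id
    [\ \t\n]  -> LAYOUT
  lexical restrictions
    Id -/- [a-zA-Z]        %% longest match, stated in the grammar
  context-free syntax
    Id            -> Expr
    Expr "+" Expr -> Expr {left}

Note that there is no built-in longest match: since there is no scanner,
such disambiguation is written down explicitly in the grammar, here as a
follow restriction on Id.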