Input buffer overflow in lex firstname.lastname@example.org (1993-01-04)
Re: Input buffer overflow in lex... email@example.com (John R. Levine) (1993-01-05)
Re: Input buffer overflow in lex... firstname.lastname@example.org (1993-01-05)
Re: Input buffer overflow in lex... email@example.com (1993-01-06)
Re: Input buffer overflow in lex... firstname.lastname@example.org (1993-01-08)
From: email@example.com (Richard Wagner)
Date: Wed, 6 Jan 1993 19:57:37 GMT
In the version of "lex" I use, the generated lexer gets its input via an
"input" macro. One solution, which may be viewed as a "kludge" or "hack",
is to "#undef" the default macro and "#define" another with the same
functionality, but which also checks for overflow (and possibly takes some
recovery action, like copying what's in the buffer to an "infinitely" long
linked list of buffers).
#define usual_input_macro() (...text of default "input" definition...)

#undef input
#define input() ( \
    ('\0' != *(yytext + YYLMAX - 1)) \
    ? token_too_long() \
    : usual_input_macro() \
)

#define token_too_long() (...recovery action...)
Disclaimer: I've never tried this.
Let's hope I'm not being blind to some reason this can't work. I
personally prefer this to just bumping YYLMAX, which may be fine,
practically speaking, but still nags in that it merely postpones the problem.
Hope this helps,
[I still think you're better off writing your lexer so you don't get
enormous tokens. -John]