Re: Testing strategy for compiler

Barry Kelly <barry.j.kelly@gmail.com>
Tue, 22 Jun 2010 22:22:45 +0100

          From comp.compilers


From: Barry Kelly <barry.j.kelly@gmail.com>
Newsgroups: comp.compilers
Date: Tue, 22 Jun 2010 22:22:45 +0100
Organization: TeraNews.com
References: 10-06-037
Keywords: Pascal, design, testing
Posted-Date: 23 Jun 2010 09:57:57 EDT

kuangpma wrote:


> Say I have written a hand-crafted lexer/parser/code gen... to make a
> complete compiler. The question is how to test it? Since users can
> write their programs in millions of possible ways (with many different
> types of syntax errors), it is difficult to test all the possible
> cases. So are there any good ways to test the compiler? How do the
> big guys (MS/Borland...) test their compilers? Thanks.


I can speak a little to how Borland tested their compiler, as I now help
maintain Delphi (and worked on it back when it was still part of
Borland).


1) Large corpus of code which is expected to compile, and is developed
with continuous integration. If you break the compiler, the whole build
tree (including IDE etc.) will likely fail, or one of its tests. This
only checks the good case, of course.


2) Testing tools which feed code to the compiler, with either expected
compiler failures or success, and then run the code (if success) and
check for expected output.
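As a minimal sketch of how such a tool might work, the helpers below
parse expectations embedded in a test source and judge the outcome. The
{%FAIL} and {%OUTPUT:} directive syntax is invented here for
illustration; it is not Borland's actual test format.

```python
import re

# Hypothetical directive syntax, invented for illustration (not
# Borland's actual test format):
#   {%FAIL}          -- the source is expected to fail to compile
#   {%OUTPUT: text}  -- one expected line of program output (may repeat)

def parse_expectations(source):
    """Extract embedded test expectations from a Pascal test source."""
    return {
        "expect_fail": bool(re.search(r"\{%FAIL\}", source)),
        "expected_output": re.findall(r"\{%OUTPUT:\s*(.*?)\}", source),
    }

def judge(expectations, compiled_ok, actual_output_lines):
    """Decide pass/fail from what the compiler and program actually did."""
    if expectations["expect_fail"]:
        return not compiled_ok      # an expected error must be flagged
    if not compiled_ok:
        return False                # good code must compile
    return actual_output_lines == expectations["expected_output"]
```

A driver would then invoke the compiler under test on each file, run the
resulting binary when compilation succeeds, and hand both results to
judge().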


3) A large corpus of tests for such tools built up over the years, from
three sources: compiler developers, QA, and bug reports.


4) Automating all this and running it continuously, to discover
regressions as soon as possible.


Sometimes it comes down to combinatorial testing. To elicit overload
resolution abnormalities, for example, I wrote a tool which generated
overloaded declarations and expressions with different kinds of types
(e.g. object types like Animal, Feline, Lion, Bird, Stool, interfaces
like IFourLegged) and then exhaustively constructed a sorted order to
determine which overloads were preferred by what kinds of arguments, and
which ones led to an ambiguity error.
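A toy version of that kind of combinatorial driver might look like the
following. The type lattice echoes the example types above, but the
subtype relation and the "most specific wins" rule are simplified
stand-ins, not Delphi's actual overload-resolution rules.

```python
# Toy type lattice echoing the example types; simplified stand-in for
# Delphi's actual overload-resolution rules.
SUPERTYPES = {
    "Lion": ["Feline", "IFourLegged"],
    "Feline": ["Animal"],
    "Bird": ["Animal"],
    "Stool": ["IFourLegged"],
    "Animal": [],
    "IFourLegged": [],
}

def ancestors(t):
    """All types t is assignment-compatible with, including t itself."""
    seen, stack = set(), [t]
    while stack:
        cur = stack.pop()
        if cur not in seen:
            seen.add(cur)
            stack.extend(SUPERTYPES.get(cur, []))
    return seen

def resolve(overloads, arg):
    """Pick the unique most-specific applicable overload, if any."""
    cands = [p for p in overloads if p in ancestors(arg)]
    if not cands:
        return "none"
    # p is most specific if every other candidate is a supertype of p
    best = [p for p in cands if all(q in ancestors(p) for q in cands)]
    return best[0] if len(best) == 1 else "ambiguous"

def sweep(overloads):
    """Exhaustively try every argument type against one overload set."""
    return {arg: resolve(overloads, arg) for arg in SUPERTYPES}
```

Running sweep() over generated overload sets and comparing the table
against what the real compiler accepts is the essence of the exercise:
disagreements point at resolution bugs.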


Eric Lippert (a C# compiler dev at Microsoft) recently wrote a series of
blog posts about enumerating all possible sentences for a particular
grammar. Obviously, that's an infinite set, but it's possible to pull
random cases out of it. Add in some fuzz testing (i.e. deliberate
corruption of that input), and you can also look for expected errors
that aren't flagged.


http://blogs.msdn.com/b/ericlippert/archive/2010/04/26/every-program-there-is-part-one.aspx
through to:
http://blogs.msdn.com/b/ericlippert/archive/2010/05/24/every-program-there-is-part-nine.aspx
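As a sketch of that idea, here is a random sentence generator for a toy
expression grammar plus a single-character corruptor. Both the grammar
and the fuzzing alphabet are invented for illustration.

```python
import random

# A toy expression grammar, purely illustrative. Each nonterminal maps
# to a list of productions; anything not in the map is a terminal.
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["factor", "*", "term"], ["factor"]],
    "factor": [["(", "expr", ")"], ["x"]],
}

def gen(symbol, rng, depth=0):
    """Generate one random sentence of the grammar."""
    if symbol not in GRAMMAR:
        return symbol  # terminal
    prods = GRAMMAR[symbol]
    # past a depth limit, always take the last (shortest) production
    # so that generation terminates
    prod = rng.choice(prods) if depth < 6 else prods[-1]
    return "".join(gen(s, rng, depth + 1) for s in prod)

def fuzz(sentence, rng):
    """Corrupt one character: the result should be rejected cleanly by
    the compiler under test -- never crash it."""
    i = rng.randrange(len(sentence))
    return sentence[:i] + rng.choice("+*()@") + sentence[i + 1:]
```

Feeding gen() output through the expected-to-compile path and fuzz()
output through the expected-to-fail path covers both sides: valid
programs that must be accepted, and corrupted ones whose errors must be
flagged.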


-- Barry


http://blog.barrkel.com/

