Re: Debugging of optimized code
Fri, 27 Jan 1995 03:49:54 GMT

          From comp.compilers


Newsgroups: comp.compilers
Keywords: optimize, debug
Organization: CCnet Communications (510-988-7140 guest)
References: 95-01-036
Date: Fri, 27 Jan 1995 03:49:54 GMT

The author of 95-01-036 writes:

>A major irritant with all the optimizing compilers I know of, is that
>symbolic source-level debugging of the optimized version of released
>products becomes impossible or very very flaky.
>The programmers at my company (Tandem Computers) are very frustrated
>with this. With our prior range of stack-oriented CISC machines, the
>compilers could do so little to the code that they didn't bother to
>do much, and so symbolic debugging support and post-mortem analysis
>was excellent in all released products. With the MIPS compilers we
>now use, anyone trying to recreate a problem in its original context
>needs to learn how to puzzle out what the compiler did across an
>entire procedure, at the machine level. The standard answer seems
>to be to rebuild the product with ALL optimizations off, and try to
>recreate the customer's problem in that version.

I have found that just disabling register caching of variables
accomplishes most of what you are looking for, because the
"in memory" variables then reflect the real state of the system.
This presumes, of course, that the compiler has not promoted
the variable into a register entirely.
The rest becomes a problem of relating the source lines to object
lines in the face of code movement.
I suspect that the only way to truly accomplish debuggable optimized
code would be for the optimizer to accept source line numbers
as attached to intermediate objects. These would follow along with
code movement, and then be output as a dictionary for the result.
The compiler would also have to output "register maps" each time
the relationship between variables and registers changes, which can
happen more often than just at the start of the routine; a register
can certainly hold different contents at different points within
the same routine.
The major factors after that are:

1. Is it possible to "lose" a source line marker? Take a trivial example:

        x = 1;
        <some operation>
        x = 1;

If the compiler determines that the second assignment to x is
redundant, it is free to remove it. But then there is no object
code corresponding to that source line.

2. Will the result simply be more confusing because of out-of-order
execution? This is important on RISC. Even if you properly track
line relations, you can be stepping through code and see the
execution point jump back and forth, without following the program's
source order.
3. Are there some optimizations that could not be done while still
maintaining the line relation?

I suspect that you could in fact get close to 100% optimization
with debug capability; it is just an information problem. The
question is whether the extra compiler effort is worth it, and
whether the resulting (massive) compiler debug output file is
worth it as well.

