Re: General profiling approaches

Chris F Clark <cfc@shell01.TheWorld.com>
13 Nov 2006 16:33:43 -0500

From comp.compilers

Related articles
General profiling approaches free4trample@yahoo.com (fermineutron) (2006-11-04)
Re: General profiling approaches free4trample@yahoo.com (fermineutron) (2006-11-11)
Re: General profiling approaches cfc@shell01.TheWorld.com (Chris F Clark) (2006-11-13)
Re: General profiling approaches int2k@gmx.net (Wolfram Fenske) (2006-11-15)

From: Chris F Clark <cfc@shell01.TheWorld.com>
Newsgroups: comp.compilers
Date: 13 Nov 2006 16:33:43 -0500
Organization: The World Public Access UNIX, Brookline, MA
References: 06-11-015 06-11-051
Keywords: performance, testing
Posted-Date: 13 Nov 2006 16:33:43 EST

"fermineutron" <free4trample@yahoo.com> writes:


> The best way I know of that C or any other code can be profiled is
> to translate the code into assembly, marking the beginning and the
> end of the assembly code corresponding to each line of code /
> statement in C, and then inserting time-tracking code into this
> generated assembly code.


Well, that's not the best way, not even close. You don't need to
measure that much if you have a good model of the hardware (see
below). You only need to know which paths through the code are taken.
Much of the analysis can be done off-line. Even if you have
hardware-level out-of-order execution that might affect your speed, it is
likely that the hardware has counters that can tell you which
instructions used which hardware features (or were ordered in which
way), so that you can determine how the code was executed even without
inserting measurements that will perturb the result.
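
On a modern Linux kernel, for example, one such counter can be read
from user space via the perf_event_open(2) syscall. Here is a minimal
sketch (the choice of event -- retired branch instructions -- is
arbitrary, and this is just an illustration, not anything from the
tools discussed below); the measured region itself carries no
instrumentation:

    /* Minimal sketch: count retired branch instructions over a region
     * with a hardware performance counter, via Linux perf_event_open(2).
     * The region itself is uninstrumented, so the measurement barely
     * perturbs it. */
    #define _GNU_SOURCE
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        /* this process, any CPU, no event group, no flags */
        int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile long x = 0;                /* the code being measured */
        for (long i = 0; i < 100000; i++)
            x += i;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        uint64_t branches;
        read(fd, &branches, sizeof(branches));
        printf("retired branches: %llu\n", (unsigned long long)branches);
        close(fd);
        return 0;
    }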


Thus, unlike physics, one isn't anywhere near an uncertainty-principle
limit when profiling code, because you can measure things without
perturbing them.


Below: When I worked on the ATOM profiling tools (at DEC), we used
compiler techniques to pre-compute when certain actions would have no
effect and thus did not need to be measured. We often reduced the
number of "probes" inserted into the code by more than one order of
magnitude (and often nearly two).


Other people have done other things to resolve the problem. For
example, (if I understood it correctly) the SHADE tool developed at
Sun modeled the whole architecture in software, allowing them to
measure any attribute they wanted. At Intel, we have a very similar
tool called VMOD, which models the hardware at the gate level. Now,
both of these tools take significant effort to get the measurements
they need--you can't get this accuracy of measurement for free. (If I
recall correctly again, it takes literally days to simulate the entire
boot process that brings up Windows.) However, you can make the
measurement as accurate as you want.
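
To make the simulation idea concrete, here is a toy sketch (far
simpler than SHADE or VMOD, and purely an illustration with made-up
latencies) of an interpreter that charges an assumed cycle cost to
each opcode it executes; because all the bookkeeping lives in the
model, the simulated program is not perturbed at all:

    /* Toy sketch of software simulation for measurement: a
     * three-opcode machine whose interpreter accrues an assumed
     * cycle cost per opcode. */
    #include <stdio.h>

    enum op { OP_LOAD, OP_ADD, OP_HALT };

    int main(void)
    {
        /* a tiny "program" for the simulated machine */
        enum op prog[] = { OP_LOAD, OP_ADD, OP_ADD, OP_LOAD, OP_HALT };

        /* assumed per-opcode latencies -- invented for this sketch */
        static const long cost[] = { 3, 1, 0 };

        long cycles[3] = { 0, 0, 0 };  /* measurements live in the model */

        for (int pc = 0; prog[pc] != OP_HALT; pc++)
            cycles[prog[pc]] += cost[prog[pc]];

        printf("LOAD: %ld cycles, ADD: %ld cycles\n",
               cycles[OP_LOAD], cycles[OP_ADD]);
        return 0;
    }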


However, even without such techniques, even simple C profiling by
inserting C statements into the source can be done without perturbing
the execution too much. My recollection is that the INSIGHT tool from
Parasoft did just that. Again, I don't believe they put start/stop
blocks around each statement, but you don't need to. When you are
profiling a program, you don't care about the relative cost of the two
statements "a = 1; b = 2;"; you care about how decision logic (if and
loop statements) and routine calls affect the execution. Therefore,
you only need measurements at the "branches" and the "calls".
Everything else you can compute off-line, if you need to compute it at
all.
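
A tiny sketch of that idea (an illustration, not INSIGHT's actual
scheme): for an if/else followed by a join, flow conservation gives
count(entry) = count(then) + count(else), so probes on just the two
branch arms recover every block count off-line:

    /* Sketch: probe only the branch arms; derive the rest off-line
     * from flow conservation. */
    #include <stdio.h>

    static unsigned long n_then, n_else;  /* the only two probes */

    static int work(int c)
    {
        if (c) { n_then++; return 1; }    /* probe on the taken arm */
        else   { n_else++; return 2; }    /* probe on the other arm */
    }

    int main(void)
    {
        for (int i = 0; i < 10; i++)
            work(i & 1);

        /* derived off-line by flow conservation, never measured */
        unsigned long n_entry = n_then + n_else;

        printf("entry=%lu then=%lu else=%lu\n",
               n_entry, n_then, n_else);
        return 0;
    }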


Hope this helps,
-Chris


*****************************************************************************
Chris Clark                  Internet :  compres@world.std.com
Compiler Resources, Inc.     Web Site :  http://world.std.com/~compres
23 Bailey Rd                 voice    :  (508) 435-5016
Berlin, MA  01503  USA       fax      :  (978) 838-0263  (24 hours)

