Stanford Short Course on Code Optimization
From: barnhill@Hudson.Stanford.EDU (Joleen Barnhill)
Date: Thu, 29 Jun 1995 20:52:18 GMT
The Western Institute of Computer Science announces a week-long course on:
Code Optimization in Modern Compilers
Dates: July 31-Aug. 4, 1995
Instructors: Krishna V. Palem and Vivek Sarkar
Location: Stanford University
Cost: $1,440 (early) / $1,575 (late)
The primary goal of this course is to provide an in-depth study of
state-of-the-art code optimization techniques used in compilers for
modern processors. The performance gap between optimized and
unoptimized code continues to widen as modern processors evolve.
This evolution includes hardware features such as superscalar and
pipelined functional units for exploiting instruction-level
parallelism, and sophisticated memory hierarchies for exploiting data
locality. These hardware features are aimed at yielding high
performance, and are critically dependent on the code being optimized
appropriately to take advantage of them. Rather than burdening the
programmer with this additional complexity during code development,
modern compilers offer automatic support for code optimization.
The course is self-contained and begins with a detailed view of the
design of optimizing compilers. The rationale guiding the important
decisions underlying the design will be discussed. A typical
optimizing compiler contains a sequence of restructuring and code
optimization techniques, which will be covered during the classroom
lectures; issues related to interactions among individual
optimizations will also be discussed. Examples of optimizations and
accompanying performance improvements will be highlighted via
on-line demonstrations and a hands-on laboratory session.
The course will also discuss the impact of the source language and
the target architecture on the effectiveness of optimizations. The
optimizations covered are most relevant for RISC processors, such as
the IBM RS/6000, PowerPC, DEC Alpha, Sun Sparc, HP PA-RISC, and MIPS,
and third-generation programming languages such as Fortran and C.
Extensions for optimizing object-oriented languages such as C++ will
also be mentioned.
The course is relevant to systems programmers and analysts,
scientific programmers, computer designers, technical managers,
mathematics and computer science teachers, or anyone facing the
prospect of building, modifying, maintaining, writing or teaching
about compilers. At the conclusion of the course, students
should be knowledgeable about optimization techniques used in modern
compilers, from the viewpoint of the compiler user as well as of the
compiler writer.
Text: Compilers: Principles, Techniques, and Tools, Aho et al.,
and lecture notes.
1. Structure of optimizing compilers: front-end, intermediate
languages, optimization phases, code generation, linker, runtime
2. Internal program representations: dictionary, abstract syntax
tree, quadruples, expression trees, linearized expressions
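As a small illustration of the representations named in item 2, a lowering from an expression tree to quadruples (three-address code) might look like the following. This is only a sketch; the function name `lower` and the tuple layout are invented here, and the course itself targets Fortran/C compilers rather than Python.

```python
# Toy lowering of an expression tree into quadruples
# (op, arg1, arg2, result); names and layout are illustrative.
import itertools

def lower(node, quads, temps):
    """Return the name holding node's value, appending quadruples to quads."""
    if isinstance(node, str):          # leaf: a variable name
        return node
    op, left, right = node             # interior node: (op, lhs, rhs)
    a = lower(left, quads, temps)
    b = lower(right, quads, temps)
    t = "t%d" % next(temps)            # fresh temporary for the result
    quads.append((op, a, b, t))
    return t

quads = []
result = lower(("+", ("*", "a", "b"), "c"), quads, itertools.count())  # a*b + c
for q in quads:
    print(q)   # ('*', 'a', 'b', 't0') then ('+', 't0', 'c', 't1')
```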
3. Cost models for code optimization: execution frequencies and
profiling, completion time of schedules, initiation interval of
pipelined loops, spill costs and register pressure, amortized memory
4. Control flow graphs: structured vs. unstructured, acyclic vs.
cyclic, reducible vs. irreducible, dominators, postdominators.
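The dominator relation of item 4 can be computed with the classic iterative set equations, dom(n) = {n} union the intersection of dom(p) over all predecessors p. The sketch below (illustrative only, not course material) applies them to a diamond-shaped control flow graph:

```python
# Iterative dominator computation on a small CFG given as successor lists.
def dominators(succ, entry):
    nodes = set(succ)
    preds = {n: set() for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}   # start from "everything dominates"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            if not preds[n]:
                continue                   # unreachable node
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# Diamond CFG: A -> B, A -> C, B -> D, C -> D
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
doms = dominators(cfg, "A")
# The join point D is dominated only by the entry A (and itself).
```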
5. Global data flow analysis: problem formulation, data flow
equations, forward vs. backward data flow analysis problems, static
single assignment form, constant propagation, value numbering.
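To make item 5 concrete, here is a toy local value-numbering pass for a single basic block. It is a hedged sketch with invented names (`value_number`, the tuple layout), not course material, and it ignores refinements such as commutativity:

```python
# Local value numbering: each distinct computed value gets a number;
# a recomputation of an already-numbered expression is flagged redundant.
def value_number(block):
    """block: list of (dest, op, arg1, arg2) quadruples.
    Returns the destinations whose right-hand side is redundant."""
    numbers = {}               # variable name -> value number
    table = {}                 # (op, vn1, vn2) -> value number
    fresh = iter(range(10**6))
    redundant = []
    for dest, op, a, b in block:
        key = (op, numbers.setdefault(a, next(fresh)),
                   numbers.setdefault(b, next(fresh)))
        if key in table:
            redundant.append(dest)         # same value as an earlier temp
            numbers[dest] = table[key]
        else:
            table[key] = numbers[dest] = next(fresh)
    return redundant

block = [("t1", "+", "a", "b"),
         ("t2", "+", "a", "b"),    # recomputes t1's value
         ("t3", "+", "t1", "c")]
print(value_number(block))         # ['t2']
```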
6. Control and data dependence analysis: program dependence graphs,
computing control dependence, data dependence tests, direction
vectors, distance vectors.
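The distance vectors of item 6 can be checked by brute force on a single loop, which is a useful sanity check against the symbolic dependence tests the course covers. For `for i: A[i+2] = A[i] + 1`, iteration i reads the element written two iterations earlier, giving a loop-carried flow dependence of distance 2. A sketch (invented helper name):

```python
# Brute-force flow-dependence distances for a single loop over array A,
# where iteration i writes A[write_idx(i)] and reads A[read_idx(i)].
def carried_distances(n, write_idx, read_idx):
    writes = {write_idx(i): i for i in range(n)}   # element -> writing iteration
    dists = set()
    for i in range(n):
        j = read_idx(i)                            # element read in iteration i
        if j in writes and writes[j] < i:          # read after an earlier write
            dists.add(i - writes[j])
    return dists

# A[i+2] = A[i]: flow dependence carried with distance 2
print(carried_distances(10, lambda i: i + 2, lambda i: i))   # {2}
```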
7. Loop transformations: loop distribution, fusion, interchange.
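Loop interchange, one of the transformations in item 7, can be sketched as follows (illustrative Python standing in for the Fortran/C loop nests the course targets). When no dependence forbids it, swapping the loops preserves the result, and for a row-major array it lets the inner loop walk memory contiguously:

```python
# Loop interchange: same sum, different traversal order.
def column_major_sum(a, rows, cols):      # inner loop strides down a column
    s = 0
    for j in range(cols):
        for i in range(rows):
            s += a[i][j]
    return s

def interchanged_sum(a, rows, cols):      # inner loop now walks a row
    s = 0
    for i in range(rows):
        for j in range(cols):
            s += a[i][j]
    return s

a = [[i * 10 + j for j in range(4)] for i in range(3)]
print(column_major_sum(a, 3, 4) == interchanged_sum(a, 3, 4))   # True
```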
8. Instruction scheduling for pipelined and superscalar processors:
basic blocks and list scheduling, priority and rank functions, global
scheduling, speculative scheduling, software pipelining
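A minimal list scheduler in the spirit of item 8 is sketched below (names invented, single-issue machine model, no course material assumed). Ready instructions are issued greedily, highest rank first, where an instruction's rank is the length of its longest latency path to the end of the block:

```python
# Greedy list scheduling of a basic-block DAG on a single-issue machine.
def list_schedule(deps, latency):
    """deps: instr -> list of instrs it depends on.
    Returns instr -> completion cycle."""
    succs = {i: [] for i in deps}
    for i, ps in deps.items():
        for p in ps:
            succs[p].append(i)
    rank = {}
    def rnk(i):                     # longest latency path to a leaf
        if i not in rank:
            rank[i] = latency[i] + max((rnk(s) for s in succs[i]), default=0)
        return rank[i]
    done_at, time, pending = {}, 0, set(deps)
    while pending:
        ready = [i for i in pending
                 if all(p in done_at and done_at[p] <= time for p in deps[i])]
        if ready:
            i = max(ready, key=rnk)         # highest-rank ready instruction
            done_at[i] = time + latency[i]
            pending.remove(i)
        time += 1                           # one issue slot per cycle
    return done_at

# load feeds add; mul is independent and fills the load's shadow
deps = {"load": [], "mul": [], "add": ["load"]}
lat = {"load": 3, "mul": 1, "add": 1}
sched = list_schedule(deps, lat)
# The load issues first (rank 4), mul hides in its latency, add completes at cycle 4.
```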
9. Register allocation: live ranges, interference graphs, graph
coloring, local and global allocation, register spills.
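Item 9's graph-coloring approach can be sketched Chaitin-style: repeatedly remove a node of degree < k from the interference graph, then pop nodes back and give each a color (register) unused by its already-colored neighbors. This is a hedged toy (invented names, no spill handling):

```python
# Chaitin-style simplify/select coloring of an interference graph.
def color(graph, k):
    g = {v: set(ns) for v, ns in graph.items()}
    stack = []
    while g:
        v = next((v for v in g if len(g[v]) < k), None)
        if v is None:
            raise RuntimeError("would need to spill")
        stack.append((v, g.pop(v)))        # record neighbors still in graph
        for ns in g.values():
            ns.discard(v)
    colors = {}
    for v, ns in reversed(stack):          # select in reverse pop order
        colors[v] = min(set(range(k)) - {colors[n] for n in ns if n in colors})
    return colors

# a, b, c mutually interfere: a triangle needs 3 registers
ig = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
regs = color(ig, 3)
# regs assigns three distinct registers, one per live range
```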
10. Interprocedural analysis and optimization: interprocedural data
flow analysis, constant propagation, alias analysis, inlining.
11. Symbolic debugging of optimized code: optimization levels vs.
debug levels, breakpoints and safepoints, debug tables.
12. Overview of optimizing compiler systems from industry.
For Whom/Prerequisites: Compiler writers, commercial and scientific
programmers, systems programmers and analysts, computer designers,
technical managers, mathematics and computer science teachers, or
anyone facing the prospect of building, modifying, maintaining,
writing, or teaching about compilers. Introductory courses in
Algorithms and Data Structures, Compiler Construction, Computer
Architecture, or equivalent, would be helpful.
DR. KRISHNA V. PALEM has been on the faculty of the Courant Institute
of Mathematical Sciences, NYU, since September 1994. Prior to this,
he was a research staff member at the IBM T. J. Watson Research
Center from 1986, and an advanced technology consultant on compiler
optimizations at the IBM Santa Teresa Laboratory from 1993. He is
an expert in the area of compilation and optimization for superscalar
and parallel machines. He has been invited to teach short courses
and to lecture internationally on these subjects. His technical
contributions have appeared in several journals and books. At IBM,
he has worked on developing a quantitative framework for
characterizing optimizations in product-quality compilers for
superscalar RISC machines.
DR. VIVEK SARKAR is a Senior Technical Staff Member at IBM Santa
Teresa Laboratory, and is also manager of the Application Development
Technology Institute (ADTI). He joined IBM in 1987, after obtaining
a Ph.D. from Stanford. His research interests are in the areas of
program optimizations, loop transformations, partitioning,
scheduling, multiprocessor parallelism, cache locality, instruction
parallelism, and register allocation. He is the author of several
papers in the areas of program optimization and compiling for
parallelism, as well as the book titled Partitioning and Scheduling
Parallel Programs for Multiprocessor Execution. At IBM, he worked on
the PTRAN research project from 1987 to 1990. From 1991 to 1993, he
led a product development effort to build a program transformation
system for optimizing locality and parallelism in uniprocessor and
multiprocessor systems.
Registration Form: Code Optimization in Modern Compilers, July 31-Aug. 4, 1995
Registration on or before July 17
[ ] Code Optimization in Modern Compilers $1,440
Registration after July 17
[ ] Code Optimization in Modern Compilers $1,575
Work Phone (________)___________________
Home Phone (________)___________________
Electronic Mail address __________________________
on network _____________________
Total amount enclosed: $___________
Method of payment
[ ] Check enclosed (payable to WICS)
[ ] Visa/Mastercard #________________________________
card exp. date__________
[ ] Bill my company. Purchase Order #__________________________
Write billing address below.
Return registration form with payment to:
Western Institute of Computer Science
P.O. Box 1238
Magalia, CA 95954-1238