Stanford Short Course on Compilers for High-Performance Processors

barnhill@Hudson.Stanford.EDU (Joleen Barnhill)
Thu, 29 Jun 1995 21:11:06 GMT


Newsgroups: comp.compilers,comp.realtime
Keywords: courses, realtime
Organization: Stanford University

The Western Institute of Computer Science announces a week-long course on:


COMPILERS FOR HIGH-PERFORMANCE UNIPROCESSORS AND MULTIPROCESSORS
August 21-25, 1995


Instructors: Drs. Monica S. Lam, Michael D. Smith, Mary W. Hall
Location: Stanford University
Cost: $1,440 (early) / $1,575 (late)
Contact: barnhill@hudson.stanford.edu




INTRODUCTION


Overview of hardware issues discussed in the course
* Caches
* Superscalar and VLIW processors
* Symmetric multiprocessors
* Non-uniform memory access (NUMA) machines
* Multicomputers (message passing machines)


Important issues in compilers for parallel machines
* Instruction scheduling
* Interprocedural analysis on arrays and pointers
* Data locality optimizations


SUPERSCALAR HARDWARE AND SOFTWARE


Introduction
* Instruction level parallelism and its potential
* Architectures for multiple instruction execution
* Constraints on exploitation of instruction level parallelism
Hardware Techniques
* Decoupling instruction fetch, execution and completion
* Register renaming
* Branch prediction
* Speculative execution
* Review of current microarchitectures
Compiler Techniques
* Background and basic list scheduling
* Trace-based techniques for global scheduling
* DAG-based techniques for global scheduling
* Software pipelining
* Interaction between scheduling and register allocation
* Techniques to improve branch prediction accuracy
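To give a flavor of the basic list-scheduling material, here is a small sketch (not course code; the opcode names, latencies, and dependence edges are all made up) of a critical-path list scheduler for a single-issue machine: each cycle it issues the ready instruction with the longest remaining latency path through the dependence DAG.

```python
from collections import defaultdict

def list_schedule(latency, succs):
    """latency: op -> cycles; succs: op -> list of dependent ops."""
    # Build predecessor sets from the successor lists.
    preds = defaultdict(set)
    for u, vs in succs.items():
        for v in vs:
            preds[v].add(u)
    # Priority = length of the longest latency path to a leaf
    # (the classic critical-path heuristic).
    prio = {}
    def height(u):
        if u not in prio:
            prio[u] = latency[u] + max((height(v) for v in succs[u]), default=0)
        return prio[u]
    for u in latency:
        height(u)
    finish = {}      # cycle at which each issued op's result is available
    schedule = []    # (issue_cycle, op) pairs; single-issue machine assumed
    cycle = 0
    unscheduled = list(latency)   # deterministic order for tie-breaking
    while unscheduled:
        # Ready = all predecessors issued and their results available.
        ready = [u for u in unscheduled
                 if all(finish.get(p, cycle + 1) <= cycle for p in preds[u])]
        if ready:
            u = max(ready, key=lambda x: prio[x])
            schedule.append((cycle, u))
            finish[u] = cycle + latency[u]
            unscheduled.remove(u)
        cycle += 1
    return schedule

# Two 2-cycle loads feeding an add that feeds a store:
lat  = {'ld1': 2, 'ld2': 2, 'add': 1, 'st': 1}
deps = {'ld1': ['add'], 'ld2': ['add'], 'add': ['st'], 'st': []}
print(list_schedule(lat, deps))
# -> [(0, 'ld1'), (1, 'ld2'), (3, 'add'), (4, 'st')]
```

Note the stall at cycle 2: the add must wait for the second load's latency, which is exactly the kind of bubble global scheduling and software pipelining try to fill.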


BASIC LOOP PARALLELIZATION FOR MULTIPROCESSORS


Analysis Techniques
* Data dependence analysis
* Induction variable recognition
* Constant propagation
* Reduction analysis
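As a taste of data dependence analysis, the classic GCD test checks whether two affine array references can ever touch the same element (this is an illustrative sketch, not course material; it ignores loop bounds and assumes nonzero coefficients).

```python
from math import gcd

def gcd_test(a, b, c, d):
    """May A[a*i + b] and A[c*j + d] refer to the same element for some
    integers i, j?  The equation a*i + b = c*j + d has an integer
    solution iff gcd(a, c) divides d - b.  False proves independence;
    True only means a dependence cannot be ruled out (loop bounds are
    ignored).  Coefficients a, c are assumed nonzero."""
    return (d - b) % gcd(a, c) == 0

print(gcd_test(2, 0, 2, 1))   # -> False: A[2*i] vs A[2*j+1] never overlap
print(gcd_test(1, 0, 1, -3))  # -> True:  A[i] vs A[j-3] may overlap
```

Proving independence (the False case) is what lets a parallelizer run the loop's iterations concurrently.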
Performance of Existing Parallelizing Compilers


INTERPROCEDURAL PARALLELIZATION ANALYSIS


Introduction to interprocedural analysis
* Flow-insensitive vs. flow-sensitive analysis
* Interval analysis
Interprocedural scalar data flow analysis
* Induction variable recognition
* Constant propagation
* Privatization
Interprocedural array analysis
* Data dependence analysis based on summaries
* Array privatization
* Generalized reduction
* Array reshapes at procedure boundaries
Interprocedural pointer analysis for C
Experimental results
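To illustrate the interprocedural constant propagation idea in miniature (a hypothetical sketch, not the course's framework: a formal parameter is treated as constant only when every call site passes the same constant):

```python
TOP, BOTTOM = object(), object()   # lattice: unseen-yet / not-a-constant

def formal_constants(call_sites):
    """call_sites: (callee_name, {formal_name: actual}) pairs, where an
    actual is an int constant or None for a non-constant value.
    Returns, per callee, the formals proven constant across all calls."""
    lattice = {}
    for callee, bindings in call_sites:
        vals = lattice.setdefault(callee, {})
        for formal, actual in bindings.items():
            old = vals.get(formal, TOP)
            if actual is None or (old is not TOP and old != actual):
                vals[formal] = BOTTOM          # conflicting or unknown value
            elif old is TOP:
                vals[formal] = actual          # first constant seen
    return {callee: {f: v for f, v in vals.items()
                     if v is not TOP and v is not BOTTOM}
            for callee, vals in lattice.items()}

calls = [('f', {'n': 4}), ('f', {'n': 4}),    # both sites pass n = 4
         ('g', {'m': 1}), ('g', {'m': 2})]    # conflicting values for m
print(formal_constants(calls))
# -> {'f': {'n': 4}, 'g': {}}
```

Real systems such as FIAT additionally propagate through chains of calls and returned values; the one-level meet shown here is only the core lattice operation.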


LOOP & DATA OPTIMIZATIONS


Basic principles
* Unified transformations for arrays and loops
* Unimodular transformations (interchange, skew, reversal)
* Blocking
Optimization on uniprocessor caches
* Code transformation to improve locality
* Generating software prefetch instructions to hide latency
Shared address space machines
* Integrated code and data transformations to minimize synchronization and communication
Distributed address space machines
* Communication optimization
* Automatic data and computation decomposition
* Code generation from High-Performance Fortran (HPF) or from
      automatically generated decompositions.
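Blocking, one of the loop transformations above, only reorders the iterations of a loop nest so that each cache-sized tile is finished before moving on. A tiny sketch (illustrative only; the tile size B is made up) showing the blocked iteration order of a 2-D nest:

```python
def blocked_order(n, B):
    """Iteration order of a doubly nested n-by-n loop after blocking
    with tile size B: the two outer loops step over tiles, the two
    inner loops walk the iterations inside one tile."""
    order = []
    for ii in range(0, n, B):
        for jj in range(0, n, B):
            for i in range(ii, min(ii + B, n)):
                for j in range(jj, min(jj + B, n)):
                    order.append((i, j))
    return order

print(blocked_order(3, 2)[:4])
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]   (the first 2x2 tile, done first)
```

Legality is exactly the dependence question from earlier in the course: the transformation is valid only when no data dependence is reversed by the new order, and every original iteration is still executed exactly once.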


Instructors:


DR. MARY HALL is a Visiting Professor of Computer Science at
California Institute of Technology. She graduated magna cum laude
from Rice University in 1985, with a B.A. in Computer Science and
Mathematical Sciences. She received the M.S. and Ph.D. in Computer
Science from Rice University. Prior to joining Caltech, she was a
Research Scientist at both Stanford University and the Center for
Research on Parallel Computation at Rice University. Dr. Hall's
research focuses on developing techniques for interprocedural
optimizations that are nearly as effective as optimizing the entire
program as a single procedure, while simultaneously managing the
complexity of the compilation system. She has developed an
interprocedural framework called FIAT, which facilitates rapid
prototyping of interprocedural analysis systems. This tool has been
used to drive interprocedural optimization in the ParaScope and D
System Tools at Rice University, and as part of an interprocedural
automatic parallelization system in the SUIF compiler at Stanford
University.


Dr. MONICA S. LAM has been an Assistant Professor in the Computer
Science Department at Stanford University since 1988. She received
her B.S. from the University of British Columbia in 1980, and a Ph.D. in
Computer Science from Carnegie Mellon University in 1987. She
received an NSF National Young Investigator Award in 1992, and is an
editor for the ACM Transactions on Computer Systems. Prof. Lam is
currently leading the SUIF compiler project, whose objective is to
develop and experiment with compiler technology for parallel
machines and to investigate hardware and software tradeoffs in
architectural designs. She has developed a number of techniques
currently used in commercial compilers, including a software
pipelining algorithm for scheduling superscalar and VLIW processors,
and locality optimization techniques for uniprocessors.


DR. MICHAEL D. SMITH is an Assistant Professor of Electrical
Engineering and Computer Science in the Division of Applied Sciences
at Harvard University. He received a B.S. degree in Electrical
Engineering and Computer Science from Princeton University in 1983,
an M.S. degree in Electrical Engineering from Worcester Polytechnic
Institute in 1985, and a Ph.D. in Electrical Engineering from
Stanford University in 1993. His research focuses on the
experimental realization of innovative compilation techniques and
novel computer architectures to improve the capability and
performance of computer systems. Recently, he has worked on
sophisticated instruction scheduling algorithms that exploit
compiler-directed speculative execution, static branch prediction
schemes that exploit branch correlation, and microarchitectures that
contain hardware-programmable functional units. His research team is
also building tools and environments to better understand the
performance bottlenecks in operating system and I/O-intensive
applications. Before pursuing an academic career, Dr. Smith spent a
number of years working for Honeywell Information Systems, where he
designed several CPU boards and worked on a VLSI chip set for their
minicomputer product line. He is the recipient of a 1994 NSF Young
Investigator Award.
_______________________________________________________________
Registration Form
COMPILERS FOR HIGH-PERFORMANCE UNIPROCESSORS AND MULTIPROCESSORS
August 21-25, 1995


Registration on or before August 7
[ ] COMPILERS FOR HIGH-PERFORMANCE UNIPROCESSORS AND MULTIPROCESSORS $1,440
Registration after August 7
[ ] COMPILERS FOR HIGH-PERFORMANCE UNIPROCESSORS AND MULTIPROCESSORS $1,575


Name____________________________________


Title___________________________________


Company_________________________________


Address_________________________________


________________________________________


City/State______________________________


Zip___________________


Country_________________


Work Phone (________)___________________


Home Phone (________)___________________


Electronic Mail address __________________________


  on network _____________________




Total amount enclosed: $___________


Method of payment
[ ] Check enclosed (payable to WICS)


[ ] Visa/Mastercard #________________________________
card exp. date__________


cardholder signature___________________________________________________


[ ] Bill my company. Purchase Order #__________________________
                Write billing address below.


Return registration form with payment to:
Western Institute of Computer Science
P.O. Box 1238
Magalia, CA 95954-1238

