Advanced Course on Compilers for Parallel Computers (Eduard Mehofer)
Thu, 31 Mar 1994 14:02:01 GMT


Newsgroups: comp.compilers
From: (Eduard Mehofer)
Keywords: courses
Organization: Vienna University Computer Center, Austria
Date: Thu, 31 Mar 1994 14:02:01 GMT



Advanced Course on Languages, Compilers and Programming
Environments for Scalable Parallel Computers

July 6-7, 1994

Institute for Software Technology and Parallel Systems
University of Vienna
Bruenner Strasse 72, A-1210 Vienna, Austria


The Institute for Software Technology and Parallel Systems at the
University of Vienna will conduct an Advanced Course on Languages,
Compilers and Programming Environments for Scalable Parallel Computers
from July 6-7, 1994.

This course offers an overview of state-of-the-art developments in this
field in view of its importance for Scientific Computing, along with an
in-depth treatment of language features, compiling techniques and runtime
support required for data parallel Fortran languages such as
High Performance Fortran (HPF) and Vienna Fortran. Special emphasis will
be placed on strategies and tools that support the user in porting 'real'
application codes to parallel computers.

The main topics covered in the course include:

- an application-oriented introduction to HPF and Vienna Fortran,

- paths of future development for HPF,

- optimizing compiler technology for these languages,

- dynamic and unstructured application problems and their implementation,

- integrated, knowledge-based programming environments with advanced
    performance analysis and prediction tools.

In addition, recent research issues in languages and compilers will be
discussed, and a demonstration of the Vienna Fortran Compilation System
will be given.

The course is intended for developers of compilers, software tools and
applications for scalable parallel machines. It will focus on the
practically relevant developments in the field. The participants will be
expected to have a basic knowledge of the area.

The course consists of 10 lectures over a two-day period. The lecturers
are from the Institute for Software Technology and Parallel Systems at
the University of Vienna and from the Institute for Computer Applications
in Science and Engineering (ICASE), NASA Langley Research Center
in Hampton, Virginia.

The experiences of the lecturers in the subject area date back to the
mid 1980s and include the development of SUPERB, the first parallelization
system for Fortran targeted to distributed-memory computers (completed
in 1989) as well as the design and implementation of the Kali and Vienna
Fortran programming languages. Vienna Fortran, being the first fully
specified Fortran language extension for scalable parallel systems, was
one of the major inputs to the HPF effort, while offering a significantly
broader functionality than current HPF does. The Vienna Fortran Compilation
System is an advanced, integrated compilation environment whose features
include multi-dimensional data distributions, interprocedural optimization,
Fortran 77 work space management, the efficient handling of dynamic and
irregular problems, performance analysis and prediction tools, and graphical
support. The system is being used at ICASE and other important research
sites for the parallelization of complex application programs, and is
currently being considered for commercial exploitation.


Wednesday, July 6th 1994

o 8:00 Opening

o 8:15 Introduction
                Hans Zima

o 9:00 High Performance Fortran Languages
                Piyush Mehrotra

o 10:30 Coffee Break

o 11:00 Applications in Science and Engineering
                Barbara Chapman

o 12:30 Lunch Break

o 14:00 Research Issues in Languages for High-Performance Computing
                Piyush Mehrotra and Hans Zima

o 15:00 Coffee Break

o 15:30 Basic Compilation and Optimization Strategies
                Hans Zima

o 17:00 DEMO: Vienna Fortran Compilation System

o 19:30 Departure for Dinner

o 20:00 Dinner

Thursday, July 7th 1994

o 9:00 Compiling Irregular and Dynamic Problems
                Peter Brezany

o 10:30 Coffee Break

o 11:00 Vienna Fortran 90 and its Compilation
                Siegfried Benkner

o 12:30 Lunch Break

o 14:00 Performance Analysis and Prediction
                Thomas Fahringer

o 15:30 Coffee Break

o 16:00 The Vienna Fortran Compilation System
                Barbara Chapman

o 16:45 Research Issues in Compilers
                Barbara Chapman and Hans Zima

o 17:30 End of Course

Registration Information

All attendees must register (a registration form is appended). In
order to preserve an atmosphere conducive to interaction, attendance
will be limited.

Registration Fee

- Attendees from universities and research institutions,
    and participants in the ESPRIT projects
    PPPE and PREPARE: ATS 4 800

- Attendees from industry: ATS 9 600

The registration fee includes the lunches, coffee breaks, and the
dinner. In addition, handouts containing copies of the slides used
in the talks and recent publications of the Vienna group as well
as information on the Vienna Fortran Compilation System will be
given to each participant.

The current exchange rate of the Austrian Schilling (ATS) is about
12.20 ATS for one US $.

Further Information

For further information please contact

Maria Cherry
Institute for Software Technology and Parallel Systems
University of Vienna
Bruennerstrasse 72
A-1210 Vienna, Austria

Telephone: +43 1 392647 222
Fax: +43 1 392647 224

Abstracts of the Lectures

Hans Zima: Introduction

This talk introduces scalable parallel architectures and their
programming paradigms, and outlines the current state-of-the-art
in languages, compilers and runtime support systems for these
machines. Finally, a short overview of the course is given.

Piyush Mehrotra: High Performance Fortran Languages

High Performance Fortran is a set of extensions to Fortran 90 designed
to exploit data parallelism on a wide variety of parallel architectures.
This effort is based on earlier academic projects such as Kali and
Vienna Fortran along with commercial efforts such as CM Fortran for
the Connection Machine. In this talk, we will first describe the approach
taken by these High Performance Fortran like languages (HPFLs) -
in particular, High Performance Fortran and Vienna Fortran - wherein
the responsibility to exploit the parallelism is shared amongst the
user, the compiler and the runtime support system. We will then discuss
the strengths and weaknesses of HPF through a series of scientific
application examples.

Barbara Chapman: Applications in Science and Engineering

The commercial availability of large-scale parallel systems providing
huge computing power has opened up the way for the computational treatment
of many new problems in Science and Engineering. Physics, astronomy,
meteorology, chemistry, biology, and medicine are some of the fields in
which these machines can be successfully employed for simulating the
behavior of the real world.

In this talk we first give an overview of a number of important
application problems. We then characterize the requirements for porting
solutions to these problems to scalable parallel architectures. Major
issues in this context are dynamic load balancing, irregular data
distributions, and parallel I/O.

Piyush Mehrotra and Hans Zima: Research Issues in Language Design

We discuss a number of language issues that have not been adequately
addressed in current HPFLs (in particular in the current version of
High Performance Fortran, HPF-1) and point out the likely directions
for future developments. We will focus on irregular problems, task
parallelism and its integration with data parallelism, and parallel I/O.

Hans Zima: Basic Compilation and Optimization Strategies

During the last decade, a standard technique for compiling HPFLs for
scalable parallel machines has evolved, and several systems have been
implemented based upon this approach. The talk describes these techniques,
with a special emphasis on strategies for the optimization of
communication and the underlying intra- and interprocedural analysis.

The translation can be considered as consisting of four phases:
The Front End performs an initial analysis of the program, transforms
the source code into an internal representation, and normalizes the code.
Splitting partitions the source program into an I/O part and a
computational part. The third phase, Initial Parallelization,
generates a parallel program by processing the data distributions,
enforcing the owner computes paradigm, and inserting the required
communication. Finally, Optimization and Target Code Generation
optimizes communication and work load based on results from a range of
interprocedural analysis algorithms including distribution propagation
analysis. Messages are hoisted out of loops and combined, loops
are strip-mined across the processors, and procedures are
cloned if required by the distribution patterns of their arguments.
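
The owner computes paradigm applied during Initial Parallelization can
be illustrated with a small simulation. The Python sketch below is
purely illustrative (the helper block_owner and the sequential
simulation of the processors are assumptions for the example, not
SUPERB/VFCS code): each simulated processor executes only those
assignments whose left-hand-side element it owns under a BLOCK
distribution.

```python
def block_owner(i, n, p):
    """Owner of global index i for an n-element array BLOCK-distributed
    over p processors (block size = ceil(n / p))."""
    block = -(-n // p)          # ceiling division
    return i // block

def owner_computes(a, b, n, p):
    """Simulate parallel execution of  a(i) = b(i) + 1  on p processors:
    processor `me` executes only the iterations whose LHS it owns."""
    for me in range(p):
        for i in range(n):
            if block_owner(i, n, p) == me:   # masking by ownership
                a[i] = b[i] + 1
    return a

a = owner_computes([0] * 8, list(range(8)), n=8, p=4)
# a == [1, 2, 3, 4, 5, 6, 7, 8]
```

In an actual compiler such explicit ownership guards are typically
refined into per-processor loop bounds (the strip-mining mentioned
above), and non-local reads on the right-hand side trigger the
inserted communication.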

Peter Brezany: Compiling Irregular and Dynamic Problems

In irregular codes, such as those for solving PDEs on unstructured
meshes and sparse matrix algorithms, the access and communication
patterns depend on the values of runtime computations and input data.
In such a situation, compile-time analysis of dependence and
communication structures is not feasible. Furthermore, runtime methods
for handling these codes may perform very poorly if the data is not
distributed to the processors in a way that reflects the structure of
the problem. In particular, regular distribution schemes, such as block
and cyclic distributions in the current version of HPF, are not adequate
to deal with such applications. Language features that provide more
general distribution facilities, as well as advanced compilation technology
and runtime support are needed.

This talk is based to a significant extent on joint work with
the University of Maryland and the properties of the PARTI and CHAOS
tools. We begin by describing the language and compiler interface to a
partitioner. Such a partitioner, which is called at runtime
with a representation of the communication pattern in a loop,
attempts to distribute arrays in such a way that the load is
balanced and communication is minimized.

Then message-passing code generation techniques for irregular
data-parallel loops and array statements with vector subscripted
accesses will be introduced. Specific program analysis methods will
be presented that make it possible to determine when it is safe to
reuse the results of runtime analysis, and to find the right place
for communication insertion. Techniques for dynamic data
redistribution will also be presented.
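
The runtime analysis whose results are reused is commonly organized as
an inspector/executor pair. The following Python sketch is a toy
simulation under that assumption (the function names and schedule
layout are invented for illustration and are not the PARTI/CHAOS
interface): the inspector examines a vector subscript once at runtime
to classify accesses as local or non-local, and the resulting schedule
can be reused as long as the subscript and distribution are unchanged.

```python
def block_owner(i, n, p):
    """Owner of global index i under a BLOCK distribution."""
    block = -(-n // p)
    return i // block

def inspector(subscript, n, p, me):
    """Build a communication schedule: for each access x(idx(i)),
    record the referenced index, its owner, and whether it is local."""
    schedule = []
    for i in subscript:
        owner = block_owner(i, n, p)
        schedule.append((i, owner, owner == me))
    return schedule

def executor(x, schedule, me):
    """Gather the referenced values; non-local ones would be fetched
    via message passing, here simulated by direct reads."""
    gathered = []
    for i, owner, local in schedule:
        # in a real system: local read vs. receive from `owner`
        gathered.append(x[i])
    return gathered

x = [10 * v for v in range(8)]
idx = [7, 0, 3, 3]                      # runtime-dependent subscript
sched = inspector(idx, n=8, p=4, me=0)  # run the analysis once ...
vals = executor(x, sched, me=0)         # ... reuse it for many sweeps
# vals == [70, 0, 30, 30]
```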

Siegfried Benkner: Vienna Fortran 90 and its Compilation

In this lecture we focus on language features as well as compilation
techniques most relevant in the context of Fortran 90. The first part
gives an overview of Vienna Fortran 90 and discusses in more detail
advanced language features like distribution of derived type data
structures, distribution of pointer objects, and issues related to
procedure interfaces and the transfer of distributed arguments.

The second part describes the transformation of Vienna Fortran 90 programs
into Fortran 90 message passing programs which subsequently are mapped
to Fortran 77 message passing programs. We first consider the basic
compilation techniques and runtime support issues including data
descriptors, communication descriptors, management of non-local data,
and address translation. The implications of block-cyclic distributions
and alignment with respect to memory management and index conversion are
discussed. Finally both compile-time and runtime optimization techniques
for the parallelization of array assignment statements and procedure
interfaces are presented.
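
The index conversion for block-cyclic distributions mentioned above can
be sketched with the standard mapping formulas. The Python fragment
below is a hypothetical illustration (the parameter names are ours, and
the actual VFCS data descriptors are more elaborate): a global index is
mapped to an owning processor and a local index, and back.

```python
def global_to_local(g, b, p):
    """Map global index g to (owning processor, local index) under a
    block-cyclic distribution with block size b over p processors."""
    owner = (g // b) % p                  # which processor holds block g // b
    local = (g // (b * p)) * b + g % b    # full cycles so far + intra-block offset
    return owner, local

def local_to_global(owner, l, b, p):
    """Inverse mapping: recover the global index from (owner, local)."""
    return (l // b) * b * p + owner * b + l % b

# Example: block size 2 over 3 processors; the mapping round-trips.
for g in range(12):
    o, l = global_to_local(g, b=2, p=3)
    assert local_to_global(o, l, b=2, p=3) == g
```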

Thomas Fahringer: Performance Analysis and Prediction

Two different performance tools will be described: first, the Weight
Finder, an advanced profiler for sequential Fortran programs, and
second, the PPPT, a performance estimator for parallel Fortran
programs.

Existing Fortran codes are generally too large to analyze fully in
depth with respect to performance tuning. It is the responsibility of
the Weight Finder to detect the most important regions of code
in the program, as far as execution time is concerned. Program
transformation systems, compilers and users may then subsequently
concentrate their optimization efforts upon these areas in code.
Furthermore, program unknowns, such as loop iteration counts,
branching probabilities and statement execution counts, are derived.
Performance prediction systems require concrete values for
these unknowns in order to provide reasonably accurate estimates.

The PPPT, a Parameter based Performance Prediction Tool, supports the
Vienna Fortran Compilation System in parallelizing and optimizing
Fortran programs for scalable parallel computers. The PPPT automatically
computes at compile time a set of parallel program parameters which
predict the outcome of three of the most crucial performance aspects of
parallel programs: work distribution, communication overhead, and data
locality. After analyzing the strengths and limitations of the
performance estimator, experiments are shown that demonstrate the ability
of the PPPT to successfully guide both the programmer and the compiler
in the search for efficient data distribution strategies and program
transformations.
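
The kind of parameters the PPPT derives can be illustrated on a
one-dimensional example. The Python sketch below is an invented, much
simplified stand-in (the real PPPT parameters and their computation are
more elaborate): for the statement  a(i) = b(i-1)  over a
BLOCK-distributed array, it counts per processor the iterations
executed (work distribution) and the non-local reads (communication
overhead and, inversely, data locality).

```python
def block_owner(i, n, p):
    """Owner of global index i under a BLOCK distribution."""
    block = -(-n // p)
    return i // block

def estimate(n, p):
    """Per-processor work and non-local-read counts for a(i) = b(i-1)."""
    work = [0] * p
    nonlocal_reads = [0] * p
    for i in range(1, n):               # i = 1 .. n-1
        me = block_owner(i, n, p)       # owner computes: me executes i
        work[me] += 1
        if block_owner(i - 1, n, p) != me:
            nonlocal_reads[me] += 1     # b(i-1) lives on another processor
    return work, nonlocal_reads

work, comm = estimate(n=8, p=4)
# work == [1, 2, 2, 2]   (processor 0 skips i = 0)
# comm == [0, 1, 1, 1]   (one boundary element per interior block)
```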

Barbara Chapman: The Vienna Fortran Compilation System

The Vienna Fortran Compilation System is an advanced integrated
parallelization system including intra- and interprocedural analysis
services, a catalog of program transformations, performance analysis and
prediction tools, and graphical support.

In this talk, we first discuss the interactive user interface of
the system and the range of services it provides for the analysis
and transformation of sequential and parallel programs. We then
outline its internal structure and discuss the major design
decisions underlying its development.

Barbara Chapman and Hans Zima: Research Issues in Compilers

There are still many challenges facing the builders of compilation
systems for scalable parallel systems. Much improvement is needed with
regard to functionality, the efficiency of the generated code,
and the quality of the interaction with the user.

One research topic that will be addressed in this talk is the development
of support for the user in the crucial task of selecting a data
distribution. The techniques employed depend on the kind of program
under consideration and require extensive support from performance
analysis and prediction tools.

A related issue regards automatic translation. Evolving systems will
be expected to perform many tasks which require a global program
transformation strategy. There are different, and sometimes conflicting,
goals to be attained (e.g., load balancing vs. minimization of
communication), and non-trivial trade-offs to be considered. The results
of analysis, combined with a set of heuristics, may be used to select
predetermined transformation strategies.

Finally, we discuss methods of organizing an advanced programming
environment as a collection of knowledge-based subsystems with different
levels of expertise. The advantages of such an approach include the
possibility of rapid prototyping, relatively easy modification of the
programming environment if the underlying system changes, and the
availability of an explanation facility.

------------------------------ cut here ---------------------------------

To register for the course, complete the form below and return it as
soon as possible to the course secretariat at:

Institute for Software Technology and Parallel Systems
University of Vienna
Attn. Advanced Course
Bruenner Strasse 72
A-1210 Vienna
Tel.: +43 1 39 26 47 / 222 Fax: +43 1 39 26 47 / 224

                                ADVANCED COURSE ON LANGUAGES, COMPILERS,
                                      AND PROGRAMMING ENVIRONMENTS FOR
                                            SCALABLE PARALLEL COMPUTERS

                                                Vienna, July 6 - 7, 1994





City, State:



I need hotel information:

Special dietary requirements:

Registration fee (indicate fee applicable):

o Attendees from universities and
    research institutions, and participants
    in the ESPRIT projects PPPE and PREPARE: ATS 4.800,-

o Attendees from industry: ATS 9.600,-

Form of payment (indicate form applicable):

o Check in Austrian currency enclosed/mailed

o Bank transfer in Austrian currency

Note that payment must be in Schillings and that we are unable to
accept credit cards. All transfers must be made to the course account
no. 0522-00524/00 at the Creditanstalt Bankverein, Vienna. Make checks
payable to "Advanced Course Vienna". All money transfers must be free
of charge for the recipient.

Your registration will be confirmed when the payment is received.
