Articles

Debugging
The purpose of this guide is to introduce some debugging tools and techniques that are generally applicable. By far the most popular debugging tool is the print statement. The idea is to produce a running commentary of what your program is doing: the point at which output stops shows where the program fails, and the values of the printed variables suggest why. Another important ally is the compiler; changing compilation options can often reveal bugs. More sophisticated (but sometimes no more effective) tools are also available, such as the GNU Debugger (GDB) and proprietary tools such as TotalView (which is available on HECToR). These tools allow programmers to stop their programs at specified points, inspect variables, and step through code line by line.
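As a minimal sketch of the print-statement approach (the routine update is hypothetical, standing in for real work), note the explicit flush: buffered output can be lost when a program crashes, which would make the commentary stop before the true point of failure.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical numerical routine, used only for illustration. */
    static double update(double x) { return 0.5 * x + 1.0; }

    int main(void) {
        double x = 100.0;
        for (int i = 0; i < 10; i++) {
            /* Commentary output: flush immediately so nothing sits in
               stdio buffers if the program crashes mid-loop. */
            fprintf(stderr, "DEBUG: iteration %d, x = %f\n", i, x);
            fflush(stderr);
            x = update(x);
        }
        return EXIT_SUCCESS;
    }

Under GDB the same inspection is interactive: compile with -g, then use break to stop at a chosen line, next to step, and print x to inspect a variable, with no need to recompile between experiments.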
Decomposing the Potentially Parallel
This course provides an introduction to the issues involved in decomposing problems onto parallel machines, and to the types of architectures and programming styles commonly found in parallel computers. The list of topics discussed includes types of decomposition, task farming, regular domain decomposition, unbalanced grids, and parallel molecular dynamics.
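As a minimal sketch of the regular domain decomposition idea (a standard technique, not code from the course), a one-dimensional domain of N points can be split into near-equal contiguous blocks, one per process:

    #include <stdio.h>

    /* Compute the contiguous block [start, end) owned by `rank` when
       N points are split as evenly as possible over `nprocs`; the
       first N % nprocs ranks each take one extra point. */
    static void block_range(int N, int nprocs, int rank,
                            int *start, int *end) {
        int base = N / nprocs;
        int rem  = N % nprocs;
        *start = rank * base + (rank < rem ? rank : rem);
        *end   = *start + base + (rank < rem ? 1 : 0);
    }

    int main(void) {
        int s, e;
        for (int r = 0; r < 4; r++) {
            block_range(10, 4, r, &s, &e);
            printf("rank %d owns [%d, %d)\n", r, s, e); /* 3, 3, 2, 2 */
        }
        return 0;
    }

Unbalanced grids complicate exactly this step: when the work per point varies, equal-sized blocks no longer mean equal work.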
HPC Survey Results
This report describes the results of the High Performance Computing (HPC) Survey as of May 18, 2009. The survey was available online. In mid-April 2009, members of the HPC University initiative disseminated it to researchers, developers, educators, and students across a variety of disciplines by sending it to a number of mailing lists, electronic newsletters, and bulletin boards.
Introduction to the Open Science Grid
An overview of the architecture of the Open Science Grid, its principles, best practices, and service decomposition, together with an overview of its governance, technical groups, activities, documentation, and access.
IO
Performing IO properly becomes as important as having efficient numerical algorithms when running large-scale computations on supercomputers. A challenge facing programmers is to understand the capability of the system (including hardware features) in order to apply suitable IO techniques in applications. This guide covers some good practices regarding input/output from/to disk. The aim is to improve awareness of basic serial and parallel IO performance issues on HECToR. A variety of practical tips are introduced to help users utilize HECToR's IO system effectively, in order to manage large data sets properly.
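The guide's HECToR-specific tips belong to the guide itself, but as a sketch of one widely recommended parallel IO pattern, each rank can write its own contiguous block of a single shared file with one collective MPI-IO call, rather than writing per-rank files or funnelling all data through rank 0:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank;
        const int n = 1000;   /* doubles per rank, illustrative only */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *buf = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++) buf[i] = rank + i * 1e-3;

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        /* Each rank's block starts rank * n doubles into the file. */
        MPI_Offset offset = (MPI_Offset)rank * n * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Collective calls such as MPI_File_write_at_all give the IO layer a global view of the access pattern, which is what allows it to aggregate requests efficiently.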
Parallel Optimisation
An introduction to optimisation techniques that may improve parallel performance and scaling on HECToR. It assumes that the reader has some experience of parallel programming, including basic MPI and OpenMP. Scaling is a measure of a parallel code's ability to use increasing numbers of cores efficiently: a scalable application is one that, when the number of processors is increased, performs better by a factor that justifies the additional resources employed. Making a parallel application scale to many thousands of processes requires careful attention not only to communication and to data and work distribution, but also to the choice of algorithms. Since the choice of algorithm is too broad a subject, and too particular to each application domain, to cover in this brief guide, we concentrate on general good practice for parallel optimisation on HECToR.
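For reference, scaling is usually quantified by speed-up and parallel efficiency; with T(p) the runtime on p cores,

    S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}.

An application scales well when E(p) stays close to 1 as p grows; an efficiency of 0.5, say, means half the allocated cores are effectively idle.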
Performance Measurement: XE6
The purpose of this guide is to suggest how to monitor the runtime behaviour of a user application, and hence obtain the information needed to apply performance-tuning optimisations on the XE6 machine. This process is called performance measurement, and profiling tools (also called performance measurement tools) are available on HECToR to help users identify the bottlenecks in their code. This guide covers what to measure and how to measure it.
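The tool-specific instructions belong to the guide itself; as a sketch of the most basic form of measurement, a region of code can be timed by hand with MPI_Wtime (compute_kernel here is a hypothetical stand-in for real work):

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical kernel standing in for the code being measured. */
    static void compute_kernel(void) {
        volatile double s = 0.0;
        for (long i = 0; i < 100000000L; i++)
            s += 1.0 / (double)(i + 1);
    }

    int main(int argc, char **argv) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        compute_kernel();              /* region being measured */
        double t1 = MPI_Wtime();

        printf("rank %d: kernel took %.3f s\n", rank, t1 - t0);
        MPI_Finalize();
        return 0;
    }

Manual timers locate a slow region; a profiler then breaks that region down by function and, with hardware counters, by cause.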
Scientific Visualisation: A Practical Introduction
This course provides a practical introduction to scientific visualisation. The list of topics discussed includes the Application Visualisation System (AVS), representation of graphical data, visualisation of volume data, and visualisation of vector data.
Serial Code Optimization
This guide presents the main features of serial optimisation for computationally intensive codes, with a focus on the HECToR computing resources. From a user's point of view, two main avenues can be followed when trying to optimise an application. The first does not involve modifying the source code (modification may not be desirable); optimisation consists of searching for the best compiler, set of flags, and libraries. The second does involve modifying the source code: the programmer must first evaluate whether a new algorithm is necessary, and then write or rewrite optimised code. Following these choices, this guide presents optimisation first as a problem of compiler and library selection, and then presents the key factors that must be considered when writing numerically intensive code.
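As one example of the source-modification avenue (a standard cache-locality technique, not one lifted from the guide), interchanging two loops can turn a strided memory access pattern into a contiguous one:

    #include <stdio.h>

    #define N 1024
    static double a[N][N], x[N], y[N];

    /* Cache-unfriendly: the inner loop walks down a column of `a`,
       touching a new cache line on every iteration (C is row-major). */
    static void matvec_slow(void) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                y[i] += a[i][j] * x[j];
    }

    /* After loop interchange the inner loop walks along a row, so
       consecutive iterations access consecutive memory locations. */
    static void matvec_fast(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                y[i] += a[i][j] * x[j];
    }

    int main(void) {
        matvec_fast();
        printf("y[0] = %f\n", y[0]);
        return 0;
    }

Good optimising compilers can sometimes perform this interchange themselves, which is precisely why the compiler-and-flags avenue is worth exhausting before rewriting code.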
Software Management
This guide aims to provide information about the different tools available for software management on HECToR. It provides an overview of the types of tools available and the reasons why they are useful. The main focus is on source code control and build systems. Details of the capabilities of specific tools available on HECToR are provided, along with instructions on how to access them. Full details on how to use each of the tools are not provided, as this is outside the scope of a good practice guide. However, the guide provides links to relevant sources of learning materials and further information.
Writing Data Parallel Programs with High Performance Fortran
This course provides an introduction to parallel programming with High Performance Fortran. The list of topics discussed includes a brief history of HPF, Fortran 90 features, data mapping, HPF parallel features, procedure arguments, intrinsic functions and the HPF library, compiler specifics, and course exercises.
Writing Message Passing Parallel Programs with MPI
This course provides a comprehensive introduction to parallel programming using MPI. The list of topics discussed includes the MPI interface, point-to-point communication, non-blocking communication, derived datatypes, virtual topologies, collective communication, and a case study.
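For a flavour of the material, here is the canonical first point-to-point exercise (a minimal sketch, not code from the course): rank 0 sends an integer to rank 1.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* Tag 0; the receiver must match source, tag, and comm. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Run with at least two processes (e.g. mpirun -np 2 ./a.out); the non-blocking and collective variants covered later in the course generalise this same pattern.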