Parallelizing compilers have come a long way in helping programmers automatically transform sequential programs into parallel ones. However, a compiler can only restructure a program; it cannot change the underlying algorithm. As a result, the parallelizability of a sequential program ultimately depends on how the computation is expressed. This course will emphasize the well-known fact that every computation is, after all, a composition of data structures and the algorithms supported on them. The algorithms a data structure supports greatly influence how far the computation can be parallelized.
This course is divided into three parts. The first part will develop some of the general tools needed to design concurrent algorithms and argue about their correctness. These include the concepts of linearizability, compositional linearizability, non-blocking and blocking synchronization on shared memory, and memory consistency models. The second part will focus on developing efficient parallel algorithms for frequently used data structures such as linked lists, queues, stacks, hash tables, skiplists, and priority queues. The primary emphasis will be on minimizing synchronization, often resorting to optimistic parallelization via dynamic conflict detection with the help of checkpoints. The third part will relate transactional computation to these concurrent algorithms and present a brief overview of software and hardware implementations of transactional computation.
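To give a flavor of the non-blocking synchronization discussed above, the following is a minimal sketch of a lock-free stack in the style of Treiber's algorithm, a classic example covered in this area. The class and method names are illustrative, not taken from the course materials; threads coordinate through compare-and-set retries on an atomic top pointer instead of blocking on a lock.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of a lock-free (non-blocking) stack: every push/pop is a
// linearizable operation whose linearization point is a successful CAS.
class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> current = top.get();
            node.next = current;
            // On contention, retry instead of blocking: some thread
            // always makes progress, so the stack is lock-free.
            if (top.compareAndSet(current, node)) return;
        }
    }

    public T pop() {
        while (true) {
            Node<T> current = top.get();
            if (current == null) return null;   // empty stack
            if (top.compareAndSet(current, current.next)) return current.value;
        }
    }
}
```

Contrast this with a lock-based (blocking) stack: here a thread that is delayed inside push or pop never prevents other threads from completing their operations.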
M. Herlihy and N. Shavit. The Art of Multiprocessor Programming. Morgan Kaufmann.