CS 698P: Applications of Markov Chains in Combinatorial Optimization and in Evolutionary Dynamics

Credits (L-T-D-P-C): 3-0-0-0-9

Course Contents:

The course deals with applications of Markov chain techniques in certain areas of computer science and of biology. The unifying aspect of these applications is the role played by mixing time analysis, which is the focus of the course.

In a typical combinatorial optimization problem, each instance x is associated with a state space Sx, usually of size exponential in the size of x, where each element of Sx has an associated cost (or value). The problem is to find a state with minimum cost (or with maximum value). In the Markov chain approach to solving such a problem, a Markov chain is associated with each instance in such a way that the goal state has the highest probability in the stationary distribution of the chain. The success of this approach crucially depends on the mixing time of the associated chain, that is, on how quickly the chain comes close to its stationary distribution.
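
To make this concrete, here is a minimal Python sketch (not part of the course material) of a Metropolis chain over bit strings whose stationary distribution is proportional to exp(-cost/T), so that low-cost states carry the highest stationary probability; the toy cost function, temperature, and step count are illustrative assumptions.

    import math
    import random

    def metropolis_minimize(cost, n_bits, temperature=0.5, steps=20000, seed=0):
        # Metropolis chain on bit strings of length n_bits whose stationary
        # distribution is proportional to exp(-cost(x)/temperature), so the
        # minimum-cost states carry the highest stationary probability.
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n_bits)]
        best = x[:]
        for _ in range(steps):
            y = x[:]
            y[rng.randrange(n_bits)] ^= 1      # propose flipping one random bit
            delta = cost(y) - cost(x)
            # accept with probability min(1, exp(-delta/temperature))
            if delta <= 0 or rng.random() < math.exp(-delta / temperature):
                x = y
                if cost(x) < cost(best):
                    best = x[:]
        return best

    # Toy instance (an assumption made only for illustration): the cost of a
    # string is its Hamming distance from a hidden target string.
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    cost = lambda x: sum(a != b for a, b in zip(x, target))
    best = metropolis_minimize(cost, len(target))
    print(best, "cost =", cost(best))

The temperature controls the trade-off: a lower temperature concentrates the stationary distribution more sharply on low-cost states, but typically at the price of a larger mixing time, which is precisely the tension analysed in the course.
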
Markov chains have also been used to model evolution in the finite population case, thereby capturing stochastic effects. For a population of N individuals of m types, a state of the chain is a labelling of the N individuals, each with one of the m types. The population moves from one state to another through reproduction, mutation, and selection, and the specific way in which each of these happens determines the transition probability matrix of the chain. The outcome of the evolution modelled by the chain is given by the stationary distribution of the chain. In most cases, however, it is not known how to determine the stationary distribution directly; instead, the chain has to be simulated 'sufficiently long' and the resulting distribution sampled to obtain its statistical properties. Determining the mixing time of the chain tells us how long 'sufficiently long' is. It has been established recently that the expected motion of such evolutionary chains is a dynamical system, whose trajectory determines mixing time properties such as rapid mixing.
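
As an illustration, the following Python sketch simulates one such finite-population chain in the Wright-Fisher style: in every generation each of the N offspring picks a parent with probability proportional to the parent type's count times its fitness, and then mutates with a small probability. The population size, fitness values, mutation rate, burn-in length, and sample count are assumptions chosen purely for demonstration; in practice the burn-in has to be justified by a mixing time bound.

    import random

    def wright_fisher_step(counts, fitness, mutation, rng):
        # One generation of a finite-population evolutionary chain:
        # selection-weighted reproduction followed by mutation.
        # counts[i] is the number of individuals of type i.
        N, m = sum(counts), len(counts)
        weights = [counts[i] * fitness[i] for i in range(m)]   # selection
        new_counts = [0] * m
        for _ in range(N):
            child = rng.choices(range(m), weights=weights)[0]  # reproduction
            if rng.random() < mutation:                        # mutation
                child = rng.choice([t for t in range(m) if t != child])
            new_counts[child] += 1
        return new_counts

    def estimate_stationary_mean(counts, fitness, mutation,
                                 burn_in=5000, samples=2000, seed=0):
        # Simulate the chain "sufficiently long" (burn_in generations),
        # then keep sampling to estimate the stationary mean frequency of type 0.
        rng = random.Random(seed)
        for _ in range(burn_in):
            counts = wright_fisher_step(counts, fitness, mutation, rng)
        total = 0.0
        for _ in range(samples):
            counts = wright_fisher_step(counts, fitness, mutation, rng)
            total += counts[0] / sum(counts)
        return total / samples

    # Two types in a population of 100, type 0 slightly fitter (illustrative values).
    print(estimate_stationary_mean([50, 50], fitness=[1.05, 1.0], mutation=0.01))
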
Approximate outline of the course: Ergodicity theorem for Markov chains. Mixing time of Markov chains. Combinatorial optimization using Markov chains. Metropolis algorithm. Notion of conductance and its relation to rapid mixing. Necessary and sufficient conditions for the Markov chain approach to succeed for combinatorial optimization. Markov chain modelling of molecular evolution. Quasispecies model. Viral evolution and the notion of error threshold. Evolution in the finite population case. Expected motion of evolutionary Markov chains and the corresponding dynamical system. Detailed analysis when the population has two types. A qualitative understanding of the general m types case.
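
In the spirit of the mixing time topics above, the following Python sketch computes, for a small chain given explicitly by its transition matrix, the first time t at which the worst-case total variation distance to the stationary distribution drops below 1/4; the lazy random walk on a cycle used as input is only an illustrative example, not part of the course material.

    import numpy as np

    def mixing_time(P, eps=0.25, t_max=10000):
        # Smallest t with max over start states x of ||P^t(x, .) - pi||_TV <= eps,
        # for an ergodic chain given by its transition matrix P (rows sum to 1).
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])   # left eigenvector for eigenvalue 1
        pi = pi / pi.sum()                                    # normalise to a distribution
        Pt = np.eye(P.shape[0])
        for t in range(1, t_max + 1):
            Pt = Pt @ P
            tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()      # worst-case starting state
            if tv <= eps:
                return t
        return None

    # Lazy random walk on a cycle of 6 states (illustrative chain).
    n = 6
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i - 1) % n] = 0.25
        P[i, (i + 1) % n] = 0.25
    print(mixing_time(P))
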

Books and References:

  1. Finite Markov Chains and Algorithmic Applications, Olle Häggström, Cambridge University Press, 2002.
  2. Algorithms for Random Generation and Counting: A Markov Chain Approach, Alistair Sinclair, Birkhäuser, 1993.
  3. Markov Chains and Mixing Times, David A. Levin, Yuval Peres, and Elizabeth L. Wilmer, American Mathematical Society, 2006.
  4. Evolutionary Dynamics, M. A. Nowak, Harvard University Press, 2006.
  5. Relevant papers.