# "Iterated random functions" by Diaconis and Freedman

Friday, November 9, 2007, 9:30am - 10:45am

Catherine MATIAS will present the article by
Diaconis, P. and Freedman, D.

entitled:

*Iterated random functions.*

*SIAM Rev.* 41 (1999), no. 1, 45–76.

**Abstract.**
The authors present a persuasive account of how Markov chains used for modelling purposes can be presented as iterated random functions on a state space $S$. That is, there is a family $\{f_\theta\colon \theta\in\Theta\}$ of functions that map $S$ into itself, and a probability distribution $\mu$ on $\Theta$. If the chain is at $x\in S$, it moves by choosing $\theta$ at random according to the distribution $\mu$, and going to $f_\theta(x)$. The point is that the class of functions can often be represented by simple formulae with useful properties that lead to far-reaching consequences for the Markov chain. For instance, Lipschitz conditions lead to stability properties for the Markov chain, and the authors present a valuable new general result along these lines. Alternatively, monotonicity is a key ingredient in the wonderful Propp-Wilson algorithm for perfect simulation. The authors give a very readable account of the Propp-Wilson algorithm. It is one of a host of stimulating examples, for which the authors lay out deceptively simple calculations of great power, leading usually not just to ergodicity of the relevant Markov chain but to good bounds or exact evaluations of rates of convergence to stationarity. Dirichlet random measures, and (interesting) queueing-theory examples, merit whole sections. Finally, there is a parade of grotesques (counterexamples) to show how and why various regularity conditions are needed.
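To make the iterated-random-function viewpoint concrete, here is a minimal sketch (not taken from the paper) of the simplest contractive example: affine maps $f_\theta(x) = ax + \theta$ on $S = \mathbb{R}$ with $0 < a < 1$ and $\theta$ drawn i.i.d. Each map is Lipschitz with constant $a < 1$, so trajectories started from different points, driven by the *same* random $\theta$'s, coalesce geometrically fast; this is the mechanism behind the stability results the abstract mentions. The choice $a = 0.5$ and the uniform distribution for $\theta$ are illustrative assumptions.

```python
import random

def iterate(x0, thetas, a=0.5):
    """Apply the random maps f_theta(x) = a*x + theta in the given order."""
    x = x0
    for theta in thetas:
        x = a * x + theta
    return x

# Draw one fixed sequence of i.i.d. theta's (same randomness for both starts).
rng = random.Random(42)
thetas = [rng.uniform(-1.0, 1.0) for _ in range(200)]

# Two chains from very different starting points, driven identically:
x_from_0 = iterate(0.0, thetas)
x_from_100 = iterate(100.0, thetas)

# The Lipschitz constant a = 0.5 contracts the gap by half at every step,
# so after 200 steps the gap is about 100 * 0.5**200 -- numerically zero.
gap = abs(x_from_0 - x_from_100)
```

Coupling the two chains through the same $\theta$-sequence is exactly the "backward iteration" trick exploited by the paper and by Propp-Wilson-style perfect simulation: once all starting points have coalesced, the common value is a draw whose law no longer depends on the initial state.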