Independence.

Lecture 10

Independence, while not a new concept in this class (it is based directly on conditional probability), is another central concept of probability theory. In terms of conditional probability, two events $E,F\subset S$ of a probability space $S$ are called independent if $$ P(E|F)=P(E)\text{ or }P(F|E)=P(F)\text{ or }P(E\cap F)=P(E)P(F), $$ where the first two characterizations require $P(F)>0$ and $P(E)>0$, respectively. Independence simply means that knowledge of the occurrence of one event does not influence the probability of the other. That the definition is symmetric in $E$ and $F$ is most obvious in the third characterization, which follows from the definition of conditional probability. As usual, various examples are presented.
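As a minimal sketch of the product characterization, consider one roll of a fair die (a hypothetical example not taken from the lecture): with $E$ the even outcomes and $F$ the outcomes at most $4$, one can verify $P(E\cap F)=P(E)P(F)$ by direct enumeration.

```python
from fractions import Fraction

# Hypothetical example: one roll of a fair die, S = {1, ..., 6}.
S = set(range(1, 7))
E = {2, 4, 6}        # "outcome is even"
F = {1, 2, 3, 4}     # "outcome is at most 4"

def P(A):
    """Probability of A under the uniform distribution on S."""
    return Fraction(len(A & S), len(S))

# Product characterization of independence: P(E ∩ F) = P(E) P(F).
# Here P(E ∩ F) = 2/6 = 1/3 and P(E) P(F) = (1/2)(2/3) = 1/3.
print(P(E & F) == P(E) * P(F))
```

Using exact fractions avoids the floating-point rounding that could otherwise make an exact equality check unreliable.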

Lecture 11

This lecture deals with examples of the use of conditional independence and conditioning in solving more involved problems, where a direct approach would require quite a lot of delicate accounting. Particular focus is given to the so-called gambler's ruin problem, which is our first encounter with a random walk.
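The conditioning argument for gambler's ruin can be checked numerically. The sketch below (not the lecture's own code; the function names are mine) compares the standard closed-form ruin probability, obtained by conditioning on the first bet, against a Monte Carlo simulation of the walk: a gambler starts with $i$ units, bets one unit per round with win probability $p$, and stops upon reaching $0$ (ruin) or the target $N$.

```python
import random

def ruin_prob(i, N, p):
    """Closed-form probability of ruin: starting fortune i, target N,
    win probability p per round. Derived by conditioning on the outcome
    of the first bet (standard gambler's ruin formula)."""
    if p == 0.5:
        return 1 - i / N
    r = (1 - p) / p
    return (r**i - r**N) / (1 - r**N)

def simulate_ruin(i, N, p, trials=100_000, seed=0):
    """Monte Carlo estimate of the ruin probability."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = i
        while 0 < x < N:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials
```

For a fair game ($p=1/2$) the formula reduces to $1-i/N$, so a gambler starting with half the target capital is ruined with probability exactly $1/2$, and the simulation should agree up to sampling error.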

Lecture 12

First we look at a concrete application of the gambler's ruin problem in the field of medicine. Then we show that, given an event $F$ (with $P(F)>0$), the conditional probability $$ P(\cdot|F):2^S\to [0,1],\: E \mapsto P(E|F) $$ is itself a probability measure on the sample space $S$ and can therefore be conditioned upon further. Here $2^S$ denotes the collection of all subsets of $S$.
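That $P(\cdot|F)$ is again a probability measure can be verified exhaustively on a small finite sample space. The sketch below (a hypothetical illustration, with a uniform measure on a six-point space chosen by me) checks the three axioms for $P(\cdot|F)$: normalization, nonnegativity, and additivity on disjoint events.

```python
from fractions import Fraction
from itertools import combinations

# Hypothetical finite sample space with the uniform probability.
S = frozenset(range(1, 7))
F = frozenset({2, 4, 6})    # conditioning event with P(F) > 0

def P(A):
    """Uniform probability on S."""
    return Fraction(len(A & S), len(S))

def P_given_F(A):
    """Conditional probability P(A | F) = P(A ∩ F) / P(F)."""
    return P(A & F) / P(F)

def subsets(X):
    """All subsets of X, i.e. the collection 2^X."""
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

# Axiom 1: normalization, P(S | F) = 1.
assert P_given_F(S) == 1
# Axiom 2: nonnegativity on every event in 2^S.
assert all(P_given_F(A) >= 0 for A in subsets(S))
# Axiom 3: additivity on every pair of disjoint events.
assert all(P_given_F(A | B) == P_given_F(A) + P_given_F(B)
           for A in subsets(S) for B in subsets(S) if not (A & B))
print("P(.|F) satisfies the probability axioms on S")
```

Since the space is finite, finite additivity over all disjoint pairs already suffices for the full countable-additivity axiom.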