Problems


2018 Paper 1 Q11

A bag contains three coins. The probabilities of their showing heads when tossed are \(p_1\), \(p_2\) and \(p_3\).

  1. A coin is taken at random from the bag and tossed. What is the probability that it shows a head?
  2. A coin is taken at random from the bag (containing three coins) and tossed; the coin is returned to the bag and again a coin is taken at random from the bag and tossed. Let \(N_1\) be the random variable whose value is the number of heads shown on the two tosses. Find the expectation of \(N_1\) in terms of \(p\), where \(p = \frac{1}{3}(p_1+p_2+p_3)\,\), and show that \(\var(N_1) =2p(1-p)\,\).
  3. Two of the coins are taken at random from the bag (containing three coins) and tossed. Let \(N_2\) be the random variable whose value is the number of heads showing on the two coins. Find \(\E(N_2)\) and \(\var(N_2)\).
  4. Show that \(\var(N_2)\le \var(N_1)\), with equality if and only if \(p_1=p_2=p_3\,\).


Solution:

  1. \(\mathbb{P}(\text{head}) = \mathbb{P}(\text{head}|1)\mathbb{P}(\text{coin 1}) + \mathbb{P}(\text{head}|2)\mathbb{P}(\text{coin 2})+\mathbb{P}(\text{head}|3)\mathbb{P}(\text{coin 3}) = \frac13(p_1+p_2+p_3)\)
  2. \(N_1 = X_1 + X_2\), where the \(X_i \sim \text{Bernoulli}(p)\) are independent (the first coin is replaced before the second draw), therefore \(\mathbb{E}(N_1) = 2p\) and \(\textrm{Var}(N_1) = \textrm{Var}(X_1)+ \textrm{Var}(X_2) = p(1-p)+p(1-p) = 2p(1-p)\)
  3. Let \(Y_i\) be the indicator that the \(i\)th chosen coin shows heads. Each chosen coin is equally likely to be any of the three, so \(\mathbb{E}(Y_i) = p\) and hence \(\mathbb{E}(N_2) = 2p\). \begin{align*} && \textrm{Var}(N_2) &= \mathbb{E}(N_2^2) - [\mathbb{E}(N_2)]^2\\ &&&= 2^2 \cdot \left (\frac13 \left (p_1p_2+p_2p_3+p_3p_1 \right) \right) + 1 \cdot \left (\frac13 \left (p_1 (1-p_2) + (1-p_1)p_2 + p_2(1-p_3) +(1-p_2)p_3 + p_3(1-p_1) + (1-p_3)p_1 \right) \right) - [\mathbb{E}(N_2)]^2 \\ &&&= \frac43\left (p_1p_2+p_2p_3+p_3p_1 \right) + \frac13 \left ( 2(p_1+p_2+p_3) - 2(p_1p_2+p_2p_3+p_3p_1)\right)-[\mathbb{E}(N_2)]^2 \\ &&&= \frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) + \frac23 \left ( p_1+p_2+p_3 \right)-[\mathbb{E}(N_2)]^2\\ &&&= \frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) + \frac23 \left ( p_1+p_2+p_3 \right)-\left[\frac23(p_1+p_2+p_3)\right]^2\\ &&&= \frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) +2p(1-2p)\\ \end{align*}
  4. \(\,\) \begin{align*} && \textrm{Var}(N_1) - \textrm{Var}(N_2) &= 2p(1-p) - \left (\frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) +2p(1-2p) \right) \\ &&&= 2p^2-\frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) \\ &&&= \frac23 \left ( \frac13(p_1+p_2+p_3)^2 -\left (p_1p_2+p_2p_3+p_3p_1 \right)\right)\\ &&&= \frac29 \left (p_1^2+p_2^2+p_3^2 -(p_1p_2+p_2p_3+p_3p_1) \right)\\ &&&= \frac19 \left ((p_1-p_2)^2+(p_2-p_3)^2+(p_3-p_1)^2 \right) &\geq 0 \end{align*} with equality iff \(p_1 = p_2 = p_3\)
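The closed forms in parts 2–4 can be verified exactly by enumeration. A minimal sketch using `fractions.Fraction` (the values of \(p_1, p_2, p_3\) are arbitrary illustrative choices, and the variable names are ours):

```python
from fractions import Fraction
from itertools import permutations

# Illustrative (hypothetical) coin probabilities
p1, p2, p3 = Fraction(1, 2), Fraction(1, 4), Fraction(1, 6)
ps = [p1, p2, p3]
p = sum(ps) / 3

# Part 2: N1 = X1 + X2 with X_i ~ Bernoulli(p) independent,
# so Var(N1) = 2p(1-p).
var_N1 = 2 * p * (1 - p)

# Part 3: pick an ordered pair of distinct coins uniformly (6 ways),
# toss both, and compute E(N2) and E(N2^2) exactly.
E_N2 = Fraction(0)
E_N2sq = Fraction(0)
for a, b in permutations(range(3), 2):
    pa, pb = ps[a], ps[b]
    weight = Fraction(1, 6)
    one_head = pa * (1 - pb) + (1 - pa) * pb   # P(N2 = 1) for this pair
    E_N2 += weight * (2 * pa * pb + one_head)
    E_N2sq += weight * (4 * pa * pb + one_head)
var_N2 = E_N2sq - E_N2 ** 2

# Closed form from the solution
sum_pairs = p1 * p2 + p2 * p3 + p3 * p1
var_N2_formula = Fraction(2, 3) * sum_pairs + 2 * p * (1 - 2 * p)

assert E_N2 == 2 * p
assert var_N2 == var_N2_formula
# Part 4: Var(N2) <= Var(N1), strict unless p1 = p2 = p3
assert var_N2 <= var_N1
```

Running this with \(p_1 = p_2 = p_3\) exercises the equality case of part 4, where the two variances coincide.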

2017 Paper 2 Q13

In a television game show, a contestant has to open a door using a key. The contestant is given a bag containing \(n\) keys, where \(n\ge2\). Only one key in the bag will open the door. There are three versions of the game. In each version, the contestant starts by choosing a key at random from the bag.

  1. In version 1, after each failed attempt at opening the door the key that has been tried is put back into the bag and the contestant again selects a key at random from the bag. By considering the binomial expansion of \(( 1 - q)^{-2}\), or otherwise, find the expected number of attempts required to open the door.
  2. In version 2, after each failed attempt at opening the door the key that has been tried is put aside and the contestant selects another key at random from the bag. Find the expected number of attempts required to open the door.
  3. In version 3, after each failed attempt at opening the door the key that has been tried is put back into the bag and another incorrect key is added to the bag. The contestant then selects a key at random from the bag. Show that the probability that the contestant draws the correct key at the \(k\)th attempt is \[ \frac{n-1}{(n+k-1)(n+k-2)} \,.\] Show also, using partial fractions, that the expected number of attempts required to open the door is infinite. You may use without proof the result that \(\displaystyle\sum_{m=1}^N \dfrac 1 m \to \infty \,\) as \(N\to \infty\,\).


Solution:

  1. The probability that the door opens on the \(k\)th attempt is \(\left ( \frac{n-1}{n} \right)^{k-1} \frac1n\) (a wrong key \(k-1\) times, then the right one), so: \begin{align*} \E[G_1] &= \sum_{k=1}^{\infty} k \cdot \left ( \frac{n-1}{n} \right)^{k-1} \frac1n \\ &= \frac{1}n \sum_{k=1}^{\infty} k \cdot \left ( \frac{n-1}{n} \right)^{k-1} \\ &= \frac1n \frac{1}{\left (1 - \frac{n-1}{n} \right)^2} \\ &= \frac{1}{n} \frac{n^2}{1^2} = n \end{align*}
  2. In version 2, the probability the correct key comes out at the \(k\)th attempt is \(\frac1n\) (imagine drawing all \(n\) keys in order: the correct key is equally likely to be in any of the \(n\) positions). Therefore \(\E[G_2] = \frac1n (1 + 2 + \cdots + n) = \frac{n+1}{2}\)
  3. The probability that the correct key is drawn at the \(k\)th attempt is: \begin{align*} && \mathbb{P}(G_3 = k) &= \frac{n-1}{n} \cdot \frac{n}{n+1} \cdot \frac{n+1}{n+2} \cdots \frac{n+k-3}{n+k-2} \cdot \frac{1}{n+k-1} \\ &&&= \frac{n-1}{(n+k-2)(n+k-1)} \\ \\ &&k \cdot \mathbb{P}(G_3 = k) &= \frac{k(n-1)}{(n+k-2)(n+k-1)} \\ &&&= \frac{(n-1)(2-n)}{n+k-2} + \frac{(n-1)^2}{n+k-1} \\ &&&= \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} + \frac{n-1}{n+k-2} \\ \Rightarrow && \E[G_3] &= \sum_{k=1}^{\infty} k \cdot \mathbb{P}(G_3 = k) \\ &&&= \sum_{k=1}^{\infty} \left ( \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} + \frac{n-1}{n+k-2} \right) \\ &&&= \sum_{k=1}^{\infty} \left ( \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} \right) +\underbrace{\sum_{k=1}^{\infty} \frac{n-1}{n+k-2}}_{\to \infty} \\ \end{align*} The first sum telescopes to the finite value \(-(n-1)\), while the second equals \((n-1)\sum_{m=n-1}^{\infty} \frac1m\), which diverges by the given result; hence the expected number of attempts is infinite.
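The closed form for \(\mathbb{P}(G_3 = k)\) and the divergence of the expectation can both be sanity-checked numerically. A sketch in exact arithmetic (the choice \(n = 5\) and the cutoffs are arbitrary, and `p_g3` is our own helper):

```python
from fractions import Fraction

def p_g3(n, k):
    """P(the correct key is first drawn at attempt k) in version 3,
    computed directly from the product of conditional probabilities."""
    prob = Fraction(1)
    for j in range(1, k):             # failed attempts 1, ..., k-1
        bag = n + j - 1               # bag size at attempt j
        prob *= Fraction(bag - 1, bag)
    return prob * Fraction(1, n + k - 1)   # success at attempt k

n = 5
for k in range(1, 40):
    assert p_g3(n, k) == Fraction(n - 1, (n + k - 1) * (n + k - 2))

# Partial sums of k * P(G_3 = k) keep growing (harmonic-like tail),
# consistent with an infinite expectation.
partial = lambda K: sum(k * (n - 1) / ((n + k - 1) * (n + k - 2))
                        for k in range(1, K + 1))
assert partial(2000) > partial(200) + 2
```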

2011 Paper 3 Q13

In this question, the notation \(\lfloor x \rfloor\) denotes the greatest integer less than or equal to \(x\), so for example \(\lfloor \pi\rfloor = 3\) and \(\lfloor 3 \rfloor =3\).

  1. A bag contains \(n\) balls, of which \(b\) are black. A sample of \(k\) balls is drawn, one after another, at random with replacement. The random variable \(X\) denotes the number of black balls in the sample. By considering \[ \frac{\P(X=r+1)}{\P(X=r)}\,, \] show that, in the case that it is unique, the most probable number of black balls in the sample is \[ \left\lfloor \frac{(k+1)b}{n}\right\rfloor. \] Under what circumstances is the answer not unique?
  2. A bag contains \(n\) balls, of which \(b\) are black. A sample of \(k\) balls (where \(k\le b\)) is drawn, one after another, at random without replacement. Find, in the case that it is unique, the most probable number of black balls in the sample. Under what circumstances is the answer not unique?


Solution:

  1. \(\mathbb{P}(X = r) = \binom{k}{r}p^rq^{k-r}\) where \(p = \frac{b}{n}, q = 1-p\). Therefore \begin{align*} && \frac{\mathbb{P}(X=r+1)}{\mathbb{P}(X=r)} &= \frac{\binom{k}{r+1}p^{r+1}q^{k-r-1}}{\binom{k}{r}p^rq^{k-r}} \\ &&&= \frac{(k-r)p}{(r+1)q} \\ &&&= \frac{(k-r)b}{(r+1)(n-b)} \end{align*} Comparing this to \(1\) we find: \begin{align*} && 1 & < \frac{\mathbb{P}(X=r+1)}{\mathbb{P}(X=r)} \\ \Leftrightarrow && 1 &< \frac{(k-r)b}{(r+1)(n-b)} \\ \Leftrightarrow && (r+1)(n-b) &<(k-r)b \\ \Leftrightarrow && rn& < (k+1)b - n \\ \Leftrightarrow && r &< \frac{(k+1)b}{n} - 1\\ \end{align*} If this inequality holds then \(\mathbb{P}(X=r+1) > \mathbb{P}(X=r)\): the probabilities increase while \(r < \frac{(k+1)b}{n} - 1\) and decrease thereafter, so the mode is \(r_{\max} = \left \lfloor \frac{(k+1)b}{n} \right \rfloor\). The answer is not unique when \(\frac{(k+1)b}{n}\) is an integer, since then \(\mathbb{P}\left(X = \frac{(k+1)b}{n}\right) = \mathbb{P}\left(X = \frac{(k+1)b}{n} - 1\right)\) and both values are modes.
  2. Let \(Y\) be the number of black balls in our sample, ie \(\mathbb{P}(Y = r) = \binom{b}{r}\binom{n-b}{k-r}/\binom{n}{k}\), so \begin{align*} && \frac{\mathbb{P}(Y = r+1)}{\mathbb{P}(Y=r)} &= \frac{\binom{b}{r+1}\binom{n-b}{k-(r+1)}/\binom{n}{k}}{\binom{b}{r}\binom{n-b}{k-r}/\binom{n}{k}} \\ &&&= \frac{b-r}{r+1} \frac{k-r}{n-b-k+r+1} \\ && 1 &< \frac{\mathbb{P}(Y = r+1)}{\mathbb{P}(Y=r)} \\ \Leftrightarrow && (r+1)(n-b-k+r+1) &< (b-r)(k-r) \\ \Leftrightarrow &&r(n-b-k+2)+(n-b-k+1) &< -r(b+k)+bk \\ \Leftrightarrow && r(n+2) &< (b+1)(k+1)-(n+2) \\ \Leftrightarrow && r &< \frac{(b+1)(k+1)}{n+2} - 1 \end{align*} Therefore the most probable number is \(r = \left \lfloor \frac{ (b+1)(k+1)}{n+2}\right \rfloor\); it is not unique when \(n+2\) divides \((b+1)(k+1)\), in which case \(r\) and \(r-1\) are equally likely.
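Both modes can be checked by brute force against the respective probability mass functions: with replacement the mode is \(\left\lfloor (k+1)b/n \right\rfloor\), and without replacement it is \(\left\lfloor (b+1)(k+1)/(n+2) \right\rfloor\). A sketch (parameter ranges arbitrary; tied, non-unique cases are skipped, and the helper names are ours):

```python
from math import comb

def binom_mode(n, b, k):
    """Most probable r for sampling with replacement (binomial weights)."""
    probs = [comb(k, r) * b**r * (n - b)**(k - r) for r in range(k + 1)]
    return max(range(k + 1), key=lambda r: probs[r]), probs

def hyper_mode(n, b, k):
    """Most probable r for sampling without replacement (hypergeometric weights)."""
    probs = [comb(b, r) * comb(n - b, k - r) for r in range(k + 1)]
    return max(range(k + 1), key=lambda r: probs[r]), probs

for n in range(2, 12):
    for b in range(1, n):
        for k in range(1, n + 1):
            # with replacement: floor((k+1)b/n) whenever the mode is unique
            r, probs = binom_mode(n, b, k)
            if probs.count(max(probs)) == 1:
                assert r == ((k + 1) * b) // n
            # without replacement (need k <= b): floor((b+1)(k+1)/(n+2))
            if k <= b:
                r, probs = hyper_mode(n, b, k)
                if probs.count(max(probs)) == 1:
                    assert r == ((b + 1) * (k + 1)) // (n + 2)
```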

2006 Paper 1 Q14

  1. A bag of sweets contains one red sweet and \(n\) blue sweets. I take a sweet from the bag, note its colour, return it to the bag, then shake the bag. I repeat this until the sweet I take is the red one. Find an expression for the probability that I take the red sweet on the \(r\)th attempt. What value of \(n\) maximises this probability?
  2. Instead, I take sweets from the bag, without replacing them in the bag, until I take the red sweet. Find an expression for the probability that I take the red sweet on the \(r\)th attempt. What value of \(n\) maximises this probability?


Solution:

  1. This is the probability of the sequence \(\underbrace{BB\cdots B}_{r-1 \text{ times}}R\), which is \(\displaystyle \left ( \frac{n}{n+1} \right)^{r-1}\frac{1}{n+1}\). Maximising this is equivalent to maximising its logarithm, ie \begin{align*} && y &= (r-1) \ln n - r \ln (n+1) \\ \Rightarrow && \frac{\d y}{\d n} &= \frac{r-1}{n} - \frac{r}{n+1} \\ &&&= \frac{(r-1)(n+1)-rn}{n(n+1)} \\ &&&= \frac{r-n-1}{n(n+1)} \end{align*} This is positive for \(n < r-1\) and negative for \(n > r-1\), so the probability is maximised when \(n = r-1\)
  2. Without replacement, by symmetry the red sweet is equally likely to occupy any of the \(n+1\) positions in the drawing order, so the probability of taking it on the \(r\)th attempt is \(\frac{1}{n+1}\) for \(r \le n+1\) (and \(0\) otherwise). This is maximised by the smallest \(n\) for which an \(r\)th draw can succeed, namely \(n = r-1\)
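For part 1, the stationary point can be confirmed by directly maximising the probability over \(n\). A sketch in exact arithmetic (the search cap is arbitrary, and `p_red_at` is our own helper name):

```python
from fractions import Fraction

def p_red_at(r, n):
    """P(the red sweet is first taken on attempt r), with replacement,
    when the bag holds 1 red and n blue sweets."""
    return Fraction(n, n + 1) ** (r - 1) * Fraction(1, n + 1)

for r in range(2, 8):
    best_n = max(range(1, 40), key=lambda n: p_red_at(r, n))
    assert best_n == r - 1
```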

2005 Paper 3 Q13

A pack of cards consists of \(n+1\) cards, which are printed with the integers from \(0\) to \(n\). A game consists of drawing cards repeatedly at random from the pack until the card printed with 0 is drawn, at which point the game ends. After each draw, the player receives \(\pounds 1\) if the card drawn shows any of the integers from \(1\) to \(w\) inclusive but receives nothing if the card drawn shows any of the integers from \(w+1\) to \(n\) inclusive.

  1. In one version of the game, each card drawn is replaced immediately and randomly in the pack. Explain clearly why the probability that the player wins a total of exactly \(\pounds 3\) is equal to the probability of the following event occurring: out of the first four cards drawn which show numbers in the range \(0\) to \(w\), the numbers on the first three are non-zero and the number on the fourth is zero. Hence show that the probability that the player wins a total of exactly \(\pounds 3\) is equal to \(\displaystyle \frac{w^3}{(w+1)^4}\). Write down the probability that the player wins a total of exactly \(\pounds r\) and hence find the expected total win.
  2. In another version of the game, each card drawn is removed from the pack. Show that the expected total win in this version is half of the expected total win in the other version.
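For part 1, the claimed value \(\frac{w^3}{(w+1)^4}\) can be sanity-checked by simulation. A Monte Carlo sketch (the values of \(n\), \(w\), the seed, and the trial count are all arbitrary choices of ours):

```python
import random

def total_win(n, w, rng):
    """Play version 1 (with replacement) once; return the total win in pounds."""
    win = 0
    while True:
        card = rng.randrange(n + 1)   # uniform on 0..n
        if card == 0:
            return win                # game ends on the 0 card
        if card <= w:
            win += 1                  # cards 1..w pay one pound

rng = random.Random(0)
n, w, trials = 10, 3, 200_000
hits = sum(total_win(n, w, rng) == 3 for _ in range(trials))
# Claimed: P(win exactly three pounds) = w^3/(w+1)^4 = 27/256
assert abs(hits / trials - w**3 / (w + 1)**4) < 0.01
```

With \(w = 3\) the target is \(27/256 \approx 0.105\); the empirical frequency should agree to about two decimal places.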

2003 Paper 1 Q12

In a bag are \(n\) balls numbered 1, 2, \(\ldots\,\), \(n\,\). When a ball is taken out of the bag, each ball is equally likely to be taken.

  1. A ball is taken out of the bag. The number on the ball is noted and the ball is replaced in the bag. The process is repeated once. Explain why the expected value of the product of the numbers on the two balls is \[ \frac 1 {n^2} \sum_{r=1}^n\sum_{s=1}^n rs \] and simplify this expression.
  2. A ball is taken out of the bag. The number on the ball is noted and the ball is not replaced in the bag. Another ball is taken out of the bag and the number on this ball is noted. Show that the expected value of the product of the two numbers is \[ \frac{(n+1)(3n+2)}{12}\;. \]
Note: \(\displaystyle \sum_{r=1}^n r = \frac12 n(n+1)\) and \(\displaystyle \sum_{r=1}^n r^2 = \frac16 n(n+1)(2n+1)\;\).


Solution:

  1. Since the second draw is independent of the first, \(\mathbb{P}(F=r, S=s) = \mathbb{P}(F=r)\mathbb{P}(S=s) = \frac{1}{n^2}\) and so \begin{align*} && \E[FS] &= \sum_{r=1}^n \sum_{s=1}^n rs \mathbb{P}(F=r, S = s) \\ &&&= \frac{1}{n^2} \sum_{r=1}^n \sum_{s=1}^n rs \\ &&&= \frac{1}{n^2} \left ( \sum_{r=1}^n r \right)\left ( \sum_{s=1}^n s \right) \\ &&&= \frac{1}{n^2} \left ( \frac{n(n+1)}{2} \right)^2 \\ &&&= \frac{(n+1)^2}{4} \end{align*}
  2. Under this methodology, \(\mathbb{P}(F=r, S=s) = \frac{1}{n(n-1)}\) as long as \(r \neq s\), therefore \begin{align*} && \E[FS] &= \sum_{r=1}^n \sum_{s\neq r} rs \mathbb{P}(F = r, S = s) \\ &&&= \frac{1}{n(n-1)} \sum_{r=1}^n \left ( \sum_{s=1}^n rs - r^2 \right) \\ &&&= \frac{1}{n(n-1)} \left ( \frac{n^2(n+1)^2}{4} - \frac{n(n+1)(2n+1)}{6} \right) \\ &&&= \frac{n+1}{12(n-1)} \left ( 3n(n+1)-2(2n+1) \right) \\ &&&= \frac{n+1}{12(n-1)} \left ( 3n^2 -n -2 \right) \\ &&&= \frac{n+1}{12(n-1)} (3n+2)(n-1)\\ &&&= \frac{(n+1)(3n+2)}{12} \end{align*}
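Part 2 can be verified exactly for small \(n\) by summing over all ordered pairs of distinct balls. A brief sketch using `fractions.Fraction` (the range of \(n\) and the function name are ours):

```python
from fractions import Fraction

def expected_product_without_replacement(n):
    """Exact E[FS] when two distinct balls are drawn from 1..n,
    each ordered pair having probability 1/(n(n-1))."""
    total = Fraction(0)
    for r in range(1, n + 1):
        for s in range(1, n + 1):
            if r != s:
                total += Fraction(r * s, n * (n - 1))
    return total

for n in range(2, 12):
    assert expected_product_without_replacement(n) == Fraction((n + 1) * (3 * n + 2), 12)
```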

2000 Paper 1 Q12

I have \(k\) different keys on my key ring. When I come home at night I try one key after another until I find the key that fits my front door. What is the probability that I find the correct key in exactly \(n\) attempts in each of the following three cases?

  1. At each attempt, I choose a key that I have not tried before but otherwise each choice is equally likely.
  2. At each attempt, I choose a key from all my keys and each of the \(k\) choices is equally likely.
  3. At the first attempt, I choose from all my keys and each of the \(k\) choices is equally likely. Thereafter, I choose from the keys that I did not try the previous time but otherwise each choice is equally likely.

1996 Paper 1 Q12

An examiner has to assign a mark between 1 and \(m\) inclusive to each of \(n\) examination scripts (\(n\leqslant m\)). He does this randomly, but never assigns the same mark twice. If \(K\) is the highest mark that he assigns, explain why \[ \mathrm{P}(K=k)=\left.\binom{k-1}{n-1}\right/\binom{m}{n} \] for \(n\leqslant k\leqslant m,\) and deduce that \[ \sum_{k=n}^{m}\binom{k-1}{n-1}=\binom{m}{n}\,. \] Find the expected value of \(K\).


Solution: If the highest mark is \(k\), then the remaining \(n-1\) marks must be chosen from the numbers \(1, 2, \ldots, k-1\), which can be done in \(\binom{k-1}{n-1}\) ways. In total, \(n\) marks are chosen from \(1, 2, \ldots, m\) in \(\binom{m}{n}\) ways, therefore \(\displaystyle \mathbb{P}(K=k) = \left.\binom{k-1}{n-1} \right/ \binom{m}{n}\,.\) Since \(K\) can take any of the values \(n, \cdots, m\), we must have \begin{align*} && 1 &= \sum_{k=n}^m \mathbb{P}(K=k) \\ &&&= \sum_{k=n}^m \left.\binom{k-1}{n-1} \right/ \binom{m}{n} \\ \Rightarrow && \binom{m}{n} &= \sum_{k=n}^m \binom{k-1}{n-1} \\ \\ && \mathbb{E}(K) &= \sum_{k=n}^m k \cdot \mathbb{P}(K=k) \\ &&&= \sum_{k=n}^m k \cdot \left.\binom{k-1}{n-1} \right/ \binom{m}{n} \\ &&&= n\binom{m}{n}^{-1} \sum_{k=n}^m \frac{k}{n} \cdot \binom{k-1}{n-1} \\ &&&= n\binom{m}{n}^{-1} \sum_{k=n}^m \binom{k}{n} \\ &&&= n\binom{m}{n}^{-1} \sum_{k=n+1}^{m+1} \binom{k-1}{n+1-1} \\ &&&= n\binom{m}{n}^{-1} \binom{m+1}{n+1} \\ &&&= \frac{n(m+1)}{n+1} \end{align*}
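Both the pmf and the expectation \(\frac{n(m+1)}{n+1}\) can be confirmed by enumerating every possible set of marks for small values. A sketch (the choice \(m = 9\), \(n = 4\) is an arbitrary illustration):

```python
from itertools import combinations
from fractions import Fraction
from math import comb

m, n = 9, 4
# Enumerate every set of n distinct marks from 1..m and tally the maximum.
outcomes = list(combinations(range(1, m + 1), n))
for k in range(n, m + 1):
    count = sum(max(c) == k for c in outcomes)
    assert Fraction(count, len(outcomes)) == Fraction(comb(k - 1, n - 1), comb(m, n))

# Exact expectation of the highest mark K
expected_K = sum(Fraction(max(c), len(outcomes)) for c in outcomes)
assert expected_K == Fraction(n * (m + 1), n + 1)
```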

1994 Paper 1 Q13

I have a bag containing \(M\) tokens, \(m\) of which are red. I remove \(n\) tokens from the bag at random without replacement. Let \[ X_{i}=\begin{cases} 1 & \mbox{ if the ith token I remove is red;}\\ 0 & \mbox{ otherwise.} \end{cases} \] Let \(X\) be the total number of red tokens I remove.

  1. Explain briefly why \(X=X_{1}+X_{2}+\cdots+X_{n}.\)
  2. Find the expectation \(\mathrm{E(}X_{i}).\)
  3. Show that \(\mathrm{E}(X)=mn/M\).
  4. Find \(\mathrm{P}(X=k)\) for \(k=0,1,2,\ldots,n\).
  5. Deduce that \[ \sum_{k=1}^{n}k\binom{m}{k}\binom{M-m}{n-k}=m\binom{M-1}{n-1}. \]


Solution:

  1. The left hand side is the total number of red tokens removed. The right hand side adds \(1\) for each of the \(n\) removals in which the token is red and \(0\) otherwise, so it counts the same total. Therefore the two sides are equal.
  2. \(\E[X_i] = \mathbb{P}(i\text{th token is red}) = \frac{m}{M}\) (since there is nothing special about the \(i\)th token).
  3. Therefore \(\E[X] = \E[X_1 + \cdots + X_n] = n\E[X_i] = \frac{nm}{M}\)
  4. \(\mathbb{P}(X=k) = \binom{m}{k}\binom{M-m}{n-k}/\binom{M}{n}\) since this is the number of ways we can choose \(k\) of the \(m\) red objects, \(n-k\) of the \(M-m\) non-red objects divided by the total number of ways we can choose our \(n\) tokens.
  5. \(\,\) \begin{align*} && \frac{mn}{M} &= \E[X] \\ &&&= \sum_{k=1}^n k \mathbb{P}(X=k) \\ &&&= \sum_{k=1}^n k \binom{m}{k}\binom{M-m}{n-k}/\binom{M}{n} \\ \Rightarrow && \sum_{k=1}^n k \binom{m}{k}\binom{M-m}{n-k} &= m \frac{n}{M} \binom{M}{n} = m \binom{M-1}{n-1} \end{align*}
This question is a nice example of how to find the mean of the hypergeometric distribution.
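The identity in part 5 is easy to confirm directly with `math.comb` over a small grid (the parameter ranges are arbitrary):

```python
from math import comb

for M in range(2, 15):
    for m in range(1, M + 1):
        for n in range(1, M + 1):
            # sum_k k C(m,k) C(M-m,n-k) should equal m C(M-1,n-1)
            lhs = sum(k * comb(m, k) * comb(M - m, n - k) for k in range(1, n + 1))
            assert lhs == m * comb(M - 1, n - 1)
```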

1988 Paper 3 Q16

Balls are chosen at random without replacement from an urn originally containing \(m\) red balls and \(M-m\) green balls. Find the probability that exactly \(k\) red balls will be chosen in \(n\) choices \((0\leqslant k\leqslant m,0\leqslant n\leqslant M).\) The random variables \(X_{i}\) \((i=1,2,\ldots,n)\) are defined for \(n\leqslant M\) by \[ X_{i}=\begin{cases} 0 & \mbox{ if the \(i\)th ball chosen is green}\\ 1 & \mbox{ if the \(i\)th ball chosen is red. } \end{cases} \] Show that

  1. \(\mathrm{P}(X_{i}=1)=\dfrac{m}{M}.\)
  2. \(\mathrm{P}(X_{i}=1\mbox{ and }X_{j}=1)=\dfrac{m(m-1)}{M(M-1)}\), for \(i\neq j\).
Find the mean and variance of the random variable \(X\) defined by \[ X=\sum_{i=1}^{n}X_{i}. \]


Solution: There are \(\displaystyle \binom{m}{k} \binom{M-m}{n-k}\) ways to choose \(k\) red and \(n-k\) green balls, out of a total of \(\displaystyle \binom{M}{n}\) ways to choose \(n\) balls. Therefore the probability is: \[ \mathbb{P}(\text{exactly }k\text{ red balls in }n\text{ choices}) = \frac{\binom{m}{k} \binom{M-m}{n-k}}{ \binom{M}{n}}\]

  1. Note that there is nothing special about the \(i\)th ball chosen: applying the permutation that swaps the first and \(i\)th draws maps each sequence of draws to an equally likely sequence, so the \(i\)th ball has the same distribution as the first. Therefore \(\mathbb{P}(X_i = 1) = \mathbb{P}(X_1 = 1) = \frac{m}{M}\).
  2. Similarly we could apply a permutation to all sequences which takes the \(i\)th ball to the first ball and the \(j\)th ball to the second ball, therefore: \begin{align*} \mathbb{P}(X_i = 1, X_j = 1) &= \mathbb{P}(X_1 = 1, X_2 = 1) \\ &= \mathbb{P}(X_1 = 1) \cdot \mathbb{P}(X_2 = 1 | X_1 = 1) \\ &= \frac{m}{M} \cdot \frac{m-1}{M-1} \\ &= \frac{m(m-1)}{M(M-1)} \end{align*}
So: \begin{align*} \mathbb{E}(X) &= \mathbb{E}(\sum_{i=1}^{n}X_{i}) \\ &= \sum_{i=1}^{n}\mathbb{E}(X_{i}) \\ &= \sum_{i=1}^{n} 1\cdot\mathbb{P}(X_i = 1) \\ &= \sum_{i=1}^{n} \frac{m}{M} \\ &= \frac{mn}{M} \end{align*} and \begin{align*} \mathbb{E}(X^2) &= \mathbb{E}\left[\left(\sum_{i=1}^{n}X_{i} \right)^2 \right] \\ &= \mathbb{E}\left[\sum_{i=1}^n X_i^2 + 2 \sum_{i < j} X_i X_j \right] \\ &= \sum_{i=1}^n \mathbb{E}(X_i^2) + 2 \sum_{i < j} \mathbb{E}(X_i X_j) \\ &= \frac{nm}{M} + n(n-1) \frac{m(m-1)}{M(M-1)} \\ \textrm{Var}(X) &= \mathbb{E}(X^2) - (\mathbb{E}(X))^2 \\ &= \frac{nm}{M} + n(n-1) \frac{m(m-1)}{M(M-1)} - \frac{n^2m^2}{M^2} \\ &= \frac{nm}{M} \left (1-\frac{nm}{M}+(n-1)\frac{m-1}{M-1} \right) \\ &= \frac{nm}{M} \left ( \frac{M(M-1)-(M-1)nm+(n-1)(m-1)M}{M(M-1)} \right) \\ &= \frac{nm}{M} \frac{(M-m)(M-n)}{M(M-1)} \\ &= n \frac{m}{M} \frac{M-m}{M} \frac{M-n}{M-1} \end{align*} Note: This is a very nice way of deriving the mean and variance of the hypergeometric distribution
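The mean and variance just derived match what one gets by computing the moments straight from the hypergeometric pmf. A sketch in exact arithmetic (the parameter ranges and the name `hyper_moments` are our own choices):

```python
from fractions import Fraction
from math import comb

def hyper_moments(M, m, n):
    """Exact mean and variance of the number of red balls in n draws
    without replacement, computed directly from the pmf."""
    mean = Fraction(0)
    second = Fraction(0)
    for k in range(0, n + 1):
        p = Fraction(comb(m, k) * comb(M - m, n - k), comb(M, n))
        mean += k * p
        second += k * k * p
    return mean, second - mean**2

for M in range(2, 10):
    for m in range(0, M + 1):
        for n in range(1, M + 1):
            mean, var = hyper_moments(M, m, n)
            assert mean == Fraction(n * m, M)
            # variance formula from the solution above
            assert var == Fraction(n * m, M) * Fraction(M - m, M) * Fraction(M - n, M - 1)
```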