
2025 Paper 2 Q11

  1. By considering the sum of a geometric series, or otherwise, show that \[\sum_{r=1}^{\infty} rx^{r-1} = \frac{1}{(1-x)^2} \quad \text{for } |x| < 1.\]
  2. Ali plays a game with a fair \(2k\)-sided die. He rolls the die until the first \(2k\) appears. Ali wins if all the numbers he rolls are even.
    1. Find the probability that Ali wins the game. If Ali wins the game, he earns £1 for each roll, including the final one. If he loses, he earns nothing.
    2. Find Ali's expected earnings from playing the game.
  3. Find a simplified expression for \[1 + 2\binom{n}{1}x + 3\binom{n}{2}x^2 + \ldots + (n+1)x^n,\] where \(n\) is a positive integer.
  4. Zen plays a different game with a fair \(2k\)-sided die. She rolls the die until the first \(2k\) appears, and wins if the numbers rolled are strictly increasing in size. For example, if \(k = 3\), she wins if she rolls 2, 6 or 1, 4, 5, 6, but not if she rolls 1, 4, 2, 6 or 1, 3, 3, 6. If Zen wins the game, she earns £1 for each roll, including the final one. If she loses, she earns nothing. Find Zen's expected earnings from playing the game.
  5. Using the approximation \[\left(1 + \frac{1}{n}\right)^n \approx e \quad \text{for large } n,\] show that, when \(k\) is large, Zen's expected earnings are a little over 35\% more than Ali's expected earnings.


Solution:

  1. Note that, \begin{align*} && \sum_{r = 0}^\infty x^r &= \frac{1}{1-x} && |x| < 1\\ \underbrace{\Rightarrow}_{\frac{\d}{\d x}} && \sum_{r = 0}^\infty rx^{r-1} &= \frac{1}{(1-x)^2} && |x| < 1\\ && \sum_{r = 1}^\infty rx^{r-1} &= \frac{1}{(1-x)^2} && |x| < 1\\ \end{align*}
    1. \begin{align*} && \mathbb{P}(\text{Ali wins in }s\text{ rounds}) &= \left ( \frac{k-1}{2k} \right)^{s-1} \frac{1}{2k} \\ \Rightarrow && \mathbb{P}(\text{Ali wins}) &= \sum_{s=1}^\infty \mathbb{P}(\text{Ali wins in }s\text{ rounds}) \\ &&&=\sum_{s=1}^\infty \left ( \frac{k-1}{2k} \right)^{s-1} \frac{1}{2k} \\ &&&= \frac{1}{2k} \sum_{s=0}^\infty \left ( \frac{k-1}{2k} \right)^{s} \\ &&&= \frac{1}{2k} \frac{1}{1 - \frac{k-1}{2k}} \\ &&&= \frac{1}{2k - (k-1)} \\ &&&= \frac{1}{k+1} \end{align*}
    2. \begin{align*} \mathbb{E}(\text{Ali score}) &= \sum_{s=1}^{\infty} s \mathbb{P}(\text{Ali wins in }s\text{ rounds}) \\ &= \sum_{s=1}^{\infty} s \left ( \frac{k-1}{2k} \right)^{s-1} \frac{1}{2k} \\ &= \frac{1}{2k} \frac{1}{\left (1 - \frac{k-1}{2k} \right)^2} \\ &= \frac{2k}{(k+1)^2} \end{align*}
  2. \begin{align*} && (1+x)^{n} &= \sum_{k=0}^n \binom{n}{k} x^k \\ \Rightarrow && x(1+x)^n &= \sum_{k=0}^n \binom{n}{k} x^{k+1} \\ \Rightarrow && (1+x)^n + nx(1+x)^{n-1} &= \sum_{k=0}^n (k+1)\binom{n}{k} x^k \\ \Rightarrow && (1+x)^{n-1}(1+(n+1)x) &= 1 + 2\binom{n}{1}x + 3\binom{n}{2}x^2 + \ldots + (n+1)x^n \end{align*}
  3. \begin{align*} \mathbb{E}(\text{Zen score}) &= \sum_{s=1}^{2k} s \mathbb{P} \left ( \text{Zen gets }s\text{ numbers in increasing order ending with }2k \right) \\ &= \sum_{s=1}^{2k} s \binom{2k-1}{s-1} \frac{1}{(2k)^s} \\ &= \frac{1}{2k}\sum_{s=0}^{2k-1} (s+1) \binom{2k-1}{s} \frac{1}{(2k)^s} \\ &= \frac{1}{2k} \left ( 1 + \frac{1}{2k} \right)^{2k-2} \left ( 1 + (2k-1+1) \frac{1}{2k} \right) \\ &= \frac{1}{k}\left ( 1 + \frac{1}{2k} \right)^{2k-2} \end{align*}
  4. Therefore as \(k \to \infty\) \begin{align*} \frac{\mathbb{E}(\text{Zen score})}{\mathbb{E}(\text{Ali score}) } &= \frac{1}{k}\left ( 1 + \frac{1}{2k} \right)^{2k-2} \big / \frac{2k}{(k+1)^2} \\ &= \frac{(k+1)^2}{2k^2} \cdot \left ( 1 + \frac{1}{2k} \right)^{2k} \cdot \left ( 1 + \frac{1}{2k} \right)^{-2} \\ &\to \frac12 e \approx 2.7/2 = 1.35 \end{align*} ie Zen's expected earnings are \(\approx 35\%\) more.
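These closed forms are easy to check numerically. A quick sketch in Python (the function names are mine; `ali_expected` truncates the infinite series, `zen_expected` evaluates the finite sum):

```python
from math import comb

def ali_expected(k, terms=5000):
    # truncated series: E = sum_{s>=1} s * ((k-1)/(2k))^(s-1) * 1/(2k)
    r = (k - 1) / (2 * k)
    return sum(s * r ** (s - 1) / (2 * k) for s in range(1, terms))

def zen_expected(k):
    # finite sum: E = sum_{s=1}^{2k} s * C(2k-1, s-1) / (2k)^s
    return sum(s * comb(2 * k - 1, s - 1) / (2 * k) ** s for s in range(1, 2 * k + 1))

k = 5
assert abs(ali_expected(k) - 2 * k / (k + 1) ** 2) < 1e-9
assert abs(zen_expected(k) - (1 / k) * (1 + 1 / (2 * k)) ** (2 * k - 2)) < 1e-9

# the ratio of the closed forms tends to e/2 = 1.3591... for large k
big = 100_000
ratio = ((big + 1) ** 2 / (2 * big ** 2)) * (1 + 1 / (2 * big)) ** (2 * big - 2)
assert abs(ratio - 1.3591409) < 1e-3
```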

2020 Paper 3 Q12

\(A\) and \(B\) both toss the same biased coin. The probability that the coin shows heads is \(p\), where \(0 < p < 1\), and the probability that it shows tails is \(q = 1 - p\). Let \(X\) be the number of times \(A\) tosses the coin until it shows heads. Let \(Y\) be the number of times \(B\) tosses the coin until it shows heads.

  1. The random variable \(S\) is defined by \(S = X + Y\) and the random variable \(T\) is the maximum of \(X\) and \(Y\). Find an expression for \(\mathrm{P}(S = s)\) and show that \[ \mathrm{P}(T = t) = pq^{t-1}(2 - q^{t-1} - q^t). \]
  2. The random variable \(U\) is defined by \(U = |X - Y|\), and the random variable \(W\) is the minimum of \(X\) and \(Y\). Find expressions for \(\mathrm{P}(U = u)\) and \(\mathrm{P}(W = w)\).
  3. Show that \(\mathrm{P}(S = 2 \text{ and } T = 3) \neq \mathrm{P}(S = 2) \times \mathrm{P}(T = 3)\).
  4. Show that \(U\) and \(W\) are independent, and show that no other pair of the four variables \(S\), \(T\), \(U\) and \(W\) are independent.

2019 Paper 1 Q11

  1. Two people adopt the following procedure for deciding where to go for a cup of tea: either to a hotel or to a tea shop. Each person has a coin which has a probability \(p\) of showing heads and \(q\) of showing tails (where \(p+q = 1\)). In each round of the procedure, both people toss their coins once. If both coins show heads, then both people go to the hotel; if both coins show tails, then both people go to the tea shop; otherwise, they continue to the next round. This process is repeated until a decision is made. Show that the probability that they make a decision on the \(n\)th round is $$(q^2 + p^2)(2qp)^{n-1}.$$ Show also that the probability that they make a decision on or before the \(n\)th round is at least $$1 - \frac{1}{2^n}$$ whatever the value of \(p\).
  2. Three people adopt the following procedure for deciding where to go for a cup of tea: either to a hotel or to a tea shop. Each person has a coin which has a probability \(p\) of showing heads and \(q\) of showing tails (where \(p + q = 1\)). In the first round of the procedure, all three people toss their coins once. If all three coins show heads, then all three people go to the hotel; if all three coins show tails, then all three people go to the tea shop; otherwise, they continue to the next round. In the next round the two people whose coins showed the same face toss again, but the third person just turns over his or her coin. If all three coins show heads, then all three people go to the hotel; if all three coins show tails, then all three people go to the tea shop; otherwise, they go to the third round. Show that the probability that they make a decision on or before the second round is at least \(\frac{7}{16}\), whatever the value of \(p\).


Solution:

  1. The probability they don't make a decision in a round is \(qp + pq = 2qp\) (TH and HT). The probability they make a decision in a round is \(q^2+p^2\) (TT and HH). Therefore the probability they make a decision on the \(n\)th round is \[ (q^2+p^2)(2qp)^{n-1} \] (\(n-1\) rounds with no decision, followed by a decision). The probability they make a decision on or before the \(n\)th round is \(1\) minus the probability that no decision is made in the first \(n\) rounds, ie \(1 - (2qp)^n\). By AM-GM, \(\sqrt{qp} \leq \frac{p+q}{2} = \frac12 \Rightarrow qp \leq \frac14\), so \(1-(2pq)^n \geq 1 - \frac1{2^n}\)
  2. The probability it's decided in the first round is \(p^3 + q^3\) (HHH, TTT). The probability it's decided in the second round is \(3p^2q \cdot p^2 + 3q^2p \cdot q^2 = 3pq(p^3+q^3)\) (HHT \(\to\) HHH and TTH \(\to\) TTT, with reorderings). Therefore the probability of making a decision in the first or second round is \((p^3+q^3)(1 + 3pq)\). Since \(p^3+q^3 = (p+q)^3 - 3pq(p+q) = 1 - 3pq\), this is \((1-3pq)(1+3pq) = 1 - 9(pq)^2\), which is minimised when \(pq\) is as large as possible, ie when \(p = q = \frac12\). The minimum value is therefore \(1 - 9 \cdot \frac1{16} = \frac{7}{16}\)
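Both lower bounds can be sanity-checked numerically over a grid of \(p\) values (a sketch; variable names are mine):

```python
n = 5
for i in range(1, 100):
    p = i / 100
    q = 1 - p
    # two players: P(decision on or before round n) = 1 - (2pq)^n >= 1 - 2^(-n)
    assert 1 - (2 * p * q) ** n >= 1 - 1 / 2 ** n - 1e-12
    # three players: P(decision in first two rounds) = (p^3+q^3)(1+3pq)
    first_two = (p ** 3 + q ** 3) * (1 + 3 * p * q)
    # with p + q = 1 this equals 1 - 9(pq)^2, minimised at p = q = 1/2
    assert abs(first_two - (1 - 9 * (p * q) ** 2)) < 1e-12
    assert first_two >= 7 / 16 - 1e-12
```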

2018 Paper 2 Q12

In a game, I toss a coin repeatedly. The probability, \(p\), that the coin shows Heads on any given toss is given by \[ p= \frac N{N+1} \,, \] where \(N\) is a positive integer. The outcomes of any two tosses are independent. The game has two versions. In each version, I can choose to stop playing after any number of tosses, in which case I win £\(H\), where \(H\) is the number of Heads I have tossed. However, the game may end before that, in which case I win nothing.

  1. In version 1, the game ends when the coin first shows Tails (if I haven't stopped playing before that). I decide from the start to toss the coin until a total of \(h\) Heads have been shown, unless the game ends before then. Find, in terms of \(h\) and \(p\), an expression for my expected winnings and show that I can maximise my expected winnings by choosing \(h=N\).
  2. In version 2, the game ends when the coin shows Tails on two consecutive tosses (if I haven't stopped playing before that). I decide from the start to toss the coin until a total of \(h\) Heads have been shown, unless the game ends before then. Show that my expected winnings are \[ \frac{ hN^h (N+2)^h}{(N+1)^{2h}} \,.\] In the case \(N=2\,\), use the approximation \(\log_3 2 \approx 0.63\) to show that the maximum value of my expected winnings is approximately £3.


Solution:

  1. Since we either win \(h\) or \(0\), to calculate the expected winnings we just need to calculate the probability that we get \(h\) consecutive heads, therefore: \begin{align*} && \mathbb{E}(\text{winnings}) &= E_h \\ &&&= h \cdot \left ( \frac{N}{N+1} \right)^h \\ && \frac{E_{h+1}}{E_h} &= \frac{h+1}{h }\left ( \frac{N}{N+1} \right) \end{align*} Therefore \(E_h\) is increasing if \(h \leq N\), so we can maximise our winnings by taking \(h = N\). (In fact, we could take \(h = N\) or \(h = N+1\), but arguably \(h = N\) is better as we have the same expected value but lower variance).
  2. We can have up to \(h\) tails appearing (if we imagine slots for tails of the form \(\underbrace{\_H\_H\_H\_\cdots\_H}_{h\text{ spaces and }h\, H}\)), so we have \begin{align*} && \mathbb{P}(\text{wins}) &= \sum_{t = 0}^h \mathbb{P}(\text{wins and } t\text{ tails}) \\ &&&= \sum_{t = 0}^h\binom{h}{t} \left ( \frac{N}{N+1} \right)^h\left ( \frac{1}{N+1} \right)^t \\ &&&= \left ( \frac{N}{N+1} \right)^h \sum_{t = 0}^h\binom{h}{t}\left ( \frac{1}{N+1} \right)^t \cdot 1^{h-t} \\ &&&= \left ( \frac{N}{N+1} \right)^h \left ( 1 + \left ( \frac{1}{N+1} \right) \right)^h \\ &&&= \left ( \frac{N}{N+1} \right)^h \left ( \frac{N+2}{N+1}\right)^h \\ &&&= \frac{N^h(N+2)^h}{(N+1)^{2h}} \\ \Rightarrow && \E(\text{winnings}) &= h \cdot \frac{N^h(N+2)^h}{(N+1)^{2h}} \end{align*} If \(N = 2\), we have \begin{align*} && \E(\text{winnings}) &= E_h \\ &&&= h \cdot \frac{2^h\cdot2^{2h}}{3^{2h}}\\ &&&= h \cdot \frac{2^{3h}}{3^{2h}} \\ \Rightarrow && \frac{E_{h+1}}{E_h} &= \frac{h+1}{h} \cdot \frac{8}{9} \\ \end{align*} so \(E_{h+1} \geq E_h\) exactly when \(h \leq 8\). Therefore to maximise the winnings we should take \(h = 8\), and the expected winnings will be: \begin{align*} && E_8 &= 8 \cdot \frac{2^{24}}{3^{16}} = \frac{2^{27}}{3^{16}} \\ \Rightarrow && \log_3 E_8 &= 27 \log_3 2 - 16 \\ &&&\approx 27 \cdot 0.63 - 16 \\ &&&\approx 17 - 16 \\ &&&= 1 \\ \Rightarrow && E_8 &\approx 3 \end{align*}
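The maximisation for \(N=2\) is easy to confirm by brute force (a sketch; `winnings` is my name for \(E_h\)):

```python
def winnings(h):
    # version 2 with N = 2: expected winnings h * 2^(3h) / 3^(2h)
    return h * 8 ** h / 9 ** h

best = max(range(1, 60), key=winnings)
assert best == 8
assert abs(winnings(8) - winnings(9)) < 1e-12   # E_8 = E_9: an exact tie
assert abs(winnings(8) - 8 * 2 ** 24 / 3 ** 16) < 1e-12
assert 3.1 < winnings(8) < 3.2                  # "approximately 3"
```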

2017 Paper 2 Q13

In a television game show, a contestant has to open a door using a key. The contestant is given a bag containing \(n\) keys, where \(n\ge2\). Only one key in the bag will open the door. There are three versions of the game. In each version, the contestant starts by choosing a key at random from the bag.

  1. In version 1, after each failed attempt at opening the door the key that has been tried is put back into the bag and the contestant again selects a key at random from the bag. By considering the binomial expansion of \(( 1 - q)^{-2}\), or otherwise, find the expected number of attempts required to open the door.
  2. In version 2, after each failed attempt at opening the door the key that has been tried is put aside and the contestant selects another key at random from the bag. Find the expected number of attempts required to open the door.
  3. In version 3, after each failed attempt at opening the door the key that has been tried is put back into the bag and another incorrect key is added to the bag. The contestant then selects a key at random from the bag. Show that the probability that the contestant draws the correct key at the \(k\)th attempt is \[ \frac{n-1}{(n+k-1)(n+k-2)} \,.\] Show also, using partial fractions, that the expected number of attempts required to open the door is infinite. You may use without proof the result that \(\displaystyle\sum_{m=1}^N \dfrac 1 m \to \infty \,\) as \(N\to \infty\,\).


Solution:

  1. The probability they pull the key out on the \(k\)th attempt will be \(\left ( \frac{n-1}{n} \right)^{k-1} \frac1n\), so we want: \begin{align*} \E[G_1] &= \sum_{k=1}^{\infty} k \cdot \left ( \frac{n-1}{n} \right)^{k-1} \frac1n \\ &= \frac{1}n \sum_{k=1}^{\infty} k \cdot \left ( \frac{n-1}{n} \right)^{k-1} \\ &= \frac1n \frac{1}{\left (1 - \frac{n-1}{n} \right)^2} \\ &= \frac{1}{n} \frac{n^2}{1^2} = n \end{align*}
  2. In version 2, the probability the correct key comes out at the \(k\)th attempt is \(\frac1n\) (imagine drawing all \(n\) keys out in order: the correct key is equally likely to occupy any of the \(n\) positions). Therefore \(\E[G_2] = \frac1n (1 + 2 + \cdots + n) = \frac{n+1}{2}\)
  3. The probability the correct key comes out on the \(k\)th attempt is: \begin{align*} && \mathbb{P}(G_3 = k) &= \frac{n-1}{n} \cdot \frac{n}{n+1} \cdot \frac{n+1}{n+2} \cdots \frac{n+k-3}{n+k-2} \cdot \frac{1}{n+k-1} \\ &&&= \frac{n-1}{(n+k-2)(n+k-1)} \\ \\ &&k \cdot \mathbb{P}(G_3 = k) &= \frac{k(n-1)}{(n+k-2)(n+k-1)} \\ &&&= \frac{(n-1)(2-n)}{n+k-2} + \frac{(n-1)^2}{n+k-1} \\ &&&= \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} + \frac{n-1}{n+k-2} \\ \Rightarrow && \E[G_3] &= \sum_{k=1}^{\infty} k \cdot \mathbb{P}(G_3 = k) \\ &&&= \sum_{k=1}^{\infty} \left ( \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} + \frac{n-1}{n+k-2} \right) \\ &&&= \underbrace{\sum_{k=1}^{\infty} \left ( \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} \right)}_{\text{telescopes to } -(n-1)} +\underbrace{\sum_{k=1}^{\infty} \frac{n-1}{n+k-2}}_{\to \infty} \\ \end{align*} The first sum telescopes to the finite value \(-(n-1)\), while the second is \((n-1)\sum_{m \geq n-1} \frac1m\), which diverges, so the expected number of attempts is infinite.
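Numerically, the probabilities in version 3 sum to \(1\) (the telescoping), while the partial sums of \(k \cdot \mathbb{P}(G_3 = k)\) keep growing roughly like \((n-1)\log K\), illustrating the infinite expectation. A sketch (`p3` is my name):

```python
def p3(n, k):
    # P(correct key drawn at the k-th attempt) in version 3
    return (n - 1) / ((n + k - 1) * (n + k - 2))

n = 4
# the probabilities telescope to 1 (tail after K attempts is (n-1)/(n+K-1))
assert abs(sum(p3(n, k) for k in range(1, 100_000)) - 1) < 1e-4
# partial sums of k * P(G3 = k) grow without bound
s1 = sum(k * p3(n, k) for k in range(1, 1_000))
s2 = sum(k * p3(n, k) for k in range(1, 100_000))
assert s2 > s1 + 10
```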

2015 Paper 1 Q13

A fair die with faces numbered \(1, \ldots, 6\) is thrown repeatedly. The events \(A\), \(B\), \(C\), \(D\) and \(E\) are defined as follows. \begin{align*} A: && \text{the first 6 arises on the \(n\)th throw.}\\ B: && \text{at least one 5 arises before the first 6.} \\ C: && \text{at least one 4 arises before the first 6.}\\ D: && \text{exactly one 5 arises before the first 6.}\\ E: && \text{exactly one 4 arises before the first 6.} \end{align*} Evaluate the following probabilities:

  1. \(\P(A)\)
  2. \(\P(B)\)
  3. \(\P(B\cap C)\)
  4. \(\P(D)\)
  5. \(\P(D\cup E)\)
For some parts of this question, you may want to make use of the binomial expansion in the form: \[ (1-x)^{-n} = 1 +nx +\frac {n(n+1)}2 x^2 + \cdots + \frac {(n+r-1)!}{r! (n-1)!}x^r +\cdots\ .\]


Solution:

  1. \(\,\) \begin{align*} \mathbb{P}(A) &= \mathbb{P}(\text{the first 6 arises on the \(n\)th throw.}) \\ &= \mathbb{P}(\text{\(n-1\) not 6s, followed by a 6.})\\ &= \left ( \frac56\right)^{n-1} \cdot \frac16 = \frac{5^{n-1}}{6^n} \end{align*}
  2. There is nothing special about \(5\) or \(6\), so which comes first is \(50:50\), therefore this probability is \(\frac12\)
  3. There is nothing special about \(4\), \(5\) or \(6\) so this is the probability that \(6\) appears last out of these three numbers, hence \(\frac13\)
  4. \(\,\) \begin{align*} \mathbb{P}(D) &= \mathbb{P}(\text{exactly one 5 arises before the first 6.}) \\ &=\sum_{n=2}^{\infty} \mathbb{P}(\text{exactly one 5 arises before the first 6 which appears on the \(n\)th roll.}) \\ &= \sum_{n=2}^{\infty} \binom{n-1}{1} \left ( \frac46 \right)^{n-2} \frac16 \cdot \frac16 \\ &= \frac1{36} \sum_{n=2}^{\infty} (n-1) \left ( \frac23 \right)^{n-2} \\ &= \frac1{36} \sum_{n=1}^{\infty} n \left ( \frac23 \right)^{n-1} \\ &= \frac1{36} \frac{1}{\left ( 1- \frac23 \right)^2} = \frac14 \end{align*}
  5. \(\,\) \begin{align*} \mathbb{P}(D \cup E) &= \mathbb{P}(D) + \mathbb{P}(E) - \mathbb{P}(D \cap E) \\ &= \frac12 - \mathbb{P}(D \cap E) \\ &=\frac12 - \sum_{n=3}^{\infty} \mathbb{P}(\text{exactly one 5 and one 4 arises before the first 6 which appears on the \(n\)th roll.}) \\ &=\frac12 - \sum_{n=3}^{\infty} 2\binom{n-1}{2} \left ( \frac36 \right)^{n-3}\cdot \frac16 \cdot \frac16 \cdot \frac16 \\ &=\frac12 - \frac2{6^3}\sum_{n=3}^{\infty} \frac{(n-1)(n-2)}{2} \left ( \frac12 \right)^{n-3} \\ &=\frac12 - \frac2{6^3}\frac{1}{(1-\tfrac12)^3}\\ &= \frac12 - \frac{2}{27} \\ &= \frac{23}{54} \end{align*}
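The answers \(\frac14\) and \(\frac{23}{54}\) can be confirmed by truncating the two series (a sketch; the function names are mine):

```python
from math import comb

def pD(terms=500):
    # P(exactly one 5 before the first 6, 6 appearing on roll n), summed over n
    return sum((n - 1) * (4 / 6) ** (n - 2) * (1 / 6) ** 2 for n in range(2, terms))

def pDE(terms=500):
    # P(exactly one 5 AND exactly one 4 before the first 6)
    return sum(2 * comb(n - 1, 2) * (3 / 6) ** (n - 3) * (1 / 6) ** 3
               for n in range(3, terms))

assert abs(pD() - 1 / 4) < 1e-9
assert abs(pDE() - 2 / 27) < 1e-9
assert abs(1 / 2 - pDE() - 23 / 54) < 1e-9   # P(D u E) = 1/2 - P(D n E)
```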

2014 Paper 3 Q13

I play a game which has repeated rounds. Before the first round, my score is \(0\). Each round can have three outcomes:

  1. my score is unchanged and the game ends;
  2. my score is unchanged and I continue to the next round;
  3. my score is increased by one and I continue to the next round.
The probabilities of these outcomes are \(a\), \(b\) and \(c\), respectively (the same in each round), where \(a+b+c=1\) and \(abc\ne0\). The random variable \(N\) represents my score at the end of a randomly chosen game. Let \(G(t)\) be the probability generating function of \(N\).
  1. Suppose in the first round, the game ends. Show that the probability generating function conditional on this happening is 1.
  2. Suppose in the first round, the game continues to the next round with no change in score. Show that the probability generating function conditional on this happening is \(G(t)\).
  3. By comparing the coefficients of \(t^n\), show that $ G(t) = a + bG(t) + ctG(t)\,. $ Deduce that, for \(n\ge0\), \[ P(N=n) = \frac{ac^n}{(1-b)^{n+1}}\,. \]
  4. Show further that, for \(n\ge0\), \[ P(N=n) = \frac{\mu^n}{(1+\mu)^{n+1}}\,, \] where \(\mu=\E(N)\).


Solution:

  1. If the game ends in the first round then the score is exactly \(0\) and the pgf is \(1\cdot t^0 = 1\)
  2. If the game moves onto the next round with no change in the first round then it's as if nothing happened, therefore the pgf is the original pgf \(G(t)\)
  3. If the game moves into the next round with the score increased by one, then the pgf is \(tG(t)\) since all the scores are increased by \(1\). Therefore \begin{align*} && G(t) &= \E[t^N] \\ &&&= \E[\E[t^N | \text{first round}]] \\ &&&= a + bG(t) + ctG(t) \\ \Rightarrow && G(t)(1-b-ct) &= a \\ \Rightarrow && G(t) &= \frac{a}{(1-b)-ct} \\ &&&= \frac{a}{(1-b)} \frac{1}{1- \left(\frac{c}{1-b}\right)t} \\ &&&= \sum_{n=0}^\infty \frac{a}{1-b} \frac{c^n}{(1-b)^n} t^n\\ &&&= \sum_{n=0}^{\infty} \frac{ac^n}{(1-b)^{n+1}}t^n \end{align*} Therefore by comparing coefficients, \(\mathbb{P}(N=n) = \frac{ac^n}{(1-b)^{n+1}}\)
  4. \(\,\) \begin{align*} && \E[N] &= G'(1) \\ &&&= \frac{ac}{((1-b)-c)^2} \\ &&&= \frac{ac}{a^2} = \frac{c}{a} \\ \\ && \frac{\mu^n}{(1+\mu)^{n+1}} &= \frac{c^na^{-n}}{(a+c)^{n+1}a^{-n-1}} \\ &&&= \frac{ac^n}{(a+c)^{n+1}}\\ &&&= \frac{ac^n}{(1-b)^{n+1}}\\ &&&= \mathbb{P}(N=n) \end{align*} as required

2013 Paper 2 Q13

A biased coin has probability \(p\) of showing a head and probability \(q\) of showing a tail, where \(p\ne0\), \(q\ne0\) and \(p\ne q\). When the coin is tossed repeatedly, runs occur. A straight run of length \(n\) is a sequence of \(n\) consecutive heads or \(n\) consecutive tails. An alternating run of length \(n\) is a sequence of length \(n\) alternating between heads and tails. An alternating run can start with either a head or a tail. Let \(S\) be the length of the longest straight run beginning with the first toss and let \(A\) be the length of the longest alternating run beginning with the first toss.

  1. Explain why \(\P(A=1)=p^2+q^2\) and find \(\P(S=1)\). Show that \(\P(S=1)<\P(A=1)\).
  2. Show that \(\P(S=2)= \P(A=2)\) and determine the relationship between \(\P(S=3)\) and \( \P(A=3)\).
  3. Show that, for \(n>1\), \(\P(S=2n)>\P(A=2n)\) and determine the corresponding relationship between \(\P(S=2n+1)\) and \(\P(A=2n+1)\). [You are advised not to use \(p+q=1\) in this part.]


Solution:

  1. The only way \(A = 1\) is if we get \(HH\) or \(TT\), which has probability \(p^2+q^2\). The only way we get \(S=1\) is if we have \(HT\) or \(TH\), ie \(2pq\). Since \((p-q)^2 = p^2 + q^2 - 2pq >0\) (as \(p \neq q\)) we must have \(\mathbb{P}(A=1) > \mathbb{P}(S=1)\).
  2. \(\,\) \begin{align*} \mathbb{P}(S=2) &= p^2q + q^2p \\ \mathbb{P}(A=2) &= pq^2 + qp^2 = \mathbb{P}(S=2) \\ \\ \mathbb{P}(S=3) &= p^3q + q^3p = pq(p^2+q^2) \\ \mathbb{P}(A=3) &= pqp^2 + qpq^2 = pq(p^2+q^2) = \mathbb{P}(S=3) \end{align*}
  3. For \(n > 1\) we must have \begin{align*} && \mathbb{P}(S = 2n) &= p^{2n}q + q^{2n}p \\ && \mathbb{P}(A=2n) &= (pq)^{n}q + (qp)^{n}p \\ &&&= p^nq^{n+1} + q^np^{n+1} \\ && \mathbb{P}(S = 2n) &> \mathbb{P}(A = 2n) \\ \Leftrightarrow && p^{2n}q + q^{2n}p & > p^nq^{n+1} + q^np^{n+1}\\ \Leftrightarrow && 0 & < p^{2n}q+q^{2n}p - p^nq^{n+1} -q^np^{n+1}\\ &&&= (p^n-q^n)(qp^n - pq^n) \end{align*} which holds because both factors have the same sign as \(p-q\) (note \(qp^n - pq^n = pq(p^{n-1}-q^{n-1})\) and \(n > 1\)), so their product is positive. \begin{align*} && \mathbb{P}(S=2n+1) &= p^{2n+1}q + q^{2n+1}p \\ && \mathbb{P}(A=2n+1) &= (pq)^{n}p^2 + (qp)^{n}q^2 \\ &&&= p^{n+2}q^n + q^{n+2}p^n \end{align*} The same factoring logic applies: here the difference is \(\mathbb{P}(S=2n+1) - \mathbb{P}(A=2n+1) = pq(p^{n-1}-q^{n-1})(p^{n+1}-q^{n+1}) > 0\), so \(\mathbb{P}(S = 2n+1) > \mathbb{P}(A=2n+1)\)
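The equalities and inequalities can be spot-checked numerically for a sample \(p\) (a sketch; `pS` and `pA` are my names):

```python
p, q = 0.3, 0.7

def pS(m):
    # longest straight run from the first toss has length exactly m
    return p ** m * q + q ** m * p

def pA(m):
    # longest alternating run from the first toss has length exactly m
    n = m // 2
    if m % 2 == 0:
        return (p * q) ** n * q + (q * p) ** n * p
    return (p * q) ** n * p ** 2 + (q * p) ** n * q ** 2

assert abs(pA(1) - (p ** 2 + q ** 2)) < 1e-12     # part (i)
assert abs(pS(2) - pA(2)) < 1e-12                 # part (ii): equal
assert abs(pS(3) - pA(3)) < 1e-12                 # part (ii): equal
for n in range(2, 12):                            # part (iii)
    assert pS(2 * n) > pA(2 * n)
    assert pS(2 * n + 1) > pA(2 * n + 1)
```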

2011 Paper 3 Q12

The random variable \(N\) takes positive integer values and has pgf (probability generating function) \(\G(t)\). The random variables \(X_i\), where \(i=1\), \(2\), \(3\), \(\ldots,\) are independently and identically distributed, each with pgf \({H}(t)\). The random variables \(X_i\) are also independent of \(N\). The random variable \(Y\) is defined by \[ Y= \sum_{i=1}^N X_i \;. \] Given that the pgf of \(Y\) is \(\G(H(t))\), show that \[ \E(Y) = \E(N)\E(X_i) \text{ and } \var(Y) = \var(N)\big(\E(X_i)\big)^2 + \E(N) \var(X_i) \,.\] A fair coin is tossed until a head occurs. The total number of tosses is \(N\). The coin is then tossed a further \(N\) times and the total number of heads in these \(N\) tosses is \(Y\). Find in this particular case the pgf of \(Y\), \(\E(Y)\), \(\var(Y)\) and \(\P(Y=r)\).


Solution: Recall that for a random variable \(Z\) with pgf \(F(t)\) we have \(F(1) = 1\), \(\E[Z] = F'(1)\) and \(\E[Z^2] = F''(1) +F'(1)\) so \begin{align*} && \E[Y] &= G'(H(1))H'(1) \\ &&&= G'(1)H'(1) \\ &&&= \E[N]\E[X_i] \\ \\ && \E[Y^2] &= G''(H(1))(H'(1))^2+G'(H(1))H''(1) + G'(H(1))H'(1) \\ &&&= G''(1)(H'(1))^2+G'(1)H''(1) + G'(1)H'(1) \\ &&&= (\E[N^2]-\E[N])(\E[X_i])^2 + \E[N](\E[X_i^2]-\E[X_i]) + \E[N]\E[X_i] \\ &&&= (\E[N^2]-\E[N])(\E[X_i])^2 + \E[N]\E[X_i^2] \\ && \var[Y] &= (\E[N^2]-\E[N])(\E[X_i])^2 + \E[N]\E[X_i^2] - (\E[N])^2(\E[X_i])^2\\ &&&= (\var[N]+(\E[N])^2-\E[N])(\E[X_i])^2 + \E[N](\var[X_i]+\E[X_i]^2) - (\E[N])^2(\E[X_i])^2\\ &&&= \var[N](\E[X_i])^2 + \E[N]\var[X_i] \end{align*} Notice that \(N \sim Geo(\tfrac12)\) on \(\{1, 2, \ldots\}\) and \(Y = \sum_{i=1}^N X_i\) where the \(X_i\) are Bernoulli with parameter \(\tfrac12\). We have \(G(t) = \frac{\frac12 t}{1-\frac12 t}\) and \(H(t) = \frac12+\frac12 t\), so the pgf of \(Y\) is \[ G(H(t)) = \frac{\frac14(1+t)}{1 - \frac14(1+t)} = \frac{1+t}{3-t}\,. \] \begin{align*} && \E[X_i] &= \frac12\\ && \var[X_i] &= \frac14 \\ && \E[N] &= 2 \\ && \var[N] &= 2 \\ \\ && \E[Y] &= 2 \cdot \frac12 = 1 \\ && \var[Y] &= 2 \cdot \frac14 + 2 \cdot \frac14 = 1 \end{align*} Finally, expanding \(\frac{1+t}{3-t} = \frac{1+t}{3} \cdot \frac{1}{1 - \frac{t}{3}} = \frac13 (1+t) \sum_{r=0}^\infty \left ( \frac{t}{3} \right)^r\) and comparing coefficients gives \(\mathbb{P}(Y=0) = \frac13\) and \(\mathbb{P}(Y=r) = \frac13 \left ( \frac1{3^r} + \frac1{3^{r-1}} \right) = \frac{4}{3^{r+1}}\) for \(r \geq 1\).
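As an independent check, \(\mathbb{P}(Y=r)\) can be computed directly by conditioning on \(N\): \(\mathbb{P}(Y=r) = \sum_{n \geq 1} \left(\tfrac12\right)^n \binom{n}{r} \left(\tfrac12\right)^n\). The truncated sums below agree with \(\mathbb{P}(Y=0)=\frac13\), \(\mathbb{P}(Y=r)=\frac{4}{3^{r+1}}\) for \(r \geq 1\), and with \(\E(Y)=\var(Y)=1\) (a sketch; `pY` is my name):

```python
from math import comb

def pY(r, terms=400):
    # P(Y = r) = sum over n >= 1 of P(N = n) * P(Bin(n, 1/2) = r)
    return sum(0.5 ** n * comb(n, r) * 0.5 ** n for n in range(max(r, 1), terms))

assert abs(pY(0) - 1 / 3) < 1e-12
for r in range(1, 8):
    assert abs(pY(r) - 4 / 3 ** (r + 1)) < 1e-12
mean = sum(r * pY(r) for r in range(60))
var = sum(r * r * pY(r) for r in range(60)) - mean ** 2
assert abs(mean - 1) < 1e-9 and abs(var - 1) < 1e-9
```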

2010 Paper 1 Q12

A discrete random variable \(X\) takes only positive integer values. Define \(\E(X)\) for this case, and show that \[\E(X) =\sum^{\infty}_{n=1}\P\left(X\ge n \right).\] I am collecting toy penguins from cereal boxes. Each box contains either one daddy penguin or one mummy penguin. The probability that a given box contains a daddy penguin is \(p\) and the probability that a given box contains a mummy penguin is \(q\), where \(p\ne0\), \(q\ne0\) and \(p+q=1\,\). Let \(X\) be the number of boxes that I need to open to get at least one of each kind of penguin. Show that \(\P(X\ge 4)= p^{3}+q^{3}\), and that \[ \E(X)=\frac{1}{pq}-1.\, \] Hence show that \(\E(X)\ge 3\,\).


Solution: \[ \E[X] := \sum_{n=1}^{\infty} n \mathbb{P}(X=n) \] \begin{align*} && \sum^{\infty}_{n=1}\mathbb{P}\left(X\ge n \right) &= \sum^{\infty}_{n=1}\sum_{k=n}^\infty \mathbb{P}(X=k) \\ &&&= \sum_{k=1}^\infty k \cdot \mathbb{P}(X=k) \\ &&&= \E[X] \end{align*} (swapping the order of summation: the term \(\mathbb{P}(X=k)\) appears once for each \(n \leq k\), ie \(k\) times). \begin{align*} &&\mathbb{P}(X \geq 4) &= \mathbb{P}(\text{first 3 are daddies}) +\mathbb{P}(\text{first 3 are mummies}) \\ &&&= p^3 + q^3 \\ \Rightarrow && \E[X] &= \sum_{n=1}^{\infty} \mathbb{P}\left(X\ge n \right) \\ &&&= 1+\sum_{n=2}^{\infty} \left ( p^{n-1} + q^{n-1}\right) \\ &&&= 1+\frac{p}{1-p} + \frac{q}{1-q} \\ &&&= 1+\frac{p}q + \frac{q}p \\ &&&= 1+\frac{p^2+q^2}{pq} \\ &&&= 1+\frac{(p+q)^2-2pq}{pq} \\ &&&= \frac{1}{pq} -1 \\ &&& \underbrace{\geq}_{AM-GM} 4-1 = 3 \end{align*} since AM-GM gives \(pq \leq \left ( \frac{p+q}{2} \right)^2 = \frac14\), ie \(\frac{1}{pq} \geq 4\).
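For a concrete \(p\), both the series for \(\E(X)\) and the lower bound can be checked numerically (a sketch; variable names are mine):

```python
p = 0.3
q = 1 - p
# P(X >= 1) = 1 and P(X >= n) = p^(n-1) + q^(n-1) for n >= 2
expectation = 1 + sum(p ** (n - 1) + q ** (n - 1) for n in range(2, 2000))
assert abs(expectation - (1 / (p * q) - 1)) < 1e-9
# the bound E(X) >= 3 holds for every p in (0, 1)
for i in range(1, 100):
    pp = i / 100
    assert 1 / (pp * (1 - pp)) - 1 >= 3 - 1e-12
```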

2010 Paper 3 Q12

The infinite series \(S\) is given by \[ S = 1 + (1 + d)r + (1 + 2d)r^2 + \cdots + (1+nd)r^n +\cdots\; ,\] for \(\vert r \vert <1\,\). By considering \(S - rS\), or otherwise, prove that \[ S = \frac 1{1-r} + \frac {rd}{(1-r)^2} \,.\] Arthur and Boadicea shoot arrows at a target. The probability that an arrow shot by Arthur hits the target is \(a\); the probability that an arrow shot by Boadicea hits the target is \(b\). Each shot is independent of all others. Prove that the expected number of shots it takes Arthur to hit the target is \(1/a\). Arthur and Boadicea now have a contest. They take alternate shots, with Arthur going first. The winner is the one who hits the target first. The probability that Arthur wins the contest is \(\alpha\) and the probability that Boadicea wins is \(\beta\). Show that \[ \alpha = \frac a {1-a'b'}\,, \] where \(a' = 1-a\) and \(b'=1-b\), and find \(\beta\). Show that the expected number of shots in the contest is \(\displaystyle \frac \alpha a + \frac \beta b\,.\)


Solution: Notice that \begin{align*} && S - rS &= 1 + dr + dr^2 + \cdots \\ &&&= 1 + dr(1 + r+r^2+ \cdots) \\ &&&= 1 + \frac{rd}{1-r} \\ \Rightarrow && S &= \frac{1}{1-r} + \frac{rd}{(1-r)^2} \end{align*} The number of shots Arthur takes is \(\textrm{Geo}(a)\), so its expectation is \(1/a\). The probability Arthur wins is: \begin{align*} \alpha &= a + a'b'a + (a'b')^2a + \cdots \\ &= a(1+a'b' + \cdots) \\ &= \frac{a}{1-a'b'} \\ \\ \beta &= a'b + a'b'a'b + \cdots \\ &= a'b(1+b'a' + (b'a')^2 + \cdots ) \\ &= \frac{a'b}{1-a'b'} \end{align*} The expected number of shots in the contest is: \begin{align*} E &= a + 2a'b + 3a'b'a + 4a'b'a'b + \cdots \\ &= a(1 + 3a'b' + 5(a'b')^2 + \cdots) + 2a'b(1 + 2(a'b') + 3(a'b')^2 + \cdots) \\ &= a \left ( \frac{1}{1-a'b'} + \frac{2a'b'}{(1-a'b')^2} \right) + 2a'b \left ( \frac{1}{1-a'b'} + \frac{a'b'}{(1-a'b')^2}\right) \\ &= \frac{a(1+a'b') + 2a'b}{(1-a'b')^2} \\ &= \frac{(1-a')(1+a'b') + 2a'(1-b')}{(1-a'b')^2} \\ &= \frac{1 + a' - a'b' - a'^2b'}{(1-a'b')^2} \\ &= \frac{(1+a')(1-a'b')}{(1-a'b')^2} \\ &= \frac{1}{1-a'b'} + \frac{a'}{1-a'b'} \\ &= \frac{\alpha}{a} + \frac{\beta}{b} \end{align*} using \(\frac{\alpha}{a} = \frac{1}{1-a'b'}\) and \(\frac{\beta}{b} = \frac{a'}{1-a'b'}\).
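The formulas for \(\alpha\), \(\beta\) and the expected number of shots can be verified by summing directly over the shot on which the contest ends (a sketch; variable names are mine):

```python
a, b = 0.3, 0.4            # arbitrary hit probabilities in (0, 1)
ap, bp = 1 - a, 1 - b      # a' and b'
alpha = a / (1 - ap * bp)
beta = ap * b / (1 - ap * bp)
assert abs(alpha + beta - 1) < 1e-12   # someone eventually wins
# shot 2k+1 is Arthur's (k+1)-th shot, shot 2k+2 is Boadicea's
E = sum((2 * k + 1) * (ap * bp) ** k * a + (2 * k + 2) * (ap * bp) ** k * ap * b
        for k in range(2000))
assert abs(E - (alpha / a + beta / b)) < 1e-9
```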

2009 Paper 3 Q12

  1. Albert tosses a fair coin \(k\) times, where \(k\) is a given positive integer. The number of heads he gets is \(X_1\). He then tosses the coin \(X_1\) times, getting \(X_2\) heads. He then tosses the coin \(X_2\) times, getting \(X_3\) heads. The random variables \(X_4\), \(X_5\), \(\ldots\) are defined similarly. Write down \(\E(X_1)\). By considering \(\E(X_2 \; \big\vert \; X_1 = x_1)\), or otherwise, show that \(\E(X_2) = \frac14 k\). Find \(\displaystyle \sum_{i=1}^\infty \E(X_i)\).
  2. Bertha has \(k\) fair coins. She tosses the first coin until she gets a tail. The number of heads she gets before the first tail is \(Y_1\). She then tosses the second coin until she gets a tail and the number of heads she gets with this coin before the first tail is \(Y_2\). The random variables \(Y_3, Y_4, \ldots\;\), \(Y_k\) are defined similarly, and \(Y= \sum\limits_{i=1}^k Y_i\,\). Obtain the probability generating function of \(Y\), and use it to find \(\E(Y)\), \(\var(Y)\) and \(\P(Y=r)\).


Solution:

  1. \(X_1 \sim B(k, \tfrac12)\), so \(\E[X_1] = \frac{k}{2}\). Note that \(X_2 \mid X_1 = x_1 \sim B(x_1, \tfrac12)\), so \(\E[X_2 \mid X_1 = x_1] = \frac{x_1}{2}\), ie \(\E[X_2 \mid X_1] = \frac12 X_1\). Therefore by the tower law, \(\E[X_2] = \E[\E[X_2 \mid X_1]] = \E[\tfrac12 X_1] = \frac14k\). By the same argument, \(\E[X_n] = \frac1{2^n} k\), and so \begin{align*} && \sum_{i=1}^\infty \E[X_i] &= \sum_{i=1}^{\infty} \frac1{2^i} k \\ &&&= \frac{\frac12 k}{1-\frac12} = k \end{align*}
  2. Note that \(Y_1 \sim Geo(\tfrac12)-1\), which has generating function \(\E[t^{Y_1}] = \E[t^{G-1}] = \frac{\frac12 t}{1-\frac12 t}\cdot\frac1{t} = \frac{\frac12}{1-\frac12t} = \frac{1}{2-t}\). Notice that \begin{align*} && \E \left [ t^Y \right] &= \E \left [ t^{\sum_{i=1}^kY_i} \right] \\ &&&= \prod_{i=1}^k \E[t^{Y_i}] \\ &&&= \frac{1}{(2-t)^k} \end{align*} Therefore \(\E[Y] = G'(1) = k(2-1)^{-(k+1)} = k\) and \(\E[Y^2] = (tG'(t))'|_{t=1} = k(k+1)(2-1)^{-(k+2)}+k(2-1)^{-(k+1)} = k^2+2k\), so \(\var[Y] = k^2+2k - k^2 = 2k\). Finally, comparing coefficients in \(\frac{1}{(2-t)^k} = \frac1{2^k}\left(1-\frac{t}{2}\right)^{-k}\) gives \(\mathbb{P}(Y=r) = \binom{k+r-1}{r} \frac{1}{2^{r+k}}\)
[Note: this second distribution is a negative binomial distribution]
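The negative binomial identities can be confirmed numerically (a sketch; variable names are mine):

```python
from math import comb

k = 4
probs = [comb(k + r - 1, r) / 2 ** (k + r) for r in range(400)]
assert abs(sum(probs) - 1) < 1e-12
mean = sum(r * pr for r, pr in enumerate(probs))
var = sum(r * r * pr for r, pr in enumerate(probs)) - mean ** 2
assert abs(mean - k) < 1e-9       # E(Y) = k
assert abs(var - 2 * k) < 1e-9    # Var(Y) = 2k
```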

2007 Paper 2 Q12

I have two identical dice. When I throw either one of them, the probability of it showing a 6 is \(p\) and the probability of it not showing a 6 is \(q\), where \(p+q=1\). As an experiment to determine \(p\), I throw the dice simultaneously until at least one die shows a 6. If both dice show a six on this throw, I stop. If just one die shows a six, I throw the other die until it shows a 6 and then stop.

  1. Show that the probability that I stop after \(r\) throws is \(pq^{r-1}(2-q^{r-1}-q^r)\), and find an expression for the expected number of throws. [{\bf Note:} You may use the result \(\ds \sum_{r=0}^\infty rx^r = x(1-x)^{-2}\).]
  2. In a large number of such experiments, the mean number of throws was \(m\). Find an estimate for \(p\) in terms of \(m\).


Solution:

  1. \(\,\) \begin{align*} \mathbb{P}(\text{stop after }r\text{ throws}) &= \mathbb{P}(\text{both dice show their 6 on throw }r) + 2\,\mathbb{P}(\text{one die shows its 6 before throw }r\text{, the other on throw }r)\\ &= (q^2)^{r-1} p^2 + 2\cdot q^{r-1} p\cdot(1-q^{r-1}) \\ &= q^{r-1}p\left (2-2q^{r-1}+pq^{r-1} \right) \\ &= q^{r-1}p\left (2-q^{r-1}(2-p) \right) \\ &= q^{r-1}p\left (2-q^{r-1}(1+q) \right) \\ &= q^{r-1}p\left (2-q^{r-1}-q^r\right) \\ \end{align*} \begin{align*} \E[\text{throws}] &= \sum_{r=1}^{\infty} r \mathbb{P}(\text{stop after }r\text{ throws}) \\ &= \sum_{r=1}^{\infty} r q^{r-1}p\left (2-q^{r-1}-q^r\right) \\ &= \sum_{r=1}^{\infty} 2r q^{r-1}p-\sum_{r=1}^{\infty}r pq^{2r-2}-\sum_{r=1}^{\infty}r q^{2r-1}p \\ &=2p \sum_{r=1}^{\infty} r q^{r-1}-pq^{-2}\sum_{r=1}^{\infty}r q^{2r}-pq^{-1}\sum_{r=1}^{\infty}r q^{2r} \\ &= 2p(1-q)^{-2} - pq^{-2}q^2(1-q^2)^{-2}-pq^{-1}q^2(1-q^2)^{-2} \\ &= 2pp^{-2} -p(1+q)(1-q^2)^{-2} \\ &= 2p^{-1}-p(1+q)(1+q)^{-2}p^{-2} \\ &= 2p^{-1}-p^{-1}(1+q)^{-1} \\ &= \frac{2(1+q)-1}{p(1+q)} \\ &= \frac{1+2q}{p(1+q)} \\ &= \frac{3-2p}{p(2-p)} \end{align*}
  2. \(\,\) \begin{align*} && m &= \frac{3-2p}{p(2-p)} \\ \Rightarrow && 2mp-mp^2 &= 3-2p \\ \Rightarrow && 0 &= mp^2-(2m+2)p + 3 \\ \Rightarrow && p &= \frac{2m+2 \pm \sqrt{(2m+2)^2-12m}}{2m} \\ &&&= \frac{m+1- \sqrt{m^2-m + 1}}{m} \\ \end{align*} If we are looking for an approximation, we could say \(p^2 \approx 0\) and \(p \approx \frac{3}{2(m+1)}\)
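Both the expected-throws formula and the exact inversion can be checked numerically, eg for \(p = \frac16\) (a sketch; variable names are mine):

```python
p = 1 / 6
q = 1 - p

def p_stop(r):
    # P(the experiment stops after exactly r throws)
    return p * q ** (r - 1) * (2 - q ** (r - 1) - q ** r)

assert abs(sum(p_stop(r) for r in range(1, 2000)) - 1) < 1e-9
m = sum(r * p_stop(r) for r in range(1, 2000))
assert abs(m - (3 - 2 * p) / (p * (2 - p))) < 1e-9
# invert the quadratic m*p^2 - (2m+2)*p + 3 = 0 to recover p from m
est = (m + 1 - (m * m - m + 1) ** 0.5) / m
assert abs(est - p) < 1e-9
```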

2006 Paper 1 Q13

A very generous shop-owner is hiding small diamonds in chocolate bars. Each diamond is hidden independently of any other diamond, and on average there is one diamond per kilogram of chocolate.

  1. I go to the shop and roll a fair six-sided die once. I decide that if I roll a score of \(N\), I will buy \(100N\) grams of chocolate. Show that the probability that I will have no diamonds is \[ \frac{\e^{-0.1}}{ 6} \l \frac{1 - \e^{-0.6} }{ 1 - \e^{-0.1}} \r \] Show also that the expected number of diamonds I find is 0.35.
  2. Instead, I decide to roll a fair six-sided die repeatedly until I score a 6. If I roll my first 6 on my \(T\)th throw, I will buy \(100T\) grams of chocolate. Show that the probability that I will have no diamonds is \[ \frac{\e^{-0.1}}{ 6 - 5\e^{-0.1}} \] Calculate also the expected number of diamonds that I find. (You may find it useful to consider the binomial expansion of \(\l 1 - x \r^{-2}\).)


Solution: Note that the average number of diamonds per kilogram is \(1\), so we model the number of diamonds \(X\) in \(M\) kilograms of chocolate as \(Po(M)\). In particular \(\E[X] = M\) and \(\mathbb{P}(X = 0) = e^{-M}\)

  1. \(\,\) \begin{align*} && \mathbb{P}(\text{no diamonds}) &= \sum_{n=1}^6\mathbb{P}(\text{no diamonds and roll }n) \\ &&&= \sum_{n=1}^6 \tfrac16 e^{-\frac{n}{10}} \\ &&&= \frac{e^{-0.1}}6 \left ( \frac{1-e^{-0.6}}{1-e^{-0.1}}\right) \\ && \E[\text{diamonds}] &= \sum_{n=1}^6 \E(\text{diamonds}|N = n)\mathbb{P}(N = n) \\ &&&= \sum_{n=1}^6 0.1n \cdot \frac16 \\ &&&= 0.1 \cdot \frac{7}{2} = 0.35 \end{align*}
  2. \(\mathbb{P}(T = n) = \left ( \frac56 \right)^{n-1} \frac16\), so \begin{align*} && \mathbb{P}(\text{no diamonds}) &= \sum_{n=1}^\infty\mathbb{P}(\text{no diamonds and }T=n) \\ &&&= \sum_{n=1}^\infty e^{-0.1n} \left ( \frac56 \right)^{n-1} \frac16 \\ &&&= \frac{e^{-0.1}}{6} \frac1{1- \frac56 e^{-0.1}} \\ &&&= \frac{e^{-0.1}}{6 - 5e^{-0.1}} \\ \\ && \E[\text{diamonds}] &= \sum_{n=1}^\infty \E(\text{diamonds}|T = n)\mathbb{P}(T = n) \\ &&&= \sum_{n=1}^\infty 0.1n \cdot \left ( \frac56 \right)^{n-1} \frac16 \\ &&&= \frac{0.1}{6} \sum_{n=1}^\infty n \cdot \left ( \frac56 \right)^{n-1} \\ &&&= \frac{1}{60} \frac{1}{(1- \tfrac56)^2} \\ &&&= \frac{6}{10} = \frac35 \end{align*}
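All four quantities can be checked numerically (a sketch; variable names are mine):

```python
from math import exp

# part (i): one roll of the die, buy 100N grams
p0 = sum(exp(-0.1 * n) / 6 for n in range(1, 7))
assert abs(p0 - exp(-0.1) / 6 * (1 - exp(-0.6)) / (1 - exp(-0.1))) < 1e-12
mean1 = sum(0.1 * n / 6 for n in range(1, 7))
assert abs(mean1 - 0.35) < 1e-12
# part (ii): roll until the first six, buy 100T grams
p0_geo = sum(exp(-0.1 * n) * (5 / 6) ** (n - 1) / 6 for n in range(1, 2000))
assert abs(p0_geo - exp(-0.1) / (6 - 5 * exp(-0.1))) < 1e-9
mean2 = sum(0.1 * n * (5 / 6) ** (n - 1) / 6 for n in range(1, 2000))
assert abs(mean2 - 3 / 5) < 1e-9
```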

2006 Paper 1 Q14

  1. A bag of sweets contains one red sweet and \(n\) blue sweets. I take a sweet from the bag, note its colour, return it to the bag, then shake the bag. I repeat this until the sweet I take is the red one. Find an expression for the probability that I take the red sweet on the \(r\)th attempt. What value of \(n\) maximises this probability?
  2. Instead, I take sweets from the bag, without replacing them in the bag, until I take the red sweet. Find an expression for the probability that I take the red sweet on the \(r\)th attempt. What value of \(n\) maximises this probability?


Solution:

  1. This is the probability of having the sequence \(\underbrace{BB\cdots B}_{r-1 \text{ times}}R\), which has probability \(\displaystyle \left ( \frac{n}{n+1} \right)^{r-1}\frac{1}{n+1}\). Maximising this is equivalent to maximising its \(\log\), ie \begin{align*} && y &= (r-1) \ln n - r \ln (n+1) \\ \Rightarrow && \frac{\d y}{\d n} &= \frac{r-1}{n} - \frac{r}{n+1} \\ &&&= \frac{(r-1)(n+1)-rn}{n(n+1)} \\ &&&= \frac{r-n-1}{n(n+1)} \end{align*} This derivative is positive for \(n < r-1\) and negative for \(n > r-1\), so the probability is maximised when \(n = r-1\) (for \(r \geq 2\))