Problems


2017 Paper 1 Q12
D: 1500.0 B: 1513.9

In a lottery, each of the \(N\) participants pays \(\pounds c\) to the organiser and picks a number from \(1\) to \(N\). The organiser picks at random the winning number from \(1\) to \(N\) and all those participants who picked this number receive an equal share of the prize, \(\pounds J\).

  1. The participants pick their numbers independently and with equal probability. Obtain an expression for the probability that no participant picks the winning number, and hence determine the organiser's expected profit. Use the approximation \[ \left( 1 - \frac{a}{N} \right)^N \approx \e^{-a} \tag{\(*\)} \] to show that if \(2Nc = J\) then the organiser will expect to make a loss. Note: \(\e > 2\).
  2. Instead of the numbers being equally popular, a fraction \(\gamma\) of the numbers are popular and the rest are unpopular. For each participant, the probability of picking any given popular number is \(\dfrac{a}{N}\) and the probability of picking any given unpopular number is \(\dfrac{b}{N}\,\). Find a relationship between \(a\), \(b\) and \(\gamma\). Show that, using the approximation \((*)\), the organiser's expected profit can be expressed in the form \[ A\e^{-a} + B\e^{-b} +C \,, \] where \(A\), \(B\) and \(C\) can be written in terms of \(J\), \(c\), \(N\) and \(\gamma\). In the case \(\gamma = \frac18\) and \(a=9b\), find \(a\) and \(b\). Show that, if \(2Nc = J\), then the organiser will expect to make a profit. Note: \(\e < 3\).


Solution:

  1. The probability that no-one picks the winning number is \(\left ( 1 - \frac{1}{N}\right)^N \approx e^{-1}\). \begin{align*} && \mathbb{E}(\text{profit}) &= Nc - (1-e^{-1})J \\ &&& < Nc -(1- \tfrac12 )J \tag{since \(e>2\), \(e^{-1}<\tfrac12\)} \\ &&&= Nc - \tfrac12 J \\ &&&= \frac{2Nc-J}{2} \end{align*} Therefore if \(J = 2Nc\) the expected profit is negative.
  2. \(\,\) \begin{align*} && 1 &= \sum_{\text{all numbers}} \mathbb{P}(\text{pick }i) \\ &&&= \sum_{\text{popular numbers}} \mathbb{P}(\text{pick }i)+\sum_{\text{unpopular numbers}} \mathbb{P}(\text{pick }i) \\ &&&=\gamma N \frac{a}{N} + (1-\gamma)N \frac{b}{N} \\ &&&= \gamma a + (1-\gamma)b \end{align*} \begin{align*} && \mathbb{P}(\text{no-one picks winning number}) &= \mathbb{P}(\text{no-one picks winning number} | \text{winning number is popular})\mathbb{P}(\text{winning number is popular}) + \\ &&&\quad + \mathbb{P}(\text{no-one picks} | \text{unpopular})\mathbb{P}(\text{unpopular}) \\ &&&= \left (1 - \frac{a}{N} \right)^N \gamma + \left (1 - \frac{b}{N} \right)^N (1-\gamma) \\ &&&\approx \gamma e^{-a} + (1-\gamma)e^{-b} \\ \\ && \mathbb{E}(\text{profit}) &= Nc - (1-\gamma e^{-a} - (1-\gamma)e^{-b})J \\ &&&= Nc-J+J\gamma e^{-a} +J(1-\gamma)e^{-b} \end{align*} If \(\gamma = \frac18\) and \(a=9b\), then \(1=\frac18 a + \frac78b = 2b \Rightarrow b = \frac12, a = \frac92\) and \begin{align*} && \mathbb{E}(\text{profit}) &= Nc-J +J\tfrac18e^{-9/2}+J\tfrac78e^{-1/2} \\ &&&= Nc-J+\tfrac18Je^{-1/2}(e^{-4}+7) \end{align*} It suffices to show that \(e^{-1/2}\frac{e^{-4}+7}{8} > \frac12\), since then \(\mathbb{E}(\text{profit}) > Nc - J + \tfrac12 J = \frac{2Nc-J}{2} = 0\) when \(2Nc = J\). Now \begin{align*} && e^{-1/2}\frac{e^{-4}+7}{8} &> \frac12 \\ \Leftrightarrow && e^{-4}+7 &>4e^{1/2} \\ \Leftrightarrow && 49+14e^{-4}+e^{-8} &>16e \\ \end{align*} and the last inequality holds because the LHS is \(>49\) while, since \(e < 3\), the RHS is \(16e < 48\). Therefore the organiser will expect to make a profit.
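As a numerical sanity check (an addition to the solution, not part of it), both inequalities can be evaluated directly; the values of \(N\) and \(c\) below are arbitrary illustrative choices.

```python
import math

# Part 1: (1 - 1/N)^N -> e^{-1} < 1/2, so with J = 2Nc the expected
# profit Nc - (1 - p_no_winner) * J is negative.
N, c = 10**6, 1.0                       # illustrative values
J = 2 * N * c
p_no_winner = (1 - 1 / N) ** N          # close to e^{-1} ~ 0.3679
profit_1 = N * c - (1 - p_no_winner) * J

# Part 2: gamma = 1/8 and a = 9b give b = 1/2, a = 9/2, and the
# approximate no-winner probability exceeds 1/2, so the profit is positive.
gamma, a, b = 1 / 8, 9 / 2, 1 / 2
assert math.isclose(gamma * a + (1 - gamma) * b, 1)   # the relationship holds
p_no = gamma * math.exp(-a) + (1 - gamma) * math.exp(-b)
profit_2 = N * c - (1 - p_no) * J

print(profit_1 < 0, profit_2 > 0)       # -> True True
```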

2017 Paper 1 Q13
D: 1500.0 B: 1484.0

I have a sliced loaf which initially contains \(n\) slices of bread. Each time I finish setting a STEP question, I make myself a snack: either toast, using one slice of bread; or a sandwich, using two slices of bread. I make toast with probability \(p\) and I make a sandwich with probability \(q\), where \(p+q=1\), unless there is only one slice left, in which case I must, of course, make toast. Let \(s_r\) (\(1 \le r \le n\)) be the probability that the \(r\)th slice of bread is the second of two slices used to make a sandwich and let \(t_r\) (\(1 \le r \le n\)) be the probability that the \(r\)th slice of bread is used to make toast. What is the value of \(s_1\)? Explain why the following equations hold: \begin{align*} \phantom{\hspace{2cm} (2\le r \le n-1)} t_r &= (s_{r-1}+ t_{r-1})\,p \hspace{2cm} (2\le r \le n-1)\,; \\ \phantom{\hspace{1.53cm} (2\le r \le n) } s_r &= 1- (s_{r-1} + t_{r-1}) \hspace{1.53cm} ( 2\le r \le n )\,. \end{align*} Hence, or otherwise, show that \(s_{r} = q(1-s_{r-1})\) for \(2\le r\le n-1\). Show further that \[ \phantom{\hspace{2.7cm} (1\le r\le n)\,,} s_r = \frac{q+(-q)^r}{1+q} \hspace{2.7cm} (1\le r\le n-1)\,, \, \hspace{0.14cm} \] and find the corresponding expression for \(t_r\). Find also expressions for \(s_n\) and \(t_n\) in terms of \(q\).


Solution: The \(1\)st slice of bread can only be the first slice in a sandwich or a slice of toast. Therefore \(s_1 = 0\). \begin{align*} && t_r &= \underbrace{s_{r-1}}_{r-1\text{th is the end of a sandwich}} \cdot \underbrace{p}_{\text{and we make toast}} + \underbrace{t_{r-1}}_{r-1\text{th is toast}} \cdot \underbrace{p}_{\text{and we make toast}} \\ &&&= (s_{r-1}+t_{r-1})p \\ \\ && s_r &= 1-\mathbb{P}(\text{previous slice is not the first of a sandwich}) \\ &&&= 1-(s_{r-1} + t_{r-1}) \\ \\ \Rightarrow && s_r &= 1 - \frac{t_r}{p} \\ \Rightarrow && t_r &= p - ps_r \\ \Rightarrow && s_r &= 1 - s_{r-1} - (p-ps_{r-1}) \\ &&&= 1 -p -(1-p)s_{r-1} \\ &&&= q(1-s_{r-1}) \end{align*} Therefore since \(s_r + qs_{r-1} = q\) we should look for a solution of the form \(s_r = A(-q)^r + B\). The particular solution will have \((1+q)B = q \Rightarrow B = \frac{q}{1+q}\), and the initial condition gives \(s_1 = \frac{q}{1+q} +A(-q) = 0 \Rightarrow A = \frac{1}{1+q}\), so we must have \begin{align*} && s_r &= \frac{q+(-q)^r}{1+q}\\ \Rightarrow && t_r &= p(1-s_r) \\ &&&= p \frac{1+q-q-(-q)^r}{1+q} \\ &&&= \frac{(1-q)(1-(-q)^r)}{1+q} \\ && s_n &= 1-\frac{q+(-q)^{n-1}}{1+q} - \frac{p(1-(-q)^{n-1})}{1+q} \\ &&&= 1-\frac{1+(1-p)(-q)^{n-1}}{1+q}\\ &&&= 1-\frac{1-(-q)^n}{1+q}\\ &&&= \frac{q+(-q)^n}{1+q}\\ && t_n &=1-s_n \\ &&&=\frac{1-(-q)^n}{1+q} \end{align*}
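A short script (an illustrative addition, with arbitrary \(q\) and \(n\)) checking that the recurrences reproduce the closed forms for \(1 \le r \le n-1\):

```python
# Check that the recurrences for s_r, t_r reproduce the closed forms.
q = 0.3                     # illustrative probability of a sandwich
p, n = 1 - q, 12
s, t = {1: 0.0}, {1: p}     # s_1 = 0; the first slice is toast w.p. p
for r in range(2, n):       # recurrences valid for 2 <= r <= n-1
    t[r] = (s[r - 1] + t[r - 1]) * p
    s[r] = 1 - (s[r - 1] + t[r - 1])

for r in range(1, n):
    s_closed = (q + (-q) ** r) / (1 + q)
    t_closed = (1 - q) * (1 - (-q) ** r) / (1 + q)
    assert abs(s[r] - s_closed) < 1e-12
    assert abs(t[r] - t_closed) < 1e-12
```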

2017 Paper 2 Q12
D: 1600.0 B: 1563.6

Adam and Eve are catching fish. The number of fish, \(X\), that Adam catches in any time interval is Poisson distributed with parameter \(\lambda t\), where \(\lambda\) is a constant and \(t\) is the length of the time interval. The number of fish, \(Y\), that Eve catches in any time interval is Poisson distributed with parameter \(\mu t\), where \(\mu\) is a constant and \(t\) is the length of the time interval. The two Poisson variables are independent. You may assume that the expected time between Adam catching a fish and Adam catching his next fish is \(\lambda^{-1}\), and similarly for Eve.

  1. By considering \(\P( X + Y = r)\), show that the total number of fish caught by Adam and Eve in time \(T\) also has a Poisson distribution.
  2. Given that Adam and Eve catch a total of \(k\) fish in time \(T\), where \(k\) is fixed, show that the number caught by Adam has a binomial distribution.
  3. Given that Adam and Eve start fishing at the same time, find the probability that the first fish is caught by Adam.
  4. Find the expected time from the moment Adam and Eve start fishing until they have each caught at least one fish.
[Note This question has been redrafted to make the meaning clearer.]


Solution:

  1. \(\,\) \begin{align*} && \mathbb{P}(X+Y=r) &= \sum_{k=0}^r \mathbb{P}(X = k, Y = r-k) \\ &&&= \sum_{k=0}^r \mathbb{P}(X = k)\mathbb{P}( Y = r-k) \\ &&&= \sum_{k=0}^r \frac{e^{-\lambda T} (\lambda T)^k}{k!}\frac{e^{-\mu T} (\mu T)^{r-k}}{(r-k)!}\\ &&&= \frac{e^{-(\mu+\lambda)T}}{r!}\sum_{k=0}^r \binom{r}{k}(\lambda T)^k (\mu T)^{r-k}\\ &&&= \frac{e^{-(\mu+\lambda)T}((\mu+\lambda)T)^r}{r!} \end{align*} Therefore \(X+Y \sim Po \left ( (\mu+\lambda)T \right)\)
  2. \(\,\) \begin{align*} && \mathbb{P}(X = r | X+Y = k) &= \frac{\mathbb{P}(X=r, Y = k-r)}{\mathbb{P}(X+Y=k)} \\ &&&= \frac{\frac{e^{-\lambda T} (\lambda T)^r}{r!}\frac{e^{-\mu T} (\mu T)^{k-r}}{(k-r)!}}{\frac{e^{-(\mu+\lambda)T}((\mu+\lambda)T)^k}{k!}} \\ &&&= \binom{k}{r} \left ( \frac{\lambda}{\lambda + \mu} \right)^r \left ( \frac{\mu}{\lambda + \mu} \right)^{k-r} \end{align*} Therefore \(X|X+Y=k \sim B(k, \frac{\lambda}{\lambda + \mu})\)
  3. By part 2 with \(k=1\), \(\mathbb{P}(X=1|X+Y = 1) = \frac{\lambda}{\lambda + \mu}\), independently of \(T\), so the first fish is caught by Adam with probability \(\frac{\lambda}{\lambda + \mu}\)
  4. Let \(X_1\) and \(Y_1\) be the times until the first fish caught by Adam and by Eve respectively; then \(\mathbb{P}(X_1 > t) = \mathbb{P}(\text{Adam catches no fish in }[0,t]) = e^{-\lambda t}\), and similarly for \(Y_1\). They have each caught at least one fish by time \(t\) precisely when \(\max(X_1, Y_1) \leq t\), so \begin{align*} && \mathbb{P}(\max(X_1,Y_1) > t) &= 1 - \mathbb{P}(X_1 \leq t)\mathbb{P}(Y_1 \leq t) \\ &&&= 1 - (1-e^{-\lambda t})(1-e^{-\mu t}) \\ &&&= e^{-\lambda t} + e^{-\mu t} - e^{-(\lambda+\mu)t} \\ \Rightarrow && \mathbb{E}(\max(X_1,Y_1)) &= \int_0^\infty \mathbb{P}(\max(X_1,Y_1) > t) \d t \\ &&&= \frac1\lambda + \frac1\mu - \frac{1}{\lambda+\mu} \end{align*} using \(\mathbb{E}(W) = \int_0^\infty \mathbb{P}(W > t) \d t\) for a non-negative random variable. Therefore the expected time is \(\frac1\lambda + \frac1\mu - \frac{1}{\lambda+\mu}\)
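A Monte Carlo check (an addition, with arbitrary illustrative rates) that \(\mathbb{E}(\max(X_1,Y_1)) = \frac1\lambda + \frac1\mu - \frac{1}{\lambda+\mu}\):

```python
import random

random.seed(0)
lam, mu = 2.0, 3.0                      # illustrative rates
trials = 200_000
# Times to first catch are exponential with rates lam and mu; both have
# caught a fish at the later of the two times.
est = sum(max(random.expovariate(lam), random.expovariate(mu))
          for _ in range(trials)) / trials
exact = 1 / lam + 1 / mu - 1 / (lam + mu)   # = E[X1] + E[Y1] - E[min]
assert abs(est - exact) < 0.01
```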

2017 Paper 2 Q13
D: 1600.0 B: 1516.0

In a television game show, a contestant has to open a door using a key. The contestant is given a bag containing \(n\) keys, where \(n\ge2\). Only one key in the bag will open the door. There are three versions of the game. In each version, the contestant starts by choosing a key at random from the bag.

  1. In version 1, after each failed attempt at opening the door the key that has been tried is put back into the bag and the contestant again selects a key at random from the bag. By considering the binomial expansion of \(( 1 - q)^{-2}\), or otherwise, find the expected number of attempts required to open the door.
  2. In version 2, after each failed attempt at opening the door the key that has been tried is put aside and the contestant selects another key at random from the bag. Find the expected number of attempts required to open the door.
  3. In version 3, after each failed attempt at opening the door the key that has been tried is put back into the bag and another incorrect key is added to the bag. The contestant then selects a key at random from the bag. Show that the probability that the contestant draws the correct key at the \(k\)th attempt is \[ \frac{n-1}{(n+k-1)(n+k-2)} \,.\] Show also, using partial fractions, that the expected number of attempts required to open the door is infinite. You may use without proof the result that \(\displaystyle\sum_{m=1}^N \dfrac 1 m \to \infty \,\) as \(N\to \infty\,\).


Solution:

  1. The probability they pull the key out on the \(k\)th attempt will be \(\left ( \frac{n-1}{n} \right)^{k-1} \frac1n\), so we want: \begin{align*} \E[G_1] &= \sum_{k=1}^{\infty} k \cdot \left ( \frac{n-1}{n} \right)^{k-1} \frac1n \\ &= \frac{1}n \sum_{k=1}^{\infty} k \cdot \left ( \frac{n-1}{n} \right)^{k-1} \\ &= \frac1n \frac{1}{\left (1 - \frac{n-1}{n} \right)^2} \\ &= \frac{1}{n} \frac{n^2}{1^2} = n \end{align*}
  2. In version 2, the probability the correct key comes out at the \(k\)th attempt is \(\frac1n\) (imagine drawing all \(n\) keys out in sequence; the correct key is equally likely to occupy any of the \(n\) positions). Therefore \(\E[G_2] = \frac1n (1 + 2 + \cdots + n) = \frac{n+1}{2}\)
  3. The probability that the correct key comes out on the \(k\)th attempt is: \begin{align*} && \mathbb{P}(G_3 = k) &= \frac{n-1}{n} \cdot \frac{n}{n+1} \cdot \frac{n+1}{n+2} \cdots \frac{n+k-3}{n+k-2} \cdot \frac{1}{n+k-1} \\ &&&= \frac{n-1}{(n+k-2)(n+k-1)} \\ \\ &&k \cdot \mathbb{P}(G_3 = k) &= \frac{k(n-1)}{(n+k-2)(n+k-1)} \\ &&&= \frac{(n-1)(2-n)}{n+k-2} + \frac{(n-1)^2}{n+k-1} \\ &&&= \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} + \frac{n-1}{n+k-2} \\ \Rightarrow && \E[G_3] &= \sum_{k=1}^{\infty} k \cdot \mathbb{P}(G_3 = k) \\ &&&= \sum_{k=1}^{\infty} \left ( \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} + \frac{n-1}{n+k-2} \right) \\ &&&= \sum_{k=1}^{\infty} \left ( \frac{(n-1)^2}{n+k-1} - \frac{(n-1)^2}{n+k-2} \right) +\underbrace{\sum_{k=1}^{\infty} \frac{n-1}{n+k-2}}_{\to \infty} \\ \end{align*} The first sum telescopes to \(-(n-1)\), while the second is \((n-1)\) times a tail of the harmonic series and so diverges; therefore the expected number of attempts is infinite.
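A check (an illustrative addition) that the product of failure probabilities telescopes to the stated formula, and that the partial sums of \(k\,\mathbb{P}(G_3=k)\) keep growing:

```python
from fractions import Fraction

def prob_attempt(n, k):
    """P(first success at attempt k): fail attempts 1..k-1, then succeed.
    At attempt i the bag holds n + i - 1 keys, exactly one of them correct."""
    p = Fraction(1, n + k - 1)
    for i in range(1, k):
        p *= Fraction(n + i - 2, n + i - 1)
    return p

n = 5   # illustrative bag size
assert all(prob_attempt(n, k) == Fraction(n - 1, (n + k - 1) * (n + k - 2))
           for k in range(1, 60))

# Partial sums of the expectation grow without bound (like a harmonic series):
partial = [sum(k * (n - 1) / ((n + k - 1) * (n + k - 2)) for k in range(1, K))
           for K in (10**2, 10**3, 10**4)]
assert partial[0] < partial[1] < partial[2]
```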

2017 Paper 3 Q12
D: 1700.0 B: 1500.2

The discrete random variables \(X\) and \(Y\) can each take the values \(1\), \(\ldots\,\), \(n\) (where \(n\ge2\)). Their joint probability distribution is given by \[ \P(X=x, \ Y=y) = k(x+y) \,, \] where \(k\) is a constant.

  1. Show that \[ \P(X=x) = \dfrac{n+1+2x}{2n(n+1)}\,. \] Hence determine whether \(X\) and \(Y\) are independent.
  2. Show that the covariance of \(X\) and \(Y\) is negative.


Solution:

  1. \(\,\) \begin{align*} && \mathbb{P}(X = x) &= \sum_{y=1}^n \mathbb{P}(X=x,Y=y) \\ &&&= \sum_{y=1}^n k(x+y) \\ &&&= nkx + k\frac{n(n+1)}2 \\ \\ && 1 &= \sum_{x=1}^n \mathbb{P}(X=x) \\ &&&= nk\frac{n(n+1)}{2} + kn\frac{n(n+1)}2 \\ &&&= kn^2(n+1) \\ \Rightarrow && k &= \frac{1}{n^2(n+1)} \\ \Rightarrow && \mathbb{P}(X = x) &= \frac{nx}{n^2(n+1)} + \frac{n(n+1)}{2n^2(n+1)} \\ &&&= \frac{n+1+2x}{2n(n+1)} \\ \\ && \mathbb{P}(X=x)\mathbb{P}(Y=y) &= \frac{(n+1)^2+2(n+1)(x+y)+4xy}{4n^2(n+1)^2} \\ &&&\neq \frac{x+y}{n^2(n+1)} \end{align*} Therefore \(X\) and \(Y\) are not independent.
  2. \(\,\) By symmetry \(\E[Y] = \E[X]\) and \(\sum\sum x^2 y = \sum\sum xy^2 = \left(\sum x\right)\left(\sum x^2\right)\). \begin{align*} && \E[X] &= \sum_{x=1}^n x \mathbb{P}(X=x) \\ &&&= \sum_{x=1}^n x \frac{n+1+2x}{2n(n+1)} \\ &&&= \frac{1}{2n(n+1)} \left ( (n+1) \sum x + 2\sum x^2\right)\\ &&&= \frac{1}{2n(n+1)} \left ( \frac{n(n+1)^2}{2} + \frac{n(n+1)(2n+1)}{3} \right) \\ &&&= \frac{1}{2} \left ( \frac{n+1}{2} + \frac{2n+1}{3} \right)\\ &&&= \frac{7n+5}{12} \\ \\ && \textrm{Cov}(X,Y) &= \mathbb{E}\left[XY\right] - \E[X] \E[Y] \\ &&&= \sum_{x=1}^n \sum_{y=1}^n xy \frac{x+y}{n^2(n+1)} - \E[X]^2 \\ &&&= \frac{1}{n^2(n+1)} \sum \sum (x^2 y+xy^2) - \E[X]^2 \\ &&&= \frac{2}{n^2(n+1)} \left (\sum x \right )\left (\sum x^2\right ) - \E[X]^2 \\ &&&=\frac{(n+1)(2n+1)}{6} - \left ( \frac{7n+5}{12}\right)^2 \\ &&&= \frac1{144} \left (24(2n^2+3n+1) - (49n^2+70n+25) \right)\\ &&&= -\frac{(n-1)^2}{144} \\ &&& < 0 \end{align*} since \(n \geq 2\).
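An exact enumeration check (an addition to the solution) that the covariance equals \(-\frac{(n-1)^2}{144}\), which is negative for every \(n \ge 2\):

```python
from fractions import Fraction

def cov(n):
    """Covariance of X, Y under P(X=x, Y=y) = k(x+y), computed exactly."""
    k = Fraction(1, n * n * (n + 1))
    pairs = [(x, y) for x in range(1, n + 1) for y in range(1, n + 1)]
    assert sum(k * (x + y) for x, y in pairs) == 1   # k normalises correctly
    ex = sum(x * k * (x + y) for x, y in pairs)      # E[X] (= E[Y] by symmetry)
    exy = sum(x * y * k * (x + y) for x, y in pairs)
    return exy - ex * ex

assert all(cov(n) == -Fraction((n - 1) ** 2, 144) for n in range(2, 9))
```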

2016 Paper 1 Q12
D: 1516.0 B: 1484.7

  1. Alice tosses a fair coin twice and Bob tosses a fair coin three times. Calculate the probability that Bob gets more heads than Alice.
  2. Alice tosses a fair coin three times and Bob tosses a fair coin four times. Calculate the probability that Bob gets more heads than Alice.
  3. Let \(p_1\) be the probability that Bob gets the same number of heads as Alice, and let~\(p_2\) be the probability that Bob gets more heads than Alice, when Alice and Bob each toss a fair coin \(n\) times. Alice tosses a fair coin \(n\) times and Bob tosses a fair coin \(n+1\) times. Express the probability that Bob gets more heads than Alice in terms of \(p_1\) and \(p_2\), and hence obtain a generalisation of the results of parts (i) and (ii).


Solution:

  1. There are several possibilities \begin{array}{c|c|c} \text{Alice} & \text{Bob} & P \\ \hline 0 & 1 & \frac1{2^2} \cdot 3 \cdot \frac{1}{2^3} = \frac{3}{2^5} \\ 0 & 2 & \frac1{2^2} \cdot 3 \cdot \frac{1}{2^3} = \frac{3}{2^5} \\ 0 & 3 & \frac1{2^2} \cdot \frac{1}{2^3} = \frac{1}{2^5} \\ 1 & 2 & 2 \cdot \frac1{2^2} \cdot 3 \cdot \frac{1}{2^3} = \frac{6}{2^5} \\ 1 & 3 & 2\cdot \frac1{2^2} \cdot \frac{1}{2^3} = \frac{2}{2^5} \\ 2 & 3 & \frac1{2^2} \cdot \frac{1}{2^3} = \frac{1}{2^5} \\ \hline && \frac{1}{2^5}(3+3+1+6+2+1) = \frac{16}{2^5} = \frac12 \end{array}
  2. There are several possibilities, counted among the \(2^3 \cdot 2^4 = 128\) equally likely head-tail sequences: \begin{array}{c|c|c} A & B & \text{count} \\ \hline 0 & 1 & 4 \\ 0 & 2 & 6 \\ 0 & 3 & 4 \\ 0 & 4 & 1 \\ 1 & 2 & 3\cdot6 \\ 1 & 3 & 3\cdot4 \\ 1 & 4 & 3 \\ 2 & 3 & 3\cdot4 \\ 2 & 4 & 3 \\ 3 & 4 & 1 \\ \hline && 64 \end{array} Therefore the total probability is \(\frac{64}{128} = \frac12\)
  3. \(\mathbb{P}(\text{Bob more than Alice}) = p_1 \cdot \underbrace{\frac12}_{\text{he wins by breaking the tie on his last flip}} + p_2\), since after his first \(n\) tosses Bob is either already ahead (probability \(p_2\)), in which case he wins regardless of his last toss, or level (probability \(p_1\)), in which case he wins exactly when his last toss is a head. If \(p_3\) is the probability that Alice gets more heads than Bob when each tosses \(n\) times, then by symmetry \(p_3 = p_2\) and \(p_1 + p_2 + p_3 = 1\). Therefore \(p_1 + 2p_2 = 1\), i.e. \(\frac12 p_1 + p_2 = \frac12\), so the required probability is \(\frac12\) for every \(n\), generalising parts (i) and (ii).
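An exact check (an illustrative addition) that Bob's probability of strictly more heads is \(\frac12\) for every \(n\):

```python
from fractions import Fraction
from math import comb

def p_bob_more(n):
    """Alice tosses n fair coins, Bob tosses n+1; P(Bob gets strictly more heads)."""
    return sum(Fraction(comb(n, a), 2 ** n) * Fraction(comb(n + 1, b), 2 ** (n + 1))
               for a in range(n + 1) for b in range(n + 2) if b > a)

assert all(p_bob_more(n) == Fraction(1, 2) for n in range(1, 9))
```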

2016 Paper 2 Q12
D: 1600.0 B: 1503.2

Starting with the result \(\P(A\cup B) = \P(A)+P(B) - \P(A\cap B)\), prove that \[ \P(A\cup B\cup C) = \P(A)+\P(B)+\P(C) - \P(A\cap B) - \P(B\cap C) - \P(C \cap A) + \P(A\cap B\cap C) \,. \] Write down, without proof, the corresponding result for four events \(A\), \(B\), \(C\) and \(D\). A pack of \(n\) cards, numbered \(1, 2, \ldots, n\), is shuffled and laid out in a row. The result of the shuffle is that each card is equally likely to be in any position in the row. Let \(E_i\) be the event that the card bearing the number \(i\) is in the \(i\)th position in the row. Write down the following probabilities:

  1. \(\P(E_i)\);
  2. \(\P(E_i\cap E_j)\), where \(i\ne j\);
  3. \(\P(E_i\cap E_j\cap E_k)\), where \(i\ne j\), \(j\ne k\) and \(k\ne i\).
Hence show that the probability that at least one card is in the same position as the number it bears is \[ 1 - \frac 1 {2!} + \frac 1{3!} - \cdots + (-1)^{n+1} \frac 1 {n!}\,. \] Find the probability that exactly one card is in the same position as the number it bears


Solution: \begin{align*} && \mathbb{P}(A \cup B \cup C) &= \mathbb{P}(A \cup B) + \mathbb{P}(C) - \mathbb{P}((A \cup B) \cap C) \tag{applying with \(A\cup B\) and \(C\)} \\ &&&= \mathbb{P}(A \cup B) + \mathbb{P}(C) - \mathbb{P}((A \cap C) \cup (B \cap C)) \\ &&&= \mathbb{P}(A)+\mathbb{P}(B) - \mathbb{P}(A\cap B) + \mathbb{P}(C) - \mathbb{P}((A \cap C) \cup (B \cap C)) \tag{applying with \(A\) and \(B\)}\\ &&&= \mathbb{P}(A)+\mathbb{P}(B) - \mathbb{P}(A\cap B) + \mathbb{P}(C) - \left ( \mathbb{P}(A \cap C) +\mathbb{P}(B \cap C) - \mathbb{P}( (A \cap C) \cap (B \cap C) )\right) \\ &&&= \mathbb{P}(A)+\mathbb{P}(B) +\mathbb{P}(C)- \mathbb{P}(A\cap B)- \mathbb{P}(A \cap C) -\mathbb{P}(B \cap C)+\mathbb{P}( A \cap B \cap C) \end{align*} \[ \mathbb{P}(A_1 \cup A_2 \cup A_3 \cup A_4) = \sum_i \mathbb{P}(A_i) - \sum_{i < j} \mathbb{P}(A_i \cap A_j) + \sum_{i < j < k} \mathbb{P}(A_i \cap A_j \cap A_k) - \mathbb{P}(A_1 \cap A_2 \cap A_3 \cap A_4) \]

  1. \(\mathbb{P}(E_i) = \frac{1}{n}\)
  2. \(\mathbb{P}(E_i \cap E_j) = \frac{1}{n} \cdot \frac{1}{n-1} = \frac{1}{n(n-1)}\)
  3. \(\mathbb{P}(E_i \cap E_j \cap E_k) = \frac{1}{n(n-1)(n-2)}\)
First notice that, for any \(k\) specified cards, the probability that they are all in the correct place is \(\frac{(n-k)!}{n!}\) (the other \(n-k\) cards may be placed in any order). We are interested in: \begin{align*} \mathbb{P} \left ( \bigcup_{i=1}^n E_i \right) &= \sum_{i} \mathbb{P}(E_i) - \sum_{i < j} \mathbb{P}(E_i \cap E_j) + \sum_{i < j < k} \mathbb{P}(E_i \cap E_j \cap E_k) - \cdots \\ &= \sum_i \frac1n - \sum_{i < j} \frac{1}{n(n-1)} + \sum_{i < j < k} \frac{1}{n(n-1)(n-2)} - \cdots + (-1)^{k+1} \sum_{i_1 < i_2 < \cdots < i_k} \frac{(n-k)!}{n!} + \cdots\\ &= 1 - \binom{n}{2} \frac{1}{n(n-1)} + \binom{n}{3} \frac{1}{n(n-1)(n-2)} - \cdots + (-1)^{k+1} \binom{n}{k} \frac{(n-k)!}{n!} + \cdots \\ &= 1 - \frac1{2!} + \frac1{3!} - \cdots + (-1)^{k+1} \frac{n!}{k!(n-k)!} \frac{(n-k)!}{n!} + \cdots \\ &= 1 - \frac1{2!} + \frac1{3!} - \cdots + (-1)^{k+1} \frac{1}{k!} + \cdots + (-1)^{n+1} \frac{1}{n!} \end{align*} For a particular card, the probability that it is the only one in the right place is \(\frac1n\) times the probability that none of the other \(n-1\) cards is in the right place, i.e. \(\frac1n \left (1 - \left (1 - \frac1{2!} + \frac1{3!} - \cdots + (-1)^{n} \frac{1}{(n-1)!} \right) \right)\). There are \(n\) choices for which card is in the right place, and these events are disjoint, so the probability that exactly one card is in the same position as the number it bears is \(\frac{1}{2!} - \frac{1}{3!} + \cdots +(-1)^{n-1} \frac{1}{(n-1)!}\)
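A brute-force check (an addition) of both series over all permutations for small \(n\):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def exact_probs(n):
    """Return (P(at least one fixed point), P(exactly one fixed point))."""
    fixed_counts = [sum(p[i] == i for i in range(n))
                    for p in permutations(range(n))]
    total = factorial(n)
    return (Fraction(sum(c >= 1 for c in fixed_counts), total),
            Fraction(sum(c == 1 for c in fixed_counts), total))

for n in range(2, 8):
    at_least, exactly = exact_probs(n)
    # 1 - 1/2! + 1/3! - ... + (-1)^{n+1}/n!
    assert at_least == sum(Fraction((-1) ** (k + 1), factorial(k))
                           for k in range(1, n + 1))
    # 1/2! - 1/3! + ... + (-1)^{n-1}/(n-1)!
    assert exactly == sum(Fraction((-1) ** k, factorial(k))
                          for k in range(2, n))
```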

2016 Paper 2 Q13
D: 1600.0 B: 1516.0

  1. The random variable \(X\) has a binomial distribution with parameters \(n\) and \(p\), where \(n=16\) and \(p=\frac12\). Show, using an approximation in terms of the standard normal density function $\displaystyle \tfrac{1}{\sqrt{2\pi}} \, \e ^{-\frac12 x^2} $, that \[ \P(X=8) \approx \frac 1{2\sqrt{2\pi}} \,. \]
  2. By considering a binomial distribution with parameters \(2n\) and \(\frac12\), show that \[ (2n)! \approx \frac {2^{2n} (n!)^2}{\sqrt{n\pi}} \,. \]
  3. By considering a Poisson distribution with parameter \(n\), show that \[ n! \approx \sqrt{2\pi n\, } \, \e^{-n} \, n^n \,. \]


Solution:

  1. \(X \sim B(16, \tfrac12)\), then \(X \approx N(8, 2^2)\), in particular \begin{align*} && \mathbb{P}(X = 8) &\approx \mathbb{P} \left ( 8 - \frac12 \leq 2Z + 8 \leq 8 + \frac12 \right) \\ &&&= \mathbb{P} \left (-\frac14 \leq Z \leq \frac14 \right) \\ &&&= \int_{-\frac14}^{\frac14} \frac{1}{\sqrt{2 \pi}}e^{-\frac12 x^2} \d x \\ &&&\approx \frac{1}{\sqrt{2\pi}} \int_{-\frac14}^{\frac14} 1\d x\\ &&&= \frac{1}{2 \sqrt{2\pi}} \end{align*}
  2. Suppose \(X \sim B(2n, \frac12)\) then \(X \approx N(n, \frac{n}{2})\), and \begin{align*} && \mathbb{P}(X = n) &\approx \mathbb{P} \left ( n - \frac12 \leq \sqrt{\frac{n}{2}} Z + n \leq n + \frac12 \right) \\ &&&= \mathbb{P} \left ( - \frac1{\sqrt{2n}} \leq Z \leq \frac1{\sqrt{2n}}\right) \\ &&&= \int_{-\frac1{\sqrt{2n}}}^{\frac1{\sqrt{2n}}} \frac{1}{\sqrt{2 \pi}} e^{-\frac12 x^2} \d x \\ &&&\approx \frac{1}{\sqrt{n\pi}}\\ \Rightarrow && \binom{2n}{n}\frac1{2^n} \frac{1}{2^n} & \approx \frac{1}{\sqrt{n \pi}} \\ \Rightarrow && (2n)! &\approx \frac{2^{2n}(n!)^2}{\sqrt{n\pi}} \end{align*}
  3. \(X \sim Po(n)\), then \(X \approx N(n, (\sqrt{n})^2)\), therefore \begin{align*} && \mathbb{P}(X = n) &\approx \mathbb{P} \left (-\frac12 \leq \sqrt{n} Z \leq \frac12 \right) \\ &&&= \int_{-\frac{1}{2 \sqrt{n}}}^{\frac{1}{2 \sqrt{n}}} \frac{1}{\sqrt{2\pi}}e^{-\frac12 x^2} \d x \\ &&&\approx \frac{1}{\sqrt{2 \pi n}} \\ \Rightarrow && e^{-n} \frac{n^n}{n!} & \approx \frac{1}{\sqrt{2 \pi n}} \\ \Rightarrow && n! &\approx \sqrt{2 \pi n} e^{-n}n^n \end{align*}
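A numerical check (an addition to the solution) of both approximations; the relative error shrinks roughly like \(\frac{1}{12n}\) for Stirling and \(\frac{1}{8n}\) for the central binomial form:

```python
import math

def stirling(n):
    # n! ~ sqrt(2 pi n) e^{-n} n^n
    return math.sqrt(2 * math.pi * n) * math.exp(-n) * n ** n

def central_binomial_approx(n):
    # (2n)! ~ 2^{2n} (n!)^2 / sqrt(n pi)
    return 2 ** (2 * n) * math.factorial(n) ** 2 / math.sqrt(n * math.pi)

for n in (5, 20, 50):
    assert abs(stirling(n) / math.factorial(n) - 1) < 1 / (10 * n)
    assert abs(central_binomial_approx(n) / math.factorial(2 * n) - 1) < 1 / (7 * n)
```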

2015 Paper 1 Q12
D: 1500.0 B: 1461.6

The number \(X\) of casualties arriving at a hospital each day follows a Poisson distribution with mean 8; that is, \[ \P(X=n) = \frac{ \e^{-8}8^n}{n!}\,, \ \ \ \ n=0, \ 1, \ 2, \ \ldots \ . \] Casualties require surgery with probability \(\frac14\). The number of casualties arriving on any given day is independent of the number arriving on any other day and the casualties require surgery independently of one another.

  1. What is the probability that, on a day when exactly \(n\) casualties arrive, exactly \(r\) of them require surgery?
  2. Prove (algebraically) that the number requiring surgery each day also follows a Poisson distribution, and state its mean.
  3. Given that in a particular randomly chosen week a total of 12 casualties require surgery on Monday and Tuesday, what is the probability that 8 casualties require surgery on Monday? You should give your answer as a fraction in its lowest terms.


Solution:

  1. \(\mathbb{P}(r \text{ need surgery}|n \text{ casualties}) = \binom{n}{r} \left ( \frac14\right)^r \left ( \frac34\right)^{n-r}\)
  2. \(\,\) \begin{align*} && \mathbb{P}(r \text{ need surgery}) &= \sum_{n=r}^{\infty} \mathbb{P}(r \text{ need surgery} |n \text{ casualties}) \mathbb{P}(n \text{ casualties}) \\ &&&= \sum_{n=r}^{\infty} \binom{n}{r}\left ( \frac14\right)^r \left ( \frac34\right)^{n-r} \frac{e^{-8} 8^n}{n!} \\ &&&= \sum_{n=r}^{\infty} \frac{n!}{(n-r)!r!}\left ( \frac14\right)^r \left ( \frac34\right)^{n-r} \frac{e^{-8} 8^n}{n!} \\ &&&= \frac{e^{-8}8^r}{r!}\left ( \frac14\right)^r \sum_{n=r}^{\infty} \frac{8^{n-r}}{(n-r)!} \left ( \frac34\right)^{n-r} \\ &&&= \frac{e^{-8}2^r}{r!} \sum_{n=r}^{\infty} \frac{6^{n-r}}{(n-r)!} \\ &&&= \frac{e^{-8}2^r}{r!} e^6 \\ &&&= \frac{e^{-2}2^r}{r!} \end{align*} Therefore the number requiring surgery is \(Po(2)\) with mean \(2\).
  3. \(\,\) \begin{align*} && \mathbb{P}(X_1 = 8| X_1 + X_2 =12) &= \frac{\mathbb{P}(X_1 = 8,X_2 =4)} {\mathbb{P}(X_1+X_2 = 12)}\\ &&&= \frac{\frac{e^{-2}2^8}{8!} \cdot \frac{e^{-2}2^4}{4!}}{\frac{e^{-4}4^{12}}{12!}} \\ &&&= \frac{12!}{8!4!} \frac{1}{2^{12}} \\ &&&= \binom{12}4 \left ( \frac12 \right)^4\left ( \frac12 \right)^8 \\ &&&= \frac{495}{4096} \end{align*}
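A numerical check (an addition) that the compound distribution matches \(Po(2)\), and of the final fraction:

```python
import math
from fractions import Fraction

def poisson(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def surgery_prob(r, n_max=100):
    # Sum over the number of casualties n >= r; the Po(8) tail beyond
    # n_max = 100 is negligible.
    return sum(math.comb(n, r) * 0.25 ** r * 0.75 ** (n - r) * poisson(8, n)
               for n in range(r, n_max))

assert all(abs(surgery_prob(r) - poisson(2, r)) < 1e-9 for r in range(12))

# Part 3: conditionally on 12 surgeries over two days, Monday's count is
# B(12, 1/2), so P(8 on Monday) = C(12,8)/2^12 = 495/4096.
assert Fraction(math.comb(12, 8), 2 ** 12) == Fraction(495, 4096)
```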

2015 Paper 1 Q13
D: 1500.0 B: 1501.1

A fair die with faces numbered \(1, \ldots, 6\) is thrown repeatedly. The events \(A\), \(B\), \(C\), \(D\) and \(E\) are defined as follows. \begin{align*} A: && \text{the first 6 arises on the \(n\)th throw.}\\ B: && \text{at least one 5 arises before the first 6.} \\ C: && \text{at least one 4 arises before the first 6.}\\ D: && \text{exactly one 5 arises before the first 6.}\\ E: && \text{exactly one 4 arises before the first 6.} \end{align*} Evaluate the following probabilities:

  1. \(\P(A)\)
  2. \(\P(B)\)
  3. \(\P(B\cap C)\)
  4. \(\P(D)\)
  5. \(\P(D\cup E)\)
For some parts of this question, you may want to make use of the binomial expansion in the form: \[ (1-x)^{-n} = 1 +nx +\frac {n(n+1)}2 x^2 + \cdots + \frac {(n+r-1)!}{r! (n-1)!}x^r +\cdots\ .\]


Solution:

  1. \(\,\) \begin{align*} \mathbb{P}(A) &= \mathbb{P}(\text{the first 6 arises on the \(n\)th throw.}) \\ &= \mathbb{P}(\text{\(n-1\) not 6s, followed by a 6.})\\ &= \left ( \frac56\right)^{n-1} \cdot \frac16 = \frac{5^{n-1}}{6^n} \end{align*}
  2. There is nothing special about \(5\) or \(6\), so which comes first is \(50:50\), therefore this probability is \(\frac12\)
  3. There is nothing special about \(4\), \(5\) or \(6\) so this is the probability that \(6\) appears last out of these three numbers, hence \(\frac13\)
  4. \(\,\) \begin{align*} \mathbb{P}(D) &= \mathbb{P}(\text{exactly one 5 arises before the first 6.}) \\ &=\sum_{n=2}^{\infty} \mathbb{P}(\text{exactly one 5 arises before the first 6 which appears on the \(n\)th roll.}) \\ &= \sum_{n=2}^{\infty} \binom{n-1}{1} \left ( \frac46 \right)^{n-2} \frac16 \cdot \frac16 \\ &= \frac1{36} \sum_{n=2}^{\infty} (n-1) \left ( \frac23 \right)^{n-2} \\ &= \frac1{36} \sum_{n=1}^{\infty} n \left ( \frac23 \right)^{n-1} \\ &= \frac1{36} \frac{1}{\left ( 1- \frac23 \right)^2} = \frac14 \end{align*}
  5. \(\,\) \begin{align*} \mathbb{P}(D \cup E) &= \mathbb{P}(D) + \mathbb{P}(E) - \mathbb{P}(D \cap E) \\ &= \frac12 - \mathbb{P}(D \cap E) \\ &=\frac12 - \sum_{n=3}^{\infty} \mathbb{P}(\text{exactly one 5 and one 4 arises before the first 6 which appears on the \(n\)th roll.}) \\ &=\frac12 - \sum_{n=3}^{\infty} 2\binom{n-1}{2} \left ( \frac36 \right)^{n-3}\cdot \frac16 \cdot \frac16 \cdot \frac16 \\ &=\frac12 - \frac2{6^3}\sum_{n=3}^{\infty} \frac{(n-1)(n-2)}{2} \left ( \frac12 \right)^{n-3} \\ &=\frac12 - \frac2{6^3}\frac{1}{(1-\tfrac12)^3}\\ &= \frac12 - \frac{2}{27} \\ &= \frac{23}{54} \end{align*}
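A Monte Carlo check (an addition) of the answers to parts 2–5, which do not depend on \(n\):

```python
import random

rng = random.Random(1)

def counts_before_six():
    """Roll a fair die until the first 6; count the 4s and 5s seen before it."""
    fours = fives = 0
    while True:
        r = rng.randint(1, 6)
        if r == 6:
            return fours, fives
        fours += r == 4
        fives += r == 5

N = 200_000
results = [counts_before_six() for _ in range(N)]
p_B = sum(f5 >= 1 for _, f5 in results) / N                # exact: 1/2
p_BC = sum(f4 >= 1 and f5 >= 1 for f4, f5 in results) / N  # exact: 1/3
p_D = sum(f5 == 1 for _, f5 in results) / N                # exact: 1/4
p_DE = sum(f4 == 1 or f5 == 1 for f4, f5 in results) / N   # exact: 23/54
```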

2015 Paper 2 Q12
D: 1600.0 B: 1500.0

Four players \(A\), \(B\), \(C\) and \(D\) play a coin-tossing game with a fair coin. Each player chooses a sequence of heads and tails, as follows: Player A: HHT; Player B: THH; Player C: TTH; Player D: HTT. The coin is then tossed until one of these sequences occurs, in which case the corresponding player is the winner.

  1. Show that, if only \(A\) and \(B\) play, then \(A\) has a probability of \(\frac14\) of winning.
  2. If all four players play together, find the probabilities of each one winning.
  3. Only \(B\) and \(C\) play. What is the probability of \(C\) winning if the first two tosses are TT? Let the probabilities of \(C\) winning if the first two tosses are HT, TH and HH be \(p\), \(q\) and \(r\), respectively. Show that \(p=\frac12 +\frac12q\). Find the probability that \(C\) wins.


Solution:

  1. The only way \(A\) can win is if the sequence starts HH: if it does not, then the first occurrence of HH is immediately preceded by a T, so THH appears before any HHT can, and \(B\) has already won. Therefore the probability that \(A\) wins is \(\frac14\)
  2. If the first two tosses are HH then \(A\) wins: the first T that appears completes HHT, and no other sequence can occur before it. Similarly, if the first two tosses are TT then \(C\) wins. Suppose the first two tosses are HT. If the next toss is T then \(D\) wins with HTT; if it is H, the last two tosses are TH, from where an H gives \(B\) the win with THH and a T returns us to the HT situation. Writing \(d = \mathbb{P}(D \text{ wins}\,|\,\text{HT})\), we get \(d = \frac12 + \frac12 \cdot \frac12 d\), so \(d = \frac23\) and \(B\) wins from HT with probability \(\frac13\); by the symmetry \(\text{H} \leftrightarrow \text{T}\), from TH \(B\) wins with probability \(\frac23\) and \(D\) with probability \(\frac13\). Therefore \(\mathbb{P}(A) = \mathbb{P}(C) = \frac14\), \(\mathbb{P}(B) = \frac14 \cdot \frac13 + \frac14 \cdot \frac23 = \frac14\) and \(\mathbb{P}(D) = \frac14 \cdot \frac23 + \frac14 \cdot \frac13 = \frac14\): each player wins with probability \(\frac14\).
  3. If the first two tosses are TT then \(C\) will win: the first H that appears completes TTH. If the first two tosses are HT, then either the next toss is T, after which the last two tosses are TT and \(C\) is certain to win, or the next toss is H, and it is as if we had started TH; i.e. \(p = \frac12 + \frac12 q\). If the first two tosses are TH, then either the next toss is H and \(C\) loses to THH, or the next toss is T and it is as if we had started HT, so \(q = \frac12 p\). Therefore \(p = \frac12 + \frac14p \Rightarrow p = \frac23\) and \(q = \frac13\). If the first two tosses are HH, then eventually a T appears and it is the same as starting HT, so \(r = p = \frac23\). Therefore the probability \(C\) wins is: \(\frac14 + \frac14 \cdot \frac23 + \frac14 \cdot \frac13 + \frac14 \cdot \frac23 = \frac23\)
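A simulation sketch (an addition to the solution) of the four-player and two-player games:

```python
import random

PATTERNS = {"A": "HHT", "B": "THH", "C": "TTH", "D": "HTT"}
rng = random.Random(7)

def play(players):
    """Toss until one listed player's pattern appears; return the winner."""
    seq = ""
    while True:
        seq += rng.choice("HT")
        for name in players:
            if seq.endswith(PATTERNS[name]):
                return name

N = 100_000
wins = {p: 0 for p in "ABCD"}
for _ in range(N):
    wins[play("ABCD")] += 1
four_way = {p: wins[p] / N for p in "ABCD"}         # each close to 1/4

p_C = sum(play("BC") == "C" for _ in range(N)) / N  # close to 2/3
```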

2015 Paper 2 Q13
D: 1600.0 B: 1516.0

The maximum height \(X\) of flood water each year on a certain river is a random variable with probability density function \(\f\) given by \[ \f(x) = \begin{cases} \lambda \e^{-\lambda x} & \text{for \(x\ge0\)}\,, \\ 0 & \text{otherwise,} \end{cases} \] where \(\lambda\) is a positive constant. It costs \(ky\) pounds each year to prepare for flood water of height \(y\) or less, where \(k\) is a positive constant and \(y\ge0\). If \(X \le y\) no further costs are incurred but if \(X> y\) the additional cost of flood damage is \(a(X - y )\) pounds where \(a\) is a positive constant.

  1. Let \(C\) be the total cost of dealing with the floods in the year. Show that the expectation of \(C\) is given by \[\mathrm{E}(C)=ky+\frac{a}{\lambda}\mathrm{e}^{-\lambda y} \, . \] How should \(y\) be chosen in order to minimise \(\mathrm{E}(C)\), in the different cases that arise according to the value of \(a/k\)?
  2. Find the variance of \(C\), and show that the more that is spent on preparing for flood water in advance the smaller this variance.


Solution:

  1. \(\,\) \begin{align*} && \mathbb{E}(C) &= \int_0^\infty \text{cost}(x) f(x) \d x \\ &&&= ky + \int_y^{\infty} a(x-y) \lambda e^{-\lambda x} \d x\\ &&&= ky + \int_0^{\infty} a u \lambda e^{-\lambda u -\lambda y} \d u \\ &&&= ky + ae^{-\lambda y} \left( \left [ -ue^{-\lambda u} \right]_0^\infty +\int_0^\infty e^{-\lambda u} \d u\right) \\ &&&= ky + \frac{a}{\lambda}e^{-\lambda y} \\ \\ && \frac{\d \mathbb{E}(C)}{\d y} &= k - ae^{-\lambda y} \\ \Rightarrow && y &= \frac{1}{\lambda}\ln \left ( \frac{a}{k} \right) \end{align*} Since \(\mathbb{E}(C)\) is increasing for large \(y\), this stationary point is a minimum: if \(\frac{a}{k} > 1\) the optimal choice is \(y = \frac{1}{\lambda}\ln \left ( \frac{a}{k} \right)\). If \(\frac{a}{k} \leq 1\) then \(\frac{\d \mathbb{E}(C)}{\d y} \geq k - a \geq 0\) for all \(y \geq 0\), so nothing should be spent on flood defences (\(y = 0\)).
  2. \(\,\) \begin{align*} && \mathbb{E}(C^2) &= \int_0^{\infty} \text{cost}(x)^2 f(x) \d x \\ &&&= \int_0^{\infty}(ky + a(x-y)\mathbb{1}_{x > y})^2 f(x) \d x \\ &&&= k^2y^2 + \int_y^{\infty}2kya(x-y)f(x)\d x + \int_y^{\infty}a^2 (x-y)^2 f(x) \d x \\ &&&= k^2y^2 + \frac{2kya}{\lambda}e^{- \lambda y}+a^2e^{-\lambda y}\int_{u=0}^\infty u^2 \lambda e^{-\lambda u} \d u \\ &&&= k^2y^2 + \frac{2kya}{\lambda}e^{-\lambda y}+a^2e^{-\lambda y}\left(\textrm{Var}(Exp(\lambda)) + \mathbb{E}(Exp(\lambda))^2\right)\\ &&&= k^2y^2 + \frac{2kya}{\lambda}e^{-\lambda y} + a^2e^{-\lambda y} \frac{2}{\lambda^2} \\ && \textrm{Var}(C) &= k^2y^2 + \frac{2kya}{\lambda}e^{-\lambda y} + a^2e^{-\lambda y} \frac{2}{\lambda^2} - \left ( ky + \frac{a}{\lambda} e^{-\lambda y}\right)^2 \\ &&&= a^2e^{-\lambda y} \frac{2}{\lambda^2} - a^2 e^{-2\lambda y}\frac{1}{\lambda^2} \\ &&&= \frac{a^2}{\lambda^2} e^{-\lambda y}\left (2 - e^{-\lambda y} \right) \\ \\ && \frac{\d \textrm{Var}(C)}{\d y} &= \frac{a^2}{\lambda^2} \left (-2\lambda e^{-\lambda y} +2\lambda e^{-2\lambda y} \right) \\ &&&= \frac{2a^2}{\lambda} e^{-\lambda y}\left (e^{-\lambda y}-1 \right) \leq 0 \end{align*} so \(\textrm{Var}(C)\) is decreasing in \(y\): the more that is spent in advance, the smaller the variance.
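A Monte Carlo check (an addition, with arbitrary illustrative parameter values) of the mean and variance formulas:

```python
import math
import random

rng = random.Random(3)
lam, k, a, y = 1.5, 2.0, 3.0, 0.8        # illustrative values

N = 400_000
# Cost is ky plus a*(X - y) whenever the flood height X exceeds y.
costs = [k * y + a * max(rng.expovariate(lam) - y, 0.0) for _ in range(N)]
mean = sum(costs) / N
var = sum((c - mean) ** 2 for c in costs) / N

mean_exact = k * y + (a / lam) * math.exp(-lam * y)
var_exact = (a / lam) ** 2 * math.exp(-lam * y) * (2 - math.exp(-lam * y))
assert abs(mean - mean_exact) < 0.02
assert abs(var - var_exact) < 0.1
```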

2015 Paper 3 Q13
D: 1700.0 B: 1500.0

Each of the two independent random variables \(X\) and \(Y\) is uniformly distributed on the interval~\([0,1]\).

  1. By considering the lines \(x+y =\) \(\mathrm{constant}\) in the \(x\)-\(y\) plane, find the cumulative distribution function of \(X+Y\).
  2. Hence show that the probability density function \(f\) of \((X+Y)^{-1}\) is given by \[ \f(t) = \begin{cases} 2t^{-2} -t^{-3} & \text{for \( \tfrac12 \le t \le 1\)} \\ t^{-3} & \text{for \(1\le t <\infty\)}\\ 0 & \text{otherwise}. \end{cases} \] Evaluate \(\E\Big(\dfrac1{X+Y}\Big)\,\).
  3. Find the cumulative distribution function of \(Y/X\) and use this result to find the probability density function of \(\dfrac X {X+Y}\). Write down \(\E\Big( \dfrac X {X+Y}\Big)\) and verify your result by integration.


Solution:

  1. \(\mathbb{P}(X + Y \leq c) \) is the area between the \(x\)-axis, \(y\)-axis and the line \(x + y = c\). There are two non-trivial cases for this: \[\mathbb{P}(X + Y \leq c) = \begin{cases} 0 & \text{ if } c \leq 0 \\ \frac{c^2}{2} & \text{ if } 0 \leq c \leq 1 \\ 1- \frac{(2-c)^2}{2} & \text{ if } 1 \leq c \leq 2 \\ 1 & \text{ otherwise} \end{cases}\]
  2. \begin{align*} && \mathbb{P}((X + Y)^{-1} \leq t) &= 1- \mathbb{P}(X + Y \leq \frac1{t}) \\ \Rightarrow && f_{(X+Y)^{-1}}(t) &= 0 -\begin{cases} 0 & \text{ if } \frac1{t} \leq 0 \\ \frac{\d}{\d t}\frac{1}{2t^2} & \text{ if } \frac{1}{t} \leq 1 \\ \frac{\d}{\d t} \l 1- \frac{(2-\frac1t)^2}{2} \r & \text{ if } 1 \leq \frac{1}{t} \leq 2 \\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ (2-\frac1t)t^{-2} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ 2t^{-2}-t^{-3} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \end{align*} Therefore, \begin{align*} \E \Big(\dfrac1{X+Y}\Big) &= \int_{\frac12}^{\infty} t f_{(X+Y)^{-1}}(t) \, \d t \\ &= \int_{\frac12}^{1} t f_{(X+Y)^{-1}}(t) \, \d t + \int_{1}^{\infty} t f_{(X+Y)^{-1}}(t) \d t\\ &= \int_{\frac12}^{1} \l 2t^{-1} - t^{-2} \r \, \d t + \int_{1}^{\infty} t^{-2} \d t\\ &= \left [ 2 \ln (t) + t^{-1} \right]_{\frac12}^{1} + \left [ -t^{-1} \right ]_{1}^{\infty} \\ &= 1 + 2 \ln 2 -2 + 1 \\ &= 2 \ln 2 \end{align*}
  3. \begin{align*} &&\mathbb{P} \l \frac{Y}{X} \leq c \r &= \mathbb{P}( Y \leq c X) \\ &&&= \begin{cases} 0 & \text{if } c \leq 0 \\ \frac{c}{2} & \text{if } 0 \leq c \leq 1 \\ 1-\frac{1}{2c} & \text{if } 1 \leq c \end{cases} \\ \\ \Rightarrow && \mathbb{P} \l \frac{X}{X+Y} \leq t\r &= \mathbb{P} \l \frac{1}{1+\frac{Y}{X}} \leq t\r \\ &&&= \mathbb{P} \l \frac{1}{t} \leq 1+\frac{Y}{X}\r \\ &&&= \mathbb{P} \l \frac{1}{t} - 1\leq \frac{Y}{X}\r \\ &&&= 1- \mathbb{P} \l \frac{Y}{X} \leq \frac{1}{t} - 1\r \\ &&&= 1 - \begin{cases} 0 & \text{if } \frac1{t} - 1 \leq 0 \\ \frac{1}{2t} - \frac{1}{2} & \text{if } 0 \leq \frac1{t} - 1 \leq 1 \\ 1-\frac{t}{2-2t} & \text{if } 1 \leq \frac1{t} - 1 \end{cases} \\ && f_{\frac{X}{X+Y}}(t) &= \begin{cases} \frac{1}{2(1-t)^2} & \text{if } 0 \leq t \leq \frac12 \\ \frac{1}{2t^2} & \text{if } \frac12 \leq t \leq 1 \\ 0 & \text{otherwise} \end{cases} \\ \Rightarrow && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 t f(t) \d t \\ &&&= \int_0^{\frac12} \frac{t}{2(1-t)^2} \d t + \int_{\frac12}^1 \frac{1}{2t} \d t \\ &&&= \left [ \frac{1}{2(1-t)} + \frac12 \ln(1-t) \right]_0^{\frac12} + \left [ \frac12 \ln t \right]_{\frac12}^1 \\ &&& = \l \frac12 - \frac{\ln 2}{2} \r + \frac{\ln 2}{2} = \frac{1}{2} \\ \\ && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 \int_0^1 \frac{x}{x+y} \d y\d x \\ &&&= \int_0^1 \l x \ln (x+1) - x \ln x \r \d x \\ &&&= \left [\frac{x^2}2 \ln(x+1) - \frac{x^2}{2} \ln(x) \right]_0^1 -\int_0^1 \l \frac{x^2}{2(x+1)} - \frac{x}{2} \r \d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \frac{x^2-1+1}{2(x+1)}\d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \frac{x -1}{2} + \frac{1}{2(x+1)}\d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \frac{1}{4} + \frac{1}{2} - \frac{\ln 2}{2} \\ &&&= \frac{1}{2} \end{align*} We can also notice that \(1 = \mathbb{E} \l \frac{X+Y}{X+Y} \r = \mathbb{E} \l \frac{X}{X+Y} \r + \mathbb{E} \l \frac{Y}{X+Y} \r = 2 \mathbb{E} \l \frac{X}{X+Y} \r\) by symmetry, and the expectations certainly exist since \(0 \leq \frac{X}{X+Y} \leq 1\).
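Both expectations, along with the part 1 distribution function, can be sanity-checked by simulation. A minimal sketch (the sample size and seed are arbitrary choices):

```python
import math
import random

rng = random.Random(2024)
n = 400_000
inv_sum = ratio_sum = 0.0
sum_le_half = 0
for _ in range(n):
    x, y = rng.random(), rng.random()   # X, Y ~ U[0,1], independent
    inv_sum += 1.0 / (x + y)
    ratio_sum += x / (x + y)
    if x + y <= 0.5:
        sum_le_half += 1

mean_inv = inv_sum / n        # should approach E(1/(X+Y)) = 2 ln 2
mean_ratio = ratio_sum / n    # should approach E(X/(X+Y)) = 1/2
cdf_half = sum_le_half / n    # part 1 gives P(X+Y <= 1/2) = (1/2)^2 / 2 = 1/8
```

Note that \(1/(X+Y)\) has infinite variance (its second moment \(\int_1^\infty t^{-1}\,\d t\) diverges), so the first estimate converges more slowly than the bounded quantities.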

2014 Paper 1 Q12
D: 1484.0 B: 1441.7

A game in a casino is played with a fair coin and an unbiased cubical die whose faces are labelled \(1, 1, 1, 2, 2\) and \(3.\) In each round of the game, the die is rolled once and the coin is tossed once. The outcome of the round is a random variable \(X\). The value, \(x\), of \(X\) is determined as follows. If the result of the toss is heads then \(x= \vert ks -1\vert\), and if the result of the toss is tails then \(x=\vert k-s\vert\), where \(s\) is the number on the die and \(k\) is a given number. Show that \(\mathbb{E}(X^2) = k +13(k-1)^2 /6\). Given that both \(\mathbb{E}(X^2)\) and \(\mathbb{E}(X)\) are positive integers, and that \(k\) is a single-digit positive integer, determine the value of \(k\), and write down the probability distribution of \(X\). A gambler pays \(\pounds 1\) to play the game, which consists of two rounds. The gambler is paid:

  • \(\pounds w\), where \(w\) is an integer, if the sum of the outcomes of the two rounds exceeds \(25\);
  • \(\pounds 1\) if the sum of the outcomes equals \(25\);
  • nothing if the sum of the outcomes is less than \(25\).
Find, in terms of \(w\), an expression for the amount the gambler expects to be paid in a game, and deduce the maximum possible value of \(w\), given that the casino's owners choose \(w\) so that the game is in their favour.


Solution: \begin{align*} && \mathbb{E}(X^2) &= \frac12 \left (\frac16 \left ( 3(k -1)^2+2(2k-1)^2+(3k-1)^2 \right) +\frac16 \left ( 3(k -1)^2+2(k-2)^2+(k-3)^2 \right) \right) \\ &&&= \frac12 \left (\frac16 \left (20k^2-20k+6 \right) + \frac16 \left ( 6k^2-20k+20\right) \right) \\ &&&= \frac1{12} \left (26k^2-40k+ 26\right) \\ &&&= \frac{13}{6} (k^2+1) - \frac{10}{3}k \\ &&&= \frac{13}{6}(k-1)^2+k \end{align*} Since \(k\) is a single-digit positive integer and \(\mathbb{E}(X^2)\) is an integer, \(6 \mid k-1 \Rightarrow k = 1, 7\). \begin{align*} \mathbb{E}(X | k=1) &= \frac12 \left (\frac16 \left ( 2+2 \right) +\frac16 \left ( 2+2 \right) \right) = \frac23 \not \in \mathbb{Z}\\ \mathbb{E}(X | k=7) &= \frac12 \left (\frac16 \left ( 3\cdot6+2\cdot13+20 \right) +\frac16 \left ( 3\cdot6+2\cdot5+4 \right) \right) = 8 \end{align*} Therefore \(k = 7\). The probability distribution is \begin{align*} && \mathbb{P}(X=4) = \frac1{12} \\ && \mathbb{P}(X=5) = \frac1{6} \\ && \mathbb{P}(X=6) = \frac12 \\ && \mathbb{P}(X=13) = \frac1{6} \\ && \mathbb{P}(X=20)= \frac1{12} \\ \end{align*} The only ways to score more than \(25\) are: \(20+6, 20+13, 20+20, 13+13\). The only way to score exactly \(25\) is \(20+5\). \begin{align*} \mathbb{P}(>25) &= \frac1{12} \cdot\left(2\cdot \frac12+2\cdot\frac16+\frac1{12}\right) + \frac{1}{6^2} \\ &= \frac{7}{48} \\ \mathbb{P}(=25) &= \frac{2}{12 \cdot 6} = \frac{1}{36} \\ \\ \mathbb{E}(\text{payout}) &= \frac{7}{48}w + \frac{1}{36} = \frac{21w+4}{144} \end{align*} The casino needs \(\frac{21w+4}{144} < 1 \Rightarrow 21w< 140 \Rightarrow w < \frac{20}{3}\), so, since \(w\) is an integer, the maximum possible value is \(w = 6\).
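The distribution and the payout calculation can be verified exactly with rational arithmetic. The sketch below rebuilds the distribution of \(X\) for \(k = 7\) directly from the six die faces and the two coin outcomes, each face-coin pair having probability \(\frac1{12}\):

```python
from fractions import Fraction as F
from itertools import product

k = 7
die = [1, 1, 1, 2, 2, 3]  # faces of the die

# Distribution of X: heads gives |k*s - 1|, tails gives |k - s|,
# each (face, coin) combination having probability 1/12.
dist = {}
for s in die:
    for x in (abs(k * s - 1), abs(k - s)):
        dist[x] = dist.get(x, F(0)) + F(1, 12)

# Sum of the outcomes of two independent rounds
p_gt25 = sum(dist[a] * dist[b] for a, b in product(dist, repeat=2) if a + b > 25)
p_eq25 = sum(dist[a] * dist[b] for a, b in product(dist, repeat=2) if a + b == 25)

def expected_payout(w):
    # pounds paid out per game: w if the sum exceeds 25, 1 if it equals 25
    return p_gt25 * w + p_eq25
```

Comparing `expected_payout(w)` with the £1 stake for successive integers \(w\) recovers the maximum \(w = 6\).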

2014 Paper 2 Q12
D: 1600.0 B: 1484.8

The lifetime of a fly (measured in hours) is given by the continuous random variable \(T\) with probability density function \(f(t)\) and cumulative distribution function \(F(t)\). The hazard function, \(h(t)\), is defined, for \(F(t) < 1\), by \[ h(t) = \frac{f(t)}{1-F(t)}\,. \]

  1. Given that the fly lives to at least time \(t\), show that the probability of its dying within the following \(\delta t\) is approximately \(h (t) \, \delta t\) for small values of \(\delta t\).
  2. Find the hazard function in the case \(F(t) = t/a\) for \(0< t < a\). Sketch \(f(t)\) and \(h(t)\) in this case.
  3. The random variable \(T\) is distributed on the interval \(t > a\), where \(a>0\), and its hazard function is \(t^{-1}\). Determine the probability density function for \(T\).
  4. Show that \(h(t)\) is constant for \(t > b\) and zero otherwise if and only if \(f(t) =ke^{-k(t-b)}\) for \(t > b\), where \(k\) is a positive constant.
  5. The random variable \(T\) is distributed on the interval \(t > 0\) and its hazard function is given by \[ h(t) = \left(\frac{\lambda}{\theta^\lambda}\right)t^{\lambda-1}\,, \] where \(\lambda\) and \(\theta\) are positive constants. Find the probability density function for \(T\).


Solution:

  1. \(\,\) \begin{align*} && \mathbb{P}(t < T \leq t + \delta t \mid T > t) &= \frac{\mathbb{P}(t < T \leq t + \delta t)}{\mathbb{P}(T > t )} \\ &&&= \frac{\int_t^{t+\delta t} f(s) \d s}{1-F(t)} \\ &&&\approx \frac{f(t)\delta t}{1-F(t)} \\ &&&= h(t) \delta t \end{align*}
  2. If \(F(t) = t/a\) then \(f(t) = 1/a\) and \(h(t) = \frac{1/a}{1-t/a} = \frac{1}{a-t}\)
    [TikZ diagram: sketches of \(f(t) = 1/a\), constant on \(0 < t < a\), and \(h(t) = 1/(a-t)\), increasing from \(1/a\) and unbounded as \(t \to a\)]
  3. \(\,\) \begin{align*} && \frac{F'}{1-F} &= \frac{1}{t} \\ \Rightarrow && -\ln (1-F) &= \ln t + C\\ \Rightarrow && 1-F &= \frac{A}{t} \\ && F &= 1 - \frac{A}{t} \\ F(a) = 0: && F &= 1 - \frac{a}{t} \\ && f(t) &= \frac{a}{t^2} \end{align*}
  4. (\(\Rightarrow\)) \begin{align*} && \frac{F'}{1-F} &= k \\ \Rightarrow && -\ln(1-F) &= kt+C \\ \Rightarrow && 1-F &= Ae^{-kt} \\ F(b) = 0: && 1 &= Ae^{-kb} \\ \Rightarrow && 1-F &= e^{-k(t-b)}\\ \Rightarrow && f &= ke^{-k(t-b)} \\ \end{align*} (\(\Leftarrow\)) \(f(t) = ke^{-k(t-b)} \Rightarrow F(t) = 1-e^{-k(t-b)}\) and the result is clear.
  5. \(\,\) \begin{align*} && \frac{F'}{1-F} &= \left ( \frac{\lambda}{\theta^{\lambda}} \right) t^{\lambda-1} \\ \Rightarrow && -\ln(1-F) &= \left ( \frac{t}{\theta} \right)^{\lambda} +C\\ \Rightarrow && F &= 1-A\exp \left (- \left ( \frac{t}{\theta} \right)^{\lambda} \right) \\ F(0) = 0: && 0 &= 1-A \\ \Rightarrow && F &= 1 - \exp \left (- \left ( \frac{t}{\theta} \right)^{\lambda} \right) \\ \Rightarrow && f &= \lambda t^{\lambda -1} \theta^{-\lambda} \exp \left (- \left ( \frac{t}{\theta} \right)^{\lambda} \right) \end{align*}
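As a quick numerical check of part 5, the sketch below evaluates \(f(t)/(1-F(t))\) for the derived density and confirms that it reproduces the prescribed hazard \(\left(\frac{\lambda}{\theta^\lambda}\right)t^{\lambda-1}\); with \(\lambda = 1\) it also reduces to the constant hazard of part 4 with \(b = 0\). The parameter values are illustrative only.

```python
import math

def pdf(t, lam, theta):
    # density from part 5: f(t) = lam * t^(lam-1) * theta^(-lam) * exp(-(t/theta)^lam)
    return lam * t ** (lam - 1) * theta ** (-lam) * math.exp(-(t / theta) ** lam)

def cdf(t, lam, theta):
    # F(t) = 1 - exp(-(t/theta)^lam), as derived above
    return 1.0 - math.exp(-(t / theta) ** lam)

def hazard(t, lam, theta):
    # h(t) = f(t) / (1 - F(t)); should equal (lam / theta^lam) * t^(lam-1)
    return pdf(t, lam, theta) / (1.0 - cdf(t, lam, theta))

lam, theta = 1.5, 2.0   # illustrative parameters, not from the problem
errors = [abs(hazard(t, lam, theta) - (lam / theta ** lam) * t ** (lam - 1))
          for t in (0.5, 1.0, 3.0, 7.0)]
```

The discrepancies in `errors` come only from floating-point rounding, since the exponential factors in \(f\) and \(1-F\) cancel exactly.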