Problems


2022 Paper 3 Q12
D: 1500.0 B: 1500.0

  1. The point \(A\) lies on the circumference of a circle of radius \(a\) and centre \(O\). The point \(B\) is chosen at random on the circumference, so that the angle \(AOB\) has a uniform distribution on \([0, 2\pi]\). Find the expected length of the chord \(AB\).
  2. The point \(C\) is chosen at random in the interior of a circle of radius \(a\) and centre \(O\), so that the probability that it lies in any given region is proportional to the area of the region. The random variable \(R\) is defined as the distance between \(C\) and \(O\). Find the probability density function of \(R\). Obtain a formula in terms of \(a\), \(R\) and \(t\) for the length of a chord through \(C\) that makes an acute angle of \(t\) with \(OC\). Show that as \(C\) varies (with \(t\) fixed), the expected length \(\mathrm{L}(t)\) of such chords is given by \[ \mathrm{L}(t) = \frac{4a(1-\cos^3 t)}{3\sin^2 t}\,. \] Show further that \[ \mathrm{L}(t) = \frac{4a}{3}\left(\cos t + \tfrac{1}{2}\sec^2(\tfrac{1}{2}t)\right). \]
  3. The random variable \(T\) is uniformly distributed on \([0, \frac{1}{2}\pi]\). Find the expected value of \(\mathrm{L}(T)\).

2020 Paper 3 Q11
D: 1500.0 B: 1500.0

The continuous random variable \(X\) is uniformly distributed on \([a,b]\) where \(0 < a < b\).

  1. Let \(\mathrm{f}\) be a function defined for all \(x \in [a,b]\)
    • with \(\mathrm{f}(a) = b\) and \(\mathrm{f}(b) = a\),
    • which is strictly decreasing on \([a,b]\),
    • for which \(\mathrm{f}(x) = \mathrm{f}^{-1}(x)\) for all \(x \in [a,b]\).
    The random variable \(Y\) is defined by \(Y = \mathrm{f}(X)\). Show that \[ \mathrm{P}(Y \leqslant y) = \frac{b - \mathrm{f}(y)}{b - a} \quad \text{for } y \in [a,b]. \] Find the probability density function for \(Y\) and hence show that \[ \mathrm{E}(Y^2) = -ab + \int_a^b \frac{2x\,\mathrm{f}(x)}{b-a} \; \mathrm{d}x. \]
  2. The random variable \(Z\) is defined by \(\dfrac{1}{Z} + \dfrac{1}{X} = \dfrac{1}{c}\) where \(\dfrac{1}{c} = \dfrac{1}{a} + \dfrac{1}{b}\). By finding the variance of \(Z\), show that \[ \ln\left(\frac{b-c}{a-c}\right) < \frac{b-a}{c}. \]

2019 Paper 2 Q11
D: 1500.0 B: 1500.0

  1. The three integers \(n_1\), \(n_2\) and \(n_3\) satisfy \(0 < n_1 < n_2 < n_3\) and \(n_1 + n_2 > n_3\). Find the number of ways of choosing the pair of numbers \(n_1\) and \(n_2\) in the cases \(n_3 = 9\) and \(n_3 = 10\). Given that \(n_3 = 2n + 1\), where \(n\) is a positive integer, write down an expression (which you need not prove is correct) for the number of ways of choosing the pair of numbers \(n_1\) and \(n_2\). Simplify your expression. Write down and simplify the corresponding expression when \(n_3 = 2n\), where \(n\) is a positive integer.
  2. You have \(N\) rods, of lengths \(1, 2, 3, \ldots, N\) (one rod of each length). You take the rod of length \(N\), and choose two more rods at random from the remainder, each choice of two being equally likely. Show that, in the case \(N = 2n + 1\) where \(n\) is a positive integer, the probability that these three rods can form a triangle (of non-zero area) is $$\frac{n - 1}{2n - 1}.$$ Find the corresponding probability in the case \(N = 2n\), where \(n\) is a positive integer.
  3. You have \(2M + 1\) rods, of lengths \(1, 2, 3, \ldots, 2M + 1\) (one rod of each length), where \(M\) is a positive integer. You choose three at random, each choice of three being equally likely. Show that the probability that the rods can form a triangle (of non-zero area) is $$\frac{(4M + 1)(M - 1)}{2(2M + 1)(2M - 1)}.$$ Note: \(\sum_{k=1}^{K} k^2 = \frac{1}{6}K(K + 1)(2K + 1)\).


Solution:

  1. If \(n_3 = 9\) and we are looking for \(0 < n_1 < n_2 < n_3\) with \(n_1 + n_2 > n_3\), we can count the possible values of \(n_1\) for each \(n_2\): \begin{array}{clc|c} n_2 & \text{range} & \text{count} \\ \hline 6 & 4-5 & 2 \\ 7 & 3-6 & 4 \\ 8 & 2-7 & 6 \\ \hline & & 12 \end{array} When \(n_3 = 10\): \begin{array}{clc|c} n_2 & \text{range} & \text{count} \\ \hline 6 & 5 & 1 \\ 7 & 4-6 & 3 \\ 8 & 3-7 & 5 \\ 9 & 2-8 & 7 \\ \hline & & 16 \end{array} When \(n_3 = 2n+1\) we can have \(2 + 4 + \cdots + (2n-2) = n(n-1)\) choices, and when \(n_3 = 2n\) we can have \(1 + 3 + \cdots + (2n-3) = (n-1)^2\) choices.
  2. For the 3 rods to form a triangle, it is necessary and sufficient that the lengths of the two shorter rods sum to more than \(N\). When \(N = 2n+1\) there are \(n(n-1)\) ways this can happen, out of \(\binom{2n}{2}\) ways to choose the two shorter rods, ie \begin{align*} && P &= \frac{n(n-1)}{\frac{2n(2n-1)}{2}} \\ &&&= \frac{n-1}{2n-1} \end{align*} When \(N = 2n\) there are \((n-1)^2\) ways this can happen, out of \(\binom{2n-1}{2}\) ways, ie \begin{align*} && P &= \frac{(n-1)^2}{\frac{(2n-1)(2n-2)}{2}} \\ &&&= \frac{n-1}{2n-1} \end{align*}
  3. The number of ways this can happen is: \begin{align*} C &= \sum_{k=3}^{2M+1} \# \{ \text{triangles where }k\text{ is largest} \} \\ &= \sum_{k=1}^{M} \# \{ \text{triangles where }2k+1\text{ is largest} \} +\sum_{k=1}^{M} \# \{ \text{triangles where }2k\text{ is largest} \}\\ &= \sum_{k=1}^{M} k(k-1)+\sum_{k=1}^{M} (k-1)^2\\ &= \sum_{k=1}^{M} (2k^2-3k+1)\\ &= \frac13M(M+1)(2M+1) - \frac32M(M+1) + M \\ &= \frac16 M(4M+1)(M-1) \end{align*} Therefore the probability is \begin{align*} && P &= \frac{M(4M+1)(M-1)}{6 \binom{2M+1}{3}} \\ &&&= \frac{M(4M+1)(M-1)}{(2M+1)2M(2M-1)} \\ &&&= \frac{(4M+1)(M-1)}{2(2M+1)(2M-1)} \end{align*}
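The counts and the closed form above can be cross-checked by exhaustive enumeration. A small sketch (the helper names `triangle_prob` and `formula` are ours, not part of the problem):

```python
from itertools import combinations
from fractions import Fraction

def triangle_prob(M):
    """Probability that 3 rods chosen from lengths 1..2M+1 form a
    (non-degenerate) triangle: a + b > c for a < b < c."""
    rods = range(1, 2 * M + 2)
    triples = list(combinations(rods, 3))  # combinations yields a < b < c
    good = sum(1 for a, b, c in triples if a + b > c)
    return Fraction(good, len(triples))

def formula(M):
    """The closed form derived in part 3."""
    return Fraction((4 * M + 1) * (M - 1), 2 * (2 * M + 1) * (2 * M - 1))

assert all(triangle_prob(M) == formula(M) for M in range(2, 12))
```

The enumeration agrees with the formula for every small \(M\) tested.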

2018 Paper 3 Q12
D: 1700.0 B: 1516.0

A random process generates, independently, \(n\) numbers each of which is drawn from a uniform (rectangular) distribution on the interval 0 to 1. The random variable \(Y_k\) is defined to be the \(k\)th smallest number (so there are \(k-1\) smaller numbers).

  1. Show that, for \(0\le y\le1\,\), \[ {\rm P}\big(Y_k\le y\big) =\sum^{n}_{m=k}\binom{n}{m}y^{m}\left(1-y\right)^{n-m} . \tag{\(*\)} \]
  2. Show that \[ m\binom n m = n \binom {n-1}{m-1} \] and obtain a similar expression for \(\displaystyle (n-m) \, \binom n m\,\). Starting from \((*)\), show that the probability density function of \(Y_k\) is \[ n\binom{ n-1}{k-1} y^{k-1}\left(1-y\right)^{ n-k} \,.\] Deduce an expression for \(\displaystyle \int_0^1 y^{k-1}(1-y)^{n-k} \, \d y \,\).
  3. Find \(\E(Y_k) \) in terms of \(n\) and \(k\).


Solution:

  1. \begin{align*} && \mathbb{P}(Y_k \leq y) &= \sum_{j=k}^n\mathbb{P}(\text{exactly }j \text{ values less than }y) \\ &&&= \sum_{j=k}^n \binom{n}{j} y^j(1-y)^{n-j} \end{align*}
  2. Both sides count the number of ways to choose, from \(n\) people, a committee of \(m\) people, one of whom is the chair. First: choose the committee in \(\binom{n}{m}\) ways and then the chair in \(m\) ways, giving \(m \binom{n}{m}\). Alternatively, choose the chair in \(n\) ways and then the remaining \(m-1\) committee members in \(\binom{n-1}{m-1}\) ways. Therefore \(m \binom{n}{m} = n \binom{n-1}{m-1}\). \begin{align*} (n-m) \binom{n}{m} &= (n-m) \binom{n}{n-m} \\ &= n \binom{n-1}{n-m-1} \\ &= n \binom{n-1}{m} \end{align*} \begin{align*} f_{Y_k}(y) &= \frac{\d }{\d y} \l \sum^{n}_{m=k}\binom{n}{m}y^{m}\left(1-y\right)^{n-m} \r \\ &= \sum^{n}_{m=k} \l \binom{n}{m}my^{m-1}\left(1-y\right)^{n-m} -\binom{n}{m}(n-m)y^{m}\left(1-y\right)^{n-m-1} \r \\ &= \sum^{n}_{m=k} \l n \binom{n-1}{m-1}y^{m-1}\left(1-y\right)^{n-m} -n \binom{n-1}{m} y^{m}\left(1-y\right)^{n-m-1} \r \\ &= n\sum^{n}_{m=k} \binom{n-1}{m-1}y^{m-1}\left(1-y\right)^{n-m} -n\sum^{n+1}_{m=k+1} \binom{n-1}{m-1} y^{m-1}\left(1-y\right)^{n-m} \\ &= n \binom{n-1}{k-1} y^{k-1}(1-y)^{n-k} \end{align*} (the two sums telescope: every term cancels except the \(m=k\) term of the first, since \(\binom{n-1}{n} = 0\) kills the \(m = n+1\) term of the second). \begin{align*} &&1 &= \int_0^1 f_{Y_k}(y) \d y \\ &&&= \int_0^1 n \binom{n-1}{k-1} y^{k-1}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1} \int_0^1 y^{k-1}(1-y)^{n-k} \d y \\ \Rightarrow && \frac{1}{n \binom{n-1}{k-1}} &= \int_0^1 y^{k-1}(1-y)^{n-k} \d y \\ \end{align*}
  3. \begin{align*} && \mathbb{E}(Y_k) &= \int_0^1 y f_{Y_k}(y) \d y \\ &&&= \int_0^1 n \binom{n-1}{k-1} y^{k}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1}\int_0^1 y^{k}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1}\int_0^1 y^{k+1-1}(1-y)^{n+1-(k+1)} \d y \\ &&&= n \binom{n-1}{k-1} \frac{1}{(n+1) \binom{n}{k}}\\ &&&= \frac{n}{n+1} \cdot \frac{k}{n} \\ &&&= \frac{k}{n+1} \end{align*}
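The result \(\E(Y_k) = k/(n+1)\) can be verified exactly using the Beta integral deduced in part 2. A sketch (the helper name `expected_order_stat` is ours):

```python
from fractions import Fraction
from math import comb, factorial

def expected_order_stat(n, k):
    """E(Y_k) from the pdf n*C(n-1, k-1) y^(k-1) (1-y)^(n-k):
    E(Y_k) = n*C(n-1, k-1) * Int_0^1 y^k (1-y)^(n-k) dy,
    with the Beta integral Int_0^1 y^a (1-y)^b dy = a! b! / (a+b+1)!."""
    beta = Fraction(factorial(k) * factorial(n - k), factorial(n + 1))
    return n * comb(n - 1, k - 1) * beta

assert all(expected_order_stat(n, k) == Fraction(k, n + 1)
           for n in range(1, 12) for k in range(1, n + 1))
```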

2017 Paper 1 Q12
D: 1500.0 B: 1513.9

In a lottery, each of the \(N\) participants pays \(\pounds c\) to the organiser and picks a number from \(1\) to \(N\). The organiser picks at random the winning number from \(1\) to \(N\) and all those participants who picked this number receive an equal share of the prize, \(\pounds J\).

  1. The participants pick their numbers independently and with equal probability. Obtain an expression for the probability that no participant picks the winning number, and hence determine the organiser's expected profit. Use the approximation \[ \left( 1 - \frac{a}{N} \right)^N \approx \e^{-a} \tag{\(*\)} \] to show that if \(2Nc = J\) then the organiser will expect to make a loss. Note: \(\e > 2\).
  2. Instead of the numbers being equally popular, a fraction \(\gamma\) of the numbers are popular and the rest are unpopular. For each participant, the probability of picking any given popular number is \(\dfrac{a}{N}\) and the probability of picking any given unpopular number is \(\dfrac{b}{N}\,\). Find a relationship between \(a\), \(b\) and \(\gamma\). Show that, using the approximation \((*)\), the organiser's expected profit can be expressed in the form \[ A\e^{-a} + B\e^{-b} +C \,, \] where \(A\), \(B\) and \(C\) can be written in terms of \(J\), \(c\), \(N\) and \(\gamma\). In the case \(\gamma = \frac18\) and \(a=9b\), find \(a\) and \(b\). Show that, if \(2Nc = J\), then the organiser will expect to make a profit. Note: \(\e < 3\).


Solution:

  1. The probability that no-one picks the winning number is \(\left ( 1 - \frac{1}{N}\right)^N \approx e^{-1}\). Since \(e > 2\) we have \(1 - e^{-1} > \frac12\), so \begin{align*} && \mathbb{E}(\text{profit}) &= Nc - (1-e^{-1})J \\ &&& < Nc - \tfrac12 J \\ &&&= \frac{2Nc-J}{2} \end{align*} Therefore if \(J = 2Nc\) the expected profit is negative.
  2. \(\,\) \begin{align*} && 1 &= \sum_{\text{all numbers}} \mathbb{P}(\text{pick }i) \\ &&&= \sum_{\text{popular numbers}} \mathbb{P}(\text{pick }i)+\sum_{\text{unpopular numbers}} \mathbb{P}(\text{pick }i) \\ &&&=\gamma N \frac{a}{N} + (1-\gamma)N \frac{b}{N} \\ &&&= \gamma a + (1-\gamma)b \end{align*} \begin{align*} && \mathbb{P}(\text{no-one picks winning number}) &= \mathbb{P}(\text{no-one picks winning number} | \text{winning number is popular})\mathbb{P}(\text{winning number is popular}) \\ &&&\quad + \mathbb{P}(\text{no-one picks} | \text{unpopular})\mathbb{P}(\text{unpopular}) \\ &&&= \left (1 - \frac{a}{N} \right)^N \gamma + \left (1 - \frac{b}{N} \right)^N (1-\gamma) \\ &&&\approx \gamma e^{-a} + (1-\gamma)e^{-b} \\ \\ && \mathbb{E}(\text{profit}) &= Nc - (1-\gamma e^{-a} - (1-\gamma)e^{-b})J \\ &&&= Nc-J+J\gamma e^{-a} +J(1-\gamma)e^{-b} \end{align*} If \(\gamma = \frac18\) and \(a=9b\), then \(1=\frac18 a + \frac78b = 2b \Rightarrow b = \frac12, a = \frac92\) and \begin{align*} && \mathbb{E}(\text{profit}) &= Nc-J +J\tfrac18e^{-9/2}+J\tfrac78e^{-1/2} \\ &&&= Nc-J+\tfrac18Je^{-1/2}(e^{-4}+7) \end{align*} It suffices to show \(e^{-1/2}\frac{e^{-4}+7}{8} > \frac12\): \begin{align*} && e^{-1/2}\frac{e^{-4}+7}{8} &> \frac12 \\ \Leftrightarrow && e^{-4}+7 &>4e^{1/2} \\ \Leftrightarrow && 49+14e^{-4}+e^{-8} &>16e \\ \end{align*} The left-hand side is greater than \(49\), while \(e < 3\) gives \(16e < 48\), so we are done.
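As a numerical sanity check of part 2 (not part of the original solution; the scale `N = 1000`, `c = 1` is an arbitrary illustration):

```python
from math import exp

def expected_profit(N, c, J, gamma, a, b):
    """Organiser's expected profit Nc - (1 - P(no winner)) J, with
    P(no winner) ~ gamma*e^-a + (1-gamma)*e^-b via the (*) approximation."""
    p_no_winner = gamma * exp(-a) + (1 - gamma) * exp(-b)
    return N * c - (1 - p_no_winner) * J

# gamma = 1/8 and a = 9b force b = 1/2, a = 9/2; take J = 2Nc
profit = expected_profit(N=1000, c=1.0, J=2000.0, gamma=1 / 8, a=4.5, b=0.5)
assert profit > 0  # the organiser expects a profit, as shown above
```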

2015 Paper 3 Q13
D: 1700.0 B: 1500.0

Each of the two independent random variables \(X\) and \(Y\) is uniformly distributed on the interval~\([0,1]\).

  1. By considering the lines \(x+y =\) \(\mathrm{constant}\) in the \(x\)-\(y\) plane, find the cumulative distribution function of \(X+Y\).
  2. Hence show that the probability density function \(f\) of \((X+Y)^{-1}\) is given by \[ \f(t) = \begin{cases} 2t^{-2} -t^{-3} & \text{for \( \tfrac12 \le t \le 1\)} \\ t^{-3} & \text{for \(1\le t <\infty\)}\\ 0 & \text{otherwise}. \end{cases} \] Evaluate \(\E\Big(\dfrac1{X+Y}\Big)\,\).
  3. Find the cumulative distribution function of \(Y/X\) and use this result to find the probability density function of \(\dfrac X {X+Y}\). Write down \(\E\Big( \dfrac X {X+Y}\Big)\) and verify your result by integration.


Solution:

  1. \(\mathbb{P}(X + Y \leq c) \) is the area of the region of the unit square lying below the line \(x + y = c\). There are two non-trivial cases: \[\mathbb{P}(X + Y \leq c) = \begin{cases} 0 & \text{ if } c \leq 0 \\ \frac{c^2}{2} & \text{ if } 0 \leq c \leq 1 \\ 1- \frac{(2-c)^2}{2} & \text{ if } 1 \leq c \leq 2 \\ 1 & \text{ otherwise} \end{cases}\]
  2. \begin{align*} && \mathbb{P}((X + Y)^{-1} \leq t) &= 1- \mathbb{P}(X + Y \leq \frac1{t}) \\ \Rightarrow && f_{(X+Y)^{-1}}(t) &= 0 -\begin{cases} 0 & \text{ if } \frac1{t} \leq 0 \\ \frac{\d}{\d t}\frac{1}{2t^2} & \text{ if } \frac{1}{t} \leq 1 \\ \frac{\d}{\d t} \l 1- \frac{(2-\frac1t)^2}{2} \r & \text{ if } 1 \leq \frac{1}{t} \leq 2 \\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ (2-\frac1t)t^{-2} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ 2t^{-2}-t^{-3} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \end{align*} Therefore, \begin{align*} \E \Big(\dfrac1{X+Y}\Big) &= \int_{\frac12}^{\infty} t f_{(X+Y)^{-1}}(t) \, \d t \\ &= \int_{\frac12}^{1} t f_{(X+Y)^{-1}}(t) \, \d t + \int_{1}^{\infty} t f_{(X+Y)^{-1}}(t) \d t\\ &= \int_{\frac12}^{1} \l 2t^{-1} - t^{-2} \r \, \d t + \int_{1}^{\infty} t^{-2} \d t\\ &= \left [ 2 \ln (t) + t^{-1} \right]_{\frac12}^{1} + \left [ -t^{-1} \right ]_{1}^{\infty} \\ &= 1 + 2 \ln 2 -2 + 1 \\ &= 2 \ln 2 \end{align*}
  3. \begin{align*} &&\mathbb{P} \l \frac{Y}{X} \leq c \r &= \mathbb{P}( Y \leq c X) \\ &&&= \begin{cases} 0 & \text{if } c \leq 0 \\ \frac{c}{2} & \text{if } 0 \leq c \leq 1 \\ 1-\frac{1}{2c} & \text{if } 1 \leq c \end{cases} \\ \\ \Rightarrow && \mathbb{P} \l \frac{X}{X+Y} \leq t\r &= \mathbb{P} \l \frac{1}{1+\frac{Y}{X}} \leq t\r \\ &&&= \mathbb{P} \l \frac{1}{t} \leq 1+\frac{Y}{X}\r \\ &&&= \mathbb{P} \l \frac{1}{t} - 1\leq \frac{Y}{X}\r \\ &&&= 1- \mathbb{P} \l \frac{Y}{X} \leq \frac{1}{t} - 1\r \\ &&&= 1 - \begin{cases} 0 & \text{if } t \geq 1 \\ \frac{1}{2t} - \frac{1}{2} & \text{if } \frac12 \leq t \leq 1 \\ 1-\frac{t}{2-2t} & \text{if } 0 < t \leq \frac12 \end{cases} \\ && f_{\frac{X}{X+Y}}(t) &= \begin{cases} \frac{1}{2(1-t)^2} & \text{if } 0 \leq t \leq \frac12 \\ \frac{1}{2t^2} & \text{if } \frac12 \leq t \leq 1 \\ 0 & \text{otherwise} \end{cases} \\ \Rightarrow && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 t f(t) \d t \\ &&&= \int_0^{\frac12} \frac{t}{2(1-t)^2} \d t + \int_{\frac12}^1 \frac{1}{2t} \d t \\ &&&= \frac{1-\ln 2}{2} + \frac{\ln 2}{2} = \frac{1}{2} \\ \\ && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 \int_0^1 \frac{x}{x+y} \d y\d x \\ &&&= \int_0^1 \l x \ln (x+1) - x \ln x \r \d x \\ &&&= \left [\frac{x^2}2 \ln(x+1) - \frac{x^2}{2} \ln(x) \right]_0^1 -\int_0^1 \l \frac{x^2}{2(x+1)} - \frac{x}{2} \r \d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \frac{x^2-1+1}{2(x+1)}\d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \l \frac{x -1}{2} + \frac{1}{2(x+1)} \r \d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} + \frac{1}{4} - \frac{\ln 2}{2} \\ &&&= \frac{1}{2} \end{align*} Alternatively, note that \(1 = \mathbb{E} \l \frac{X+Y}{X+Y} \r = \mathbb{E} \l \frac{X}{X+Y} \r + \mathbb{E} \l \frac{Y}{X+Y} \r = 2 \mathbb{E} \l \frac{X}{X+Y} \r\) by symmetry, so the value \(\frac12\) follows as soon as the expectation is known to exist.
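Both expectations above can be checked numerically; a midpoint-quadrature sketch over the unit square (the grid size `m` is an arbitrary choice, and this is a check rather than part of the solution):

```python
from math import log

# Midpoint rule on an m x m grid over the unit square; the midpoints
# avoid the integrable singularity of 1/(x+y) at the origin.
m = 200
h = 1.0 / m
mids = [(i + 0.5) * h for i in range(m)]
e_inv = sum(h * h / (x + y) for x in mids for y in mids)       # E(1/(X+Y))
e_frac = sum(h * h * x / (x + y) for x in mids for y in mids)  # E(X/(X+Y))

assert abs(e_inv - 2 * log(2)) < 0.01   # 2 ln 2 ~ 1.3863
assert abs(e_frac - 0.5) < 0.001
```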

2014 Paper 2 Q13
D: 1600.0 B: 1469.5

A random number generator prints out a sequence of integers \(I_1, I_2, I_3, \dots\). Each integer is independently equally likely to be any one of \(1, 2, \dots, n\), where \(n\) is fixed. The random variable \(X\) takes the value \(r\), where \(I_r\) is the first integer which is a repeat of some earlier integer. Write down an expression for \(\mathbb{P}(X=4)\).

  1. Find an expression for \(\mathbb{P}(X=r)\), where \(2\le r\le n+1\). Hence show that, for any positive integer \(n\), \[ \frac 1n + \left(1-\frac1n\right) \frac 2 n + \left(1-\frac1n\right)\left(1-\frac2n\right) \frac3 n + \cdots \ = \ 1 \,. \]
  2. Write down an expression for \(\mathbb{E}(X)\). (You do not need to simplify it.)
  3. Write down an expression for \(\mathbb{P}(X\ge k)\).
  4. Show that, for any discrete random variable \(Y\) taking the values \(1, 2, \dots, N\), \[ \mathbb{E}(Y) = \sum_{k=1}^N \mathbb{P}(Y\ge k)\,. \] Hence show that, for any positive integer \(n\), \[ \left(1-\frac{1^2}n\right) + \left(1-\frac1n\right)\left(1-\frac{2^2}n\right) + \left(1-\frac1n\right)\left(1-\frac{2}n\right)\left(1-\frac{3^2}n\right) + \cdots \ = \ 0. \]


Solution: \begin{align*} && \mathbb{P}(X > 4) &= 1 \cdot \frac{n-1}{n} \cdot \frac{n-2}{n} \cdot \frac{n-3}{n} \\ && \mathbb{P}(X > 3) &= 1 \cdot \frac{n-1}{n} \cdot \frac{n-2}{n} \\ \Rightarrow && \mathbb{P}(X =4) &= \mathbb{P}(X > 3) - \mathbb{P}(X > 4) \\ &&&= \frac{(n-1)(n-2)}{n^2} \left (1 - \frac{n-3}{n} \right) \\ &&&= \frac{3(n-1)(n-2)}{n^3} \end{align*}

  1. Notice that \begin{align*} && \mathbb{P}(X > r) &= \frac{n-1}{n} \cdots \frac{n-r+1}{n} \\ \Rightarrow && \mathbb{P}(X = r) &= \frac{n-1}{n} \cdots \frac{n-r+2}{n} \left (1 - \frac{n-r+1}{n} \right) \\ &&&= \frac{(n-1)\cdots(n-r+2)(r-1)}{n^{r-1}} \\ &&&= \left (1 - \frac{1}n \right)\left (1 - \frac{2}{n} \right) \cdots \left (1 - \frac{r-2}{n} \right) \frac{r-1}{n} \\ \Rightarrow && 1 &= \sum \mathbb{P}(X = r) \\ &&&= \sum_{r=2}^{n+1} \mathbb{P}(X = r) \\ &&&= \frac 1n + \left(1-\frac1n\right) \frac 2 n + \left(1-\frac1n\right)\left(1-\frac2n\right) \frac3 n + \cdots \end{align*}
  2. \(\,\) \begin{align*} \mathbb{E}(X) &= \sum_{r=2}^{n+1} r\cdot\mathbb{P}(X = r) \\ &= \frac 2n + \left(1-\frac1n\right) \frac {2\cdot3} n + \left(1-\frac1n\right)\left(1-\frac2n\right) \frac{3\cdot4} n + \cdots \end{align*}
  3. \(\displaystyle \mathbb{P}(X \geq k) = \frac{n-1}{n} \cdots \frac{n-k+2}{n}\)
  4. \(\,\) \begin{align*} && \mathbb{E}(Y) &= \sum_{r=1}^N r \cdot \mathbb{P}(Y = r) \\ &&&= \sum_{r=1}^N \sum_{j=1}^r \mathbb{P}(Y = r) \\ &&&= \sum_{j=1}^N \sum_{r=j}^N \mathbb{P}(Y=r) \\ &&&= \sum_{j=1}^N \mathbb{P}(Y \geq j) \end{align*} Let \(P_k = \prod_{i=1}^{k-1}\left(1-\frac{i}{n}\right) = \left(1-\frac1n\right)\left(1-\frac2n\right) \cdots \left(1-\frac{k-1}{n}\right)\), with \(P_1 = 1\). \begin{align*} && \mathbb{E}(X) &= P_1 \frac{1 \cdot 2 }{n} + P_2 \cdot \frac{2 \cdot 3}{n} + \cdots + P_k \cdot \frac{k(k+1)}{n} + \cdots \\ && &= \sum_{k=1}^{n} \frac{k^2}{n}P_k + \sum_{k=1}^{n} \frac{k}{n}P_k \\ && \text{Using the identity } & \frac{k}{n}P_k = \frac{k}{n} \prod_{i=1}^{k-1} \left(1-\frac{i}{n}\right) = P_k - P_{k+1}: \\ && \sum_{k=1}^{n} \frac{k}{n}P_k &= (P_1 - P_2) + (P_2 - P_3) + \cdots + (P_n - P_{n+1}) \\ && &= P_1 - P_{n+1} = 1 - 0 = 1 \\ \\ \Rightarrow && \mathbb{E}(X) &= \sum_{k=1}^{n} \frac{k^2}{n}P_k + 1 \\ && &= \mathbb{P}(X \geq 1) + \mathbb{P}(X \geq 2) + \mathbb{P}(X \geq 3) + \cdots \\ && &= 1 + P_1 + P_2 + P_3 + \cdots \\ && &= 1 + \sum_{k=1}^{n} P_k \\ \\ \Rightarrow && 1 + \sum_{k=1}^{n} P_k &= \sum_{k=1}^{n} \frac{k^2}{n}P_k + 1 \\ \Rightarrow && \sum_{k=1}^{n} P_k &= \sum_{k=1}^{n} \frac{k^2}{n}P_k \\ \Rightarrow && 0 &= \sum_{k=1}^{n} P_k \left( 1 - \frac{k^2}{n} \right) \end{align*}
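The final identity can be confirmed exactly with rational arithmetic. A sketch (the helper name `identity_sum` is ours):

```python
from fractions import Fraction

def identity_sum(n):
    """Evaluate (1 - 1^2/n) + (1 - 1/n)(1 - 2^2/n) + ... exactly,
    where the k-th term is P_k (1 - k^2/n) with P_k = prod_{i<k}(1 - i/n).
    Terms beyond k = n vanish, since P_k then contains the factor (1 - n/n)."""
    total = Fraction(0)
    p_k = Fraction(1)                   # P_1 = 1
    for k in range(1, n + 1):
        total += p_k * (1 - Fraction(k * k, n))
        p_k *= 1 - Fraction(k, n)       # P_{k+1} = P_k (1 - k/n)
    return total

assert all(identity_sum(n) == 0 for n in range(1, 20))
```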

2009 Paper 3 Q13
D: 1700.0 B: 1488.4

  1. The point \(P\) lies on the circumference of a circle of unit radius and centre \(O\). The angle, \(\theta\), between \(OP\) and the positive \(x\)-axis is a random variable, uniformly distributed on the interval \(0\le\theta<2\pi\). The cartesian coordinates of \(P\) with respect to \(O\) are \((X,Y)\). Find the probability density function for \(X\), and calculate \(\var (X)\). Show that \(X\) and \(Y\) are uncorrelated and discuss briefly whether they are independent.
  2. The points \(P_i\) (\(i=1\), \(2\), \(\ldots\) , \(n\)) are chosen independently on the circumference of the circle, as in part (i), and have cartesian coordinates \((X_i, Y_i)\). The point \(\overline P\) has coordinates \((\overline X, \overline Y)\), where \(\overline X =\dfrac1n \sum\limits _{i=1}^n X_i\) and \(\overline Y =\dfrac1n \sum\limits _{i=1}^n Y_i\). Show that \(\overline X\) and \(\overline Y\) are uncorrelated. Show that, for large \(n\), \(\displaystyle \P\left(\vert \overline X \vert \le \sqrt{\frac2n}\right)\approx 0.95\,\).


Solution:

  1. \(X = \cos \theta\) where \(\theta \sim U(0, 2\pi)\). Noting that \(\mathbb{P}(X \geq t ) = \frac{1}{\pi}\cos^{-1} t\) for \(-1 \le t \le 1\), we get \(f_X(t) = \frac{1}{\pi} \frac{1}{\sqrt{1-t^2}}\). \begin{align*} && \E[X] &= 0 \tag{by symmetry} \\ && \E[X^2] &= \int_0^{2\pi} \cos^2 \theta \frac{1}{2 \pi} \d \theta \\ &&&= \frac{1}{2} \cdot 2\pi \cdot \frac{1}{2\pi} \\ &&&= \frac12 \\ \Rightarrow & &\var[X] &= \frac12 \\ \\ && \E[XY] &= \int_0^{2\pi} \cos \theta \sin \theta \frac{1}{2 \pi} \d \theta \\ &&&= \frac{1}{4\pi} \int_0^{2\pi} \sin 2\theta \d \theta \\ &&& =0 = \E[X]\E[Y] \end{align*} Since \(\E[XY] = \E[X]\E[Y]\), \(X\) and \(Y\) are uncorrelated. But they are clearly not independent, since given \(X\) there are only two possible values of \(Y\).
  2. \(\,\) \begin{align*} && \E \left [ \overline X \, \overline Y \right] &= \E \left [ \left ( \frac1n \sum_{i=1}^n X_i \right)\left ( \frac1n \sum_{i=1}^n Y_i\right) \right] \\ &&&= \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \E [X_i Y_j] \\ &&&= 0 = \E[\overline X] \E[\overline Y] \end{align*} (each \(\E[X_i Y_j]\) vanishes: for \(i = j\) by part (i), and for \(i \neq j\) by independence). Therefore \(\overline X\) and \(\overline Y\) are uncorrelated. Note that \(\E[X_i] = 0\) and \(\var[X_i] = \frac12\), so we can apply the central limit theorem to see that \(\overline X \approx N(0, \frac{1}{2n})\); in particular \begin{align*} && 0.95 &\approx \mathbb{P}(|Z| < 2) \\ &&&= \mathbb{P} \left ( \Big |\frac{\overline X}{\sqrt{\frac{1}{2n}}} \Big | < 2 \right ) \\ &&&= \mathbb{P}\left (|\overline X| < \sqrt{\frac{2}{n}} \right) \end{align*}
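A quick check that \(\sqrt{2/n}\) really is two standard deviations of \(N(0, \frac{1}{2n})\), for any \(n\) (a sketch; the value `n = 50` is an arbitrary illustration):

```python
from math import erf, sqrt

# Under the CLT approximation Xbar ~ N(0, 1/(2n)), the bound sqrt(2/n)
# is exactly 2 standard deviations, independently of n.
n = 50
z = sqrt(2 / n) / sqrt(1 / (2 * n))      # number of standard deviations
p = erf(z / sqrt(2))                     # P(|Z| <= z) for Z ~ N(0, 1)

assert abs(z - 2.0) < 1e-12
assert abs(p - 0.9545) < 1e-3            # the familiar two-sigma level
```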

2007 Paper 3 Q14
D: 1700.0 B: 1500.0

  1. My favourite dartboard is a disc of unit radius and centre \(O\). I never miss the board, and the probability of my hitting any given area of the dartboard is proportional to the area. Each throw is independent of any other throw. I throw a dart \(n\) times (where \(n>1\)). Find the expected area of the smallest circle, with centre \(O\), that encloses all the \(n\) holes made by my dart. Find also the expected area of the smallest circle, with centre \(O\), that encloses all the \((n-1)\) holes nearest to \(O\).
  2. My other dartboard is a square of side 2 units, with centre \(Q\). I never miss the board, and the probability of my hitting any given area of the dartboard is proportional to the area. Each throw is independent of any other throw. I throw a dart \(n\) times (where \(n>1\)). Find the expected area of the smallest square, with centre \(Q\), that encloses all the \(n\) holes made by my dart.
  3. Determine, without detailed calculations, whether the expected area of the smallest circle, with centre \(Q\), on my square dartboard that encloses all the \(n\) holes made by my darts is larger or smaller than that for my circular dartboard.


Solution:

  1. Firstly, we consider the probability that all darts lie within a distance \(s\) from the centre, ie \begin{align*} \mathbb{P}(\text{all darts within }s) &= \prod_{k=1}^n \mathbb{P}(\text{dart within }s) \\ &= \left ( \frac{\pi s^2}{\pi} \right)^n \\ &= s^{2n} \end{align*} Therefore the pdf is \(2ns^{2n-1}\), and the expected area is \(\int_{s=0}^1 \pi s^2 \cdot 2n s^{2n-1} \d s = 2n \pi \frac{1}{2n+2} = \frac{n}{n+1} \pi\). \begin{align*} \mathbb{P}(\text{n-1 within }s) &= \underbrace{s^{2n}}_{\text{all within }s} + \underbrace{ns^{2n-2}(1-s^2)}_{\text{all but 1 within }s}\\ &= ns^{2n-2}-(n-1)s^{2n} \end{align*} Therefore the pdf is \(n(2n-2)s^{2n-3} - 2n(n-1)s^{2n-1} = 2n(n-1)(s^{2n-3}-s^{2n-1})\) and the expected area is: \begin{align*} \int \pi s^2 \cdot2n(n-1)(s^{2n-3}-s^{2n-1})\d s &= 2n(n-1) \pi \left ( \frac{1}{2n} - \frac{1}{2n+2} \right) \\ &= 2n(n-1)\pi \cdot \frac{1}{2n(n+1)} \\ &= \frac{n-1}{n+1} \pi \end{align*}
  2. Now consider a square of side-length \(s\) (with \(0 \le s \le 2\)) centred at \(Q\): we must have \(\mathbb{P}(\text{all darts within square}) = \left ( \frac{s^2}{4} \right)^n\) and therefore the pdf is \(\dfrac{2n s^{2n-1}}{4^n}\). Therefore the expected area is \(\displaystyle \int_0^2 s^2 \cdot \frac{2n s^{2n-1}}{4^n} \d s = \frac{2n}{4^n} \cdot \frac{2^{2n+2}}{2n+2} = \frac{4n}{n+1}\)
  3. It is larger: the square dartboard contains the whole circular dartboard, and with positive probability some dart lands in a corner of the square outside the unit circle, forcing the enclosing circle (centre \(Q\)) to have radius greater than \(1\). Conditional on every dart landing inside the unit circle, the darts are uniform on the disc and the conditional expected area matches the circular dartboard; the corner cases can only increase it.
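A Monte Carlo check of the first answer in part 1 (a sketch, not part of the solution; the sample sizes and seed are arbitrary):

```python
import random
from math import pi

random.seed(1)

def mean_enclosing_area(n, trials=200_000):
    """Monte Carlo estimate of the expected area of the smallest centred
    circle enclosing n uniform darts on the unit disc.  If R is the
    distance of one dart from O then P(R <= r) = r^2, so R^2 ~ U(0, 1)
    and the area is pi * max(U_1, ..., U_n)."""
    total = 0.0
    for _ in range(trials):
        total += pi * max(random.random() for _ in range(n))
    return total / trials

n = 5
est = mean_enclosing_area(n)
assert abs(est - n * pi / (n + 1)) < 0.02   # expected area n*pi/(n+1)
```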

2006 Paper 3 Q13
D: 1700.0 B: 1530.6

Two points are chosen independently at random on the perimeter (including the diameter) of a semicircle of unit radius. The area of the triangle whose vertices are these two points and the midpoint of the diameter is denoted by the random variable \(A\). Show that the expected value of \(A\) is \((2+\pi)^{-1}\).


Solution: Split into cases by how many of the two points lie on the curved part of the perimeter. If neither does, both points lie on the diameter, so the three vertices are collinear and the area is \(0\). If exactly one does:

TikZ diagram
The area of the triangle is \(\frac12 |x| \sin \theta\), where \(x\) is the coordinate of the point on the diameter, with \(X \sim U[-1,1]\), and \(\theta \sim U(0, \pi)\) is the angle of the point on the arc. Therefore \begin{align*} \mathbb{E}(A|\text{one on diameter}) &= \int_{0}^\pi \frac{1}{\pi} \int_{-1}^1\frac{1}{2}\frac12 |x| \sin \theta \d x \d \theta \\ &= \frac{1}{2\pi}\frac12 \int_{0}^\pi \sin \theta \d \theta \cdot 2\int_{0}^1 x\d x \\ &=\frac{1}{2\pi}\cdot 2 \cdot \frac12 = \frac{1}{2\pi} \end{align*} If both are on the curved section:
TikZ diagram
Then the area is \(\frac12 \sin \theta\), where \(\theta = |\theta_1 - \theta_2|\) and \(\theta_i \sim U[0, \pi]\). Therefore \begin{align*} \mathbb{E}(A|\text{none on diameter}) &= \int_{0}^\pi\frac{1}{\pi} \int_{0}^\pi\frac{1}{\pi} \frac12 \sin |\theta_1 - \theta_2| \d \theta_1 \d \theta_2 \\ &= \frac{1}{\pi^2}\frac12 \int_{0}^\pi \left (\int_{0}^{\theta_2} \sin (\theta_2 - \theta_1) \d \theta_1-\int_{\theta_2}^{\pi} \sin (\theta_2 - \theta_1) \d \theta_1 \right)\d \theta_2 \\ &= \frac{1}{\pi^2}\frac12 \int_{0}^\pi \left [2\cos(\theta_2 - \theta_2)-\cos(\theta_2 - 0)-\cos(\theta_2 - \pi) \right]\d \theta_2 \\ &= \frac{1}{\pi} \end{align*} Therefore the expected area is: \begin{align*} \mathbb{E}(A ) &= \mathbb{E}(A|\text{one on diameter})\cdot \mathbb{P}(\text{one on diameter}) + \mathbb{E}(A|\text{none on diameter})\cdot \mathbb{P}(\text{none on diameter}) \\ &= \frac{1}{2\pi}\mathbb{P}(\text{one on diameter}) + \frac{1}{\pi}\cdot \mathbb{P}(\text{none on diameter}) \\ &= \frac{1}{2\pi} \cdot 2 \cdot \frac{\pi}{\pi + 2} \cdot \frac{2}{\pi + 2} + \frac1{\pi} \cdot \frac{\pi}{\pi + 2} \cdot \frac{\pi}{\pi+2} \\ &= \frac{2 + \pi}{(\pi+2)^2} \\ &= \frac{1}{\pi+2} \end{align*}
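The case analysis above can be checked end-to-end by simulation (a sketch, not part of the solution; the seed and trial count are arbitrary):

```python
import random
from math import pi, sin, cos

random.seed(2)

def random_perimeter_point():
    """Uniform point on the perimeter: the arc has length pi and the
    diameter length 2, so the arc is chosen with probability pi/(pi+2)."""
    if random.random() < pi / (pi + 2):
        t = random.uniform(0.0, pi)
        return cos(t), sin(t)              # on the curved part
    return random.uniform(-1.0, 1.0), 0.0  # on the diameter

def triangle_area():
    (x1, y1), (x2, y2) = random_perimeter_point(), random_perimeter_point()
    return 0.5 * abs(x1 * y2 - x2 * y1)    # cross product, third vertex at O

trials = 200_000
est = sum(triangle_area() for _ in range(trials)) / trials
assert abs(est - 1 / (pi + 2)) < 0.005     # E(A) = (2 + pi)^-1 ~ 0.1945
```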

2005 Paper 2 Q14
D: 1600.0 B: 1469.5

The probability density function \(\f(x)\) of the random variable \(X\) is given by $$\f(x) = k\left[{\phi}(x) + {\lambda}\g(x)\right]$$ where \({\phi}(x)\) is the probability density function of a normal variate with mean 0 and variance 1, \(\lambda \) is a positive constant, and \(\g(x)\) is a probability density function defined by \[ \g(x)= \begin{cases} 1/\lambda & \mbox{for \(0 \le x \le {\lambda}\)}\,;\\ 0& \mbox{otherwise} . \end{cases} \] Find \(\mu\), the mean of \(X\), in terms of \(\lambda\), and prove that \(\sigma\), the standard deviation of \(X\), satisfies $$\sigma^2 = \frac{\lambda^4 +4{\lambda}^3+12{\lambda}+12} {12(1 + \lambda )^2}\;.$$ In the case \(\lambda=2\):

  1. draw a sketch of the curve \(y=\f(x)\);
  2. express the cumulative distribution function of \(X\) in terms of \(\Phi(x)\), the cumulative distribution function corresponding to \(\phi(x)\);
  3. evaluate \(\P(0 < X < \mu+2\sigma)\), given that \(\Phi (\frac 23 + \frac23 \surd7)=0.9921\).


Solution: \begin{align*} && 1 &= \int_{-\infty}^{\infty} f(x) \d x \\ &&&= k[1 + \lambda] \\ \Rightarrow && k &= \frac{1}{1+\lambda} \\ \\ && \mu &= \int_{-\infty}^\infty x f(x) \d x \\ &&&= k \int_{-\infty}^\infty x \phi(x) \d x + k \lambda \int_{-\infty}^{\infty} x g(x) \d x \\ &&&= k \cdot 0 + k \lambda \cdot \frac{\lambda}{2} \\ &&&= \frac{\lambda^2}{2(1+\lambda)} \\ \\ && \E[X^2] &= \int_{-\infty}^\infty x^2 f(x) \d x \\ &&&= k \int_{-\infty}^\infty x^2 \phi(x) \d x + k \lambda \int_{-\infty}^{\infty} x^2 g(x) \d x \\ &&&= k \cdot 1 + k \lambda \int_0^{\lambda} \frac{x^2}{\lambda} \d x \\ &&&= k + \frac{k \lambda^3}{3} \\ &&&= \frac{3+\lambda^3}{3(1+\lambda)} \\ && \var[X] &= \frac{3+\lambda^3}{3(1+\lambda)} - \frac{\lambda^4}{4(1+\lambda)^2} \\ &&& = \frac{(3+\lambda^3)4(1+\lambda) - 3\lambda^4}{12(1+\lambda)^2} \\ &&&= \frac{\lambda^4+4\lambda^3+12\lambda + 12}{12(1+\lambda)^2} \end{align*}

  1. \(\,\)
    TikZ diagram
  2. \(\,\) \begin{align*} && \mathbb{P}(X \leq x) &= \int_{-\infty}^x f(t) \d t \\ &&&= \begin{cases} \frac13 \Phi(x) & \text{if } x < 0 \\ \frac13\Phi(x) + \frac13x & \text{if } 0 \leq x \leq 2 \\ \frac13 \Phi(x) + \frac23 & \text{if } 2 < x \end{cases} \end{align*} When \(\lambda = 2\), \(\mu = \frac{4}{6} = \frac23\), \(\sigma^2 = \frac{16+32+24+12}{12 \cdot 9} = \frac{7}{9}\), so \(\mu + 2 \sigma = \frac23 + \frac{2\sqrt7}{3}>2\). Therefore \begin{align*} && \P(0 < X < \mu + 2\sigma) &= \frac13 \Phi\left (\frac{2+2\sqrt{7}}{3} \right) + \frac23 - \frac13\Phi(0) \\ &&&= \tfrac13 \cdot 0.9921 +\tfrac23 - \tfrac16 \\ &&&= 0.8307 \end{align*}
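The final probability can be checked numerically using the exact normal CDF in place of the supplied value of \(\Phi\) (a sketch; `Phi` and `F` are our helper names):

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def F(x):
    """CDF of X for lambda = 2, so k = 1/3: weight 1/3 on N(0, 1)
    and weight 2/3 on U(0, 2)."""
    u = min(max(x, 0.0), 2.0) / 2.0     # CDF of U(0, 2)
    return Phi(x) / 3.0 + 2.0 * u / 3.0

mu, sigma = 2.0 / 3.0, sqrt(7.0) / 3.0
p = F(mu + 2.0 * sigma) - F(0.0)
assert abs(p - 0.8307) < 1e-3
```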

2001 Paper 3 Q14
D: 1700.0 B: 1484.0

A random variable \(X\) is distributed uniformly on \([\, 0\, , \, a\,]\). Show that the variance of \(X\) is \({1 \over 12} a^2\). A sample, \(X_1\) and \(X_2\), of two independent values of the random variable is drawn, and the variance \(V\) of the sample is determined. Show that \(V = {1 \over 4} \l X_1 -X_2 \r ^2\), and hence prove that \(2 V\) is an unbiased estimator of the variance of \(X\). Find an exact expression for the probability that the value of \(V\) is less than \({1 \over 12} a^2\) and estimate the value of this probability correct to one significant figure.


Solution: \begin{align*} && \E[X] &= \frac{a}{2}\tag{by symmetry} \\ &&\E[X^2] &= \int_0^a \frac{1}{a} x^2 \d x \\ &&&= \frac{a^3}{3a} = \frac{a^2}{3} \\ \Rightarrow && \var[X] &= \frac{a^2}{3} - \frac{a^2}{4} = \frac{a^2}{12} \\ \end{align*} \begin{align*} && V &=\frac{1}{2} \left ( \left ( X_1 - \frac{X_1+X_2}{2} \right )^2+\left ( X_2- \frac{X_1+X_2}{2} \right )^2 \right ) \\ &&&= \frac{1}{8} ((X_1 - X_2)^2 + (X_2 - X_1)^2 ) \\ &&&= \frac14 (X_1-X_2)^2 \\ \\ && \E[2V] &= \E \left [ \frac12 (X_1 - X_2)^2 \right] \\ &&&= \frac12 \E[X_1^2] - \E[X_1X_2] + \frac12 \E[X_2^2] \\ &&&= \frac{a^2}{3} - \frac{a^2}{4} = \frac{a^2}{12} \end{align*} Therefore \(2V\) is an unbiased estimator of the variance of \(X\).

TikZ diagram
We need \(V < \frac{1}{12}a^2\), ie \(|X_1 - X_2| < \frac{a}{\sqrt{3}}\). We are interested in the blue area, which is \(a^2 - a^2\left(1- \frac{1}{\sqrt{3}}\right)^2 = a^2 \left ( \frac{2}{\sqrt{3}} - \frac13 \right)\), so the probability is \(\frac{2\sqrt{3}-1}{3} \approx 0.8\).
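A Monte Carlo sketch of the final probability (not part of the solution; seed and trial count are arbitrary):

```python
import random

random.seed(3)

def trial(a=1.0):
    """One sample of V = (X1 - X2)^2 / 4; True if V < a^2/12."""
    x1, x2 = random.uniform(0.0, a), random.uniform(0.0, a)
    return 0.25 * (x1 - x2) ** 2 < a * a / 12.0

trials = 200_000
p_hat = sum(trial() for _ in range(trials)) / trials
p_exact = (2.0 * 3.0 ** 0.5 - 1.0) / 3.0     # (2*sqrt(3) - 1)/3 ~ 0.82
assert abs(p_hat - p_exact) < 0.005
```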

2000 Paper 2 Q14
D: 1600.0 B: 1484.0

The random variables \(X_1\), \(X_2\), \(\ldots\) , \(X_{2n+1}\) are independently and uniformly distributed on the interval \(0 \le x \le 1\). The random variable \(Y\) is defined to be the median of \(X_1\), \(X_2\), \(\ldots\) , \(X_{2n+1}\). Given that the probability density function of \(Y\) is \(\g(y)\), where \[ \mathrm{g}(y)=\begin{cases} ky^{n}(1-y)^{n} & \mbox{ if }0\leqslant y\leqslant1\\ 0 & \mbox{ otherwise} \end{cases} \] use the result $$ \int_0^1 {y^{r}}{{(1-y)}^{s}}\,\d y = \frac{r!s!}{(r+s+1)!} $$ to show that \(k={(2n+1)!}/{{(n!)}^2}\), and evaluate \(\E(Y)\) and \({\rm Var}\,(Y)\). Hence show that, for any given positive number \(d\), the inequality $$ {\P\left({\vert {Y - 1/2} \vert} < {d/{\sqrt {n}}} \right)} < {\P\left({\vert {{\bar X} - 1/2} \vert} < {d/{\sqrt {n}}} \right)} $$ holds provided \(n\) is large enough, where \({\bar X}\) is the mean of \(X_1\), \(X_2\), \(\ldots\) , \(X_{2n+1}\). [You may assume that \(Y\) and \(\bar X\) are normally distributed for large \(n\).]

1999 Paper 2 Q13
D: 1600.0 B: 1484.0

A stick is broken at a point, chosen at random, along its length. Find the probability that the ratio, \(R\), of the length of the shorter piece to the length of the longer piece is less than \(r\). Find the probability density function for \(R\), and calculate the mean and variance of \(R\).


Solution: Let \(X \sim U[0, \tfrac12]\) be the length of the shorter piece, so \(R = \frac{X}{1-X}\), and \begin{align*} && \mathbb{P}(R \leq r) &= \mathbb{P}(\tfrac{X}{1-X} \leq r) \\ &&&= \mathbb{P}(X \leq r - rX) \\ &&&= \mathbb{P}((1+r)X \leq r) \\ &&&= \mathbb{P}(X \leq \tfrac{r}{1+r} ) \\ &&&= \begin{cases} 0 & r < 0 \\ \frac{2r}{1+r} & 0 \leq r \leq 1 \\ 1 & r > 1 \end{cases} \\ \\ && f_R(r) &= \begin{cases} \frac{2}{(1+r)^2} & 0 \leq r \leq 1 \\ 0 & \text{otherwise} \end{cases} \end{align*} Let \(Y \sim U[\tfrac12, 1]\) be the length of the longer piece, then \(R = \frac{1-Y}{Y} = Y^{-1} - 1\) and \begin{align*} \E[R] &= \int_{\frac12}^1 (y^{-1}-1) 2 \d y \\ &= 2\left [\ln y - y \right]_{\frac12}^1 \\ &= 2\l (0 - 1)-(\ln \tfrac12 - \tfrac12) \r \\ &= 2\ln2 -1 \\ \\ \E[R^2] &= \int_{\frac12}^1 (y^{-1}-1)^2 2 \d y\\ &= 2\left [-y^{-1} -2\ln y + y \right]_{\frac12}^1 \\ &= 2 \l 0 - \l 2\ln 2 - \tfrac32 \r \r \\ &= 3-4\ln 2 \\ \var[R] &= 3 - 4 \ln 2 -(2\ln 2-1)^2 \\ &= 2 - 4(\ln 2)^2 \end{align*}
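The mean and variance can be checked by quadrature over the break point directly (a sketch; the grid size `m` is an arbitrary choice):

```python
from math import log

# Midpoint rule over the break point u ~ U(0, 1); the ratio of shorter
# to longer piece is r(u) = min(u, 1-u) / max(u, 1-u).
m = 100_000
h = 1.0 / m
e1 = e2 = 0.0
for i in range(m):
    u = (i + 0.5) * h
    r = min(u, 1.0 - u) / max(u, 1.0 - u)
    e1 += r * h
    e2 += r * r * h

mean, var = e1, e2 - e1 * e1
assert abs(mean - (2 * log(2) - 1)) < 1e-8      # E(R) = 2 ln 2 - 1
assert abs(var - (2 - 4 * log(2) ** 2)) < 1e-8  # Var(R) = 2 - 4 (ln 2)^2
```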

1998 Paper 3 Q14
D: 1700.0 B: 1500.0

A hostile naval power possesses a large, unknown number \(N\) of submarines. Interception of radio signals yields a small number \(n\) of their identification numbers \(X_i\) (\(i=1,2,...,n\)), which are taken to be independent and uniformly distributed over the continuous range from \(0\) to \(N\). Show that \(Z_1\) and \(Z_2\), defined by $$ Z_1 = {n+1\over n} {\max}\{X_1,X_2,...,X_n\} \hspace{0.3in} {\rm and} \hspace{0.3in} Z_2 = {2\over n} \sum_{i=1}^n X_i \;, $$ both have means equal to \(N\). Calculate the variance of \(Z_1\) and of \(Z_2\). Which estimator do you prefer, and why?