Problems

2025 Paper 3 Q12
D: 1500.0 B: 1484.0

  1. Show that, for any functions \(f\) and \(g\), and for any \(m \geq 0\), $$\sum_{r=1}^{m+1} f(r)\sum_{s=r-1}^m g(s) = \sum_{s=0}^m g(s)\sum_{r=1}^{s+1} f(r)$$
  2. The random variables \(X_0, X_1, X_2, \ldots\) are defined as follows:
    • \(X_0\) takes the value \(0\) with probability \(1\);
    • \(X_{n+1}\) takes the values \(0, 1, \ldots, X_n + 1\) with equal probability, for \(n = 0, 1, \ldots\)
    1. Write down \(E(X_1)\). Find \(P(X_2 = 0)\) and \(P(X_2 = 1)\) and show that \(P(X_2 = 2) = \frac{1}{6}\). Hence calculate \(E(X_2)\).
    2. For \(n \geq 1\), show that $$P(X_n = 0) = \sum_{s=0}^{n-1} \frac{P(X_{n-1} = s)}{s+2}$$ and find a similar expression for \(P(X_n = r)\), for \(r = 1, 2, \ldots, n\).
    3. Hence show that \(E(X_n) = \frac{1}{2}(1 + E(X_{n-1}))\). Find an expression for \(E(X_n)\) in terms of \(n\), for \(n = 1, 2, \ldots\)


Solution:

  1. \begin{align*} \sum_{r=1}^{m+1} \left (f(r) \sum_{s=r-1}^m g(s) \right) &= \sum_{r=1}^{m+1} \sum_{s=r-1}^m f(r)g(s) \\ &= \sum_{(r,s) \in \{(r,s) : 1 \leq r \leq m+1, 0 \leq s \leq m, s \geq r-1\}} f(r)g(s) \\ &= \sum_{(r,s) \in \{(r,s) : 0 \leq s \leq m, 1 \leq r \leq m+1, r \leq s+1\}} f(r)g(s) \\ &= \sum_{s=0}^m \sum_{r=1}^{s+1} f(r)g(s) \\ &= \sum_{s=0}^m \left ( g(s) \sum_{r=1}^{s+1} f(r) \right) \end{align*}
  2. \(X_1\) takes the values \(0, 1\) with equal probabilities (since \(X_0 = 0\)). Therefore \(\mathbb{E}(X_1) = \frac12\).
    1. \begin{align*} \mathbb{P}(X_2 = 0) &= \mathbb{P}(X_2 = 0 | X_1 = 0) \mathbb{P}(X_1 = 0) + \mathbb{P}(X_2 = 0 | X_1 = 1) \mathbb{P}(X_1 = 1) \\ &= \frac12 \cdot \frac12 + \frac13 \cdot \frac12 \\ &= \frac5{12} \\ \\ \mathbb{P}(X_2 = 1) &= \mathbb{P}(X_2 = 1 | X_1 = 0) \mathbb{P}(X_1 = 0) + \mathbb{P}(X_2 = 1 | X_1 = 1) \mathbb{P}(X_1 = 1) \\ &= \frac12 \cdot \frac12 + \frac13 \cdot \frac12 \\ &= \frac5{12} \\ \\ \mathbb{P}(X_2 = 2) &= 1 - \mathbb{P}(X_2 = 0) - \mathbb{P}(X_2 = 1) \\ &= 1 - \frac{10}{12} = \frac16 \\ \\ \mathbb{E}(X_2) &= 0 \cdot \frac{5}{12} + 1 \cdot \frac{5}{12} + 2\cdot \frac{1}{6} \\ &= \frac34 \end{align*}
    2. \begin{align*} \mathbb{P}(X_n = 0) &= \sum_{s=0}^{n-1} \mathbb{P}(X_n = 0 | X_{n-1} = s)\mathbb{P}(X_{n-1} = s) \\ &= \sum_{s=0}^{n-1} \frac{1}{s+2}\mathbb{P}(X_{n-1} = s) \\ \end{align*} as required. (Here \(\mathbb{P}(X_n = 0 | X_{n-1} = s) = \frac{1}{s+2}\) since if \(X_{n-1} = s\) then \(X_n\) takes the \(s+2\) values \(0, 1, \ldots, s + 1\) with equal probability.) Similarly, for \(r = 1, 2, \ldots, n\), \begin{align*} \mathbb{P}(X_n = r) &= \sum_{s=0}^{n-1} \mathbb{P}(X_n = r | X_{n-1} = s)\mathbb{P}(X_{n-1} = s) \\ &= \sum_{s=r-1}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \end{align*} since \(X_n = r\) is only possible if \(X_{n-1} \geq r-1\).
    3. \begin{align*} \mathbb{E}(X_n) &= \sum_{r=1}^{n} r \cdot \mathbb{P}(X_n = r) \\ &= \sum_{r=1}^{n} r \cdot \sum_{s=r-1}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \\ &= \sum_{s=0}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \sum_{r=1}^{s+1} r \\ &= \sum_{s=0}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \frac{(s+1)(s+2)}{2} \\ &= \frac12 \sum_{s=0}^{n-1} (s+1)\mathbb{P}(X_{n-1}=s) \\ &= \frac12 \sum_{s=0}^{n-1} s\mathbb{P}(X_{n-1}=s) + \frac12 \sum_{s=0}^{n-1} \mathbb{P}(X_{n-1}=s) \\\\ &= \frac12 \left ( \mathbb{E}(X_{n-1}) + 1 \right) \end{align*} where the sums are swapped using part 1 with \(f(r) = r\), \(g(s) = \frac{\mathbb{P}(X_{n-1}=s)}{s+2}\) and \(m = n-1\). Conjecture that \(\mathbb{E}(X_n) = 1-2^{-n}\): this matches for \(n = 0, 1, 2\), and since \(\frac12\left((1 - 2^{-n}) + 1\right) = 1-2^{-(n+1)}\) it is consistent with the recursive formula. Therefore, by induction, \(\mathbb{E}(X_n) = 1- 2^{-n}\).
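
As a quick sanity check (not part of the STEP answer), the Python sketch below simulates the chain and compares the sample mean of \(X_n\) with \(1 - 2^{-n}\); the choice of \(n\) and the trial count are arbitrary.

```python
# Monte Carlo check of E(X_n) = 1 - 2^(-n); a sketch, not part of the solution.
import random

def sample_chain(n):
    x = 0  # X_0 = 0 with probability 1
    for _ in range(n):
        x = random.randint(0, x + 1)  # X_{k+1} uniform on {0, 1, ..., X_k + 1}
    return x

n, trials = 6, 200_000
mean = sum(sample_chain(n) for _ in range(trials)) / trials
print(mean, 1 - 2 ** -n)  # should agree to roughly two decimal places
```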

2024 Paper 3 Q11
D: 1500.0 B: 1500.0

In this question, you may use without proof the results \[\sum_{r=0}^{n} \binom{n}{r} = 2^n \quad \text{and} \quad \sum_{r=0}^{n} r\binom{n}{r} = n\,2^{n-1}.\]

  1. Show that \[r\binom{2n}{r} = (2n+1-r)\binom{2n}{2n+1-r}\] for \(1 \leqslant r \leqslant 2n\). Hence show that \[\sum_{r=0}^{2n} r\binom{2n}{r} = 2\sum_{r=n+1}^{2n} r\binom{2n}{r}.\]
  2. A fair coin is tossed \(2n\) times. The value of the random variable \(X\) is whichever is the larger of the number of heads and the number of tails shown. If \(n\) heads and \(n\) tails are shown, then \(X = n\). Show that \[\mathrm{E}(X) = n\left(1 + \frac{1}{2^{2n}}\binom{2n}{n}\right).\]
  3. Show that \(\dfrac{1}{2^{2n}}\dbinom{2n}{n}\) decreases as \(n\) increases.
  4. In a game, you choose a value of \(n\) and pay \(\pounds n\); then a fair coin is tossed \(2n\) times. You win an amount in pounds equal to the larger of the number of heads and the number of tails shown. If \(n\) heads and \(n\) tails are shown, then you win \(\pounds n\). How should you choose \(n\) to maximise your expected winnings per pound paid?
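
No solution is transcribed for this problem, but the key expectation in part 2 can be checked by exact enumeration. The sketch below (with an arbitrary range of \(n\)) evaluates \(\mathrm{E}(X)\) directly from the Binomial\((2n, \frac12)\) distribution of the number of heads and compares it with the stated closed form.

```python
# Exact check of E(X) = n(1 + C(2n, n)/2^(2n)); a verification sketch only.
from math import comb

def expected_max(n):
    total = 0.0
    for h in range(2 * n + 1):                # h heads in 2n tosses
        p = comb(2 * n, h) / 2 ** (2 * n)     # P(h heads)
        total += max(h, 2 * n - h) * p        # X is the larger count
    return total

for n in range(1, 6):
    print(n, expected_max(n), n * (1 + comb(2 * n, n) / 2 ** (2 * n)))
```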

2023 Paper 3 Q11
D: 1500.0 B: 1500.0

Show that \[\sum_{k=1}^{\infty} \frac{k+1}{k!}\, x^k = (x+1)\mathrm{e}^x - 1\,.\] In the remainder of this question, \(n\) is a fixed positive integer.

  1. Random variable \(Y\) has a Poisson distribution with mean \(n\). One observation of \(Y\) is taken. Random variable \(D\) is defined as follows. If the observed value of \(Y\) is zero then \(D = 0\). If the observed value of \(Y\) is \(k\), where \(k \geqslant 1\), then a fair \(k\)-sided die (with sides numbered \(1\) to \(k\)) is rolled once and \(D\) is the number shown on the die.
    1. Write down \(\mathrm{P}(D = 0)\).
    2. Show, from the definition of the expectation of a random variable, that \[\mathrm{E}(D) = \sum_{d=1}^{\infty} \left[ d \sum_{k=d}^{\infty} \left( \frac{1}{k} \cdot \frac{n^k}{k!}\, \mathrm{e}^{-n} \right) \right].\] Show further that \[\mathrm{E}(D) = \sum_{k=1}^{\infty} \left( \frac{1}{k} \cdot \frac{n^k}{k!}\, \mathrm{e}^{-n} \sum_{d=1}^{k} d \right).\]
    3. Show that \(\mathrm{E}(D) = \frac{1}{2}(n + 1 - \mathrm{e}^{-n})\).
  2. Random variables \(X_1, X_2, \ldots, X_n\) all have Poisson distributions. For each \(k \in \{1, 2, \ldots, n\}\), the mean of \(X_k\) is \(k\). A fair \(n\)-sided die, with sides numbered \(1\) to \(n\), is rolled. When \(k\) is the number shown, one observation of \(X_k\) is recorded. Let \(Z\) be the number recorded.
    1. Find \(\mathrm{P}(Z = 0)\).
    2. Show that \(\mathrm{E}(Z) > \mathrm{E}(D)\).
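
No solution is transcribed here, but the result of part 1(c) is easy to check numerically: the sketch below evaluates the derived sum for \(\mathrm{E}(D)\), truncated at an arbitrary cutoff \(K\), and compares it with \(\frac12(n + 1 - \mathrm{e}^{-n})\).

```python
# Numerical check of E(D) = (n + 1 - e^(-n))/2 via the derived sum; a sketch,
# with an arbitrary truncation point K.
from math import exp, factorial

def expected_D(n, K=200):
    # E(D) = sum over k >= 1 of (1/k) * P(Y = k) * (1 + 2 + ... + k)
    return sum((1 / k) * (n ** k / factorial(k)) * exp(-n) * k * (k + 1) / 2
               for k in range(1, K))

for n in (1, 2, 5):
    print(n, expected_D(n), (n + 1 - exp(-n)) / 2)
```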

2022 Paper 2 Q12
D: 1500.0 B: 1500.0

The random variable \(X\) has probability density function \[\mathrm{f}(x) = \begin{cases} kx^n(1-x) & 0 \leqslant x \leqslant 1\,,\\ 0 & \text{otherwise}\,,\end{cases}\] where \(n\) is an integer greater than 1.

  1. Show that \(k = (n+1)(n+2)\) and find \(\mu\), where \(\mu = \mathrm{E}(X)\).
  2. Show that \(\mu\) is less than the median of \(X\) if \[6 - \frac{8}{n+3} < \left(1 + \frac{2}{n+1}\right)^{n+1}.\] By considering the first four terms of the expansion of the right-hand side of this inequality, or otherwise, show that the median of \(X\) is greater than \(\mu\).
  3. You are given that, for positive \(x\), \(\left(1 + \dfrac{1}{x}\right)^{x+1}\) is a decreasing function of \(x\). Show that the mode of \(X\) is greater than its median.


Solution:

  1. \(\,\) \begin{align*} && 1 &= \int_0^1 kx^n(1-x) \d x \\ &&&= k\left [\frac{x^{n+1}}{n+1} - \frac{x^{n+2}}{n+2} \right]_0^1 \\ &&&= \frac{k}{(n+1)(n+2)} \\ \Rightarrow && k &= (n+1)(n+2) \\ \\ && \E[X] &= \int_0^1 kx^{n+1}(1-x) \d x \\ &&&= k\left [ \frac{x^{n+2}}{n+2} - \frac{x^{n+3}}{n+3} \right]_0^1 \\ &&&= \frac{(n+1)(n+2)}{(n+2)(n+3)} \\ &&&= \frac{n+1}{n+3} \end{align*}
  2. \(\mu\) is less than the median exactly when \begin{align*} && \frac12 &> \int_0^{\mu} kx^{n}(1-x) \d x \\ &&&= k\left [\frac{x^{n+1}}{n+1} - \frac{x^{n+2}}{n+2} \right]_0^\mu \\ &&&= \mu^{n+1}\left ((n+2) - (n+1) \frac{n+1}{n+3} \right) \\ \Rightarrow && \left ( 1 + \frac{2}{n+1} \right)^{n+1} &> 2\, \frac{(n+2)(n+3) - (n+1)^2}{n+3} \\ &&&= \frac{6n+10}{n+3} = 6 - \frac{8}{n+3} \end{align*} Since every term of the binomial expansion is positive, the first four terms give \begin{align*} \left ( 1 + \frac{2}{n+1} \right)^{n+1} &\geq 1 + (n+1) \frac{2}{n+1} + \frac{(n+1)n}{2} \frac{4}{(n+1)^2} + \frac{(n+1)n(n-1)}{6} \frac{8}{(n+1)^3} \\ &= 1 + 2 + \frac{2n}{n+1} + \frac{4n(n-1)}{3(n+1)^2} \\ &= 5 - \frac{2}{n+1} + \frac{4\left((n+1)^2-3(n+1)+2\right)}{3(n+1)^2} \\ &= 6 + \frac13 - \frac{6}{n+1} + \frac{8}{3(n+1)^2} \end{align*} and \[ \left(6 + \frac13 - \frac{6}{n+1} + \frac{8}{3(n+1)^2}\right) - \left(6 - \frac{8}{n+3}\right) = \frac{n^3+11n^2-9n-3}{3(n+1)^2(n+3)} > 0 \quad \text{for } n \geq 2, \] so the inequality holds and the median of \(X\) is greater than \(\mu\), as required.
  3. To find the mode, we maximise \(x^n(1-x)\), which occurs where \(nx^{n-1}(1-x) - x^n = x^{n-1}(n-(n+1)x) = 0\), i.e. at \(x = \frac{n}{n+1}\). Therefore we consider: \begin{align*} && \mathbb{P}(X \leq \tfrac{n}{n+1}) &= \int_0^{n/(n+1)} k x^n(1-x) \d x \\ &&&= \frac{n^{n+1}}{(n+1)^{n+1}} \left ( (n+2) - (n+1) \frac{n}{n+1} \right) \\ &&&= 2\frac{n^{n+1}}{(n+1)^{n+1}} \\ &&&= 2 \left ( \frac{1}{1+\frac1n} \right)^{n+1} \\ &&&> 2 \cdot \frac{1}{(1 + \frac11)^2} = \frac12 \end{align*} where the last step uses the given fact that \(\left(1 + \frac{1}{x}\right)^{x+1}\) is decreasing, so \(\left(1 + \frac1n\right)^{n+1} < \left(1 + \frac11\right)^{2} = 4\) for \(n > 1\). Hence the median lies below the mode, as required.
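
As a numerical cross-check of parts 2 and 3 (not part of the written argument), the sketch below finds the median by bisection on the closed-form CDF and confirms mean < median < mode for a few arbitrary values of \(n\).

```python
# Sanity check that mean < median < mode for f(x) = k x^n (1 - x) on [0, 1].
def check(n):
    k = (n + 1) * (n + 2)

    def cdf(x):  # integral of k t^n (1 - t) from 0 to x
        return k * (x ** (n + 1) / (n + 1) - x ** (n + 2) / (n + 2))

    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection for the median
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < 0.5 else (lo, mid)
    mean, median, mode = (n + 1) / (n + 3), lo, n / (n + 1)
    print(n, mean < median < mode, round(mean, 4), round(median, 4), round(mode, 4))

for n in (2, 3, 10, 50):
    check(n)
```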

2021 Paper 3 Q11
D: 1500.0 B: 1500.0

The continuous random variable \(X\) has probability density function \[ f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{for } x \geqslant 0, \\ 0 & \text{otherwise,} \end{cases} \] where \(\lambda\) is a positive constant. The random variable \(Y\) is the greatest integer less than or equal to \(X\), and \(Z = X - Y\).

  1. Show that, for any non-negative integer \(n\), \[ \mathrm{P}(Y = n) = (1 - e^{-\lambda})\,e^{-n\lambda}. \]
  2. Show that \[ \mathrm{P}(Z < z) = \frac{1 - e^{-\lambda z}}{1 - e^{-\lambda}} \qquad \text{for } 0 \leqslant z \leqslant 1. \]
  3. Evaluate \(\mathrm{E}(Z)\).
  4. Obtain an expression for \[ \mathrm{P}(Y = n \text{ and } z_1 < Z < z_2), \] where \(0 \leqslant z_1 < z_2 \leqslant 1\) and \(n\) is a non-negative integer. Determine whether \(Y\) and \(Z\) are independent.


Solution:

  1. \(\,\) \begin{align*} && \mathbb{P}(Y = n) &= \mathbb{P}(X \in [n, n+1)) \\ &&&= \int_n^{n+1} \lambda e^{-\lambda x} \d x \\ &&&= \left [-e^{-\lambda x} \right]_n^{n+1} \\ &&&= e^{-\lambda n} - e^{-\lambda(n+1)} \\ &&&= e^{-\lambda n}(1- e^{-\lambda}) \end{align*}
  2. \(\,\) \begin{align*} && \mathbb{P}(Z < z) &= \sum_{n=0}^{\infty} \mathbb{P}(X \in [n, n+z)) \\ &&&= \sum_{n=0}^{\infty} \int_{n}^{n+z} \lambda e^{-\lambda x} \d x \\ &&&= \sum_{n=0}^{\infty} [-e^{-\lambda x}]_{n}^{n+z} \\ &&&= \sum_{n=0}^{\infty} (1-e^{-\lambda z})e^{-\lambda n} \\ &&&= \frac{1-e^{-\lambda z}}{1-e^{-\lambda}} \end{align*}
  3. Given the cdf of \(Z\), we see that \(f_Z(z) = \frac{\lambda e^{-\lambda z}}{1-e^{-\lambda}}\) so \begin{align*} && \E[Z] &= \int_0^1 z \frac{\lambda e^{-\lambda z}}{1-e^{-\lambda}} \d z \\ &&&= \frac{\lambda}{1-e^{-\lambda}} \int_0^1 ze^{-\lambda z} \d z \\ &&&= \frac{\lambda}{1-e^{-\lambda}} \left ( \left [-\frac{1}{\lambda} ze^{-\lambda z} \right]_0^1+\int_0^1 \frac{1}{\lambda} e^{-\lambda z} \d z \right) \\ &&&= \frac{\lambda}{1-e^{-\lambda}} \left ( -\frac{e^{-\lambda}}{\lambda} + \frac{1-e^{-\lambda}}{\lambda^2} \right) \\ &&&= \frac{1-e^{-\lambda}(1+\lambda)}{\lambda (1-e^{-\lambda})} \end{align*}
  4. \(\,\) \begin{align*} && \mathbb{P}(Y = n \text{ and }z_1 < Z < z_2)&= \mathbb{P}(X \in (n+z_1, n+z_2) ) \\ &&&= \int_{n+z_1}^{n+z_2} \lambda e^{-\lambda x} \d x \\ &&&= e^{-n\lambda}(e^{-\lambda z_1} - e^{-\lambda z_2}) \end{align*} Note that \(\mathbb{P}(z_1 < Z < z_2) = \mathbb{P}( Z < z_2) -\mathbb{P}(Z< z_1) =\frac{e^{-\lambda z_1} - e^{-\lambda z_2}}{1-e^{-\lambda}}\) Therefore \begin{align*} && \mathbb{P}(Y = n \text{ and }z_1 < Z < z_2) &= e^{-n\lambda}(e^{-\lambda z_1} - e^{-\lambda z_2}) \\ &&&= e^{-\lambda n}(1-e^{-\lambda}) \frac{e^{-\lambda z_1} - e^{-\lambda z_2}}{1-e^{-\lambda}} \\ &&&= \mathbb{P}(Y=n) \mathbb{P}(z_1 < Z < z_2) \end{align*} So they are independent, which is to be expected from the memorylessness property of the exponential distribution.
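
A quick simulation (not part of the answer; \(\lambda\) and the trial count are arbitrary) confirms the expression for \(\mathrm{E}(Z)\) found in part 3:

```python
# Monte Carlo check of E(Z) = (1 - e^(-lam)(1 + lam)) / (lam (1 - e^(-lam))).
import random
from math import exp, floor

lam, trials = 1.3, 400_000
total = 0.0
for _ in range(trials):
    x = random.expovariate(lam)   # X ~ Exp(lam)
    total += x - floor(x)         # Z = X - Y, the fractional part of X
print(total / trials)
print((1 - exp(-lam) * (1 + lam)) / (lam * (1 - exp(-lam))))
```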

2020 Paper 3 Q11
D: 1500.0 B: 1500.0

The continuous random variable \(X\) is uniformly distributed on \([a,b]\) where \(0 < a < b\).

  1. Let \(\mathrm{f}\) be a function defined for all \(x \in [a,b]\)
    • with \(\mathrm{f}(a) = b\) and \(\mathrm{f}(b) = a\),
    • which is strictly decreasing on \([a,b]\),
    • for which \(\mathrm{f}(x) = \mathrm{f}^{-1}(x)\) for all \(x \in [a,b]\).
    The random variable \(Y\) is defined by \(Y = \mathrm{f}(X)\). Show that \[ \mathrm{P}(Y \leqslant y) = \frac{b - \mathrm{f}(y)}{b - a} \quad \text{for } y \in [a,b]. \] Find the probability density function for \(Y\) and hence show that \[ \mathrm{E}(Y^2) = -ab + \int_a^b \frac{2x\,\mathrm{f}(x)}{b-a} \; \mathrm{d}x. \]
  2. The random variable \(Z\) is defined by \(\dfrac{1}{Z} + \dfrac{1}{X} = \dfrac{1}{c}\) where \(\dfrac{1}{c} = \dfrac{1}{a} + \dfrac{1}{b}\). By finding the variance of \(Z\), show that \[ \ln\left(\frac{b-c}{a-c}\right) < \frac{b-a}{c}. \]
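
No solution is transcribed for this problem, but the target inequality of part 2 can be spot-checked numerically. The sketch below draws arbitrary values \(0 < a < b\), forms \(c\) from \(\frac1c = \frac1a + \frac1b\) (note \(c < a\), so both sides are defined), and tests the inequality.

```python
# Spot check of ln((b - c)/(a - c)) < (b - a)/c; purely illustrative.
import random
from math import log

for _ in range(5):
    a = random.uniform(0.1, 5.0)
    b = a + random.uniform(0.1, 5.0)
    c = 1 / (1 / a + 1 / b)
    print(log((b - c) / (a - c)) < (b - a) / c)   # expect True every time
```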

2019 Paper 2 Q12
D: 1500.0 B: 1500.0

The random variable \(X\) has the probability density function on the interval \([0, 1]\): $$f(x) = \begin{cases} nx^{n-1} & 0 \leq x \leq 1, \\ 0 & \text{elsewhere}, \end{cases}$$ where \(n\) is an integer greater than 1.

  1. Let \(\mu = E(X)\). Find an expression for \(\mu\) in terms of \(n\), and show that the variance, \(\sigma^2\), of \(X\) is given by $$\sigma^2 = \frac{n}{(n + 1)^2(n + 2)}.$$
  2. In the case \(n = 2\), show without using decimal approximations that the interquartile range is less than \(2\sigma\).
  3. Write down the first three terms and the \((k + 1)\)th term (where \(0 \leq k \leq n\)) of the binomial expansion of \((1 + x)^n\) in ascending powers of \(x\). By setting \(x = \frac{1}{n}\), show that \(\mu\) is less than the median and greater than the lower quartile. Note: You may assume that $$1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots < 4.$$


Solution:

  1. \(\,\) \begin{align*} && \mu &= \E[X] \\ &&&= \int_0^1 x f(x) \d x \\ &&&= \int_0^1 nx^n \d x \\ &&&= \frac{n}{n+1} \\ \\ && \var[X] &= \sigma^2 \\ &&&= \E[X^2] - \mu^2 \\ &&&= \int_0^1 x^2 f(x) \d x - \mu^2 \\ &&&= \int_0^1 nx^{n+1} \d x - \mu^2 \\ &&&= \frac{n}{n+2} - \frac{n^2}{(n+1)^2} \\ &&&= \frac{n(n+1)^2 - n^2(n+2)}{(n+1)^2(n+2)} \\ &&&= \frac{n}{(n+1)^2(n+2)} \end{align*}
  2. \(\,\) \begin{align*} && \frac14 &= \int_0^{Q_1} 2x \d x \\ &&&= Q_1^2 \\ \Rightarrow && Q_1 &= \frac12 \\ && \frac34 &= \int_0^{Q_3} 2x \d x \\ &&&= Q_3^2 \\ \Rightarrow && Q_3 &= \frac{\sqrt{3}}2 \\ \\ \Rightarrow && IQR &= Q_3 - Q_1 = \frac{\sqrt{3}-1}{2} \\ && 2 \sigma &= 2\sqrt{\frac{2}{3^2 \cdot 4}} \\ &&&= \frac{\sqrt{2}}{3} \\ \\ && 2\sigma - IQR &= \frac{\sqrt{2}}{3} - \frac{\sqrt{3}-1}{2} \\ &&&= \frac{2\sqrt{2}-3\sqrt{3}+3}{6} \\ && (3+2\sqrt{2})^2 &= 17+12\sqrt{2} > 29 \\ && (3\sqrt{3})^2 &= 27 \end{align*} Since \(17+12\sqrt{2} > 29 > 27\), we have \(3+2\sqrt{2} > 3\sqrt{3}\), so \(2\sqrt{2}-3\sqrt{3}+3 > 0\). Therefore \(2\sigma > IQR\), without using decimal approximations.
  3. \[ (1+x)^n = 1 + nx + \frac{n(n-1)}2 x^2 + \cdots + \binom{n}{k} x^k+ \cdots \] \begin{align*} && Q_1^{-n} &= 4 \\ && Q_2^{-n} &= 2\\ && \mu &=\frac{n}{n+1} \\ \Rightarrow && \mu^{-n} &= \left (1 + \frac1n \right)^n\\ &&&\geq 1 + n \frac1n + \cdots > 2 \\ \Rightarrow && \mu &< Q_2 \\ \\ && \mu^{-n} &= \left (1 + \frac1n \right)^n\\ &&&= 1 + n \frac1n + \frac{n(n-1)}{2!} \frac{1}{n^2} + \cdots + \frac{n(n-1) \cdots (n-k+1)}{k!} \frac{1}{n^k} + \cdots \\ &&&= 1 + 1 + \left (1 - \frac1n \right ) \frac1{2!} + \cdots + \left (1 - \frac1n \right)\cdot\left (1 - \frac2n \right) \cdots \left (1 - \frac{k-1}n \right) \frac{1}{k!} + \cdots \\ &&&< 1 + 1 + \frac1{2!} + \cdots + \frac1{k!} \\ &&&< 4 \\ \Rightarrow && \mu &> Q_1 \end{align*}
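
A short numerical check of parts 2 and 3 (a sketch that simply re-evaluates the closed forms found above for \(n = 2\)):

```python
# For n = 2: check IQR < 2 sigma and Q1 < mu < median; the CDF is x^n on [0, 1].
from math import sqrt

n = 2
mu = n / (n + 1)
sigma = sqrt(n / ((n + 1) ** 2 * (n + 2)))
q1, q2, q3 = 0.25 ** (1 / n), 0.5 ** (1 / n), 0.75 ** (1 / n)
print(q3 - q1 < 2 * sigma)   # interquartile range vs 2 sigma -> True
print(q1 < mu < q2)          # mu between lower quartile and median -> True
```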

2018 Paper 1 Q11
D: 1500.0 B: 1513.7

A bag contains three coins. The probabilities of their showing heads when tossed are \(p_1\), \(p_2\) and \(p_3\).

  1. A coin is taken at random from the bag and tossed. What is the probability that it shows a head?
  2. A coin is taken at random from the bag (containing three coins) and tossed; the coin is returned to the bag and again a coin is taken at random from the bag and tossed. Let \(N_1\) be the random variable whose value is the number of heads shown on the two tosses. Find the expectation of \(N_1\) in terms of \(p\), where \(p = \frac{1}{3}(p_1+p_2+p_3)\,\), and show that \(\var(N_1) =2p(1-p)\,\).
  3. Two of the coins are taken at random from the bag (containing three coins) and tossed. Let \(N_2\) be the random variable whose value is the number of heads showing on the two coins. Find \(\E(N_2)\) and \(\var(N_2)\).
  4. Show that \(\var(N_2)\le \var(N_1)\), with equality if and only if \(p_1=p_2=p_3\,\).


Solution:

  1. \(\mathbb{P}(\text{head}) = \mathbb{P}(\text{head}\mid\text{coin 1})\mathbb{P}(\text{coin 1}) + \mathbb{P}(\text{head}\mid\text{coin 2})\mathbb{P}(\text{coin 2})+\mathbb{P}(\text{head}\mid\text{coin 3})\mathbb{P}(\text{coin 3}) = \frac13(p_1+p_2+p_3)\)
  2. \(N_1 = X_1 + X_2\) where \(X_i \sim Bernoulli(p)\), therefore \(\mathbb{E}(N_1) = 2p\) and \(\textrm{Var}(N_1) = \textrm{Var}(X_1)+ \textrm{Var}(X_2) = p(1-p)+p(1-p) = 2p(1-p)\)
  3. Let \(Y_i\) be the indicator for the \(i\)th coin is heads. Then \(\mathbb{E}(Y_i) = p\) and so \(\mathbb{E}(N_2) = 2p\). \begin{align*} && \textrm{Var}(N_2) &= \mathbb{E}(N_2^2) - [\mathbb{E}(N_2)]^2\\ &&&= 2^2 \cdot \left (\frac13 \left (p_1p_2+p_2p_3+p_3p_1 \right) \right) + 1 \cdot \left (\frac13 \left (p_1 (1-p_2) + (1-p_1)p_2 + p_2(1-p_3) +(1-p_2)p_3 + p_3(1-p_1) + (1-p_3)p_1 \right) \right) - [\mathbb{E}(N_2)]^2 \\ &&&= \frac43\left (p_1p_2+p_2p_3+p_3p_1 \right) + \frac13 \left ( 2(p_1+p_2+p_3) - 2(p_1p_2+p_2p_3+p_3p_1)\right)-[\mathbb{E}(N_2)]^2 \\ &&&= \frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) + \frac23 \left ( p_1+p_2+p_3 \right)-[\mathbb{E}(N_2)]^2\\ &&&= \frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) + \frac23 \left ( p_1+p_2+p_3 \right)-\left[\frac23(p_1+p_2+p_3)\right]^2\\ &&&= \frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) +2p(1-2p)\\ \end{align*}
  4. \(\,\) \begin{align*} && \textrm{Var}(N_1) - \textrm{Var}(N_2) &= 2p(1-p) - \left (\frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) +2p(1-2p) \right) \\ &&&= 2p^2-\frac23\left (p_1p_2+p_2p_3+p_3p_1 \right) \\ &&&= \frac23 \left ( \frac13(p_1+p_2+p_3)^2 -\left (p_1p_2+p_2p_3+p_3p_1 \right)\right)\\ &&&= \frac29 \left (p_1^2+p_2^2+p_3^2 -(p_1p_2+p_2p_3+p_3p_1) \right)\\ &&&= \frac19 \left ((p_1-p_2)^2+(p_2-p_3)^2+(p_3-p_1)^2 \right) &\geq 0 \end{align*} with equality iff \(p_1 = p_2 = p_3\)
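
A simulation sketch of the two sampling schemes (with an arbitrary choice of \(p_1, p_2, p_3\) and trial count) illustrates \(\var(N_2) \le \var(N_1)\):

```python
# Monte Carlo comparison of Var(N1) (with replacement) and Var(N2) (two distinct coins).
import random

ps = [0.2, 0.5, 0.9]   # arbitrary head probabilities
trials = 200_000

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n1 = [sum(random.random() < random.choice(ps) for _ in range(2))  # coin returned
      for _ in range(trials)]
n2 = [sum(random.random() < p for p in random.sample(ps, 2))      # two coins drawn
      for _ in range(trials)]
print(var(n1), var(n2))   # Var(N2) should come out smaller
```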

2018 Paper 3 Q12
D: 1700.0 B: 1516.0

A random process generates, independently, \(n\) numbers each of which is drawn from a uniform (rectangular) distribution on the interval 0 to 1. The random variable \(Y_k\) is defined to be the \(k\)th smallest number (so there are \(k-1\) smaller numbers).

  1. Show that, for \(0\le y\le1\,\), \[ {\rm P}\big(Y_k\le y) =\sum^{n}_{m=k}\binom{n}{m}y^{m}\left(1-y\right)^{n-m} . \tag{\(*\)} \]
  2. Show that \[ m\binom n m = n \binom {n-1}{m-1} \] and obtain a similar expression for \(\displaystyle (n-m) \, \binom n m\,\). Starting from \((*)\), show that the probability density function of \(Y_k\) is \[ n\binom{ n-1}{k-1} y^{k-1}\left(1-y\right)^{ n-k} \,.\] Deduce an expression for \(\displaystyle \int_0^1 y^{k-1}(1-y)^{n-k} \, \d y \,\).
  3. Find \(\E(Y_k) \) in terms of \(n\) and \(k\).


Solution:

  1. \begin{align*} && \mathbb{P}(Y_k \leq y) &= \sum_{m=k}^n\mathbb{P}(\text{exactly }m \text{ values less than }y) \\ &&&= \sum_{m=k}^n \binom{n}{m} y^m(1-y)^{n-m} \end{align*} since each of the \(n\) values is independently less than \(y\) with probability \(y\).
  2. Consider the number of ways to choose, from \(n\) people, a committee of \(m\) people together with a chair belonging to the committee. This can be counted in two ways. First: choose the committee in \(\binom{n}{m}\) ways and then the chair in \(m\) ways, giving \(m \binom{n}{m}\). Alternatively: choose the chair in \(n\) ways and then the remaining \(m-1\) committee members in \(\binom{n-1}{m-1}\) ways. Therefore \(m \binom{n}{m} = n \binom{n-1}{m-1}\). \begin{align*} (n-m) \binom{n}{m} &= (n-m) \binom{n}{n-m} \\ &= n \binom{n-1}{n-m-1} \\ &= n \binom{n-1}{m} \end{align*} \begin{align*} f_{Y_k}(y) &= \frac{\d }{\d y} \l \sum^{n}_{m=k}\binom{n}{m}y^{m}\left(1-y\right)^{n-m} \r \\ &= \sum^{n}_{m=k} \l \binom{n}{m}my^{m-1}\left(1-y\right)^{n-m} -\binom{n}{m}(n-m)y^{m}\left(1-y\right)^{n-m-1} \r \\ &= \sum^{n}_{m=k} \l n \binom{n-1}{m-1}y^{m-1}\left(1-y\right)^{n-m} -n \binom{n-1}{m} y^{m}\left(1-y\right)^{n-m-1} \r \\ &= n\sum^{n}_{m=k} \binom{n-1}{m-1}y^{m-1}\left(1-y\right)^{n-m} -n\sum^{n+1}_{m=k+1} \binom{n-1}{m-1} y^{m-1}\left(1-y\right)^{n-m} \\ &= n \binom{n-1}{k-1} y^{k-1}(1-y)^{n-k} \end{align*} where the final step telescopes (the \(m = n+1\) term vanishes since \(\binom{n-1}{n} = 0\)). \begin{align*} &&1 &= \int_0^1 f_{Y_k}(y) \d y \\ &&&= \int_0^1 n \binom{n-1}{k-1} y^{k-1}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1} \int_0^1 y^{k-1}(1-y)^{n-k} \d y \\ \Rightarrow && \frac{1}{n \binom{n-1}{k-1}} &= \int_0^1 y^{k-1}(1-y)^{n-k} \d y \\ \end{align*}
  3. \begin{align*} && \mathbb{E}(Y_k) &= \int_0^1 y f_{Y_k}(y) \d y \\ &&&= \int_0^1 n \binom{n-1}{k-1} y^{k}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1}\int_0^1 y^{k}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1}\int_0^1 y^{k+1-1}(1-y)^{n+1-(k+1)} \d y \\ &&&= n \binom{n-1}{k-1} \frac{1}{(n+1) \binom{n}{k}}\\ &&&= \frac{n}{n+1} \cdot \frac{k}{n} \\ &&&= \frac{k}{n+1} \end{align*}
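
A quick Monte Carlo check of part 3 (arbitrary \(n\), \(k\) and trial count; not part of the proof):

```python
# Check that E(Y_k) = k/(n + 1) for the k-th smallest of n uniform samples.
import random

n, k, trials = 7, 3, 200_000
total = 0.0
for _ in range(trials):
    total += sorted(random.random() for _ in range(n))[k - 1]  # k-th smallest
print(total / trials, k / (n + 1))
```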

2018 Paper 3 Q13
D: 1700.0 B: 1484.0

The random variable \(X\) takes only non-negative integer values and has probability generating function \(\G(t)\). Show that \[ \P(X = 0 \text{ or } 2 \text{ or } 4 \text { or } 6 \ \ldots ) = \frac{1}{2}\big(\G\left(1\right)+\G\left(-1\right)\big). \] You are now given that \(X\) has a Poisson distribution with mean \(\lambda\). Show that \[ \G(t) = \e^{-\lambda(1-t)} \,. \]

  1. The random variable \(Y\) is defined by \[ \P(Y=r)= \begin{cases} k\P(X=r) & \text{if \(r=0, \ 2, \ 4, \ 6, \ \ldots\) \ }, \\[2mm] 0& \text{otherwise}, \end{cases} \] where \(k\) is an appropriate constant. Show that the probability generating function of \(Y\) is \(\dfrac{\cosh\lambda t}{\cosh\lambda}\,\). Deduce that \(\E(Y) < \lambda\) for \(\lambda > 0\,\).
  2. The random variable \(Z\) is defined by \[\P(Z=r)= \begin{cases} c \P(X=r) & \text{if \(r = 0, \ 4, \ 8, \ 12, \ \ldots \ \)}, \\[2mm] 0& \text{otherwise,} \end{cases} \] where \(c\) is an appropriate constant. Is \(\E(Z) < \lambda\) for all positive values of \(\lambda\,\)?


Solution: \begin{align*} &&G_X(t) &= \mathbb{E}(t^X) \\ &&&= \sum_{k=0}^{\infty} \mathbb{P}(X = k) t^k \\ \Rightarrow && G_X(1) &= \sum_{k=0}^{\infty} \mathbb{P}(X = k) \\ \Rightarrow && G_X(-1) &= \sum_{k=0}^{\infty} (-1)^k\mathbb{P}(X = k) \\ \Rightarrow && \frac12 \left(G_X(1) + G_X(-1)\right) &= \sum_{k=0}^{\infty} \frac12 (1 + (-1)^k) \mathbb{P}(X = k) \\ &&&= \sum_{k=0}^{\infty} \mathbb{P}(X =2k) \end{align*} For \(X \sim \textrm{Poisson}(\lambda)\), \[ G_X(t) = \sum_{k=0}^{\infty} \frac{\lambda^k e^{-\lambda}}{k!} t^k = e^{-\lambda} e^{\lambda t} = e^{-\lambda(1-t)}. \]

  1. \begin{align*} 1 &= \sum_r \mathbb{P}(Y = r) \\ &= k\sum_{r=0}^\infty \mathbb{P}(X = 2r) \\ &= k \cdot \frac12 \l e^{-\lambda(1-1) } + e^{-\lambda(1+1) }\r \\ &= \frac{k}{2}(1+e^{-2\lambda}) \end{align*} Therefore \(k = \frac{2}{1+e^{-2\lambda}} = \frac{e^{\lambda}}{\cosh \lambda}\). \begin{align*} && G_X(t) + G_X(-t) &= \sum_{r=0}^\infty \mathbb{P}(X = r)t^r(1^r + (-1)^r) \\ &&&= 2\sum_{r=0}^\infty \mathbb{P}(X = 2r)t^{2r} \\ &&&= 2\sum_{r=0}^\infty \frac{1}{k}\mathbb{P}(Y = 2r)t^{2r} \\ &&&= \frac{2}{k}G_Y(t) \\ \Rightarrow && G_Y(t) &= k \cdot \frac{G_X(t) + G_X(-t)}{2} \\ &&&= k\frac{e^{-\lambda(1-t)} + e^{-\lambda(1+t)}}{2} \\ &&&= \frac{e^\lambda}{\cosh \lambda} \frac{e^{-\lambda} (e^{\lambda t}+e^{-\lambda t}) }{2} \\ &&&= \frac{\cosh \lambda t}{\cosh \lambda} \end{align*} Since \(\mathbb{E}(Y) = G_Y'(1)\) and \begin{align*} && G_Y'(t) &= \frac{\lambda \sinh \lambda t}{\cosh \lambda} \\ \Rightarrow && G_Y'(1) &= \lambda \tanh \lambda \\ &&&< \lambda \end{align*} since \(\tanh x < 1\) for all \(x > 0\).
  2. \begin{align*} && \frac14 \l G_X(t) + G_X(it) +G_X(-t) + G_X(-it) \r &= \sum_{k=0}^\infty \mathbb{P}(X=k)t^k (1 + i^k + (-1)^k + (-i)^k) \\ &&&= \sum_{k=0}^\infty \mathbb{P}(X = 4k)t^{4k} \\ &&&= \frac{G_Z(t)}{c} \end{align*} Since \(G_Z(1) = 1\) we must have \(c = \frac1{\frac14 \l G_X(1) + G_X(i) +G_X(-1) + G_X(-i) \r}\) \begin{align*} && c &= \frac{4e^{\lambda}}{e^{\lambda} + e^{-\lambda} + e^{i\lambda} + e^{-i\lambda}} \\ &&&= \frac{2e^{\lambda}}{\cosh \lambda + \cos \lambda} \\ && G_Z(t) &= c \cdot \frac14 \l e^{-\lambda(1-t)}+e^{-\lambda(1-it)}+e^{-\lambda(1+t)}+e^{-\lambda(1+it)} \r \\ &&&= \frac{ce^{-\lambda}}{4} \l 2\cosh \lambda t + 2 \cos \lambda t\r \\ &&&= \frac{\cosh \lambda t + \cos \lambda t}{\cosh \lambda + \cos \lambda} \end{align*} We are interested in \(G_Z'(1)\) so: \begin{align*} && G_Z'(t) &= \frac{\lambda (\sinh \lambda t - \sin \lambda t)}{\cosh \lambda + \cos \lambda } \end{align*} Considering various values of \(\lambda\), it makes sense to look at \(\lambda = \pi\) (since \(\cos \lambda = -1\) and the denominator will be small). From this we can see: \begin{align*} G'_Z(1) &= \frac{\pi (\sinh \pi-0)}{\cosh \pi-1} \\ &= \frac{\pi}{\tanh \frac{\pi}{2}} > \pi \end{align*} So \(\mathbb{E}(Z) > \lambda\) for \(\lambda = \pi\): the answer is no, \(\mathbb{E}(Z) < \lambda\) does not hold for all positive \(\lambda\).
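
The closing claim is easy to explore numerically; the sketch below tabulates \(\mathrm{E}(Z)\) against \(\lambda\) for a few arbitrary values, showing that \(\mathrm{E}(Z) > \lambda\) near \(\lambda = \pi\) but not everywhere.

```python
# E(Z) = lam (sinh lam - sin lam) / (cosh lam + cos lam); a numerical sketch.
from math import sinh, cosh, sin, cos, pi

def EZ(lam):
    return lam * (sinh(lam) - sin(lam)) / (cosh(lam) + cos(lam))

for lam in (0.5, 1.0, 2.0, pi, 4.0):
    print(round(lam, 3), round(EZ(lam), 4), EZ(lam) > lam)
```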

2017 Paper 3 Q13
D: 1700.0 B: 1500.0

The random variable \(X\) has mean \(\mu\) and variance \(\sigma^2\), and the function \({\rm V}\) is defined, for \(-\infty < x < \infty\), by \[ {\rm V}(x) = \E \big( (X-x)^2\big) . \] Express \({\rm V}(x)\) in terms of \(x\), \( \mu\) and \(\sigma\). The random variable \(Y\) is defined by \(Y={\rm V}(X)\). Show that \[ \E(Y) = 2 \sigma^2 . \tag{\(*\)} \] Now suppose that \(X\) is uniformly distributed on the interval \(0\le x \le1\,\). Find \({\rm V}(x)\,\). Find also the probability density function of \(Y\!\) and use it to verify that \((*)\) holds in this case.


Solution: \begin{align*} {\rm V}(x) &= \E \big( (X-x)^2\big) \\ &= \E \l X^2 - 2xX + x^2\r \\ &= \E [ X^2 ]- 2x\E[X] + x^2 \\ &= \sigma^2+\mu^2 - 2x\mu + x^2 \\ &= \sigma^2 + (\mu - x)^2 \end{align*} \begin{align*} \E[Y] &= \E[\sigma^2 + (\mu - X)^2] \\ &= \sigma^2 + \E[(\mu - X)^2]\\ &= \sigma^2 + \sigma^2 \\ &= 2\sigma^2 \end{align*} If \(X \sim U(0,1)\) then \(V(x) = \frac{1}{12} + (\frac12 - x)^2\). \begin{align*} \P(Y \leq y) &= \P(\frac1{12} + (\frac12 - X)^2 \leq y) \\ &= \P((\frac12 -X)^2 \leq y - \frac1{12}) \\ &= \P(|\frac12 -X| \leq \sqrt{y - \frac1{12}}) \\ &= \begin{cases} 1 & \text{if } y - \frac1{12} > \frac14 \\ 2 \sqrt{y - \frac1{12}} & \text{if } \frac14 > y - \frac1{12} > 0 \\ \end{cases} \\ &= \begin{cases} 1 & \text{if } y> \frac13 \\ \sqrt{4y - \frac1{3}} & \text{if } \frac13 > y > \frac1{12} \\ \end{cases} \end{align*} Therefore $f_Y(y) = \begin{cases} \frac{2}{\sqrt{4y-\frac{1}{3}}} & \text{if } \frac1{12} < y < \frac13 \\ 0 & \text{otherwise} \end{cases}$ \begin{align*} \E[Y] &= \int_{1/12}^{1/3} \frac{2x}{\sqrt{4x-\frac13}} \, dx \\ &= 2\int_{u = 0}^{u=1} \frac{\frac{1}{4}u +\frac1{12}}{\sqrt{u}} \,\frac{1}{4} du \tag{\(u = 4x - \frac13, \frac{du}{dx} = 4\)}\\ &= \frac{1}{2 \cdot 12}\int_{u = 0}^{u=1} 3\sqrt{u} +\frac{1}{\sqrt{u}} \, du \\ &= \frac{1}{2 \cdot 12} \left [2 u^{3/2} + 2u^{1/2} \right ]_0^1 \\ &= \frac{1}{2 \cdot 12} \cdot 4 \\ &= \frac{2}{12} \end{align*} as required
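
A one-line simulation (a sketch; the trial count is arbitrary) confirms \(\E(Y) = 2\sigma^2 = \frac16\) in the uniform case:

```python
# Monte Carlo check that E(V(X)) = 1/6 when X ~ U(0, 1), V(x) = 1/12 + (1/2 - x)^2.
import random

trials = 500_000
total = sum(1 / 12 + (0.5 - random.random()) ** 2 for _ in range(trials))
print(total / trials, 1 / 6)
```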

2016 Paper 3 Q13
D: 1700.0 B: 1500.0

Given a random variable \(X\) with mean \(\mu\) and standard deviation \(\sigma\), we define the kurtosis, \(\kappa\), of \(X\) by \[ \kappa = \frac{ \E\big((X-\mu)^4\big)}{\sigma^4} -3 \,. \] Show that the random variable \(X-a\), where \(a\) is a constant, has the same kurtosis as \(X\).

  1. Show by integration that a random variable which is Normally distributed with mean 0 has kurtosis 0.
  2. Let \(Y_1, Y_2, \ldots, Y_n\) be \(n\) independent, identically distributed, random variables with mean 0, and let \(T = \sum\limits_{r=1}^n Y_r\). Show that \[ \E(T^4) = \sum_{r=1}^n \E(Y_r^4) + 6 \sum_{r=1}^{n-1} \sum_{s=r+1}^{n} \E(Y^2_s) \E(Y^2_r) \,. \]
  3. Let \(X_1\), \(X_2\), \(\ldots\)\,, \(X_n\) be \(n\) independent, identically distributed, random variables each with kurtosis \(\kappa\). Show that the kurtosis of their sum is \(\dfrac\kappa n\,\).


Solution: \begin{align*} &&\kappa_{X-a} &= \frac{\mathbb{E}\left(\left(X-a-(\mu-a)\right)^4\right)}{\sigma_{X-a}^4}-3 \\ &&&= \frac{\mathbb{E}\left(\left(X-\mu\right)^4\right)}{\sigma_X^4}-3\\ &&&= \kappa_X \end{align*}

  1. \(\,\) \begin{align*} && \kappa &= \frac{\mathbb{E}((X-\mu)^4)}{\sigma^4} - 3 \\ &&&= \frac{\mathbb{E}((\mu+\sigma Z-\mu)^4)}{\sigma^4} - 3 \\ &&&= \frac{\mathbb{E}((\sigma Z)^4)}{\sigma^4} - 3 \\ &&&= \mathbb{E}(Z^4)-3\\ &&&= \int_{-\infty}^{\infty} x^4\frac{1}{\sqrt{2\pi}} \exp \left ( - \frac12x^2 \right)\d x -3 \\ &&&= \left [\frac{1}{\sqrt{2\pi}}x^{3} \cdot \left ( -\exp \left ( - \frac12x^2 \right)\right) \right]_{-\infty}^{\infty} + \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty 3x^2 \exp \left ( - \frac12x^2 \right) \d x - 3 \\ &&&= 0 + 3 \textrm{Var}(Z) - 3 =0 \end{align*}
  2. \(\,\) Expanding the fourth power and grouping terms by index pattern (distinct letters denote distinct indices, and each unordered combination is counted once): \begin{align*} && \mathbb{E}(T^4) &= \mathbb{E} \left [\left ( \sum\limits_{r=1}^n Y_r\right)^4\right] \\ &&&= \mathbb{E} \left [ \sum_{r=1}^n Y_r^4+4\sum_{i\neq j} Y_i^3Y_j+6\sum_{i<j} Y_i^2Y_j^2+12\sum_{\substack{i<j \\ k\neq i,j}} Y_iY_jY_k^2 +24 \sum_{i<j<k<l} Y_iY_jY_kY_l\right] \\ &&&= \sum_{r=1}^n \mathbb{E}\left[ Y_r^4 \right]+4\sum_{i\neq j} \mathbb{E}\left[Y_i^3\right]\mathbb{E}\left[Y_j\right]+6\sum_{i<j} \mathbb{E}\left[Y_i^2\right]\mathbb{E}\left[Y_j^2\right]+12\sum_{\substack{i<j \\ k\neq i,j}} \mathbb{E}\left[Y_i\right]\mathbb{E}\left[Y_j\right]\mathbb{E}\left[Y_k^2\right] +24\sum_{i<j<k<l} \mathbb{E}\left[Y_i\right]\mathbb{E}\left[Y_j\right]\mathbb{E}\left[Y_k\right]\mathbb{E}\left[Y_l\right] \\ &&&= \sum_{r=1}^n \mathbb{E}\left[ Y_r^4 \right]+6\sum_{r=1}^{n-1}\sum_{s=r+1}^{n} \mathbb{E}\left[Y_r^2\right]\mathbb{E}\left[Y_s^2\right] \end{align*} using independence and \(\mathbb{E}(Y_i) = 0\).
  3. Without loss of generality we may assume the \(X_i\) all have mean zero, since kurtosis is unchanged by a shift. Therefore we can consider the situation as in the previous part, with the \(X_i\) in the role of the \(Y_i\) and \(T\) their sum. Note that \(\mathbb{E}(X_i^4) = \sigma^4(\kappa + 3)\) and \(\textrm{Var}(T) = n \sigma^2\). \begin{align*} && \kappa_T &= \frac{\mathbb{E}(T^4)}{(\textrm{Var}(T))^2} - 3 \\ &&&= \frac{\sum_{r=1}^n \mathbb{E} \left [ X_r^4 \right]+6\sum_{i<j} \mathbb{E} \left [ X_i^2\right]\mathbb{E}\left[X_j^2\right]}{n^2\sigma^4}-3 \\ &&&= \frac{n\sigma^4(\kappa+3)+6\binom{n}{2}\sigma^4}{n^2\sigma^4} -3\\ &&&= \frac{\kappa}{n} + \frac{3n + \frac{6n(n-1)}{2}}{n^2} - 3 \\ &&&= \frac{\kappa}{n} + \frac{3n^2}{n^2}-3 \\ &&&= \frac{\kappa}{n} \end{align*}
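
A simulation sketch of part 3 (not a proof): exponential variables have kurtosis \(6\), so sums of \(n\) of them should show sample kurtosis near \(6/n\). The distribution, \(n\) and trial count are arbitrary choices.

```python
# Monte Carlo check that summing n i.i.d. variables divides the kurtosis by n.
import random

def kurtosis(xs):
    m = sum(xs) / len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m4 = sum((x - m) ** 4 for x in xs) / len(xs)
    return m4 / m2 ** 2 - 3

n, trials = 4, 200_000
sums = [sum(random.expovariate(1.0) for _ in range(n)) for _ in range(trials)]
print(kurtosis(sums), 6 / n)   # Exp(1) has kurtosis 6, so expect about 6/n
```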

2015 Paper 2 Q13
D: 1600.0 B: 1516.0

The maximum height \(X\) of flood water each year on a certain river is a random variable with probability density function \(\f\) given by \[ \f(x) = \begin{cases} \lambda \e^{-\lambda x} & \text{for \(x\ge0\)}\,, \\ 0 & \text{otherwise,} \end{cases} \] where \(\lambda\) is a positive constant. It costs \(ky\) pounds each year to prepare for flood water of height \(y\) or less, where \(k\) is a positive constant and \(y\ge0\). If \(X \le y\) no further costs are incurred but if \(X> y\) the additional cost of flood damage is \(a(X - y )\) pounds where \(a\) is a positive constant.

  1. Let \(C\) be the total cost of dealing with the floods in the year. Show that the expectation of \(C\) is given by \[\mathrm{E}(C)=ky+\frac{a}{\lambda}\mathrm{e}^{-\lambda y} \, . \] How should \(y\) be chosen in order to minimise \(\mathrm{E}(C)\), in the different cases that arise according to the value of \(a/k\)?
  2. Find the variance of \(C\), and show that the more that is spent on preparing for flood water in advance the smaller this variance.


Solution:

  1. \(\,\) \begin{align*} && \mathbb{E}(C) &= \int_0^\infty \text{cost}(x) f(x) \d x \\ &&&= ky + \int_y^{\infty} a(x-y) \lambda e^{-\lambda x} \d x\\ &&&= ky + \int_0^{\infty} a u \lambda e^{-\lambda u -\lambda y} \d u \\ &&&= ky + ae^{-\lambda y} \left( \left [ -ue^{-\lambda u} \right]_0^\infty +\int_0^\infty e^{-\lambda u} \d u\right) \\ &&&= ky + \frac{a}{\lambda}e^{-\lambda y} \\ \\ && \frac{\d \mathbb{E}(C)}{\d y} &= k - ae^{-\lambda y} \\ \Rightarrow && y &= \frac{1}{\lambda}\ln \left ( \frac{a}{k} \right) \end{align*} (substituting \(u = x - y\) in the second step). If \(\frac{a}{k} > 1\) then \(\mathbb{E}(C)\) is decreasing for \(y < \frac{1}{\lambda}\ln \left ( \frac{a}{k} \right)\) and increasing thereafter, so the optimal choice is \(y = \frac{1}{\lambda}\ln \left ( \frac{a}{k} \right)\). If \(\frac{a}{k} \leq 1\) then \(\mathbb{E}(C)\) is increasing for all \(y \geq 0\), so you should spend nothing on flood preparation, i.e. take \(y = 0\).
  2. \begin{align*} && \mathbb{E}(C^2) &= \int_0^{\infty} \text{cost}(x)^2 f(x) \d x \\ &&&= \int_0^{\infty}(ky + a(x-y)\mathbb{1}_{x > y})^2 f(x) \d x \\ &&&= k^2y^2 + \int_y^{\infty}2kya(x-y)f(x)\d x + \int_y^{\infty}a^2 (x-y)^2 f(x) \d x \\ &&&= k^2y^2 + \frac{2kya}{\lambda}e^{- \lambda y}+a^2e^{-\lambda y}\int_{u=0}^\infty u^2 \lambda e^{-\lambda u} \d u \\ &&&= k^2y^2 + \frac{2kya}{\lambda}e^{-\lambda y}+a^2e^{-\lambda y}\left(\textrm{Var}(W) + \mathbb{E}(W)^2\right) \qquad \text{where } W \sim \textrm{Exp}(\lambda)\\ &&&= k^2y^2 + \frac{2kya}{\lambda}e^{-\lambda y} + a^2e^{-\lambda y} \frac{2}{\lambda^2} \\ && \textrm{Var}(C) &= k^2y^2 + \frac{2kya}{\lambda}e^{-\lambda y} + a^2e^{-\lambda y} \frac{2}{\lambda^2} - \left ( ky + \frac{a}{\lambda} e^{-\lambda y}\right)^2 \\ &&&= a^2e^{-\lambda y} \frac{2}{\lambda^2} - a^2 e^{-2\lambda y}\frac{1}{\lambda^2} \\ &&&= \frac{a^2}{\lambda^2} e^{-\lambda y}\left (2 - e^{-\lambda y} \right) \\ \\ && \frac{\d \textrm{Var}(C)}{\d y} &= \frac{a^2}{\lambda^2} \left (-2\lambda e^{-\lambda y} +2\lambda e^{-2\lambda y} \right) \\ &&&= \frac{2a^2}{\lambda} e^{-\lambda y}\left (e^{-\lambda y}-1 \right) \leq 0 \end{align*} so \(\textrm{Var}(C)\) is decreasing in \(y\): the more spent on preparation in advance, the smaller the variance.
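
A Monte Carlo sketch (all parameter values arbitrary) checking both the mean and the variance formulas:

```python
# Check E(C) = k y + (a/lam) e^(-lam y) and Var(C) = (a/lam)^2 e^(-lam y)(2 - e^(-lam y)).
import random
from math import exp

k, a, lam, y, trials = 2.0, 10.0, 1.5, 0.8, 400_000
costs = []
for _ in range(trials):
    x = random.expovariate(lam)                          # flood height X ~ Exp(lam)
    costs.append(k * y + (a * (x - y) if x > y else 0.0))
m = sum(costs) / trials
v = sum((c - m) ** 2 for c in costs) / trials
print(m, k * y + (a / lam) * exp(-lam * y))
print(v, (a / lam) ** 2 * exp(-lam * y) * (2 - exp(-lam * y)))
```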

2015 Paper 3 Q13
D: 1700.0 B: 1500.0

Each of the two independent random variables \(X\) and \(Y\) is uniformly distributed on the interval~\([0,1]\).

  1. By considering the lines \(x+y =\) \(\mathrm{constant}\) in the \(x\)-\(y\) plane, find the cumulative distribution function of \(X+Y\).
  2. Hence show that the probability density function \(f\) of \((X+Y)^{-1}\) is given by \[ \f(t) = \begin{cases} 2t^{-2} -t^{-3} & \text{for \( \tfrac12 \le t \le 1\)} \\ t^{-3} & \text{for \(1\le t <\infty\)}\\ 0 & \text{otherwise}. \end{cases} \] Evaluate \(\E\Big(\dfrac1{X+Y}\Big)\,\).
  3. Find the cumulative distribution function of \(Y/X\) and use this result to find the probability density function of \(\dfrac X {X+Y}\). Write down \(\E\Big( \dfrac X {X+Y}\Big)\) and verify your result by integration.


Solution:

  1. \(\mathbb{P}(X + Y \leq c) \) is the area between the \(x\)-axis, \(y\)-axis and the line \(x + y = c\), intersected with the unit square. There are two non-trivial cases: \[\mathbb{P}(X + Y \leq c) = \begin{cases} 0 & \text{ if } c \leq 0 \\ \frac{c^2}{2} & \text{ if } 0 \leq c \leq 1 \\ 1- \frac{(2-c)^2}{2} & \text{ if } 1 \leq c \leq 2 \\ 1 & \text{ otherwise} \end{cases}\]
  2. \begin{align*} && \mathbb{P}((X + Y)^{-1} \leq t) &= 1- \mathbb{P}(X + Y \leq \frac1{t}) \\ \Rightarrow && f_{(X+Y)^{-1}}(t) &= 0 -\begin{cases} 0 & \text{ if } \frac1{t} \leq 0 \\ \frac{\d}{\d t}\frac{1}{2t^2} & \text{ if } \frac{1}{t} \leq 1 \\ \frac{\d}{\d t} \l 1- \frac{(2-\frac1t)^2}{2} \r & \text{ if } 1 \leq \frac{1}{t} \leq 2 \\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ (2-\frac1t)t^{-2} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ 2t^{-2}-t^{-3} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \end{align*} Therefore, \begin{align*} \E \Big(\dfrac1{X+Y}\Big) &= \int_{\frac12}^{\infty} t f_{(X+Y)^{-1}}(t) \, \d t \\ &= \int_{\frac12}^{1} t f_{(X+Y)^{-1}}(t) \, \d t + \int_{1}^{\infty} t f_{(X+Y)^{-1}}(t) \d t\\ &= \int_{\frac12}^{1} \l 2t^{-1} - t^{-2} \r \, \d t + \int_{1}^{\infty} t^{-2} \d t\\ &= \left [ 2 \ln (t) + t^{-1} \right]_{\frac12}^{1} + \left [ -t^{-1} \right ]_{1}^{\infty} \\ &= 1 + 2 \ln 2 -2 + 1 \\ &= 2 \ln 2 \end{align*}
  3. \begin{align*} &&\mathbb{P} \l \frac{Y}{X} \leq c \r &= \mathbb{P}( Y \leq c X) \\ &&&= \begin{cases} 0 & \text{if } c \leq 0 \\ \frac{c}{2} & \text{if } 0 \leq c \leq 1 \\ 1-\frac{1}{2c} & \text{if } 1 \leq c \end{cases} \\ \\ \Rightarrow && \mathbb{P} \l \frac{X}{X+Y} \leq t\r &= \mathbb{P} \l \frac{1}{1+\frac{Y}{X}} \leq t\r \\ &&&= \mathbb{P} \l \frac{1}{t} \leq 1+\frac{Y}{X}\r \\ &&&= \mathbb{P} \l \frac{1}{t} - 1\leq \frac{Y}{X}\r \\ &&&= 1- \mathbb{P} \l \frac{Y}{X} \leq \frac{1}{t} - 1\r \\ &&&= 1 - \begin{cases} 0 & \text{if } t \geq 1 \\ \frac{1}{2t} - \frac{1}{2} & \text{if } \frac12 \leq t \leq 1 \\ 1-\frac{t}{2-2t} & \text{if } 0 < t \leq \frac12 \end{cases} \\ && f_{\frac{X}{X+Y}}(t) &= \begin{cases} \frac{1}{2(1-t)^2} & \text{if } 0 \leq t \leq \frac12 \\ \frac{1}{2t^2} & \text{if } \frac12 \leq t \leq 1 \\ 0 & \text{otherwise} \end{cases} \\ \Rightarrow && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 t f(t) \d t \\ &&&= \int_0^{1/2} \frac{t}{2(1-t)^2} \d t + \int_{1/2}^1 \frac{1}{2t} \d t \\ &&& = \frac{1-\ln 2}{2} + \frac{\ln 2}{2} = \frac{1}{2} \\ \\ && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 \int_0^1 \frac{x}{x+y} \d y\d x \\ &&&= \int_0^1 \l x \ln (x+1) - x \ln x \r \d x \\ &&&= \left [\frac{x^2}2 \ln(x+1) - \frac{x^2}{2} \ln(x) \right]_0^1 -\int_0^1 \l \frac{x^2}{2(x+1)} - \frac{x}{2} \r \d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \frac{x^2-1+1}{2(x+1)}\d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \frac{x -1}{2} + \frac{1}{2(x+1)}\d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \frac{1}{4} + \frac{1}{2} - \frac{\ln 2}{2} \\ &&&= \frac{1}{2} \end{align*} (for the first integral above, substitute \(u = 1-t\)). We can also notice that \(1 = \mathbb{E} \l \frac{X+Y}{X+Y} \r = \mathbb{E} \l \frac{X}{X+Y} \r + \mathbb{E} \l \frac{Y}{X+Y} \r = 2 \mathbb{E} \l \frac{X}{X+Y} \r\) by symmetry, so the result is immediate provided the integrals converge.
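
Both expectations are easy to confirm by simulation; a sketch with an arbitrary trial count:

```python
# Monte Carlo check of E(1/(X+Y)) = 2 ln 2 and E(X/(X+Y)) = 1/2 for X, Y ~ U(0,1).
import random
from math import log

trials = 400_000
inv_sum = ratio = 0.0
for _ in range(trials):
    x, y = random.random(), random.random()
    inv_sum += 1 / (x + y)
    ratio += x / (x + y)
print(inv_sum / trials, 2 * log(2))
print(ratio / trials, 0.5)
```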

2014 Paper 1 Q12
D: 1484.0 B: 1441.7

A game in a casino is played with a fair coin and an unbiased cubical die whose faces are labelled \(1, 1, 1, 2, 2\) and \(3\). In each round of the game, the die is rolled once and the coin is tossed once. The outcome of the round is a random variable \(X\). The value, \(x\), of \(X\) is determined as follows. If the result of the toss is heads then \(x= \vert ks -1\vert\), and if the result of the toss is tails then \(x=\vert k-s\vert\), where \(s\) is the number on the die and \(k\) is a given number. Show that \(\mathbb{E}(X^2) = k + \frac{13(k-1)^2}{6}\). Given that both \(\mathbb{E}(X^2)\) and \(\mathbb{E}(X)\) are positive integers, and that \(k\) is a single-digit positive integer, determine the value of \(k\), and write down the probability distribution of \(X\). A gambler pays \(\pounds 1\) to play the game, which consists of two rounds. The gambler is paid:

  • \(\pounds w\), where \(w\) is an integer, if the sum of the outcomes of the two rounds exceeds \(25\);
  • \(\pounds 1\) if the sum of the outcomes equals \(25\);
  • nothing if the sum of the outcomes is less than \(25\).
Find, in terms of \(w\), an expression for the amount the gambler expects to be paid in a game, and deduce the maximum possible value of \(w\), given that the casino's owners choose \(w\) so that the game is in their favour.


Solution: \begin{align*} && \mathbb{E}(X^2) &= \frac12 \left (\frac16 \left ( 3(k -1)^2+2(2k-1)^2+(3k-1)^2 \right) +\frac16 \left ( 3(k -1)^2+2(k-2)^2+(k-3)^2 \right) \right) \\ &&&= \frac12 \left (\frac16 \left (20k^2-20k+6 \right) + \frac16 \left ( 6k^2-20k+20\right) \right) \\ &&&= \frac1{12} \left (26k^2-40k+ 26\right) \\ &&&= \frac{13}{6} (k^2+1) - \frac{10}{3}k \\ &&&= \frac{13}{6}(k-1)^2+k \end{align*} Since \(\mathbb{E}(X^2)\) is an integer, \(6 \mid 13(k-1)^2\); as \(13\) is coprime to \(6\) and \(6\) is squarefree, this forces \(6 \mid k-1\), and for single-digit positive \(k\) that gives \(k = 1\) or \(k = 7\). \begin{align*} \mathbb{E}(X | k=1) &= \frac12 \left (\frac16 \left ( 2+2 \right) +\frac16 \left ( 2+2 \right) \right) = \frac23 \not \in \mathbb{Z}\\ \mathbb{E}(X | k=7) &= \frac12 \left (\frac16 \left ( 3\cdot6+2\cdot13+20 \right) +\frac16 \left ( 3\cdot6+2\cdot5+4 \right) \right) = 8 \end{align*} Therefore \(k = 7\). The probability distribution is \begin{align*} && \mathbb{P}(X=4) = \frac1{12} \\ && \mathbb{P}(X=5) = \frac1{6} \\ && \mathbb{P}(X=6) = \frac12 \\ && \mathbb{P}(X=13) = \frac1{6} \\ && \mathbb{P}(X=20)= \frac1{12} \\ \end{align*} The only ways to score more than \(25\) are: \(20+6, 20+13, 20+20, 13+13\). The only way to score exactly \(25\) is \(20+5\) (in either order). \begin{align*} \mathbb{P}(>25) &= \frac1{12} \cdot\left(2\cdot \frac12+2\cdot\frac16+\frac1{12}\right) + \frac{1}{6^2} \\ &= \frac{7}{48} \\ \mathbb{P}(=25) &= \frac{2}{12 \cdot 6} = \frac{1}{36} \\ \\ \mathbb{E}(\text{payout}) &= \frac{7}{48}w + \frac{1}{36} = \frac{21w+4}{144} \end{align*} The casino needs \(\frac{21w+4}{144} < 1 \Rightarrow 21w< 140 \Rightarrow w < \frac{20}{3}\), so the maximum possible value of \(w\) is \(6\).
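
The payout arithmetic can be verified exactly by enumerating ordered pairs of rounds with `fractions.Fraction`; a sketch, not part of the answer:

```python
# Exact check of P(sum > 25) = 7/48, P(sum = 25) = 1/36, and the bound w <= 6.
from fractions import Fraction as F

dist = {4: F(1, 12), 5: F(1, 6), 6: F(1, 2), 13: F(1, 6), 20: F(1, 12)}
p_gt = sum(p1 * p2 for x1, p1 in dist.items() for x2, p2 in dist.items() if x1 + x2 > 25)
p_eq = sum(p1 * p2 for x1, p1 in dist.items() for x2, p2 in dist.items() if x1 + x2 == 25)
print(p_gt, p_eq)                    # 7/48 and 1/36
for w in range(1, 10):
    print(w, p_gt * w + p_eq < 1)    # True for w <= 6, False from w = 7
```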