Problems


2025 Paper 3 Q11

  1. Let \(\lambda > 0\). The independent random variables \(X_1, X_2, \ldots, X_n\) all have probability density function $$f(t) = \begin{cases} \lambda e^{-\lambda t} & t \geq 0 \\ 0 & t < 0 \end{cases}$$ and cumulative distribution function \(F(x)\). The value of random variable \(Y\) is the largest of the values \(X_1, X_2, \ldots, X_n\). Show that the cumulative distribution function of \(Y\) is given, for \(y \geq 0\), by $$G(y) = (1 - e^{-\lambda y})^n$$
  2. The values \(L(\alpha)\) and \(U(\alpha)\), where \(0 < \alpha \leq \frac{1}{2}\), are such that $$P(Y < L(\alpha)) = \alpha \text{ and } P(Y > U(\alpha)) = \alpha$$ Show that $$L(\alpha) = -\frac{1}{\lambda}\ln(1 - \alpha^{1/n})$$ and write down a similar expression for \(U(\alpha)\).
  3. Use the approximation \(e^t \approx 1 + t\), for \(|t|\) small, to show that, for sufficiently large \(n\), $$\lambda L(\alpha) \approx \ln(n) - \ln\left(\ln\left(\frac{1}{\alpha}\right)\right)$$
  4. Hence show that the median of \(Y\) tends to infinity as \(n\) increases, but that the width of the interval \(U(\alpha) - L(\alpha)\) tends to a value which is independent of \(n\).
  5. You are given that, for \(|t|\) small, \(\ln(1 + t) \approx t\) and that \(e^3 \approx 20\). Show that, for sufficiently large \(n\), there is an interval of width approximately \(4\lambda^{-1}\) in which \(Y\) lies with probability \(0.9\).


Solution:

  1. Note that \(\displaystyle F(y) = \mathbb{P}(X_i < y) = \int_0^y \lambda e^{-\lambda t} \d t = 1-e^{-\lambda y}\). Notice also that \begin{align*} G(y) &= \mathbb{P}(Y < y) \\ &= \mathbb{P}(\max_i(X_i) < y) \\ &= \mathbb{P}(X_i < y \text{ for all }i) \\ &= \prod_{i=1}^n \mathbb{P}(X_i < y) \\ &= \prod_{i=1}^n (1-e^{-\lambda y})\\ &= (1-e^{-\lambda y})^n \end{align*} as required.
  2. \begin{align*} && \mathbb{P}(Y < L(\alpha)) &= \alpha \\ \Rightarrow && (1-e^{-\lambda L(\alpha)})^n &= \alpha \\ \Rightarrow && 1-e^{-\lambda L(\alpha)} &= \alpha^{\tfrac1n} \\ \Rightarrow && L(\alpha) &= -\frac{1}{\lambda}\ln \left (1-\alpha^{\tfrac1n} \right) \end{align*} Notice also: \begin{align*} && \mathbb{P}(Y > U(\alpha)) &= \alpha \\ \Rightarrow && 1 - (1-e^{-\lambda U(\alpha)})^n &= \alpha \\ \Rightarrow && U(\alpha) &= -\frac{1}{\lambda}\ln \left ( 1-(1-\alpha)^{\tfrac1n} \right) \end{align*}
  3. \begin{align*} \lambda L(\alpha) &= -\ln \left (1-\alpha^{\tfrac1n} \right) \\ &= -\ln \left (1-e^{\tfrac1n \ln \alpha} \right) \\ &\approx - \ln \left ( 1 - 1 - \frac1n \ln \alpha\right) \tag{\(e^t \approx 1 + t\)} \\ &= -\ln \left ( \frac{1}{n} \ln \frac{1}\alpha \right) \\ &= - \ln \frac{1}{n} - \ln \left ( \ln \frac{1}{\alpha} \right )\\ &= \ln n - \ln \left ( \ln \left ( \frac{1}{\alpha} \right ) \right) \end{align*} since if \(n\) is large, \(\frac{\ln \alpha}{n}\) is small.
  4. The median is the value where \(\mathbb{P}(Y < M) = \frac12\), or in other words \(L(\frac12)\), but this is \(\approx \frac{\ln n - \ln (\ln 2)}{\lambda} \to \infty\). \begin{align*} && \lambda U(\alpha) &\approx \ln n - \ln \left ( \ln \left ( \frac{1}{1-\alpha} \right ) \right) \\ \Rightarrow && \lambda(U(\alpha) - L(\alpha)) &\approx -\ln \left ( \ln \left ( \frac{1}{1-\alpha} \right ) \right)+ \ln \left ( \ln \left ( \frac{1}{\alpha} \right ) \right) \\ \Rightarrow && U(\alpha) - L(\alpha) &\to \frac{1}{\lambda} \left ( \ln \left ( \ln \left ( \frac{1}{\alpha} \right ) \right)-\ln \left ( \ln \left ( \frac{1}{1-\alpha} \right ) \right ) \right) \end{align*} which doesn't depend on \(n\).
  5. Suppose \(\alpha = \frac{1}{20}\); then \begin{align*} U(\alpha) - L(\alpha) &\approx \frac{1}{\lambda} \left (\ln \ln 20 - \ln \ln \frac{20}{19} \right) \\ &= \lambda^{-1} \left (\ln \ln 20 - \ln \ln (1 + \tfrac{1}{19}) \right) \\ &\approx \lambda^{-1} \left (\ln 3 - \ln \tfrac{1}{19} \right) \tag{\(\ln(1+t) \approx t\), \(\ln 20 \approx 3\)} \\ &= \lambda^{-1} \ln (3 \cdot 19) \\ &\approx \lambda^{-1} \left (\ln 3 + \ln 20 \right) \\ &\approx \lambda^{-1} (1 + 3) \\ &= 4\lambda^{-1} \end{align*} using \(\ln 3 \approx 1\) and \(e^3 \approx 20\) in the final steps. [Note that \(\ln \ln 20 - \ln \ln \frac{20}{19} = 4.0673\ldots\)]
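
The approximations in this solution are easy to confirm numerically. Below is a minimal sketch (the values \(\lambda = 1\), \(n = 10^6\) and \(\alpha = \frac{1}{20}\) are illustrative choices, not taken from the question):

```python
import math

lam, n, alpha = 1.0, 10**6, 1 / 20

def G(y):
    # cdf of Y, the maximum of n independent Exp(lam) variables
    return (1 - math.exp(-lam * y)) ** n

L = -math.log(1 - alpha ** (1 / n)) / lam
U = -math.log(1 - (1 - alpha) ** (1 / n)) / lam

# L and U each cut off probability alpha = 0.05, so Y lies in (L, U) with probability 0.9
print(G(L), 1 - G(U))

# the width U - L is close to the n-independent limit found in part 4
limit = (math.log(math.log(1 / alpha)) - math.log(math.log(1 / (1 - alpha)))) / lam
print(U - L, limit)  # both close to 4.067, consistent with the width of about 4/lambda in part 5
```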

2023 Paper 2 Q12

Each of the independent random variables \(X_1, X_2, \ldots, X_n\) has the probability density function \(\mathrm{f}(x) = \frac{1}{2}\sin x\) for \(0 \leqslant x \leqslant \pi\) (and zero otherwise). Let \(Y\) be the random variable whose value is the maximum of the values of \(X_1, X_2, \ldots, X_n\).

  1. Explain why \(\mathrm{P}(Y \leqslant t) = \big[\mathrm{P}(X_1 \leqslant t)\big]^n\) and hence, or otherwise, find the probability density function of \(Y\).
Let \(m(n)\) be the median of \(Y\) and \(\mu(n)\) be the mean of \(Y\).
  2. Find an expression for \(m(n)\) in terms of \(n\). How does \(m(n)\) change as \(n\) increases?
  3. Show that \[\mu(n) = \pi - \frac{1}{2^n}\int_0^{\pi} (1-\cos x)^n\,\mathrm{d}x\,.\]
    1. Show that \(\mu(n)\) increases with \(n\).
    2. Show that \(\mu(2) < m(2)\).

2022 Paper 2 Q12

The random variable \(X\) has probability density function \[\mathrm{f}(x) = \begin{cases} kx^n(1-x) & 0 \leqslant x \leqslant 1\,,\\ 0 & \text{otherwise}\,,\end{cases}\] where \(n\) is an integer greater than 1.

  1. Show that \(k = (n+1)(n+2)\) and find \(\mu\), where \(\mu = \mathrm{E}(X)\).
  2. Show that \(\mu\) is less than the median of \(X\) if \[6 - \frac{8}{n+3} < \left(1 + \frac{2}{n+1}\right)^{n+1}.\] By considering the first four terms of the expansion of the right-hand side of this inequality, or otherwise, show that the median of \(X\) is greater than \(\mu\).
  3. You are given that, for positive \(x\), \(\left(1 + \dfrac{1}{x}\right)^{x+1}\) is a decreasing function of \(x\). Show that the mode of \(X\) is greater than its median.


Solution:

  1. \(\,\) \begin{align*} && 1 &= \int_0^1 kx^n(1-x) \d x \\ &&&= k\left [\frac{x^{n+1}}{n+1} - \frac{x^{n+2}}{n+2} \right]_0^1 \\ &&&= \frac{k}{(n+1)(n+2)} \\ \Rightarrow && k &= (n+1)(n+2) \\ \\ && \E[X] &= \int_0^1 kx^{n+1}(1-x) \d x \\ &&&= k\left [ \frac{x^{n+2}}{n+2} - \frac{x^{n+3}}{n+3} \right]_0^1 \\ &&&= \frac{(n+1)(n+2)}{(n+2)(n+3)} \\ &&&= \frac{n+1}{n+3} \end{align*}
  2. Since the cdf is increasing, \(\mu\) is less than the median exactly when \begin{align*} && \frac12 &> \int_0^{\mu} kx^{n}(1-x) \d x \\ &&&= k\left [\frac{x^{n+1}}{n+1} - \frac{x^{n+2}}{n+2} \right]_0^\mu \\ &&&= \mu^{n+1}\left ((n+2) - (n+1) \frac{n+1}{n+3} \right) \\ \Rightarrow && \left ( 1 + \frac{2}{n+1} \right)^{n+1} &> 2 \frac{(n+2)(n+3) - (n+1)^2}{n+3} \\ &&&= \frac{6n+10}{n+3} = 6 - \frac{8}{n+3} \end{align*} Since every term of the binomial expansion is positive, keeping only the first four terms gives \begin{align*} && \left ( 1 + \frac{2}{n+1} \right)^{n+1} &\geq 1 + (n+1) \frac{2}{n+1} + \frac{(n+1)n}{2} \frac{4}{(n+1)^2} + \frac{(n+1)n(n-1)}{6} \frac{8}{(n+1)^3} \\ &&&= 3 + \frac{2n}{n+1} + \frac{4n(n-1)}{3(n+1)^2} \\ &&&= 5 - \frac{2}{n+1} + \frac{4((n+1)^2-3(n+1)+2)}{3(n+1)^2} \\ &&&= 6 + \frac13 - \frac{6}{n+1} + \frac{8}{3(n+1)^2} \end{align*} and \[ 6 + \frac13 - \frac{6}{n+1} + \frac{8}{3(n+1)^2} - \left ( 6 - \frac{8}{n+3} \right) = \frac{n^3 + 11n^2 - 9n - 3}{3(n+1)^2(n+3)} > 0 \] for \(n \geq 2\), so the inequality holds and the median of \(X\) is greater than \(\mu\), as required.
  3. To find the mode, we need the maximum of \(x^n(1-x)\), which occurs when \(nx^{n-1}(1-x) - x^n = x^{n-1}(n-(n+1)x) = 0\), i.e. where \(x = \frac{n}{n+1}\). Therefore we need to look at: \begin{align*} && \mathbb{P}(X \leq \tfrac{n}{n+1}) &= \int_0^{n/(n+1)} k x^n(1-x) \d x \\ &&&= \frac{n^{n+1}}{(n+1)^{n+1}} \left ( (n+2) - (n+1) \frac{n}{n+1} \right) \\ &&&= 2\frac{n^{n+1}}{(n+1)^{n+1}} \\ &&&= 2 \left ( \frac{1}{1+\frac1n} \right)^{n+1} \\ &&&> 2 \cdot \frac{1}{(1 + \frac11)^2} = \frac12 \end{align*} since \(\left (1 + \frac1n \right)^{n+1}\) is strictly decreasing and \(n > 1\). Hence the mode is greater than the median, as required.
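
The ordering mean \(<\) median \(<\) mode established above can be sanity-checked numerically, using the cdf \(F(x) = x^{n+1}\big((n+2)-(n+1)x\big)\) computed in part 2. A minimal sketch (the range of \(n\) tested is an arbitrary choice):

```python
def F(x, n):
    # cdf of X: the integral of (n+1)(n+2) t^n (1-t) from 0 to x
    return x ** (n + 1) * ((n + 2) - (n + 1) * x)

def median(n):
    # bisection on F(m, n) = 1/2
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if F(mid, n) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in range(2, 20):
    mean = (n + 1) / (n + 3)
    mode = n / (n + 1)
    assert mean < median(n) < mode
```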

2022 Paper 3 Q12

  1. The point \(A\) lies on the circumference of a circle of radius \(a\) and centre \(O\). The point \(B\) is chosen at random on the circumference, so that the angle \(AOB\) has a uniform distribution on \([0, 2\pi]\). Find the expected length of the chord \(AB\).
  2. The point \(C\) is chosen at random in the interior of a circle of radius \(a\) and centre \(O\), so that the probability that it lies in any given region is proportional to the area of the region. The random variable \(R\) is defined as the distance between \(C\) and \(O\). Find the probability density function of \(R\). Obtain a formula in terms of \(a\), \(R\) and \(t\) for the length of a chord through \(C\) that makes an acute angle of \(t\) with \(OC\). Show that as \(C\) varies (with \(t\) fixed), the expected length \(\mathrm{L}(t)\) of such chords is given by \[ \mathrm{L}(t) = \frac{4a(1-\cos^3 t)}{3\sin^2 t}\,. \] Show further that \[ \mathrm{L}(t) = \frac{4a}{3}\left(\cos t + \tfrac{1}{2}\sec^2(\tfrac{1}{2}t)\right). \]
  3. The random variable \(T\) is uniformly distributed on \([0, \frac{1}{2}\pi]\). Find the expected value of \(\mathrm{L}(T)\).

2021 Paper 3 Q11

The continuous random variable \(X\) has probability density function \[ f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{for } x \geqslant 0, \\ 0 & \text{otherwise,} \end{cases} \] where \(\lambda\) is a positive constant. The random variable \(Y\) is the greatest integer less than or equal to \(X\), and \(Z = X - Y\).

  1. Show that, for any non-negative integer \(n\), \[ \mathrm{P}(Y = n) = (1 - e^{-\lambda})\,e^{-n\lambda}. \]
  2. Show that \[ \mathrm{P}(Z < z) = \frac{1 - e^{-\lambda z}}{1 - e^{-\lambda}} \qquad \text{for } 0 \leqslant z \leqslant 1. \]
  3. Evaluate \(\mathrm{E}(Z)\).
  4. Obtain an expression for \[ \mathrm{P}(Y = n \text{ and } z_1 < Z < z_2), \] where \(0 \leqslant z_1 < z_2 \leqslant 1\) and \(n\) is a non-negative integer. Determine whether \(Y\) and \(Z\) are independent.


Solution:

  1. \(\,\) \begin{align*} && \mathbb{P}(Y = n) &= \mathbb{P}(X \in [n, n+1)) \\ &&&= \int_n^{n+1} \lambda e^{-\lambda x} \d x \\ &&&= \left [-e^{-\lambda x} \right]_n^{n+1} \\ &&&= e^{-\lambda n} - e^{-\lambda(n+1)} \\ &&&= e^{-\lambda n}(1- e^{-\lambda}) \end{align*}
  2. \(\,\) \begin{align*} && \mathbb{P}(Z < z) &= \sum_{n=0}^{\infty} \mathbb{P}(X \in [n, n+z)) \\ &&&= \sum_{n=0}^{\infty} \int_{n}^{n+z} \lambda e^{-\lambda x} \d x \\ &&&= \sum_{n=0}^{\infty} [-e^{-\lambda x}]_{n}^{n+z} \\ &&&= \sum_{n=0}^{\infty} (1-e^{-\lambda z})e^{-\lambda n} \\ &&&= \frac{1-e^{-\lambda z}}{1-e^{-\lambda}} \end{align*} summing the geometric series with common ratio \(e^{-\lambda}\).
  3. Given the cdf of \(Z\), we see that \(f_Z(z) = \frac{\lambda e^{-\lambda z}}{1-e^{-\lambda}}\) so \begin{align*} && \E[Z] &= \int_0^1 z \frac{\lambda e^{-\lambda z}}{1-e^{-\lambda}} \d z \\ &&&= \frac{\lambda}{1-e^{-\lambda}} \int_0^1 ze^{-\lambda z} \d z \\ &&&= \frac{\lambda}{1-e^{-\lambda}} \left ( \left [-\frac{1}{\lambda} ze^{-\lambda z} \right]_0^1+\int_0^1 \frac{1}{\lambda} e^{-\lambda z} \d z \right) \\ &&&= \frac{\lambda}{1-e^{-\lambda}} \left ( -\frac{e^{-\lambda}}{\lambda} + \frac{1-e^{-\lambda}}{\lambda^2} \right) \\ &&&= \frac{1-e^{-\lambda}(1+\lambda)}{\lambda (1-e^{-\lambda})} \end{align*}
  4. \(\,\) \begin{align*} && \mathbb{P}(Y = n \text{ and }z_1 < Z < z_2)&= \mathbb{P}(X \in (n+z_1, n+z_2) ) \\ &&&= \int_{n+z_1}^{n+z_2} \lambda e^{-\lambda x} \d x \\ &&&= e^{-n\lambda}(e^{-\lambda z_1} - e^{-\lambda z_2}) \end{align*} Note that \(\mathbb{P}(z_1 < Z < z_2) = \mathbb{P}( Z < z_2) -\mathbb{P}(Z< z_1) =\frac{e^{-\lambda z_1} - e^{-\lambda z_2}}{1-e^{-\lambda}}\) Therefore \begin{align*} && \mathbb{P}(Y = n \text{ and }z_1 < Z < z_2) &= e^{-n\lambda}(e^{-\lambda z_1} - e^{-\lambda z_2}) \\ &&&= e^{-\lambda n}(1-e^{-\lambda}) \frac{e^{-\lambda z_1} - e^{-\lambda z_2}}{1-e^{-\lambda}} \\ &&&= \mathbb{P}(Y=n) \mathbb{P}(z_1 < Z < z_2) \end{align*} So they are independent, which is to be expected from the memorylessness property of the exponential distribution.
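
The value of \(\E(Z)\) found in part 3 can be checked by integrating \((x - \lfloor x \rfloor) f(x)\) directly. A minimal numerical sketch (the rate \(\lambda = 0.7\) and the midpoint-rule settings are arbitrary choices):

```python
import math

lam = 0.7  # illustrative rate, not from the question

# E(Z) = E(X - floor(X)), by the midpoint rule on [0, 60] (the tail beyond 60 is negligible)
N, T = 600_000, 60.0
h = T / N
total = 0.0
for i in range(N):
    x = (i + 0.5) * h
    total += (x - math.floor(x)) * lam * math.exp(-lam * x) * h

# closed form from part 3
closed = (1 - math.exp(-lam) * (1 + lam)) / (lam * (1 - math.exp(-lam)))
print(total, closed)
```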

2020 Paper 3 Q11

The continuous random variable \(X\) is uniformly distributed on \([a,b]\) where \(0 < a < b\).

  1. Let \(\mathrm{f}\) be a function defined for all \(x \in [a,b]\)
    • with \(\mathrm{f}(a) = b\) and \(\mathrm{f}(b) = a\),
    • which is strictly decreasing on \([a,b]\),
    • for which \(\mathrm{f}(x) = \mathrm{f}^{-1}(x)\) for all \(x \in [a,b]\).
    The random variable \(Y\) is defined by \(Y = \mathrm{f}(X)\). Show that \[ \mathrm{P}(Y \leqslant y) = \frac{b - \mathrm{f}(y)}{b - a} \quad \text{for } y \in [a,b]. \] Find the probability density function for \(Y\) and hence show that \[ \mathrm{E}(Y^2) = -ab + \int_a^b \frac{2x\,\mathrm{f}(x)}{b-a} \; \mathrm{d}x. \]
  2. The random variable \(Z\) is defined by \(\dfrac{1}{Z} + \dfrac{1}{X} = \dfrac{1}{c}\) where \(\dfrac{1}{c} = \dfrac{1}{a} + \dfrac{1}{b}\). By finding the variance of \(Z\), show that \[ \ln\left(\frac{b-c}{a-c}\right) < \frac{b-a}{c}. \]

2019 Paper 2 Q12

The random variable \(X\) has the probability density function on the interval \([0, 1]\): $$f(x) = \begin{cases} nx^{n-1} & 0 \leq x \leq 1, \\ 0 & \text{elsewhere}, \end{cases}$$ where \(n\) is an integer greater than 1.

  1. Let \(\mu = E(X)\). Find an expression for \(\mu\) in terms of \(n\), and show that the variance, \(\sigma^2\), of \(X\) is given by $$\sigma^2 = \frac{n}{(n + 1)^2(n + 2)}.$$
  2. In the case \(n = 2\), show without using decimal approximations that the interquartile range is less than \(2\sigma\).
  3. Write down the first three terms and the \((k + 1)\)th term (where \(0 \leq k \leq n\)) of the binomial expansion of \((1 + x)^n\) in ascending powers of \(x\). By setting \(x = \frac{1}{n}\), show that \(\mu\) is less than the median and greater than the lower quartile. Note: You may assume that $$1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots < 4.$$


Solution:

  1. \(\,\) \begin{align*} && \mu &= \E[X] \\ &&&= \int_0^1 x f(x) \d x \\ &&&= \int_0^1 nx^n \d x \\ &&&= \frac{n}{n+1} \\ \\ && \var[X] &= \sigma^2 \\ &&&= \E[X^2] - \mu^2 \\ &&&= \int_0^1 x^2 f(x) \d x - \mu^2 \\ &&&= \int_0^1 nx^{n+1} \d x - \mu^2 \\ &&&= \frac{n}{n+2} - \frac{n^2}{(n+1)^2} \\ &&&= \frac{n(n+1)^2 - n^2(n+2)}{(n+1)^2(n+2)} \\ &&&= \frac{n}{(n+1)^2(n+2)} \end{align*}
  2. \(\,\) \begin{align*} && \frac14 &= \int_0^{Q_1} 2x \d x \\ &&&= Q_1^2 \\ \Rightarrow && Q_1 &= \frac12 \\ && \frac34 &= \int_0^{Q_3} 2x \d x \\ &&&= Q_3^2 \\ \Rightarrow && Q_3 &= \frac{\sqrt{3}}2 \\ \\ \Rightarrow && IQR &= Q_3 - Q_1 = \frac{\sqrt{3}-1}{2} \\ && 2 \sigma &= 2\sqrt{\frac{2}{3^2 \cdot 4}} \\ &&&= \frac{\sqrt{2}}{3} \\ \\ && 2\sigma - IQR &= \frac{\sqrt{2}}{3} - \frac{\sqrt{3}-1}{2} \\ &&&= \frac{2\sqrt{2}-3\sqrt{3}+3}{6} \end{align*} Since \((3+2\sqrt{2})^2 = 17+12\sqrt{2} > 17 + 12 = 29 > 27 = (3\sqrt{3})^2\), we have \(3 + 2\sqrt{2} > 3\sqrt{3}\), so \(2\sigma - IQR > 0\) and therefore \(2\sigma > IQR\).
  3. \[ (1+x)^n = 1 + nx + \frac{n(n-1)}2 x^2 + \cdots + \binom{n}{k} x^k+ \cdots \] \begin{align*} && Q_1^{-n} &= 4 \\ && Q_2^{-n} &= 2\\ && \mu &=\frac{n}{n+1} \\ \Rightarrow && \mu^{-n} &= \left (1 + \frac1n \right)^n\\ &&&\geq 1 + n \frac1n + \cdots > 2 \\ \Rightarrow && \mu &< Q_2 \\ \\ && \mu^{-n} &= \left (1 + \frac1n \right)^n\\ &&&= 1 + n \frac1n + \frac{n(n-1)}{2!} \frac{1}{n^2} + \cdots + \frac{n(n-1) \cdots (n-k+1)}{k!} \frac{1}{n^k} + \cdots \\ &&&= 1 + 1 + \left (1 - \frac1n \right ) \frac1{2!} + \cdots + \left (1 - \frac1n \right)\cdot\left (1 - \frac2n \right) \cdots \left (1 - \frac{k-1}n \right) \frac{1}{k!} + \cdots \\ &&&< 1 + 1 + \frac1{2!} + \cdots + \frac1{k!} \\ &&&< 4 \\ \Rightarrow && \mu &> Q_1 \end{align*}
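
Parts 2 and 3 can be confirmed numerically from the cdf \(F(x) = x^n\), which gives \(Q_1 = 4^{-1/n}\) and median \(Q_2 = 2^{-1/n}\) directly. A minimal sketch (the range of \(n\) tested is an arbitrary choice):

```python
# Q1 < mu < median for f(x) = n x^(n-1), i.e. cdf F(x) = x^n
for n in range(2, 50):
    mu = n / (n + 1)
    q1 = 0.25 ** (1 / n)
    q2 = 0.5 ** (1 / n)
    assert q1 < mu < q2

# part 2: for n = 2 the interquartile range is less than 2*sigma
n = 2
sigma = (n / ((n + 1) ** 2 * (n + 2))) ** 0.5
iqr = 0.75 ** 0.5 - 0.25 ** 0.5  # Q3 - Q1 = sqrt(3)/2 - 1/2
assert iqr < 2 * sigma
```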

2018 Paper 3 Q12

A random process generates, independently, \(n\) numbers each of which is drawn from a uniform (rectangular) distribution on the interval 0 to 1. The random variable \(Y_k\) is defined to be the \(k\)th smallest number (so there are \(k-1\) smaller numbers).

  1. Show that, for \(0\le y\le1\,\), \[ {\rm P}\big(Y_k\le y) =\sum^{n}_{m=k}\binom{n}{m}y^{m}\left(1-y\right)^{n-m} . \tag{\(*\)} \]
  2. Show that \[ m\binom n m = n \binom {n-1}{m-1} \] and obtain a similar expression for \(\displaystyle (n-m) \, \binom n m\,\). Starting from \((*)\), show that the probability density function of \(Y_k\) is \[ n\binom{ n-1}{k-1} y^{k-1}\left(1-y\right)^{ n-k} \,.\] Deduce an expression for \(\displaystyle \int_0^1 y^{k-1}(1-y)^{n-k} \, \d y \,\).
  3. Find \(\E(Y_k) \) in terms of \(n\) and \(k\).


Solution:

  1. \begin{align*} && \mathbb{P}(Y_k \leq y) &= \sum_{m=k}^n\mathbb{P}(\text{exactly }m \text{ values less than }y) \\ &&&= \sum_{m=k}^n \binom{n}{m} y^m(1-y)^{n-m} \end{align*}
  2. Both sides count the number of ways to choose, from \(n\) people, a committee of \(m\) people with a distinguished chair. First: choose the committee in \(\binom{n}{m}\) ways and then the chair in \(m\) ways, giving \(m \binom{n}{m}\). Alternatively: choose the chair in \(n\) ways and then the remaining \(m-1\) committee members in \(\binom{n-1}{m-1}\) ways. Therefore \(m \binom{n}{m} = n \binom{n-1}{m-1}\) \begin{align*} (n-m) \binom{n}{m} &= (n-m) \binom{n}{n-m} \\ &= n \binom{n-1}{n-m-1} \\ &= n \binom{n-1}{m} \end{align*} \begin{align*} f_{Y_k}(y) &= \frac{\d }{\d y} \l \sum^{n}_{m=k}\binom{n}{m}y^{m}\left(1-y\right)^{n-m} \r \\ &= \sum^{n}_{m=k} \l \binom{n}{m}my^{m-1}\left(1-y\right)^{n-m} -\binom{n}{m}(n-m)y^{m}\left(1-y\right)^{n-m-1} \r \\ &= \sum^{n}_{m=k} \l n \binom{n-1}{m-1}y^{m-1}\left(1-y\right)^{n-m} -n \binom{n-1}{m} y^{m}\left(1-y\right)^{n-m-1} \r \\ &= n\sum^{n}_{m=k} \binom{n-1}{m-1}y^{m-1}\left(1-y\right)^{n-m} -n\sum^{n+1}_{m=k+1} \binom{n-1}{m-1} y^{m-1}\left(1-y\right)^{n-m} \\ &= n \binom{n-1}{k-1} y^{k-1}(1-y)^{n-k} \end{align*} \begin{align*} &&1 &= \int_0^1 f_{Y_k}(y) \d y \\ &&&= \int_0^1 n \binom{n-1}{k-1} y^{k-1}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1} \int_0^1 y^{k-1}(1-y)^{n-k} \d y \\ \Rightarrow && \frac{1}{n \binom{n-1}{k-1}} &= \int_0^1 y^{k-1}(1-y)^{n-k} \d y \\ \end{align*}
  3. \begin{align*} && \mathbb{E}(Y_k) &= \int_0^1 y f_{Y_k}(y) \d y \\ &&&= \int_0^1 n \binom{n-1}{k-1} y^{k}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1}\int_0^1 y^{k}(1-y)^{n-k} \d y \\ &&&= n \binom{n-1}{k-1}\int_0^1 y^{k+1-1}(1-y)^{n+1-(k+1)} \d y \\ &&&= n \binom{n-1}{k-1} \frac{1}{(n+1) \binom{n}{k}}\\ &&&= \frac{n}{n+1} \cdot \frac{k}{n} \\ &&&= \frac{k}{n+1} \end{align*}
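
The deduced integral and the value of \(\E(Y_k)\) can both be checked: the integral against the standard Beta-function value \(\int_0^1 y^{k-1}(1-y)^{n-k} \d y = \frac{(k-1)!\,(n-k)!}{n!}\), and the expectation by simulation. A minimal sketch (the ranges, the seed and the sample size are arbitrary choices):

```python
import math
import random

# exact check: (k-1)!(n-k)!/n! equals 1/(n * C(n-1, k-1))
for n in range(1, 12):
    for k in range(1, n + 1):
        lhs = math.factorial(k - 1) * math.factorial(n - k) / math.factorial(n)
        assert math.isclose(lhs, 1 / (n * math.comb(n - 1, k - 1)))

# E(Y_k) = k/(n+1): average the k-th smallest of n uniforms over many trials
random.seed(0)
n, k, trials = 5, 2, 200_000
est = sum(sorted(random.random() for _ in range(n))[k - 1] for _ in range(trials)) / trials
print(est, k / (n + 1))  # estimate should be close to 1/3
```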

2017 Paper 3 Q13

The random variable \(X\) has mean \(\mu\) and variance \(\sigma^2\), and the function \({\rm V}\) is defined, for \(-\infty < x < \infty\), by \[ {\rm V}(x) = \E \big( (X-x)^2\big) . \] Express \({\rm V}(x)\) in terms of \(x\), \( \mu\) and \(\sigma\). The random variable \(Y\) is defined by \(Y={\rm V}(X)\). Show that \[ \E(Y) = 2 \sigma^2 \tag{\(*\)} \] Now suppose that \(X\) is uniformly distributed on the interval \(0\le x \le1\,\). Find \({\rm V}(x)\,\). Find also the probability density function of \(Y\!\) and use it to verify that \((*)\) holds in this case.


Solution: \begin{align*} {\rm V}(x) &= \E \big( (X-x)^2\big) \\ &= \E \l X^2 - 2xX + x^2\r \\ &= \E [ X^2 ]- 2x\E[X] + x^2 \\ &= \sigma^2+\mu^2 - 2x\mu + x^2 \\ &= \sigma^2 + (\mu - x)^2 \end{align*} \begin{align*} \E[Y] &= \E[\sigma^2 + (\mu - X)^2] \\ &= \sigma^2 + \E[(\mu - X)^2]\\ &= \sigma^2 + \sigma^2 \\ &= 2\sigma^2 \end{align*} If \(X \sim U(0,1)\) then \(V(x) = \frac{1}{12} + (\frac12 - x)^2\). \begin{align*} \P(Y \leq y) &= \P(\frac1{12} + (\frac12 - X)^2 \leq y) \\ &= \P((\frac12 -X)^2 \leq y - \frac1{12}) \\ &= \P(|\frac12 -X| \leq \sqrt{y - \frac1{12}}) \\ &= \begin{cases} 1 & \text{if } y - \frac1{12} > \frac14 \\ 2 \sqrt{y - \frac1{12}} & \text{if } 0 < y - \frac1{12} \leq \frac14 \\ 0 & \text{otherwise} \end{cases} \\ &= \begin{cases} 1 & \text{if } y> \frac13 \\ \sqrt{4y - \frac1{3}} & \text{if } \frac1{12} < y \leq \frac13 \\ 0 & \text{otherwise} \end{cases} \end{align*} Therefore $f_Y(y) = \begin{cases} \frac{2}{\sqrt{4y-\frac{1}{3}}} & \text{if } \frac1{12} < y < \frac13 \\ 0 & \text{otherwise} \end{cases}$ \begin{align*} \E[Y] &= \int_{1/12}^{1/3} \frac{2y}{\sqrt{4y-\frac13}} \, dy \\ &= 2\int_{u = 0}^{u=1} \frac{\frac{1}{4}u +\frac1{12}}{\sqrt{u}} \,\frac{1}{4} du \tag{\(u = 4y - \frac13, \frac{du}{dy} = 4\)}\\ &= \frac{1}{2 \cdot 12}\int_{u = 0}^{u=1} 3\sqrt{u} +\frac{1}{\sqrt{u}} \, du \\ &= \frac{1}{2 \cdot 12} \left [2 u^{3/2} + 2u^{1/2} \right ]_0^1 \\ &= \frac{1}{2 \cdot 12} \cdot 4 \\ &= \frac{2}{12} = 2\sigma^2 \end{align*} as required, since \(\sigma^2 = \frac1{12}\).
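
The uniform case admits a quick numerical check: \({\rm V}(x) = \frac1{12} + (\frac12 - x)^2\), and \(\E(Y)\) should come out as \(2\sigma^2 = \frac16\). A minimal midpoint-rule sketch (the grid size is an arbitrary choice):

```python
mu, var = 0.5, 1 / 12  # mean and variance of U(0, 1)

def V(x):
    return var + (mu - x) ** 2

# E(Y) = E(V(X)) by the midpoint rule on [0, 1]
N = 100_000
h = 1.0 / N
ey = sum(V((i + 0.5) * h) * h for i in range(N))
print(ey, 2 * var)  # both close to 1/6
```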

2016 Paper 1 Q13

An internet tester sends \(n\) e-mails simultaneously at time \(t=0\). Their arrival times at their destinations are independent random variables each having probability density function \(\lambda \e^{-\lambda t}\) (\(0\le t<\infty\), \( \lambda >0\)).

  1. The random variable \(T\) is the time of arrival of the e-mail that arrives first at its destination. Show that the probability density function of \(T\) is \[ n \lambda \e^{-n\lambda t}\,,\] and find the expected value of \(T\).
  2. Write down the probability that the second e-mail to arrive at its destination arrives later than time \(t\) and hence derive the density function for the time of arrival of the second e-mail. Show that the expected time of arrival of the second e-mail is \[ \frac{1}{\lambda} \left( \frac1{n-1} + \frac 1 n \right) \]


Solution:

  1. \(\,\) \begin{align*} && \mathbb{P}(T > t) &= \mathbb{P}(\text{all emails slower than }t) \\ &&&= \left ( \int_t^{\infty} \lambda e^{-\lambda x} \d x \right)^n \\ &&&= \left ( [- e^{-\lambda x}]_t^\infty\right)^n\\ &&&= e^{-n\lambda t} \\ \Rightarrow && f_T(t) &= n \lambda e^{-n\lambda t} \\ \end{align*} Therefore \(T \sim \text{Exp}(n \lambda)\) and \(\E[T] = \frac{1}{n \lambda}\)
  2. Let \(T_2\) be the time until the second email arrives. Then \begin{align*} && \P(T_2 > t) &= \P(\text{all emails} > t) + \P(\text{all but 1 email} > t) \\ &&&= e^{-n\lambda t} + n \cdot e^{-(n-1)\lambda t}(1-e^{-\lambda t}) \\ &&&= (1-n)e^{-n\lambda t} + n \cdot e^{-(n-1)\lambda t} \\ \Rightarrow && f_{T_2}(t) &= - \left ( (1-n) n \lambda e^{-n \lambda t} -n(n-1)\lambda e^{-(n-1)\lambda t} \right) \\ &&&= n(n-1) \lambda \left (e^{-(n-1)\lambda t} - e^{-n\lambda t} \right) \\ \Rightarrow && \E[T_2] &= \int_0^{\infty} t \cdot n(n-1) \lambda \left (e^{-(n-1)\lambda t} - e^{-n\lambda t} \right) \d t \\ &&&= \int_0^{\infty} \left (n \cdot t (n-1) \lambda e^{-(n-1)\lambda t} -(n-1)\cdot tn \lambda e^{-n\lambda t} \right) \d t \\ &&&= \frac{n}{\lambda(n-1)} - \frac{n-1}{\lambda n} \\ &&&= \frac{1}{\lambda} \left (1+\frac{1}{n-1}- \left (1 - \frac{1}{n} \right) \right) \\ &&&= \frac{1}{\lambda} \left ( \frac{1}{n-1} + \frac{1}{n} \right) \end{align*} (We can also view this as the expected time \(\frac{1}{n\lambda}\) until the first arrival, plus the expected further wait \(\frac{1}{(n-1)\lambda}\) until the first of the remaining \(n-1\) emails arrives: by the memorylessness property of the exponential distribution, the remaining arrival times restart afresh.)
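
Both expectations can be checked by simulating the \(n\) arrival times. A minimal sketch (the values \(\lambda = 2\), \(n = 5\), the seed and the number of trials are arbitrary choices):

```python
import random

random.seed(1)
lam, n, trials = 2.0, 5, 200_000
first = second = 0.0
for _ in range(trials):
    times = sorted(random.expovariate(lam) for _ in range(n))
    first += times[0]   # arrival time of the first email
    second += times[1]  # arrival time of the second email

print(first / trials, 1 / (n * lam))                 # close to 0.1
print(second / trials, (1 / (n - 1) + 1 / n) / lam)  # close to 0.225
```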

2015 Paper 3 Q13

Each of the two independent random variables \(X\) and \(Y\) is uniformly distributed on the interval~\([0,1]\).

  1. By considering the lines \(x+y =\) \(\mathrm{constant}\) in the \(x\)-\(y\) plane, find the cumulative distribution function of \(X+Y\).
  2. Hence show that the probability density function \(f\) of \((X+Y)^{-1}\) is given by \[ \f(t) = \begin{cases} 2t^{-2} -t^{-3} & \text{for \( \tfrac12 \le t \le 1\)} \\ t^{-3} & \text{for \(1\le t <\infty\)}\\ 0 & \text{otherwise}. \end{cases} \] Evaluate \(\E\Big(\dfrac1{X+Y}\Big)\,\).
  3. Find the cumulative distribution function of \(Y/X\) and use this result to find the probability density function of \(\dfrac X {X+Y}\). Write down \(\E\Big( \dfrac X {X+Y}\Big)\) and verify your result by integration.


Solution:

  1. \(\mathbb{P}(X + Y \leq c) \) is the area between the \(x\)-axis, \(y\)-axis and the line \(x + y = c\). There are two cases for this: \[\mathbb{P}(X + Y \leq c) = \begin{cases} 0 & \text{ if } c \leq 0 \\ \frac{c^2}{2} & \text{ if } 0 \leq c \leq 1 \\ 1- \frac{(2-c)^2}{2} & \text{ if } 1 \leq c \leq 2 \\ 1 & \text{ otherwise} \end{cases}\]
  2. \begin{align*} && \mathbb{P}((X + Y)^{-1} \leq t) &= 1- \mathbb{P}(X + Y \leq \frac1{t}) \\ \Rightarrow && f_{(X+Y)^{-1}}(t) &= 0 -\begin{cases} 0 & \text{ if } \frac1{t} \leq 0 \\ \frac{\d}{\d t}\frac{1}{2t^2} & \text{ if } \frac{1}{t} \leq 1 \\ \frac{\d}{\d t} \l 1- \frac{(2-\frac1t)^2}{2} \r & \text{ if } 1 \leq \frac{1}{t} \leq 2 \\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ (2-\frac1t)t^{-2} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \\ && &= \begin{cases} t^{-3} & \text{ if } t \geq 1 \\ 2t^{-2}-t^{-3} & \text{ if } \frac12 \leq t \leq 1\\ 0 & \text{ otherwise}\end{cases} \end{align*} Therefore, \begin{align*} \E \Big(\dfrac1{X+Y}\Big) &= \int_{\frac12}^{\infty} t f_{(X+Y)^{-1}}(t) \, \d t \\ &= \int_{\frac12}^{1} t f_{(X+Y)^{-1}}(t) \, \d t + \int_{1}^{\infty} t f_{(X+Y)^{-1}}(t) \d t\\ &= \int_{\frac12}^{1} \l 2t^{-1} - t^{-2} \r \, \d t + \int_{1}^{\infty} t^{-2} \d t\\ &= \left [ 2 \ln (t) + t^{-1} \right]_{\frac12}^{1} + \left [ -t^{-1} \right ]_{1}^{\infty} \\ &= 1 + 2 \ln 2 -2 + 1 \\ &= 2 \ln 2 \end{align*}
  3. \begin{align*} &&\mathbb{P} \l \frac{Y}{X} \leq c \r &= \mathbb{P}( Y \leq c X) \\ &&&= \begin{cases} 0 & \text{if } c \leq 0 \\ \frac{c}{2} & \text{if } 0 \leq c \leq 1 \\ 1-\frac{1}{2c} & \text{if } 1 \leq c \end{cases} \\ \\ \Rightarrow && \mathbb{P} \l \frac{X}{X+Y} \leq t\r &= \mathbb{P} \l \frac{1}{1+\frac{Y}{X}} \leq t\r \\ &&&= \mathbb{P} \l \frac{1}{t} \leq 1+\frac{Y}{X}\r \\ &&&= \mathbb{P} \l \frac{1}{t} - 1\leq \frac{Y}{X}\r \\ &&&= 1- \mathbb{P} \l \frac{Y}{X} \leq \frac{1}{t} - 1\r \\ &&&= 1 - \begin{cases} 0 & \text{if } t \geq 1 \\ \frac{1}{2t} - \frac{1}{2} & \text{if } \frac12 \leq t \leq 1 \\ 1-\frac{t}{2(1-t)} & \text{if } 0 < t \leq \frac12 \end{cases} \\ && f_{\frac{X}{X+Y}}(t) &= \begin{cases} \frac{1}{2(1-t)^2} & \text{if } 0 \leq t \leq \frac12 \\ \frac{1}{2t^2} & \text{if } \frac12 \leq t \leq 1 \\ 0 & \text{otherwise} \end{cases} \\ \Rightarrow && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 t f(t) \d t \\ &&&= \int_0^{\frac12} \frac{t}{2(1-t)^2} \d t + \int_{\frac12}^1 \frac{1}{2t} \d t \\ &&&= \l \frac12 - \frac{\ln 2}{2} \r + \frac{\ln 2}{2} \\ &&& = \frac{1}{2} \end{align*} (For the first integral, substitute \(u = 1-t\): \(\int_{\frac12}^1 \frac{1-u}{2u^2} \d u = \frac12 \left [ -\frac1u - \ln u \right]_{\frac12}^1 = \frac12 - \frac{\ln 2}{2}\).) We can verify this directly: \begin{align*} && \mathbb{E} \l \frac{X}{X+Y} \r &= \int_0^1 \int_0^1 \frac{x}{x+y} \d y\d x \\ &&&= \int_0^1 \l x \ln (x+1) - x \ln x \r \d x \\ &&&= \left [\frac{x^2}2 \ln(x+1) - \frac{x^2}{2} \ln(x) \right]_0^1 -\int_0^1 \l \frac{x^2}{2(x+1)} - \frac{x}{2} \r \d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \frac{x^2-1+1}{2(x+1)}\d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} - \int_0^1 \l \frac{x -1}{2} + \frac{1}{2(x+1)} \r \d x \\ &&&= \frac{\ln 2}{2} + \frac{1}{4} + \frac{1}{4} - \frac{\ln 2}{2} \\ &&&= \frac{1}{2} \end{align*} We could also have written the answer down by symmetry: \(1 = \mathbb{E} \l \frac{X+Y}{X+Y} \r = \mathbb{E} \l \frac{X}{X+Y} \r + \mathbb{E} \l \frac{Y}{X+Y} \r = 2 \mathbb{E} \l \frac{X}{X+Y} \r\), since \(X\) and \(Y\) are identically distributed.
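
Both expectations in this question can be confirmed by integrating directly over the unit square. A minimal two-dimensional midpoint-rule sketch (the grid size is an arbitrary choice):

```python
import math

N = 400
h = 1.0 / N
e_inv = e_ratio = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        y = (j + 0.5) * h
        e_inv += h * h / (x + y)        # integrand for E(1/(X+Y))
        e_ratio += h * h * x / (x + y)  # integrand for E(X/(X+Y))

print(e_inv, 2 * math.log(2))  # close to 1.386
print(e_ratio)                 # close to 0.5 (the grid is symmetric, so essentially exact)
```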

2014 Paper 1 Q13

A continuous random variable \(X\) has a triangular distribution, which means that it has a probability density function of the form \[ \f(x) = \begin{cases} \g(x) & \text{for \(a< x \le c\)} \\ \h(x) & \text{for \(c\le x < b\)} \\ 0 & \text{otherwise,} \end{cases} \] where \(\g(x)\) is an increasing linear function with \(\g(a)=0\), \(\h(x)\) is a decreasing linear function with \(\h(b) =0\), and \(\g(c)=\h(c)\). Show that \(\g(x) = \dfrac{2(x-a)}{(b-a)(c-a)}\) and find a similar expression for \(\h(x)\).

  1. Show that the mean of the distribution is \(\frac13(a+b+c)\).
  2. Find the median of the distribution in the different cases that arise.


Solution: Since the graph of \(f(x)\) is a triangle with base \(b-a\) and \(\int f(x) \, dx = 1\), its height must be \(\frac{2}{b-a}\) in order to have the desired area. Since \(g\) is linear with \(g(a) = 0\), write \(g(x) = A(x-a)\); then \(g(c) = \frac{2}{b-a}\) gives \(\frac{2}{b-a} = A (c-a)\), so \(g(x) = \frac{2(x-a)}{(b-a)(c-a)}\) as required. Similarly, \(h(x) = B(x-b)\) with \(\frac{2}{b-a} = B(c-b)\), so \(h(x) = \frac{2(b-x)}{(b-a)(b-c)}\). The mean of the distribution will be: \begin{align*} \int_a^b xf(x) \, dx &= \int_a^c xg(x) \, dx + \int_c^b xh(x) \, dx \\ &= \frac{2}{(b-a)(c-a)} \int_a^c x(x-a) dx + \frac{2}{(b-a)(b-c)} \int_c^b x(b-x) \, dx \\ &= \frac{2}{(b-a)} \l \frac{1}{c-a} \left [ \frac{x^3}{3} - a\frac{x^2}{2} \right ]_a^c + \frac{1}{b-c} \left [ b\frac{x^2}{2} - \frac{x^3}{3} \right ]_c^b\r \\ &= \frac{2}{(b-a)} \l \frac{1}{c-a} \l \frac{c^3}{3} - a\frac{c^2}{2} - \frac{a^3}{3} + \frac{a^3}{2} \r + \frac{1}{b-c} \l \frac{b^3}{2} - \frac{b^3}{3} - \frac{bc^2}{2} + \frac{c^3}{3} \r \r \\ &= \frac{2}{(b-a)} \l \l \frac{c^2+ac+a^2}{3} - \frac{a(a+c)}{2} \r +\l \frac{b(b+c)}{2} - \frac{b^2+bc+c^2}{3} \r\r \\ &= \frac{2}{(b-a)} \l \frac{2c^2+2ac+2a^2}{6} - \frac{3a^2+3ac}{6} + \frac{3b^2+3bc}{6} - \frac{2b^2+2bc+2c^2}{6} \r \\ &= \frac{2}{(b-a)} \l \frac{-a^2+b^2-ac+bc}{6} \r \\ &= \frac{a+b+c}{3} \\ \end{align*} The median \(M\) satisfies: \begin{align*} && \int_a^M f(x) \, dx &= \frac12 \\ \end{align*} The left-hand triangle has area \(\frac{c-a}{b-a}\), which is \(\geq \frac12\) if \(c \geq \frac{a+b}{2}\). In this case we need \begin{align*} && \frac{(M-a)^2}{(b-a)(c-a)} &= \frac12 \\ \Rightarrow && M &= a + \sqrt{\frac12 (b-a)(c-a)} \end{align*} Otherwise, we need: \begin{align*} && \frac{(b-M)^2}{(b-a)(b-c)} &= \frac12 \\ \Rightarrow && M &= b - \sqrt{\frac12 (b-a)(b-c)} \end{align*} These agree when \(c = \frac{a+b}{2}\), both giving \(M = \frac{a+b}{2}\).
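
The mean and the two-case median formula can be checked on a concrete triangular distribution. A minimal sketch (the values \(a = 1\), \(b = 6\), \(c = 2\) are arbitrary choices; here \(c < \frac{a+b}{2}\), so the second median formula applies):

```python
import math

a, b, c = 1.0, 6.0, 2.0

def pdf(x):
    if x <= c:
        return 2 * (x - a) / ((b - a) * (c - a))
    return 2 * (b - x) / ((b - a) * (b - c))

def cdf(x):
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    return 1 - (b - x) ** 2 / ((b - a) * (b - c))

# median from the second case, since the left triangle has area (c-a)/(b-a) = 1/5 < 1/2
median = b - math.sqrt(0.5 * (b - a) * (b - c))
assert math.isclose(cdf(median), 0.5)

# mean = (a+b+c)/3, checked by the midpoint rule
N = 200_000
h = (b - a) / N
mean = sum((a + (i + 0.5) * h) * pdf(a + (i + 0.5) * h) * h for i in range(N))
print(mean, (a + b + c) / 3)  # both close to 3.0
```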

2014 Paper 2 Q12

The lifetime of a fly (measured in hours) is given by the continuous random variable \(T\) with probability density function \(f(t)\) and cumulative distribution function \(F(t)\). The hazard function, \(h(t)\), is defined, for \(F(t) < 1\), by \[ h(t) = \frac{f(t)}{1-F(t)}\,. \]

  1. Given that the fly lives to at least time \(t\), show that the probability of its dying within the following \(\delta t\) is approximately \(h (t) \, \delta t\) for small values of \(\delta t\).
  2. Find the hazard function in the case \(F(t) = t/a\) for \(0< t < a\). Sketch \(f(t)\) and \(h(t)\) in this case.
  3. The random variable \(T\) is distributed on the interval \(t > a\), where \(a>0\), and its hazard function is \(t^{-1}\). Determine the probability density function for \(T\).
  4. Show that \(h(t)\) is constant for \(t > b\) and zero otherwise if and only if \(f(t) =ke^{-k(t-b)}\) for \(t > b\), where \(k\) is a positive constant.
  5. The random variable \(T\) is distributed on the interval \(t > 0\) and its hazard function is given by \[ h(t) = \left(\frac{\lambda}{\theta^\lambda}\right)t^{\lambda-1}\,, \] where \(\lambda\) and \(\theta\) are positive constants. Find the probability density function for \(T\).


Solution:

  1. \(\,\) \begin{align*} && \mathbb{P}(T < t + \delta t \mid T > t) &= \frac{\mathbb{P}(t < T < t + \delta t)}{\mathbb{P}(T > t )} \\ &&&= \frac{\int_t^{t+\delta t} f(s) \d s}{1-F(t)} \\ &&&\approx \frac{f(t)\delta t}{1-F(t)} \\ &&&= h(t) \delta t \end{align*}
  2. If \(F(t) = t/a\) then \(f(t) = 1/a\) and \(h(t) = \frac{1/a}{1-t/a} = \frac{1}{a-t}\)
    [Figure: \(f(t) = \frac{1}{a}\) is constant on \((0,a)\), while \(h(t) = \frac{1}{a-t}\) starts at \(\frac{1}{a}\) and increases without bound as \(t \to a\).]
  3. \(\,\) \begin{align*} && \frac{F'}{1-F} &= \frac{1}{t} \\ \Rightarrow && -\ln (1-F) &= \ln t + C\\ \Rightarrow && 1-F &= \frac{A}{t} \\ && F &= 1 - \frac{A}{t} \\ F(a) = 0: && F &= 1 - \frac{a}{t} \\ && f(t) &= \frac{a}{t^2} \end{align*}
  4. (\(\Rightarrow\)) \begin{align*} && \frac{F'}{1-F} &= k \\ \Rightarrow && -\ln(1-F) &= kt+C \\ \Rightarrow && 1-F &= Ae^{-kt} \\ F(b) = 0: && 1 &= Ae^{-kb} \\ \Rightarrow && 1-F &= e^{-k(t-b)}\\ \Rightarrow && f &= ke^{-k(t-b)} \\ \end{align*} (\(\Leftarrow\)) \(f(t) = ke^{-k(t-b)} \Rightarrow F(t) = 1-e^{-k(t-b)}\) and the result is clear.
  5. \(\,\) \begin{align*} && \frac{F'}{1-F} &= \left ( \frac{\lambda}{\theta^{\lambda}} \right) t^{\lambda-1} \\ \Rightarrow && -\ln(1-F) &= \left ( \frac{t}{\theta} \right)^{\lambda} +C\\ \Rightarrow && F &= 1-A\exp \left (- \left ( \frac{t}{\theta} \right)^{\lambda} \right) \\ F(0) = 0: && 0 &= 1-A \\ \Rightarrow && F &= 1 - \exp \left (- \left ( \frac{t}{\theta} \right)^{\lambda} \right) \\ \Rightarrow && f &= \lambda t^{\lambda -1} \theta^{-\lambda} \exp \left (- \left ( \frac{t}{\theta} \right)^{\lambda} \right) \end{align*}
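
Each reconstructed density can be verified by checking that \(f/(1-F)\) reproduces the stated hazard. For part 5, a minimal sketch with the cdf \(F(t) = 1 - \exp\left(-(t/\theta)^\lambda\right)\) found above (the values \(\lambda = 2.5\), \(\theta = 1.7\) and the test points are arbitrary choices):

```python
import math

lam, theta = 2.5, 1.7

def F(t):
    return 1 - math.exp(-((t / theta) ** lam))

def f(t):
    return lam * t ** (lam - 1) * theta ** (-lam) * math.exp(-((t / theta) ** lam))

def h(t):
    return (lam / theta ** lam) * t ** (lam - 1)

# f/(1-F) recovers the given hazard at every test point
for t in [0.3, 1.0, 2.4, 5.0]:
    assert math.isclose(f(t) / (1 - F(t)), h(t))
```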

2014 Paper 3 Q12

The random variable \(X\) has probability density function \(f(x)\) (which you may assume is differentiable) and cumulative distribution function \(F(x)\) where \(-\infty < x < \infty \). The random variable \(Y\) is defined by \(Y= \e^X\). You may assume throughout this question that \(X\) and \(Y\) have unique modes.

  1. Find the median value \(y_m\) of \(Y\) in terms of the median value \(x_m\) of \(X\).
  2. Show that the probability density function of \(Y\) is \(f(\ln y)/y\), and deduce that the mode \(\lambda\) of \(Y\) satisfies \(\f'(\ln \lambda) = \f(\ln \lambda)\).
  3. Suppose now that \(X \sim {\rm N} (\mu,\sigma^2)\), so that \[ f(x) = \frac{1}{\sigma \sqrt{2\pi}\,} \e^{-(x-\mu)^2/(2\sigma^2)} \,. \] Explain why \[\frac{1}{\sigma \sqrt{2\pi}\,} \int_{-\infty}^{\infty}\e^{-(x-\mu-\sigma^2)^2/(2\sigma^2)} \d x = 1 \] and hence show that \( \E(Y) = \e ^{\mu+\frac12\sigma^2}\).
  4. Show that, when \(X \sim {\rm N} (\mu,\sigma^2)\), \[ \lambda < y_m < \E(Y)\,. \]


Solution:

  1. Since \(t \mapsto e^t\) is strictly increasing, \begin{align*} && \frac12 &= \mathbb{P}(X \leq x_m) \\ \Leftrightarrow && \frac12 &= \mathbb{P}(e^X \leq e^{x_m}) \end{align*} Therefore the median is \(y_m = e^{x_m}\)
  2. \begin{align*} && \mathbb{P}(Y \leq y) &= \mathbb{P}(e^X \leq y) \\ &&&= \mathbb{P}(X \leq \ln y) \\ &&&= F(\ln y) \\ \Rightarrow && f_Y(y) &= f(\ln y)/y \\ \\ && f'_Y(y) &= \frac{f'(\ln y) - f(\ln y)}{y^2} \end{align*} Therefore since the mode satisfies \(f'_Y = 0\) we must have \(f'(\ln \lambda ) = f(\ln \lambda)\)
  3. This is the integral of the pdf of a \(N(\mu + \sigma^2, \sigma^2)\) random variable, and is therefore equal to \(1\). Completing the square in the exponent, \begin{align*} && \E[Y] &= \int_{-\infty}^{\infty} e^x \cdot \frac{1}{\sigma \sqrt{2\pi}} \e^{-(x-\mu)^2/(2\sigma^2)} \d x \\ &&&= \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} \exp \left( \frac{2\sigma^2 x - (x-\mu)^2}{2\sigma^2} \right) \d x\\ &&&= \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} \exp \left( \frac{-(x-\mu-\sigma^2)^2 + 2\mu\sigma^2 + \sigma^4}{2\sigma^2} \right) \d x\\ &&&= \e^{\mu +\frac12\sigma^2} \cdot \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} \exp \left( -\frac{(x-\mu-\sigma^2)^2}{2\sigma^2} \right) \d x\\ &&&= \e^{\mu +\frac12\sigma^2} \end{align*}
  4. Since the median of \(X\) is \(x_m = \mu\), we have \(y_m = e^{x_m} = e^\mu < e^{\mu + \frac12 \sigma^2} = \E[Y]\), so it suffices to prove that \(\lambda < e^{\mu}\). Now \(f'(x) = -\frac{x-\mu}{\sigma^2}f(x)\), so the mode condition \(f'(\ln \lambda) = f(\ln \lambda)\) from part 2 gives \(-\frac{\ln \lambda - \mu}{\sigma^2} = 1\), that is \(\ln \lambda - \mu = -\sigma^2\). Hence \(\lambda = e^{\mu - \sigma^2}\), which is clearly less than \(e^{\mu}\), as required.
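A quick numerical check of parts 2 to 4 (my addition, not part of the exam answer): a Monte Carlo estimate of \(\E(Y)\) should agree with \(\e^{\mu + \frac12\sigma^2}\), and the mode, median and mean should be in the stated order. The values of \(\mu\) and \(\sigma\) are arbitrary test choices.

```python
import math
import random

# For X ~ N(mu, sigma^2) and Y = e^X, part 3 gives E(Y) = exp(mu + sigma^2/2)
# and part 4 gives mode exp(mu - sigma^2) < median exp(mu) < E(Y).
mu, sigma = 0.4, 0.8   # arbitrary illustrative values

def f_Y(y):
    # pdf of Y from part 2: f(ln y) / y, with f the N(mu, sigma^2) density
    x = math.log(y)
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi) * y)

# Monte Carlo estimate of E(Y)
random.seed(0)
n = 200_000
est = sum(math.exp(random.gauss(mu, sigma)) for _ in range(n)) / n
exact = math.exp(mu + sigma**2 / 2)

mode = math.exp(mu - sigma**2)
median = math.exp(mu)

print(est, exact)             # these should agree closely
print(mode < median < exact)  # ordering from part 4
# f_Y should peak at the mode: larger there than a little to either side
print(f_Y(mode) > f_Y(0.95 * mode) and f_Y(mode) > f_Y(1.05 * mode))
```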

2013 Paper 3 Q13
D: 1700.0 B: 1484.0

  1. The continuous random variable \(X\) satisfies \(0\le X\le 1\), and has probability density function \(\f(x)\) and cumulative distribution function \(\F(x)\). The greatest value of \(\f(x)\) is \(M\), so that \(0\le \f(x) \le M\).
    1. Show that \(0\le \F(x) \le Mx\) for \(0\le x\le1\).
    2. For any function \(\g(x)\), show that \[ \int_0^1 2 \g(x) \F(x) \f(x) \d x = \g(1) - \int_0^1 \g'(x) \big( \F(x)\big)^2 \d x \,. \]
  2. The continuous random variable \(Y\) satisfies \(0\le Y\le 1\), and has probability density function \(k \F(y) \f(y)\), where \(\f\) and \(\F\) are as above.
    1. Determine the value of the constant \(k\).
    2. Show that \[ 1+ \frac{nM}{n+1}\mu_{n+1} - \frac{nM}{n+1} \le \E(Y^n) \le 2M\mu_{n+1}\,, \] where \(\mu_{n+1} = \E(X^{n+1})\) and \(n\ge0\).
    3. Hence show that, for \(n\ge 1\), \[ \mu _n \ge \frac{n}{(n+1)M} -\frac{n-1}{n+1} \,.\]


Solution:

    1. \(\,\) \begin{align*} && 0 &\leq f(t) &\leq M \\ \Rightarrow && \int_0^x 0 \d t &\leq \int_0^x f(t) \d t & \leq \int_0^x M \d t \\ \Rightarrow && 0 &\leq F(x) &\leq Mx \end{align*}
    2. \(\,\) Integrating by parts, using \(\left( F(x)^2 \right)' = 2F(x)f(x)\) together with \(F(0) = 0\) and \(F(1) = 1\), \begin{align*} && \int_0^1 2g(x)F(x)f(x) \d x &= \left [ g(x) \left ( F(x)\right)^2 \right]_0^1 - \int_0^1 g'(x) \left ( F(x)\right)^2 \d x \\ &&&= g(1) - \int_0^1 g'(x) \left ( F(x)\right)^2 \d x \end{align*}
    1. \(\,\) \begin{align*} && 1 &= \int_0^1 kF(y)f(y) \d y \\ &&&= k\left [ \frac12 F(y)^2\right]_0^1 \\ &&&= \frac{k}{2} \\ \Rightarrow && k &= 2 \end{align*}
    2. \(\,\) Since \(F(y) \leq My\), \begin{align*} \E[Y^n] &= \int_0^1 y^n 2F(y)f(y) \d y \\ &\leq \int_0^1 y^n 2My f(y) \d y \\ &= 2M\int_0^1 y^{n+1} f(y) \d y \\ &= 2M \E[X^{n+1}] = 2M\mu_{n+1} \end{align*} For the lower bound, the identity of part 1 with \(g(y) = y^n\) gives \begin{align*} \E[Y^n] &= \int_0^1 y^n 2F(y)f(y) \d y \\ &= 1 - \int_0^1 ny^{n-1} \left( F(y)\right)^2 \d y \\ &\geq 1 - \int_0^1 ny^{n-1}My F(y) \d y \\ &= 1 - M\int_0^1 ny^n F(y) \d y \\ &= 1 - M\left[\frac{n}{n+1}y^{n+1} F(y)\right]_0^1 + M\int_0^1\frac{n}{n+1} y^{n+1} f(y) \d y \\ &= 1 - \frac{nM}{n+1} + \frac{nM}{n+1} \mu_{n+1} \end{align*}
    3. Applying the previous part with \(n-1\) in place of \(n\) (valid since \(n - 1 \geq 0\)), the lower bound for \(\E[Y^{n-1}]\) cannot exceed the upper bound, so \begin{align*} && 2M\mu_n &\geq 1 + \frac{(n-1)M}{n}\mu_n - \frac{(n-1)M}{n} \\ \Rightarrow && \mu_n \left (2M - \frac{(n-1)M}{n} \right) &\geq 1 - \frac{(n-1)M}{n} \\ \Rightarrow && \mu_n \, \frac{(n+1)M}{n} & \geq \frac{n-(n-1)M}{n} \\ \Rightarrow && \mu_n & \geq \frac{n-(n-1)M}{(n+1)M} = \frac{n}{(n+1)M} - \frac{n-1}{n+1} \end{align*}
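A check on the sharpness of the final bound (my addition, not part of the exam answer): taking \(X\) uniform on \([0,1]\), so that \(f = 1\), \(M = 1\) and \(\mu_n = \frac{1}{n+1}\), the inequality holds with equality for every \(n\), so it cannot be improved in general.

```python
from fractions import Fraction

# Sharpness check of part 2.iii: with X uniform on [0, 1] we have
# M = 1 and mu_n = E(X^n) = 1/(n+1), and the bound
#   mu_n >= n/((n+1)M) - (n-1)/(n+1)
# is attained with equality (verified exactly in rational arithmetic).
M = Fraction(1)
for n in range(1, 20):
    mu_n = Fraction(1, n + 1)
    bound = Fraction(n, n + 1) / M - Fraction(n - 1, n + 1)
    assert mu_n == bound

print("bound attained with equality for n = 1, ..., 19")
```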