Problems

2025 Paper 3 Q11

  1. Let \(\lambda > 0\). The independent random variables \(X_1, X_2, \ldots, X_n\) all have probability density function $$f(t) = \begin{cases} \lambda e^{-\lambda t} & t \geq 0 \\ 0 & t < 0 \end{cases}$$ and cumulative distribution function \(F(x)\). The value of random variable \(Y\) is the largest of the values \(X_1, X_2, \ldots, X_n\). Show that the cumulative distribution function of \(Y\) is given, for \(y \geq 0\), by $$G(y) = (1 - e^{-\lambda y})^n$$
  2. The values \(L(\alpha)\) and \(U(\alpha)\), where \(0 < \alpha \leq \frac{1}{2}\), are such that $$P(Y < L(\alpha)) = \alpha \text{ and } P(Y > U(\alpha)) = \alpha$$ Show that $$L(\alpha) = -\frac{1}{\lambda}\ln(1 - \alpha^{1/n})$$ and write down a similar expression for \(U(\alpha)\).
  3. Use the approximation \(e^t \approx 1 + t\), for \(|t|\) small, to show that, for sufficiently large \(n\), $$\lambda L(\alpha) \approx \ln(n) - \ln\left(\ln\left(\frac{1}{\alpha}\right)\right)$$
  4. Hence show that the median of \(Y\) tends to infinity as \(n\) increases, but that the width of the interval \(U(\alpha) - L(\alpha)\) tends to a value which is independent of \(n\).
  5. You are given that, for \(|t|\) small, \(\ln(1 + t) \approx t\) and that \(e^3 \approx 20\). Show that, for sufficiently large \(n\), there is an interval of width approximately \(4\lambda^{-1}\) in which \(Y\) lies with probability \(0.9\).


Solution:

  1. Note that \(\displaystyle F(y) = \mathbb{P}(X_i < y) = \int_0^y \lambda e^{-\lambda t} \,\mathrm{d}t = 1-e^{-\lambda y}\). Then, since the \(X_i\) are independent, \begin{align*} G(y) &= \mathbb{P}(Y < y) \\ &= \mathbb{P}(\max_i(X_i) < y) \\ &= \mathbb{P}(X_i < y \text{ for all }i) \\ &= \prod_{i=1}^n \mathbb{P}(X_i < y) \\ &= \prod_{i=1}^n (1-e^{-\lambda y})\\ &= (1-e^{-\lambda y})^n \end{align*} as required.
  2. \begin{align*} && \mathbb{P}(Y < L(\alpha)) &= \alpha \\ \Rightarrow && (1-e^{-\lambda L(\alpha)})^n &= \alpha \\ \Rightarrow && 1-e^{-\lambda L(\alpha)} &= \alpha^{\tfrac1n} \\ \Rightarrow && L(\alpha) &= -\frac{1}{\lambda}\ln \left (1-\alpha^{\tfrac1n} \right) \end{align*} Notice also: \begin{align*} && \mathbb{P}(Y > U(\alpha)) &= \alpha \\ \Rightarrow && 1 - (1-e^{-\lambda U(\alpha)})^n &= \alpha \\ \Rightarrow && U(\alpha) &= -\frac{1}{\lambda}\ln \left ( 1-(1-\alpha)^{\tfrac1n} \right) \end{align*}
  3. \begin{align*} \lambda L(\alpha) &= -\ln \left (1-\alpha^{\tfrac1n} \right) \\ &= -\ln \left (1-e^{\tfrac1n \ln \alpha} \right) \\ &\approx - \ln \left ( 1 - 1 - \frac1n \ln \alpha\right) \tag{\(e^t \approx 1 + t\)} \\ &= -\ln \left ( \frac{1}{n} \ln \frac{1}\alpha \right) \\ &= - \ln \frac{1}{n} - \ln \left ( \ln \frac{1}{\alpha} \right )\\ &= \ln n - \ln \left ( \ln \left ( \frac{1}{\alpha} \right ) \right) \end{align*} since if \(n\) is large, \(\frac{\ln \alpha}{n}\) is small.
  4. The median is the value where \(\mathbb{P}(Y < M) = \frac12\), or in other words \(L(\frac12)\), but this is \(\approx \frac{\ln n - \ln (\ln 2)}{\lambda} \to \infty\). \begin{align*} && \lambda U(\alpha) &\approx \ln n - \ln \left ( \ln \left ( \frac{1}{1-\alpha} \right ) \right) \\ \Rightarrow && \lambda(U(\alpha) - L(\alpha)) &\approx -\ln \left ( \ln \left ( \frac{1}{1-\alpha} \right ) \right)+ \ln \left ( \ln \left ( \frac{1}{\alpha} \right ) \right) \\ \Rightarrow && U(\alpha) - L(\alpha) &\to \frac{1}{\lambda} \left ( \ln \left ( \ln \left ( \frac{1}{\alpha} \right ) \right)-\ln \left ( \ln \left ( \frac{1}{1-\alpha} \right ) \right ) \right) \end{align*} which doesn't depend on \(n\).
  5. Take \(\alpha = \frac{1}{20}\), so that \(\mathbb{P}\left(L(\alpha) < Y < U(\alpha)\right) = 1 - 2\alpha = 0.9\). Then \begin{align*} U(\alpha) - L(\alpha) &\approx \frac{1}{\lambda} \left (\ln \ln 20 - \ln \ln \frac{20}{19} \right) \\ &= \lambda^{-1} \left (\ln \ln 20 - \ln \ln \left (1 + \frac{1}{19} \right ) \right) \\ &\approx \lambda^{-1} \left (\ln 3 - \ln \frac{1}{19} \right) \tag{\(\ln(1+t) \approx t\), \(e^3 \approx 20\)} \\ &= \lambda^{-1} \ln (3 \cdot 19) \\ &\approx \lambda^{-1} (\ln 3 + \ln 20) \\ &\approx \lambda^{-1} (1 + 3) \\ &= 4\lambda^{-1} \end{align*} using \(\ln 3 \approx 1\) and \(\ln 20 \approx 3\) in the final approximation. So \(Y\) lies in an interval of width approximately \(4\lambda^{-1}\) with probability \(0.9\). [Note that \(\ln \ln 20 - \ln \ln \frac{20}{19} = 4.0673\ldots\)]
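As an informal numerical sanity check of parts 1 and 2 (not part of the required argument), the sketch below simulates the maximum of \(n\) exponentials, compares its empirical CDF with \((1-e^{-\lambda y})^n\), and plugs \(L(\alpha)\) and \(U(\alpha)\) back into \(G\). The values \(\lambda = 1.5\), \(n = 5\), \(y = 2\), \(\alpha = 0.05\) are arbitrary illustrative choices.

```python
import math
import random

def G(y, lam, n):
    """CDF of Y = max(X_1, ..., X_n): G(y) = (1 - e^{-lam*y})^n."""
    return (1 - math.exp(-lam * y)) ** n

def L(alpha, lam, n):
    """Lower quantile: P(Y < L) = alpha."""
    return -math.log(1 - alpha ** (1 / n)) / lam

def U(alpha, lam, n):
    """Upper quantile: P(Y > U) = alpha."""
    return -math.log(1 - (1 - alpha) ** (1 / n)) / lam

lam, n = 1.5, 5

# Empirical CDF of the maximum at y = 2, by simulation.
rng = random.Random(0)
trials = 200_000
y = 2.0
hits = sum(max(rng.expovariate(lam) for _ in range(n)) < y for _ in range(trials))
print(hits / trials, G(y, lam, n))  # should agree to roughly 2 d.p.

# Plugging L and U back into G recovers the tail probabilities.
alpha = 0.05
print(G(L(alpha, lam, n), lam, n))      # P(Y < L) = alpha
print(1 - G(U(alpha, lam, n), lam, n))  # P(Y > U) = alpha
```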
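Parts 3 to 5 can be checked the same way: the exact \(\lambda L(\alpha)\) approaches \(\ln n - \ln\ln(1/\alpha)\) as \(n\) grows, and with \(\alpha = \frac{1}{20}\) the exact width \(U(\alpha)-L(\alpha)\) approaches the \(n\)-free limit \(4.0673\ldots \times \lambda^{-1} \approx 4\lambda^{-1}\). A sketch with \(\lambda = 1\):

```python
import math

def lamL(alpha, n):
    """Exact lambda * L(alpha) = -ln(1 - alpha^(1/n))."""
    return -math.log(1 - alpha ** (1 / n))

def lamU(alpha, n):
    """Exact lambda * U(alpha) = -ln(1 - (1-alpha)^(1/n))."""
    return -math.log(1 - (1 - alpha) ** (1 / n))

alpha = 1 / 20

# Part 3: exact value vs. the approximation ln(n) - ln(ln(1/alpha)).
for n in (10, 100, 10_000):
    approx = math.log(n) - math.log(math.log(1 / alpha))
    print(n, lamL(alpha, n), approx)  # the gap shrinks as n grows

# Parts 4-5: the width tends to an n-independent limit close to 4.
limit = math.log(math.log(1 / alpha)) - math.log(math.log(1 / (1 - alpha)))
print(limit, lamU(alpha, 100_000) - lamL(alpha, 100_000))
```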

2025 Paper 3 Q12

  1. Show that, for any functions \(f\) and \(g\), and for any \(m \geq 0\), $$\sum_{r=1}^{m+1} f(r)\sum_{s=r-1}^m g(s) = \sum_{s=0}^m g(s)\sum_{r=1}^{s+1} f(r)$$
  2. The random variables \(X_0, X_1, X_2, \ldots\) are defined as follows:
    • \(X_0\) takes the value \(0\) with probability \(1\);
    • \(X_{n+1}\) takes the values \(0, 1, \ldots, X_n + 1\) with equal probability, for \(n = 0, 1, \ldots\)
    1. Write down \(E(X_1)\). Find \(P(X_2 = 0)\) and \(P(X_2 = 1)\) and show that \(P(X_2 = 2) = \frac{1}{6}\). Hence calculate \(E(X_2)\).
    2. For \(n \geq 1\), show that $$P(X_n = 0) = \sum_{s=0}^{n-1} \frac{P(X_{n-1} = s)}{s+2}$$ and find a similar expression for \(P(X_n = r)\), for \(r = 1, 2, \ldots, n\).
    3. Hence show that \(E(X_n) = \frac{1}{2}(1 + E(X_{n-1}))\). Find an expression for \(E(X_n)\) in terms of \(n\), for \(n = 1, 2, \ldots\)


Solution:

  1. \begin{align*} \sum_{r=1}^{m+1} \left (f(r) \sum_{s=r-1}^m g(s) \right) &= \sum_{r=1}^{m+1} \sum_{s=r-1}^m f(r)g(s) \\ &= \sum_{(r,s) \in \{(r,s) : 1 \leq r \leq m+1, 0 \leq s \leq m, s \geq r-1\}} f(r)g(s) \\ &= \sum_{(r,s) \in \{(r,s) : 0 \leq s \leq m, 1 \leq r \leq m+1, r \leq s+1\}} f(r)g(s) \\ &= \sum_{s=0}^m \sum_{r=1}^{s+1} f(r)g(s) \\ &= \sum_{s=0}^m \left ( g(s) \sum_{r=1}^{s+1} f(r) \right) \end{align*}
  2. \(X_1\) takes the values \(0, 1\) with equal probabilities (since \(X_0 = 0\)). Therefore \(\mathbb{E}(X_1) = \frac12\).
    1. \begin{align*} \mathbb{P}(X_2 = 0) &= \mathbb{P}(X_2 = 0 | X_1 = 0) \mathbb{P}(X_1 = 0) + \mathbb{P}(X_2 = 0 | X_1 = 1) \mathbb{P}(X_1 = 1) \\ &= \frac12 \cdot \frac12 + \frac13 \cdot \frac12 \\ &= \frac5{12} \\ \\ \mathbb{P}(X_2 = 1) &= \mathbb{P}(X_2 = 1 | X_1 = 0) \mathbb{P}(X_1 = 0) + \mathbb{P}(X_2 = 1 | X_1 = 1) \mathbb{P}(X_1 = 1) \\ &= \frac12 \cdot \frac12 + \frac13 \cdot \frac12 \\ &= \frac5{12} \\ \\ \mathbb{P}(X_2 = 2) &= 1 - \mathbb{P}(X_2 = 0) - \mathbb{P}(X_2 = 1) \\ &= 1 - \frac{10}{12} = \frac16 \\ \\ \mathbb{E}(X_2) &= 0 \cdot \frac{5}{12} + 1 \cdot \frac{5}{12} + 2 \cdot \frac{1}{6} \\ &= \frac34 \end{align*}
    2. \begin{align*} \mathbb{P}(X_n = 0) &= \sum_{s=0}^{n-1} \mathbb{P}(X_n = 0 | X_{n-1} = s)\mathbb{P}(X_{n-1} = s) \\ &= \sum_{s=0}^{n-1} \frac{1}{s+2}\mathbb{P}(X_{n-1} = s) \\ \end{align*} as required. (Here \(\mathbb{P}(X_n = 0 | X_{n-1} = s) = \frac{1}{s+2}\), since if \(X_{n-1} = s\) then \(X_n\) takes the \(s+2\) values \(0, 1, \ldots, s + 1\) with equal probability.) Similarly, since \(\mathbb{P}(X_n = r | X_{n-1} = s) = 0\) unless \(s \geq r-1\), \begin{align*} \mathbb{P}(X_n = r) &= \sum_{s=0}^{n-1} \mathbb{P}(X_n = r | X_{n-1} = s)\mathbb{P}(X_{n-1} = s) \\ &= \sum_{s=r-1}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \end{align*}
    3. \begin{align*} \mathbb{E}(X_n) &= \sum_{r=1}^{n} r \cdot \mathbb{P}(X_n = r) \\ &= \sum_{r=1}^{n} r \cdot \sum_{s=r-1}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \\ &= \sum_{s=0}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \sum_{r=1}^{s+1} r \tag{by part 1} \\ &= \sum_{s=0}^{n-1} \frac{\mathbb{P}(X_{n-1}=s)}{s+2} \cdot \frac{(s+1)(s+2)}{2} \\ &= \frac12 \sum_{s=0}^{n-1} (s+1)\mathbb{P}(X_{n-1}=s) \\ &= \frac12 \sum_{s=0}^{n-1} s\mathbb{P}(X_{n-1}=s) + \frac12 \sum_{s=0}^{n-1} \mathbb{P}(X_{n-1}=s) \\ &= \frac12 \left ( \mathbb{E}(X_{n-1}) + 1 \right) \end{align*} Conjecture that \(\mathbb{E}(X_n) = 1-2^{-n}\): this matches for \(n = 0, 1, 2\), and since \(\frac12\left((1 - 2^{-n}) + 1\right) = 1-2^{-(n+1)}\), it satisfies the recursive formula. Therefore, by induction, \(\mathbb{E}(X_n) = 1- 2^{-n}\).
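The interchange of summation in part 1 can be sanity-checked numerically: both sides are finite sums, so they can be compared directly for small \(m\). The particular \(f\) and \(g\) below are arbitrary functions invented for the check.

```python
def lhs(f, g, m):
    """Left side: sum_{r=1}^{m+1} f(r) * sum_{s=r-1}^{m} g(s)."""
    return sum(f(r) * sum(g(s) for s in range(r - 1, m + 1)) for r in range(1, m + 2))

def rhs(f, g, m):
    """Right side: sum_{s=0}^{m} g(s) * sum_{r=1}^{s+1} f(r)."""
    return sum(g(s) * sum(f(r) for r in range(1, s + 2)) for s in range(m + 1))

# Arbitrary test functions; any f, g should work.
f = lambda r: r * r + 1
g = lambda s: 2 * s - 3
for m in range(6):
    assert lhs(f, g, m) == rhs(f, g, m)
print("identity holds for m = 0..5")
```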
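For part 2, the distribution of \(X_n\) can be computed directly from the transition rule (conditional on \(X_{n-1} = s\), \(X_n\) is uniform on \(0, 1, \ldots, s+1\)), giving an exact check of the values in part (i) and of \(\mathbb{E}(X_n) = 1 - 2^{-n}\). This is a verification sketch using exact rational arithmetic, not part of the required proof; the helper names are my own.

```python
from fractions import Fraction

def distribution(n):
    """Exact distribution of X_n as a list p with p[r] = P(X_n = r), r = 0..n."""
    p = [Fraction(1)]  # X_0 = 0 with probability 1
    for _ in range(n):
        prev = p
        # conditional on X_{n-1} = s, X_n is uniform on the s+2 values 0..s+1
        nxt = [Fraction(0)] * (len(prev) + 1)
        for s, ps in enumerate(prev):
            for r in range(s + 2):
                nxt[r] += ps / (s + 2)
        p = nxt
    return p

def expectation(n):
    """E(X_n), computed exactly from the distribution."""
    return sum(r * pr for r, pr in enumerate(distribution(n)))

print(distribution(2))  # matches part (i): [5/12, 5/12, 1/6]
print(expectation(5), 1 - Fraction(1, 32))  # both 31/32
```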