Problems


1987 Paper 3 Q14

It is given that the gravitational force between a disc, of radius \(a,\) thickness \(\delta x\) and uniform density \(\rho,\) and a particle of mass \(m\) at a distance \(b(\geqslant0)\) from the disc on its axis is \[ 2\pi mk\rho\delta x\left(1-\frac{b}{(a^{2}+b^{2})^{\frac{1}{2}}}\right), \] where \(k\) is a constant. Show that the gravitational force on a particle of mass \(m\) at the surface of a uniform sphere of mass \(M\) and radius \(r\) is \(kmM/r^{2}.\) Deduce that in a spherical cloud of particles of uniform density, which all attract one another gravitationally, the radius \(r\) and inward velocity \(v=-\dfrac{\d r}{\d t}\) of a particle at the surface satisfy the equation \[ v\frac{\mathrm{d}v}{\mathrm{d}r}=-\frac{kM}{r^{2}}, \] where \(M\) is the mass of the cloud. At time \(t=0\), the cloud is instantaneously at rest and has radius \(R\). Show that \(r=R\cos^{2}\alpha\) after a time \[ \left(\frac{R^{3}}{2kM}\right)^{\frac{1}{2}}(\alpha+\tfrac{1}{2}\sin2\alpha). \]


Solution: Divide a sphere of radius \(r\) into discs of thickness \(\delta x\), with \(x\) measured along the axis from the centre towards the particle \(P\) of mass \(m\) on the surface. The disc at position \(x\) has radius \(a\) with \(a^{2}=r^{2}-x^{2}\) and lies at distance \(b=r-x\) from \(P\), so \(a^{2}+b^{2}=r^{2}-x^{2}+(r-x)^{2}=2r(r-x)\). Writing \(\rho=M/\frac43\pi r^{3}\), the force acting on \(P\) is \begin{align*} F &= \sum_{\text{slices}} 2\pi mk\rho\,\delta x\left(1-\frac{b}{(a^{2}+b^{2})^{\frac{1}{2}}}\right) \\ &= \sum_{\text{slices}} \frac{3mkM}{2r^{3}}\,\delta x\left(1-\sqrt{\frac{r-x}{2r}}\right) \\ &\to \frac{3mkM}{2r^{3}}\int_{-r}^{r}\left(1-\sqrt{\frac{r-x}{2r}}\right)\d x \\ &= \frac{3mkM}{2r^{3}}\left(2r-\frac{4r}{3}\right) \\ &= \frac{mkM}{r^{2}}, \end{align*} using \(\int_{-r}^{r}\sqrt{\frac{r-x}{2r}}\,\d x=\left[-\frac{2(r-x)^{3/2}}{3\sqrt{2r}}\right]_{-r}^{r}=\frac{2(2r)^{3/2}}{3\sqrt{2r}}=\frac{4r}{3}\). So the particle is attracted towards the centre with a force of magnitude \(\frac{kmM}{r^{2}}\). In the collapsing cloud the same formula applies at each instant, since the density remains uniform. Writing \(v=-\frac{\d r}{\d t}\) for the inward velocity, Newton's second law (taking the inward direction as positive) gives \(m\frac{\d v}{\d t}=\frac{kmM}{r^{2}}\), and since \(\frac{\d v}{\d t}=\frac{\d v}{\d r}\frac{\d r}{\d t}=-v\frac{\d v}{\d r}\), dividing by \(m\) gives exactly the result we seek. Integrating, and writing \(T\) for the time at which \(r=R\cos^{2}\alpha\): \begin{align*} && v \frac{\d v}{\d r} &= \frac{-kM}{r^2} \\ \Rightarrow && \frac{v^2}{2}+C &= \frac{kM}{r} \\ r = R,\ v =0: && C &= \frac{kM}{R} \\ \Rightarrow && v^2&= 2kM\left ( \frac1r - \frac1R\right ) \\ \Rightarrow && \frac{\d r}{\d t} &= -\sqrt{2kM\left ( \frac1r - \frac1R\right )} \\ \Rightarrow && -\sqrt{2kM}\,T &= \int_{r=R}^{r=R\cos^2 \alpha} \frac{1}{\sqrt{\frac1r-\frac1R}} \d r \\ r = R\cos^2 \theta: && -\sqrt{2kM}\,T &= \int_{\theta = 0}^{\theta = \alpha} \frac{\sqrt{R}}{\sqrt{\sec^2 \theta - 1}} \cdot (-2R\cos \theta \sin \theta) \d \theta \\ \Rightarrow && T &= \sqrt{\frac{R^3}{2kM}} \int_0^\alpha \frac{2 \cos \theta \sin \theta}{\sqrt{\sec^2 \theta - 1}} \d \theta \\ &&&= \sqrt{\frac{R^3}{2kM}} \int_0^\alpha \frac{2 \cos \theta \sin \theta}{\tan \theta} \d \theta \\ &&&= \sqrt{\frac{R^3}{2kM}} \int_0^\alpha 2\cos^2 \theta \d \theta \\ &&&= \sqrt{\frac{R^3}{2kM}} \int_0^\alpha (1 + \cos 2 \theta)\d \theta \\ &&&= \sqrt{\frac{R^3}{2kM}} \left [\theta + \frac12 \sin 2 \theta \right]_0^\alpha \\ &&&= \sqrt{\frac{R^3}{2kM}} \left (\alpha + \frac12 \sin 2 \alpha \right). \end{align*}
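As a numerical sanity check of the final answer (not part of the required proof), the collapse time can also be obtained by direct quadrature of \(T=\int \d r/v\); the values \(R=1\), \(kM=1\), \(\alpha=1\) below are illustrative, not taken from the problem.

```python
import math

# Illustrative values (not from the problem): R = 1, kM = 1, alpha = 1.
R, kM, alpha = 1.0, 1.0, 1.0

# Closed form derived above: T = sqrt(R^3 / (2kM)) * (alpha + (1/2) sin 2alpha).
T_closed = math.sqrt(R**3 / (2 * kM)) * (alpha + 0.5 * math.sin(2 * alpha))

# Direct quadrature of T = ∫_{R cos²α}^{R} dr / sqrt(2kM(1/r - 1/R)).
# The integrand has an integrable (R - r)^(-1/2) singularity at r = R,
# so we use the midpoint rule, which never evaluates at an endpoint.
a, b, n = R * math.cos(alpha)**2, R, 1_000_000
h = (b - a) / n
T_num = sum(h / math.sqrt(2 * kM * (1 / (a + (i + 0.5) * h) - 1 / R))
            for i in range(n))

print(T_closed, T_num)  # the two agree to about 3 decimal places
```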

1987 Paper 3 Q15

A patient arrives with blue thumbs at the doctor's surgery. With probability \(p\) the patient is suffering from Fenland fever and requires treatment costing \(\pounds 100.\) With probability \(1-p\) he is suffering from Steppe syndrome and will get better anyway. A test exists which infallibly gives positive results if the patient is suffering from Fenland fever but also has probability \(q\) of giving positive results if the patient is not. The test costs \(\pounds 10.\) The doctor decides to proceed as follows. She will give the test repeatedly until either the last test is negative, in which case she dismisses the patient with kind words, or she has given the test \(n\) times with positive results each time, in which case she gives the treatment. In the case \(n=0,\) she treats the patient at once. She wishes to minimise the expected cost \(\pounds E_{n}\) to the National Health Service.

  1. Show that \[ E_{n+1}-E_{n}=10p-10(1-p)q^{n}(9-10q), \] and deduce that if \(p=10^{-4},q=10^{-2},\) she should choose \(n=3.\)
  2. Show that if \(q\) is larger than some fixed value \(q_{0},\) to be determined explicitly, then whatever the value of \(p,\) she should choose \(n=0.\)


Solution:

  1. \(E_{n+1} - E_n\) consists of the cost of the extra test, \(10p+10(1-p)q^n\), paid by the patients who have not yet produced a negative result, minus the treatment cost saved on those who go on to fail the \((n+1)\)th test, \(100(1-p)q^n(1-q)\): \begin{align*} E_{n+1}-E_{n} &= 10p+10(1-p)q^n-100(1-p)q^n(1-q) \\ &=10p +10(1-p)q^n(1-10(1-q)) \\ &= 10p +10(1-p)q^n(-9+10q) \\ &= 10p - 10(1-p)q^n(9-10q). \end{align*} For \(q<\frac9{10}\), the difference satisfies \begin{align*} && 10p - 10(1-p)q^n(9-10q) &> 0 \\ \Leftrightarrow && \frac{p}{(1-p)(9-10q)} &>q^n. \end{align*} If \(p = 10^{-4}\), \(q = 10^{-2}\), we have \begin{align*} \frac{p}{(1-p)(9-10q)} &= \frac{10^{-4}}{(1-10^{-4})(9-10^{-1})} \\ &\approx 1.1\times10^{-5}, \end{align*} and \(q^3 < 1.1\times10^{-5} < q^2\), so \(E_{n+1}-E_n\) is negative for \(n \leqslant 2\) and positive for \(n \geqslant 3\): the expected cost is minimised by choosing \(n=3\).
  2. If \(q>\frac9{10}\) then \(9-10q<0\), so for every \(n\geqslant0\) and every \(0<p<1\), \[ E_{n+1}-E_{n}=10p-10(1-p)q^{n}(9-10q)>0. \] The expected cost then increases with every extra test, so whatever the value of \(p\) she should choose \(n=0\); hence \(q_{0}=\frac9{10}\). No smaller threshold will do: if \(q<\frac9{10}\) then \(9-10q>0\), and taking \(p\) small enough makes \(E_{1}-E_{0}=10p-10(1-p)(9-10q)\) negative, so \(n=0\) is not optimal for every \(p\).
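Both parts can be sanity-checked numerically. The sketch below assembles \(E_n\) directly from the doctor's procedure (with the question's values \(p=10^{-4}\), \(q=10^{-2}\)) and probes the threshold \(q_0=\frac9{10}\) at arbitrarily chosen sample values of \(p\) and \(n\).

```python
# Part 1: exact expected cost E_n, assembled from the description above.
# A fever patient (probability p) takes all n tests and is then treated;
# a healthy patient reaches test i with probability q^(i-1), stops at the
# first negative, and is treated only if all n tests are positive.
def E(n, p=1e-4, q=1e-2):
    fever = 10 * n + 100
    healthy = 10 * sum(q ** (i - 1) for i in range(1, n + 1)) + 100 * q ** n
    return p * fever + (1 - p) * healthy

costs = {n: E(n) for n in range(7)}
best = min(costs, key=costs.get)
print(best)  # 3

# Part 2: for q above 9/10 every increment E_{n+1} - E_n is positive,
# so n = 0 is optimal whatever p; just below 9/10, a small p gives a
# negative increment. The sample values of p and n are arbitrary.
def increment(n, p, q):
    return 10 * p - 10 * (1 - p) * q ** n * (9 - 10 * q)

above = all(increment(n, p, 0.91) > 0
            for n in range(50) for p in (1e-9, 1e-4, 0.5, 0.999))
below = any(increment(n, 1e-9, 0.89) < 0 for n in range(50))
print(above, below)  # True True
```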

1987 Paper 3 Q16

  1. \(X_{1},X_{2},\ldots,X_{n}\) are independent identically distributed random variables drawn from a uniform distribution on \([0,1].\) The random variables \(A\) and \(B\) are defined by \[ A=\min(X_{1},\ldots,X_{n}),\qquad B=\max(X_{1},\ldots,X_{n}). \] For any fixed \(k\), such that \(0< k< \frac{1}{2},\) let \[ p_{n}=\mathrm{P}(A\leqslant k\mbox{ and }B\geqslant1-k). \] What happens to \(p_{n}\) as \(n\rightarrow\infty\)? Comment briefly on this result.
  2. Lord Copper, the celebrated and imperious newspaper proprietor, has decided to run a lottery in which each of the \(4,000,000\) readers of his newspaper will have an equal probability \(p\) of winning \(\pounds 1,000,000\) and their chances of winning will be independent. He has fixed all the details, leaving to you, his subordinate, only the task of choosing \(p\). If nobody wins \(\pounds 1,000,000\), you will be sacked, and if more than two readers win \(\pounds 1,000,000,\) you will also be sacked. Explaining your reasoning, show that however you choose \(p,\) you will have less than a 60\% chance of keeping your job.


Solution:

  1. \begin{align*} p_n &= \mathrm{P}(A\leqslant k\mbox{ and }B\geqslant1-k) \\ &= \mathrm{P}(A\leqslant k) +\mathrm{P}(B\geqslant1-k) - \mathrm{P}(A\leqslant k\mbox{ or }B\geqslant1-k)\\ &= 1-\mathrm{P}(A> k) +1-\mathrm{P}(B< 1-k) - \left( 1- \mathrm{P}(A> k\mbox{ and }B< 1-k)\right)\\ &= 1 - \mathrm{P}(\mbox{all }X_i > k) - \mathrm{P}(\mbox{all }X_i < 1-k) + \mathrm{P}(\mbox{all }k < X_i < 1-k) \\ &= 1 - (1-k)^n - (1-k)^n + (1-2k)^n \\ &= 1 - 2(1-k)^n + (1-2k)^n. \end{align*} Therefore \(p_n \to 1\) as \(n \to \infty\), since \(0<k<\frac12\) means that \(1-k\) and \(1-2k\) both lie in \((0,1)\), so their \(n\)th powers tend to \(0\). In other words, with many observations it becomes almost certain that some \(X_i\) falls in \([0,k]\) and some \(X_i\) falls in \([1-k,1]\): the minimum and maximum of a large uniform sample are almost surely close to the ends of the interval.
  2. Let \(N = 4\,000\,000\). The probability that exactly one person wins is \(Np(1-p)^{N-1}\), and the probability that exactly two people win is \(\binom{N}{2} p^2 (1-p)^{N-2}\). We wish to choose \(p\) to maximise the sum of these probabilities, so we differentiate with respect to \(p\): \begin{align*} \frac{\d}{\d p}: && & N(1-p)^{N-1}-N(N-1)p(1-p)^{N-2} + N(N-1)p(1-p)^{N-2} - \tfrac12 N(N-1)(N-2)p^2(1-p)^{N-3} \\ && &= N(1-p)^{N-3} \left( (1-p)^2 - \tfrac12(N-1)(N-2)p^2\right) \\ \Rightarrow && \frac{1-p}{p} &= \sqrt{\frac{(N-1)(N-2)}{2}} \\ \Rightarrow && p &= \frac{1}{1+ \sqrt{\frac{(N-1)(N-2)}{2}}}. \end{align*} This is a maximum, since the sum is increasing at \(p=0\), decreasing at \(p=1\), and there is only one stationary point in between. Since \(\frac{N-2}{\sqrt2}\leqslant\sqrt{\frac{(N-1)(N-2)}{2}}\leqslant\frac{N-1}{\sqrt2}\), we have \[ \frac{\sqrt{2}}{N-1+\sqrt{2}}\leqslant p\leqslant\frac{\sqrt{2}}{N-2+\sqrt{2}}, \] so in particular \(Np\approx\sqrt2\). Hence \begin{align*} Np(1-p)^{N-1} &< \frac{\sqrt{2}\,N}{N-2+\sqrt{2}}\left(1-\frac{\sqrt{2}}{N-1+\sqrt{2}}\right)^{N-1} \approx \sqrt{2}\, e^{-\sqrt{2}}, \\ \frac{N(N-1)}{2}\,p^2(1-p)^{N-2} &< \frac{N(N-1)}{(N-2+\sqrt{2})^2}\left(1-\frac{\sqrt{2}}{N-1+\sqrt{2}}\right)^{N-2} \approx e^{-\sqrt{2}}. \end{align*} Alternatively, we can use a Poisson approximation: the number of winners is \(B(N, p)\), and since \(N\) is large while \(Np\) stays moderate, it is reasonable to approximate \(B(N,p)\) by \(\mathrm{Po}(\lambda)\), where \(\lambda=Np\).
Then we wish to choose \(\lambda\) to maximise the probability \(P\) of keeping the job: \begin{align*} && P &= e^{-\lambda} \left( \lambda + \frac{\lambda^2}{2} \right) \\ &&&= e^{-\lambda} \lambda \left( 1+ \frac{\lambda}{2} \right) \\ \Rightarrow && \ln P &= -\lambda + \ln \lambda + \ln\left(1+\tfrac12 \lambda\right) \\ \frac{\d}{\d \lambda}: && \frac{P'}{P} &= -1 + \frac{1}{\lambda} + \frac{1}{2+\lambda} \\ &&&= \frac{-(2+\lambda)\lambda+2+2\lambda}{\lambda(2+\lambda)} \\ &&&= \frac{2-\lambda^2}{\lambda(2+\lambda)} \\ \Rightarrow && \lambda &= \sqrt{2}. \end{align*} Either way, the probability of keeping the job is at most approximately \(e^{-\sqrt{2}}(1+\sqrt{2})\). The first five terms of the exponential series give \(e^{\sqrt{2}} > 1+\sqrt{2}+1+\frac{1}{3}\sqrt{2}+\frac{1}{6}\), so \begin{align*} \frac{\sqrt{2}+1}{e^{\sqrt{2}}} &< \frac{\sqrt{2}+1}{1+\sqrt{2}+1+\frac{1}{3}\sqrt{2}+\frac{1}{6}} \\ &= \frac{30\sqrt{2}-18}{41} \\ &< \frac{30\times 1.42-18}{41} = \frac{24.6}{41} = 0.6, \end{align*} and so however \(p\) is chosen, the chance of keeping the job is less than \(60\%\).
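Both parts admit a quick numerical sanity check. In part 1 the values \(k=0.2\), \(n=10\) below are illustrative; in part 2 the exact binomial probability at the optimal \(p\) is compared with the Poisson value \(e^{-\sqrt2}(\sqrt2+1)\).

```python
import math
import random

# Part 1: Monte-Carlo check of p_n = 1 - 2(1-k)^n + (1-2k)^n with
# illustrative values k = 0.2, n = 10.
random.seed(0)
k, n, trials = 0.2, 10, 100_000
hits = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    hits += min(xs) <= k and max(xs) >= 1 - k
exact = 1 - 2 * (1 - k) ** n + (1 - 2 * k) ** n
print(hits / trials, exact)  # the two are close, and exact -> 1 as n grows

# Part 2: probability of keeping the job at the optimal p, computed
# exactly (via logs, to avoid underflow in (1-p)^N) and compared with
# the Poisson value at lambda = sqrt(2).
N = 4_000_000
p = 1 / (1 + math.sqrt((N - 1) * (N - 2) / 2))

def log_pmf(j):  # log of the binomial probability of exactly j winners
    return (math.lgamma(N + 1) - math.lgamma(j + 1) - math.lgamma(N - j + 1)
            + j * math.log(p) + (N - j) * math.log1p(-p))

keep_job = math.exp(log_pmf(1)) + math.exp(log_pmf(2))
lam = math.sqrt(2)
poisson = math.exp(-lam) * (lam + lam ** 2 / 2)
print(keep_job < 0.6, poisson < 0.6)  # True True
```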