The mountain villages \(A,B,C\) and \(D\) lie at the vertices of a tetrahedron, and each pair of villages is joined by a road. After a snowfall the probability that any road is blocked is \(p\), and is independent of the conditions of any other road. The probability that, after a snowfall, it is possible to travel from any village to any other village by some route is \(P\). Show that $$ P =1- p^3(6p^3-12p^2+3p+4). $$ %In the case \(p={1\over 3}\) show that this probability is \({208 \over 243}\).
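Solution (sketch of one approach): write \(q=1-p\) for the probability that a road is open, and count the connected subgraphs of \(K_4\) formed by the open roads. Any disconnected subgraph on the four villages has at most three edges (an isolated vertex leaves at most the three edges of a triangle, and a \(2+2\) split allows at most two edges), so every subgraph with four or more open roads is connected; of the \(\binom63 = 20\) three-edge subgraphs, exactly the \(4^{4-2}=16\) spanning trees are connected, the other four being triangles. Hence \begin{align*} P &= q^6 + 6q^5p + 15q^4p^2 + 16q^3p^3 \\ &= 1 - 4p^3 - 3p^4 + 12p^5 - 6p^6 \\ &= 1- p^3(6p^3-12p^2+3p+4), \end{align*} which at \(p=\frac13\) gives \(1 - \frac{35}{243} = \frac{208}{243}\).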
Write down the probability of obtaining \(k\) heads in \(n\) tosses of a fair coin. Now suppose that \(k\) is known but \(n\) is unknown. A maximum likelihood estimator (MLE) of \(n\) is defined to be a value (which must be an integer) of \(n\) which maximizes the probability of \(k\) heads. A friend has thrown a fair coin a number of times. She tells you that she has observed one head. Show that in this case there are two MLEs of the number of tosses she has made. She now tells you that in a repeat of the exercise she has observed \(k\) heads. Find the two MLEs of the number of tosses she has made. She next uses a coin biased with probability \(p\) (known) of showing a head, and again tells you that she has observed \(k\) heads. Find the MLEs of the number of tosses made. What is the condition for the MLE to be unique?
Solution: \begin{align*} && \mathbb{P}(k \text{ heads} \mid n\text{ tosses}) &= \binom{n}{k} 2^{-n} \\ && \mathbb{P}(1 \text{ head} \mid n\text{ tosses}) &= n2^{-n} \\ \Rightarrow && \frac{ \mathbb{P}(1 \text{ head} \mid n+1\text{ tosses}) }{ \mathbb{P}(1 \text{ head} \mid n\text{ tosses}) } &= \frac{n+1}{2n} \end{align*} This ratio equals \(1\) when \(n=1\) and is less than \(1\) for \(n>1\), so the likelihood is maximal at both \(n = 1\) and \(n= 2\): these are the two MLEs. In general, \begin{align*} \frac{ \mathbb{P}(k \text{ heads} \mid n+1\text{ tosses}) }{ \mathbb{P}(k \text{ heads} \mid n\text{ tosses}) } &= \frac{\binom{n+1}{k}}{2 \binom{n}{k}} \\ &= \frac{(n+1)!\,(n-k)!}{2\,n!\,(n+1-k)!} \\ &= \frac{n+1}{2(n+1-k)} \end{align*} This ratio equals \(1\) precisely when \(n+1 = 2(n+1-k)\), i.e.\ \(n= 2k-1\), and is less than \(1\) for \(n > 2k-1\); therefore the MLEs are \(2k-1\) and \(2k\). If the coin is biased, write \(q=1-p\); then \begin{align*} && \frac{ \mathbb{P}(k \text{ heads} \mid n+1\text{ tosses}) }{ \mathbb{P}(k \text{ heads} \mid n\text{ tosses}) } &= \frac{\binom{n+1}{k}p^kq^{n+1-k}}{\binom{n}{k}p^kq^{n-k}} \\ &&&= \frac{n+1}{n+1-k}\,q \\ \\ && 1 & \geq \frac{n+1}{n+1-k}\,q \\ \Leftrightarrow && (n+1)(1-q) &\geq k \\ \Leftrightarrow && n+1 & \geq \frac{k}{p} \end{align*} Therefore the likelihood increases while \(n+1 < \frac{k}{p}\) and decreases once \(n+1 > \frac{k}{p}\). If \(\frac{k}p\) is an integer the MLEs are \(\frac{k}{p}-1\) and \(\frac{k}p\); otherwise the MLE is \(\lfloor \frac{k}{p} \rfloor\). The MLE is unique exactly when \(\frac{k}{p}\) is not an integer.
An experiment produces a random number \(T\) uniformly distributed on \([0,1]\). Let \(X\) be the larger root of the equation \[x^{2}+2x+T=0.\] What is the probability that \(X>-1/3\)? Find \(\mathbb{E}(X)\) and show that \(\mathrm{Var}(X)=1/18\). The experiment is repeated independently 800 times, generating the larger roots \(X_{1}, X_{2}, \dots, X_{800}\). If \[Y=X_{1}+X_{2}+\dots+X_{800},\] find an approximate value for \(K\) such that \[\mathrm{P}(Y\leqslant K)=0.08.\]
Solution: The equation rearranges to \((x+1)^2 = 1-T\), so the larger root is \(X = -1 + \sqrt{1-T}\). \begin{align*} && \mathbb{P}(X > -1/3) &= \mathbb{P}(-1 + \sqrt{1-T} > -1/3) \\ &&&= \mathbb{P}(\sqrt{1-T} > 2/3)\\ &&&= \mathbb{P}(1-T > 4/9)\\ &&&= \mathbb{P}\left (T < \frac59 \right) = \frac59 \end{align*} Similarly, for \(t \in [-1,0]\), \begin{align*} && \mathbb{P}(X \leq t) &= \mathbb{P}(-1 + \sqrt{1-T} \leq t) \\ &&&= \mathbb{P}(\sqrt{1-T} \leq t+1)\\ &&&= \mathbb{P}(1-T \leq (t+1)^2)\\ &&&= \mathbb{P}\left (T \geq 1-(t+1)^2\right) = (t+1)^2 \\ \Rightarrow && f_X(t) &= 2(t+1) \\ \Rightarrow && \E[X] &= \int_{-1}^0 x \cdot f_X(x) \d x \\ &&&= \int_{-1}^0 x2(x+1) \d x \\ &&&= \left [\frac23x^3+x^2 \right]_{-1}^0 \\ &&&= -\frac13 \\ && \E[X^2] &= \int_{-1}^0 x^2 \cdot f_X(x) \d x \\ &&&= \int_{-1}^0 2x^2(x+1) \d x \\ &&&= \left [ \frac12 x^4 + \frac23x^3\right]_{-1}^0 \\ &&&= \frac16 \\ \Rightarrow && \var[X] &= \E[X^2] - \left (\E[X] \right)^2 \\ &&&= \frac16 - \frac19 = \frac1{18} \end{align*} Notice that by the central limit theorem \(Y \approx N\left( -\tfrac{800}{3}, \tfrac{800}{18}\right)\), and that \(\Phi^{-1}(0.08) \approx -1.4 \approx -\sqrt{2}\). Therefore \[K \approx -\frac{800}{3} - \sqrt{2}\,\sqrt{\frac{800}{18}} = -\frac{800}{3} - \frac{20\sqrt2}{3} \approx -267 - 9 = -276.\]
Mr Blond returns to his flat to find it in complete darkness. He knows that this means that one of four assassins Mr 1, Mr 2, Mr 3 or Mr 4 has set a trap for him. His trained instinct tells him that the probability that Mr \(i\) has set the trap is \(i/10\). His knowledge of their habits tells him that Mr \(i\) uses a deadly trained silent anaconda with probability \((i+1)/10\), a bomb with probability \(i/10\) and a vicious attack canary with probability \((9-2i)/10\) \([i=1,2,3,4]\). He now listens carefully and, hearing no singing, concludes correctly that no canary is involved. If he switches on the light and the trap is a bomb he has probability \(1/2\) of being killed but if the trap is an anaconda he has probability \(2/3\) of survival. If he does not switch on the light and the trap is a bomb he is certain to survive but, if the trap is an anaconda, he has a probability \(1/2\) of being killed. His professional pride means that he must enter the flat. Advise Mr Blond, giving reasons for your advice.
Solution: Tabulating \(\mathbb{P}(\text{Mr }i)\cdot\mathbb{P}(\text{weapon}\mid\text{Mr }i)\), with columns \(A\) for the anaconda, \(B\) for the bomb and \(C\) for the canary: \[\begin{array}{c|c|c|c} & A & B & C \\ \hline 1 & \frac{1}{10} \cdot \frac{2}{10} & \frac{1}{10} \cdot \frac{1}{10} & \frac{1}{10} \cdot \frac{7}{10} \\ 2 & \frac{2}{10} \cdot \frac{3}{10} &\frac{2}{10} \cdot \frac{2}{10} &\frac{2}{10} \cdot \frac{5}{10} \\ 3 & \frac{3}{10} \cdot \frac{4}{10} &\frac{3}{10} \cdot \frac{3}{10} &\frac{3}{10} \cdot \frac{3}{10} \\ 4 & \frac{4}{10} \cdot \frac{5}{10} &\frac{4}{10} \cdot \frac{4}{10} &\frac{4}{10} \cdot \frac{1}{10} \\ \hline & \frac{2+6+12+20}{100} & \frac{1 + 4 + 9 + 16}{100} & \frac{7 + 10 + 9 + 4}{100} \end{array}\] Therefore \(\mathbb{P}(A) = \frac{4}{10}\), \(\mathbb{P}(B) = \frac{3}{10}\), \(\mathbb{P}(C) = \frac{3}{10}\); in particular, \begin{align*} \mathbb{P}(A \mid \text{not }C) &= \frac{4}{7} \\ \mathbb{P}(B \mid \text{not }C) &= \frac{3}{7} \end{align*} If he switches the light on, his probability of survival is \(\frac47 \cdot \frac23 + \frac37 \cdot \frac12 = \frac{25}{42}\); if he leaves it off, it is \(\frac47 \cdot \frac12 +\frac37 = \frac{5}{7} = \frac{30}{42}\). Since \(\frac{30}{42} > \frac{25}{42}\), he should not switch the light on.
The maximum height \(X\) of flood water each year on a certain river is a random variable with density function \begin{equation*} {\mathrm f}(x)= \begin{cases} \exp(-x)&\text{if \(x\geqslant 0\),}\\ 0&\text{otherwise}. \end{cases} \end{equation*} It costs \(y\) megadollars each year to prepare for flood water of height \(y\) or less. If \(X\leqslant y\) no further costs are incurred but if \(X>y\) the cost of flood damage is \(r+s(X-y)\) megadollars where \(r,s>0\). The total cost \(T\) megadollars is thus given by \begin{equation*} T= \begin{cases} y&\text{if \(X\leqslant y\)},\\ y+r+s(X-y)&\text{if \(X>y\)}. \end{cases} \end{equation*} Show that we can minimise the expected total cost by taking \[y=\ln(r+s).\]
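Solution (sketch): \begin{align*} \mathbb{E}(T) &= y\,\mathbb{P}(X\leqslant y) + \int_y^\infty \left(y+r+s(x-y)\right)e^{-x}\,\mathrm{d}x \\ &= y + \int_y^\infty \left(r+s(x-y)\right)e^{-x}\,\mathrm{d}x \\ &= y + re^{-y} + se^{-y}, \end{align*} using \(\int_y^\infty (x-y)e^{-x}\,\mathrm{d}x = e^{-y}\int_0^\infty ue^{-u}\,\mathrm{d}u = e^{-y}\). Then \[\frac{\mathrm{d}}{\mathrm{d}y}\,\mathbb{E}(T) = 1-(r+s)e^{-y},\] which vanishes exactly when \(y = \ln(r+s)\), and \(\frac{\mathrm{d}^2}{\mathrm{d}y^2}\,\mathbb{E}(T) = (r+s)e^{-y} > 0\), so this gives the minimum expected cost (assuming \(r+s>1\), so that \(y\geqslant 0\)).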
The game of Cambridge Whispers starts with the first participant Albert flipping an unbiased coin and whispering to his neighbour Bertha whether it fell `heads' or `tails'. Bertha then whispers this information to her neighbour, and so on. The game ends when the final player Zebedee whispers to Albert and the game is won, by all players, if what Albert hears is correct. The acoustics are such that the listeners have, independently at each stage, only a probability of 2/3 of hearing correctly what is said. Find the probability that the game is won when there are just three players. By considering the binomial expansion of \((a+b)^n+(a-b)^n\), or otherwise, find a concise expression for the probability \(P\) that the game is won when it is played by \(n\) players each having a probability \(p\) of hearing correctly. % Show in particular that, if \(n\) is even, %\(P(n,1/10) = P(n,9/10)\).% How do you explain this apparent anomaly? To avoid the trauma of a lost game, the rules are now modified to require Albert to whisper to Bertha what he hears from Zebedee, and so keep the game going, if what he hears from Zebedee is not correct. Find the expected total number of times that Albert whispers to Bertha before the modified game ends. \noindent [You may use without proof the fact that \(\sum_1^\infty kx^{k-1}=(1-x)^{-2}\) for \(\vert x\vert<1\).]
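Solution (sketch): with \(n\) players a complete circuit involves \(n\) whispers (the last being Zebedee's to Albert), and the game is won exactly when an even number of them are misheard. With three players and \(p=\frac23\), \[\mathbb{P}(\text{win}) = \left(\frac23\right)^3 + \binom32 \cdot \frac23\left(\frac13\right)^2 = \frac{8}{27}+\frac{6}{27} = \frac{14}{27}.\] In general, putting \(a=p\) and \(b=q=1-p\) in the given expansion cancels the odd terms and leaves exactly the even ones: \[P = \sum_{k\ \mathrm{even}} \binom{n}{k}p^{n-k}q^{k} = \frac{(p+q)^n+(p-q)^n}{2} = \frac{1+(2p-1)^n}{2}.\] In the modified game each circuit begins with Albert whispering to Bertha and succeeds, independently, with probability \(P\), so the number \(W\) of such whispers satisfies \(\mathbb{P}(W=k) = P(1-P)^{k-1}\) and, by the given series, \[\mathbb{E}(W) = \sum_{k=1}^\infty kP(1-P)^{k-1} = \frac{P}{\left(1-(1-P)\right)^2} = \frac1P = \frac{2}{1+(2p-1)^n}.\]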
Traffic enters a tunnel which is 9600 metres long, and in which overtaking is impossible. The number of vehicles which enter in any given time is governed by the Poisson distribution with mean 6 cars per minute. All vehicles travel at a constant speed until forced to slow down on catching up with a slower vehicle ahead. I enter the tunnel travelling at 30 m\(\,\)s\(^{-1}\) and all the other traffic is travelling at 32 m\(\,\)s\(^{-1}\). What is the expected number of vehicles in the queue behind me when I leave the tunnel? Assuming again that I travel at 30 m\(\,\)s\(^{-1}\), but that all the other vehicles are independently equally likely to be travelling at 30 m\(\,\)s\(^{-1}\) or 32 m\(\,\)s\(^{-1}\), find the probability that exactly two vehicles enter the tunnel within 20 seconds of my doing so and catch me up before I leave it. Find also the probability that there are exactly two vehicles queuing behind me when I leave the tunnel. \noindent [Ignore the lengths of the vehicles.]
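Solution (sketch): my transit time is \(9600/30 = 320\) s. A vehicle entering \(t\) seconds after me at \(32\) m\(\,\)s\(^{-1}\) draws level with me when \(32(u-t)=30u\), i.e.\ at time \(u=16t\) after my entry, which happens inside the tunnel if and only if \(t\leqslant 20\). The number \(N\) of vehicles entering within \(20\) s of me is Poisson with mean \(6\cdot\frac{20}{60}=2\), so when all other traffic travels at \(32\) m\(\,\)s\(^{-1}\) the expected queue behind me is \(2\). With mixed speeds, both entrants must be travelling at \(32\) m\(\,\)s\(^{-1}\) in order to catch me, so \[\mathbb{P}(N=2 \text{ and both catch up}) = \frac{e^{-2}2^2}{2!}\cdot\left(\frac12\right)^2 = \frac{e^{-2}}{2}.\] For the last part, note that a \(30\) m\(\,\)s\(^{-1}\) vehicle behind me never catches me and, since overtaking is impossible, permanently blocks every vehicle behind it, while no vehicle entering after \(20\) s can reach me in time. So the queue has size exactly two if and only if the first two entrants within \(20\) s are fast and any third entrant within \(20\) s is slow: \[\mathbb{P}(\text{queue}=2) = \frac14\left(\mathbb{P}(N=2)+\frac12\,\mathbb{P}(N\geqslant3)\right) = \frac14\left(2e^{-2}+\frac{1-5e^{-2}}{2}\right) = \frac{1-e^{-2}}{8}.\]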
An examiner has to assign a mark between 1 and \(m\) inclusive to each of \(n\) examination scripts (\(n\leqslant m\)). He does this randomly, but never assigns the same mark twice. If \(K\) is the highest mark that he assigns, explain why \[ \mathrm{P}(K=k)=\left.\binom{k-1}{n-1}\right/\binom{m}{n} \] for \(n\leqslant k\leqslant m,\) and deduce that \[ \sum_{k=n}^{m}\binom{k-1}{n-1}=\binom{m}{n}\,. \] Find the expected value of \(K\).
Solution: If the highest mark is \(k\), then there are \(n-1\) remaining marks to assign, and they must be chosen from the numbers \(1, 2, \ldots, k-1\), which can be done in \(\binom{k-1}{n-1}\) ways. In total, \(n\) marks are chosen from \(1, 2, \ldots, m\), so \(\displaystyle \mathbb{P}(K=k) = \left.\binom{k-1}{n-1} \right/ \binom{m}{n}\). Since \(K\) must take one of the values \(n, \ldots, m\), we have \begin{align*} && 1 &= \sum_{k=n}^m \mathbb{P}(K=k) \\ &&&= \sum_{k=n}^m \left.\binom{k-1}{n-1} \right/ \binom{m}{n} \\ \Rightarrow && \binom{m}{n} &= \sum_{k=n}^m \binom{k-1}{n-1} \end{align*} For the expectation, using \(\frac{k}{n}\binom{k-1}{n-1} = \binom{k}{n}\) and then the identity just proved with \(m,n\) replaced by \(m+1,n+1\): \begin{align*} && \mathbb{E}(K) &= \sum_{k=n}^m k \cdot \mathbb{P}(K=k) \\ &&&= \sum_{k=n}^m k \cdot \left.\binom{k-1}{n-1} \right/ \binom{m}{n} \\ &&&= n\binom{m}{n}^{-1} \sum_{k=n}^m \frac{k}{n} \cdot \binom{k-1}{n-1} \\ &&&= n\binom{m}{n}^{-1} \sum_{k=n}^m \binom{k}{n} \\ &&&= n\binom{m}{n}^{-1} \sum_{k=n+1}^{m+1} \binom{k-1}{(n+1)-1} \\ &&&= n\binom{m}{n}^{-1} \binom{m+1}{n+1} \\ &&&= \frac{n(m+1)}{n+1} \end{align*}
I have a Penny Black stamp which I want to sell to my friend Jim, but we cannot agree a price. So I put the stamp under one of two cups, jumble them up, and let Jim guess which one it is under. If he guesses correctly, I add a third cup, jumble them up, and let Jim guess again, adding another cup each time he guesses correctly. The price he pays for the stamp is \(\pounds N,\) where \(N\) is the number of cups present when Jim fails to guess correctly. Find \(\mathrm{P}(N=k)\). Show that \(\mathrm{E}(N)=\mathrm{e}\) and calculate \(\mathrm{Var}(N).\)
Solution: \begin{align*} && \mathbb{P}(N = k) &= \mathbb{P}(\text{guesses }k-1\text{ times correctly, then once wrongly})\\ &&&= \frac12 \cdot \frac{1}{3} \cdots \frac{1}{k-1} \cdot \frac{k-1}{k} \\ &&&= \frac{k-1}{k!} \\ &&\mathbb{E}(N) &= \sum_{k=2}^\infty k \cdot \mathbb{P}(N=k) \\ &&&= \sum_{k=2}^{\infty} \frac{k(k-1)}{k!} \\ &&&= \sum_{k=2}^{\infty} \frac{1}{(k-2)!} = \sum_{k=0}^{\infty} \frac{1}{k!} = e \\ && \textrm{Var}(N) &= \mathbb{E}(N^2) - \mathbb{E}(N)^2 \\ && \mathbb{E}(N^2) &= \sum_{k=2}^{\infty} k^2 \mathbb{P}(N=k) \\ &&&= \sum_{k=2}^{\infty} \frac{k^2(k-1)}{k!} \\ &&&= \sum_{k=2}^{\infty} \frac{k}{(k-2)!} = \sum_{k=0}^{\infty} \frac{k+2}{k!} \\ &&&= \sum_{k=1}^{\infty} \frac{1}{(k-1)!} + 2 \sum_{k=0}^{\infty} \frac{1}{k!} = e + 2e = 3e \\ \Rightarrow && \textrm{Var}(N) &= 3e-e^2 \end{align*}
A biased coin, with a probability \(p\) of coming up heads and a probability \(q=1-p\) of coming up tails, is tossed repeatedly. Let \(A\) be the event that the first run of \(r\) successive heads occurs before the first run of \(s\) successive tails. If \(H\) is the event that on the first toss the coin comes up heads and \(T\) is the event that it comes up tails, show that \begin{alignat*}{1} \mathrm{P}(A|H) & =p^{\alpha}+(1-p^{\alpha})\mathrm{P}(A|T),\\ \mathrm{P}(A|T) & =(1-q^{\beta})\mathrm{P}(A|H), \end{alignat*} where \(\alpha\) and \(\beta\) are to be determined. Use these two equations to find \(\mathrm{P}(A|H),\) \(\mathrm{P}(A|T),\) and hence \(\mathrm{P}(A).\)
Solution: Given that the first toss is a head, \(A\) occurs immediately if the next \(r-1\) tosses are all heads; otherwise a tail intervenes first, and we are in the position of having just tossed a tail. Given that the first toss is a tail, \(A\) occurs if and only if a head appears within the next \(s-1\) tosses (so that the run of \(s\) tails fails), after which we have just tossed a head. Hence \(\alpha = r-1\), \(\beta = s-1\) and \begin{align*} && \P(A|H) &= p^{r-1} + (1-p^{r-1}) \cdot \P(A|T) \\ && \P(A|T) &= (1-q^{s-1})\P(A|H) \\ \\ &&\P(A|H) &= p^{r-1}+(1-p^{r-1})(1-q^{s-1})\P(A|H) \\ \Rightarrow && \P(A|H) &= \frac{p^{r-1}}{1-(1-p^{r-1})(1-q^{s-1})} \\ && \P(A|T) &= \frac{(1-q^{s-1})p^{r-1}}{1-(1-p^{r-1})(1-q^{s-1})} \end{align*} Finally, conditioning on the first toss, \begin{align*} \P(A) &= p\,\P(A|H) + q\,\P(A|T) \\ &= \frac{p^{r-1}\left(p+q(1-q^{s-1})\right)}{1-(1-p^{r-1})(1-q^{s-1})} \\ &= \frac{p^{r-1}(1-q^{s})}{1-(1-p^{r-1})(1-q^{s-1})} \end{align*}
By considering the coefficients of \(t^{n}\) in the equation \[(1+t)^{n}(1+t)^{n}=(1+t)^{2n},\] or otherwise, show that \[\binom{n}{0}\binom{n}{n}+\binom{n}{1}\binom{n}{n-1}+\cdots +\binom{n}{r}\binom{n}{n-r}+\cdots+\binom{n}{n}\binom{n}{0} =\binom{2n}{n}.\] The large American city of Triposville is laid out in a square grid with equally spaced streets running east-west and avenues running north-south. My friend is staying at a hotel \(n\) avenues west and \(n\) streets north of my hotel. Both hotels are at intersections. We set out from our own hotels at the same time. We walk at the same speed, taking 1 minute to go from one intersection to the next. Every time I reach an intersection I go north with probability \(1/2\) or west with probability \(1/2\). Every time my friend reaches an intersection she goes south with probability \(1/2\) or east with probability \(1/2\). Our choices are independent of each other and of our previous decisions. Indicate by a sketch or by a brief description the set of points where we could meet. Find the probability that we meet. Suppose that I oversleep and leave my hotel \(2k\) minutes later than my friend leaves hers, where \(k\) is an integer and \(0\leqslant 2k\leqslant n\). Find the probability that we meet. Have you any comment? If \(n=1\) and I leave my hotel \(1\) minute later than my friend leaves hers, what is the probability that we meet and why?
Solution: \begin{align*} (1+t)^{n}(1+t)^{n}&=(1+t)^{2n} \end{align*} Comparing the coefficients of \(t^n\) on the two sides, \begin{align*} \sum_{k=0}^n \underbrace{\binom{n}{k}}_{t^k\text{ from left bracket}} \underbrace{\binom{n}{n-k}}_{t^{n-k}\text{ from right bracket}} &= \binom{2n}{n} \end{align*}
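For the remaining parts (a sketch): place my hotel at the origin with north and west as my positive directions, so that my friend's hotel is at \((-n,n)\). We can only meet after each of us has walked for exactly \(n\) minutes, at one of the \(n+1\) intersections on the diagonal from \((0,n)\) to \((-n,0)\): if I have by then gone west \(W\) times I am at \((-W,\,n-W)\), and if she has gone east \(E\) times she is at \((E-n,\,E)\), and these coincide precisely when \(E=n-W\). Hence \[\mathbb{P}(\text{meet}) = \sum_{W=0}^n \binom{n}{W}2^{-n}\binom{n}{n-W}2^{-n} = \binom{2n}{n}4^{-n}\] by the identity above. If I leave \(2k\) minutes late, a meeting can only occur after \(n-k\) of my steps and \(n+k\) of hers, and the same argument together with Vandermonde's identity \(\sum_W \binom{n-k}{W}\binom{n+k}{n-W} = \binom{2n}{n}\) gives \(\mathbb{P}(\text{meet}) = \binom{2n}{n}4^{-n}\) again: perhaps surprisingly, the delay does not change the probability. If \(n=1\) and I am one minute late, we can never be at the same intersection at the same time (the parity of the distance between us is wrong), but we can meet in the middle of a block by walking along it towards each other: my friend must first walk to \((0,1)\) or \((-1,0)\), then walk towards the origin during my first minute while I walk along that same block towards her, so \[\mathbb{P}(\text{meet}) = 2\cdot\frac12\cdot\frac12\cdot\frac12 = \frac14.\]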
Let \(X\) be a random variable which takes only the finite number of different possible real values \(x_{1},x_{2},\ldots,x_{n}.\) Define the expectation \(\mathbb{E}(X)\) and the variance \(\var(X)\) of \(X\). Show that, if \(a\) and \(b\) are real numbers, then \(\E(aX+b)=a\E(X)+b\) and express \(\var(aX+b)\) similarly in terms of \(\var(X)\). Let \(\lambda\) be a positive real number. By considering the contribution to \(\var(X)\) of those \(x_{i}\) for which \(\left|x_{i}-\E(X)\right|\geqslant\lambda,\) or otherwise, show that \[ \mathrm{P}\left(\left|X-\E(X)\right|\geqslant\lambda\right)\leqslant\frac{\var(X)}{\lambda^{2}}\,. \] Let \(k\) be a real number satisfying \(k\geqslant\lambda.\) If \(\left|x_{i}-\E(X)\right|\leqslant k\) for all \(i\), show that \[ \mathrm{P}\left(\left|X-\E(X)\right|\geqslant\lambda\right)\geqslant\frac{\var(X)-\lambda^{2}}{k^{2}-\lambda^{2}}\,. \]
Solution: Definition: \(\displaystyle \mathbb{E}(X) = \sum_{i=1}^n x_i \mathbb{P}(X = x_i)\) Definition: \(\displaystyle \mathrm{Var}(X) = \sum_{i=1}^n (x_i-\mathbb{E}(X))^2 \mathbb{P}(X = x_i)\) Claim: \(\mathbb{E}(aX+b) = a\mathbb{E}(X)+b\) Proof: \begin{align*} \mathbb{E}(aX+b) &= \sum_{i=1}^n (ax_i+b) \mathbb{P}(X = x_i) \\ &= a\sum_{i=1}^n x_i \mathbb{P}(X = x_i) + b\sum_{i=1}^n \mathbb{P}(X = x_i)\\ &= a \mathbb{E}(X) + b \end{align*} Claim: \(\mathrm{Var}(aX+b) = a^2 \mathrm{Var}(X)\) Proof: \(\displaystyle \mathrm{Var}(aX+b) = \sum_{i=1}^n \left(ax_i+b-a\mathbb{E}(X)-b\right)^2 \mathbb{P}(X = x_i) = a^2\sum_{i=1}^n (x_i-\mathbb{E}(X))^2\,\mathbb{P}(X = x_i) = a^2\mathrm{Var}(X)\) Claim: \(\mathrm{P}\left(\left|X-\mathrm{E}(X)\right|\geqslant\lambda\right)\leqslant\frac{\mathrm{Var}(X)}{\lambda^{2}}\) Proof: \begin{align*} \mathrm{Var}(X) &= \sum_{i=1}^n (x_i-\mathbb{E}(X))^2 \mathbb{P}(X = x_i) \\ &\geq \sum_{|x_i - \mathbb{E}(X)| \geq \lambda} (x_i-\mathbb{E}(X))^2 \mathbb{P}(X = x_i) \\ &\geq \sum_{|x_i - \mathbb{E}(X)| \geq \lambda} \lambda^2 \mathbb{P}(X = x_i) \\ &= \lambda^2 \sum_{|x_i - \mathbb{E}(X)| \geq \lambda} \mathbb{P}(X = x_i) \\ &= \lambda^2 \mathrm{P}\left(\left|X-\mathrm{E}(X)\right|\geqslant\lambda\right) \end{align*} Claim: \[ \mathrm{P}\left(\left|X-\mathrm{E}(X)\right|\geqslant\lambda\right)\geqslant\frac{\mathrm{Var}(X)-\lambda^{2}}{k^{2}-\lambda^{2}}\,. \] Proof: \begin{align*} && \mathrm{Var}(X) &= \sum_{i=1}^n (x_i-\mathbb{E}(X))^2 \mathbb{P}(X = x_i) \\ &&&= \sum_{|x_i - \mathbb{E}(X)| \geq \lambda} (x_i-\mathbb{E}(X))^2 \mathbb{P}(X = x_i) + \sum_{|x_i - \mathbb{E}(X)| < \lambda} (x_i-\mathbb{E}(X))^2 \mathbb{P}(X = x_i) \\ &&& \leq \sum_{|x_i - \mathbb{E}(X)| \geq \lambda} k^2 \mathbb{P}(X = x_i) + \sum_{|x_i - \mathbb{E}(X)| < \lambda} \lambda^2 \mathbb{P}(X = x_i) \\ &&&= k^2 \mathbb{P}\left(\left|X-\mathrm{E}(X)\right|\geqslant\lambda\right) + \lambda^2 \mathbb{P}\left(\left|X-\mathrm{E}(X)\right| < \lambda\right) \\ &&&= k^2 \mathbb{P}\left(\left|X-\mathrm{E}(X)\right|\geqslant\lambda\right) + \lambda^2\left(1- \mathbb{P}\left(\left|X-\mathrm{E}(X)\right| \geqslant \lambda\right)\right) \\ &&&= (k^2 - \lambda^2) \mathbb{P}\left(\left|X-\mathrm{E}(X)\right|\geqslant\lambda\right) + \lambda^2 \\ \Rightarrow&& \frac{\mathrm{Var}(X)-\lambda^2}{k^2 - \lambda^2} &\leq \mathbb{P}\left(\left|X-\mathrm{E}(X)\right|\geqslant\lambda\right) \end{align*} [Note: this result is known as Chebyshev's inequality, and is an important starting point for understanding the behaviour of the tails of random variables.]
Whenever I go cycling I start with my bike in good working order. However if all is well at time \(t\), the probability that I get a puncture in the small interval \((t,t+\delta t)\) is \(\alpha\,\delta t.\) How many punctures can I expect to get on a journey during which my total cycling time is \(T\)? When I get a puncture I stop immediately to repair it and the probability that, if I am repairing it at time \(t\), the repair will be completed in time \((t,t+\delta t)\) is \(\beta\,\delta t.\) If \(p(t)\) is the probability that I am repairing a puncture at time \(t\), write down an equation relating \(p(t)\) to \(p(t+\delta t)\), and derive from this a differential equation relating \(p'(t)\) and \(p(t).\) Show that \[ p(t)=\frac{\alpha}{\alpha+\beta}(1-\mathrm{e}^{-(\alpha+\beta)t}) \] satisfies this differential equation with the appropriate initial condition. Find an expression, involving \(\alpha,\beta\) and \(T\), for the time expected to be spent mending punctures during a journey of total time \(T\). Hence, or otherwise, show that, the fraction of the journey expected to be spent mending punctures is given approximately by \[ \quad\frac{\alpha T}{2}\quad\ \mbox{ if }(\alpha+\beta)T\text{ is small, } \] and by \[ \frac{\alpha}{\alpha+\beta}\quad\mbox{ if }(\alpha+\beta)T\text{ is large.} \]
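Solution (sketch): while I am riding, punctures occur as a Poisson process of rate \(\alpha\), so in cycling time \(T\) I expect \(\alpha T\) punctures. Conditioning on the state at time \(t\), \[p(t+\delta t) = p(t)(1-\beta\,\delta t) + \left(1-p(t)\right)\alpha\,\delta t,\] so, letting \(\delta t \to 0\), \[p'(t) = \alpha - (\alpha+\beta)\,p(t).\] The given \(p(t)\) has \(p(0)=0\), as required, and \(p'(t) = \alpha e^{-(\alpha+\beta)t} = \alpha - (\alpha+\beta)p(t)\). The expected time spent mending punctures in a journey of total time \(T\) is \[\int_0^T p(t)\,\mathrm{d}t = \frac{\alpha}{\alpha+\beta}\left(T - \frac{1-e^{-(\alpha+\beta)T}}{\alpha+\beta}\right),\] so, writing \(x = (\alpha+\beta)T\), the expected fraction is \[\frac{\alpha}{\alpha+\beta}\left(1 - \frac{1-e^{-x}}{x}\right).\] For small \(x\), \(\frac{1-e^{-x}}{x} \approx 1 - \frac{x}{2}\), giving approximately \(\frac{\alpha}{\alpha+\beta}\cdot\frac{x}{2} = \frac{\alpha T}{2}\); for large \(x\), \(\frac{1-e^{-x}}{x}\to 0\), giving \(\frac{\alpha}{\alpha+\beta}\).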
A school has \(n\) pupils, of whom \(r\) play hockey, where \(n\geqslant r\geqslant2.\) All \(n\) pupils are arranged in a row at random.
Solution: