Solution:
Let \(X\) be a random variable with a Laplace distribution, so that its probability density function is given by \[ \f(x) = \frac12 \e^{-\vert x \vert }\;, \text{ \(-\infty < x < \infty \)}. \tag{\(*\)} \] Sketch \(\f(x)\). Show that its moment generating function \({\rm M}_X(\theta)\) is given by \({\rm M}_X(\theta)= (1-\theta^2)^{-1}\) and hence find the variance of \(X\). A frog is jumping up and down, attempting to land on the same spot each time. In fact, in each of \(n\) successive jumps he always lands on a fixed straight line but when he lands from the \(i\)th jump (\(i=1\,,2\,,\ldots\,,n\)) his displacement from the point from which he jumped is \(X_i\,\)cm, where \(X_i\) has the distribution \((*)\). His displacement from his starting point after \(n\) jumps is \(Y\,\)cm (so that \(Y=\sum\limits_{i=1}^n X_i\)). Each jump is independent of the others. Obtain the moment generating function for \(Y/ \sqrt {2n}\) and, by considering its logarithm, show that this moment generating function tends to \(\exp(\frac12\theta^2)\) as \(n\to\infty\). Given that \(\exp(\frac12\theta^2)\) is the moment generating function of the standard Normal random variable, estimate the least number of jumps such that there is a \(5\%\) chance that the frog lands 25 cm or more from his starting point.
Solution:
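No solution is recorded above. As a sanity check (my own, not part of any official solution), a short simulation confirms that the density \((*)\) has variance \(2\), matching \({\rm M}''_X(0)\), and the final estimate follows from the two-sided \(5\%\) point of the Normal distribution, \(1.96\):

```python
import math
import random

random.seed(0)

# Sample from f(x) = (1/2) e^{-|x|}: an Exp(1) variable with a random sign.
def laplace_sample():
    x = -math.log(1.0 - random.random())   # Exp(1) by inverse transform
    return x if random.random() < 0.5 else -x

N = 200_000
samples = [laplace_sample() for _ in range(N)]
mean = sum(samples) / N
var = sum(s * s for s in samples) / N - mean ** 2   # should be near M''(0) = 2

# Least n with P(|Y| >= 25) = 0.05, where Y / sqrt(2n) is approximately N(0, 1):
# 25 / sqrt(2n) = 1.96  =>  n = 25^2 / (2 * 1.96^2), rounded up.
n_least = math.ceil(25 ** 2 / (2 * 1.96 ** 2))
print(round(var, 2), n_least)
```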
A team of \(m\) players, numbered from \(1\) to \(m\), puts on a set of \(m\) shirts, similarly numbered from \(1\) to \(m\). The players change in a hurry, so that the shirts are assigned to them randomly, one to each player. Let \(C_i\) be the random variable that takes the value \(1\) if player \(i\) is wearing shirt \(i\), and \(0\) otherwise. Show that \(\mathrm{E}\left(C_1\right)={1 \over m}\) and find \(\var \left(C_1\right)\) and \(\mathrm{Cov}\left(C_1 \, , \; C_2 \right) \,\). Let \(\, N = C_1 + C_2 + \cdots + C_m \,\) be the random variable whose value is the number of players who are wearing the correct shirt. Show that \(\mathrm{E}\left(N\right)= \var \left(N\right) = 1 \,\). Explain why a Normal approximation to \(N\) is not likely to be appropriate for any \(m\), but that a Poisson approximation might be reasonable. In the case \(m = 4\), find, by listing equally likely possibilities or otherwise, the probability that no player is wearing the correct shirt and verify that an appropriate Poisson approximation to \(N\) gives this probability with a relative error of about \(2\%\). [Use \(\e \approx 2\frac{72}{100} \,\).]
Solution: There are \(m!\) different ways of assigning the shirts, and in \((m-1)!\) of them player \(1\) gets their own shirt, i.e. \(\mathbb{E}(C_1) = \mathbb{P}(\text{player }1\text{ gets own shirt}) = \frac{(m-1)!}{m!} = \frac{1}{m}\). \(\var(C_1) = \mathbb{E}(C_1^2) - [\mathbb{E}(C_1)]^2 = \frac{1}{m} - \frac{1}{m^2} = \frac{m-1}{m^2}\). For two given players, there are \((m-2)!\) assignments in which both get their own shirts, therefore \(\textrm{Cov}(C_1,C_2) = \mathbb{E}(C_1C_2) - \mathbb{E}(C_1)\mathbb{E}(C_2) = \frac{(m-2)!}{m!} - \frac{1}{m^2} = \frac{1}{m(m-1)} - \frac{1}{m^2} = \frac{m-(m-1)}{m^2(m-1)} = \frac{1}{m^2(m-1)}\). \begin{align*} \mathbb{E}(N) &= \mathbb{E}(C_1 + C_2 + \cdots + C_m) \\ &= \mathbb{E}(C_1) + \mathbb{E}(C_2) + \cdots + \mathbb{E}(C_m) \\ &= \frac{1}{m} + \frac{1}{m} +\cdots+ \frac1m \\ &= 1 \\ \\ \var(N) &= \sum_{r=1}^m \var(C_r) + 2\sum_{r=1}^{m-1} \sum_{s=r+1}^{m} \textrm{Cov}(C_r,C_s) \\ &= m \cdot \frac{m-1}{m^2} + 2 \cdot \frac{m(m-1)}{2} \cdot \frac{1}{m^2(m-1)} \\ &=\frac{m-1}{m} + \frac{1}{m} \\ &= 1 \end{align*} If we were to take a Normal approximation, we would want \(N(1,1)\), but this would, for instance, make \(-1\) correct shirts as likely as \(3\), which is clearly a bad model. A Poisson distribution is a much more plausible model: a Poisson random variable has mean equal to its variance (both equal to the parameter), and when \(m\) is large the covariance between shirts is very small, so correct shirts behave almost like independent rare events. The assignments leaving no player with the correct shirt are \begin{align*} BADC \\ BCDA \\ BDAC \\ CADB \\ CDAB\\ CDBA \\ DABC\\ DCAB \\ DCBA \end{align*} i.e. with \(4\) players the probability that no player wears their own shirt is \(\frac{9}{24}\). \(Po(1)\) would give this probability as \(e^{-1}\), giving a relative error of: \begin{align*} \frac{e^{-1}-\frac{9}{24}}{\frac9{24}} &\approx \frac{\frac{100}{272} - \frac{9}{24}}{\frac9{24}} \\ &= -\frac{1}{51} \\ &\approx -2\% \end{align*}
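The \(m=4\) listing and the quoted relative error are easy to confirm by brute force; a quick check of my own, not part of the solution above:

```python
from itertools import permutations
from math import exp

m = 4
perms = list(permutations(range(m)))
# derangements: assignments in which no player gets their own shirt
derangements = [p for p in perms if all(p[i] != i for i in range(m))]
p_exact = len(derangements) / len(perms)   # 9/24
p_poisson = exp(-1)                        # Po(1) gives P(N = 0) = e^{-1}
rel_err = (p_poisson - p_exact) / p_exact
print(len(derangements), round(rel_err, 3))
```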
The random variable \(X\) takes the values \(k=1\), \(2\), \(3\), \(\dotsc\), and has probability distribution $$ \P(X=k)= A{{{\lambda}^k\e^{-{\lambda}}} \over {k!}}\,, $$ where \(\lambda \) is a positive constant. Show that \(A = (1-\e^{-\lambda})^{-1}\,\). Find the mean \({\mu}\) in terms of \({\lambda}\) and show that $$ \var(X) = {\mu}(1-{\mu}+{\lambda})\;. $$ Deduce that \({\lambda} < {\mu} < 1+{\lambda}\,\). Use a normal approximation to find the value of \(P(X={\lambda})\) in the case where \({\lambda}=100\,\), giving your answer to 2 decimal places.
Solution: Let \(Y \sim Po(\lambda)\). \begin{align*} && 1 &= \sum_{k=1}^\infty \mathbb{P}(X = k ) \\ &&&= \sum_{k=1}^\infty A \frac{\lambda^k e^{-\lambda}}{k!}\\ &&&= Ae^{-\lambda} \sum_{k=1}^{\infty} \frac{\lambda^k}{k!} \\ &&&= Ae^{-\lambda} \left (e^{\lambda}-1 \right) \\ \Rightarrow && A &= (1-e^{-\lambda})^{-1} \\ \\ && \E[X] &= \sum_{k=1}^{\infty} k \cdot \mathbb{P}(X=k) \\ &&&= A\sum_{k=1}^{\infty} k \frac{\lambda^k e^{-\lambda}}{k!} \\ &&&= A\E[Y] = A\lambda = \lambda(1-e^{-\lambda})^{-1} \\ \\ && \var[X] &= \E[X^2] - (\E[X])^2 \\ &&&= A\sum_{k=1}^{\infty} k^2 \frac{\lambda^k e^{-\lambda}}{k!} - \mu^2 \\ &&&= A\E[Y^2] - \mu^2 \\ &&&= A(\var[Y]+\lambda^2) - \mu^2 \\ &&&= A(\lambda + \lambda^2) - \mu^2 \\ &&&= A\lambda(1+\lambda) - \mu^2 \\ &&&= \mu(1+\lambda - \mu) \end{align*} Since \(A > 1\) we must have \(\mu > \lambda\) and since \(\var[X] > 0\) we must have \(1 + \lambda > \mu\) as required. If \(\lambda = 100\), then \(A \approx 1\) and \(P(X=\lambda) \approx P(Y = \lambda)\) and \(Y \approx N(\lambda, \lambda)\) so the value is approximately \(\displaystyle \int_{-\frac12}^{\frac12} \frac{1}{\sqrt{2\pi \lambda}} e^{-\frac{x^2}{2\lambda}} \d x \approx \frac{1}{\sqrt{200\pi}} = \frac{1}{\sqrt{628.\ldots}} \approx \frac{1}{25} = 0.04 \)
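The final approximation can be checked against the exact probability mass function. The following sketch (my own check, working with logs of factorials to avoid overflow) compares \(P(X=100)\) with \(1/\sqrt{200\pi}\):

```python
import math

lam = 100
A = 1.0 / (1.0 - math.exp(-lam))                # normalising constant, ~1 here
# log of the Poisson pmf at k = lam:  lam*log(lam) - lam - log(lam!)
log_pmf = lam * math.log(lam) - lam - math.lgamma(lam + 1)
p_exact = A * math.exp(log_pmf)
p_normal = 1.0 / math.sqrt(2 * math.pi * lam)   # width-1 strip of N(lam, lam)
print(round(p_exact, 4), round(p_normal, 4))
```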
On \(K\) consecutive days each of \(L\) identical coins is thrown \(M\) times. For each coin, the probability of throwing a head in any one throw is \(p\) (where \(0 < p < 1\)). Show that the probability that on exactly \(k\) of these days more than \(l\) of the coins will each produce fewer than \(m\) heads can be approximated by \[ {K \choose k}q^k(1-q)^{K-k}, \] where \[ q=\Phi\left( \frac{2h-2l-1}{2\sqrt{h} }\right), \ \ \ \ \ \ h=L\Phi\left( \frac{2m-1-2Mp}{2\sqrt{ Mp(1-p)}}\right) \] and \(\Phi(\cdot)\) is the cumulative distribution function of a standard normal variate. Would you expect this approximation to be accurate in the case \(K=7\), \(k=2\), \(L=500\), \(l=4\), \(M=100\), \(m=48\) and \(p=0.6\;\)?
Solution: Let \(H_i\) be the number of heads thrown by the \(i\)th coin on a given day. Then \(H_i \sim B(M,p)\), and the probability that a given coin produces fewer than \(m\) heads is \(p_h = \P(H_i < m)\). Let \(C\) be the number of coins producing fewer than \(m\) heads on a given day, so that \(C \sim B(L, p_h)\). The probability that more than \(l\) of the coins produce fewer than \(m\) heads is therefore \(\P(C > l)\). Finally, the probability that on exactly \(k\) days more than \(l\) of the coins will produce fewer than \(m\) heads is: \[ \binom{K}{k} \cdot \P(C > l)^k \cdot (1-\P(C > l))^{K-k} \] Let's start by assuming that all our Binomials can be approximated by a normal distribution. \(B(M,p) \approx N(Mp, Mp(1-p))\) and so: \begin{align*} p_h &= \P(H_i < m) \\ &\approx \P( \sqrt{Mp(1-p)}Z+Mp < m-\frac12) \\ &= \P \l Z < \frac{2m-2Mp-1}{2\sqrt{Mp(1-p)}} \r \\ &= \Phi\l\frac{2m-2Mp-1}{2\sqrt{Mp(1-p)}} \r \end{align*} \(B(L, p_h) \approx B \l L, \P \l Z < \frac{2m-2Mp-1}{2\sqrt{Mp(1-p)}} \r\r = B(L, \frac{h}{L}) \approx N(h, \frac{h(L-h)}{L})\) Therefore \begin{align*} \P(C > l) &= 1-\P(C \leq l) \\ &\approx 1- \P \l \sqrt{\frac{h(L-h)}{L}} Z + h \leq l+\frac12 \r \\ &= 1 - \P \l Z \leq \frac{2l-2h+1}{2\sqrt{\frac{h(L-h)}{L}}}\r \\ &= 1- \Phi\l \frac{2l-2h+1}{2\sqrt{\frac{h(L-h)}{L}}} \r \\ &= \Phi\l \frac{2h-2l-1}{2\sqrt{\frac{h(L-h)}{L}}} \r \end{align*} If we can approximate \(\sqrt{1-\frac{h}{L}}\) by \(1\) then we obtain the approximation in the question. Alternatively, \(B(L, \frac{h}{L}) \approx Po(h)\) and \(Po(h) \approx N(h,h)\) so we obtain: \begin{align*} \P(C > l) &= 1-\P(C \leq l) \\ &\approx 1 - \P(\sqrt{h} Z +h < l + \frac12) \\ &= 1 - \P \l Z < \frac{2l-2h+1}{2\sqrt{h}} \r \\ &= \Phi \l \frac{2h - 2l -1}{2\sqrt{h}}\r \end{align*} as required. [I think this is what the examiners expected]. 
Considering the case \(K=7\), \(k=2\), \(L=500\), \(l=4\), \(M=100\), \(m=48\) and \(p=0.6\): the accuracy of the first normal approximation depends on \(Mp\) and \(M(1-p)\) being large. They are \(60\) and \(40\) respectively, so this is likely a good approximation. It gives \begin{align*} h &= 500 \cdot \Phi \l \frac{2 \cdot 48 - 2 \cdot 60 - 1}{2\sqrt{24}} \r \\ &= 500 \cdot \Phi \l \frac{-25}{2 \sqrt{24}} \r \\ &\approx 500 \cdot \Phi (-2.5) \\ &= 500 \cdot 0.0062 \\ &\approx 3.1 \end{align*} The normal approximation to the second binomial will be good if \(L \cdot \frac{h}{L} = h \approx 3.1\) is large, but this is quite small. Therefore, we shouldn't expect this to be a good approximation. However, since \(m = 48\) is far from the mean (in a normalised sense), we might expect the percentage error to be large. [Alternatively, using what I expect was the intended approach:] The approximation \(B(L, \frac{h}{L}) \approx Po(h)\) is acceptable since \(L>50\) and \(h < 5\). The approximation \(Po(h) \approx N(h,h)\) is not acceptable since \(h\) is small (in particular \(h < 15\)). Finally, we can compute all these values exactly using a modern calculator. \begin{array}{l|cc} & \text{correct} & \text{approx} \\ \hline p_h & 0.005760\ldots & 0.005362\ldots \\ \P(C > l) & 0.164522\ldots & 0.133319\ldots \\ \text{ans} & 0.231389\ldots & 0.182516\ldots \end{array} We can also see how the errors propagate, by doing the calculations assuming the previous steps are correct, and also including the Poisson step. 
\begin{array}{lccc} & \text{correct} & \text{approx} & \text{using approx } p_h \\ \hline p_h & 0.005760\ldots & 0.005362\ldots & - \\ \P(C > l)\quad [Po(h)] & 0.164522\ldots & 0.165044\ldots & 0.134293\ldots \\ \P(C > l)\quad [N(h,h)] & 0.164522\ldots & 0.169953\ldots & 0.133319\ldots \\ \P(C > l)\quad [N(h,h(1-\frac{h}{L}))] & 0.164522\ldots & 0.169255\ldots & 0.132677\ldots \\ \text{ans} & 0.231389\ldots & 0.231389\ldots \end{array} By doing this, we discover that the largest errors are coming not from the second approximation but from the small absolute (though large relative) error in the first approximation. This is, in fact, a coincidence, as we can see by investigating the specific values being used.
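The "correct" column of the tables above can be reproduced directly from the two binomial distributions; a sketch of my own check, using exact binomial sums rather than a calculator:

```python
from math import comb

K, k, L, l, M, m, p = 7, 2, 500, 4, 100, 48, 0.6

# P(one coin shows fewer than m heads), with H ~ B(M, p)
p_h = sum(comb(M, j) * p**j * (1 - p)**(M - j) for j in range(m))
# P(more than l of the L coins do so), with C ~ B(L, p_h)
q = 1 - sum(comb(L, j) * p_h**j * (1 - p_h)**(L - j) for j in range(l + 1))
# P(this happens on exactly k of the K days)
ans = comb(K, k) * q**k * (1 - q)**(K - k)
print(round(p_h, 6), round(q, 6), round(ans, 6))
```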
Two coins \(A\) and \(B\) are tossed together. \(A\) has probability \(p\) of showing a head, and \(B\) has probability \(2p\), independent of \(A\), of showing a head, where \(0 < p < \frac12\). The random variable \(X\) takes the value 1 if \(A\) shows a head and it takes the value \(0\) if \(A\) shows a tail. The random variable \(Y\) takes the value 1 if \(B\) shows a head and it takes the value \(0\) if \(B\) shows a tail. The random variable \(T\) is defined by \[ T= \lambda X + {\textstyle\frac12} (1-\lambda)Y. \] Show that \(\E(T)=p\) and find an expression for \(\var(T)\) in terms of \(p\) and \(\lambda\). Show that as \(\lambda\) varies, the minimum of \(\var(T)\) occurs when \[ \lambda =\frac{1-2p}{3-4p}\;. \] The two coins are tossed \(n\) times, where \(n>30\), and \(\overline{T}\) is the mean value of \(T\). Let \(b\) be a fixed positive number. Show that the maximum value of \(\P\big(\vert \overline{T}-p\vert < b\big)\) as \(\lambda\) varies is approximately \(2\Phi(b/s)-1\), where \(\Phi\) is the cumulative distribution function of a standard normal variate and \[ s^2= \frac{p(1-p)(1-2p)}{(3-4p)n}\;. \]
Solution: \begin{align*} && \E[T] &= \E[\lambda X + \tfrac12(1-\lambda)Y] \\ &&&= \lambda \E[X] + \tfrac12(1-\lambda) \E[Y] \\ &&&= \lambda p + \tfrac12 (1-\lambda) 2p \\ &&&= p \\ \\ && \var[T] &= \var[\lambda X + \tfrac12(1-\lambda)Y] \\ &&&= \lambda^2 \var[X] + \tfrac14(1-\lambda)^2 \var[Y] \\ &&&= \lambda^2 p(1-p) + \tfrac14(1-\lambda)^22p(1-2p) \\ &&&= p(\lambda^2 + \tfrac12(1-\lambda)^2) -p^2(\lambda^2+(1-\lambda)^2)\\ &&&= p(\tfrac32\lambda^2 - \lambda + \tfrac12) -p^2(2\lambda^2 -2\lambda + 1) \end{align*} Differentiating \(\var[T]\) with respect to \(\lambda\), noting it is a quadratic in \(\lambda\) with positive leading coefficient (so the stationary point is a minimum), and setting the derivative to zero, we get \begin{align*} && \frac{\d \var[T]}{\d \lambda} &= p(2\lambda -(1-\lambda)) - p^2(2 \lambda -2(1-\lambda)) \\ &&&= p(3\lambda - 1)-p^2(4\lambda - 2) \\ \Rightarrow && \lambda(4p-3) &= 2p-1 \\ \Rightarrow && \lambda &= \frac{1-2p}{3-4p} \end{align*} By the central limit theorem \(\overline{T}\) is approximately \(N(p, \frac{\sigma^2}{n})\), where \(\sigma^2\) is the minimised variance; in particular, \(\mathbb{P}(|\overline{T} - p| < b) = \mathbb{P}\left( \left\lvert \frac{\overline{T}-p}{\sigma/\sqrt{n}} \right\rvert < \frac{b}{\sigma/\sqrt{n}}\right) = \mathbb{P}(|Z| < \frac{b\sqrt{n}}{\sigma}) = 2\Phi(b/s) - 1\) where \(s = \frac{\sigma}{\sqrt{n}}\) so \begin{align*} && s^2 &= \frac1n \sigma^2 \\ &&&= \frac1n \left ( \left (\left ( \frac{1-2p}{3-4p} \right)^2 + \tfrac12 \left (1-\frac{1-2p}{3-4p} \right)^2 \right)p - \left ( \left ( \frac{1-2p}{3-4p} \right)^2 + \left (1-\frac{1-2p}{3-4p} \right)^2\right)p^2 \right) \\ &&&= \frac1n \left ( \left (\left ( \frac{1-2p}{3-4p} \right)^2 + \tfrac12 \left (\frac{2-2p}{3-4p} \right)^2 \right)p - \left ( \left ( \frac{1-2p}{3-4p} \right)^2 + \left (\frac{2-2p}{3-4p} \right)^2\right)p^2 \right) \\ &&&= \frac{p}{n(3-4p)^2} \left ( (1 -4p + 4p^2 + 2-4p+2p^2) - (1-4p+4p^2+4-8p+4p^2)p \right) \\ &&&= \frac{p}{n(3-4p)^2} \left (3-13p+18p^2-8p^3 \right) \\ &&&= \frac{p}{n(3-4p)^2} (3-4p)(1-2p)(1-p) \\ &&&= \frac{p(1-p)(1-2p)}{(3-4p)n} \end{align*}
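As a numerical sanity check (mine, for the single illustrative value \(p=0.3\)), a grid search over \(\lambda\) confirms both the minimising \(\lambda\) and that the minimised variance equals \(p(1-p)(1-2p)/(3-4p)\), the \(n=1\) case of \(s^2\):

```python
p = 0.3   # any value in (0, 1/2) would do

def var_T(lam):
    # Var(T) = lam^2 Var(X) + (1/4)(1-lam)^2 Var(Y), with X ~ B(1,p), Y ~ B(1,2p)
    return lam**2 * p * (1 - p) + 0.25 * (1 - lam)**2 * 2 * p * (1 - 2 * p)

lam_star = (1 - 2 * p) / (3 - 4 * p)                  # claimed minimiser
lam_grid = min((i / 10000 for i in range(10001)), key=var_T)
v_formula = p * (1 - p) * (1 - 2 * p) / (3 - 4 * p)   # n = 1 case of s^2
print(round(lam_star, 4), round(lam_grid, 4))
```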
The random variables \(X_1\), \(X_2\), \(\ldots\) , \(X_{2n+1}\) are independently and uniformly distributed on the interval \(0 \le x \le 1\). The random variable \(Y\) is defined to be the median of \(X_1\), \(X_2\), \(\ldots\) , \(X_{2n+1}\). Given that the probability density function of \(Y\) is \(\g(y)\), where \[ \mathrm{g}(y)=\begin{cases} ky^{n}(1-y)^{n} & \mbox{ if }0\leqslant y\leqslant1\\ 0 & \mbox{ otherwise} \end{cases} \] use the result $$ \int_0^1 {y^{r}}{{(1-y)}^{s}}\,\d y = \frac{r!s!}{(r+s+1)!} $$ to show that \(k={(2n+1)!}/{{(n!)}^2}\), and evaluate \(\E(Y)\) and \({\rm Var}\,(Y)\). Hence show that, for any given positive number \(d\), the inequality $$ {\P\left({\vert {Y - 1/2} \vert} < {d/{\sqrt {n}}} \right)} < {\P\left({\vert {{\bar X} - 1/2} \vert} < {d/{\sqrt {n}}} \right)} $$ holds provided \(n\) is large enough, where \({\bar X}\) is the mean of \(X_1\), \(X_2\), \(\ldots\) , \(X_{2n+1}\). [You may assume that \(Y\) and \(\bar X\) are normally distributed for large \(n\).]
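No solution is recorded for this question. The quantities it asks for (by my own working from the given density and integral: \(\E(Y)=\frac12\) and \(\var(Y)=\frac{1}{4(2n+3)}\), against \(\var(\bar X)=\frac{1}{12(2n+1)}\)) can be checked by simulation; a sketch of mine, for \(n=10\):

```python
import random
import statistics

random.seed(1)
n = 10                      # 2n + 1 = 21 observations per sample
trials = 50_000
medians, means = [], []
for _ in range(trials):
    xs = sorted(random.random() for _ in range(2 * n + 1))
    medians.append(xs[n])                  # the middle order statistic
    means.append(sum(xs) / (2 * n + 1))

var_median = statistics.pvariance(medians)
var_mean = statistics.pvariance(means)
print(round(var_median, 4), round(var_mean, 4))
```

For \(n=10\) the predicted values are \(1/92 \approx 0.0109\) and \(1/252 \approx 0.0040\): the median is more variable than the mean, which is the direction the required inequality needs.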
An experiment produces a random number \(T\) uniformly distributed on \([0,1]\). Let \(X\) be the larger root of the equation \[x^{2}+2x+T=0.\] What is the probability that \(X>-1/3\)? Find \(\mathbb{E}(X)\) and show that \(\mathrm{Var}(X)=1/18\). The experiment is repeated independently 800 times, generating the larger roots \(X_{1}, X_{2}, \dots, X_{800}\). If \[Y=X_{1}+X_{2}+\dots+X_{800},\] find an approximate value for \(K\) such that \[\mathrm{P}(Y\leqslant K)=0.08.\]
Solution: Completing the square, \((x+1)^2+T-1 = 0\), so the larger root is \(X = -1 + \sqrt{1-T}\). \begin{align*} && \mathbb{P}(X > -1/3) &= \mathbb{P}(-1 + \sqrt{1-T} > -1/3) \\ &&&= \mathbb{P}(\sqrt{1-T} > 2/3)\\ &&&= \mathbb{P}(1-T > 4/9)\\ &&&= \mathbb{P}\left (T < \frac59 \right) = \frac59 \end{align*} Similarly, for \(t \in [-1,0]\) \begin{align*} && \mathbb{P}(X \leq t) &= \mathbb{P}(-1 + \sqrt{1-T} \leq t) \\ &&&= \mathbb{P}(\sqrt{1-T} \leq t+1)\\ &&&= \mathbb{P}(1-T \leq (t+1)^2)\\ &&&= \mathbb{P}\left (T \geq 1-(t+1)^2\right) = (t+1)^2 \\ \Rightarrow && f_X(t) &= 2(t+1) \\ \Rightarrow && \E[X] &= \int_{-1}^0 x \cdot f_X(x) \d x \\ &&&= \int_{-1}^0 x2(x+1) \d x \\ &&&= \left [\frac23x^3+x^2 \right]_{-1}^0 \\ &&&= -\frac13 \\ && \E[X^2] &= \int_{-1}^0 x^2 \cdot f_X(x) \d x \\ &&&= \int_{-1}^0 2x^2(x+1) \d x \\ &&&= \left [ \frac12 x^4 + \frac23x^3\right]_{-1}^0 \\ &&&= \frac16 \\ \Rightarrow && \var[X] &= \E[X^2] - \left (\E[X] \right)^2 \\ &&&= \frac16 - \frac19 = \frac1{18} \end{align*} Notice that by the central limit theorem \(\frac{Y}{800} \approx N( -\tfrac13, \frac{1}{18 \cdot 800})\). Also notice that \(\Phi^{-1}(0.08) \approx -1.4 \approx -\sqrt{2}\) Therefore we are looking for roughly \(800 \left (-\frac13 -\frac{\sqrt{2}}{\sqrt{18 \cdot 800}} \right) \approx -267-9 = -276\)
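A quick simulation (my own check) of \(X = -1+\sqrt{1-T}\) confirms the mean and variance, and the arithmetic for \(K\):

```python
import math
import random

random.seed(2)
N = 400_000
xs = [-1 + math.sqrt(1 - random.random()) for _ in range(N)]
mean = sum(xs) / N                           # should be near -1/3
var = sum(x * x for x in xs) / N - mean**2   # should be near 1/18

# P(Y <= K) = 0.08 with Y approx N(800 * (-1/3), 800/18); Phi^{-1}(0.08) ~ -1.405
K = 800 * (-1 / 3) - 1.405 * math.sqrt(800 / 18)
print(round(mean, 3), round(var, 4), round(K))
```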
The random variable \(X\) is uniformly distributed on \([0,1]\). A new random variable \(Y\) is defined by the rule \[ Y=\begin{cases} 1/4 & \mbox{ if }X\leqslant1/4,\\ X & \mbox{ if }1/4\leqslant X\leqslant3/4\\ 3/4 & \mbox{ if }X\geqslant3/4. \end{cases} \] Find \({\mathrm E}(Y^{n})\) for all integers \(n\geqslant 1\). Show that \({\mathrm E}(Y)={\mathrm E}(X)\) and that \[{\mathrm E}(X^{2})-{\mathrm E}(Y^{2})=\frac{1}{24}.\] By using the fact that \(4^{n}=(3+1)^{n}\), or otherwise, show that \({\mathrm E}(X^{n}) > {\mathrm E}(Y^{n})\) for \(n\geqslant 2\). Suppose that \(Y_{1}\), \(Y_{2}\), \dots are independent random variables each having the same distribution as \(Y\). Find, to a good approximation, \(K\) such that \[{\rm P}(Y_{1}+Y_{2}+\cdots+Y_{240000} < K)=3/4.\]
Solution: \begin{align*} && \E[Y^n] &= \frac14 \cdot \frac1{4^n} + \frac14 \cdot \frac{3^n}{4^n} + \int_{1/4}^{3/4} y^n \d y \\ &&&= \frac{3^n+1}{4^{n+1}} + \left [ \frac{y^{n+1}}{n+1} \right]_{1/4}^{3/4} \\ &&&= \frac{3^n+1}{4^{n+1}} + \frac{3^{n+1}-1}{(n+1)4^{n+1}} \end{align*} \begin{align*} && \E[Y] &= \frac{3+1}{16} + \frac{9-1}{2 \cdot 16} \\ &&&= \frac{1}{4} + \frac{1}{4} = \frac12 = \E[X] \end{align*} \begin{align*} && \E[X^2] &= \int_0^1 x^2 \d x = \frac13 \\ && \E[Y^2] &= \frac{9+1}{64} + \frac{27-1}{3 \cdot 64} = \frac{56}{3 \cdot 64} = \frac{7}{24} \\ \Rightarrow && \E[X^2] - \E[Y^2] &= \frac13 - \frac{7}{24} = \frac{1}{24} \end{align*} \begin{align*} && \E[X^n] &= \frac{1}{n+1} \\ && \E[Y^n] &= \frac{1}{n+1} \frac{1}{4^{n+1}}\left ( (n+1)(3^n+1)+3^{n+1}-1 \right) \\ &&&= \frac{1}{n+1} \frac{1}{4^{n+1}}\left ( 3^{n+1} + (n+1)3^n +n \right) \\ \\ && (3+1)^{n+1} &= 3^{n+1} + (n+1)3^n + \cdots + (n+1) \cdot 3 + 1 \\ &&&> 3^{n+1} + (n+1)3^n + n + 1 \end{align*} if \(n \geq 2\). Notice that by the central limit theorem: \begin{align*} &&\frac{1}{240\,000} \sum_{i=1}^{240\,000} Y_i &\sim N \left ( \frac12, \frac{1}{24 \cdot 240\,000}\right) \\ \Rightarrow && \mathbb{P}\left (\frac{\frac{1}{240\,000} \sum_{i=1}^{240\,000} Y_i - \frac12}{\frac1{24} \frac{1}{100}} \leq \frac23 \right) &\approx 0.75 \\ \Rightarrow && \mathbb{P} \left ( \sum_i Y_i \leq 240\,000 \cdot \left ( \frac2{3} \frac1{2400}+\frac12 \right) \right ) & \approx 0.75 \\ \Rightarrow && K &= 120\,000 + \tfrac{200}{3} \\ &&&\approx 120\,067 \end{align*}
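The closed form for \(\E(Y^n)\) and the value of \(K\) can be checked numerically; a sketch of my own (the quadrature step count and the quantile \(\Phi^{-1}(0.75)\approx 0.6745\) are my choices):

```python
def ey_n_closed(n):
    # (3^n + 1)/4^(n+1) + (3^(n+1) - 1)/((n+1) 4^(n+1)), as derived above
    return (3**n + 1) / 4**(n + 1) + (3**(n + 1) - 1) / ((n + 1) * 4**(n + 1))

def ey_n_direct(n, steps=200_000):
    # E(Y^n) = (1/4)(1/4)^n + (1/4)(3/4)^n + integral_{1/4}^{3/4} y^n dy (midpoint rule)
    h = 0.5 / steps
    integral = sum((0.25 + (i + 0.5) * h) ** n for i in range(steps)) * h
    return 0.25 * 0.25**n + 0.25 * 0.75**n + integral

for n in (1, 2, 3, 4):
    assert abs(ey_n_closed(n) - ey_n_direct(n)) < 1e-9

var_y = ey_n_closed(2) - ey_n_closed(1) ** 2          # 7/24 - 1/4 = 1/24
K = 240_000 * 0.5 + 0.6745 * (240_000 * var_y) ** 0.5
print(var_y, round(K))
```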
Two computers, LEP and VOZ are programmed to add numbers after first approximating each number by an integer. LEP approximates the numbers by rounding: that is, it replaces each number by the nearest integer. VOZ approximates by truncation: that is, it replaces each number by the largest integer less than or equal to the number. The fractional parts of the numbers to be added are uniformly and independently distributed. (The fractional part of a number \(a\) is \(a-\left\lfloor a\right\rfloor ,\) where \(\left\lfloor a\right\rfloor \) is the largest integer less than or equal to \(a\).) Both computers approximate and add 1500 numbers. For each computer, find the probability that the magnitude of error in the answer will exceed 15. How many additions can LEP perform before the probability that the magnitude of error is less than 10 drops below 0.9?
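No solution is recorded here. Under the standard model (rounding error uniform on \((-\frac12,\frac12]\) with variance \(\frac1{12}\); truncation error uniform on \((-1,0]\) with mean \(-\frac12\)), the first part can be explored by simulation; a sketch of mine for the LEP case:

```python
import random

random.seed(3)
N, TRIALS = 1500, 5000

def lep_total_error():
    e = 0.0
    for _ in range(N):
        f = random.random()               # fractional part, uniform on [0, 1)
        e += (1 - f) if f >= 0.5 else -f  # rounding error, uniform on (-1/2, 1/2]
    return e

p_lep = sum(abs(lep_total_error()) > 15 for _ in range(TRIALS)) / TRIALS
print(round(p_lep, 2))
```

The CLT predicts the total LEP error is approximately \(N(0, 1500/12)\), so \(\P(|\text{error}|>15) = 2\Phi(-15/\sqrt{125}) \approx 0.18\). VOZ's truncation errors each have mean \(-\frac12\), so its total error is centred near \(-750\) and exceeds \(15\) in magnitude with probability very close to \(1\).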
The average number of pedestrians killed annually in road accidents in Poldavia during the period 1974-1989 was 1080 and the average number killed annually in commercial flight accidents during the same period was 180. Discuss the following newspaper headlines which appeared in 1991. (The percentage figures in square brackets give a rough indication of the weight of marks attached to each discussion.)
Solution:
A fair coin is thrown \(n\) times. On each throw, 1 point is scored for a head and 1 point is lost for a tail. Let \(S_{n}\) be the points total for the series of \(n\) throws, i.e. \(S_{n}=X_{1}+X_{2}+\cdots+X_{n},\) where \[ X_{j}=\begin{cases} 1 & \text{ if the }j \text{ th throw is a head}\\ -1 & \text{ if the }j\text{ th throw is a tail.} \end{cases} \]
Solution: Notice that \(\mathbb{E}(X_i) = 0, \mathbb{E}(X_i^2) = 1\) and so \(\mathbb{E}(S_n) =0, \textrm{Var}(S_n) = n\).
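The claimed mean and variance can be verified exactly from the distribution of \(S_n\) (a check of my own, writing \(S_n = 2H - n\) with \(H\sim B(n,\frac12)\)):

```python
from math import comb

n = 10
# S_n = (#heads) - (#tails) = 2H - n, where H ~ B(n, 1/2)
pmf = {2 * h - n: comb(n, h) / 2**n for h in range(n + 1)}
mean = sum(s * pr for s, pr in pmf.items())
var = sum(s * s * pr for s, pr in pmf.items()) - mean**2
print(mean, var)
```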
Each time it rains over the Cabbibo dam, a volume \(V\) of water is deposited, almost instantaneously, in the reservoir. Each day (midnight to midnight) water flows from the reservoir at a constant rate of \(u\) units of volume per day. An engineer, if present, may choose to alter the value of \(u\) at any midnight.
Solution: