Three pirates are sharing out the contents of a treasure chest containing \(n\) gold coins and \(2\) lead coins. The first pirate takes out coins one at a time until he takes out one of the lead coins. The second pirate then takes out coins one at a time until she draws the second lead coin. The third pirate takes out all the gold coins remaining in the chest. Find:
Solution:
A bag contains \(b\) balls, \(r\) of them red and the rest white. In a game the player must remove balls one at a time from the bag (without replacement). She may remove as many balls as she wishes, but if she removes any red ball, she loses and gets no reward at all. If she does not remove a red ball, she is rewarded with \pounds 1 for each white ball she has removed. If she removes \(n\) white balls on her first \(n\) draws, calculate her expected gain on the next draw and show that it is zero if \(\ds n = {b-r \over r+1}\,\). Hence, or otherwise, show that she will maximise her expected total reward if she aims to remove \(n\) balls, where \[ n = \mbox{ the integer part of } \ds {b + 1 \over r + 1}\;. \] With this value of \(n\), show that in the case \(r=1\) and \(b\) even, her expected total reward is \(\pounds {1 \over 4}b\,\), and find her expected total reward in the case \(r=1\) and \(b\) odd.
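As a numerical sanity check (not part of the question), the stated results can be verified by brute force: the expected reward from planning to remove \(n\) balls is \(n\) times the probability that the first \(n\) draws are all white, namely \(n\binom{b-r}{n}/\binom{b}{n}\). A sketch in Python:

```python
from math import comb

def expected_reward(b, r, n):
    # Reward is n if none of the first n draws is red, else 0.
    # P(no red in first n draws) = C(b-r, n) / C(b, n).
    return n * comb(b - r, n) / comb(b, n)

def best_n(b, r):
    return max(range(b - r + 1), key=lambda n: expected_reward(b, r, n))

for b in range(2, 30):
    for r in range(1, b):
        # claimed optimum: integer part of (b+1)/(r+1)
        claimed = (b + 1) // (r + 1)
        best = expected_reward(b, r, best_n(b, r))
        assert abs(expected_reward(b, r, claimed) - best) < 1e-12

# r = 1 and b even: expected total reward should be b/4 pounds
for b in range(2, 30, 2):
    assert abs(expected_reward(b, 1, b // 2) - b / 4) < 1e-12
```

For \(r=1\) and \(b\) odd the same check suggests a maximal expected reward of \((b^2-1)/(4b)\), attained at \(n=\frac{b\pm1}{2}\).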
Explain why, if \(\mathrm{A, B}\) and \(\mathrm{C}\) are three events, \[ \mathrm{P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B \cap C) - P(C \cap A) +P(A \cap B \cap C)}, \] where \(\mathrm{P(X)}\) denotes the probability of event \(\mathrm{X}\). A cook makes three plum puddings for Christmas. He stirs \(r\) silver sixpences thoroughly into the pudding mixture before dividing it into three equal portions. Find an expression for the probability that each pudding contains at least one sixpence. Show that the cook must stir 6 or more sixpences into the mixture if there is to be less than \({1 \over 3}\) chance that at least one of the puddings contains no sixpence. Given that the cook stirs 6 sixpences into the mixture and that each pudding contains at least one sixpence, find the probability that there are two sixpences in each pudding.
Solution:
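No solution is recorded above, but the probabilistic claims can be checked by brute force over all \(3^r\) equally likely placements of the sixpences (a sketch in Python, not the intended inclusion-exclusion argument):

```python
from itertools import product
from fractions import Fraction

def p_none_empty(r):
    # each sixpence lands independently in one of the 3 puddings
    good = sum(1 for a in product(range(3), repeat=r)
               if all(j in a for j in range(3)))
    return Fraction(good, 3 ** r)

# inclusion-exclusion gives P(each pudding non-empty) = 1 - 3(2/3)^r + 3(1/3)^r
for r in range(1, 9):
    assert p_none_empty(r) == 1 - 3 * Fraction(2, 3)**r + 3 * Fraction(1, 3)**r

# "less than 1/3 chance of an empty pudding" first holds at r = 6
assert 1 - p_none_empty(5) > Fraction(1, 3) > 1 - p_none_empty(6)

# r = 6: P(two sixpences in each pudding | each pudding non-empty)
hits = [a for a in product(range(3), repeat=6) if all(j in a for j in range(3))]
two_each = sum(1 for a in hits if all(a.count(j) == 2 for j in range(3)))
print(Fraction(two_each, len(hits)))  # 1/6
```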
A team of \(m\) players, numbered from \(1\) to \(m\), puts on a set of \(m\) shirts, similarly numbered from \(1\) to \(m\). The players change in a hurry, so that the shirts are assigned to them randomly, one to each player. Let \(C_i\) be the random variable that takes the value \(1\) if player \(i\) is wearing shirt \(i\), and \(0\) otherwise. Show that \(\mathrm{E}\left(C_1\right)={1 \over m}\) and find \(\var \left(C_1\right)\) and \(\mathrm{Cov}\left(C_1 \, , \; C_2 \right) \,\). Let \(\, N = C_1 + C_2 + \cdots + C_m \,\) be the random variable whose value is the number of players who are wearing the correct shirt. Show that \(\mathrm{E}\left(N\right)= \var \left(N\right) = 1 \,\). Explain why a Normal approximation to \(N\) is not likely to be appropriate for any \(m\), but that a Poisson approximation might be reasonable. In the case \(m = 4\), find, by listing equally likely possibilities or otherwise, the probability that no player is wearing the correct shirt and verify that an appropriate Poisson approximation to \(N\) gives this probability with a relative error of about \(2\%\). [Use \(\e \approx 2\frac{72}{100} \,\).]
Solution: There are \(m!\) different ways of assigning the shirts, and in \((m-1)!\) of them player \(1\) gets their own shirt, i.e. \(\mathbb{E}(C_1) = \mathbb{P}(\text{player }1\text{ gets own shirt}) = \frac{(m-1)!}{m!} = \frac{1}{m}\). Since \(C_1^2 = C_1\), \(\var(C_1) = \mathbb{E}(C_1^2) - [\mathbb{E}(C_1)]^2 = \frac{1}{m} - \frac{1}{m^2} = \frac{m-1}{m^2}\). There are \((m-2)!\) assignments in which players \(1\) and \(2\) both get their own shirts, therefore \(\textrm{Cov}(C_1,C_2) = \mathbb{E}(C_1C_2) - \mathbb{E}(C_1)\mathbb{E}(C_2) = \frac{(m-2)!}{m!} - \frac{1}{m^2} = \frac{1}{m(m-1)} - \frac{1}{m^2} = \frac{m-(m-1)}{m^2(m-1)} = \frac{1}{m^2(m-1)}\). \begin{align*} \mathbb{E}(N) &= \mathbb{E}(C_1 + C_2 + \cdots + C_m) \\ &= \mathbb{E}(C_1) + \mathbb{E}(C_2) + \cdots + \mathbb{E}(C_m) \\ &= \frac{1}{m} + \frac{1}{m} +\cdots+ \frac1m \\ &= 1 \\ \\ \var(N) &= \sum_{r=1}^m \var(C_r) + 2\sum_{r=1}^{m-1} \sum_{s=r+1}^{m} \textrm{Cov}(C_r,C_s) \\ &= m \cdot \frac{m-1}{m^2} + 2 \cdot \frac{m(m-1)}{2} \cdot \frac{1}{m^2(m-1)} \\ &=\frac{m-1}{m} + \frac{1}{m} \\ &= 1 \end{align*} A Normal approximation would have to be \(N(1,1)\), which assigns substantial probability to negative values, whereas \(N\) is a non-negative integer; so a Normal approximation is poor for every \(m\). A Poisson approximation is far more plausible: a Poisson distribution has mean and variance both equal to its parameter, so \(Po(1)\) matches both moments, and for large \(m\) the covariances between shirts are very small, so \(N\) behaves like a count of many nearly independent rare events. For \(m=4\) the assignments leaving no player in the correct shirt (the derangements) are \[ BADC,\ BCDA,\ BDAC,\ CADB,\ CDAB,\ CDBA,\ DABC,\ DCAB,\ DCBA, \] i.e. \(\frac{9}{24}\) of the \(24\) equally likely assignments. \(Po(1)\) gives this probability as \(e^{-1}\), so the relative error is \begin{align*} \frac{e^{-1}-\frac{9}{24}}{\frac9{24}} &\approx \frac{\frac{100}{272} - \frac{9}{24}}{\frac9{24}} \\ &= -\frac{1}{51} \\ &\approx -2\% \end{align*}
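These results can be confirmed by enumerating all \(m!\) equally likely shirt assignments; a quick check in Python, including the \(m=4\) derangement count and the Poisson comparison:

```python
import math
from itertools import permutations
from fractions import Fraction

def stats(m):
    """Exact E(N), var(N) and P(N = 0) by enumerating all m! assignments."""
    perms = list(permutations(range(m)))
    matches = [sum(i == s for i, s in enumerate(p)) for p in perms]
    n = len(perms)
    mean = Fraction(sum(matches), n)
    var = Fraction(sum(x * x for x in matches), n) - mean**2
    p_derange = Fraction(sum(x == 0 for x in matches), n)
    return mean, var, p_derange

for m in range(2, 8):            # E(N) = var(N) = 1 for every m >= 2
    mean, var, _ = stats(m)
    assert mean == 1 and var == 1

_, _, d4 = stats(4)
print(d4)                         # 3/8, i.e. 9 derangements out of 24
rel = (math.exp(-1) - float(d4)) / float(d4)
print(round(100 * rel, 1))        # relative error of Po(1): about -1.9 (%)
```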
A men's endurance competition has an unlimited number of rounds. In each round, a competitor has, independently, a probability \(p\) of making it through the round; otherwise, he fails the round. Once a competitor fails a round, he drops out of the competition; before he drops out, he takes part in every round. The grand prize is awarded to any competitor who makes it through a round which all the other remaining competitors fail; if all the remaining competitors fail at the same round the grand prize is not awarded. If the competition begins with three competitors, find the probability that:
Solution:
In a bag are \(n\) balls numbered 1, 2, \(\ldots\,\), \(n\,\). When a ball is taken out of the bag, each ball is equally likely to be taken.
Solution:
If a football match ends in a draw, there may be a "penalty shoot-out". Initially the teams each take 5 shots at goal. If one team scores more times than the other, then that team wins. If the scores are level, the teams take shots alternately until one team scores and the other team does not score, both teams having taken the same number of shots. The team that scores wins. Two teams, Team A and Team B, take part in a penalty shoot-out. Their probabilities of scoring when they take a single shot are \(p_A\) and \(p_B\) respectively. Explain why the probability \(\alpha\) of neither side having won at the end of the initial \(10\)-shot period is given by $$\alpha =\sum_{i=0}^5\binom{5}{i}^2(1-p_A)^i(1-p_B)^i\,p_A^{5-i}p_B^{5-i}.$$ Show that the expected number of shots taken is \(\displaystyle 10+ \frac{2\alpha}\beta\;,\) where \(\beta=p_A+p_B-2p_Ap_B\,.\)
Solution: Note that in the first \(10\)-shot period the number of goals scored by Team A is \(B(5,p_A)\) and by Team B is \(B(5,p_B)\). For neither side to have won at that point, the teams must have scored the same number of goals, i.e. \begin{align*} && \alpha &= \sum_{i=0}^5 \mathbb{P}(\text{both teams score }5-i) \\ &&&= \sum_{i=0}^5 \binom{5}{i} (1-p_A)^ip_A^{5-i} \binom{5}{i} (1-p_B)^i p_B^{5-i} \\ &&&= \sum_{i=0}^5 \binom{5}{i} ^2(1-p_A)^i (1-p_B)^i p_A^{5-i} p_B^{5-i} \\ \end{align*} Suppose the scores are level at the end of the initial period. In each subsequent round the probability that the shoot-out ends is \(p_A(1-p_B) + p_B(1-p_A)\) (the probability that exactly one of A and B scores), which is \(p_A + p_B - 2p_Ap_B = \beta\). The number of additional rounds is therefore geometric with parameter \(\beta\), with expectation \(\frac{1}{\beta}\). Each round consists of two shots, and the tie occurs with probability \(\alpha\), so the expected number of additional shots is \(\frac{2\alpha}{\beta}\). Added to the \(10\) guaranteed shots, this gives the desired result.
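The formula can be checked by Monte Carlo simulation of the shoot-out; a sketch, where the scoring probabilities \(0.7\) and \(0.8\) are arbitrary test values:

```python
import random
from math import comb

def expected_shots(pA, pB):
    # 10 + 2*alpha/beta, with alpha and beta as in the question
    alpha = sum(comb(5, i)**2 * (1 - pA)**i * (1 - pB)**i
                * pA**(5 - i) * pB**(5 - i) for i in range(6))
    beta = pA + pB - 2 * pA * pB
    return 10 + 2 * alpha / beta

def simulate(pA, pB, trials=200_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        shots = 10
        a = sum(rng.random() < pA for _ in range(5))
        b = sum(rng.random() < pB for _ in range(5))
        while a == b:            # sudden death: one shot each per round
            shots += 2
            a += rng.random() < pA
            b += rng.random() < pB
        total += shots
    return total / trials

assert abs(simulate(0.7, 0.8) - expected_shots(0.7, 0.8)) < 0.1
```

The `while a == b` loop works on cumulative scores: a sudden-death round changes the totals by the same amount exactly when the round is tied, so the loop exits at the first decisive round.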
Jane goes out with any of her friends who call, except that she never goes out with more than two friends in a day. The number of her friends who call on a given day follows a Poisson distribution with parameter \(2\). Show that the average number of friends she sees in a day is~\(2-4\e^{-2}\,\). Now Jane has a new friend who calls on any given day with probability \(p\). Her old friends call as before, independently of the new friend. She never goes out with more than two friends in a day. Find the average number of friends she now sees in a day.
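The first claim is easy to verify numerically: the number of friends Jane sees is \(\min(X,2)\) with \(X \sim Po(2)\), and \(\mathbb{E}[\min(X,2)] = \P(X=1) + 2\,\P(X \ge 2)\). A sketch:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# E[min(X, 2)] = 0*P(0) + 1*P(1) + 2*P(X >= 2) for X ~ Po(2)
lam = 2
avg = poisson_pmf(1, lam) + 2 * (1 - poisson_pmf(0, lam) - poisson_pmf(1, lam))
claimed = 2 - 4 * math.exp(-2)
assert abs(avg - claimed) < 1e-12
print(round(avg, 4))  # 1.4587
```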
The life of a certain species of elementary particles can be described as follows. Each particle has a life time of \(T\) seconds, after which it disintegrates into \(X\) particles of the same species, where \(X\) is a random variable with binomial distribution \(\mathrm{B}(2,p)\,\). A population of these particles starts with the creation of a single such particle at \(t=0\,\). Let \(X_n\) be the number of particles in existence in the time interval \(nT < t < (n+1)T\,\), where \(n=1\,\), \(2\,\), \(\ldots\). Show that \(\P(X_1=2 \mbox { and } X_2=2) = 6p^4q^2\;\), where \(q=1-p\,\). Find the possible values of \(p\) if it is known that \(\P(X_1=2 \vert X_2=2) =9/25\,\). Explain briefly why \(\E(X_n) =2p\E(X_{n-1})\) and hence determine \(\E(X_n)\) in terms of \(p\). Show that for one of the values of \(p\) found above \(\lim_{n \to \infty}\E(X_n) = 0\) and that for the other \(\lim_{n \to \infty}\E(X_n) = + \infty\,\).
Solution: Notice that \(X_n \sim B(2X_{n-1},p)\), since a Binomial is a sum of independent Bernoullis and each particle contributes two Bernoulli trials. \begin{align*} && \mathbb{P}(X_1=2 \mbox{ and } X_2=2) &= \underbrace{p^2}_{\text{two generated in first iteration}} \cdot \underbrace{\binom{4}{2}p^2q^2}_{\text{two generated from the first two}} \\ &&&= 6p^4q^2 \end{align*} \begin{align*} && \mathbb{P}(X_1 = 2 \,|\, X_2 = 2) &= \frac{ \mathbb{P}(X_1=2 \mbox{ and } X_2=2) }{ \mathbb{P}( X_2=2) } \\ &&&= \frac{6p^4q^2}{6p^4q^2+2pq \cdot p^2} \\ &&&= \frac{3pq}{3pq+1} \\ \Rightarrow && \frac{9}{25} &= \frac{3pq}{3pq+1} \\ \Rightarrow && 27pq + 9 &= 75pq \\ \Rightarrow && 9 &= 48pq \\ \Rightarrow && pq &= \frac{3}{16} \\ \Rightarrow && 0 &= p^2 - p + \frac3{16} \\ \Rightarrow && p &= \frac14, \frac34 \end{align*} By the same reasoning about the Bernoullis, we must have \(\E[X_n] = \E[\E[X_n \mid X_{n-1}]] = \E[2pX_{n-1}] = 2p \E[X_{n-1}]\), and therefore \(\E[X_n] = (2p)^n\). If \(p = \frac14\), then \(\E[X_n] = \frac1{2^n} \to 0\); if \(p = \frac34\), then \(\E[X_n] = \left(\frac32 \right)^n \to \infty\).
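Both the conditional probability and the mean \((2p)^n\) can be checked, the former exactly and the latter by simulating the branching process (a sketch; trial counts and seed are arbitrary):

```python
import random
from fractions import Fraction

# P(X1=2 | X2=2) = 3pq/(3pq+1) equals 9/25 at both p = 1/4 and p = 3/4
for p in (Fraction(1, 4), Fraction(3, 4)):
    q = 1 - p
    assert 3 * p * q / (3 * p * q + 1) == Fraction(9, 25)

def mean_generation(p, n, trials=100_000, seed=2):
    """Monte Carlo estimate of E[X_n], starting from a single particle."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = 1
        for _ in range(n):
            # each particle disintegrates into B(2, p) new particles
            x = sum(rng.random() < p for _ in range(2 * x))
        total += x
    return total / trials

for p in (0.25, 0.75):           # E[X_n] = (2p)^n
    assert abs(mean_generation(p, 4) - (2 * p)**4) < 0.05
```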
The random variable \(X\) takes the values \(k=1\), \(2\), \(3\), \(\dotsc\), and has probability distribution $$ \P(X=k)= A{{{\lambda}^k\e^{-{\lambda}}} \over {k!}}\,, $$ where \(\lambda \) is a positive constant. Show that \(A = (1-\e^{-\lambda})^{-1}\,\). Find the mean \({\mu}\) in terms of \({\lambda}\) and show that $$ \var(X) = {\mu}(1-{\mu}+{\lambda})\;. $$ Deduce that \({\lambda} < {\mu} < 1+{\lambda}\,\). Use a normal approximation to find the value of \(P(X={\lambda})\) in the case where \({\lambda}=100\,\), giving your answer to 2 decimal places.
Solution: Let \(Y \sim Po(\lambda)\). \begin{align*} && 1 &= \sum_{k=1}^\infty \mathbb{P}(X = k ) \\ &&&= \sum_{k=1}^\infty A \frac{\lambda^k e^{-\lambda}}{k!}\\ &&&= Ae^{-\lambda} \sum_{k=1}^{\infty} \frac{\lambda^k}{k!} \\ &&&= Ae^{-\lambda} \left (e^{\lambda}-1 \right) \\ \Rightarrow && A &= (1-e^{-\lambda})^{-1} \\ \\ && \E[X] &= \sum_{k=1}^{\infty} k \cdot \mathbb{P}(X=k) \\ &&&= A\sum_{k=1}^{\infty} k \frac{\lambda^k e^{-\lambda}}{k!} \\ &&&= A\E[Y] = A\lambda = \lambda(1-e^{-\lambda})^{-1} \\ \\ && \var[X] &= \E[X^2] - (\E[X])^2 \\ &&&= A\sum_{k=1}^{\infty} k^2 \frac{\lambda^k e^{-\lambda}}{k!} - \mu^2 \\ &&&= A\E[Y^2] - \mu^2 \\ &&&= A(\var[Y]+\lambda^2) - \mu^2 \\ &&&= A(\lambda + \lambda^2) - \mu^2 \\ &&&= A\lambda(1+\lambda) - \mu^2 \\ &&&= \mu(1+\lambda - \mu) \end{align*} Since \(A > 1\) we must have \(\mu > \lambda\), and since \(\var[X] > 0\) we must have \(1 + \lambda > \mu\) as required. If \(\lambda = 100\), then \(A \approx 1\) and \(P(X=\lambda) \approx P(Y = \lambda)\) and \(Y \approx N(\lambda, \lambda)\), so the value is approximately \(\displaystyle \int_{-\frac12}^{\frac12} \frac{1}{\sqrt{2\pi \lambda}} e^{-\frac{x^2}{2\lambda}} \d x \approx \frac{1}{\sqrt{200\pi}} = \frac{1}{\sqrt{628.\ldots}} \approx \frac{1}{25} = 0.04 \).
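The final approximation can be compared with the exact value of \(\P(X=100)\); a quick check, using log-factorials to avoid overflow:

```python
import math

lam = 100
A = 1 / (1 - math.exp(-lam))      # indistinguishable from 1 at this lam
# exact P(X = lam) = A * lam^lam * e^(-lam) / lam!
exact = A * math.exp(lam * math.log(lam) - lam - math.lgamma(lam + 1))
approx = 1 / math.sqrt(2 * math.pi * lam)
print(round(exact, 4), round(approx, 4))
assert round(exact, 2) == 0.04
assert abs(exact - approx) < 1e-4
```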
The probability of throwing a 6 with a biased die is \(p\,\). It is known that \(p\) is equal to one or other of the numbers \(A\) and \(B\) where \(0 < A < B < 1 \,\). Accordingly the following statistical test of the hypothesis \(H_0: \,p=B\) against the alternative hypothesis \(H_1: \,p=A\) is performed. The die is thrown repeatedly until a 6 is obtained. Then if \(X\) is the total number of throws, \(H_0\) is accepted if \(X \le M\,\), where \(M\) is a given positive integer; otherwise \(H_1\) is accepted. Let \({\alpha}\) be the probability that \(H_1\) is accepted if \(H_0\) is true, and let \({\beta}\) be the probability that \(H_0\) is accepted if \(H_1\) is true. Show that \({\beta} = 1- {\alpha}^K,\) where \(K\) is independent of \(M\) and is to be determined in terms of \(A\) and \(B\,\). Sketch the graph of \({\beta}\) against \({\alpha}\,\).
Solution: \(X \sim Geo(p)\). \(\alpha = \mathbb{P}(X > M | p = B) = (1-B)^{M}\) and \(\beta = \mathbb{P}(X \leq M | p = A) = 1 - \mathbb{P}(X > M | p = A) = 1 - (1-A)^{M}\). \begin{align*} \ln \alpha &= M \ln(1-B) \\ \ln (1-\beta) &= M \ln(1-A) \\ \frac{\ln \alpha}{\ln (1-\beta)} &= \frac{\ln(1-B)}{\ln(1-A)} \\ \ln(1-\beta) &= \ln \alpha \frac{\ln (1-A)}{\ln(1-B)} \\ \beta &= 1- \alpha^{ \frac{\ln (1-A)}{\ln(1-B)} } \end{align*} and \(K = \frac{\ln (1-A)}{\ln(1-B)} \). Since \(0 < A < B < 1\) we must have \(0 < 1 - B < 1-A < 1\) and \(\ln(1-B) < \ln(1-A) < 0\), so \(0 < K < 1\). Hence the graph of \(\beta = 1 - \alpha^K\) falls from \((0,1)\) to \((1,0)\); since \(0<K<1\) the curve is convex, dropping steeply near \(\alpha = 0\) (the gradient \(-K\alpha^{K-1} \to -\infty\)) and flattening towards \(\alpha = 1\).
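The relation \(\beta = 1 - \alpha^K\) can be verified numerically; a sketch with arbitrary test values of \(A\), \(B\) and \(M\):

```python
import math

def alpha_beta(A, B, M):
    # X ~ Geo(p): P(X > M) = (1-p)^M
    alpha = (1 - B) ** M          # reject H0 (accept H1) when H0 is true
    beta = 1 - (1 - A) ** M       # accept H0 when H1 is true
    return alpha, beta

A, B = 0.1, 0.3
K = math.log(1 - A) / math.log(1 - B)
assert 0 < K < 1
for M in range(1, 20):
    alpha, beta = alpha_beta(A, B, M)
    assert abs(beta - (1 - alpha ** K)) < 1e-12
```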
In a rabbit warren, underground chambers \(A, B, C\) and \(D\) are at the vertices of a square, and burrows join \(A\) to \(B\), \ \(B\) to \(C\), \ \(C\) to \(D\) and \(D\) to \(A\). Each of the chambers also has a tunnel to the surface. A rabbit finding itself in any chamber runs along one of the two burrows to a neighbouring chamber, or leaves the burrow through the tunnel to the surface. Each of these three possibilities is equally likely. Let \(p_A\,\), \(p_B\,\), \(p_C\) and \(p_D\) be the probabilities of a rabbit leaving the burrow through the tunnel from chamber \(A\), given that it is currently in chamber \(A, B, C\) or \(D\), respectively.
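The parts of this question are not shown, but the setup is straightforward to simulate: label the chambers \(0\) to \(3\) around the square and walk until the rabbit takes a tunnel. A sketch (the printed estimates are Monte Carlo values, not exact answers):

```python
import random

def p_leave_via_A(start, trials=100_000, seed=3):
    """Estimate the probability that a rabbit starting in chamber `start`
    leaves through chamber A's tunnel. Chambers 0=A, 1=B, 2=C, 3=D sit at
    the vertices of a square; each of the three options is equally likely."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos = start
        while True:
            move = rng.randrange(3)
            if move == 2:                  # take the tunnel to the surface
                hits += pos == 0
                break
            pos = (pos + (1 if move == 0 else -1)) % 4   # neighbouring chamber
    return hits / trials

est = [p_leave_via_A(s) for s in range(4)]
print([round(x, 2) for x in est])
# sanity check: by the symmetry of the square, these four probabilities are
# the probabilities of leaving via each of the four tunnels starting from A,
# so they must sum to 1
assert abs(sum(est) - 1) < 0.02
```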
Harry the Calculating Horse will do any mathematical problem I set him, providing the answer is 1, 2, 3 or 4. When I set him a problem, he places a hoof on a large grid consisting of unit squares and his answer is the number of squares partly covered by his hoof. Harry has circular hoofs, of radius \(1/4\) unit. After many years of collaboration, I suspect that Harry no longer bothers to do the calculations, instead merely placing his hoof on the grid completely at random. I often ask him to divide 4 by 4, but only about \(1/4\) of his answers are right; I often ask him to add 2 and 2, but disappointingly only about \(\pi/16\) of his answers are right. Is this consistent with my suspicions? I decide to investigate further by setting Harry many problems, the answers to which are 1, 2, 3, or 4 with equal frequency. If Harry is placing his hoof at random, find the expected value of his answers. The average of Harry's answers turns out to be 2. Should I get a new horse?
Solution: Since the grid is periodic, we may assume without loss of generality that the centre of Harry's hoof lands uniformly at random within a single unit square.
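The recorded solution stops short. As a check on the two frequencies quoted in the question, one can classify the hoof's position exactly (the answer depends only on which grid lines the disc of radius \(\frac14\) crosses) and average by Monte Carlo. A sketch; the value \(2+\pi/16\) used in the test is my own geometric calculation, not stated in the text:

```python
import math, random

def answer(x, y, r=0.25):
    """Number of unit squares partly covered by a disc of radius r
    whose centre (x, y) lies in the unit square of the grid."""
    near_v = x < r or x > 1 - r          # disc crosses a vertical grid line
    near_h = y < r or y > 1 - r          # disc crosses a horizontal grid line
    if near_v and near_h:
        cx = 0.0 if x < r else 1.0       # nearest grid corner
        cy = 0.0 if y < r else 1.0
        # 4 squares if the corner lies inside the disc, else 3
        return 4 if (x - cx)**2 + (y - cy)**2 < r * r else 3
    return 1 + near_v + near_h           # 1 or 2 squares otherwise

rng = random.Random(4)
trials = 400_000
counts = [0] * 5
for _ in range(trials):
    counts[answer(rng.random(), rng.random())] += 1

# frequencies quoted in the question: P(answer 1) = 1/4, P(answer 4) = pi/16
assert abs(counts[1] / trials - 1 / 4) < 0.01
assert abs(counts[4] / trials - math.pi / 16) < 0.01
mean = sum(k * c for k, c in enumerate(counts)) / trials
print(round(mean, 2))
```

The empirical mean lands near \(2 + \pi/16 \approx 2.2\), the value the geometry suggests for a randomly placed hoof.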
In order to get money from a cash dispenser I have to punch in an identification number. I have forgotten my identification number, but I do know that it is equally likely to be any one of the integers \(1\), \(2\), \ldots , \(n\). I plan to punch in integers in order until I get the right one. I can do this at the rate of \(r\) integers per minute. As soon as I punch in the first wrong number, the police will be alerted. The probability that they will arrive within a time \(t\) minutes is \(1-\e^{-\lambda t}\), where \(\lambda\) is a positive constant. If I follow my plan, show that the probability of the police arriving before I get my money is \[ \sum_{k=1}^n \frac{1-\e^{-\lambda(k-1)/r}}n\;. \] Simplify the sum. On past experience, I know that I will be so flustered that I will just punch in possible integers at random, without noticing which I have already tried. Show that the probability of the police arriving before I get my money is \[ 1-\frac1{n-(n-1)\e^{-\lambda/r}} \;. \]
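Both closed forms can be checked against direct summation; a sketch with arbitrary test values of \(n\), \(\lambda\) and \(r\):

```python
import math

n, lam, r = 7, 0.8, 3.0
x = math.exp(-lam / r)

# ordered plan: identity-th punch is correct with prob 1/n; the police are
# alerted at the first (wrong) punch, so if the k-th punch succeeds they
# must arrive within (k-1)/r minutes
direct = sum((1 - x ** (k - 1)) / n for k in range(1, n + 1))
closed = 1 - (1 - x ** n) / (n * (1 - x))     # geometric-series closed form
assert abs(direct - closed) < 1e-12

# random guessing: each punch independently succeeds with probability 1/n
series = sum((1 - 1 / n) ** (k - 1) * (1 / n) * (1 - x ** (k - 1))
             for k in range(1, 2000))          # truncated infinite series
claimed = 1 - 1 / (n - (n - 1) * x)
assert abs(series - claimed) < 1e-9
```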
On \(K\) consecutive days each of \(L\) identical coins is thrown \(M\) times. For each coin, the probability of throwing a head in any one throw is \(p\) (where \(0 < p < 1\)). Show that the probability that on exactly \(k\) of these days more than \(l\) of the coins will each produce fewer than \(m\) heads can be approximated by \[ {K \choose k}q^k(1-q)^{K-k}, \] where \[ q=\Phi\left( \frac{2h-2l-1}{2\sqrt{h} }\right), \ \ \ \ \ \ h=L\Phi\left( \frac{2m-1-2Mp}{2\sqrt{ Mp(1-p)}}\right) \] and \(\Phi(\cdot)\) is the cumulative distribution function of a standard normal variate. Would you expect this approximation to be accurate in the case \(K=7\), \(k=2\), \(L=500\), \(l=4\), \(M=100\), \(m=48\) and \(p=0.6\;\)?
Solution: Let \(H_i\) be the number of heads thrown by the \(i\)th coin on a given day. Then \(H_i \sim B(M,p)\), and the probability that a given coin produces fewer than \(m\) heads is \(p_h = \P(H_i < m)\). Let \(C\) be the number of coins producing fewer than \(m\) heads on a given day; then \(C \sim B(L, p_h)\). The probability that more than \(l\) of the coins produce fewer than \(m\) heads is therefore \(\P(C > l)\). Finally, the probability that on exactly \(k\) days more than \(l\) of the coins produce fewer than \(m\) heads is: \[ \binom{K}{k} \cdot \P(C > l)^k \cdot (1-\P(C > l))^{K-k} \] Let's start by assuming that all our Binomials can be approximated by a normal distribution: \(B(M,p) \approx N(Mp, Mp(1-p))\), so, with a continuity correction, \begin{align*} p_h &= \P(H_i < m) \\ &\approx \P( \sqrt{Mp(1-p)}Z+Mp < m-\frac12) \\ &= \P \l Z < \frac{2m-2Mp-1}{2\sqrt{Mp(1-p)}} \r \\ &= \Phi\l\frac{2m-2Mp-1}{2\sqrt{Mp(1-p)}} \r \end{align*} Similarly \(B(L, p_h) \approx B \l L, \P \l Z < \frac{2m-2Mp-1}{2\sqrt{Mp(1-p)}} \r\r = B(L, \frac{h}{L}) \approx N(h, \frac{h(L-h)}{L})\). Therefore \begin{align*} \P(C > l) &= 1-\P(C \leq l) \\ &\approx 1- \P \l \sqrt{\frac{h(L-h)}{L}} Z + h \leq l+\frac12 \r \\ &= 1 - \P \l Z \leq \frac{2l-2h+1}{2\sqrt{\frac{h(L-h)}{L}}}\r \\ &= 1- \Phi\l \frac{2l-2h+1}{2\sqrt{\frac{h(L-h)}{L}}} \r \\ &= \Phi\l \frac{2h-2l-1}{2\sqrt{\frac{h(L-h)}{L}}} \r \end{align*} If we can approximate \(\sqrt{1-\frac{h}{L}}\) by \(1\) then we obtain the approximation in the question. Alternatively, \(B(L, \frac{h}{L}) \approx Po(h)\) and \(Po(h) \approx N(h,h)\), so we obtain: \begin{align*} \P(C > l) &= 1-\P(C \leq l) \\ &\approx 1 - \P(\sqrt{h} Z +h < l + \frac12) \\ &= 1 - \P \l Z < \frac{2l-2h+1}{2\sqrt{h}} \r \\ &= \Phi \l \frac{2h - 2l -1}{2\sqrt{h}}\r \end{align*} as required. [I think this is what the examiners expected.]
Considering the case \(K=7\), \(k=2\), \(L=500\), \(l=4\), \(M=100\), \(m=48\) and \(p=0.6\): the first normal approximation depends on \(Mp\) and \(M(1-p)\) being large. They are \(60\) and \(40\) respectively, so this is likely a good approximation. The first approximation gives \begin{align*} h &= 500 \cdot \Phi \l \frac{2 \cdot 48 - 2 \cdot 60 - 1}{2\sqrt{24}} \r \\ &= 500 \cdot \Phi \l \frac{-25}{2 \sqrt{24}} \r \\ &\approx 500 \cdot \Phi (-2.5) \\ &= 500 \cdot 0.0062 \\ &\approx 3.1 \end{align*} The second normal approximation will be good if \(Lp_h = h \approx 3.1\) is large, but this is quite small. Therefore, we shouldn't expect this to be a good approximation. Moreover, since \(m = 48\) is far from the mean (in a normalised sense), we might expect the percentage error in the first approximation to be large too. [Alternatively, using what I expect is the intended approach:] the approximation \(B(L, \frac{h}{L}) \approx Po(h)\) is acceptable since \(L>50\) and \(h < 5\); the approximation \(Po(h) \approx N(h,h)\) is not acceptable since \(h\) is small (in particular \(h < 15\)). Finally, we can compute all these values exactly using a modern calculator. \[ \begin{array}{l|cc} & \text{correct} & \text{approx} \\ \hline p_h & 0.005760\ldots & 0.005362\ldots \\ \P(C > l) & 0.164522\ldots & 0.133319\ldots \\ \text{ans} & 0.231389\ldots & 0.182516\ldots \end{array} \] We can also see how the errors propagate by doing the calculations assuming the previous steps are correct, and also including the Poisson step.
\[ \begin{array}{lccc} & \text{correct} & \text{approx} & \text{using approx } p_h \\ \hline p_h & 0.005760\ldots & 0.005362\ldots & - \\ \P(C > l)\quad [Po(h)] & 0.164522\ldots & 0.165044\ldots & 0.134293\ldots \\ \P(C > l)\quad [N(h,h)] & 0.164522\ldots & 0.169953\ldots & 0.133319\ldots \\ \P(C > l)\quad [N(h,h(1-\frac{h}{L}))] & 0.164522\ldots & 0.169255\ldots & 0.132677\ldots \\ \text{ans} & 0.231389\ldots & 0.231389\ldots \end{array} \] By doing this, we discover that the largest errors come not from the second approximation but from the small absolute (though large relative) error in the first approximation. This is, in fact, a coincidence, as we can see by investigating the specific values used. The first approximation looks as follows:
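As an aside, the exact ("correct") column can be reproduced in pure Python with the standard library alone, with no statistical tables needed:

```python
from math import comb

K, k, L, l, M, m, p = 7, 2, 500, 4, 100, 48, 0.6

def binom_cdf(n, pr, hi):
    # P(X <= hi) for X ~ B(n, pr), exact up to float rounding
    return sum(comb(n, j) * pr**j * (1 - pr)**(n - j) for j in range(hi + 1))

p_h = binom_cdf(M, p, m - 1)          # P(one coin throws fewer than m heads)
p_day = 1 - binom_cdf(L, p_h, l)      # P(more than l such coins in a day)
ans = comb(K, k) * p_day**k * (1 - p_day)**(K - k)
print(round(p_h, 6), round(p_day, 6), round(ans, 6))
```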