The random variable \(X\) has mean \(\mu\) and variance \(\sigma^2\), and the function \({\rm V}\) is defined, for \(-\infty < x < \infty\), by \[ {\rm V}(x) = \E \big( (X-x)^2\big) . \] Express \({\rm V}(x)\) in terms of \(x\), \( \mu\) and \(\sigma\). The random variable \(Y\) is defined by \(Y={\rm V}(X)\). Show that \[ \E(Y) = 2 \sigma^2 . \tag{\(*\)} \] Now suppose that \(X\) is uniformly distributed on the interval \(0\le x \le1\,\). Find \({\rm V}(x)\,\). Find also the probability density function of \(Y\!\) and use it to verify that \((*)\) holds in this case.
Solution: \begin{align*} {\rm V}(x) &= \E \big( (X-x)^2\big) \\ &= \E \l X^2 - 2xX + x^2\r \\ &= \E [ X^2 ]- 2x\E[X] + x^2 \\ &= \sigma^2+\mu^2 - 2x\mu + x^2 \\ &= \sigma^2 + (\mu - x)^2 \end{align*}
\begin{align*} \E[Y] &= \E[\sigma^2 + (\mu - X)^2] \\ &= \sigma^2 + \E[(\mu - X)^2]\\ &= \sigma^2 + \sigma^2 \\ &= 2\sigma^2 \end{align*}
If \(X \sim U(0,1)\) then \(\mu = \frac12\) and \(\sigma^2 = \frac1{12}\), so \(V(x) = \frac{1}{12} + (\frac12 - x)^2\). For \(y > \frac1{12}\),
\begin{align*} \P(Y \leq y) &= \P(\tfrac1{12} + (\tfrac12 - X)^2 \leq y) \\ &= \P((\tfrac12 -X)^2 \leq y - \tfrac1{12}) \\ &= \P(|\tfrac12 -X| \leq \sqrt{y - \tfrac1{12}}) \\ &= \begin{cases} 1 & \text{if } y - \frac1{12} > \frac14 \\ 2 \sqrt{y - \frac1{12}} & \text{if } \frac14 \geq y - \frac1{12} > 0 \\ \end{cases} \\ &= \begin{cases} 1 & \text{if } y> \frac13 \\ \sqrt{4y - \frac1{3}} & \text{if } \frac13 \geq y > \frac1{12} \\ \end{cases} \end{align*}
Therefore $f_Y(y) = \begin{cases} \frac{2}{\sqrt{4y-\frac{1}{3}}} & \text{if } \frac1{12} < y < \frac13 \\ 0 & \text{otherwise} \end{cases}$
\begin{align*} \E[Y] &= \int_{1/12}^{1/3} \frac{2y}{\sqrt{4y-\frac13}} \, dy \\ &= 2\int_{u = 0}^{u=1} \frac{\frac{1}{4}u +\frac1{12}}{\sqrt{u}} \,\frac{1}{4} du \tag{\(u = 4y - \frac13, \frac{du}{dy} = 4\)}\\ &= \frac{1}{2 \cdot 12}\int_{u = 0}^{u=1} 3\sqrt{u} +\frac{1}{\sqrt{u}} \, du \\ &= \frac{1}{2 \cdot 12} \left [2 u^{3/2} + 2u^{1/2} \right ]_0^1 \\ &= \frac{1}{2 \cdot 12} \cdot 4 \\ &= \frac{1}{6} = 2\sigma^2, \end{align*}
as required.
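As a quick numerical cross-check of the verification above (a sketch only, with an illustrative grid size): for \(X \sim U(0,1)\), the mean of \(Y = V(X)\) should come out to \(2\sigma^2 = \frac16\).

```python
# Midpoint-rule estimate of E[V(X)] for X ~ U(0,1), where
# V(x) = 1/12 + (1/2 - x)^2; the answer should be 2*sigma^2 = 1/6.
N = 100_000  # number of grid points (illustrative choice)
total = 0.0
for i in range(N):
    x = (i + 0.5) / N          # midpoint of the i-th subinterval of [0, 1]
    total += 1/12 + (0.5 - x) ** 2
print(total / N)               # ≈ 0.166666... = 1/6
```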
This question concerns solutions of the differential equation \[ (1-x^2) \left(\frac{\d y}{\d x}\right)^2 + k^2 y^2 = k^2\, \tag{\(*\)} \] where \(k\) is a positive integer. For each value of \(k\), let \(y_k(x)\) be the solution of \((*)\) that satisfies \(y_k(1)=1\); you may assume that there is only one such solution for each value of \(k\).
Solution:
A particle of mass \(m\) is projected with velocity \(\+ u\). It is acted upon by the force \(m\+g\) due to gravity and by a resistive force \(-mk \+v\), where \(\+v\) is its velocity and \(k\) is a positive constant. Given that, at time \(t\) after projection, its position \(\+r\) relative to the point of projection is given by \[ \+r = \frac{kt -1 +\.e^{-kt}} {k^2} \, \+g + \frac{ 1-\.e^{-kt}}{k} \, \+u \,, \] find an expression for \(\+v\) in terms of \(k\), \(t\), \(\+g\) and \(\+u\). Verify that the equation of motion and the initial conditions are satisfied. Let \(\+u = u\cos\alpha \, \+i + u \sin\alpha \, \+j\) and \(\+g = -g\, \+j\), where \(0<\alpha<90^\circ\), and let \(T\) be the time after projection at which \(\+r \,.\, \+j =0\). Show that \[ uk \sin\alpha = \left(\frac{kT}{1-\.e^{-kT}} -1\right)g\,. \] Let \(\beta\) be the acute angle between \(\+v\) and \(\+i\) at time \(T\). Show that \[ \tan\beta = \frac{(\.e^{kT}-1)g}{uk\cos\alpha}-\tan\alpha \,. \] Show further that \(\tan\beta >\tan\alpha\) (you may assume that \(\sinh kT >kT\)) and deduce that~\(\beta >\alpha\).
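This is not a solution, but the stated trajectory can be sanity-checked numerically: differentiating \(\+r\) by finite differences (with illustrative values of \(k\), \(g\) and \(\+u\), taking \(\+g = -g\,\+j\)) should reproduce the equation of motion \(\dot{\+v} = \+g - k\+v\), and the initial velocity should be \(\+u\).

```python
import math

k, g, ux, uy = 0.5, 9.8, 10.0, 6.0   # illustrative constants (not from the question)

def r(t):
    # position from the question, with g = (0, -g) and u = (ux, uy)
    a = (k*t - 1 + math.exp(-k*t)) / k**2
    b = (1 - math.exp(-k*t)) / k
    return (b*ux, -g*a + b*uy)

def v(t, h=1e-5):                     # central-difference velocity
    (x0, y0), (x1, y1) = r(t - h), r(t + h)
    return ((x1 - x0) / (2*h), (y1 - y0) / (2*h))

def acc(t, h=1e-4):                   # central-difference acceleration
    (x0, y0), (x1, y1), (x2, y2) = r(t - h), r(t), r(t + h)
    return ((x0 - 2*x1 + x2) / h**2, (y0 - 2*y1 + y2) / h**2)

# equation of motion per unit mass: dv/dt = g - k v
t = 1.3
ax, ay = acc(t)
vx, vy = v(t)
print(abs(ax + k*vx), abs(ay + g + k*vy))   # both ≈ 0
```

The same finite differences at \(t=0\) confirm the initial condition \(\+v(0) = \+u\).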
Show that, if \(y=\e^x\), then \[ (x-1) \frac{\d^2 y}{\d x^2} -x \frac{\d y}{\d x} +y=0\,. \tag{\(*\)} \] In order to find other solutions of this differential equation, now let \(y=u\e^x\), where \(u\) is a function of \(x\). By substituting this into \((*)\), show that \[ (x-1) \frac{\d^2 u}{\d x^2} + (x-2) \frac{\d u}{\d x} =0\,. \tag{\(**\)} \] By setting \( \dfrac {\d u}{\d x}= v\) in \((**)\) and solving the resulting first order differential equation for \(v\), find \(u\) in terms of \(x\). Hence show that \(y=Ax + B\e^x\) satisfies \((*)\), where \(A\) and \(B\) are any constants.
Solution: \begin{align*} && y &= e^x \\ && y' &= e^x \\ && y'' &= e^x \\ \Rightarrow && (x-1)y'' - x y' + y &= (x-1)e^x - xe^x + e^x \\ &&&= 0 \end{align*}
Now suppose \(y = ue^x\); then
\begin{align*} && y' &= u'e^x + ue^x \\ && y'' &= (u''+u')e^x + (u'+u)e^x \\ &&&= (u''+2u' +u)e^x \\ \\ && 0 &= (x-1)y'' - x y' + y \\ &&&= [(x-1)(u''+2u'+u) - x(u'+u)+u]e^x \\ &&&= [(x-1)u'' +(x-2)u']e^x \\ \Rightarrow && 0 &= (x-1)u'' + (x-2)u' \\ v = u': && 0 &= (x-1)v' + (x-2) v \\ \Rightarrow && \frac{v'}{v} &= -\frac{x-2}{x-1} \\ &&&= -1+\frac{1}{x-1} \\ \Rightarrow && \ln v &= -x + \ln(x-1) + C \\ \Rightarrow && v &= A(x-1)e^{-x} \\ && u &= \int Axe^{-x} - Ae^{-x} \d x \\ &&&= \left [-Axe^{-x} +Ae^{-x} \right] + \int Ae^{-x} \d x \\ &&&= -Axe^{-x} + D\\ \Rightarrow && y &= ue^x \\ &&&= -Ax + De^x \end{align*}
Since \(A\) and \(D\) are arbitrary constants, relabelling \(-A\) as \(A\) and \(D\) as \(B\) gives \(y = Ax + Be^x\), which therefore satisfies \((*)\) for any constants \(A\) and \(B\).
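A quick residual check (with illustrative constants \(A\), \(B\)) that the general solution found above does satisfy \((*)\):

```python
import math

def residual(x, A=2.0, B=-3.0):
    # y = A x + B e^x, the general solution found above (A, B illustrative)
    y  = A*x + B*math.exp(x)
    y1 = A + B*math.exp(x)        # dy/dx
    y2 = B*math.exp(x)            # d2y/dx2
    return (x - 1)*y2 - x*y1 + y  # left-hand side of (*)

print(max(abs(residual(x / 10)) for x in range(-20, 21)))  # ≈ 0
```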
Show that \(\sin(k\sin^{-1} x)\), where \(k\) is a constant, satisfies the differential equation $$(1-x^{2})\frac {\d^2 y}{\d x^2} -x\frac{\d y}{\d x} +k^{2}y=0. \tag{*}$$ In the particular case when \(k=3\), find the solution of equation \((*)\) of the form \[ y=Ax^{3}+Bx^{2}+Cx+D, \] that satisfies \(y=0\) and \(\displaystyle \frac{\d y}{\d x}=3\) at \(x=0\). Use this result to express \(\sin 3\theta\) in terms of powers of \(\sin\theta\).
Solution: \begin{align*} && y &= \sin(k \sin^{-1} x ) \\ &&y' &= \cos (k \sin^{-1} x) \cdot \frac{k}{\sqrt{1-x^2}} \\ && y'' &= -\sin (k \sin^{-1} x) \cdot \frac{k^2}{1-x^2} + \cos(k \sin^{-1} x) \cdot \frac{kx}{(1-x^2)\sqrt{1-x^2}} \\ && (1-x^2)y'' &= -k^2y +xy' \\ \Rightarrow && 0 &= (1-x^2)y''-xy' + k^2y \end{align*}
\begin{align*} && y &= Ax^3 + Bx^2 + Cx + D \\ && y' &= 3Ax^2 + 2Bx + C \\ && y'' &= 6Ax+2B \\ && 0 &= (1-x^2)(6Ax+2B) - x( 3Ax^2 + 2Bx + C) + 9(Ax^3 + Bx^2 + Cx + D ) \\ &&&= x^3(-6A-3A+9A) + x^2(-2B-2B+9B) + x(6A-C+9C) + (2B +9D) \\ \Rightarrow && B &= 0 \\ \Rightarrow && D &= 0 \\ \Rightarrow && C &= -\frac34 A \\ \\ x = 0,\ y = 0,\ y' = 3: && C &= 3,\ A = -4 \\ \Rightarrow && y &= 3x-4x^3 \\ \end{align*}
Setting \(x = \sin\theta\), so that \(\sin^{-1} x = \theta\) and \(y = \sin 3\theta\), gives \(\sin 3 \theta = 3 \sin \theta - 4\sin^3 \theta\).
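A spot-check of the resulting identity over a range of sample angles (purely illustrative):

```python
import math

# Check sin(3θ) = 3 sin θ − 4 sin³θ at many sample angles.
worst = max(abs(math.sin(3*t) - (3*math.sin(t) - 4*math.sin(t)**3))
            for t in (j * 0.1 for j in range(-63, 64)))
print(worst)   # ≈ 0 (floating-point rounding only)
```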
Widgets are manufactured in batches of size \((n+N)\). Any widget has a probability \(p\) of being faulty, independent of faults in other widgets. The batches go through a quality control procedure in which a sample of size \(n\), where \(n\geqslant2\), is taken from each batch and tested. If two or more widgets in the sample are found to be faulty, all widgets in the batch are tested and all faults corrected. If fewer than two widgets in the sample are found to be faulty, the sample is replaced in the batch and no faults are corrected. Show that the probability that the batch contains exactly \(k\), where \(k\leqslant N\), faulty widgets after quality control is \[ \frac{\left[N+1+k\left(n-1\right)\right]N!}{\left(N-k+1\right)!k!}p^{k}\left(1-p\right)^{N+n-k}, \] and verify that this formula also gives the correct answer for \(k=N+1\). Show that the expected number of faulty widgets in a batch after quality control is \[ \left[N+n+pN(n-1)\right]p(1-p)^{n-1}. \]
Solution: \begin{align*} \mathbb{P}(\text{exactly }k\text{ faults after test}) &= \mathbb{P}(k\text{ faults among non-sampled, 0 in sample})+\mathbb{P}(k-1\text{ among non-sampled, 1 in sample}) \\ &=\binom{N}{k}(1-p)^{N-k}p^k\binom{n}{0}(1-p)^n+\binom{N}{k-1}(1-p)^{N-k+1}p^{k-1}\binom{n}{1}(1-p)^{n-1}p \\ &= (1-p)^{N+n-k}p^k \cdot \left ( \binom{N}{k}+n\binom{N}{k-1} \right) \\ &= (1-p)^{N+n-k}p^k \cdot \left (\frac{N!}{k!(N-k)!}+\frac{N!n}{(k-1)!(N-k+1)!}\right) \\ &= (1-p)^{N+n-k}p^k \frac{N!}{k!(N-k+1)!} \cdot \left ((N-k+1)+nk \right) \\ &= \frac{\left[N+1+k\left(n-1\right)\right]N!}{\left(N-k+1\right)!k!}p^{k}\left(1-p\right)^{N+n-k} \end{align*}
When \(k = N+1\) the formula gives: \begin{align*} \frac{(N+1)n N!}{(N+1)!} p^{N+1}(1-p)^{n-1} &= np^{N+1}(1-p)^{n-1} \end{align*} and the probability is: \begin{align*} \mathbb{P}(\text{exactly }N+1\text{ faults after test}) &= \mathbb{P}(N\text{ faults among non-sampled, 1 in sample}) \\ &= \binom{N}{N}p^N \cdot \binom{n}{1}p(1-p)^{n-1} \\ &= np^{N+1}(1-p)^{n-1} \end{align*}
So the formula does work for \(k = N+1\).
\begin{align*} \mathbb{E}(\text{faults}) &= \sum_{k=0}^{N+1} k \cdot \mathbb{P}(\text{exactly }k\text{ faults after test}) \\ &= \sum_{k=0}^{N+1} k \cdot \frac{\left[N+1+k\left(n-1\right)\right]N!}{\left(N-k+1\right)!k!}p^{k}\left(1-p\right)^{N+n-k} \\ &= \sum_{k=1}^{N+1} \frac{\left[N+1+k\left(n-1\right)\right]N!}{\left(N-k+1\right)!(k-1)!}p^{k}\left(1-p\right)^{N+n-k} \\ &= \sum_{k=1}^{N+1} \left[N+1+k\left(n-1\right)\right] p(1-p)^{n-1}\binom{N}{k-1}p^{k-1}\left(1-p\right)^{N-k+1} \\ &= p(1-p)^{n-1} \cdot \left ( (N+1+n-1)\sum_{k=1}^{N+1} \binom{N}{k-1}p^{k-1}\left(1-p\right)^{N-k+1}+ (n-1)\sum_{k=1}^{N+1} (k-1)\binom{N}{k-1}p^{k-1}\left(1-p\right)^{N-k+1} \right) \\ &= p(1-p)^{n-1} \left ((N+1+n-1) + (n-1)pN \right) \\ &= \left[N+n+pN(n-1)\right]p(1-p)^{n-1} \end{align*}
Here the penultimate step uses the fact that the first sum is the total probability of a \(\mathrm{B}(N,p)\) distribution, namely \(1\), and the second sum is its mean, \(Np\).
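The expectation formula can be checked exactly, using rational arithmetic and small illustrative values of \(N\), \(n\), \(p\): faults survive quality control only when the sample contains \(s \in \{0,1\}\) faulty widgets, in which case the batch keeps \(s + m\) faults, where \(m\) is the number of faulty widgets among the other \(N\).

```python
from fractions import Fraction
from math import comb

def expected_faults(N, n, p):
    # Direct computation of the expected number of faults after quality
    # control: sum (m + s) over s = 0, 1 faulty in the sample of n and
    # m faulty among the other N widgets.
    q = 1 - p
    total = Fraction(0)
    for s in (0, 1):
        p_sample = comb(n, s) * p**s * q**(n - s)
        for m in range(N + 1):
            p_rest = comb(N, m) * p**m * q**(N - m)
            total += (m + s) * p_sample * p_rest
    return total

def closed_form(N, n, p):
    # the expression derived above
    return (N + n + p*N*(n - 1)) * p * (1 - p)**(n - 1)

p = Fraction(1, 10)
print(expected_faults(5, 3, p) == closed_form(5, 3, p))   # True
```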
For \(n=0,1,2,\ldots,\) the functions \(y_{n}\) satisfy the differential equation \[ \frac{\mathrm{d}^{2}y_{n}}{\mathrm{d}x^{2}}-\omega^{2}x^{2}y_{n}=-(2n+1)\omega y_{n}, \] where \(\omega\) is a positive constant, and \(y_{n}\rightarrow0\) and \(\mathrm{d}y_{n}/\mathrm{d}x\rightarrow0\) as \(x\rightarrow+\infty\) and as \(x\rightarrow-\infty.\) Verify that these conditions are satisfied, for \(n=0\) and \(n=1,\) by \[ y_{0}(x)=\mathrm{e}^{-\lambda x^{2}}\qquad\mbox{ and }\qquad y_{1}(x)=x\mathrm{e}^{-\lambda x^{2}} \] for some constant \(\lambda,\) to be determined. Show that \[ \frac{\mathrm{d}}{\mathrm{d}x}\left(y_{m}\frac{\mathrm{d}y_{n}}{\mathrm{d}x}-y_{n}\frac{\mathrm{d}y_{m}}{\mathrm{d}x}\right)=2(m-n)\omega y_{m}y_{n}, \] and deduce that, if \(m\neq n,\) \[ \int_{-\infty}^{\infty}y_{m}(x)y_{n}(x)\,\mathrm{d}x=0. \]
Solution: \begin{align*} && y_0(x) &= e^{-\lambda x^2} \\ && \lim_{x \to \pm \infty} y_0(x) &= 0 \Leftrightarrow \lambda > 0 \\ && \lim_{x \to \pm \infty} y'_0(x) &= \lim_{x \to \pm \infty} \l -2\lambda x e^{-\lambda x^2}\r \\ &&&= 0\Leftrightarrow \lambda > 0 \\ && y''_0(x) &= 4x^2 \lambda^2 e^{-\lambda x^2} - 2\lambda e^{-\lambda x^2} \\ \\ && y''_0 - \omega^2 x^2 y_0+(2\cdot 0 + 1) \omega y_0 &= e^{-\lambda x^2} \l 4x^2 \lambda^2 - 2 \lambda - \omega^2 x^2 + \omega\r \\ &&&=0 \Leftrightarrow 4\lambda^2 = \omega^2 \text{ and } 2\lambda = \omega \Leftrightarrow \lambda = \frac{\omega}{2} \end{align*}
Therefore \(y_0\) satisfies all the conditions when \(\lambda = \frac{\omega}{2}\), which is positive as the decay conditions require. Similarly for \(y_1\),
\begin{align*} && y_1(x) &= xe^{-\lambda x^2} \\ && \lim_{x \to \pm \infty} y_1(x) &= 0 \Leftrightarrow \lambda > 0 \\ && \lim_{x \to \pm \infty} y'_1(x) &= \lim_{x \to \pm \infty} \l -2x^2 \lambda e^{-\lambda x^2} + e^{-\lambda x^2} \r \\ &&&= 0\Leftrightarrow \lambda > 0 \\ && y''_1(x) &= e^{-\lambda x^2} \l 4x^3 \lambda^2-4x\lambda - 2x\lambda \r \\ &&&= e^{-\lambda x^2} \l 4x^3 \lambda^2-6x\lambda \r \\ && y''_1 - \omega^2 x^2 y_1+(2\cdot 1 + 1) \omega y_1 &= e^{-\lambda x^2} \l 4x^3\lambda^2-6x\lambda-\omega^2x^3+3\omega x\r \\ &&&=0 \Leftrightarrow 4\lambda^2 = \omega^2 \text{ and } 6\lambda = 3\omega \Leftrightarrow \lambda = \frac{\omega}{2} \end{align*}
Therefore \(y_1\) also satisfies all the conditions when \(\lambda = \frac{\omega}{2}\).
\begin{align*} \frac{\mathrm{d}}{\mathrm{d}x}\left(y_{m}\frac{\mathrm{d}y_{n}}{\mathrm{d}x}-y_{n}\frac{\mathrm{d}y_{m}}{\mathrm{d}x}\right) &= y'_my'_n+y_my''_n - y'_ny'_m-y_ny''_m \\ &= y_my''_n - y_ny''_m \\ &= y_m(\omega^2 x^2 y_n - (2n+1)\omega y_n) - y_n(\omega^2 x^2 y_m - (2m+1)\omega y_m) \\ &= y_my_n (2m-2n)\omega \\ &= 2(m-n) \omega y_my_n \end{align*}
Therefore, for \(m \neq n\): \begin{align*} \int_{-\infty}^{\infty} y_m(x)y_n(x) \d x &= \int_{-\infty}^{\infty} \frac{1}{2(m-n)\omega} \frac{\mathrm{d}}{\mathrm{d}x}\left(y_{m}\frac{\mathrm{d}y_{n}}{\mathrm{d}x}-y_{n}\frac{\mathrm{d}y_{m}}{\mathrm{d}x}\right) \d x \\ &= \frac{1}{2(m-n)\omega} \left [ y_{m}\frac{\mathrm{d}y_{n}}{\mathrm{d}x}-y_{n}\frac{\mathrm{d}y_{m}}{\mathrm{d}x}\right]_{-\infty}^{\infty} \\ &= 0, \end{align*}
since \(y_m\), \(y_n\) and their derivatives all tend to \(0\) as \(x \to \pm\infty\). This property is known as orthogonality: the question describes a Sturm--Liouville orthogonality condition, here for the quantum harmonic oscillator, whose eigenfunctions are related to the Hermite polynomials.
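A finite-difference check (with an illustrative value of \(\omega\)) that \(y_1\) with \(\lambda = \omega/2\) really satisfies its equation:

```python
import math

omega = 1.7            # illustrative positive constant
lam = omega / 2        # the value of λ found above

def y1(x):
    return x * math.exp(-lam * x * x)

def residual(x, h=1e-4):
    # finite-difference form of y'' − ω²x²y + 3ωy, which should vanish
    y2 = (y1(x + h) - 2*y1(x) + y1(x - h)) / h**2
    return y2 - omega**2 * x**2 * y1(x) + 3 * omega * y1(x)

print(max(abs(residual(x / 10)) for x in range(-30, 31)))   # ≈ 0
```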