Let \(f(x) = (x-p)g(x)\), where g is a polynomial. Show that the tangent to the curve \(y = f(x)\) at the point with \(x = a\), where \(a \neq p\), passes through the point \((p, 0)\) if and only if \(g'(a) = 0\). The curve \(C\) has equation $$y = A(x - p)(x - q)(x - r),$$ where \(p\), \(q\) and \(r\) are constants with \(p < q < r\), and \(A\) is a non-zero constant.
Solution: By the product rule, \(f'(x) = g(x) + (x-p)g'(x)\). The tangent to the curve \(y = f(x)\) at \(x = a\) has the equation \(\frac{y-f(a)}{x-a} = f'(a) = g(a)+(a-p)g'(a)\). This passes through \((p,0)\) iff \begin{align*} && \frac{-f(a)}{p-a} &= g(a)+(a-p)g'(a) \\ \Leftrightarrow && -f(a) &= (p-a)g(a) -(a-p)^2g'(a) \\ \Leftrightarrow && -f(a) &= -f(a) -(a-p)^2g'(a) \\ \Leftrightarrow && 0 &= g'(a), \\ \end{align*} where the final step uses \((a-p)^2 \neq 0\), which holds since \(a \neq p\).
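The continuation of the question about \(C\) is not reproduced above, but as an illustrative consequence of the result just proved: for \(C\) we have \(g(x) = A(x-q)(x-r)\), so \(g'(a) = A(2a-q-r)\), which is zero iff \(a = \frac12(q+r)\). Hence the only tangent to \(C\) at a point with \(x\)-coordinate \(a \neq p\) that passes through \((p,0)\) is the tangent at the midpoint \(x = \frac12(q+r)\) of \(q\) and \(r\).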
The function \(\f\) is defined by \[ \phantom{\ \ \ \ \ \ \ \ \ \ \ \ (x>0, \ \ x\ne1)} \f(x) = \frac{1}{x\ln x} \left(1 - (\ln x)^2 \right)^2 \ \ \ \ \ \ \ \ \ \ \ \ (x>0, \ \ x\ne1) \,.\] Show that, when \(( \ln x )^2 = 1\,\), both \(\f(x)=0\) and \(\f'(x)=0\,\). The function \(F\) is defined by \begin{align*} F(x) = \begin{cases} \displaystyle \int_{ 1/\text{e}}^x \f(t) \; \mathrm{d}t & \text{ for } 0 < x < 1\,, \\[7mm] \displaystyle \int_{\text{e}}^x \f(t) \; \mathrm{d}t & \text{ for } x > 1\,. \\ \end{cases} \end{align*}
Solution: When \((\ln x)^2 = 1\) we have \[ \f(x) = \frac{1}{x\ln x}(1 - 1)^2 = 0 \,.\] By the quotient rule, \[ \f'(x) = \frac{2(1 - (\ln x)^2) \cdot (-2 \ln x ) \cdot \frac1x \cdot (x \ln x) - (\ln x +1)(1-(\ln x)^2)^2}{(x\ln x)^2} \,,\] and both terms in the numerator carry a factor of \(1-(\ln x)^2\), which is zero when \((\ln x)^2 = 1\), so \(\f'(x) = 0\) as well.
The functions \(\s\) and \(\c\) satisfy \(\s(0)= 0\,\), \(\c(0)=1\,\) and \[ \s'(x) = \c(x)^2 ,\] \[ \c'(x)=-\s(x)^2. \] You may assume that \(\s\) and \(\c\) are uniquely defined by these conditions.
Solution: \begin{questionparts} \item \begin{align*} && \dfrac{\d }{\d x} \left( \s(x)^3 + \c(x)^3 \right) &= 3\s(x)^2\s'(x) + 3\c(x)^2 \c'(x) \\ &&&= 3\s(x)^2\c(x)^2 - 3\c(x)^2\s(x)^2 \\ &&&= 0 \\ \\ \Rightarrow && \s(x)^3 + \c(x)^3 &= \text{constant} \\ &&&= \s(0)^3 + \c(0)^3 \\ &&&= 1 \end{align*} \item \begin{align*} \frac{\d }{\d x} \, \Big( \s(x) \c(x) \Big) &= \s'(x) \c(x) + \s(x)\c'(x) \\ &= \c(x)^3 - \s(x)^3 \\ &= \c(x)^3 - (1-\c(x)^3) \\ &= 2\c(x)^3 - 1 \\ \\ \dfrac{\d }{\d x} \left( \dfrac{\s(x)}{\c(x)} \right) &= \frac{\s'(x)\c(x) - \s(x)\c'(x)}{\c(x)^2} \\ &= \frac{\c(x)^3 + \s(x)^3}{\c(x)^2} \\ &= \frac{1}{\c(x)^2} \\ \end{align*} \item \begin{align*} \int \s(x)^2 \d x &= -\int -\s(x)^2 \d x \\ &= -\int \c'(x) \d x \\ &= - \c(x) +C \\ \\ \int \s(x)^5 \, \d x &= \int \s(x)^2 \s(x)^3 \d x \\ &= \int \s(x)^2 (1 - \c(x)^3) \d x \\ &= -\int \c'(x) (1 - \c(x)^3) \d x \\ &= - \c(x) + \frac{\c(x)^4}{4} + C \end{align*} \item If \(u = \s(x)\), then \(\frac{\d u}{\d x} = \c(x)^2\). \begin{align*} \int \frac{1}{(1-u^3)^{\frac{2}{3}}} \, \d u &= \int \frac{1}{(1-\s(x)^3)^{\frac{2}{3}}} \c(x)^2 \d x \\ &= \int 1 \d x \\ &= x + C \\ &= \s^{-1}(u) + C \\ \\ \int \frac{1}{{(1-u^3)^{\frac{4}{3}}}} \d u &= \int \frac1{(1-\s(x)^3)^{\frac43} }\c(x)^2 \d x \\ &= \int \frac1{(\c(x)^3)^{\frac43}} \c(x)^2 \d x \\ &= \int \frac1{\c(x)^2} \d x \\ &= \frac{\s(x)}{\c(x)} + C \\ &= \frac{u}{(1-u^3)^{\frac13}} + C \\ \end{align*} \begin{align*} && \int {(1-u^3)}^{\frac{1}{3}} \, \d u &= \int (1-\s(x)^3)^{\frac13} \c(x)^2 \d x \\ &&&= \int \c(x)^3 \d x = I\\ &&&= \int \c(x) \s'(x) \d x \\ &&&= \c(x) \s(x) + \int \s(x)^2 \s(x) \d x \\ &&&= \c(x) \s(x) + \int (1 - \c(x)^3) \d x \\ &&&= \c(x) \s(x) + x - I \\ \Rightarrow && I &= \frac{x + \c(x) \s(x)}{2} + k \\ \Rightarrow && &= \frac12 \l \s^{-1}(u) + u \sqrt[3]{1-u^3}\r + k \end{align*}
The sequence of functions \(y_0\), \(y_1\), \(y_2\), \(\ldots\,\) is defined by \(y_0=1\) and, for \(n\ge1\,\), \[ y_n = (-1)^n \frac {1}{z} \, \frac{\d^{n} z}{\d x^n} \,, \] where \(z= \e^{-x^2}\!\).
Solution:
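Only the opening computation is sketched here, since the remainder of the question is not reproduced above. Directly from the definition: \begin{align*} && z &= \e^{-x^2}, \quad \frac{\d z}{\d x} = -2x\e^{-x^2}, \quad \frac{\d^2 z}{\d x^2} = (4x^2-2)\e^{-x^2} \\ \Rightarrow && y_1 &= (-1)^1 \, \frac{1}{z} \, \frac{\d z}{\d x} = 2x \\ && y_2 &= (-1)^2 \, \frac{1}{z} \, \frac{\d^2 z}{\d x^2} = 4x^2 - 2 \end{align*}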
The point with cartesian coordinates \((x,y)\) lies on a curve with polar equation \(r=\f(\theta)\,\). Find an expression for \(\dfrac{\d y}{\d x}\) in terms of \(\f(\theta)\), \(\f'(\theta)\) and \(\tan\theta\,\). Two curves, with polar equations \(r=\f(\theta)\) and \(r=\g(\theta)\), meet at right angles. Show that where they meet \[ \f'(\theta) \g'(\theta) +\f(\theta)\g(\theta) = 0 \,. \] The curve \(C\) has polar equation \(r=\f(\theta)\) and passes through the point given by \(r=4\), \(\theta = - \frac12\pi\). For each positive value of \(a\), the curve with polar equation \(r= a(1+\sin\theta)\) meets~\(C\) at right angles. Find \(\f(\theta)\,\). Sketch on a single diagram the three curves with polar equations \(r= 1+\sin\theta\,\), \ \(r= 4(1+\sin\theta)\) and \(r=\f(\theta)\,\).
Solution: \((x, y) = (f(\theta)\cos\theta, f(\theta)\sin\theta)\) so \begin{align*} \frac{dy}{d\theta} &= f'(\theta)\sin\theta + f(\theta)\cos\theta \\ \frac{dx}{d\theta} &= f'(\theta)\cos\theta - f(\theta)\sin\theta \\ \frac{dy}{dx} &= \frac{f'(\theta)\sin\theta + f(\theta)\cos\theta}{f'(\theta)\cos\theta - f(\theta)\sin\theta} \\ &= \frac{f(\theta) + f'(\theta)\tan\theta}{f'(\theta) - f(\theta)\tan\theta} \end{align*} dividing numerator and denominator by \(\cos\theta\). If the curves meet at right angles then the product of their gradients is \(-1\), ie \begin{align*} \frac{f(\theta) + f'(\theta)\tan\theta}{f'(\theta) - f(\theta)\tan\theta} \cdot \frac{g(\theta) + g'(\theta)\tan\theta}{g'(\theta) - g(\theta)\tan\theta} &= -1 \\ f(\theta)g(\theta) + \l f(\theta)g'(\theta) + f'(\theta)g(\theta) \r \tan\theta + f'(\theta)g'(\theta)\tan^2\theta &= \\ \quad -\l f'(\theta)g'(\theta) - \l f'(\theta)g(\theta) + f(\theta)g'(\theta) \r \tan\theta + f(\theta)g(\theta)\tan^2\theta \r \\ \tan^2\theta \l f(\theta)g(\theta) + f'(\theta)g'(\theta) \r + f'(\theta)g'(\theta) + f(\theta)g(\theta) &= 0 \\ (\tan^2\theta + 1) \l f(\theta)g(\theta) + f'(\theta)g'(\theta) \r &= 0 \\ f(\theta)g(\theta) + f'(\theta)g'(\theta) &= 0 \end{align*} \(g(\theta) = a(1+\sin\theta)\), \(g'(\theta) = a\cos\theta\). Therefore \(f'(\theta)a\cos \theta+f(\theta)a(1+\sin\theta) = 0\), so \begin{align*} && \frac{f'(\theta)}{f(\theta)} &= -\sec\theta - \tan\theta \\ \Rightarrow && \ln f(\theta) &= -\ln |\tan\theta + \sec\theta| + \ln |\cos\theta| + C \\ \Rightarrow && f(\theta) &= A \frac{\cos \theta}{\tan \theta + \sec \theta} \\ &&&= A \frac{\cos^2 \theta}{\sin \theta + 1} \\ &&&= A \frac{1-\sin^2 \theta}{\sin \theta + 1} \\ &&&= A (1-\sin \theta) \end{align*} When \(\theta = -\frac12 \pi\), \(r = 4\), so \(A = 2\) and \(f(\theta) = 2(1-\sin\theta)\).
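As a check, \(f(\theta) = 2(1-\sin\theta)\) and \(g(\theta) = a(1+\sin\theta)\) satisfy the perpendicularity condition for every \(a > 0\): \[ f(\theta)g(\theta) + f'(\theta)g'(\theta) = 2a(1-\sin\theta)(1+\sin\theta) + (-2\cos\theta)(a\cos\theta) = 2a\cos^2\theta - 2a\cos^2\theta = 0 \,.\]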
Differentiate, with respect to \(x\), \[ (ax^2+bx+c)\,\ln \big( x+\sqrt{1+x^2}\big) +\big(dx+e\big)\sqrt{1+x^2} \,, \] where \(a\), \(b\), \(c\), \(d\) and \(e\) are constants. You should simplify your answer as far as possible. Hence integrate:
Solution: Note first that \[ \frac{\d}{\d x} \ln \big( x+\sqrt{1+x^2}\big) = \frac{1}{x + \sqrt{1+x^2}} \left(1 + \frac{x}{\sqrt{1+x^2}} \right) = \frac{1}{x + \sqrt{1+x^2}} \cdot \frac{\sqrt{1+x^2} + x}{\sqrt{1+x^2}} = \frac{1}{\sqrt{1+x^2}} \,.\] Then \begin{align*} && y &= (ax^2+bx+c)\,\ln \big( x+\sqrt{1+x^2}\big) +\big(dx+e\big)\sqrt{1+x^2} \\ && y' &= (2ax+b)\,\ln \big( x+\sqrt{1+x^2}\big) + \frac{ax^2+bx+c}{\sqrt{1+x^2}} + d\sqrt{1+x^2} + \frac{x(dx+e)}{\sqrt{1+x^2}} \\ &&&= (2ax+b)\,\ln \big( x+\sqrt{1+x^2}\big) + \frac{1}{\sqrt{1+x^2}} \left ( (ax^2+bx+c) + d(1+x^2) + x(dx+e) \right) \\ &&&= (2ax+b)\,\ln \big( x+\sqrt{1+x^2}\big) + \frac{1}{\sqrt{1+x^2}} \left ( (a+2d)x^2+(b+e)x+(c+d) \right) \\ \end{align*}
Given that \(\displaystyle z = y^n \left( \frac{\d y}{\d x}\right)^{\!2}\), show that \[ \frac{\d z}{\d x} = y^{n-1} \frac{\d y}{\d x} \left( n \left(\frac{\d y}{\d x}\right)^{\!2} + 2y \frac{\d^2y}{\d x^2}\right) . \]
Solution: \begin{align*} &&z &= y^n \left( \frac{\d y}{\d x}\right)^{2} \\ \Rightarrow && \frac{\d z}{\d x} &= ny^{n-1}\left( \frac{\d y}{\d x}\right)^{3} + y^{n} \cdot 2 \left( \frac{\d y}{\d x}\right) \left( \frac{\d^2 y}{\d x^2}\right) \\ &&&= y^{n-1} \left( \frac{\d y}{\d x}\right) \left (n \left( \frac{\d y}{\d x}\right)^2 + 2y \frac{\d^2 y}{\d x^2} \right) \end{align*}
For any given function \(\f\), let \[ I = \int [\f'(x)]^2 \,[\f(x)]^n \d x\,, \tag{\(*\)} \] where \(n\) is a positive integer. Show that, if \(\f(x)\) satisfies \(\f''(x) =k \f(x)\f'(x)\) for some constant \(k\), then (\(*\)) can be integrated to obtain an expression for \(I\) in terms of \(\f(x)\), \(\f'(x)\), \(k\) and \(n\).
Solution: If \(f''(x) = kf(x)f'(x)\) then we can see \begin{align*} && I &= \int [\f'(x)]^2 \,[\f(x)]^n \d x \\ &&&= \int f'(x) \cdot f'(x) [f(x)]^n \d x \\ &&&= \left[ f'(x) \cdot \frac{[f(x)]^{n+1}}{n+1} \right] - \int f''(x) \frac{[f(x)]^{n+1}}{n+1} \d x \\ &&&= \frac{1}{n+1} \left (f'(x) [f(x)]^{n+1} - \int kf'(x) [f(x)]^{n+2} \d x \right) \\ &&&= \frac{1}{n+1} \left (f'(x) [f(x)]^{n+1} - k \frac{[f(x)]^{n+3}}{n+3} \right) +C\\ &&&= \frac{[f(x)]^{n+1}}{n+1} \left ( f'(x) - \frac{k[f(x)]^2}{n+3} \right) + C \end{align*}
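As a check, differentiating the result (and using \(f''(x) = kf(x)f'(x)\) again) recovers the integrand: \begin{align*} && \frac{\d}{\d x}\left[\frac{[f(x)]^{n+1}}{n+1} \left( f'(x) - \frac{k[f(x)]^2}{n+3} \right)\right] &= [f(x)]^{n}f'(x)\left( f'(x) - \frac{k[f(x)]^2}{n+3} \right) + \frac{[f(x)]^{n+1}}{n+1}\left( kf(x)f'(x) - \frac{2kf(x)f'(x)}{n+3} \right) \\ &&&= [f'(x)]^2[f(x)]^{n} - \frac{k[f(x)]^{n+2}f'(x)}{n+3} + \frac{k[f(x)]^{n+2}f'(x)}{n+3} \\ &&&= [f'(x)]^2 \, [f(x)]^{n} \end{align*}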
Let \(y= (x-a)^n \e^{bx} \sqrt{1+x^2}\,\), where \(n\) and \(a\) are constants and \(b\) is a non-zero constant. Show that \[ \frac{\d y}{\d x} = \frac{(x-a)^{n-1} \e^{bx} \q(x)}{\sqrt{1+x^2}}\,, \] where \(\q(x)\) is a cubic polynomial. Using this result, determine:
Solution:
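A sketch of the required differentiation (the integrals to be determined are not reproduced above): \begin{align*} && y &= (x-a)^n \e^{bx} \sqrt{1+x^2} \\ \Rightarrow && \frac{\d y}{\d x} &= n(x-a)^{n-1}\e^{bx}\sqrt{1+x^2} + b(x-a)^{n}\e^{bx}\sqrt{1+x^2} + \frac{x(x-a)^{n}\e^{bx}}{\sqrt{1+x^2}} \\ &&&= \frac{(x-a)^{n-1}\e^{bx}}{\sqrt{1+x^2}} \Big( n(1+x^2) + b(x-a)(1+x^2) + x(x-a) \Big) \\ &&&= \frac{(x-a)^{n-1} \e^{bx} \q(x)}{\sqrt{1+x^2}} \end{align*} where \(\q(x) = bx^3 + (n+1-ab)x^2 + (b-a)x + n - ab\), which is cubic since \(b \neq 0\).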
\(\triangle\) is an operation that takes polynomials in \(x\) to polynomials in \(x\); that is, given any polynomial \(\h(x)\), there is a polynomial called \(\triangle \h(x)\) which is obtained from \(\h(x)\) using the rules that define \(\triangle\). These rules are as follows:
Solution: Claim: If \(f\) is a constant, then \(\triangle f = 0\). Proof: First consider \(f(x) = 1\), \(g(x) = x\); then we must have: \begin{align*} && \triangle (1x) &= 1 \triangle x + x \triangle 1 \tag{iv} \\ &&&= 1 \cdot 1 + x \triangle 1 \tag{i} \\ \Rightarrow && 1 &= 1 + x \triangle 1 \tag{i} \\ \Rightarrow && \triangle 1 &= 0 \\ \Rightarrow && \triangle c &= 0 \tag{iii} \end{align*} \begin{align*} && \triangle (x^2) &= x \triangle x + x \triangle x \tag{iv} \\ &&&= x \cdot 1 + x \cdot 1 \tag{i} \\ &&&= 2x \\ \\ && \triangle (x^3) &= x^2 \triangle x + x \triangle (x^2) \tag{iv} \\ &&&= x^2 \cdot 1 + x \cdot 2x \tag{\(\triangle x^2 = 2x\)}\\ &&&= 3x^2 \end{align*} Claim: \(\triangle h(x) = \frac{\d h(x)}{\d x}\) for any polynomial \(h\). Proof: Since both \(\triangle\) and \(\frac{\d}{\d x}\) are linear (by rules (ii) and (iii)), it suffices to prove that \(\triangle x^n = nx^{n-1}\). We proceed by induction. Base case: \(n = 1\) is rule (i), and \(n = 2, 3\) were verified above. Inductive step: suppose the result holds for some \(n\); then \begin{align*} && \triangle (x^{n+1}) &= x \triangle (x^n) + x^n \triangle x \tag{iv} \\ &&&= x \cdot n x^{n-1} + x^n \triangle x \tag{Ind. hyp.} \\ &&&= nx^n + x^n \tag{i} \\ &&&= (n+1)x^{n} \end{align*} so the result holds for \(n+1\). Therefore, by induction, it is true for all \(n\).
Show that, if \(y^2 = x^k \f(x)\), then $\displaystyle 2xy \frac{\mathrm{d}y }{ \mathrm{d}x} = ky^2 + x^{k+1} \frac{\mathrm{d}\f }{ \mathrm{d}x}$\,.
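Solution: Differentiate both sides of \(y^2 = x^k \f(x)\) with respect to \(x\): \begin{align*} && 2y\frac{\d y}{\d x} &= kx^{k-1}\f(x) + x^{k}\frac{\d \f}{\d x} \\ \Rightarrow && 2xy\frac{\d y}{\d x} &= kx^{k}\f(x) + x^{k+1}\frac{\d \f}{\d x} \\ &&&= ky^2 + x^{k+1}\frac{\d \f}{\d x} \end{align*} multiplying through by \(x\) and using \(x^k\f(x) = y^2\) in the final step.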
Find the three values of \(x\) for which the derivative of \(x^2 \e^{-x^2}\) is zero. Given that \(a\) and \(b\) are distinct positive numbers, find a polynomial \(\P(x)\) such that the derivative of \(\P(x)\e^{-x^2}\) is zero for \(x=0\), \(x=\pm a\) and \(x=\pm b\,\), but for no other values of \(x\).
Solution: \begin{align*} && y &= x^2e^{-x^2} \\ \Rightarrow && y' &= 2xe^{-x^2} +x^2 \cdot (-2x)e^{-x^2} \\ &&&= e^{-x^2}(2x-2x^3) \\ &&&= 2e^{-x^2}x(1-x^2) \end{align*} Therefore the derivative is zero iff \(x = 0, \pm 1\). \begin{align*} && y &= \P(x) e^{-x^2} \\ \Rightarrow && y' &= e^{-x^2} (\P'(x)-2x\P(x)) \end{align*} Therefore we want \(\P'(x) - 2x\P(x) = Kx(x^2-a^2)(x^2-b^2)\). Since the right-hand side has degree \(5\), \(\P\) must have degree \(4\). Setting \(x = 0\) shows that \(\P'(0) = 0\), so the coefficient of \(x\) in \(\P\) is zero and we may write \(\P(x) = a_4x^4+a_3x^3+a_2x^2+a_0\). WLOG \(a_4 = 1\) and \(K = -2\), so \begin{align*} && -2(x^5-(a^2+b^2)x^3+a^2b^2x) &= 4x^3+3a_3x^2+2a_2x- 2x(x^4+a_3x^3+a_2x^2+a_0) \\ &&&= -2x^5-2a_3 x^4+(4-2a_2)x^3+3a_3x^2+(2a_2-2a_0)x \\ \Rightarrow && a_3 &= 0 \\ && a^2+b^2 &= 2-a_2 \\ \Rightarrow && a_2 &= 2-a^2-b^2 \\ && a^2b^2 &= a_0-a_2 \\ \Rightarrow && a_0 &= a^2b^2 + 2-a^2-b^2 \\ \Rightarrow && \P(x) &= x^4+(2-a^2-b^2)x^2+a^2b^2+2-a^2-b^2 \end{align*}
If \(\mathrm{Q}\) is a polynomial, \(m\) is an integer, \(m\geqslant1\) and \(\mathrm{P}(x)=(x-a)^{m}\mathrm{Q}(x),\) show that \[ \mathrm{P}'(x)=(x-a)^{m-1}\mathrm{R}(x) \] where \(\mathrm{R}\) is a polynomial. Explain why \(\mathrm{P}^{(r)}(a)=0\) whenever \(1\leqslant r\leqslant m-1\). (\(\mathrm{P}^{(r)}\) is the \(r\)th derivative of \(\mathrm{P}.\)) If \[ \mathrm{P}_{n}(x)=\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}(x^{2}-1)^{n} \] for \(n\geqslant1\) show that \(\mathrm{P}_{n}\) is a polynomial of degree \(n\). By repeated integration by parts, or otherwise, show that, if \(n-1\geqslant m\geqslant0,\) \[ \int_{-1}^{1}x^{m}\mathrm{P}_{n}(x)\,\mathrm{d}x=0 \] and find the value of \[ \int_{-1}^{1}x^{n}\mathrm{P}_{n}(x)\,\mathrm{d}x. \] [Hint. \textit{You may use the formula \[ \int_{0}^{\frac{\pi}{2}}\cos^{2n+1}t\,\mathrm{d}t=\frac{(2^{2n})(n!)^{2}}{(2n+1)!} \] without proof if you need it. However some ways of doing this question do not use this formula.}]
Solution: \begin{align*} && P(x) &= (x-a)^mQ(x) \\ \Rightarrow && P'(x) &= m(x-a)^{m-1}Q(x) + (x-a)^mQ'(x) \\ &&&= (x-a)^{m-1}(\underbrace{mQ(x) + (x-a)Q'(x)}_{\text{a polynomial}}) \\ &&&= (x-a)^{m-1}R(x) \end{align*} Therefore \(P^{(r)}(a) = 0\) for \(1 \leq r \leq m-1\) since each time we differentiate we will have a factor of \((x-a)^{m-r}\) which is zero when we evaluate at \(x = a\). If \(P_n(x) = \frac{\d^n}{\d x^n}(x^2-1)^n\) then we are differentiating a degree \(2n\) polynomial \(n\) times. Each time we differentiate we reduce the degree by \(1\), therefore the degree of \(P_n\) is \(n\). \begin{align*} && \int_{-1}^1 x^mP_n(x) \d x &= \left [x^m \underbrace{\frac{\d^{n-1}}{\d x^{n-1}}\left ( (x-1)^{n} (x+1)^{n} \right)}_{\text{has a factor of }x-1\text{ and }x+1}\right]_{-1}^1 - \int_{-1}^1 mx^{m-1}\frac{\d^{n-1}}{\d x^{n-1}}\left ( (x-1)^{n} (x+1)^{n} \right) \d x\\ &&&= 0 - \int_{-1}^1 mx^{m-1}\frac{\d^{n-1}}{\d x^{n-1}}\left ( (x-1)^{n} (x+1)^{n} \right) \d x\\ &&&= -\left [mx^{m-1} \underbrace{\frac{\d^{n-2}}{\d x^{n-2}}\left ( (x-1)^{n} (x+1)^{n} \right)}_{\text{has a factor of }x-1\text{ and }x+1}\right]_{-1}^1+ \int_{-1}^1 m(m-1)x^{m-2}\frac{\d^{n-2}}{\d x^{n-2}}\left ( (x-1)^{n} (x+1)^{n} \right) \d x\\ &&&= m(m-1)\int_{-1}^1 x^{m-2}\frac{\d^{n-2}}{\d x^{n-2}}\left ( (x-1)^{n} (x+1)^{n} \right) \d x\\ &&& \cdots \\ &&&= (-1)^m m!\int_{-1}^1 \frac{\d^{n-m}}{\d x^{n-m}} \left ( (x-1)^{n} (x+1)^{n} \right) \d x\\ &&&= 0 \end{align*} If \(n = m\), we have \begin{align*} && \int_{-1}^1 x^n P_n(x) \d x&= (-1)^nn! \int_{-1}^1 (x^2-1)^n \d x \\ && &= (-1)^{2n}n! \cdot 2\int_{0}^1 (1-x^2)^n \d x \\ x = \sin \theta, \d x = \cos \theta \d \theta: &&&= 2 \cdot n!\int_{0}^{\pi/2} \cos^{2n} \theta \cdot \cos \theta \d \theta \\ &&&= 2 \cdot n!\int_{0}^{\pi/2} \cos^{2n+1} \theta \d \theta \\ &&&= 2 \cdot n!\frac{(2^{2n})(n!)^{2}}{(2n+1)!} \\ &&&= \frac{(2^{2n+1})(n!)^{3}}{(2n+1)!} \\ \end{align*}
\(\lozenge\) is an operation which takes polynomials in \(x\) to polynomials in \(x\); that is, given a polynomial \(\mathrm{h}(x)\) there is another polynomial called \(\lozenge\mathrm{h}(x)\). It is given that, if \(\mathrm{f}(x)\) and \(\mathrm{g}(x)\) are any two polynomials in \(x\), the following are always true:
Solution: Claim: If \(f(x) = c\) then \(\lozenge f(x) = 0\). Proof: Consider \(g(x) = x\); then \begin{align*} (1) && \lozenge(f(x)g(x)) &= g(x) \lozenge f(x) + f(x) \lozenge g(x) \\ \Rightarrow && \lozenge(c x) &= x \lozenge f(x) + c \lozenge x \\ (4) && \lozenge(c x) &= c \lozenge x \\ \Rightarrow && 0 &= x \lozenge f(x) \\ \Rightarrow && \lozenge f(x) &= 0 \end{align*} \begin{align*} (1) && \lozenge(x^2) &= x \lozenge x + x \lozenge x \\ (3) &&&= 2 x \cdot 1 \\ &&&= 2x \\ \\ (1) && \lozenge (x^3) &= x^2 \lozenge x + x \lozenge (x^2) \\ &&&= x^2 \cdot \underbrace{1}_{(3)} + x \cdot\underbrace{ 2x}_{\text{previous part}} \\ &&&= 3x^2 \end{align*} Claim: \(\lozenge h(x) = \frac{\d }{\d x} ( h(x))\) for any polynomial \(h\). Proof: (By strong induction on the degree of \(h\).) Base case: polynomials of degree \(0\) were handled in the first part of the question. Inductive step: assume the claim holds for all polynomials of degree at most \(k\), and consider a polynomial of degree \(k+1\). We can write \(h(x) = ax^{k+1} + h_k(x)\) where \(h_k(x)\) is a polynomial of degree at most \(k\). Then: \begin{align*} && \lozenge (h(x)) &= \lozenge (ax^{k+1} + h_k(x)) \\ (2) &&&= \lozenge (ax^{k+1})+ \lozenge (h_k(x)) \\ &&&=\underbrace{a\lozenge (x^{k+1})}_{(4)}+ \underbrace{\frac{\d}{\d x} (h_k(x))}_{\text{inductive hypothesis}}\\ &&&= a \underbrace{\left (x \lozenge x^k + x^k \lozenge x \right)}_{(1)} + \frac{\d}{\d x} (h_k(x)) \\ &&&= a \left ( x \cdot \underbrace{k x^{k-1}}_{\text{inductive hyp.}} + x^k \cdot \underbrace{1}_{(3)} \right) + \frac{\d}{\d x} (h_k(x)) \\ &&&= (k+1)a x^k + \frac{\d}{\d x} (h_k(x)) \\ &&&= \frac{\d }{\d x} \left ( ax^{k+1} + h_k(x) \right) \\ &&&= \frac{\d }{\d x} (h(x)) \end{align*} Therefore, since the claim holds for degree \(0\), and whenever it holds for all degrees up to \(k\) it also holds for degree \(k+1\), by the principle of mathematical induction it holds for all polynomials.