Problems


2019 Paper 2 Q8

The domain of the function f is the set of all \(2 \times 2\) matrices and its range is the set of real numbers. Thus, if \(M\) is a \(2 \times 2\) matrix, then \(f(M) \in \mathbb{R}\). The function f has the property that \(f(MN) = f(M)f(N)\) for any \(2 \times 2\) matrices \(M\) and \(N\).

  1. You are given that there is a matrix \(M\) such that \(f(M) \neq 0\). Let \(I\) be the \(2 \times 2\) identity matrix. By considering \(f(MI)\), show that \(f(I) = 1\).
  2. Let \(J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\). You are given that \(f(J) \neq 1\). By considering \(J^2\), evaluate \(f(J)\). Using \(J\), show that, for any real numbers \(a\), \(b\), \(c\) and \(d\), $$f\left(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right) = -f\left(\begin{pmatrix} c & d \\ a & b \end{pmatrix}\right) = f\left(\begin{pmatrix} d & c \\ b & a \end{pmatrix}\right)$$
  3. Let \(K = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}\) where \(k \in \mathbb{R}\). Use \(K\) to show that, if the second row of the matrix \(A\) is a multiple of the first row, then \(f(A) = 0\).
  4. Let \(P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\). By considering the matrices \(P^2\), \(P^{-1}\), and \(K^{-1}PK\) for suitable values of \(k\), evaluate \(f(P)\).


Solution:

  1. Consider \(f(M) = f(MI) = f(M)f(I)\). Since \(f(M) \neq 0\) we can divide by \(f(M)\) to obtain \(f(I) = 1\).
  2. Let \(J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\), then \(J^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I\). Therefore \(1 = f(I) = f(J^2) = f(J)f(J) \Rightarrow f(J) = \pm 1 \Rightarrow f(J) = -1\) since \(f(J) \neq 1\). \begin{align*} \begin{pmatrix} a & b \\ c & d \end{pmatrix}J &= \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} b & a \\ d & c \end{pmatrix} \\ J\begin{pmatrix} a & b \\ c & d \end{pmatrix} &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} \\ &= \begin{pmatrix} c & d \\ a & b \end{pmatrix} \\ J\begin{pmatrix} a & b \\ c & d \end{pmatrix}J &=\begin{pmatrix} d & c \\ b & a \end{pmatrix} \end{align*} Therefore \(f\left(\begin{pmatrix} c & d \\ a & b \end{pmatrix}\right) = f \left (J\begin{pmatrix} a & b \\ c & d \end{pmatrix} \right) = f(J) f\left(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right) = -f\left(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right)\) and \(f\left(\begin{pmatrix} d & c \\ b & a \end{pmatrix}\right) = f\left(J\begin{pmatrix} a & b \\ c & d \end{pmatrix}J \right) = f(J)f\left(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right)f(J) = f\left(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\right)\) as required.
  3. First consider the zero matrix \(O\), then \begin{align*} && JO &= O \\ \Rightarrow && f(JO) &= f(O) \\ \Rightarrow && f(J)f(O) &= f(O) \\ \Rightarrow && -f(O) &= f(O) \\ \Rightarrow && f(O) &= 0 \end{align*} Now consider \(K_{k} = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}\). Suppose \(A = \begin{pmatrix} a & b \\ ka & kb \end{pmatrix}\) with \(k \neq 0\), then \begin{align*} K_{\frac1k}A &= \begin{pmatrix} 1 & 0 \\ 0 & \frac1k \end{pmatrix} \begin{pmatrix} a & b \\ ka & kb \end{pmatrix} \\ &= \begin{pmatrix} a & b \\ a & b \end{pmatrix} \end{align*} The matrix \(\begin{pmatrix} a & b \\ a & b \end{pmatrix}\) is unchanged when its rows are swapped, so by part 2, \(f(K_{\frac1k}A) = f \left ( \begin{pmatrix} a & b \\ a & b \end{pmatrix} \right) = - f \left ( \begin{pmatrix} a & b \\ a & b \end{pmatrix} \right) = 0\). Therefore either \(f(K_{\frac1k}) = 0\) or \(f(A) = 0\); but \(K_{\frac1k}K_k = I\) gives \(f(K_{\frac1k})f(K_k) = f(I) = 1\), so \(f(K_{\frac1k}) \neq 0\) and hence \(f(A) = 0\). Finally, if \(k = 0\) then the second row of \(A\) is zero, so \(K_0 A = A\); since \(K_0(JK_0J) = O\) and \(f(JK_0J) = f(J)f(K_0)f(J) = f(K_0)\), we get \(f(K_0)^2 = f(O) = 0\), so \(f(K_0) = 0\) and \(f(A) = f(K_0)f(A) = 0\).
  4. Let \(P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\), then \(P^2 = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}\), \(P^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\), and \(K_k^{-1}PK_k = K_k^{-1}\begin{pmatrix} 1 & k \\ 0 & k \end{pmatrix} = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}\). If \(A\) has an inverse then \(f(A) \neq 0\), since \(f(A)f(A^{-1}) = f(I) = 1\). Taking \(k = 2\) gives \(K_2^{-1}PK_2 = P^2\), so \(f(P)^2 = f(P^2) = f(K_2^{-1}PK_2) = f(K_2^{-1})f(P)f(K_2) = f(P)\). Therefore \(f(P) = 0\) or \(1\); but \(P\) is invertible, so \(f(P) \neq 0\) and hence \(f(P) = 1\).
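As a sanity check (not part of the question), the determinant is one concrete function satisfying \(f(MN) = f(M)f(N)\), so it must agree with the values derived above. The following sketch assumes NumPy and uses illustrative matrices:

```python
import numpy as np

# det is one concrete function with f(MN) = f(M) f(N); we use it only as a
# plausibility check of the derived values (an assumption, since the
# question treats f abstractly).
f = np.linalg.det

J = np.array([[0.0, 1.0], [1.0, 0.0]])
P = np.array([[1.0, 1.0], [0.0, 1.0]])
A = np.array([[2.0, 3.0], [4.0, 6.0]])  # second row is twice the first

assert round(f(J)) == -1   # part 2
assert round(f(P)) == 1    # part 4
assert round(f(A)) == 0    # part 3: dependent rows
print("determinant agrees with the derived values")
```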

2019 Paper 3 Q3

The matrix A is given by $$\mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

  1. You are given that the transformation represented by A has a line \(L_1\) of invariant points (so that each point on \(L_1\) is transformed to itself). Let \((x, y)\) be a point on \(L_1\). Show that \(((a - 1)(d - 1) - bc)xy = 0\). Show further that \((a - 1)(d - 1) = bc\). What can be said about A if \(L_1\) does not pass through the origin?
  2. By considering the cases \(b \neq 0\) and \(b = 0\) separately, show that if \((a - 1)(d - 1) = bc\) then the transformation represented by A has a line of invariant points. You should identify the line in the different cases that arise.
  3. You are given instead that the transformation represented by A has an invariant line \(L_2\) (so that each point on \(L_2\) is transformed to a point on \(L_2\)) and that \(L_2\) does not pass through the origin. If \(L_2\) has the form \(y = mx + k\), show that \((a - 1)(d - 1) = bc\).


Solution:

  1. Suppose \((x,y)\) is on the line of invariant points, then \begin{align*} &&\begin{pmatrix} x \\ y \end{pmatrix} &= \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \\ &&&= \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix} \\ \Rightarrow && \begin{cases} (a-1)x + by = 0 \\ cx + (d-1)y = 0 \end{cases} \tag{*} \end{align*} Multiplying the two equations of \((*)\) together gives \((a-1)(d-1)xy = (-by)(-cx) = bcxy\), i.e. \(((a-1)(d-1)-bc)xy = 0\). This holds for every point \((x,y)\) on the line of invariant points. If there is such a point with \(x \neq 0\) and \(y \neq 0\) we can cancel \(xy\) and we are done; otherwise the line of invariant points is one of the axes. If it is the \(x\)-axis then \(\begin{pmatrix} a \\ c \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\), and if it is the \(y\)-axis then \(\begin{pmatrix} b \\ d \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\); in either case both \((a-1)(d-1)\) and \(bc\) are zero, so the result still holds. If \(L_1\) does not pass through the origin, pick two distinct points \(P\) and \(Q\) on it; they are linearly independent (a line through two dependent vectors passes through the origin), and both are fixed, so by linearity every point of the plane is fixed, i.e. \(\mathbf{A} = \mathbf{I}\).
  2. Suppose \((a-1)(d-1) - bc = 0\) and \(b \neq 0\). I claim that \(y = \frac{1-a}{b}x\) is a line of invariant points. The first equation of \((*)\) is clearly satisfied, and since \((a-1)(d-1) = bc\) the rows \((a-1, b)\) and \((c, d-1)\) are linearly dependent, so the second equation is a multiple of the first and is satisfied too. If \(b = 0\) then \((a-1)(d-1) = 0\), so the matrix has the form \(\begin{pmatrix} 1 & 0 \\ c & d\end{pmatrix}\) (if \(d \neq 1\)) or \(\begin{pmatrix} a & 0 \\ c & 1\end{pmatrix}\). In the first case the line \(y = \frac{c}{1-d}x\) is a line of invariant points, and in the second \(x = 0\) is (if \(a = 1\) and \(c = 0\) as well, then \(\mathbf{A} = \mathbf{I}\) and every line is a line of invariant points).
  3. Suppose the invariant line is \(y = mx+k\); then we must have \begin{align*} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ mx + k \end{pmatrix} &= \begin{pmatrix} (a + mb)x + bk \\ (c+dm)x + dk \end{pmatrix} \end{align*} and \((c+dm)x + dk = m((a + mb)x + bk) +k \Rightarrow k(d-mb-1) = x(-c+(a-d)m+m^2b)\). Since this equation must hold for all values of \(x\), and \(k \neq 0\) (as \(L_2\) does not pass through the origin), we can say that \(mb = d-1\) and \(-c+(a-d)m+m^2b = 0\). Substituting \(m^2b = m(d-1)\) into the latter gives \(-c + (a-d)m + m(d-1) = 0\), i.e. \((a-1)m = c\). Multiplying the two relations together, \((a-1)(d-1) = (a-1)mb = cb\), as required; no case split on \(m\) is needed (if \(m = 0\) the relations just say \(d = 1\) and \(c = 0\)).
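Part 2 can be illustrated numerically: choose entries with \(b \neq 0\) satisfying \((a-1)(d-1) = bc\) and check that sampled points on the claimed line are fixed. This sketch assumes NumPy, and the values of \(a\), \(b\), \(d\) are arbitrary:

```python
import numpy as np

# Illustration of part 2 with b != 0: pick a, b, d, force (a-1)(d-1) = bc,
# and check that sampled points on y = (1-a)/b * x are fixed.
a, b, d = 3.0, 2.0, 5.0
c = (a - 1) * (d - 1) / b        # enforces (a-1)(d-1) = bc
A = np.array([[a, b], [c, d]])
m = (1 - a) / b                  # slope of the claimed line

for x in np.linspace(-5.0, 5.0, 11):
    point = np.array([x, m * x])
    assert np.allclose(A @ point, point)  # each point maps to itself
print("every sampled point on the line is invariant")
```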

1998 Paper 3 Q5

The exponential of a square matrix \({\bf A}\) is defined to be $$ \exp ({\bf A}) = \sum_{r=0}^\infty {1\over r!} {\bf A}^r \,, $$ where \({\bf A}^0={\bf I}\) and \(\bf I\) is the identity matrix. Let $$ {\bf M}=\left(\begin{array}{cc} 0 & -1 \\ 1 & \phantom{-} 0 \end{array} \right) \,. $$ Show that \({\bf M}^2=-{\bf I}\) and hence express \(\exp({\theta {\bf M}})\) as a single \(2\times 2\) matrix, where \(\theta\) is a real number. Explain the geometrical significance of \(\exp({\theta {\bf M}})\). Let $$ {\bf N}=\left(\begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array}\right) \,. $$ Express similarly \(\exp({s{\bf N}})\), where \(s\) is a real number, and explain the geometrical significance of \(\exp({s{\bf N}})\). For which values of \(\theta\) does $$ \exp({s{\bf N}})\; \exp({\theta {\bf M}})\, = \, \exp({\theta {\bf M}})\;\exp({s{\bf N}}) $$ for all \(s\)? Interpret this fact geometrically.


Solution: \begin{align*} \mathbf{M}^2 &= \begin{pmatrix} 0 & - 1 \\ 1 & 0 \end{pmatrix}^2 \\ &= \begin{pmatrix} 0 \cdot 0 + (-1) \cdot 1 & 0 \cdot (-1) + (-1) \cdot 0 \\ 1 \cdot 0 + 0 \cdot 1 & 1 \cdot (-1) + 0 \cdot 0 \end{pmatrix} \\ &= \begin{pmatrix} -1 & 0 \\ 0 & -1\end{pmatrix} \\ &= - \mathbf{I} \end{align*} Since \(\mathbf{M}^2 = -\mathbf{I}\), we have \(\mathbf{M}^{2n} = (-1)^n\mathbf{I}\) and \(\mathbf{M}^{2n+1} = (-1)^n\mathbf{M}\), so splitting the series into even and odd \(r\): \begin{align*} \exp(\theta \mathbf{M}) &= \sum_{r=0}^\infty \frac1{r!} (\theta \mathbf{M})^r \\ &= \sum_{r=0}^\infty \frac{1}{r!} \theta^r \mathbf{M}^r \\ &= \left(\sum_{n=0}^\infty \frac{(-1)^n\theta^{2n}}{(2n)!}\right)\mathbf{I} + \left(\sum_{n=0}^\infty \frac{(-1)^n\theta^{2n+1}}{(2n+1)!}\right)\mathbf{M} \\ &= \cos \theta \, \mathbf{I} + \sin \theta \, \mathbf{M} \\ &= \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \end{align*} This is an anticlockwise rotation through the angle \(\theta\) about the origin. \begin{align*} && \mathbf{N}^2 &= \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}^2 \\ && &= \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \\ \Rightarrow && \exp(s\mathbf{N}) &= \sum_{r=0}^\infty \frac{1}{r!} (s\mathbf{N})^r \\ &&&= \mathbf{I} + s \mathbf{N} \\ &&&= \begin{pmatrix} 1 &s \\ 0 & 1 \end{pmatrix} \end{align*} This is a shear parallel to the \(x\)-axis, leaving the \(x\)-axis invariant and sending \((1,1)\) to \((1+s, 1)\). Suppose these matrices commute for all \(s\), i.e. \begin{align*} && \begin{pmatrix} 1 &s \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} &= \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}\begin{pmatrix} 1 &s \\ 0 & 1 \end{pmatrix} \\ \Rightarrow && \begin{pmatrix} \cos \theta + s \sin \theta & -\sin \theta + s \cos \theta \\ \sin \theta & \cos \theta \end{pmatrix} &= \begin{pmatrix} \cos \theta & s \cos \theta - \sin \theta \\ \sin \theta & s \sin \theta + \cos \theta \end{pmatrix} \\ \Rightarrow && \sin \theta &= 0 \\ \Rightarrow && \theta &=n \pi, n \in \mathbb{Z} \end{align*} When \(\theta\) is an even multiple of \(\pi\) the rotation is the identity, so the order trivially does not matter. When \(\theta\) is an odd multiple of \(\pi\) the rotation is \(-\mathbf{I}\), which commutes with every matrix: shearing and then rotating the plane through half a turn is the same as rotating first and then shearing.
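The closed forms can be checked by summing the defining series directly. This sketch assumes NumPy; the 30-term cutoff is an arbitrary choice, ample at these matrix norms:

```python
import numpy as np

# Truncated-series check of the two matrix exponentials against the
# closed forms derived above (illustrative values of theta and s).
def expm_series(A, terms=30):
    total, term = np.eye(2), np.eye(2)
    for r in range(1, terms):
        term = term @ A / r          # term now holds A^r / r!
        total = total + term
    return total

theta, s = 0.7, 2.5
M = np.array([[0.0, -1.0], [1.0, 0.0]])
N = np.array([[0.0, 1.0], [0.0, 0.0]])

rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, s], [0.0, 1.0]])

assert np.allclose(expm_series(theta * M), rotation)
assert np.allclose(expm_series(s * N), shear)
print("series sums match the closed forms")
```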

1993 Paper 2 Q6

In this question, \(\mathbf{A}\), \(\mathbf{B}\) and \(\mathbf{X}\) are non-zero \(2\times2\) real matrices. Are the following assertions true or false? You must provide a proof or a counterexample in each case.

  1. If \(\mathbf{AB=0}\) then \(\mathbf{BA=0}.\)
  2. \((\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})=\mathbf{A}^{2}-\mathbf{B}^{2}.\)
  3. The equation \(\mathbf{AX=0}\) has a non-zero solution \(\mathbf{X}\) if and only if \(\det\mathbf{A}=0.\)
  4. For any \(\mathbf{A}\) and \(\mathbf{B}\) there are at most two matrices \(\mathbf{X}\) such that \(\mathbf{X}^{2}+\mathbf{AX}+\mathbf{B}=\mathbf{0}.\)


Solution:

  1. This is false, for example let \(\mathbf{A} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}\) and \(\mathbf{B} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\), then \begin{align*} \mathbf{AB} &= \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \\ &= \begin{pmatrix}0 & 0 \\ 0 & 0\end{pmatrix} \\ \mathbf{BA} &= \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} \\ &= \begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix} \\ \end{align*}
  2. This is also false. Using the same matrices from part (i), for which \(\mathbf{AB} = \mathbf{0}\) but \(\mathbf{BA} = \begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}\), we find: \begin{align*} (\mathbf{A - B})(\mathbf{A + B}) &= \mathbf{A}^2+\mathbf{AB}-\mathbf{BA}-\mathbf{B}^2 \\ &= \mathbf{A}^2-\mathbf{B}^2-\begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix} \\ &\neq \mathbf{A}^2-\mathbf{B}^2 \end{align*}
  3. This is true. Claim: The equation \(\mathbf{AX=0}\) has a non-zero solution \(\mathbf{X}\) if and only if \(\det\mathbf{A}=0.\) Proof: \((\Rightarrow)\) Suppose \(\det\mathbf{A} \neq 0\); then \(\mathbf{A}\) has an inverse, so \(\mathbf{AX} = \mathbf{0} \Rightarrow \mathbf{X} = \mathbf{A}^{-1}\mathbf{AX} = \mathbf{0}\), i.e. there is no non-zero solution. \((\Leftarrow)\) Suppose \(\det \mathbf{A} = 0\) and write \(\mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\), so \(ad-bc=0\). Consider \(\mathbf{X} = \begin{pmatrix} d & d\\ -c & -c\end{pmatrix}\); then \(\mathbf{AX} = \begin{pmatrix} ad-bc & ad-bc \\ cd-dc & cd-dc \end{pmatrix} = \mathbf{0}\). If this \(\mathbf{X}\) is zero then \(c = d = 0\), and we may instead take \(\mathbf{X} = \begin{pmatrix} b & b\\ -a & -a\end{pmatrix}\), which is non-zero since \(\mathbf{A} \neq \mathbf{0}\) and satisfies \(\mathbf{AX} = \begin{pmatrix} ab-ba & ab-ba \\ 0 & 0 \end{pmatrix} = \mathbf{0}\).
  4. This is false. Take \(\mathbf{A} = -2\mathbf{I}\) and \(\mathbf{B} = \mathbf{I}\) (both non-zero), so the equation reads \((\mathbf{X}-\mathbf{I})^2 = \mathbf{0}\). Then \(\mathbf{X} = \mathbf{I} + \begin{pmatrix} 0 & x \\ 0 & 0\end{pmatrix}\) is a solution for every real \(x\), since \(\begin{pmatrix} 0 & x \\ 0 & 0\end{pmatrix}^2 = \mathbf{0}\); so there are infinitely many solutions, not at most two.
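The counterexamples are easy to check numerically. This sketch (assuming NumPy) verifies the part 1 matrices and the nilpotent family \(\begin{pmatrix}0 & x\\ 0 & 0\end{pmatrix}\) that underlies part 4:

```python
import numpy as np

# Part 1 counterexample: AB = 0 but BA != 0.
A = np.array([[0.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(A @ B, 0)
assert not np.allclose(B @ A, 0)

# The nilpotent family behind part 4: (0 x; 0 0) squares to zero for
# every x, which is what yields infinitely many solutions.
for x in [1.0, 2.0, -3.5]:
    N = np.array([[0.0, x], [0.0, 0.0]])
    assert np.allclose(N @ N, 0)
print("counterexamples verified")
```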

1992 Paper 3 Q2

The matrices \(\mathbf{I}\) and \(\mathbf{J}\) are \[ \mathbf{I}=\begin{pmatrix}1 & 0\\ 0 & 1 \end{pmatrix}\quad\mbox{ and }\quad\mathbf{J}=\begin{pmatrix}1 & 1\\ 1 & 1 \end{pmatrix} \] respectively and \(\mathbf{A}=\mathbf{I}+a\mathbf{J},\) where \(a\) is a non-zero real constant. Prove that \[ \mathbf{A}^{2}=\mathbf{I}+\tfrac{1}{2}[(1+2a)^{2}-1]\mathbf{J}\quad\mbox{ and }\quad\mathbf{A}^{3}=\mathbf{I}+\tfrac{1}{2}[(1+2a)^{3}-1]\mathbf{J} \] and obtain a similar form for \(\mathbf{A}^{4}.\) If \(\mathbf{A}^{k}=\mathbf{I}+p_{k}\mathbf{J},\) suggest a suitable form for \(p_{k}\) and prove that it is correct by induction, or otherwise.


Solution: If \(\mathbf{J}=\begin{pmatrix}1 & 1\\ 1 & 1 \end{pmatrix}\), then \(\mathbf{J}^2=\begin{pmatrix}2 & 2\\ 2 & 2 \end{pmatrix} = 2\mathbf{J}\). Therefore \(\mathbf{J}^n = 2\mathbf{J}^{n-1} = \cdots = 2^{n-1}\mathbf{J}\). Let \(\mathbf{A}=\mathbf{I}+a\mathbf{J}\), then \begin{align*} \mathbf{A}^2 &=\left( \mathbf{I}+a\mathbf{J}\right)^2 \\ &= \mathbf{I}+2a\mathbf{J} + a^2\mathbf{J}^2 \\ &= \mathbf{I}+2a\mathbf{J} + 2a^2\mathbf{J} \\ &= \mathbf{I}+(2a+ 2a^2)\mathbf{J} \\ &= \mathbf{I}+\frac12(1+4a+ 4a^2-1)\mathbf{J} \\ &= \mathbf{I}+\frac12((1+2a)^2-1)\mathbf{J} \\ \end{align*} \begin{align*} \mathbf{A}^3 &=\left( \mathbf{I}+a\mathbf{J}\right)^3 \\ &= \mathbf{I}+3a\mathbf{J} + 3a^2\mathbf{J}^2 + a^3\mathbf{J}^3 \\ &= \mathbf{I}+3a\mathbf{J} + 6a^2\mathbf{J} + 4a^3\mathbf{J} \\ &= \mathbf{I}+(3a+ 6a^2+4a^3)\mathbf{J} \\ &= \mathbf{I}+\frac12(1+3\cdot2a+3\cdot4a^2+ 8a^3-1)\mathbf{J} \\ &= \mathbf{I}+\frac12((1+2a)^3-1)\mathbf{J} \\ \end{align*} \begin{align*} \mathbf{A}^4 &=\left( \mathbf{I}+a\mathbf{J}\right)^4 \\ &= \mathbf{I}+4a\mathbf{J} + 6a^2\mathbf{J}^2 + 4a^3\mathbf{J}^3+a^4\mathbf{J}^4 \\ &= \mathbf{I}+4a\mathbf{J} + 12a^2\mathbf{J} + 16a^3\mathbf{J}+8a^4\mathbf{J}\\ &= \mathbf{I}+(4a+ 12a^2+16a^3+8a^4)\mathbf{J} \\ &= \mathbf{I}+\frac12(1+4\cdot2a+6\cdot4a^2+ 4\cdot8a^3+16a^4-1)\mathbf{J} \\ &= \mathbf{I}+\frac12((1+2a)^4-1)\mathbf{J} \\ \end{align*} Claim: \(p_k = \frac12((1+2a)^{k}-1)\), i.e. \(\mathbf{A}^k = \mathbf{I} + \frac12 ((1+2a)^{k}-1)\mathbf{J}\). Proof: Firstly, note that \(\mathbf{I}\) commutes with everything, so we can apply the binomial theorem exactly as for real numbers: \begin{align*} \mathbf{A}^k &=\left( \mathbf{I}+a\mathbf{J}\right)^k \\ &= \sum_{i=0}^k \binom{k}{i}a^i\mathbf{J}^i \\ &= \mathbf{I} + \sum_{i=1}^k \binom{k}{i}a^i2^{i-1}\mathbf{J} \\ &= \mathbf{I} + \frac12\left(\sum_{i=1}^k \binom{k}{i}2^{i}a^i\right)\mathbf{J} \\ &= \mathbf{I} + \frac12\left(\sum_{i=0}^k \binom{k}{i}2^{i}a^i - 1\right)\mathbf{J} \\ &= \mathbf{I} + \frac12\left((1+2a)^k - 1\right)\mathbf{J} \end{align*} as required.
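The claimed formula for \(p_k\) can be spot-checked numerically. This sketch assumes NumPy; the value of \(a\) is an arbitrary illustration (the binomial argument above covers all cases):

```python
import numpy as np

# Spot check of A^k = I + p_k J with p_k = ((1+2a)^k - 1)/2.
a = 0.3
I2 = np.eye(2)
J = np.ones((2, 2))
A = I2 + a * J

for k in range(1, 8):
    p_k = ((1 + 2 * a) ** k - 1) / 2
    assert np.allclose(np.linalg.matrix_power(A, k), I2 + p_k * J)
print("A^k = I + p_k J holds for k = 1..7")
```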

1989 Paper 3 Q7

The linear transformation \(\mathrm{T}\) is a shear which transforms a point \(P\) to the point \(P'\) defined by

  1. \(\overrightarrow{PP'}\) makes an acute angle \(\alpha\) (anticlockwise) with the \(x\)-axis,
  2. \(\angle POP'\) is clockwise (i.e. the rotation from \(OP\) to \(OP'\) clockwise is less than \(\pi\)),
  3. \(PP'=k\times PN,\) where \(PN\) is the perpendicular onto the line \(y=x\tan\alpha,\) where \(k\) is a given non-zero constant.
If \(\mathrm{T}\) is represented in matrix form by \(\begin{pmatrix}x'\\ y' \end{pmatrix}=\mathbf{M}\begin{pmatrix}x\\ y \end{pmatrix}\), show that \[ \mathbf{M}=\begin{pmatrix}1-k\sin\alpha\cos\alpha & k\cos^{2}\alpha\\ -k\sin^{2}\alpha & 1+k\sin\alpha\cos\alpha \end{pmatrix}. \] Show that the necessary and sufficient condition for \(\begin{pmatrix}p & q\\ r & t \end{pmatrix}\) to commute with \(\mathbf{M}\) is \[ t-p=2q\tan\alpha=-2r\cot\alpha. \]


Solution:

We can see that \(\mathbf{M}\) sends \(\begin{pmatrix} 1 \\ \tan \alpha \end{pmatrix}\) to itself (points on the line \(y = x\tan\alpha\) are invariant, since \(PN = 0\) there), and sends \(\begin{pmatrix} -\tan \alpha \\ 1 \end{pmatrix}\) to \(\begin{pmatrix} -\tan \alpha \\ 1 \end{pmatrix} + k \begin{pmatrix} 1 \\ \tan \alpha \end{pmatrix}\), since this point lies at perpendicular distance \(\sec\alpha\) from the line and so moves through \(k\sec\alpha \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix} = k\begin{pmatrix} 1 \\ \tan\alpha \end{pmatrix}\). Therefore, we have: \begin{align*} && \mathbf{M} \begin{pmatrix} 1 & -\tan \alpha \\ \tan \alpha & 1 \end{pmatrix} &= \begin{pmatrix} 1 & k - \tan \alpha \\ \tan \alpha & 1 + k\tan \alpha \end{pmatrix} \\ \Rightarrow && \sec \alpha \, \mathbf{M} \begin{pmatrix} \cos \alpha & -\sin\alpha \\ \sin \alpha & \cos \alpha \end{pmatrix} &= \begin{pmatrix} 1 & k - \tan \alpha \\ \tan \alpha & 1 + k\tan \alpha \end{pmatrix} \\ \Rightarrow && \mathbf{M} &= \cos \alpha\begin{pmatrix} 1 & k - \tan \alpha \\ \tan \alpha & 1 + k\tan \alpha \end{pmatrix}\begin{pmatrix} \cos \alpha & \sin\alpha \\ -\sin \alpha & \cos \alpha \end{pmatrix} \\ &&&= \cos\alpha \begin{pmatrix}\cos \alpha -k\sin\alpha + \frac{\sin^2 \alpha}{\cos \alpha} & \sin \alpha + k \cos \alpha - \sin \alpha \\ \sin \alpha - \sin \alpha - k\frac{\sin^2 \alpha}{\cos \alpha} & \frac{\sin^2 \alpha}{\cos \alpha} + \cos\alpha + k \sin \alpha \end{pmatrix} \\ &&&= \begin{pmatrix}1-k\sin\alpha\cos\alpha & k\cos^{2}\alpha\\ -k\sin^{2}\alpha & 1+k\sin\alpha\cos\alpha \end{pmatrix} \end{align*} Now suppose \(\begin{pmatrix}p & q\\ r & t \end{pmatrix}\) commutes with \(\mathbf{M}\); then \begin{align*} && \begin{pmatrix}p & q\\ r & t \end{pmatrix} \mathbf{M} &= \mathbf{M} \begin{pmatrix}p & q\\ r & t \end{pmatrix} \\ \Leftrightarrow && \small \begin{pmatrix} p(1-k\sin\alpha\cos\alpha) + q(-k\sin^{2}\alpha) & pk\cos^{2}\alpha + q(1+k\sin\alpha\cos\alpha)\\ r(1-k\sin\alpha\cos\alpha) + t(-k\sin^{2}\alpha) & rk\cos^{2}\alpha + t(1+k\sin\alpha\cos\alpha)\end{pmatrix} &= \\ && \qquad \small \begin{pmatrix} p(1-k\sin\alpha\cos\alpha) + rk\cos^{2}\alpha & q(1-k\sin\alpha\cos\alpha) + tk\cos^{2}\alpha \\ -pk\sin^{2}\alpha + r(1+k\sin\alpha\cos\alpha) & -qk\sin^{2}\alpha+t (1+k\sin\alpha\cos\alpha) \end{pmatrix} \\ \Leftrightarrow && \begin{cases} p(1-k\sin\alpha\cos\alpha) + q(-k\sin^{2}\alpha) &= p(1-k\sin\alpha\cos\alpha) + rk\cos^{2}\alpha \\ pk\cos^{2}\alpha + q(1+k\sin\alpha\cos\alpha) &=q(1-k\sin\alpha\cos\alpha) + tk\cos^{2}\alpha \\ r(1-k\sin\alpha\cos\alpha) + t(-k\sin^{2}\alpha) &=-pk\sin^{2}\alpha + r(1+k\sin\alpha\cos\alpha) \\ rk\cos^{2}\alpha + t(1+k\sin\alpha\cos\alpha) &= -qk\sin^{2}\alpha+t (1+k\sin\alpha\cos\alpha) \end{cases} \\ \Leftrightarrow && \begin{cases} -q\tan^{2}\alpha &= r \\ p\cos^{2}\alpha + q\sin\alpha\cos\alpha &=-q\sin\alpha\cos\alpha + t\cos^{2}\alpha \\ -r\sin\alpha\cos\alpha - t\sin^{2}\alpha &=-p\sin^{2}\alpha + r\sin\alpha\cos\alpha \\ r &= -q\tan^{2}\alpha \end{cases} \\ \Leftrightarrow && \begin{cases} -q\tan^{2}\alpha &= r \\ 2q\sin\alpha\cos\alpha &=(t-p)\cos^{2}\alpha \\ (p-t)\sin^{2}\alpha &=2r\sin\alpha\cos\alpha \end{cases} \\ \Leftrightarrow && \begin{cases} -q\tan^{2}\alpha &= r \\ 2q\tan \alpha &=(t-p) \end{cases} \\ \end{align*} Finally, \(r = -q\tan^{2}\alpha\) gives \(-2r\cot\alpha = 2q\tan\alpha\), so these two conditions are exactly \(t-p=2q\tan\alpha=-2r\cot\alpha\), as required.
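Both claims lend themselves to a numerical check. In this sketch (assuming NumPy) the values of \(\alpha\), \(k\), \(p\), \(q\) are arbitrary illustrations: \(\mathbf{M}\) should fix the line \(y = x\tan\alpha\) pointwise, and a matrix built from the stated condition should commute with \(\mathbf{M}\):

```python
import numpy as np

# M fixes the line y = x tan(alpha) pointwise, and a matrix satisfying
# t - p = 2q tan(alpha) = -2r cot(alpha) commutes with M.
alpha, k = 0.4, 1.7
s, c = np.sin(alpha), np.cos(alpha)
M = np.array([[1 - k * s * c, k * c**2],
              [-k * s**2,     1 + k * s * c]])

v = np.array([1.0, np.tan(alpha)])      # a point on y = x tan(alpha)
assert np.allclose(M @ v, v)

p, q = 0.5, 1.3
t = p + 2 * q * np.tan(alpha)           # t - p = 2q tan(alpha)
r = -q * np.tan(alpha) ** 2             # so -2r cot(alpha) = 2q tan(alpha)
C = np.array([[p, q], [r, t]])
assert np.allclose(C @ M, M @ C)
print("shear and commuting condition verified")
```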