Subsection 11.4.1 Cofactors and \(2 \times 2\) Determinants
Let \(A\) be an \(n \times n\) matrix. The determinant of \(A\text{,}\) denoted \(\det (A)\text{,}\) is a number. If the matrix is a \(2 \times 2\) matrix, this number is very easy to find.
Definition 11.4.1. Determinant of a \(2 \times 2\) Matrix.
Let
\begin{equation*}
A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}\text{.}
\end{equation*}
Then
\begin{equation*}
\det (A) \equiv ad - cb\text{.}
\end{equation*}
The determinant is also often denoted by enclosing the matrix with two vertical lines. Thus
\begin{equation*}
\det \begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{vmatrix} a & b\\ c & d \end{vmatrix} .
\end{equation*}
Example 11.4.2.
Find \(\det \begin{pmatrix} 2 & 4\\ -1 & 6 \end{pmatrix}\text{.}\)
Solution.
From the definition this is
\begin{equation*}
(2)(6) - (-1)(4) = 16\text{.}
\end{equation*}
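The arithmetic in the definition is easy to mechanize. Below is a minimal Python sketch (not part of the text; it assumes numpy is available, and the name det2 is just an illustrative choice) that applies the \(2 \times 2\) formula and compares it with numpy's built-in determinant.
\begin{verbatim}
import numpy as np

def det2(A):
    """Determinant of a 2x2 matrix via ad - cb."""
    (a, b), (c, d) = A
    return a * d - c * b

A = np.array([[2, 4], [-1, 6]])
print(det2(A))                  # 16
print(round(np.linalg.det(A)))  # 16 (floating point, rounded)
\end{verbatim}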
Having defined what is meant by the determinant of a \(2 \times 2\) matrix, what about a \(3 \times 3\) matrix?
Definition 11.4.3. Matrix Minor.
Suppose \(A\) is a \(3 \times 3\) matrix. The \(ij^{th}\) minor, denoted as \(\text{minor}(A)_{ij}\) or the \((i,j)\) minor, is the determinant of the \(2\times 2\) matrix which results from deleting the \(i^{th}\) row and the \(j^{th}\) column.
Example 11.4.4.
Consider the matrix
\begin{equation*}
\begin{pmatrix} 1 & 2 & 3\\ 4 & 3 & 2\\ 3 & 2 & 1\\ \end{pmatrix}.
\end{equation*}
The (1,2) minor is the determinant of the \(2 \times 2\) matrix which results when you delete the first row and the second column. This minor is therefore
\begin{equation*}
\det\begin{pmatrix} 4 & 2\\ 3 & 1\end{pmatrix} = (4)(1) - (3)(2) = -2.
\end{equation*}
The (2,3) minor is the determinant of the \(2 \times 2\) matrix which results when you delete the second row and the third column. This minor is therefore
\begin{equation*}
\det\begin{pmatrix} 1 & 2\\ 3 & 2\end{pmatrix} = (1)(2) - (3)(2) = -4.
\end{equation*}
Definition 11.4.5. \(ij^{th}\) Cofactor.
Suppose \(A\) is a \(3 \times 3\) matrix. The \(ij^{th}\) cofactor is defined to be \((-1)^{i+j} \times (ij^{th} \text{ minor})\text{.}\) In words, you multiply \((-1)^{i+j}\) times the \(ij^{th}\) minor to get the \(ij^{th}\) cofactor.
The cofactors of a matrix are so important that special notation is appropriate when referring to them. The \(ij^{th}\) cofactor of a matrix \(A\) will be denoted by \(\text{cof} (A)_{ij}\text{.}\) It is also convenient to refer to the cofactor of an entry of a matrix as follows. For \(a_{ij}\) an entry of the matrix, its cofactor is just \(\text{cof} (A)_{ij}\text{.}\) Thus the cofactor of the \(ij^{th}\) entry is just the \(ij^{th}\) cofactor.
Example 11.4.6.
Consider the matrix
\begin{equation*}
\begin{pmatrix} 1 & 2 & 3\\ 4 & 3 & 2\\ 3 & 2 & 1\\ \end{pmatrix}.
\end{equation*}
The (1,2) minor is the determinant of the \(2 \times 2\) matrix which results when you delete the first row and the second column. This minor is therefore
\begin{equation*}
\det \begin{pmatrix}4&2\\ 3&1\end{pmatrix} = (4)(1) - (3)(2) = -2.
\end{equation*}
It follows
\begin{equation*}
\text{cof} (A)_{12} = (-1)^{1+2} \det \begin{pmatrix} 4 & 2\\ 3 & 1\end{pmatrix} = (-1)^{1+2} (-2) = 2.
\end{equation*}
The (2,3) minor is the determinant of the \(2 \times 2\) matrix which results when you delete the second row and the third column. This minor is therefore
\begin{equation*}
\det \begin{pmatrix} 1 & 2\\ 3 & 2\end{pmatrix} = (1)(2) - (3)(2) = -4.
\end{equation*}
Therefore,
\begin{equation*}
\text{cof} (A)_{23} = (-1)^{2+3} \det \begin{pmatrix} 1 & 2\\ 3 & 2\end{pmatrix} = (-1)^{2+3} (-4) = 4.
\end{equation*}
Similarly,
\begin{equation*}
\text{cof} (A)_{22} = (-1)^{2+2} \det \begin{pmatrix} 1 & 3\\ 3 & 1\\ \end{pmatrix} = -8.
\end{equation*}
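The row and column deletions in these computations can also be mechanized. The following Python sketch (illustrative only; it assumes numpy, and the names minor and cofactor are not the text's notation) reproduces the minors and cofactors computed above, with indices passed 1-based to match the text.
\begin{verbatim}
import numpy as np

def minor(A, i, j):
    """(i, j) minor: determinant after deleting row i and column j (1-based)."""
    B = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(B)

def cofactor(A, i, j):
    """(i, j) cofactor: (-1)**(i + j) times the (i, j) minor."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1, 2, 3], [4, 3, 2], [3, 2, 1]])
print(round(minor(A, 1, 2)))     # -2
print(round(cofactor(A, 1, 2)))  #  2
print(round(cofactor(A, 2, 3)))  #  4
print(round(cofactor(A, 2, 2)))  # -8
\end{verbatim}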
Definition 11.4.7. Expanding Along a Row or Column.
The determinant of a \(3 \times 3\) matrix \(A\) is obtained by picking a row (column) and taking the product of each entry in that row (column) with its cofactor and adding these up. This process, when applied to the \(i^{th}\) row (column), is known as expanding the determinant along the \(i^{th}\) row (column).
Example 11.4.8.
Find the determinant of
\begin{equation*}
A = \begin{pmatrix} 1 & 2 & 3\\ 4 & 3 & 2\\ 3 & 2 & 1\\ \end{pmatrix}.
\end{equation*}
Here is how it is done by expanding around the first column.
\begin{equation*}
\overbrace{1(-1)^{1+1}\begin{vmatrix}3 & 2\\ 2 & 1 \end{vmatrix}}^{\text{cof} (A)_{11}} + \overbrace{4(-1)^{2+1}\begin{vmatrix}2 & 3\\ 2 & 1 \end{vmatrix}}^{\text{cof} (A)_{21}} + \overbrace{3(-1)^{3+1}\begin{vmatrix}2 & 3\\ 3 & 2 \end{vmatrix}}^{\text{cof} (A)_{31}} = 0.
\end{equation*}
You see, we just followed the rule in the above definition. We took the 1 in the first column and multiplied it by its cofactor, the 4 in the first column and multiplied it by its cofactor, and the 3 in the first column and multiplied it by its cofactor. Then we added these numbers together.
You could also expand the determinant along the second row as follows.
\begin{equation*}
\overbrace{4(-1)^{2+1}\begin{vmatrix}2 & 3\\ 2 & 1 \end{vmatrix}}^{\text{cof} (A)_{21}} + \overbrace{3(-1)^{2+2}\begin{vmatrix}1 & 3\\ 3 & 1 \end{vmatrix}}^{\text{cof} (A)_{22}} + \overbrace{2(-1)^{2+3}\begin{vmatrix}1 & 2\\ 3 & 2 \end{vmatrix}}^{\text{cof} (A)_{23}} = 0 .
\end{equation*}
Observe this gives the same number. You should try expanding along other rows and columns. If you don’t make any mistakes, you will always get the same answer.
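To see the claim that every row and every column gives the same answer in action for this particular matrix, here is a short Python check (again an illustration assuming numpy, not part of the text's development); it expands along each of the three rows and each of the three columns and prints all six totals.
\begin{verbatim}
import numpy as np

def cof(A, i, j):
    """(i, j) cofactor of A, using 0-based indices."""
    B = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(B)

A = np.array([[1, 2, 3], [4, 3, 2], [3, 2, 1]])
for i in range(3):   # expand along row i
    print(round(sum(A[i, j] * cof(A, i, j) for j in range(3))))  # 0
for j in range(3):   # expand along column j
    print(round(sum(A[i, j] * cof(A, i, j) for i in range(3))))  # 0
\end{verbatim}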
What about a \(4 \times 4\) matrix? You know now how to find the determinant of a \(3 \times 3\) matrix. The pattern is the same.
Definition 11.4.9.
Suppose \(A\) is a \(4 \times 4\) matrix. The \(ij^{th}\) minor is the determinant of the \(3 \times 3\) matrix you obtain when you delete the \(i^{th}\) row and the \(j^{th}\) column. The \(ij^{th}\) cofactor, \(\text{cof} (A)_{ij}\text{,}\) is defined to be \((-1)^{i+j} \times (ij^{th} \text{ minor})\text{.}\) In words, you multiply \((-1)^{i+j}\) times the \(ij^{th}\) minor to get the \(ij^{th}\) cofactor.
Definition 11.4.10. Laplace Expansion.
The determinant of a \(4 \times 4\) matrix \(A\) is obtained by expanding along a row (column) just as was done for a \(3\times 3\) matrix in Definition 11.4.7, except the expanding process must be repeated again for each of the resulting \(3\times 3\) matrices to obtain their determinants.
This method of evaluating a determinant by expanding along a row or a column can be done for any size square matrix and is called the method of Laplace expansion. The process must be repeated until the resulting matrices are small enough to easily calculate their determinants.
Example 11.4.11.
Find \(\det (A)\) where
\begin{equation*}
A = \begin{pmatrix} 1 & 2 & 3 & 4\\ 5 & 4 & 2 & 3\\ 1 & 3 & 4 & 5\\ 3 & 4 & 3 & 2 \end{pmatrix}.
\end{equation*}
Solution.
As in the case of a \(3 \times 3\) matrix, you can expand this along any row or column. Let's pick the third column. \(\det (A) =\)
\begin{equation*}
3(-1)^{1+3}\begin{vmatrix} 5 & 4 & 3\\ 1 & 3 & 5\\ 3 & 4 & 2 \end{vmatrix} + 2(-1)^{2+3}\begin{vmatrix} 1 & 2 & 4\\ 1 & 3 & 5\\ 3 & 4 & 2 \end{vmatrix} +
\end{equation*}
\begin{equation*}
4(-1)^{3+3}\begin{vmatrix} 1 & 2 & 4\\ 5 & 4 & 3\\ 3 & 4 & 2 \end{vmatrix} + 3(-1)^{4+3}\begin{vmatrix} 1 & 2 & 4\\ 5 & 4 & 3\\ 1 & 3 & 5 \end{vmatrix} .
\end{equation*}
Now you know how to expand each of the \(3 \times 3\) matrices along a row or a column. If you do so, you will get \(-12\) assuming you make no mistakes. You could expand this matrix along any row or any column and, assuming you make no mistakes, you will always get the same thing, which is defined to be the determinant of the matrix \(A\text{.}\)
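The repeated expansion described above lends itself to a short recursive routine. The sketch below (a Python illustration using exact integer arithmetic; the name laplace_det is not from the text) expands along the first row at every level and reproduces \(-12\) for this matrix.
\begin{verbatim}
def laplace_det(A):
    """Determinant by Laplace expansion along the first row (recursive)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # (1, j+1) minor: delete row 1 and column j+1, then recurse
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * laplace_det(minor)
    return total

A = [[1, 2, 3, 4], [5, 4, 2, 3], [1, 3, 4, 5], [3, 4, 3, 2]]
print(laplace_det(A))  # -12
\end{verbatim}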
Note that each of the four terms in the example solution above involves three terms consisting of determinants of \(2 \times 2\) matrices, and each of these will need 2 terms. Therefore, there will be \(4 \times 3 \times 2 = 24\) terms to evaluate in order to find the determinant using the method of Laplace expansion. Suppose now you have a \(10 \times 10\) matrix and you follow the above pattern for evaluating determinants. By analogy to the above, there will be \(10! = 3,628,800\) terms involved in the evaluation of such a determinant by Laplace expansion along a row or column. This is a lot of terms.
In addition to the difficulties just discussed, you should regard the above claim, that you always get the same answer no matter which row or column you pick, with some skepticism. It is remarkable and not at all obvious. However, it can be established with a little effort; this is done in Chapter 7 of the Elementary Linear Algebra book on the theory of the determinant.
Definition 11.4.12. Cofactor Matrix.
Let \(A = (a_{ij})\) be an \(n \times n\) matrix and suppose the determinant of an \((n - 1) \times (n - 1)\) matrix has been defined. Then a new matrix called the cofactor matrix, \(\text{cof} (A)\text{,}\) is defined by \(\text{cof} (A) = (c_{ij})\) where to obtain \(c_{ij}\) you delete the \(i^{th}\) row and the \(j^{th}\) column of \(A\text{,}\) take the determinant of the \((n - 1) \times (n - 1)\) matrix which results (this is called the \(ij^{th}\) minor of \(A\)), and then multiply this number by \((-1)^{i+j}\text{.}\)
Thus \((-1)^{i+j} \times (\text{ the } ij^{th} \text{ minor})\) equals the \(ij^{th}\) cofactor. To make the formulas easier to remember, \(\text{cof} (A)_{ij}\) will denote the \(ij^{th}\) entry of the cofactor matrix.
With this definition of the cofactor matrix, here is how to define the determinant of an \(n \times n\) matrix.
Definition 11.4.13. Determinant as a Cofactor Expansion.
Let \(A\) be an \(n \times n\) matrix where \(n \geq 2\) and suppose the determinant of an \((n - 1) \times (n - 1)\) matrix has been defined. Then
\begin{equation*}
\det (A) = \sum_{j = 1}^{n} a_{ij}\, \text{cof} (A)_{ij} = \sum_{i = 1}^{n} a_{ij}\, \text{cof} (A)_{ij}
\end{equation*}
The first sum consists of expanding the determinant along the \(i^{th}\) row and the second expands the determinant along the \(j^{th}\) column.
Theorem 11.4.14.
Expanding the \(n \times n\) matrix along any row or column always gives the same answer, so the above definition is a good definition.
Subsection 11.4.3 Properties of Determinants
There are many properties satisfied by determinants. Some of these properties have to do with row operations. Recall the row operations from Definition 11.1.6.
Theorem 11.4.18. Switching two rows negates the determinant.
Let \(A\) be an \(n \times n\) matrix and let \(A_1\) be a matrix which results from switching two rows of \(A\text{.}\) Then \(\det (A) = -\det (A_1)\text{.}\) Also, if one row of \(A\) is a multiple of another row of \(A\text{,}\) then \(\det (A) = 0\text{.}\)
Example 11.4.19.
Let \(A = \begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix} \text{,}\) and let \(A_1 = \begin{pmatrix}3 & 4\\ 1 & 2\end{pmatrix} \text{.}\)
Then \(\det (A) = -2\) and \(\det (A_1) = 2.\)
Theorem 11.4.20. Multiplying a matrix by a scalar, also multiplies the determinant.
Let \(A\) be an \(n \times n\) matrix and let \(A_1\) be a matrix which results from multiplying some row of \(A\) by a scalar \(c\text{.}\) Then \(c \,\det (A) = \det (A_1)\text{.}\)
Example 11.4.21.
Let \(A = \begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix} \text{,}\) \(A_1 = \begin{pmatrix}2 & 4\\ 3 & 4\end{pmatrix} \text{.}\)
The first row of \(A_1\) is 2 times the first row of \(A\text{.}\)
\(\det (A) = -2\) and \(\det (A_1) = -4 = 2 \cdot (-2) = 2 \, \det (A).\)
Theorem 11.4.22. Row operation 3 doesn’t change the determinant.
Let \(A\) be an \(n \times n\) matrix and let \(A_1\) be a matrix which results from applying row operation 3. That is, you replace some row by a multiple of another row added to itself. Then \(\det (A) = \det (A_1 ).\)
Example 11.4.23.
Let \(A = \begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix} \) and let \(A_1 = \begin{pmatrix}1 & 2\\ 4 & 6\end{pmatrix} \text{.}\) Thus the second row of \(A_1\) is one times the first row added to the second row.
\(\det (A) = -2\) and \(\det (A_1) = -2.\)
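These three row-operation properties are easy to spot-check numerically. The following Python snippet (illustrative only, assuming numpy, and using the \(2 \times 2\) matrix from the examples above) performs each operation and compares the determinants.
\begin{verbatim}
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
det = np.linalg.det

swapped = A[[1, 0], :]                   # switch the two rows
scaled = A.copy(); scaled[0] *= 2        # multiply row 1 by 2
added = A.copy(); added[1] += added[0]   # add row 1 to row 2

print(round(det(A), 6), round(det(swapped), 6))  # -2.0  2.0  (sign flips)
print(round(2 * det(A), 6), round(det(scaled), 6))  # -4.0  -4.0  (scales by 2)
print(round(det(A), 6), round(det(added), 6))    # -2.0  -2.0  (unchanged)
\end{verbatim}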
Theorem 11.4.24. Columns have the same effect.
The three theorems above remain true when the word "row" is replaced throughout by "column."
There are two other major properties of determinants which do not involve row operations.
Theorem 11.4.25. The determinant of a matrix product is the product of the determinants.
Let \(A \) and \(B \) be two \(n\times n\) matrices, then
\begin{equation*}
\det (AB) = \det (A) \det (B).
\end{equation*}
Example 11.4.26.
Compare \(\det (AB)\) and \(\det (A) \det (B)\) for
\begin{equation*}
A = \begin{pmatrix}1 & 2\\ -3& 2\end{pmatrix}, B = \begin{pmatrix}3 & 2\\ 4& 1\end{pmatrix}
\end{equation*}
Solution.
First
\begin{equation*}
AB = \begin{pmatrix}1 & 2\\ -3& 2\end{pmatrix}\begin{pmatrix}3 & 2\\ 4& 1\end{pmatrix} = \begin{pmatrix}11 & 4\\ -1 & -4\end{pmatrix}
\end{equation*}
and so
\begin{equation*}
\det (AB) = \det \begin{pmatrix}11 & 4\\ -1 & -4\end{pmatrix} = -40.
\end{equation*}
Now
\begin{equation*}
\det (A) = \det\begin{pmatrix}1 & 2\\ -3 & 2\end{pmatrix} = 8
\end{equation*}
and
\begin{equation*}
\det (B) = \det\begin{pmatrix}3 & 2\\ 4 & 1\end{pmatrix} = -5.
\end{equation*}
Thus \(\det (A)\det (B) = 8 \times (-5) = -40.\)
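A quick numerical check of the product property for these two matrices, as an illustrative Python snippet assuming numpy:
\begin{verbatim}
import numpy as np

A = np.array([[1, 2], [-3, 2]])
B = np.array([[3, 2], [4, 1]])
print(round(np.linalg.det(A @ B)))                        # -40
print(round(np.linalg.det(A)) * round(np.linalg.det(B)))  # 8 * (-5) = -40
\end{verbatim}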
Theorem 11.4.27. The determinant of a matrix is the determinant of its transpose.
Let \(A \) be an \(n\times n\) matrix, then
\begin{equation*}
\det (A) = \det (A^T)
\end{equation*}
Example 11.4.28.
Compare \(\det (A)\) and \(\det (A^T)\) for
\begin{equation*}
A = \begin{pmatrix}1 & 2\\ -3& 2\end{pmatrix}.
\end{equation*}
Solution.
\begin{equation*}
\det (A) = \det\begin{pmatrix}1 & 2\\ -3 & 2\end{pmatrix} = (1\cdot 2) - (-3 \cdot 2) = 8
\end{equation*}
and
\begin{equation*}
\det (A^T) = \det\begin{pmatrix}1 & -3\\ 2 & 2\end{pmatrix} = (1\cdot 2) - (2 \cdot (-3)) = 8.
\end{equation*}
Thus \(\det (A)= \det (A^T).\)
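The transpose property can be checked the same way (again only an illustrative numpy snippet, not part of the text's development):
\begin{verbatim}
import numpy as np

A = np.array([[1, 2], [-3, 2]])
print(round(np.linalg.det(A)))    # 8
print(round(np.linalg.det(A.T)))  # 8
\end{verbatim}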
Subsection 11.4.6 Cramer’s Rule
This formula for the inverse also implies a famous procedure known as Cramer's rule. Cramer's rule gives a formula for the solutions, \(\mathbf{x}\text{,}\) to a system of equations, \(A\mathbf{x} = \mathbf{y}\text{,}\) in the special case that \(A\) is a square matrix. Note that this rule does not apply to a system in which the number of equations differs from the number of variables.
In case you are solving a system of equations, \(A\mathbf{x} = \mathbf{y}\) for \(\mathbf{x}\text{,}\) it follows that if \(A^{-1}\) exists,
\begin{equation*}
\mathbf{x} = (A^{-1}A)\mathbf{x} = A^{-1}(A\mathbf{x}) = A^{-1}\mathbf{y}
\end{equation*}
thus solving the system. Now in the case that \(A^{-1}\) exists, Theorem 11.4.31 gives a formula for \(A^{-1}\text{.}\) Using this formula,
\begin{equation*}
x_i = \sum_{j=1}^n a_{ij}^{-1} y_{j} = \sum_{j=1}^n\frac{1}{\det(A)} \text{cof} (A)_{ji}y_j.
\end{equation*}
By the formula for expanding a determinant along a column, this is the same as
\begin{equation*}
x_i = \frac{1}{\det(A)}\det \begin{pmatrix}\cdot & \cdots & y_1 & \cdots & \cdot \\ \vdots & & \vdots & & \vdots\\ \cdot & \cdots & y_n & \cdots & \cdot \end{pmatrix}
\end{equation*}
where the \(i^{th}\) column of \(A\) is replaced by the column vector \((y_1, \cdots, y_n)^T\text{,}\) and the determinant of this modified matrix is taken and divided by \(\det(A)\text{.}\) This formula is known as Cramer's rule.
Definition 11.4.35. Cramer’s Rule.
Suppose \(A\) is an \(n \times n\) matrix and it is desired to solve the system \(A\mathbf{x} = \mathbf{y}\text{,}\) \(\mathbf{y} = (y_1, \cdots , y_n )^T\) for \(\mathbf{x} = (x_1, \cdots , x_n )^T\text{.}\) Then Cramer’s rule says
\begin{equation*}
x_i = \frac{\det A_i}{\det A}
\end{equation*}
where \(A_i\) is obtained by replacing the \(i^{th}\) column of \(A\) with the column \((y_1, \cdots , y_n )^T\text{.}\)
Example 11.4.36.
Find \(x, y, z\) if
\begin{equation*}
\begin{pmatrix} 1 & 2 & 1\\ 3 & 2 & 1\\ 2 & -3 & 2 \end{pmatrix} \begin{pmatrix}x \\ y \\ z \end{pmatrix} = \begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix}
\end{equation*}
Solution.
Using Cramer’s rule, solve for \(x\) by replacing column 1 with \((y_1, \cdots , y_n )^T = \begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix}\) and dividing the determinant of the new matrix by the determinant of the original matrix.
\begin{equation*}
x = \frac{\begin{vmatrix}1 & 2 & 1\\ 2 & 2 & 1\\ 3 & -3 & 2\end{vmatrix}}{\begin{vmatrix}1 & 2 & 1\\ 3 & 2 & 1\\ 2 & -3 & 2\end{vmatrix}} = \frac{1}{2}
\end{equation*}
Now to find \(y,\)
\begin{equation*}
y = \frac{\begin{vmatrix}1 & 1 & 1\\ 3 & 2 & 1\\ 2 & 3 & 2\end{vmatrix}}{\begin{vmatrix}1 & 2 & 1\\ 3 & 2 & 1\\ 2 & -3 & 2\end{vmatrix}} = -\frac{1}{7}
\end{equation*}
and \(z,\)
\begin{equation*}
z = \frac{\begin{vmatrix}1 & 2 & 1\\ 3 & 2 & 2\\ 2 & -3 & 3\end{vmatrix}}{\begin{vmatrix}1 & 2 & 1\\ 3 & 2 & 1\\ 2 & -3 & 2\end{vmatrix}} = \frac{11}{14}
\end{equation*}
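Cramer's rule is also short to express in code. The Python sketch below (illustrative; the name cramer is not from the text, numpy is assumed, and a nonzero \(\det(A)\) is required) replaces one column at a time and reproduces \(x = 1/2\text{,}\) \(y = -1/7\text{,}\) \(z = 11/14\) for this system, agreeing with a direct solve.
\begin{verbatim}
import numpy as np

def cramer(A, y):
    """Solve A x = y by Cramer's rule; assumes det(A) != 0."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(y))
    for i in range(len(y)):
        Ai = A.copy()
        Ai[:, i] = y              # replace the i-th column with y
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[1, 2, 1], [3, 2, 1], [2, -3, 2]]
y = [1, 2, 3]
print(cramer(A, y))           # [ 0.5  -0.14285714  0.78571429]
print(np.linalg.solve(A, y))  # same values, for comparison
\end{verbatim}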
For large systems Cramer’s rule is less than useful because to use it you must evaluate determinants. However, you have no practical way to evaluate determinants for large matrices other than row operations and if you are using row operations, you might just as well use them to solve the system to begin with. It will be a lot less trouble. Nevertheless, there are situations in which Cramer’s rule is useful.
Example 11.4.37.
Solve for \(z\) if
\begin{equation*}
\begin{pmatrix} 1 & 0 & 0\\ 0 & e^t \cos t & e^t \sin t\\ 0 & -e^t\sin t & e^t \cos t \end{pmatrix} \begin{pmatrix}x \\ y \\ z \end{pmatrix} = \begin{pmatrix}1 \\ t \\ t^2 \end{pmatrix}
\end{equation*}
Solution.
You could do it by row operations, but it might be easier in this case to use Cramer's rule because the matrix of coefficients does not consist of numbers but of functions. Thus
\begin{equation*}
z = \frac{\begin{vmatrix}1 & 0 & 1\\ 0 & e^t \cos t & t\\ 0 & -e^t\sin t & t^2\end{vmatrix}}{\begin{vmatrix}1 & 0 & 0\\ 0 & e^t \cos t & e^t \sin t\\ 0 & -e^t\sin t & e^t \cos t\end{vmatrix}} = t((\cos t)t + \sin t)e^{-t}.
\end{equation*}
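Because the entries here are functions of \(t\) rather than numbers, a symbolic check is more natural than a numerical one. A possible sketch using sympy (an assumption of this illustration, not something the text relies on):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 0, 0],
               [0, sp.exp(t) * sp.cos(t), sp.exp(t) * sp.sin(t)],
               [0, -sp.exp(t) * sp.sin(t), sp.exp(t) * sp.cos(t)]])
y = sp.Matrix([1, t, t**2])

Az = A.copy()
Az[:, 2] = y                    # replace the third column with y
z = sp.simplify(Az.det() / A.det())
print(z)  # expect t*(t*cos(t) + sin(t))*exp(-t), up to sympy's formatting
\end{verbatim}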