### Key Concepts

### 11.1 Systems of Linear Equations: Two Variables

- A system of linear equations consists of two or more equations made up of two or more variables such that all equations in the system are considered simultaneously.
- The solution to a system of linear equations in two variables is any ordered pair that satisfies each equation independently. See Example 1.
- Systems of equations are classified as independent with one solution, dependent with an infinite number of solutions, or inconsistent with no solution.
- One method of solving a system of linear equations in two variables is by graphing. In this method, we graph the equations on the same set of axes. See Example 2.
- Another method of solving a system of linear equations is by substitution. In this method, we solve for one variable in one equation and substitute the result into the second equation. See Example 3.
- A third method of solving a system of linear equations is by addition, in which we eliminate a variable by adding two equations in which the coefficients of that variable are opposites. See Example 4.
- It is often necessary to multiply one or both equations by a constant to facilitate elimination of a variable when adding the two equations together. See Example 5, Example 6, and Example 7.
- Solving an inconsistent system by any method results in a false statement, because such a system consists of parallel lines that never intersect. See Example 8.
- Solving a dependent system results in a statement that is always true (an identity), because both equations describe the same line. See Example 9.
- Systems of equations can be used to solve real-world problems that involve more than one variable, such as those relating to revenue, cost, and profit. See Example 10 and Example 11.
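The addition (elimination) method described above can be sketched in code. This is a minimal illustration, not one of the section's examples; the function name and the sample system are the author's own choices here. Exact arithmetic via `Fraction` keeps the solution an exact ordered pair.

```python
from fractions import Fraction

def solve_by_elimination(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.

    Scaling the equations so the x-coefficients match and subtracting
    eliminates x, leaving (a2*b1 - a1*b2) * y = a2*c1 - a1*c2.
    Returns (x, y), or None when the lines are parallel or identical
    (inconsistent or dependent system).
    """
    denom = Fraction(a2 * b1 - a1 * b2)
    if denom == 0:
        return None  # no unique solution
    y = Fraction(a2 * c1 - a1 * c2) / denom
    x = (Fraction(c1) - b1 * y) / a1  # back-substitute into equation 1
    return x, y

# 3x + y = 5 and 2x - y = 5: adding the equations eliminates y.
print(solve_by_elimination(3, 1, 5, 2, -1, 5))  # x = 2, y = -1
```

For the sample system, adding the two equations directly gives 5x = 10, so x = 2 and y = −1, matching the function's output.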

### 11.2 Systems of Linear Equations: Three Variables

- A solution set is an ordered triple $\left\{\left(x,y,z\right)\right\}$ that represents the intersection of three planes in space. See Example 1.
- A system of three equations in three variables can be solved by using a series of steps that forces a variable to be eliminated. The steps include interchanging the order of equations, multiplying both sides of an equation by a nonzero constant, and adding a nonzero multiple of one equation to another equation. See Example 2.
- Systems of three equations in three variables are useful for solving many different types of real-world problems. See Example 3.
- A system of equations in three variables is inconsistent if no solution exists. After performing elimination operations, the result is a contradiction. See Example 4.
- Systems of equations in three variables that are inconsistent could result from three parallel planes, two parallel planes and one intersecting plane, or three planes that intersect the other two but share no common point.
- A system of equations in three variables is dependent if it has an infinite number of solutions. After performing elimination operations, the result is an identity. See Example 5.
- Systems of equations in three variables that are dependent could result from three identical planes, three planes intersecting at a line, or two identical planes that intersect the third on a line.
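The elimination steps listed above (adding a nonzero multiple of one equation to another, then back-substituting) can be traced concretely. The system below is an illustrative choice, not one of the section's examples; each equation is stored as a coefficient list `[a, b, c, d]` meaning $ax + by + cz = d$.

```python
from fractions import Fraction

# Illustrative system:
#   (1)  x - 2y + 3z =  9
#   (2) -x + 3y -  z = -6
#   (3) 2x - 5y + 5z = 17
e1 = [1, -2, 3, 9]
e2 = [-1, 3, -1, -6]
e3 = [2, -5, 5, 17]

def add_multiple(eq_a, eq_b, k):
    """Return eq_a + k*eq_b, the row operation used in elimination."""
    return [a + Fraction(k) * b for a, b in zip(eq_a, eq_b)]

# Eliminate x: add (1) to (2); add -2 times (1) to (3).
e4 = add_multiple(e2, e1, 1)    # [0, 1, 2, 3]
e5 = add_multiple(e3, e1, -2)   # [0, -1, -1, -1]
# Eliminate y: add (4) to (5), leaving an equation in z alone.
e6 = add_multiple(e5, e4, 1)    # [0, 0, 1, 2]

z = e6[3] / e6[2]                              # z = 2
y = (e4[3] - e4[2] * z) / e4[1]                # back-substitute into (4)
x = (e1[3] - e1[1] * y - e1[2] * z) / e1[0]    # back-substitute into (1)
print(x, y, z)  # 1 -1 2
```

The ordered triple (1, −1, 2) satisfies all three equations, i.e. it is the single point where the three planes intersect.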

### 11.3 Systems of Nonlinear Equations and Inequalities: Two Variables

- There are three possible types of solutions to a system of equations representing a line and a parabola: (1) no solution, the line does not intersect the parabola; (2) one solution, the line is tangent to the parabola; and (3) two solutions, the line intersects the parabola in two points. See Example 1.
- There are three possible types of solutions to a system of equations representing a circle and a line: (1) no solution, the line does not intersect the circle; (2) one solution, the line is tangent to the circle; (3) two solutions, the line intersects the circle in two points. See Example 2.
- There are five possible types of solutions to a system of nonlinear equations representing an ellipse and a circle: (1) no solution, the circle and the ellipse do not intersect; (2) one solution, the circle and the ellipse are tangent to each other; (3) two solutions, the circle and the ellipse intersect in two points; (4) three solutions, the circle and the ellipse intersect in three points; (5) four solutions, the circle and the ellipse intersect in four points. See Example 3.
- An inequality is graphed in much the same way as an equation, except for > or < we draw a dashed line and shade the region containing the solution set. See Example 4.
- Inequalities are solved in much the same way as equations, but the solutions to a system of inequalities must satisfy both inequalities. See Example 5.
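The line–parabola case above can be decided algebraically: substituting the line into the parabola yields a quadratic whose discriminant counts the intersection points. A minimal sketch, using the parabola $y = ax^2$ and the line $y = mx + b$ as an illustrative pairing (not one of the section's examples):

```python
import math

def line_parabola_intersections(m, b, a=1.0):
    """Intersections of y = m*x + b with y = a*x**2.

    Setting a*x**2 = m*x + b gives a*x**2 - m*x - b = 0, whose
    discriminant m**2 + 4*a*b decides between 0, 1, or 2 points.
    """
    disc = m * m + 4 * a * b
    if disc < 0:
        return []                      # line misses the parabola
    if disc == 0:
        x = m / (2 * a)                # tangent line: one point
        return [(x, m * x + b)]
    roots = [(m - math.sqrt(disc)) / (2 * a),
             (m + math.sqrt(disc)) / (2 * a)]
    return [(x, m * x + b) for x in roots]

print(len(line_parabola_intersections(0, -1)))  # 0: y = -1 misses y = x^2
print(len(line_parabola_intersections(2, -1)))  # 1: y = 2x - 1 is tangent
print(len(line_parabola_intersections(0, 4)))   # 2: y = 4 crosses twice
```

The same substitution idea applies to the circle and ellipse cases, though the resulting equations are no longer a single quadratic in one variable.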

### 11.4 Partial Fractions

- Decompose $\frac{P\left(x\right)}{Q\left(x\right)}$ by writing the partial fractions as $\frac{A}{{a}_{1}x+{b}_{1}}+\frac{B}{{a}_{2}x+{b}_{2}}.$ Solve by clearing the fractions, expanding the right side, collecting like terms, and setting corresponding coefficients equal to each other, then setting up and solving a system of equations. See Example 1.
- The decomposition of $\frac{P\left(x\right)}{Q\left(x\right)}$ with repeated linear factors must account for the factors of the denominator in increasing powers. See Example 2.
- The decomposition of $\frac{P\left(x\right)}{Q\left(x\right)}$ with a nonrepeated irreducible quadratic factor needs a linear numerator over the quadratic factor, as in $\frac{A}{x}+\frac{Bx+C}{\left(a{x}^{2}+bx+c\right)}.$ See Example 3.
- In the decomposition of $\frac{P\left(x\right)}{Q\left(x\right)},$ where $Q\left(x\right)$ has a repeated irreducible quadratic factor, the powers of that factor must be represented in increasing order as
$$\frac{{A}_{1}x+{B}_{1}}{\left(a{x}^{2}+bx+c\right)}+\frac{{A}_{2}x+{B}_{2}}{{\left(a{x}^{2}+bx+c\right)}^{2}}+\cdots +\frac{{A}_{n}x+{B}_{n}}{{\left(a{x}^{2}+bx+c\right)}^{n}}.$$See Example 4.
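The clear-the-fractions, match-coefficients, solve-the-system procedure can be carried out numerically. A minimal sketch with an illustrative rational function (not one of the section's examples); the helper name `solve_2x2` is the author's own:

```python
from fractions import Fraction

# Decompose 3x / ((x + 2)(x - 1)) as A/(x + 2) + B/(x - 1).
# Clearing fractions: 3x = A(x - 1) + B(x + 2); matching coefficients:
#   coefficient of x:   A +  B = 3
#   constant term:     -A + 2B = 0

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*A + b1*B = c1 and a2*A + b2*B = c2 using the 2x2
    determinant formula, with exact arithmetic via Fraction."""
    det = Fraction(a1 * b2 - a2 * b1)
    A = Fraction(c1 * b2 - c2 * b1) / det
    B = Fraction(a1 * c2 - a2 * c1) / det
    return A, B

A, B = solve_2x2(1, 1, 3, -1, 2, 0)
print(A, B)  # 2 1, so 3x/((x+2)(x-1)) = 2/(x+2) + 1/(x-1)
```

Checking the result: $2(x-1) + 1(x+2) = 3x,$ so the decomposition reproduces the original numerator.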

### 11.5 Matrices and Matrix Operations

- A matrix is a rectangular array of numbers. Entries are arranged in rows and columns.
- The dimensions of a matrix refer to the number of rows and the number of columns. A $3\times 2$ matrix has three rows and two columns. See Example 1.
- We add and subtract matrices of equal dimensions by adding and subtracting corresponding entries of each matrix. See Example 2, Example 3, Example 4, and Example 5.
- Scalar multiplication involves multiplying each entry in a matrix by a constant. See Example 6.
- Scalar multiplication is often required before addition or subtraction can occur. See Example 7.
- Multiplying matrices is possible when the inner dimensions match—the number of columns in the first matrix must equal the number of rows in the second.
- The entry in row $i,$ column $j$ of the product $AB$ is obtained by multiplying the entries of row $i$ of $A$ by the corresponding entries of column $j$ of $B$ and adding the products. See Example 8 and Example 9.
- Many real-world problems can be solved using matrices. See Example 10.
- We can use a calculator to perform matrix operations after saving each matrix as a matrix variable. See Example 11.
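The row-times-column rule for matrix products can be written directly from the description above. A minimal sketch with matrices stored as lists of rows (the function name and sample matrices are illustrative):

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows.

    Entry (i, j) of the product is the sum of products of row i of A
    with column j of B, so the number of columns of A must equal the
    number of rows of B (the inner dimensions must match).
    """
    if len(A[0]) != len(B):
        raise ValueError("inner dimensions must match")
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]          # 2x2
B = [[5, 6, 7], [8, 9, 10]]   # 2x3
print(mat_mul(A, B))  # [[21, 24, 27], [47, 54, 61]], a 2x3 result
```

Note the result is 2×3: the outer dimensions of a 2×2 times a 2×3 product.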

### 11.6 Solving Systems with Gaussian Elimination

- An augmented matrix is one that contains the coefficients and constants of a system of equations. See Example 1.
- A matrix augmented with the constant column can be represented as the original system of equations. See Example 2.
- Row operations include multiplying a row by a constant, adding one row to another row, and interchanging rows.
- We can use Gaussian elimination to solve a system of equations. See Example 3, Example 4, and Example 5.
- Row operations are performed on matrices to obtain row-echelon form. See Example 6.
- To solve a system of equations, write it in augmented matrix form. Perform row operations to obtain row-echelon form. Back-substitute to find the solutions. See Example 7 and Example 8.
- A calculator can be used to solve systems of equations using matrices. See Example 9.
- Many real-world problems can be solved using augmented matrices. See Example 10 and Example 11.
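The Gaussian elimination procedure above—row operations to reach row-echelon form, then back-substitution—can be sketched as a single function. This is an illustrative implementation, not the text's; it assumes the system has a unique solution.

```python
from fractions import Fraction

def gaussian_eliminate(aug):
    """Reduce an augmented matrix to row-echelon form, then back-substitute.

    Uses the three row operations: interchange rows (to find a nonzero
    pivot), multiply a row by a constant (to make each pivot 1), and
    add a multiple of one row to another (to clear entries below the
    pivot). Assumes a unique solution exists.
    """
    n = len(aug)
    A = [[Fraction(v) for v in row] for row in aug]
    for i in range(n):
        if A[i][i] == 0:                      # interchange rows if needed
            j = next(k for k in range(i + 1, n) if A[k][i] != 0)
            A[i], A[j] = A[j], A[i]
        A[i] = [v / A[i][i] for v in A[i]]    # scale so the pivot is 1
        for j in range(i + 1, n):             # clear below the pivot
            A[j] = [vj - A[j][i] * vi for vj, vi in zip(A[j], A[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):              # back-substitute
        x[i] = A[i][n] - sum(A[i][k] * x[k] for k in range(i + 1, n))
    return x

# 2x + 3y = 6 and x - y = 1/2 written as an augmented matrix.
print(gaussian_eliminate([[2, 3, 6], [1, -1, Fraction(1, 2)]]))  # x = 3/2, y = 1
```

Because each pivot is scaled to 1 before clearing, the final matrix is in row-echelon form and the back-substitution step needs no division.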

### 11.7 Solving Systems with Inverses

- An identity matrix has the property $AI=IA=A.$ See Example 1.
- An invertible matrix has the property $A{A}^{\mathrm{-1}}={A}^{\mathrm{-1}}A=I.$ See Example 2.
- Use matrix multiplication and the identity to find the inverse of a $2\times 2$ matrix. See Example 3.
- The multiplicative inverse can be found using a formula. See Example 4.
- Another method of finding the inverse is by augmenting with the identity. See Example 5.
- We can augment a $3\times 3$ matrix with the identity on the right and use row operations to turn the original matrix into the identity, and the matrix on the right becomes the inverse. See Example 6.
- Write the system of equations as $AX=B,$ and multiply both sides by the inverse of $A:{A}^{\mathrm{-1}}AX={A}^{\mathrm{-1}}B.$ See Example 7 and Example 8.
- We can also use a calculator to solve a system of equations with matrix inverses. See Example 9.
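The $2\times 2$ inverse formula and the $X={A}^{-1}B$ solution step can be combined in a short sketch. The function names and the sample system are illustrative choices, not the text's:

```python
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the formula
    A^-1 = (1/(ad - bc)) * [[d, -b], [-c, a]]."""
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is singular: no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

def solve_via_inverse(A, B):
    """Solve the 2x2 system AX = B by computing X = A^-1 B."""
    inv = inverse_2x2(A[0][0], A[0][1], A[1][0], A[1][1])
    return [inv[0][0] * B[0] + inv[0][1] * B[1],
            inv[1][0] * B[0] + inv[1][1] * B[1]]

# 3x + y = 5 and 2x + y = 4 as AX = B.
print(solve_via_inverse([[3, 1], [2, 1]], [5, 4]))  # x = 1, y = 2
```

When $ad-bc=0$ the matrix has no inverse, which corresponds to a system without a unique solution.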

### 11.8 Solving Systems with Cramer's Rule

- The determinant for $\left[\begin{array}{cc}a& b\\ c& d\end{array}\right]$ is $ad-bc.$ See Example 1.
- Cramer’s Rule replaces a variable column with the constant column. Solutions are $x=\frac{{D}_{x}}{D},y=\frac{{D}_{y}}{D}.$ See Example 2.
- To find the determinant of a 3×3 matrix, augment with the first two columns. Add the three diagonal entries (upper left to lower right) and subtract the three diagonal entries (lower left to upper right). See Example 3.
- To solve a system of three equations in three variables using Cramer’s Rule, replace a variable column with the constant column for each desired solution: $x=\frac{{D}_{x}}{D},y=\frac{{D}_{y}}{D},z=\frac{{D}_{z}}{D}.$ See Example 4.
- Cramer’s Rule also detects systems with no solution or infinitely many solutions: in either case, the determinant $D$ equals zero. See Example 5 and Example 6.
- Certain properties of determinants are useful for solving problems. For example:
- If the matrix is in upper triangular form, the determinant equals the product of entries down the main diagonal.
- When two rows are interchanged, the determinant changes sign.
- If either two rows or two columns are identical, the determinant equals zero.
- If a matrix contains either a row of zeros or a column of zeros, the determinant equals zero.
- The determinant of an inverse matrix ${A}^{-1}$ is the reciprocal of the determinant of the matrix $A.$
- If any row or column is multiplied by a constant, the determinant is multiplied by the same factor. See Example 7 and Example 8.
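Cramer's Rule for three variables can be sketched directly from the rule stated above: replace each variable column with the constant column, take determinants by the diagonal method, and divide by $D.$ The function names and sample system are illustrative:

```python
def det3(m):
    """3x3 determinant by the diagonal method: add the three
    upper-left-to-lower-right diagonal products and subtract the
    three lower-left-to-upper-right diagonal products."""
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
            + m[0][2]*m[1][0]*m[2][1]
            - m[2][0]*m[1][1]*m[0][2] - m[2][1]*m[1][2]*m[0][0]
            - m[2][2]*m[1][0]*m[0][1])

def cramer3(A, b):
    """Solve the 3x3 system Ax = b via Cramer's Rule: replace column i
    of A with b to form D_i, then x_i = D_i / D."""
    D = det3(A)
    if D == 0:
        raise ValueError("D = 0: system is inconsistent or dependent")
    sols = []
    for i in range(3):
        Ai = [row[:i] + [bi] + row[i+1:] for row, bi in zip(A, b)]
        sols.append(det3(Ai) / D)
    return sols

# x + y + z = 6, x - y + z = 2, x + y - z = 0
print(cramer3([[1, 1, 1], [1, -1, 1], [1, 1, -1]], [6, 2, 0]))  # x=1, y=2, z=3
```

The guard for $D = 0$ reflects the point above: a zero determinant signals an inconsistent or dependent system, where Cramer's Rule cannot produce a unique solution.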