Numerical Methods Week 2

Systems of Linear Equations 1

We move on to our second session: Introduction to systems of linear equations.


Learning outcomes:

  • Recall methods of solution of inhomogeneous systems of linear equations.
  • Elimination methods: Gauss elimination and Gauss-Jordan elimination.
  • Implement and use Gauss-Jordan elimination to solve systems of equations.

Reading:

  • Introduction to Part 3 and Chapter 9 of Chapra and Canale.

Matt Watkins mwatkins@lincoln.ac.uk

    Solving a System of Equations

    $\newcommand{\vect}[1]{\boldsymbol{#1}}$ Suppose we want to solve a system of equations $\vect{A} \vect{x} = \vect{b}$, where $\vect{A}$ is the matrix of coefficients, $\vect{x}$ is the vector of unknowns, and $\vect{b}$ is the vector of constants.

    Explicitly this can be written (for a set of 4 equations with 4 unknowns) as:
    \[ \left( \begin{array}{cccc} a_{00} & a_{01} & a_{02} & a_{03}\\ a_{10} & a_{11} & a_{12} & a_{13}\\ a_{20} & a_{21} & a_{22} & a_{23}\\ a_{30} & a_{31} & a_{32} & a_{33}\\ \end{array} \right) \left( \begin{array}{c} x_0\\ x_1\\ x_2\\ x_3\\ \end{array} \right) = \left( \begin{array}{c} b_0\\ b_1\\ b_2\\ b_3\\ \end{array} \right) \]

    It can be useful to rewrite this as an augmented matrix

    \[ \left( \begin{array}{cccc|c} a_{00} & a_{01} & a_{02} & a_{03} & b_0\\ a_{10} & a_{11} & a_{12} & a_{13} & b_1\\ a_{20} & a_{21} & a_{22} & a_{23} & b_2\\ a_{30} & a_{31} & a_{32} & a_{33} & b_3\\ \end{array} \right)\]
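
    As a concrete illustration (a minimal sketch assuming the course code is in Python with NumPy; the variable names here are my own), the augmented matrix for the worked example in the next section can be built by appending $\vect{b}$ as an extra column of $\vect{A}$:

```python
import numpy as np

# Coefficient matrix and right-hand side of the worked example below
A = np.array([[2.0, 2.0, 4.0, -2.0],
              [1.0, 3.0, 2.0,  4.0],
              [3.0, 1.0, 3.0,  1.0],
              [1.0, 3.0, 4.0,  2.0]])
b = np.array([10.0, 17.0, 18.0, 27.0])

# Augmented matrix [A | b]: b becomes an extra column on the right
aug = np.hstack([A, b.reshape(-1, 1)])
print(aug)
```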

    Gaussian Elimination

    Triangularization

    We reduce the augmented matrix to row echelon form. Let us take an initial augmented matrix as: \[ \left( \begin{array}{cccc|c} 2 & 2 & 4 & -2 & 10\\ 1 & 3 & 2 & 4 & 17\\ 3 & 1 & 3 & 1 & 18\\ 1 & 3 & 4 & 2 & 27\\ \end{array} \right) \]

    pivoting around row 0, we remove all entries below the diagonal entry in column 0, \[ \left( \begin{array}{cccc|c} 2 & 2 & 4 & -2 & 10\\ 0 & 2 & 0 & 5 & 12\\ 0 & -2 & -3 & 4 & 3\\ 0 & 2 & 2 & 3 & 22\\ \end{array} \right) \]


    Then pivoting around row 1 we remove elements below the diagonal in column 1, \[ \left( \begin{array}{cccc|c} 2 & 2 & 4 & -2 & 10\\ 0 & 2 & 0 & 5 & 12\\ 0 & 0 & -3 & 9 & 15\\ 0 & 0 & 2 & -2 & 10\\ \end{array} \right) \]

    pivoting around row 2 \[ \left( \begin{array}{cccc|c} 2 & 2 & 4 & -2 & 10\\ 0 & 2 & 0 & 5 & 12\\ 0 & 0 & -3 & 9 & 15\\ 0 & 0 & 0 & 4 & 20\\ \end{array} \right) \]
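
    This triangularization can be sketched in a few lines (a minimal sketch, assuming Python with NumPy; it does no pivoting or zero-pivot checks, so it only works while the diagonal entries stay non-zero, as they do here):

```python
import numpy as np

def forward_eliminate(aug):
    """Reduce an augmented matrix [A | b] to row echelon (upper triangular) form in place."""
    n = aug.shape[0]
    for col in range(n - 1):            # current pivot row/column
        for row in range(col + 1, n):   # rows below the pivot
            factor = aug[row, col] / aug[col, col]
            aug[row, :] -= factor * aug[col, :]
    return aug

aug = np.array([[2.0, 2.0, 4.0, -2.0, 10.0],
                [1.0, 3.0, 2.0,  4.0, 17.0],
                [3.0, 1.0, 3.0,  1.0, 18.0],
                [1.0, 3.0, 4.0,  2.0, 27.0]])
print(forward_eliminate(aug))   # matches the triangularized matrix above
```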

    Gaussian Elimination

    Back substitution

    From the triangularized augmented matrix we can then solve for $\vect{x}$ \[ \left( \begin{array}{cccc|c} 2 & 2 & 4 & -2 & 10\\ 0 & 2 & 0 & 5 & 12\\ 0 & 0 & -3 & 9 & 15\\ 0 & 0 & 0 & 4 & 20\\ \end{array} \right) \]

    Starting at row 3, only the coefficient of $x_3$ is non-zero. Converting the last row of the augmented matrix back into an equation, we have \[ 0 \cdot x_0 + 0 \cdot x_1 + 0 \cdot x_2 + 4 \cdot x_3 = 20 \implies x_3 = 5 \]

    Then we can work up the rows. Row 2 gives \[\begin{align*} 0 \cdot x_0 + 0 \cdot x_1 - 3 \cdot x_2 + 9 \cdot x_3 & = 15 \\ 0 \cdot x_0 + 0 \cdot x_1 - 3 \cdot x_2 + 9 \cdot 5 & = 15 \implies x_2 = 10 \end{align*}\]

    Now row 1: \[\begin{align*} 0 \cdot x_0 + 2 \cdot x_1 + 0 \cdot x_2 + 5 \cdot x_3 & = 12 \\ 0 \cdot x_0 + 2 \cdot x_1 + 0 \cdot 10 + 5 \cdot 5 & = 12 \implies x_1 = -6.5 \end{align*}\]

    And finally, row 0: \[\begin{align*} 2 \cdot x_0 + 2 \cdot (-6.5) + 4 \cdot 10 + (-2) \cdot 5 & = 10 \implies x_0 = -3.5 \end{align*}\]
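
    Back substitution can be sketched in the same spirit (again a minimal Python/NumPy version, assuming the triangularized augmented matrix above and non-zero diagonal entries):

```python
import numpy as np

def back_substitute(aug):
    """Solve an upper-triangular augmented matrix [U | c] for x."""
    n = aug.shape[0]
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):                # work from the last row upwards
        known = aug[row, row + 1:n] @ x[row + 1:n]  # contribution of already-solved unknowns
        x[row] = (aug[row, n] - known) / aug[row, row]
    return x

# Triangularized augmented matrix from the worked example
upper = np.array([[2.0, 2.0,  4.0, -2.0, 10.0],
                  [0.0, 2.0,  0.0,  5.0, 12.0],
                  [0.0, 0.0, -3.0,  9.0, 15.0],
                  [0.0, 0.0,  0.0,  4.0, 20.0]])
print(back_substitute(upper))   # gives x0 = -3.5, x1 = -6.5, x2 = 10, x3 = 5
```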

    Gauss Elimination

    Use the code to find the solutions of the following systems

    \[ \begin{align*} 3x_0 + 4x_1 - 7x_2 & = 23\\ 7x_0 - x_1 + 2x_2 & = 14\\ x_0 + 10x_1 - 2x_2 & = 33\\ \end{align*} \]
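
    If you want an independent cross-check of your answer (not a substitute for your own elimination code), NumPy's built-in solver can be used on the first system:

```python
import numpy as np

A = np.array([[3.0,  4.0, -7.0],
              [7.0, -1.0,  2.0],
              [1.0, 10.0, -2.0]])
b = np.array([23.0, 14.0, 33.0])
print(np.linalg.solve(A, b))   # solution of the first system
```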

    Can you find the solutions to this system of equations? Why not? \[ \begin{align*} 1x_0 + 2x_1 + 3x_2 & = 1\\ 4x_0 + 5x_1 + 6x_2 & = 2\\ 7x_0 + 8x_1 + 9x_2 & = 3\\ \end{align*} \]


    Gauss-Jordan Elimination

    Gauss-Jordan elimination is very similar to Gauss elimination. Instead of triangularizing, we produce a completely diagonal matrix; more precisely, we reduce the augmented matrix to $\textbf{reduced}$ row echelon form. The initial augmented matrix is the same as before: \[ \left( \begin{array}{cccc|c} 2 & 2 & 4 & -2 & 10\\ 1 & 3 & 2 & 4 & 17\\ 3 & 1 & 3 & 1 & 18\\ 1 & 3 & 4 & 2 & 27\\ \end{array} \right) \]

    The first step is the same: pivoting around row 0 \[ \left( \begin{array}{cccc|c} 2 & 2 & 4 & -2 & 10\\ 0 & 2 & 0 & 5 & 12\\ 0 & -2 & -3 & 4 & 3\\ 0 & 2 & 2 & 3 & 22\\ \end{array} \right) \]


    But now, pivoting around row 1, we remove entries above $\textbf{and}$ below the diagonal of column 1 \[ \left( \begin{array}{cccc|c} 2 & 0 & 4 & -7 & -2\\ 0 & 2 & 0 & 5 & 12\\ 0 & 0 & -3 & 9 & 15\\ 0 & 0 & 2 & -2 & 10\\ \end{array} \right) \]

    This continues pivoting around row 2 \[ \left( \begin{array}{cccc|c} 2 & 0 & 0 & 5 & 18\\ 0 & 2 & 0 & 5 & 12\\ 0 & 0 & -3 & 9 & 15\\ 0 & 0 & 0 & 4 & 20\\ \end{array} \right) \]

    And finally, pivoting around row 3 \[ \left( \begin{array}{cccc|c} 2 & 0 & 0 & 0 & -7\\ 0 & 2 & 0 & 0 & -13\\ 0 & 0 & -3 & 0 & -30\\ 0 & 0 & 0 & 4 & 20\\ \end{array} \right) \]

    We can then divide each row by its remaining diagonal coefficient to get: \[ \left( \begin{array}{cccc|c} 1 & 0 & 0 & 0 & -3.5\\ 0 & 1 & 0 & 0 & -6.5\\ 0 & 0 & 1 & 0 & 10\\ 0 & 0 & 0 & 1 & 5\\ \end{array} \right) \] and we can read the solutions for $\vect{x}$ straight off.
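
    If SymPy is available, its exact `rref()` method provides an optional cross-check of this reduced row echelon form (SymPy works with rationals, so $-3.5$ appears as $-7/2$):

```python
from sympy import Matrix

aug = Matrix([[2, 2, 4, -2, 10],
              [1, 3, 2,  4, 17],
              [3, 1, 3,  1, 18],
              [1, 3, 4,  2, 27]])
rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)   # last column holds x = (-7/2, -13/2, 10, 5)
```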

    Gauss-Jordan Elimination for Matrix Inversion

    We can solve the equation $\vect{AX} = \vect{I}$ using exactly the same method: \[ \left( \begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0\\ 1 & 0 & -1 & 0 & 1 & 0\\ 2 & -1 & 2 & 0 & 0 & 1\\ \end{array} \right) \] pivoting around row 0 \[ \left( \begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0\\ 0 & -0.5 & -1.5 & -0.5 & 1 & 0\\ 0 & -2 & 1 & -1 & 0 & 1\\ \end{array} \right) \] pivoting around row 1 \[ \left( \begin{array}{ccc|ccc} 2 & 0 & -2 & 0 & 2 & 0\\ 0 & -0.5 & -1.5 & -0.5 & 1 & 0\\ 0 & 0 & 7 & 1 & -4 & 1\\ \end{array} \right) \]

    pivoting around row 2 \[ \left( \begin{array}{ccc|ccc} 2 & 0 & 0 & 0.285714 & 0.857143 & 0.285714\\ 0 & -0.5 & 0 & -0.285714 & 0.142857 & 0.214286\\ 0 & 0 & 7 & 1 & -4 & 1\\ \end{array} \right) \] Scaling the rows, the final matrix is: \[ \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & 0.142857 & 0.428571 & 0.142857\\ 0 & 1 & 0 & 0.571429 & -0.285714 & -0.428571\\ 0 & 0 & 1 & 0.142857 & -0.571429 & 0.142857\\ \end{array} \right) \]

    The right-hand block of the augmented matrix is now $\vect{A}^{-1}$.
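
    The decimal entries above are multiples of $1/7$; as an independent check (not part of the elimination itself), NumPy's built-in inverse reproduces them:

```python
import numpy as np

A = np.array([[2.0,  1.0,  1.0],
              [1.0,  0.0, -1.0],
              [2.0, -1.0,  2.0]])
A_inv = np.linalg.inv(A)
print(A_inv)                              # matches the right-hand block above
print(np.allclose(A @ A_inv, np.eye(3)))  # True: A A^(-1) = I
```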

    Gauss-Jordan Elimination

    Alter the code for Gauss elimination to instead perform Gauss-Jordan elimination.

    - I suggest copying your file and renaming it for now, rather than altering the previous code directly.
    - You should change the second loop so that it goes over all rows (a rough sketch is given after this list).
    - You should add an `if` statement to skip the row with the same index as the column you are working on.
    - When the matrix is diagonal, divide each row by the value of the remaining diagonal element to get the identity matrix and $\vect{x}$.
    - Test regularly as you make the alterations.
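
    A rough sketch of the modified loop structure (this is only an outline, assuming Python with NumPy and an $n \times (n+1)$ float augmented matrix called `aug`; your own code will differ in its details, and no pivoting is done):

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form (no pivoting)."""
    n = aug.shape[0]
    for col in range(n):
        for row in range(n):       # loop over *all* rows, not just those below the pivot
            if row == col:         # skip the pivot row itself
                continue
            factor = aug[row, col] / aug[col, col]
            aug[row, :] -= factor * aug[col, :]
    for row in range(n):           # divide each row by its remaining diagonal element
        aug[row, :] /= aug[row, row]
    return aug                     # last column now holds x
```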

    Solve the same matrix problems as before and check the result is the same.

    Gauss-Jordan Matrix inversion

    Make a new Gauss-Jordan routine that can calculate the inverse of a matrix.

    - You need to extend the number of columns in the augmented matrix.
    - You need to extend the range of the loops that go over the columns.

    Find the inverse of the matrix \[ \left( \begin{array}{cccc} 2 & 2 & 4 & -2 \\ 1 & 3 & 2 & 4 \\ 3 & 1 & 3 & 1\\ 1 & 3 & 4 & 2 \\ \end{array} \right) \]
    Check your solution is correct by multiplying the original and inverse matrices.
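
    The multiplication check can be done in one line with NumPy (`np.linalg.inv` is used here only as a stand-in for the output of your own routine):

```python
import numpy as np

A = np.array([[2.0, 2.0, 4.0, -2.0],
              [1.0, 3.0, 2.0,  4.0],
              [3.0, 1.0, 3.0,  1.0],
              [1.0, 3.0, 4.0,  2.0]])
A_inv = np.linalg.inv(A)                  # replace with the result of your Gauss-Jordan routine
print(np.allclose(A @ A_inv, np.eye(4)))  # should print True
```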


    Summary and Further Reading

    You should be reading additional material to provide a solid background to what we do in class.

    All the textbooks in the book list on Bb contain sections on solving linear equations. I suggest Chapter 9 of Chapra and Canale for starters.

    Homework

    Before next week, read about the extra steps (such as partial pivoting) that can be performed to improve elimination methods.

    Read about LU decomposition of square matrices, Chapter 10 of Chapra and Canale.
