Linear Algebra Lecture Notes #40
TL;DR. Archived links:
- Vector section notes of Essence of Linear Algebra
- MIT OCW Linear Algebra courses list & comparison

Notations
MIT OCW 18.06 SC Unit 1.1 The geometry of linear equations
We view this problem in three ways:
Matrix Elimination
Terminology

Before learning elimination, the core terms are:

「Gaussian elimination」

Refer to wiki:

Refer to Simple English Wikipedia: Gaussian elimination. To put it simply, here is the structure:
And if we make the result only in
「Elementary Row Operations」

Elementary row operations are used to simplify the matrix. The three types of row operations used are:

Confusing operation: See where the

Example

Suppose the goal is to find the solution for the linear system below: First we need to turn it into Then we apply At the end, if we'd like, we can further apply some row operations to get the matrix in

「Row Echelon Form」 vs. 「Reduced Row Echelon Form」

Refer to this lecture video: REF & RREF. It doesn't really matter whether it is a
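As a concrete illustration of the elimination steps above, here is a minimal NumPy sketch on a made-up 2x2 system (my own example, not from the lecture): forward elimination to reach row echelon form, then scaling and clearing above the pivots to reach RREF.

```python
import numpy as np

# Hypothetical system (my own example):
#    x + 2y = 5
#   3x + 4y = 6
# Augmented matrix [A | b]:
M = np.array([[1., 2., 5.],
              [3., 4., 6.]])

# Forward elimination: row1 -= 3 * row0 zeroes the entry below the first pivot
M[1] = M[1] - 3 * M[0]      # row1 becomes [0, -2, -9]

# Continue to Reduced Row Echelon Form (RREF):
M[1] = M[1] / M[1, 1]       # scale so the second pivot is 1 -> [0, 1, 4.5]
M[0] = M[0] - 2 * M[1]      # clear above the pivot -> [1, 0, -4]

x, y = M[0, 2], M[1, 2]     # the solution can now be read off directly
```

Once the left block is the identity, the right column is the solution.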
「Augmented Matrix」
When we apply elimination to If a given Matrix was told it's an

「Equivalent systems」 & 「Equivalent Matrices」
「Pivot」
Refer to this video from mathispower4u. It means the value that represents the

「Free variables」

If there's no pivot in a column, that means this
❖ Matrix Multiplication

Refer to this video by mathispower4u. Practice: A fairly simple way to remember how to do

「Properties」 of matrix multiplication

Refer to Khan academy article.

「Einstein summation」 convention
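The product, the Einstein-summation view, and the non-commutativity property above can all be checked numerically; the matrices below are my own toy example:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Standard matrix product: C[i, k] = sum over j of A[i, j] * B[j, k]
C = A @ B

# The same product in Einstein summation convention:
# the repeated index j is summed over implicitly.
C_einsum = np.einsum('ij,jk->ik', A, B)

# Matrix multiplication is associative and distributive, but NOT commutative:
not_commutative = not np.array_equal(A @ B, B @ A)
```

`np.einsum` generalizes far beyond plain products (traces, transposes, batched contractions), which is why the convention is worth learning.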
MIT OCW 18.06 SC Unit 1.2 Elimination with Matrices
Prerequisites:
「Column operation」 of Matrix Multiplication

The result above is a

THE RESULT OF THAT COLUMN OPERATION IS A LINEAR COMBINATION OF THE COLUMNS.
「Row operation」 of Matrix Multiplication

The result above is a

THE RESULT OF THAT ROW OPERATION IS A COMBINATION OF THE ROWS.

「Elementary Matrix」
Refer to this excellent video by Mathispower4u: Elementary Matrices. Simply put, an
The example above is an The reason we need an

FOR EVERY SINGLE STEP OF ELIMINATION, WE NEED AN ELEMENTARY MATRIX.

So for two steps of elimination, we could represent it with Combining all elimination steps in ONE MATRIX:

「Permutation Matrix」
Review Dr. Strang's lecture.

Example: To switch two ROWS of a matrix by using a

Example: To switch two COLUMNS of a matrix by using a

Some common 「permutation matrices」
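The row-swap vs. column-swap behavior can be seen in a few lines of NumPy (my own 2x2 example): multiplying by the permutation matrix on the LEFT swaps rows, on the RIGHT swaps columns.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# Permutation matrix: the identity with its two rows exchanged
P = np.array([[0, 1],
              [1, 0]])

rows_swapped = P @ A   # left-multiplying swaps the ROWS of A
cols_swapped = A @ P   # right-multiplying swaps the COLUMNS of A
```

This left/right asymmetry is exactly the row-operation vs. column-operation view of multiplication from the previous section.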
MIT OCW 18.06 SC Unit 1.3 Multiplication & Inverse Matrices
Refer to Juanklopper's jupyter notebook.
Method 1: Multiply 「matrix by vector」

Calculation of an entry of the Product Matrix.

Method 2: Multiply 「matrix by COLUMN」

Each column of the

Method 3: Multiply 「ROW by matrix」

Each row of the

Method 4: Multiply 「COLUMN by ROW」「Dot product」

Method 5: 「Block multiplication」

You can cut each matrix into blocks; the blocks don't need to be equal-sized as long as they match each other. After you cut matrices into blocks, the multiplication works just like a smaller matrix multiplication: each block can be seen as a number in a matrix.

「Inverses」 (Square matrices)

If a matrix's inverse exists, then we call this matrix And only with

「Singular Matrix」 (No inverse)

Simplest way to tell if it's a

Use 「Gauss-Jordan Elimination」 to get the Inverse

THIS METHOD IS MUCH EASIER FOR GETTING THE INVERSE THAN THE WAY WE LEARNT IN HIGH SCHOOL, WHICH MAKES YOU CALCULATE THE DETERMINANT, ADJUGATE, COFACTORS AND SO ON.

Refer to Khan academy lecture: Inverting a 3x3 matrix using Gaussian elimination.

Practice for Gauss-Jordan Elimination to get the Inverse of a matrix: With this formula above, we get TWO equations, which help us form a system of equations!

Why could we use Gauss-Jordan Elimination to solve The

Refer to mathispower4u for a refresher on: How to get elementary matrices
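A minimal sketch of Gauss-Jordan inversion: row-reduce the augmented block [A | I] until the left half is the identity, and the right half is then A⁻¹. The function name and the 2x2 example are my own; partial pivoting is added for numerical safety.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing [A | I] to [I | A_inv]."""
    n = len(A)
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: bring the largest available pivot into place
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale so the pivot is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # clear the rest of the column
    return M[:, n:]                              # right half is the inverse

A = np.array([[2., 1.],
              [1., 1.]])
A_inv = gauss_jordan_inverse(A)
```

For singular matrices the pivot scaling divides by zero; production code should use `np.linalg.inv` or, better, solve systems directly.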
MIT OCW 18.06 SC Unit 1.4 Factorization into A = LU
What's the 「Inverse of a product」?

Assume Yes, we multiply their inverses together, in reverse order: (AB)⁻¹ = B⁻¹A⁻¹.
Inverse of a 「Transposed Matrix」

So the Inverse of

「LU Decomposition」 (without Row Exchange)
Assume in the elimination process without row exchanges, we only apply

EA = U
A = LU
So as we've understood the meaning behind it, we can forget it and just remember the

Row exchanges with 「Permutations」
For a 3x3 Identity Matrix, there are 3! = 6 permutations of it: The
LU Decomposition [DRAFT]

For a Matrix A, we could factor it as A = LU.
「Upper Triangular Matrix」

The factor matrix Refer to video: LU Decomposition using Gaussian Elimination

「Lower Triangular Matrix」

The factor matrix How to get the 「Lower Triangular Matrix」

Refer to this video: LU Decomposition - Shortcut Method by Mathispower4u

Solve a 「System of equations」 using 「LU Decomposition」
Refer to this video: Solve a System of Linear Equations Using LU Decomposition.

Assume there's an equation Steps to apply the
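The solve-by-LU steps can be sketched as follows (the helper `lu_no_pivot` and the 2x2 example are my own; real code should prefer `scipy.linalg.lu`, which also handles row exchanges): factor A = LU, then solve the two triangular systems Lc = b and Ux = c.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without row exchanges (assumes nonzero pivots)."""
    n = len(A)
    L, U = np.eye(n), A.astype(float).copy()
    for col in range(n):
        for row in range(col + 1, n):
            m = U[row, col] / U[col, col]  # elimination multiplier
            L[row, col] = m                # L simply stores the multipliers
            U[row] -= m * U[col]
    return L, U

# Solve Ax = b in two triangular steps
A = np.array([[2., 1.],
              [4., 5.]])
b = np.array([3., 9.])
L, U = lu_no_pivot(A)
c = np.linalg.solve(L, b)   # forward substitution: Lc = b
x = np.linalg.solve(U, c)   # back substitution:    Ux = c
```

The payoff: once L and U are known, each new right-hand side b costs only two cheap triangular solves.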
MIT OCW 18.06 SC Unit 1.5 Transposes, Permutations, Vector Spaces Rⁿ
「Permutations」
For LU Decomposition with row exchanges, the factorization becomes:

PA = LU
# P = Permutation Matrix = Identity Matrix with reordered rows

Which applies row exchanges to matrix A to get it into the right order (for the pivots), then decomposes it.

「Permutation」 properties

Number of possible Permutation matrices for an nxn matrix = n!
P⁻¹ = Pᵀ
PᵀP = 𝐈

「Transposes」

The way to do a transpose is to just SWITCH ENTRIES.
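Both identities above can be verified numerically (the 3x3 permutation and the matrix R below are my own examples): for a permutation matrix the transpose is the inverse, and for any matrix R the product RᵀR is symmetric.

```python
import numpy as np

# A 3x3 permutation matrix: the identity with reordered rows
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

# For permutation matrices the transpose IS the inverse: PᵀP = I
identity_check = P.T @ P

# For ANY matrix R (not necessarily square), RᵀR is symmetric
R = np.array([[1, 2],
              [3, 4],
              [5, 6]])
S = R.T @ R
```

Note that S is 2x2 even though R is 3x2: RᵀR is always square, which is part of why it shows up constantly in least squares.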
Properties of 「Transposes」

Special 「transpose matrices」
「Symmetric matrices」
#symmetric matrix
Aᵀ = A

Given any matrix R (not necessarily square), the product RᵀR is always symmetric, because after transposing it's still the same:

(RᵀR)ᵀ = Rᵀ(Rᵀ)ᵀ = RᵀR
# Note: (Rᵀ)ᵀ = R, and transposing a product reverses it, applying from right to left.

「Vector spaces」

The most important thing about 「vector spaces」: we can do operations on any vectors and stay in the same space. In other words, if some addition or scaling of vectors could jump out of the space, then it can't be a vector space. EVERY VECTOR SPACE HAS TO HAVE THE ZERO VECTOR IN IT.

Rules of 「Vector spaces」

Refer to video by TheTrevTutor: Vector Spaces

「Subspaces」

If a Vector space is INSIDE another Vector space, e.g. R², we call it
Remember the NO.1 rule of a Subspace:

Three rules of a Subspace:
「Column Space」
Which means the column space of this matrix is spanned by its 2 independent column vectors (together with the Zero vector), and the Column space (all their linear combinations) forms a 2D plane.
MIT OCW 18.06 SC Unit 1.6 Column Space and Nullspace
Is the union of two subspaces a subspace?

How to form a Column space

For a 3x3 matrix,
Does every 「Ax=B」 have a solution for every 「B」? NO!
Null space 「Ax=0」
Refer to Khan academy lecture
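NumPy has no direct nullspace routine, but the SVD gives one: the right singular vectors whose singular values are zero span the solutions of 「Ax=0」. A minimal sketch on a made-up rank-deficient matrix (my own example):

```python
import numpy as np

# The second column is 2x the first, so A has a nontrivial nullspace
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])

# Rows of Vt with (near-)zero singular values span the nullspace of A
_, s, Vt = np.linalg.svd(A)
null_space = Vt[s < 1e-10]   # works here because A has more rows than columns
```

Each row of `null_space` is a basis vector of the nullspace; here there is exactly one, in the direction of (2, -1).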
❖ 「Scalar Projection」 & 「Vector Projection」

Refer to the note in Assume that the vector
Notice that: When you read it, it's in reverse order! Very important!

Projection 「Formula」

Note that the formula relies on these concepts as prerequisites:
How to calculate the 「Scalar Projection」
Componentᵥw = (dot product of v & w) / (w's length)

Refer to lecture by Imperial College London: Projection.

What if we know the vectors, and we want to know how much is the

How to calculate the 「Vector Projection」
Remember that a It can be understood as this formula:

Projectionᵥw = (Componentᵥw) * (Unit vector of v)

But usually we write it as this:

Refer also to the video on the formula by Kate Penner: Vector Projection Equations
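Both formulas can be sketched directly in NumPy (my own example, here projecting w onto v, so the scalar projection divides by |v|):

```python
import numpy as np

def scalar_projection(w, v):
    """Component of w along v: (v . w) / |v|."""
    return np.dot(v, w) / np.linalg.norm(v)

def vector_projection(w, v):
    """Projection of w onto v: (scalar component) * (unit vector of v),
    which simplifies to ((v . w) / (v . v)) * v."""
    return (np.dot(v, w) / np.dot(v, v)) * v

w = np.array([3., 4.])
v = np.array([1., 0.])

comp = scalar_projection(w, v)   # how far w reaches along v
proj = vector_projection(w, v)   # the "shadow" of w on the line through v
```

With v along the x-axis, the projection of (3, 4) is just its x-part, which makes the two functions easy to sanity-check by eye.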
❖ Change of basis
Refer to video by Trefor Bazett: Deriving the Change-of-Basis formula

「Projection vector method」 (Only for 90° bases)
Refer to lecture from Imperial College London: Changing basis. Remember the `Projection Just to save some words, here's the example and solution:

Example:

Component V₁ = (V﹒b₁) / |b₁|² = (5*1 + -1*1) / ( √(1²+1²) )² = 4/2 = 2
Component V₂ = (V﹒b₂) / |b₂|² = (5*1 + -1*-1) / ( √(1²+(-1)²) )² = 6/2 = 3
V' = (2, 3)

Matrices 「changing basis」

Refer to lecture: Matrices changing basis
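The worked example above, checked numerically; the cross-check via `np.linalg.solve` is the matrix method, which also works for non-orthogonal bases:

```python
import numpy as np

V  = np.array([5., -1.])
b1 = np.array([1., 1.])
b2 = np.array([1., -1.])   # b1 is perpendicular to b2, so projection applies

# Coordinates in the new basis via projection: (V . b_i) / |b_i|^2
c1 = V @ b1 / (b1 @ b1)    # 4 / 2 = 2
c2 = V @ b2 / (b2 @ b2)    # 6 / 2 = 3
V_new = np.array([c1, c2])

# Matrix method: the basis vectors as columns, then solve B @ V_new = V
B = np.column_stack([b1, b2])
V_check = np.linalg.solve(B, V)
```

The projection shortcut only works because b₁ ⟂ b₂; for a general basis, only the solve (or B⁻¹V) is valid.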
❖ Orthogonal Matrix
Refer to Wiki: Orthogonal Matrix.

「Orthonormal basis」
「Transpose」 of an Orthogonal matrix

If the Matrix is composed of an orthonormal basis, then its transpose equals its inverse:

Aᵀ = A⁻¹
# which makes this one true as well
AAᵀ = 𝐈
AᵀA = 𝐈

「Determinant」 of an Orthogonal matrix

The |A| = ±1

The 「Gram–Schmidt process」
Refer to Wiki: The Gram–Schmidt process. How to see this process intuitively?

How to 「Orthogonalize a basis」

Refer to video by Trefor Bazett:
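A minimal sketch of the Gram–Schmidt process (my own implementation and example): subtract from each vector its projections onto the already-built orthonormal vectors, then normalize what's left.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        for q in basis:
            v = v - (q @ v) * q            # remove the component along q
        basis.append(v / np.linalg.norm(v))  # normalize the remainder
    return np.array(basis)

Q = gram_schmidt([np.array([1., 1., 0.]),
                  np.array([1., 0., 1.])])
```

Stacking the results as rows, orthonormality means Q @ Qᵀ is the identity; this classical version can lose precision for near-dependent inputs, where the "modified" Gram–Schmidt variant or a QR factorization (`np.linalg.qr`) is preferred.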
❖ Eigen-stuffs [DRAFT]

「Eigenvectors」
When we say

How to calculate the Eigenvectors

「Eigenvalues」

「Diagonal Matrix」
Imagine we are applying a Transformation matrix many, many times. If we follow the basic Matrix Multiplication rule, that is a huge amount of calculation. But

「Eigenbasis」 & 「Diagonalization」

Refer to lecture: Changing to the eigenbasis.

If you're lucky enough that the If you aren't lucky and the Transformation Matrix isn't a Diagonal matrix,
The steps will be like:
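Those steps can be sketched with NumPy (my own 2x2 example): eigendecompose, verify A = QDQ⁻¹, then use the eigenbasis to apply the transformation many times cheaply, since powering a diagonal matrix just powers its entries.

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

# Columns of Q are the eigenvectors; lam holds the eigenvalues
lam, Q = np.linalg.eig(A)
D = np.diag(lam)

# Change to the eigenbasis: A = Q D Q^-1
reconstructed = Q @ D @ np.linalg.inv(Q)

# Applying A five times is cheap in the eigenbasis: A^5 = Q D^5 Q^-1
A5 = Q @ np.diag(lam ** 5) @ np.linalg.inv(Q)
```

Compare this to five successive matrix multiplications: in the eigenbasis, only the scalar eigenvalues get powered.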
Book: Linear Algebra for Machine Learning (Jason Brownlee)
Check THIS LINK to read the book: Jason-Brownlee-Basics-for-Linear-Algebra-for-Machine-Learning-Discover-the-Mathematical-Language-of-Data-in-Python-2018

- Linear Algebra Is Important in Machine Learning
- Study Linear Algebra Too Early
- Study Too Much Linear Algebra
- Study Linear Algebra Wrong
- A Better Way To Study Linear Algebra
- What will be learnt in this book
Types of Matrices
Matrix Operations
Sparse Matrix

Matrices that contain mostly zero values are called sparse matrices.
Matrix Decompositions

Most common types of matrix decomposition:
LU Decomposition
LUP Decomposition

QR Decomposition

Cholesky Decomposition

The Cholesky decomposition is for square symmetric matrices whose eigenvalues are all greater than zero, the so-called positive definite matrices.

Eigendecomposition
The parent matrix can be shown to be a product of the eigenvectors and eigenvalues:
Singular Value Decomposition (SVD)
Pseudoinverse

Dimensionality Reduction
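The SVD and the pseudoinverse built from it can be sketched together (my own 3x2 example): factor A = U diag(s) Vᵀ, then invert the nonzero singular values to get A⁺.

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

# SVD factors any matrix (square or not) as A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)
reconstructed = U @ np.diag(s) @ Vt

# The pseudoinverse inverts the nonzero singular values and transposes the frame
A_pinv = Vt.T @ np.diag(1 / s) @ U.T
```

For rank-deficient matrices the zero singular values must be skipped rather than inverted, which is what `np.linalg.pinv` does for you; truncating small singular values in the same factorization is the basis of SVD dimensionality reduction.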
More intuitive way to think of vector, matrix, tensor: Imagine a rectangle:
Notes on Chapter 2 of Math for Machine Learning

Types of Vectors:

Mathematical "closure":
Mind map of some concepts:

Properties of Matrix Multiplication:

Analytic geometry
Study resources
Tools
Practice & Quizzes
Study goals of Linear Algebra
MIT OCW Linear Algebra 18.06