A cheatsheet of mathematical formulas for the foundations of Continuous Optimization
Most of these formulas come from my notes at the DSTI. I have also added curves I found or created, as well as more gradient descent algorithms. Please note that this is a formula cheat sheet, not a course: it is meant to help you check or refresh your knowledge.
If you are specifically interested in gradient descent algorithms, and not so much in refreshing mathematical foundations such as the Euler equality or the Lagrangian, you may prefer the following article, which dives deeper into the details of each optimization algorithm: “A glimpse of the maths behind Optimization Algorithms”.
Notations
- Indices: 1, 2, … , i, j, k, m, n ∈ [1,n] ⊂ ℕ
- In regular type: scalars a, b, c, …, t, u, v, x, y, z ∈ ℝ
- In bold: x is a vector with coordinates xᵢ ∈ ℝ, i ∈ [1,n]
- Usually, x is the variable, and y and z are directions in space, while a and b are fixed vectors.
- êᵢ are the unit vectors of the orthonormal basis
- In uppercase: A is a matrix of size n×m with elements aᵢⱼ
- J : V ⊂ ℝⁿ → ℝ is the function to optimize (see the code sketch after this list)
- ε represents a small quantity in ℝ
- θ ∈ [0,1] ⊂ ℝ
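
To make these notations concrete, here is a minimal NumPy sketch. The dimensions, the values, and the quadratic J used here are purely illustrative assumptions, not part of the cheat sheet itself:

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration
n, m = 3, 2

# Scalars (regular type): a, b, ... in R
a, b = 2.0, -0.5

# Vector (bold type): x in R^n, with coordinates x_i
x = np.array([1.0, 2.0, 3.0])

# Unit vectors e_i of the orthonormal (standard) basis of R^n
e = np.eye(n)  # e[i] is the i-th unit vector
assert np.allclose(x, sum(x[i] * e[i] for i in range(n)))

# Matrix (uppercase): A of size n x m, with elements a_ij
A = np.arange(n * m, dtype=float).reshape(n, m)

# J : V ⊂ R^n -> R, the function to optimize
# (a simple quadratic, used here only as an example)
def J(x):
    return 0.5 * float(x @ x)

# Small quantity epsilon, and theta in [0, 1]
eps = 1e-8
theta = 0.5

print(J(x))  # 7.0
```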