Each unconstrained/constrained optimizer is provided as a function in `module_opt.py` (or `module_opt_AD.py`).
Users simply define a single scalar objective function and lists of equality/inequality constraint functions, then pass them as input arguments to the desired optimizer together with a proper initial guess point `x0` (the convergence tolerance `tol` is optional).
- `module_opt.py` → module containing optimization functions using finite-difference-based gradient calculation
- `module_opt_AD.py` → module containing optimization functions using automatic-differentiation-based gradient calculation (provided by PyTorch)
→ You should therefore provide two versions of each function as input arguments: one using only NumPy operations, and another using only torch operations (taking and returning only the `torch.Tensor` datatype), depending on which module's optimizer you use.
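For example, the two flavors of the same objective might look like the following sketch (the Rosenbrock function and the names `f_np`/`f_torch` are illustrative, not part of the module):

```python
import numpy as np

# NumPy version of an example objective (Rosenbrock), for module_opt.py:
# gradients will be approximated there by finite differences.
def f_np(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

# Torch version for module_opt_AD.py: the same formula, but written with
# torch operations only, so autograd can trace it through torch.Tensor inputs.
try:
    import torch

    def f_torch(x):
        return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

    xt = torch.tensor([1.0, 1.0], requires_grad=True)
    f_torch(xt).backward()   # gradient is then available in xt.grad
except ImportError:
    pass  # torch is only needed for the AD module

print(f_np(np.array([1.0, 1.0])))  # 0.0 at the minimizer (1, 1)
```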
- Steepest Descent Method (SDM)
    - `stp_descent(f, x0, tol)`
    - The output of SDM contains a log of $x_k, \quad f(x_k), \quad \nabla f(x_k)$
- Conjugate Gradient Method (CGM)
    - `cg_hs(f, x0, tol)` / `cg_fr(f, x0, tol)`
    - The output of CGM contains a log of $x_k, \quad f(x_k), \quad \nabla f(x_k)$
- Newton's Method (only in `module_opt.py`)
    - `newton(f, x0, tol)`
    - The output of Newton's Method contains a log of $x_k, \quad f(x_k), \quad \nabla f(x_k)$
- Quasi-Newton Method (QNM) - BFGS
    - `quasi_newton_bfgs(f, x0, tol)`
    - The output of QNM contains a log of $x_k, \quad f(x_k), \quad \nabla f(x_k)$
- Test results: `module_opt_unconstrained_test.ipynb`, `module_opt_unconstrained_AD_test.ipynb`
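As a rough illustration of what the finite-difference-based optimizers in `module_opt.py` do, here is a minimal steepest-descent loop with a central-difference gradient and a fixed step size. This is a sketch only: the module's actual step-size strategy and log format are defined in the code, and the names below are hypothetical.

```python
import numpy as np

def fd_grad(f, x, h=1e-6):
    # central finite-difference approximation of the gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def stp_descent_sketch(f, x0, tol=1e-6, alpha=0.1, maxiter=10000):
    # fixed-step steepest descent; logs (x_k, f(x_k), grad f(x_k)) each iteration
    x = np.asarray(x0, dtype=float)
    log = []
    for _ in range(maxiter):
        g = fd_grad(f, x)
        log.append((x.copy(), f(x), g.copy()))
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g
    return x, log

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
x_star, log = stp_descent_sketch(f, np.array([0.0, 0.0]))
print(np.round(x_star, 4))  # converges close to the minimizer (2, -1)
```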
- Quadratic Penalty Method (QPM)
    - `qpm(f, ce, ci, x0, inner_opt, tol)`
    - The output of QPM contains a log of $x_k, \quad f(x_k), \quad \nabla f(x_k), \quad \max(|c_e(x_k)|), \quad \max(|c_i(x_k)|)$
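The quadratic-penalty idea behind `qpm()` can be sketched as follows. This assumes the convention that inequality constraints are feasible when $c_i(x) \le 0$; the module's actual sign convention, inner optimizer, and penalty-update rule are defined in its code, and all names here are hypothetical.

```python
import numpy as np

def fd_grad(f, x, h=1e-6):
    # central finite-difference approximation of the gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def grad_descent(f, x, tol, maxiter=5000):
    # inner solver: gradient descent with Armijo backtracking line search
    for _ in range(maxiter):
        g = fd_grad(f, x)
        if np.linalg.norm(g) < tol:
            break
        a, fx = 1.0, f(x)
        while f(x - a * g) > fx - 1e-4 * a * g.dot(g) and a > 1e-12:
            a *= 0.5
        x = x - a * g
    return x

def qpm_sketch(f, ce, ci, x0, tol=1e-4, mu=1.0, outer=20):
    # Quadratic penalty: Q(x; mu) = f(x) + (mu/2)(sum ce^2 + sum max(0, ci)^2),
    # minimized for increasing mu until the constraint violation drops below tol.
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        Q = lambda x, mu=mu: f(x) + 0.5 * mu * (
            sum(c(x) ** 2 for c in ce)
            + sum(max(0.0, c(x)) ** 2 for c in ci))
        x = grad_descent(Q, x, tol)
        viol = max([abs(c(x)) for c in ce]
                   + [max(0.0, c(x)) for c in ci] + [0.0])
        if viol < tol:
            break
        mu *= 10.0
    return x

# minimize x0^2 + x1^2 subject to x0 + x1 = 1; the solution is (0.5, 0.5)
f = lambda x: x[0] ** 2 + x[1] ** 2
ce = [lambda x: x[0] + x[1] - 1.0]
x_star = qpm_sketch(f, ce, [], np.array([0.0, 0.0]))
print(np.round(x_star, 3))  # approaches (0.5, 0.5) as mu grows
```

Note the characteristic QPM behavior: the constraint is only satisfied in the limit of large `mu`, which is why the penalized subproblems become increasingly ill-conditioned.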
- Augmented Lagrangian Method (ALM)
    - `alm(f, ce, ci, x0, inner_opt, tol)`
    - At each k-th iteration, ALM builds an unconstrained optimization problem from the augmented Lagrangian function of the original constrained problem
    - Each unconstrained problem is solved using one of the unconstrained optimizers
    - The output of ALM contains a log of $x_k, \quad f(x_k), \quad \nabla L(x_k), \quad \max(|c_e(x_k)|), \quad \max(|c_i(x_k)|), \quad \lambda_k, \quad \nu_k$
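For reference, a common textbook form of the augmented Lagrangian for equality constraints is

$$L_A(x; \lambda, \mu) = f(x) - \sum_i \lambda_i \, c_{e,i}(x) + \frac{\mu}{2} \sum_i c_{e,i}(x)^2, \qquad \lambda_i \leftarrow \lambda_i - \mu \, c_{e,i}(x),$$

where the multiplier update is applied after each inner (unconstrained) solve. The module's exact sign conventions and its handling of inequality constraints (the $\nu_k$ multipliers) may differ from this sketch.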
- Sequential Quadratic Programming (SQP)
    - `sqp(f, ce, ci, x0, maxiter=100, inner_opt=3, tol=1e-6, tol_inter=1e-4)`
    - The SQP loop builds a Quadratic Programming subproblem (QPk) at each k-th iteration
    - QPk is solved using `alm4sqp(f, ce, ci, x0, lmbda0, nu0, inner_opt, tol)`
    - `alm4sqp()` receives $p_{k-1}^*, \quad \lambda_{k-1}^*, \quad \nu_{k-1}^*$ from the SQP loop as the initial guess of the design variables (`x0`) and the Lagrange multipliers `lmbda0`, `nu0`
    - The output of SQP contains a log of $x_k, \quad f(x_k), \quad \nabla f(x_k), \quad \nabla L(x_k), \quad \max(|c_e(x_k)|), \quad \max(|c_i(x_k)|), \quad \lambda_k, \quad \nu_k$
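The QP subproblem formed at iteration $k$ typically has the standard textbook shape (the module's exact formulation is in the code; the inequality sign convention here is an assumption)

$$\min_{p} \; \nabla f(x_k)^T p + \tfrac{1}{2}\, p^T \nabla_{xx}^2 L(x_k)\, p \quad \text{s.t.} \quad c_e(x_k) + \nabla c_e(x_k)^T p = 0, \quad c_i(x_k) + \nabla c_i(x_k)^T p \le 0,$$

whose solution $p_k^*$ and multipliers $\lambda_k^*, \nu_k^*$ then serve as the warm start for the next iteration, as described above.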
- Test results: `sqp_test_important.ipynb`, `assignment3.ipynb`, `assignment3_AD.ipynb`, `assignment5_*.ipynb`... refer to the assignment_5 pdf file.