### Numerical Optimization (수치최적화)

• Institution: Yonsei University
• Instructor: 정윤모
• Subject classification: Engineering > Computer & Communications > Information and Communication Engineering
• Semester: Fall 2012
• Views: 21,312

For graduate students in science and engineering. This course covers numerical optimization, concentrating on convex optimization; to that end we briefly cover convex theory. For unconstrained optimization we study algorithms, and we then turn to applications, for example approximation and fitting, and l1 minimization.
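The unconstrained algorithms the description refers to can be sketched briefly. The following is not course material; it is a minimal gradient descent with backtracking (Armijo) line search on a strongly convex quadratic, with all names and parameter values chosen purely for illustration.

```python
import numpy as np

def gradient_descent(f, grad, x0, alpha=0.3, beta=0.8, tol=1e-8, max_iter=10_000):
    """Gradient descent with backtracking (Armijo) line search."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # stop when the gradient is small
            break
        t = 1.0
        fx = f(x)
        # backtrack until sufficient decrease: f(x - t g) <= f(x) - alpha t ||g||^2
        while f(x - t * g) > fx - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

# Toy example: strongly convex quadratic f(x) = 1/2 x^T P x - q^T x
P = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ P @ x - q @ x
grad = lambda x: P @ x - q
x_star = gradient_descent(f, grad, np.zeros(2))
# the minimizer solves P x = q, i.e. x_star ≈ [0.6, -0.8]
```

Backtracking makes the method parameter-free in the step size: no knowledge of the Lipschitz constant of the gradient is needed.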

#### Lecture Schedule

 1. CH 1: Introduction to the course and basic concepts (schedule for the semester; reference book; requirements and evaluation method; some basic concepts). CH 2: Convex sets (affine and convex sets; some important examples; operations that preserve convexity; separating and supporting hyperplanes).
 2. CH 3: Convex functions (basic properties and examples; operations that preserve convexity). CH 3: Convex functions (operations that preserve convexity; Jensen's inequality; conjugate functions).
 3. CH 4: Convex optimization problems (optimization problems in standard form; convex optimization problems). CH 4: Convex optimization problems (convex optimization problems; equivalent convex problems). CH 4: Convex optimization problems; CH 5: Duality (linear programs; QCQP; second-order cone programming; Lagrange dual function; standard-form LP).
 4. CH 5: Duality (Lagrange dual problem; weak and strong duality). CH 5: Duality (geometric interpretation). CH 5: Duality (Slater's constraint qualification; KKT conditions).
 5. CH 5: Duality (saddle-point interpretation; perturbation and sensitivity analysis). CH 5: Duality; CH 6: Unconstrained minimization (problem reformulations; strong convexity and its implications).
 6. CH 6: Unconstrained minimization (descent methods; line search types; gradient descent method). CH 6: Unconstrained minimization (quadratic problem example; non-quadratic problem example; steepest descent method). CH 6: Unconstrained minimization (different norms in normalized steepest descent; choice of norm).
 7. CH 6: Unconstrained minimization (Newton step; Newton decrement). CH 6: Unconstrained minimization (Newton's method; classical convergence analysis). CH 6: Unconstrained minimization (classical convergence analysis; damped Newton phase; implementation).
 8. CH 7: Equality constrained minimization (equality constrained minimization; quadratic minimization). CH 7: Equality constrained minimization (quadratic minimization; eliminating equality constraints; example).
 9. CH 7: Equality constrained minimization (Newton step; Newton decrement; Newton's method with equality constraints; Newton's method and elimination; Newton step at infeasible points). CH 7: Equality constrained minimization (infeasible-start Newton method; solving the KKT system; analytic centering).
10. CH 8: Interior-point methods (inequality constrained minimization; logarithmic barrier; central path). CH 8: Interior-point methods (dual points on the central path; interpretation via KKT conditions; force-field interpretation; barrier method; convergence analysis).
11. CH 8: Interior-point methods (barrier method; convergence analysis; feasibility and phase I methods). CH 8: Interior-point methods (primal-dual interior-point methods; interpretation of the Newton step). CH 8: Interior-point methods (l1-norm approximation).
12. CH 9: Approximation and fitting (norm approximation; penalty function approximation). CH 9: Approximation and fitting (examples; Huber penalty function; least-norm problems).
13. CH 9: Approximation and fitting (signal reconstruction; quadratic smoothing example; total variation reconstruction example).
14. CH 9: Statistical estimation (maximum likelihood estimation; linear measurements with IID noise; examples). CH 9: Statistical estimation (comments on Homework 3; final project explanation).
15. CH 9: Statistical estimation; geometric problems; penalty, barrier, and augmented Lagrangian methods (logistic regression; linear discrimination; robust linear discrimination; quadratic penalty method). CH 9: Penalty, barrier, and augmented Lagrangian methods (augmented Lagrangian method; l1 penalty function).
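Several sessions above (the Newton step and Newton decrement, the logarithmic barrier, analytic centering) revolve around one computation. The sketch below is not taken from the course: it is a minimal damped Newton method for a log-barrier objective f(x) = c^T x - sum_i log(b_i - a_i^T x), using the classical Newton-decrement stopping rule; the function names and the toy box-constraint example are illustrative assumptions.

```python
import numpy as np

def newton_barrier(A, b, c, x0, eps=1e-12, alpha=0.25, beta=0.5, max_iter=100):
    """Damped Newton's method for f(x) = c^T x - sum_i log(b_i - a_i^T x).

    Stops when the Newton decrement satisfies lambda^2 / 2 <= eps.
    The starting point x0 must be strictly feasible (b - A x0 > 0).
    """
    def f(z):
        s = b - A @ z
        return np.inf if np.any(s <= 0) else c @ z - np.sum(np.log(s))

    x = x0.copy()
    for _ in range(max_iter):
        d = b - A @ x                         # slacks, must stay > 0
        grad = c + A.T @ (1.0 / d)
        hess = A.T @ np.diag(1.0 / d**2) @ A
        step = np.linalg.solve(hess, -grad)   # Newton step
        lam2 = -grad @ step                   # Newton decrement squared
        if lam2 / 2 <= eps:
            break
        # backtracking line search; f returns +inf outside the domain,
        # so feasibility is enforced automatically
        t, fx = 1.0, f(x)
        while f(x + t * step) > fx - alpha * t * lam2:
            t *= beta
        x = x + t * step
    return x

# Toy example: analytic center shifted by a linear term, over the box 0 <= x <= 1
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 2.0])
x_opt = newton_barrier(A, b, c, np.full(2, 0.5))
# per coordinate, x_opt[i] solves c_i x^2 - (c_i + 2) x + 1 = 0 in (0, 1)
```

Because the objective is separable here, the optimality condition can be checked coordinate-wise in closed form, which makes this a convenient smoke test for the solver.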

#### User Comments

Admin (2017-11-09 10:05)
This is the KOCW operations team. We have contacted Yonsei University again about the course textbook and lecture materials. Please understand, however, that depending on the circumstances of the instructor and the institution, a reply may be delayed or may not come at all.

leedhcf92 (2017-11-09 08:07)
The material is difficult for me. Could you share information about the lecture notes or the textbook?

Admin (2017-08-16 10:40)
This is the KOCW operations team. We have asked Yonsei University about the textbook and will let you know as soon as we receive a reply.

kristen91 (2017-08-16 01:20)
Please let me know the textbook information!

Admin (2016-05-24 16:00)
This is the KOCW operations team. The week 13 lecture appears in the list, but it never had a video. We will remove that session.

masonseo (2016-05-24 15:25)
Is the first video of week 13 lost, or did it never exist?

#### How to Use

• Software required to view the lectures [link]

※ Please note that, depending on the instructor's circumstances, only some of a course's sessions may be made publicly available.