Optimization Theory
1. Foundations of Optimization
2. Mathematical Foundations
3. Unconstrained Optimization
4. Constrained Optimization Theory
5. Linear Programming
6. Nonlinear Programming
7. Integer and Combinatorial Optimization
8. Dynamic Programming
9. Stochastic Optimization
10. Heuristic and Metaheuristic Methods
11. Multi-Objective Optimization
12. Specialized Optimization Topics
13. Applications and Case Studies
14. Computational Aspects and Software
8. Dynamic Programming
8.1. Theoretical Foundations
8.1.1. Principle of Optimality
8.1.2. Bellman's Equation
8.1.3. Value Functions
8.1.4. Policy Functions
8.2. Deterministic Dynamic Programming
8.2.1. Finite Horizon Problems
8.2.1.1. Backward Induction
8.2.1.2. Forward Recursion
8.2.2. Infinite Horizon Problems
8.2.2.1. Discounted Problems
8.2.2.2. Undiscounted Problems
8.2.3. Continuous State and Control
8.3. Stochastic Dynamic Programming
8.3.1. Markov Decision Processes
8.3.1.1. States, Actions, and Transitions
8.3.1.2. Reward Functions
8.3.1.3. Policies and Value Functions
8.3.2. Value Iteration
8.3.3. Policy Iteration
8.3.4. Linear Programming Formulation
8.4. Applications and Extensions
8.4.1. Shortest Path Problems
8.4.2. Resource Allocation Over Time
8.4.3. Inventory Control Models
8.4.4. Optimal Stopping Problems
8.4.5. Stochastic Control Problems
8.5. Computational Aspects
8.5.1. Curse of Dimensionality
8.5.2. Approximation Methods
8.5.3. Function Approximation
8.5.4. Approximate Dynamic Programming