Optimization Theory
1. Foundations of Optimization
2. Mathematical Foundations
3. Unconstrained Optimization
4. Constrained Optimization Theory
5. Linear Programming
6. Nonlinear Programming
7. Integer and Combinatorial Optimization
8. Dynamic Programming
9. Stochastic Optimization
10. Heuristic and Metaheuristic Methods
11. Multi-Objective Optimization
12. Specialized Optimization Topics
13. Applications and Case Studies
14. Computational Aspects and Software
Dynamic Programming
Theoretical Foundations
Principle of Optimality
Bellman's Equation
Value Functions
Policy Functions
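These four ideas fit into a single recursion. The display below is a hedged sketch of the finite-horizon Bellman equation; the notation (V_t for the value function, g_t for the stage reward, f_t for the state transition, U(s) for the admissible controls, and pi_t^* for the optimal policy) is illustrative rather than taken from any particular text.

```latex
% Principle of optimality in equation form (finite horizon, illustrative notation):
% V_t(s): value function at stage t;  g_t: stage reward;  f_t: state transition;
% U(s): admissible controls at state s;  \pi_t^*: optimal policy at stage t.
\[
  V_t(s) \;=\; \max_{u \in U(s)} \Bigl[\, g_t(s,u) + V_{t+1}\bigl(f_t(s,u)\bigr) \Bigr],
  \qquad V_T(s) = g_T(s),
\]
\[
  \pi_t^*(s) \;\in\; \arg\max_{u \in U(s)} \Bigl[\, g_t(s,u) + V_{t+1}\bigl(f_t(s,u)\bigr) \Bigr].
\]
```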
Deterministic Dynamic Programming
Finite Horizon Problems
Backward Induction
Forward Recursion
Infinite Horizon Problems
Discounted Problems
Undiscounted Problems
Continuous State and Control
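For the finite-horizon, finite-state case, backward induction can be sketched in a few lines. The code below is a minimal illustration, not a definitive implementation; the callables `transition`, `reward`, `terminal_reward`, and `controls` are assumed interfaces supplied by the caller.

```python
def backward_induction(states, controls, transition, reward, terminal_reward, T):
    """Finite-horizon deterministic DP by backward induction (illustrative sketch).

    transition(t, s, u) -> next state; reward(t, s, u) -> stage reward;
    terminal_reward(s) -> reward at final stage T; controls(s) -> admissible controls.
    Returns value tables V[t][s] and a policy table policy[t][s].
    """
    V = [dict() for _ in range(T + 1)]
    policy = [dict() for _ in range(T)]
    # Terminal condition: V_T(s) = terminal_reward(s).
    for s in states:
        V[T][s] = terminal_reward(s)
    # Recurse backward: V_t(s) = max_u [ reward(t, s, u) + V_{t+1}(f_t(s, u)) ].
    for t in range(T - 1, -1, -1):
        for s in states:
            best_u, best_val = None, float("-inf")
            for u in controls(s):
                val = reward(t, s, u) + V[t + 1][transition(t, s, u)]
                if val > best_val:
                    best_u, best_val = u, val
            V[t][s] = best_val
            policy[t][s] = best_u
    return V, policy
```

Forward recursion proceeds analogously from stage 0, and the infinite-horizon discounted case replaces the stage index with a fixed-point iteration on a single value table.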
Stochastic Dynamic Programming
Markov Decision Processes
States, Actions, and Transitions
Reward Functions
Policies and Value Functions
Value Iteration
Policy Iteration
Linear Programming Formulation
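As a concrete illustration of the MDP machinery, here is a minimal value iteration sketch for a finite, discounted problem. The input format (P[s][a] as a list of (next state, probability) pairs, R[s][a] as expected immediate reward, `actions(s)` as a callable) is an assumption made for the example.

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a finite discounted MDP (illustrative sketch).

    P[s][a]: list of (next_state, probability) pairs.
    R[s][a]: expected immediate reward.  gamma: discount factor in (0, 1).
    Returns the optimal value function V and a greedy policy.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality update: V(s) = max_a [ R(s,a) + gamma * E[V(s')] ].
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Extract a policy that is greedy with respect to the converged values.
    policy = {
        s: max(actions(s),
               key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
        for s in states
    }
    return V, policy
```

Policy iteration alternates exact policy evaluation with the same greedy improvement step, and the linear programming formulation instead minimizes the sum of values subject to the Bellman inequalities.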
Applications and Extensions
Shortest Path Problems
Resource Allocation Over Time
Inventory Control Models
Optimal Stopping Problems
Stochastic Control Problems
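As one of these applications in miniature, a shortest-path problem on a directed acyclic graph is the DP recursion d(v) = min over edges (u, v) of [d(u) + c(u, v)]. The sketch below assumes the graph is supplied as a topological ordering plus an adjacency dict of edge costs; that input format is an assumption for illustration.

```python
import math

def dag_shortest_path(topo_order, succ, source):
    """Shortest paths from `source` in a DAG via dynamic programming (sketch).

    topo_order: nodes listed in topological order.
    succ[u]: dict mapping each successor v to the edge cost c(u, v).
    Returns distance and predecessor dicts.
    """
    dist = {v: math.inf for v in topo_order}
    pred = {v: None for v in topo_order}
    dist[source] = 0.0
    for u in topo_order:
        if dist[u] == math.inf:
            continue  # u is unreachable; skip relaxing its outgoing edges
        for v, cost in succ.get(u, {}).items():
            # Relaxation step implementing d(v) = min(d(v), d(u) + c(u, v)).
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                pred[v] = u
    return dist, pred
```

Resource allocation, inventory control, and optimal stopping follow the same pattern: a stage index (time), a state (remaining budget, stock level, or "not yet stopped"), and a stage-wise recursion over admissible decisions.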
Computational Aspects
Curse of Dimensionality
Approximation Methods
Function Approximation
Approximate Dynamic Programming
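When the state space is too large to tabulate, one common approximate-DP idea is fitted value iteration: represent V with a parametric approximator and refit it after each Bellman backup on a sample of states. The sketch below assumes a linear-in-features approximation and a deterministic one-step model `step(s, a) -> (next_state, reward)`; all names are illustrative.

```python
import numpy as np

def fitted_value_iteration(sample_states, actions, step, features, gamma=0.95, iters=50):
    """Approximate DP via fitted value iteration with linear features (sketch).

    Instead of a table, V(s) is approximated as features(s) @ w, and w is refit
    by least squares on the sampled states after each Bellman backup.
    step(s, a) -> (next_state, reward) is an assumed deterministic model.
    """
    Phi = np.array([features(s) for s in sample_states])  # design matrix
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        # Bellman backup targets evaluated only on the sampled states.
        targets = np.array([
            max(r + gamma * (features(s2) @ w)
                for a in actions(s)
                for s2, r in [step(s, a)])
            for s in sample_states
        ])
        # Project the backed-up values onto the feature space by least squares.
        w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    return w
```

The choice of features and of the sampled states determines how well the approximation copes with the curse of dimensionality; there is no single correct recipe.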