Operations Research and Optimization
1. Introduction to Operations Research
2. Mathematical Foundations for Optimization
3. Linear Programming
4. Network Optimization
5. Integer Programming
6. Nonlinear Programming
7. Dynamic Programming
8. Stochastic Processes and Queuing Theory
9. Simulation Modeling
10. Decision Analysis
11. Heuristics and Metaheuristics
12. Advanced Optimization Topics
7. Dynamic Programming
7.1. Fundamental Concepts
7.1.1. Principle of Optimality
7.1.1.1. Bellman's Principle
7.1.1.2. Optimal Substructure
7.1.1.3. Overlapping Subproblems
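The three ideas above are easiest to see in code. Below is a minimal Python sketch (not part of the original outline) using memoization on the Fibonacci recurrence, a standard toy example rather than an OR model: the recurrence has optimal substructure, and caching exploits its overlapping subproblems.

```python
# Memoized Fibonacci: a toy with optimal substructure (fib(n) is assembled
# from optimal answers to fib(n-1) and fib(n-2)) and heavily overlapping
# subproblems (naive recursion recomputes the same fib(k) exponentially often).
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:          # boundary conditions anchor the recursion
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, via 51 distinct subproblems instead of ~2**50 calls
```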
7.1.2. Dynamic Programming Elements
7.1.2.1. Stages
7.1.2.2. States
7.1.2.3. Decisions
7.1.2.4. Transitions
7.1.2.5. Returns
7.1.3. Recursive Relationships
7.1.3.1. Bellman Equations
7.1.3.2. Forward Recursion
7.1.3.3. Backward Recursion
7.1.4. State Space Representation
7.1.4.1. State Variables
7.1.4.2. State Transitions
7.1.4.3. Boundary Conditions
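These elements combine into the backward recursion sketched below; the notation (value-to-go f_n, decision set X_n, transition function t_n, stage return r_n) is assumed here, not fixed by the outline.

```latex
% Finite-horizon deterministic DP, backward recursion
f_n(s) = \max_{x \in X_n(s)} \bigl\{ r_n(s, x) + f_{n+1}\bigl( t_n(s, x) \bigr) \bigr\},
\qquad n = N, N-1, \dots, 1,
\qquad f_{N+1}(s) \equiv 0 .
```

Forward recursion applies the same relationship from stage 1 toward stage N, with f_n instead measuring the best return accumulated up to stage n.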
7.2. Deterministic Dynamic Programming
7.2.1. Sequential Decision Problems
7.2.1.1. Multi-stage Optimization
7.2.1.2. Stage-wise Decomposition
7.2.2. Shortest Path Problems
7.2.2.1. Network Representation
7.2.2.2. Recursive Formulation
7.2.2.3. Solution Methods
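As an illustration of the recursive formulation, here is a small backward-recursion solver on an assumed staged network; the node names and arc costs are invented for the example.

```python
# Backward recursion on an assumed staged network (names and costs invented).
# f[v] = length of a shortest v-to-T path: f(v) = min over arcs (v, w) of
# c(v, w) + f(w), seeded by the boundary condition f(T) = 0.

arcs = {          # arcs[v] = {successor: arc cost}
    "A": {"B": 2, "C": 4},
    "B": {"D": 7, "E": 3},
    "C": {"D": 1, "E": 5},
    "D": {"T": 6},
    "E": {"T": 2},
}

order = ["D", "E", "B", "C", "A"]  # reverse topological order, T excluded
f, best = {"T": 0}, {}
for v in order:
    f[v], best[v] = min((c + f[w], w) for w, c in arcs[v].items())

# recover one optimal path by following the stored decisions
path, v = ["A"], "A"
while v != "T":
    v = best[v]
    path.append(v)
print(f["A"], path)  # 7 ['A', 'B', 'E', 'T']
```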
7.2.3. Resource Allocation Problems
7.2.3.1. Knapsack Variants
7.2.3.2. Capital Budgeting
7.2.3.3. Production Planning
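A compact sketch of the basic 0/1 knapsack recursion, which also models capital budgeting (projects as items, budget as capacity); the instance data below are assumed.

```python
# 0/1 knapsack DP: stages are items, states are remaining capacity.
# The same recursion covers capital budgeting (projects/budget) and simple
# resource allocation variants. Instance data below are assumed.

def knapsack(values, weights, capacity):
    f = [0] * (capacity + 1)        # f[cap] = best value achievable with cap
    for v, w in zip(values, weights):
        for cap in range(capacity, w - 1, -1):  # downward: each item used once
            f[cap] = max(f[cap], v + f[cap - w])
    return f[capacity]

print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5))  # 220
```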
7.2.4. Inventory Control Problems
7.2.4.1. Single-Item Inventory
7.2.4.2. Multi-Period Models
7.2.4.3. Setup Costs
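A minimal multi-period, single-item inventory DP with setup costs; the demands and cost parameters below are assumed for illustration, and shortages are simply disallowed.

```python
# Multi-period, single-item inventory DP (all data below are assumed).
# State: inventory on hand at the start of period t; decision: order size q.
# Costs: fixed setup K per order placed, unit cost c, holding h per unit
# carried into the next period. Shortages are disallowed in this sketch.
from functools import lru_cache

demand = [2, 3, 2]        # assumed per-period demands
K, c, h = 10.0, 2.0, 1.0  # assumed setup, unit, and holding costs
MAX_INV = sum(demand)     # never useful to hold more than remaining demand

@lru_cache(maxsize=None)
def f(t: int, inv: int) -> float:
    """Minimum cost to meet demand in periods t, t+1, ... given inventory inv."""
    if t == len(demand):
        return 0.0  # boundary condition: nothing left to supply
    best = float("inf")
    for q in range(MAX_INV - inv + 1):
        end_inv = inv + q - demand[t]
        if end_inv < 0:
            continue  # order too small to cover this period's demand
        setup = K if q > 0 else 0.0
        best = min(best, setup + c * q + h * end_inv + f(t + 1, end_inv))
    return best

print(f(0, 0))  # 31.0 for these data: one setup, order everything up front
```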
7.2.5. Equipment Replacement Problems
7.2.5.1. Replacement Timing
7.2.5.2. Maintenance Decisions
7.2.5.3. Economic Life Analysis
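A small replacement-timing DP, assuming a machine that ages each year and must be replaced by age 3; all cost figures are invented for the sketch.

```python
# Replacement-timing DP over a finite horizon (all figures assumed).
# State: machine age at the start of a year; decision: keep or replace.
from functools import lru_cache

T = 5                                    # planning horizon in years
P = 100.0                                # price of a new machine
op = {0: 10.0, 1: 20.0, 2: 40.0}         # operating cost for the year, by age
salvage = {1: 60.0, 2: 40.0, 3: 20.0}    # resale value by age
MAX_AGE = 3                              # must replace once age 3 is reached

@lru_cache(maxsize=None)
def f(t: int, age: int) -> float:
    if t == T:
        return -salvage.get(age, 0.0)    # horizon ends: sell the machine
    # replace: sell the old machine, buy and run a new one this year
    replace = P - salvage.get(age, 0.0) + op[0] + f(t + 1, 1)
    if age < MAX_AGE:
        keep = op[age] + f(t + 1, age + 1)   # run the current machine one year
        return min(keep, replace)
    return replace

print(f(0, 0))  # minimum net cost starting with a new machine
```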
7.3. Stochastic Dynamic Programming
7.3.1. Markov Decision Processes
7.3.1.1. State Transitions
7.3.1.2. Transition Probabilities
7.3.1.3. Reward Functions
7.3.1.4. Policy Definition
7.3.2. Value Functions
7.3.2.1. State Value Functions
7.3.2.2. Action Value Functions
7.3.2.3. Bellman Equations
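In standard MDP notation (assumed here, not fixed by the outline), the state and action value functions satisfy the Bellman optimality equations:

```latex
V^{*}(s) = \max_{a} \Bigl[ r(s, a) + \gamma \sum_{s'} p(s' \mid s, a)\, V^{*}(s') \Bigr],
\qquad
Q^{*}(s, a) = r(s, a) + \gamma \sum_{s'} p(s' \mid s, a) \max_{a'} Q^{*}(s', a'),
```

with discount factor 0 ≤ γ < 1 and V*(s) = max_a Q*(s, a).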
7.3.3. Policy Evaluation
7.3.3.1. Policy Iteration
7.3.3.2. Value Iteration
7.3.3.3. Linear System Solution
7.3.4. Optimal Policies
7.3.4.1. Stationary Policies
7.3.4.2. Markovian Policies
7.3.4.3. Policy Improvement
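The sketch below runs value iteration on a tiny assumed MDP and then extracts a greedy (improved) stationary policy; the transition data are invented for the example.

```python
# Value iteration on a tiny assumed MDP, followed by greedy policy extraction.
# P[s][a] is a list of (probability, next_state, reward) outcome triples.
gamma, eps = 0.9, 1e-6

P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 1.0)], "go": [(1.0, 0, 2.0)]},
}

def backup(s, a, V):
    """Expected one-step return of action a in state s under values V."""
    return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

V = {s: 0.0 for s in P}
while True:
    V_new = {s: max(backup(s, a, V) for a in P[s]) for s in P}
    # standard stopping rule for an eps-optimal greedy policy
    if max(abs(V_new[s] - V[s]) for s in P) < eps * (1 - gamma) / (2 * gamma):
        V = V_new
        break
    V = V_new

policy = {s: max(P[s], key=lambda a: backup(s, a, V)) for s in P}
print(V, policy)
```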
7.3.5. Infinite Horizon Problems
7.3.5.1. Discounted Rewards
7.3.5.2. Average Rewards
7.3.5.3. Convergence Criteria
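For discounted problems these methods converge because the Bellman operator T is a γ-contraction in the sup norm; the standard stopping rule for value iteration follows:

```latex
\lVert TV - TW \rVert_{\infty} \le \gamma \lVert V - W \rVert_{\infty},
\qquad
\lVert V_{k+1} - V_k \rVert_{\infty} < \frac{\varepsilon (1 - \gamma)}{2\gamma}
\;\Longrightarrow\;
\lVert V^{\pi_{k+1}} - V^{*} \rVert_{\infty} < \varepsilon,
```

where π_{k+1} is the policy greedy with respect to V_{k+1}.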
7.4. Computational Aspects
7.4.1. Curse of Dimensionality
7.4.1.1. State Space Explosion
7.4.1.2. Computational Complexity
7.4.2. Approximation Methods
7.4.2.1. State Aggregation
7.4.2.2. Function Approximation
7.4.2.3. Approximate Dynamic Programming
7.4.3. Solution Algorithms
7.4.3.1. Tabular Methods
7.4.3.2. Approximate Methods
7.4.3.3. Simulation-Based Methods
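To make the approximation ideas concrete, here is a fitted value-iteration sketch: Bellman backups computed only on sampled states, then generalized by a linear model. The dynamics, features, and sampling scheme are all assumed toy choices, not a recommended design.

```python
# Fitted value iteration sketch: do Bellman backups on sampled states only,
# then generalize with a linear model v_hat(s) = theta . phi(s).
# Everything below (dynamics, features, sampling) is an assumed toy setup.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95

def phi(s):                        # assumed features: polynomial in s
    return np.array([1.0, s, s * s])

def step(s, a):                    # assumed deterministic toy dynamics
    return np.clip(s + a, 0.0, 1.0), -abs(s - 0.5)  # reward peaks at s = 0.5

actions = [-0.1, 0.0, 0.1]
theta = np.zeros(3)

for _ in range(50):                # approximate value-iteration sweeps
    S = rng.uniform(0.0, 1.0, size=200)   # sampled states, not the full space
    targets = np.array([
        max(r + gamma * phi(s2) @ theta
            for s2, r in (step(s, a) for a in actions))
        for s in S
    ])
    Phi = np.stack([phi(s) for s in S])
    theta, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

print(theta)  # fitted value-function weights
```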