Identifying effective policies in approximate dynamic programming: beyond regression. Maxwell, M. S., Henderson, S. G., & Topaloglu, H. In Johansson, B., Jain, S., Hugan, J., & Yücesan, E., editors, Proceedings of the 2010 Winter Simulation Conference, pages 1079–1087, Piscataway NJ, 2010. IEEE.
Dynamic programming formulations may be used to solve for optimal policies in Markov decision processes. Due to computational complexity, dynamic programs must often be solved approximately. We consider the case of a tunable approximation architecture used in lieu of computing true value functions. The standard methodology advocates tuning the approximation architecture via sample-path information and regression to get a good fit to the true value function. We provide an example showing that this approach may unnecessarily lead to poorly performing policies, and we suggest direct search methods to find better-performing value function approximations. We illustrate this concept with an application from ambulance redeployment.
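The contrast the abstract draws, tuning an approximation architecture by regression against sampled values versus searching directly over the tuning parameters for good policy performance, can be sketched on a toy problem. The chain MDP, features, and random-search routine below are all hypothetical illustrations, not the paper's ambulance-redeployment model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy chain MDP: states 0..4, actions move left/right,
# reward 1.0 each step the chain sits at the rightmost state.
N, GAMMA = 5, 0.9

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N - 1 else 0.0)

def features(s):
    # A simple two-parameter approximation architecture: bias + state index.
    return np.array([1.0, s / (N - 1)])

def greedy_policy(theta):
    # Policy induced by the approximation: one-step lookahead on phi(s)' theta.
    def pi(s):
        vals = []
        for a in (0, 1):
            s2, r = step(s, a)
            vals.append(r + GAMMA * features(s2) @ theta)
        return int(np.argmax(vals))
    return pi

def evaluate(theta, episodes=20, horizon=30):
    # Average discounted reward of the induced greedy policy.
    pi, total = greedy_policy(theta), 0.0
    for _ in range(episodes):
        s, disc = 0, 1.0
        for _ in range(horizon):
            s, r = step(s, pi(s))
            total += disc * r
            disc *= GAMMA
    return total / episodes

# 1) Regression tuning: least-squares fit to value estimates per state
#    (here a closed-form stand-in for sample-path estimates).
states = np.arange(N)
v_samples = GAMMA ** (N - 1 - states)
Phi = np.vstack([features(s) for s in states])
theta_reg, *_ = np.linalg.lstsq(Phi, v_samples, rcond=None)

# 2) Direct search: random local search on simulated policy performance,
#    ignoring fit quality entirely.
theta_ds, best = theta_reg.copy(), evaluate(theta_reg)
for _ in range(100):
    cand = theta_ds + rng.normal(scale=0.5, size=2)
    perf = evaluate(cand)
    if perf > best:
        theta_ds, best = cand, perf

print("regression-tuned performance:", evaluate(theta_reg))
print("direct-search performance:   ", best)
```

The point of the sketch is the objective, not the search method: regression minimizes value-function fitting error, while direct search optimizes the performance of the induced policy, which is the quantity the paper argues actually matters.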
