Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures. Uesato, J., Kumar, A., Szepesvári, C., Erez, T., Ruderman, A., Anderson, K., Dvijotham, K., Heess, N., & Kohli, P. In ICLR, pages 3692–3702, June, 2019.
This paper addresses the problem of evaluating learning systems in safety critical domains such as autonomous driving, where failures can have catastrophic consequences. We focus on two problems: searching for scenarios when learned agents fail and assessing their probability of failure. The standard method for agent evaluation in reinforcement learning, Vanilla Monte Carlo, can miss failures entirely, leading to the deployment of unsafe agents. We demonstrate this is an issue for current agents, where even matching the compute used for training is sometimes insufficient for evaluation. To address this shortcoming, we draw upon the rare event probability estimation literature and propose an adversarial evaluation approach. Our approach focuses evaluation on adversarially chosen situations, while still providing unbiased estimates of failure probabilities. The key difficulty is in identifying these adversarial situations – since failures are rare there is little signal to drive optimization. To solve this we propose a continuation approach that learns failure modes in related but less robust agents. Our approach also allows reuse of data already collected for training the agent. We demonstrate the efficacy of adversarial evaluation on two standard domains: humanoid control and simulated driving. Experimental results show that our methods can find catastrophic failures and estimate failure rates of agents multiple orders of magnitude faster than standard evaluation schemes, in minutes to hours rather than days.
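The core statistical idea in the abstract – concentrating evaluation on adversarially chosen situations while keeping the failure-probability estimate unbiased via likelihood-ratio reweighting – can be illustrated with a toy rare-event example. This is a minimal importance-sampling sketch, not the paper's method: the threshold `T`, the Gaussian setting, and the shifted proposal are all hypothetical choices for illustration.

```python
import math
import random

# Toy rare-event setting (illustrative, not from the paper): a "failure"
# occurs when a standard-normal outcome exceeds threshold T. The true
# failure probability is tiny, so vanilla Monte Carlo with a realistic
# budget typically observes zero failures.

T = 5.0          # hypothetical failure threshold
N = 100_000      # evaluation budget (number of rollouts)

def vanilla_mc(n, seed=0):
    """Plain Monte Carlo: sample from the true distribution, count failures."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > T)
    return hits / n

def adversarial_is(n, shift=T, seed=0):
    """Importance sampling with an 'adversarial' proposal centered near
    the failure region. Reweighting each sample by the likelihood ratio
    p(x)/q(x) keeps the failure-probability estimate unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)  # proposal q = N(shift, 1)
        if x > T:
            # log p(x)/q(x) for p = N(0,1), q = N(shift,1); the 1/sqrt(2*pi)
            # normalizers cancel.
            logw = -0.5 * x * x + 0.5 * (x - shift) ** 2
            total += math.exp(logw)
    return total / n

# Exact tail probability P[N(0,1) > T] for comparison.
true_p = 0.5 * math.erfc(T / math.sqrt(2.0))
```

With the same budget, `vanilla_mc` usually returns 0.0 (it never sees a failure), while `adversarial_is` recovers an estimate close to `true_p`; this is the sense in which focusing evaluation on adversarial situations can be orders of magnitude more sample-efficient without biasing the estimate.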
@inproceedings{UKSz19,
	abstract = {This paper addresses the problem of evaluating learning systems in safety critical domains such as autonomous driving, where failures can have catastrophic consequences. We focus on two problems: searching for scenarios when learned agents fail and assessing their probability of failure. The standard method for agent evaluation in reinforcement learning, Vanilla Monte Carlo, can miss failures entirely, leading to the deployment of unsafe agents. We demonstrate this is an issue for current agents, where even matching the compute used for training is sometimes insufficient for evaluation. To address this shortcoming, we draw upon the rare event probability estimation literature and propose an adversarial evaluation approach. Our approach focuses evaluation on adversarially chosen situations, while still providing unbiased estimates of failure probabilities. The key difficulty is in identifying these adversarial situations -- since failures are rare there is little signal to drive optimization. To solve this we propose a continuation approach that learns failure modes in related but less robust agents. Our approach also allows reuse of data already collected for training the agent. We demonstrate the efficacy of adversarial evaluation on two standard domains: humanoid control and simulated driving. Experimental results show that our methods can find catastrophic failures and estimate failure rates of agents multiple orders of magnitude faster than standard evaluation schemes, in minutes to hours rather than days.},
	acceptrate = {500 out of 1591 = 31\%},
	author = {Uesato, J. and Kumar, A. and Szepesv{\'a}ri, Cs. and Erez, T. and Ruderman, A. and Anderson, K. and Dvijotham, K. and Heess, N. and Kohli, P.},
	booktitle = {ICLR},
	date = {2019-06},
	date-added = {2019-07-20 13:57:55 -0600},
	date-modified = {2019-07-20 14:02:08 -0600},
	keywords = {Monte Carlo methods, failure probability prediction, continuation method, reinforcement learning, adversarial evaluation},
	month = {June},
	pages = {3692--3702},
	rating = {0},
	read = {Yes},
	title = {Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures},
	url_paper = {ICLR2019-Risk.pdf},
	year = {2019}}