Improvement in a Lazy Context: An Operational Theory for Call-By-Need (Extended Version). Moran, A. K. & Sands, D. November 1998. Extended version of [Moran:Sands:Need].
The standard implementation technique for lazy functional languages is call-by-need, which ensures that an argument to a function in any given call is evaluated at most once. A significant problem with call-by-need is that it is difficult - even for compiler writers - to predict the effects of program transformations. The traditional theories for lazy functional languages are based on call-by-name models, and offer no help in determining which transformations do indeed optimize a program. In this article we present an operational theory for call-by-need, based upon an improvement ordering on programs: M is improved by N if, in all program contexts C, whenever C[M] terminates then C[N] terminates at least as cheaply. We show that this improvement relation satisfies a "context lemma", and supports a rich inequational theory, subsuming the call-by-need lambda calculi of Ariola et al. The reduction-based call-by-need calculi are inadequate as a theory of lazy-program transformation since they only permit transformations which speed up programs by at most a constant factor (a claim we substantiate); we go beyond the various reduction-based calculi for call-by-need by providing powerful proof rules for recursion, including syntactic continuity (the basis of fixed-point-induction style reasoning) and an Improvement Theorem, suitable for arguing the safety and correctness of recursion-based program transformations.
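As a rough gloss of the improvement ordering described in the abstract (the symbols below are our own shorthand, not necessarily the paper's notation), writing C[M] ⇓ⁿ for "C[M] terminates within cost n":

\[
  M \succeq N \;\iff\; \forall C,\, n.\;\; C[M]\Downarrow^{n} \;\implies\; \exists\, m \le n.\;\; C[N]\Downarrow^{m}
\]

That is, replacing M by N in any program context never makes the program diverge and never makes it run more slowly, which is what licenses using the relation to justify program transformations.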
