Unfalsifiability of Security Claims. Herley, C. Proceedings of the National Academy of Sciences, 113(23):6415–6420, June 2016.
[Significance] Much in computer security involves recommending defensive measures: telling people how they should choose and maintain passwords, manage their computers, and so on. We show that claims that any measure is necessary for security are empirically unfalsifiable. That is, no possible observation contradicts a claim of the form "if you don't do X you are not secure." This means that self-correction operates only in one direction. If we are wrong about a measure being sufficient, a successful attack will demonstrate that fact, but if we are wrong about necessity, no possible observation reveals the error. The fact that claims of necessity are easy to make, but impossible to refute, makes waste inevitable and cumulative.

[Abstract] There is an inherent asymmetry in computer security: Things can be declared insecure by observation, but not the reverse. There is no observation that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable. This in turn implies an asymmetry in self-correction: Whereas the claim that countermeasures are sufficient is always subject to correction, the claim that they are necessary is not. Thus, the response to new information can only be to ratchet upward: Newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions. Relying on such claims is the source of two problems: once we go wrong we stay wrong and errors accumulate, and we have no systematic way to rank or prioritize measures.
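A minimal logical sketch of the asymmetry the abstract describes (the notation below is mine, not the paper's): write $S$ for "the system is secure" and $X$ for "countermeasure X is in place".

$$
\begin{aligned}
\text{Necessity claim: } & \lnot X \rightarrow \lnot S \;\;(\equiv\; S \rightarrow X); & \text{falsifier: observe a case of } \lnot X \land S.\\
\text{Sufficiency claim: } & X \rightarrow S; & \text{falsifier: observe a case of } X \land \lnot S.
\end{aligned}
$$

Because insecurity ($\lnot S$) can be established by observation (a successful attack) while security ($S$) cannot, the sufficiency falsifier is observable but the necessity falsifier is not; hence claims of necessity are empirically unfalsifiable.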
@article{herleyUnfalsifiabilitySecurityClaims2016,
  title = {Unfalsifiability of Security Claims},
  author = {Herley, Cormac},
  date = {2016-06},
  journaltitle = {Proceedings of the National Academy of Sciences},
  volume = {113},
  pages = {6415--6420},
  issn = {1091-6490},
  doi = {10.1073/pnas.1517797113},
  url = {https://doi.org/10.1073/pnas.1517797113},
  abstract = {[Significance]

Much in computer security involves recommending defensive measures: telling people how they should choose and maintain passwords, manage their computers, and so on. We show that claims that any measure is necessary for security are empirically unfalsifiable. That is, no possible observation contradicts a claim of the form "if you don't do X you are not secure." This means that self-correction operates only in one direction. If we are wrong about a measure being sufficient, a successful attack will demonstrate that fact, but if we are wrong about necessity, no possible observation reveals the error. The fact that claims of necessity are easy to make, but impossible to refute, makes waste inevitable and cumulative.

[Abstract]

There is an inherent asymmetry in computer security: Things can be declared insecure by observation, but not the reverse. There is no observation that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable. This in turn implies an asymmetry in self-correction: Whereas the claim that countermeasures are sufficient is always subject to correction, the claim that they are necessary is not. Thus, the response to new information can only be to ratchet upward: Newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions. Relying on such claims is the source of two problems: once we go wrong we stay wrong and errors accumulate, and we have no systematic way to rank or prioritize measures.},
  keywords = {cognitive-biases,complexity,computational-science,logics,software-security,unfalsifiability},
  number = {23}
}
