generated by bibbase.org

2017 (3)

ScalaSMT: Satisfiability Modulo Theory in Scala (Tool Paper).
Cassez, F.; and Sloane, A.
In SCALA'17, October 23–27, 2017, Vancouver, BC, Canada. Proceedings, 2017. Association for Computing Machinery. Forthcoming.

@inproceedings{scala-17,
  author    = {Franck Cassez and {Anthony M.} Sloane},
  title     = {ScalaSMT: Satisfiability Modulo Theory in Scala (Tool Paper)},
  booktitle = {SCALA'17, October 23--27, 2017, Vancouver, BC, Canada. Proceedings.},
  pages     = {},
  year      = {2017},
  xxxurl    = {http://dx.doi.org/10.1007/978-3-662-46681-0_39},
  xxxdoi    = {10.1007/978-3-662-46681-0_39},
  publisher = acm,
  xvolume   = 9035,
  xxxseries = lncs,
  mywebpage = {soft-verif},
  keywords  = {scala, SMT solvers},
  category  = {soft-verif},
  show      = {},
  note      = {forthcoming},
  urlpaper  = {papers/scala-17.pdf},
  abstract  = {A Satisfiability Modulo Theory (SMT) solver is a program that implements algorithms to automatically determine whether a logical formula is satisfiable. The performance of SMT solvers has dramatically increased in the last decade and for instance, many of the state-of-the-art software analysis tools heavily rely on SMT solving to analyse source code. We present ScalaSMT, a Scala library that leverages the power of SMT solvers and makes SMT solving directly usable in Scala. ScalaSMT provides seamless access to numerous popular SMT solvers like Z3, CVC4, Yices, MathSat or SMTInterpol. Our library comes with a domain-specific language to write terms and logical formulas for a wide range of logical theories, thereby isolating users from the details of particular solvers.},
  Type      = {B - International Conferences},
}

Abstract: A Satisfiability Modulo Theory (SMT) solver is a program that implements algorithms to automatically determine whether a logical formula is satisfiable. The performance of SMT solvers has dramatically increased in the last decade; for instance, many state-of-the-art software analysis tools heavily rely on SMT solving to analyse source code. We present ScalaSMT, a Scala library that leverages the power of SMT solvers and makes SMT solving directly usable in Scala. ScalaSMT provides seamless access to numerous popular SMT solvers like Z3, CVC4, Yices, MathSAT or SMTInterpol. Our library comes with a domain-specific language to write terms and logical formulas for a wide range of logical theories, thereby isolating users from the details of particular solvers.
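
Illustration (not taken from the paper, and not ScalaSMT's actual API): the kind of plumbing that a library like ScalaSMT hides is sketched below, where a plain Scala program checks the satisfiability of a small integer constraint by feeding an SMT-LIB 2 script to a solver. The sketch assumes a z3 binary is available on the PATH; everything else uses only the Scala standard library.

// Hypothetical sketch: checking satisfiability of an integer constraint by
// driving an SMT-LIB 2 solver (here Z3) from Scala. A library such as
// ScalaSMT replaces this text-level plumbing with a typed DSL.
import scala.sys.process._
import java.io.ByteArrayInputStream

object SmtSketch {
  def main(args: Array[String]): Unit = {
    // (x > 2) and (x + y = 10) and (y < 5): satisfiable, e.g. x = 6, y = 4.
    val script =
      """(set-logic QF_LIA)
        |(declare-const x Int)
        |(declare-const y Int)
        |(assert (> x 2))
        |(assert (= (+ x y) 10))
        |(assert (< y 5))
        |(check-sat)
        |""".stripMargin

    // Assumes a `z3` binary on the PATH; `-in` makes it read the script from stdin.
    val in = new ByteArrayInputStream(script.getBytes("UTF-8"))
    val output = ("z3 -in" #< in).!!

    val verdict = output.linesIterator.next() // "sat", "unsat" or "unknown"
    println(s"solver says: $verdict")
  }
}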

Analysis of program code.
Cassez, F.; and Müller, C.
September 12, 2017. US Patent 9,760,469.

@misc{cassez2017analysis,
  title     = {Analysis of program code},
  author    = {Cassez, F. and M{\"u}ller, C.},
  url       = {https://www.google.com/patents/US9760469},
  year      = {2017},
  month     = sep # "~12",
  note      = {US Patent 9,760,469},
  mywebpage = {soft-verif},
  keywords  = {software verification},
  category  = {soft-verif},
}

Skink: Static Analysis of Programs in LLVM Intermediate Representation (Competition contribution).
Cassez, F.; Sloane, A.; Roberts, M.; Pigram, M.; Suvanpong, P.; and de Aledo Marugán, P. G.
In Tools and Algorithms for the Construction and Analysis of Systems - 23rd International Conference, TACAS 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings, LNCS, pages 380–384, 2017. Springer.

@inproceedings{tacas-17,
  author    = {Franck Cassez and {Anthony M.} Sloane and Matthew Roberts and Matthew Pigram and Pongsak Suvanpong and Pablo Gonz{\'{a}}lez de Aledo Marug{\'{a}}n},
  title     = {Skink: Static Analysis of Programs in LLVM Intermediate Representation (Competition contribution)},
  booktitle = {Tools and Algorithms for the Construction and Analysis of Systems - 23rd International Conference, {TACAS} 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, {ETAPS} 2017, Uppsala, Sweden, April 22-29, 2017. Proceedings},
  pages     = {380--384},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-662-54580-5_27},
  doi       = {10.1007/978-3-662-54580-5_27},
  publisher = springv,
  xvolume   = 10206,
  series    = lncs,
  mywebpage = {soft-verif},
  keywords  = {refinement, software verification},
  category  = {soft-verif},
  show      = {},
  xxnote    = {forthcoming},
  urlpaper  = {papers/sv-comp-2017.pdf},
  abstract  = {Skink is a static analysis tool that analyses the LLVM intermediate representation (LLVM-IR) of a program source code. The analysis consists of checking whether there is a feasible execution that can reach a designated error block in the LLVM-IR. The result of a program analysis is ``correct'' if the error block is not reachable, ``incorrect'' if the error block is reachable, or ``inconclusive'' if the status of the program could not be determined. In this paper, we introduce Skink V2.0 to analyse single and multi-threaded C programs.},
  Type      = {A - International Conferences},
}

Abstract: Skink is a static analysis tool that analyses the LLVM intermediate representation (LLVM-IR) of a program's source code. The analysis consists of checking whether there is a feasible execution that can reach a designated error block in the LLVM-IR. The result of a program analysis is "correct" if the error block is not reachable, "incorrect" if the error block is reachable, or "inconclusive" if the status of the program could not be determined. In this paper, we introduce Skink V2.0 to analyse single and multi-threaded C programs.
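
Illustration (a sketch only, not Skink's implementation): the skeleton of such an analysis can be pictured as a search for the error block in a control-flow graph. The block names and graph below are made up, and a real analysis additionally checks path feasibility, typically with an SMT solver.

// Toy error-block reachability over a hypothetical control-flow graph.
// "incorrect" here only means the error block is syntactically reachable;
// feasibility of the corresponding execution is not checked.
object ErrorReachability {
  type Block = String

  // Successor relation of a small, made-up LLVM-IR-like function.
  val cfg: Map[Block, List[Block]] = Map(
    "entry" -> List("loop"),
    "loop"  -> List("body", "exit"),
    "body"  -> List("check"),
    "check" -> List("loop", "error"),
    "error" -> Nil,
    "exit"  -> Nil
  )

  /** Is `target` reachable from `start` in the CFG? */
  def reachable(start: Block, target: Block): Boolean = {
    val seen = scala.collection.mutable.Set.empty[Block]
    var work: List[Block] = List(start)
    while (work.nonEmpty) {
      val b = work.head
      work = work.tail
      if (b == target) return true
      if (seen.add(b)) work = cfg.getOrElse(b, Nil) ++: work
    }
    false
  }

  def main(args: Array[String]): Unit =
    println(if (reachable("entry", "error")) "error block reachable" else "correct")
}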

2015 (2)

Verification of Concurrent Programs Using Trace Abstraction Refinement.
Cassez, F.; and Ziegler, F.
In Davis, M.; Fehnker, A.; McIver, A.; and Voronkov, A., editors, Logic for Programming, Artificial Intelligence, and Reasoning - 20th International Conference, LPAR-20 2015, Suva, Fiji, November 24-28, 2015, Proceedings, volume 9450 of Lecture Notes in Computer Science, pages 233–248, 2015. Springer. LPAR Best paper award.

@inproceedings{cassez-lpar-2015,
  author    = {Franck Cassez and Frowin Ziegler},
  editor    = {Martin Davis and Ansgar Fehnker and Annabelle McIver and Andrei Voronkov},
  title     = {Verification of Concurrent Programs Using Trace Abstraction Refinement},
  booktitle = {Logic for Programming, Artificial Intelligence, and Reasoning - 20th International Conference, {LPAR-20} 2015, Suva, Fiji, November 24-28, 2015, Proceedings},
  pages     = {233--248},
  year      = {2015},
  series    = {Lecture Notes in Computer Science},
  volume    = {9450},
  publisher = {Springer},
  url       = {http://dx.doi.org/10.1007/978-3-662-48899-7_17},
  doi       = {10.1007/978-3-662-48899-7_17},
  isbn      = {978-3-662-48898-0},
  mywebpage = {soft-verif},
  category  = {soft-verif},
  show      = {},
  note      = {LPAR Best paper award},
  urlpaper  = {papers/lpar-2015.pdf},
  urlslides = {papers/slides-lpar-2015.pdf},
  abstract  = {Verifying concurrent programs is notoriously hard due to the state explosion problem: (1) the data state space can be very large as the variables can range over very large sets, and (2) the control state space is the Cartesian product of the control state space of the concurrent components and thus grows exponentially in the number of components. On the one hand, the most successful approaches to address the control state explosion problem are based on assume-guarantee reasoning or model-checking coupled with partial order reduction. On the other hand, the most successful techniques to address the data space explosion problem for sequential programs verification are based on the abstraction/refinement paradigm which consists in refining an abstract over-approximation of a program via predicate refinement. In this paper, we show that we can combine partial order reduction techniques with trace abstraction refinement. We apply our approach to standard benchmarks and show that it matches current state-of-the-art analysis techniques.},
  Type      = {B - International Conferences},
}

Abstract: Verifying concurrent programs is notoriously hard due to the state explosion problem: (1) the data state space can be very large as the variables can range over very large sets, and (2) the control state space is the Cartesian product of the control state space of the concurrent components and thus grows exponentially in the number of components. On the one hand, the most successful approaches to address the control state explosion problem are based on assume-guarantee reasoning or model-checking coupled with partial order reduction. On the other hand, the most successful techniques to address the data space explosion problem for sequential programs verification are based on the abstraction/refinement paradigm which consists in refining an abstract over-approximation of a program via predicate refinement. In this paper, we show that we can combine partial order reduction techniques with trace abstraction refinement. We apply our approach to standard benchmarks and show that it matches current state-of-the-art analysis techniques.
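
Illustration (a simplified sketch, not the paper's algorithm, and without the partial order reduction the paper contributes): the basic trace abstraction refinement loop repeatedly picks an error trace not yet covered by an infeasibility proof, checks its feasibility, and either reports a counterexample or generalises the infeasible trace into a larger set of excluded traces. The traces and the stand-in feasibility check below are made up for the example.

// Schematic trace abstraction refinement loop.
object TraceRefinementLoop {
  type Trace = List[String] // a trace is a sequence of program statements

  sealed trait Verdict
  case object Correct extends Verdict
  final case class Incorrect(counterexample: Trace) extends Verdict

  def refine(
      pickUncoveredTrace: Set[Set[Trace]] => Option[Trace], // error trace not yet covered
      isFeasible: Trace => Boolean,                         // would call an SMT solver
      generalise: Trace => Set[Trace]                       // one infeasible trace -> a set
  ): Verdict = {
    var proofs: Set[Set[Trace]] = Set.empty // languages of proven-infeasible traces
    while (true) {
      pickUncoveredTrace(proofs) match {
        case None                     => return Correct          // all error traces covered
        case Some(t) if isFeasible(t) => return Incorrect(t)     // real counterexample
        case Some(t)                  => proofs += generalise(t) // refine the abstraction
      }
    }
    Correct // unreachable; in general the refinement loop may not terminate
  }

  def main(args: Array[String]): Unit = {
    // Two toy paths to the error location guarded by "assert x == 0".
    val errorTraces: Set[Trace] = Set(
      List("x := 0", "x := x + 1", "assert x == 0"), // x is 1 at the assert: a real bug
      List("x := 0", "assert x == 0")                // x is 0 at the assert: infeasible error path
    )
    val verdict = refine(
      pickUncoveredTrace = proofs => (errorTraces -- proofs.flatten).headOption,
      isFeasible         = t => t.contains("x := x + 1"), // stand-in for the SMT feasibility check
      generalise         = t => Set(t)                    // weakest possible generalisation
    )
    println(verdict) // Incorrect(List(x := 0, x := x + 1, assert x == 0))
  }
}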

Perentie: Modular Trace Refinement and Selective Value Tracking - (Competition Contribution).
Cassez, F.; Matsuoka, T.; Pierzchalski, E.; and Smyth, N.
In Tools and Algorithms for the Construction and Analysis of Systems - 21st International Conference, TACAS 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015, Proceedings, volume 9035 of LNCS, pages 439–442, 2015. Springer.

@inproceedings{tacas-15,
  author    = {Franck Cassez and Takashi Matsuoka and Edward Pierzchalski and Nathan Smyth},
  title     = {Perentie: Modular Trace Refinement and Selective Value Tracking - (Competition Contribution)},
  booktitle = {Tools and Algorithms for the Construction and Analysis of Systems - 21st International Conference, {TACAS} 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, {ETAPS} 2015, London, UK, April 11-18, 2015. Proceedings},
  pages     = {439--442},
  year      = {2015},
  url       = {http://dx.doi.org/10.1007/978-3-662-46681-0_39},
  doi       = {10.1007/978-3-662-46681-0_39},
  publisher = springv,
  volume    = 9035,
  series    = lncs,
  mywebpage = {soft-verif},
  keywords  = {refinement, software verification},
  category  = {soft-verif},
  show      = {},
  urlpaper  = {papers/sv-comp-2015.pdf},
  abstract  = {Perentie is a software analysis tool based on iterative refinement of trace abstraction: if the refinement process terminates, the program is either declared correct or a counterexample is provided and the program is incorrect.},
  Type      = {B - International Conferences},
}

Abstract: Perentie is a software analysis tool based on iterative refinement of trace abstraction: if the refinement process terminates, the program is either declared correct or a counterexample is provided and the program is incorrect.

2014 (1)

Summary-Based Inter-Procedural Analysis via Modular Trace Refinement.
Cassez, F.; Müller, C.; and Burnett, K.
In 34th International Conference on Foundation of Software Technology and Theoretical Computer Science, FSTTCS 2014, December 15-17, 2014, New Delhi, India, pages 545–556, 2014.

@inproceedings{fsttcs-14,
  author    = {Franck Cassez and Christian Müller and Karla Burnett},
  title     = {Summary-Based Inter-Procedural Analysis via Modular Trace Refinement},
  booktitle = {34th International Conference on Foundation of Software Technology and Theoretical Computer Science, {FSTTCS} 2014, December 15-17, 2014, New Delhi, India},
  pages     = {545--556},
  year      = {2014},
  url_link  = {http://dx.doi.org/10.4230/LIPIcs.FSTTCS.2014.545},
  doi       = {10.4230/LIPIcs.FSTTCS.2014.545},
  mywebpage = {soft-verif},
  category  = {Software verification},
  keywords  = {trace refinement, software verification, Hoare logic},
  abstract  = {We propose a generalisation of trace refinement for the verification of inter-procedural programs. Our method is a top-down modular, summary-based approach, and analyses inter-procedural programs by building function summaries on-demand and improving the summaries each time a function is analysed. Our method is sound, and complete relative to the existence of a modular Hoare proof for a non-recursive program. We have implemented a prototype analyser that demonstrates the main features of our approach and yields promising results.},
  Type      = {B - International Conferences},
}

Abstract: We propose a generalisation of trace refinement for the verification of inter-procedural programs. Our method is a top-down modular, summary-based approach, and analyses inter-procedural programs by building function summaries on-demand and improving the summaries each time a function is analysed. Our method is sound, and complete relative to the existence of a modular Hoare proof for a non-recursive program. We have implemented a prototype analyser that demonstrates the main features of our approach and yields promising results.
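
Illustration (a minimal sketch under assumed abstract-domain operations, not the paper's summary construction): the on-demand flavour of the approach can be pictured as a cache of function summaries that is consulted at each call site and improved whenever the stored summary does not cover the calling context. The Domain trait, leq and join are hypothetical placeholders; the paper builds its summaries from trace refinement proofs instead.

// Hypothetical on-demand summary cache over an assumed abstract domain D.
object SummaryCache {
  // Assumed abstract-domain operations (placeholders).
  trait Domain[D] {
    def leq(a: D, b: D): Boolean // abstract ordering
    def join(a: D, b: D): D      // least upper bound
  }

  final case class Summary[D](pre: D, post: D)

  final class Analyser[D](analyseBody: (String, D) => D)(implicit dom: Domain[D]) {
    private var summaries = Map.empty[String, Summary[D]]

    /** Result of calling `fun` in abstract context `ctx`, reusing summaries when possible. */
    def analyseCall(fun: String, ctx: D): D =
      summaries.get(fun) match {
        case Some(s) if dom.leq(ctx, s.pre) =>
          s.post                             // stored summary already covers this context
        case prev =>
          val post = analyseBody(fun, ctx)   // (re)analyse the callee on demand
          val improved = prev match {        // improve the stored summary
            case Some(s) => Summary(dom.join(s.pre, ctx), dom.join(s.post, post))
            case None    => Summary(ctx, post)
          }
          summaries += (fun -> improved)
          improved.post
      }
  }
}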

2013 (1)

PtrTracker: Pragmatic pointer analysis.
Biallas, S.; Olesen, M. C.; Cassez, F.; and Huuck, R.
In 13th IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2013, Eindhoven, Netherlands, September 22-23, 2013, pages 69–73, 2013. IEEE Computer Society.

@inproceedings{scam-2013,
  author    = {Sebastian Biallas and Mads Chr. Olesen and Franck Cassez and Ralf Huuck},
  title     = {PtrTracker: Pragmatic pointer analysis},
  booktitle = {13th {IEEE} International Working Conference on Source Code Analysis and Manipulation, {SCAM} 2013, Eindhoven, Netherlands, September 22-23, 2013},
  pages     = {69--73},
  year      = {2013},
  publisher = {{IEEE} Computer Society},
  urlpaper  = {papers/scam-2013.pdf},
  url_link  = {http://dx.doi.org/10.1109/SCAM.2013.6648186},
  doi       = {10.1109/SCAM.2013.6648186},
  keywords  = {Software verification, pointers, static analysis},
  mywebpage = {soft-verif},
  category  = {Software verification},
  abstract  = {Static program analysis for bug detection in industrial C/C++ code has many challenges. One of them is to analyze pointer and pointer structures efficiently. While there has been much research into various aspects of pointer analysis either for compiler optimization or for verification tasks, both classical categories are not optimized for bug detection, where speed and precision are important, but soundness (no missed bugs) and completeness (no false positives) do not necessarily need to be guaranteed.},
  Type      = {B - International Conferences},
}

Abstract: Static program analysis for bug detection in industrial C/C++ code has many challenges. One of them is to analyze pointers and pointer structures efficiently. While there has been much research into various aspects of pointer analysis, either for compiler optimization or for verification tasks, both classical categories are not optimized for bug detection, where speed and precision are important, but soundness (no missed bugs) and completeness (no false positives) do not necessarily need to be guaranteed.

2012 (2)

What is a Timing Anomaly?
Cassez, F.; Hansen, R. R.; and Olesen, M. C.
In 12th International Workshop on Worst-Case Execution Time Analysis, WCET 2012, July 10, 2012, Pisa, Italy, volume 23 of OASICS, pages 1-12, 2012. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik.

@inproceedings{wcet-12,
  author    = {Franck Cassez and René Rydhof Hansen and Mads Chr. Olesen},
  title     = {What is a Timing Anomaly?},
  booktitle = {12th International Workshop on Worst-Case Execution Time Analysis, WCET 2012, July 10, 2012, Pisa, Italy},
  year      = {2012},
  pages     = {1-12},
  series    = {OASICS},
  volume    = {23},
  publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik},
  urlpaper  = {papers/wcet-2012.pdf},
  url_link  = {http://dx.doi.org/10.4230/OASIcs.WCET.2012.1},
  category  = {wcet},
  mywebpage = {soft-verif,wcet},
  keywords  = {wcet},
  abstract  = {Timing anomalies make worst-case execution time analysis much harder, because the analysis will have to consider all local choices. It has been widely recognised that certain hardware features are timing anomalous, while others are not. However, defining formally what a timing anomaly is, has been difficult. We examine previous definitions of timing anomalies, and identify examples where they do not align with common observations. We then provide a definition for consistently slower hardware traces that can be used to define timing anomalies and aligns with common observations.},
  Type      = {B - International Conferences},
}

Abstract: Timing anomalies make worst-case execution time analysis much harder, because the analysis will have to consider all local choices. It has been widely recognised that certain hardware features are timing anomalous, while others are not. However, formally defining what a timing anomaly is has been difficult. We examine previous definitions of timing anomalies, and identify examples where they do not align with common observations. We then provide a definition for consistently slower hardware traces that can be used to define timing anomalies and aligns with common observations.

High Performance Static Analysis for Industry.
Bradley, M.; Cassez, F.; Fehnker, A.; Given-Wilson, T.; and Huuck, R.
Electr. Notes Theor. Comput. Sci., 289: 3-14. 2012.

@article{tapas-12,
  author    = {Mark Bradley and Franck Cassez and Ansgar Fehnker and Thomas Given-Wilson and Ralf Huuck},
  title     = {High Performance Static Analysis for Industry},
  journal   = {Electr. Notes Theor. Comput. Sci.},
  volume    = {289},
  year      = {2012},
  pages     = {3-14},
  urlpaper  = {papers/tapas-2012.pdf},
  ee        = {http://dx.doi.org/10.1016/j.entcs.2012.11.002},
  bibsource = {DBLP, http://dblp.uni-trier.de},
  mywebpage = {soft-verif},
  keywords  = {software verification},
  abstract  = {Static source code analysis for software bug detection has come a long way since its early beginnings as a compiler technology. However, with the introduction of more sophisticated algorithmic techniques, such as model checking and constraint solving, questions about performance are a major concern. In this work we present an empirical study of our industrial strength source code analysis tool Goanna that uses a model checking core for static analysis of C/C++ code. We present the core technology and abstraction mechanism with a focus on performance, as guided by experience from having analyzed millions of lines of code. In particular, we present results from our recent study within the NIST/DHS SAMATE program. The results show that, maybe surprisingly, formal verification techniques can be used successfully in practical industry applications scaling roughly linearly, even for millions of lines of code.},
  Type      = {B - International Conferences},
}

Abstract: Static source code analysis for software bug detection has come a long way since its early beginnings as a compiler technology. However, with the introduction of more sophisticated algorithmic techniques, such as model checking and constraint solving, questions about performance are a major concern. In this work we present an empirical study of our industrial strength source code analysis tool Goanna that uses a model checking core for static analysis of C/C++ code. We present the core technology and abstraction mechanism with a focus on performance, as guided by experience from having analyzed millions of lines of code. In particular, we present results from our recent study within the NIST/DHS SAMATE program. The results show that, maybe surprisingly, formal verification techniques can be used successfully in practical industry applications scaling roughly linearly, even for millions of lines of code.