ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning. Chowdhury, A. B., Alrahis, L., Collini, L., Knechtel, J., Karri, R., Garg, S., Sinanoglu, O., & Tan, B. March 2023. arXiv:2303.03372 [cs]
Abstract: Oracle-less machine learning (ML) attacks have broken various logic locking schemes. Regular synthesis, which is tailored for area-power-delay optimization, yields netlists where key-gate localities are vulnerable to learning. Thus, we call for security-aware logic synthesis. We propose ALMOST, a framework for adversarial learning to mitigate oracle-less ML attacks via synthesis tuning. ALMOST uses a simulated-annealing-based synthesis recipe generator, employing adversarially trained models that can predict state-of-the-art attacks' accuracies over wide ranges of recipes and key-gate localities. Experiments on ISCAS benchmarks confirm that the attacks' accuracies drop to around 50% for ALMOST-synthesized circuits, all while not undermining design optimization.
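For readers skimming the method: the core loop described in the abstract is a simulated-annealing search over synthesis recipes, scored by a learned predictor of attack accuracy. The Python sketch below is a minimal illustration under stated assumptions, not the authors' implementation; the ABC-style pass names and the predict_attack_accuracy() stub are hypothetical placeholders standing in for the adversarially trained models described in the paper.

import math
import random

# Hypothetical pool of logic-synthesis passes (ABC-style names, assumed).
TRANSFORMS = ["rewrite", "refactor", "balance", "resub", "rewrite -z"]

def predict_attack_accuracy(recipe):
    """Stand-in for the adversarially trained predictor; returns a
    key-recovery accuracy in [0, 1]. Deterministic per recipe so the
    sketch runs end to end without a real model."""
    rng = random.Random(hash(tuple(recipe)))
    return rng.uniform(0.5, 1.0)

def cost(recipe):
    # A secure recipe drives predicted attack accuracy toward 50%,
    # i.e. no better than random key guessing.
    return abs(predict_attack_accuracy(recipe) - 0.5)

def anneal(length=8, steps=500, t0=1.0, alpha=0.99):
    recipe = [random.choice(TRANSFORMS) for _ in range(length)]
    best, best_cost, t = list(recipe), cost(recipe), t0
    for _ in range(steps):
        cand = list(recipe)
        cand[random.randrange(length)] = random.choice(TRANSFORMS)  # mutate one pass
        delta = cost(cand) - cost(recipe)
        # Metropolis acceptance: always take improvements, sometimes take worse moves.
        if delta < 0 or random.random() < math.exp(-delta / t):
            recipe = cand
            if cost(recipe) < best_cost:
                best, best_cost = list(recipe), cost(recipe)
        t *= alpha  # geometric cooling schedule
    return best, best_cost

if __name__ == "__main__":
    recipe, c = anneal()
    print("best recipe:", "; ".join(recipe), f"(|accuracy - 0.5| = {c:.3f})")

In the paper's actual framework the predictor is trained adversarially and the recipe is also constrained by area-power-delay targets; this sketch only shows the annealing skeleton.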
@misc{chowdhury_almost_2023-1,
title = {{ALMOST}: {Adversarial} {Learning} to {Mitigate} {Oracle}-less {ML} {Attacks} via {Synthesis} {Tuning}},
shorttitle = {{ALMOST}},
url = {http://arxiv.org/abs/2303.03372},
abstract = {Oracle-less machine learning (ML) attacks have broken various logic locking schemes. Regular synthesis, which is tailored for area-power-delay optimization, yields netlists where key-gate localities are vulnerable to learning. Thus, we call for security-aware logic synthesis. We propose ALMOST, a framework for adversarial learning to mitigate oracle-less ML attacks via synthesis tuning. ALMOST uses a simulated-annealing-based synthesis recipe generator, employing adversarially trained models that can predict state-of-the-art attacks' accuracies over wide ranges of recipes and key-gate localities. Experiments on ISCAS benchmarks confirm that the attacks' accuracies drop to around 50\% for ALMOST-synthesized circuits, all while not undermining design optimization.},
urldate = {2023-08-22},
publisher = {arXiv},
author = {Chowdhury, Animesh Basak and Alrahis, Lilas and Collini, Luca and Knechtel, Johann and Karri, Ramesh and Garg, Siddharth and Sinanoglu, Ozgur and Tan, Benjamin},
month = mar,
year = {2023},
note = {arXiv:2303.03372 [cs]},
keywords = {Computer Science - Cryptography and Security, Computer Science - Machine Learning},
}