Generated by bibbase.org
2023 (5)
MANDA: On Adversarial Example Detection for Network Intrusion Detection System.
Wang, N.; Chen, Y.; Xiao, Y.; Hu, Y.; Lou, W.; and Hou, Y. T.
IEEE Transactions on Dependable and Secure Computing, 20(2): 1139–1153. March 2023.
@article{wang_manda_2023,
	title = {{MANDA}: {On} {Adversarial} {Example} {Detection} for {Network} {Intrusion} {Detection} {System}},
	volume = {20},
	issn = {1941-0018},
	shorttitle = {{MANDA}},
	url = {https://ieeexplore.ieee.org/abstract/document/9709532},
	doi = {10.1109/TDSC.2022.3148990},
	abstract = {With the rapid advancement in machine learning (ML), ML-based Intrusion Detection Systems (IDSs) are widely deployed to protect networks from various attacks. One of the biggest challenges is that ML-based IDSs suffer from adversarial example (AE) attacks. By applying small perturbations (e.g., slightly increasing packet inter-arrival time) to the intrusion traffic, an AE attack can flip the prediction of a well-trained IDS. We address this challenge by proposing MANDA, a MANifold and Decision boundary-based AE detection system. Through analyzing AE attacks, we notice that 1) an AE tends to be close to its original manifold (i.e., the cluster of samples in its original class) regardless of which class it is misclassified into; and 2) AEs tend to be close to the decision boundary to minimize the perturbation scale. Based on the two observations, we design MANDA for accurate AE detection by exploiting inconsistency between manifold evaluation and IDS model inference and evaluating model uncertainty on small perturbations. We evaluate MANDA on both binary IDS and multi-class IDS on two datasets (NSL-KDD and CICIDS) under three state-of-the-art AE attacks. Our experimental results show that MANDA achieves high true-positive rate (98.41\%) with a 5\% false-positive rate.},
	number = {2},
	urldate = {2024-02-08},
	journal = {IEEE Transactions on Dependable and Secure Computing},
	author = {Wang, Ning and Chen, Yimin and Xiao, Yang and Hu, Yang and Lou, Wenjing and Hou, Y. Thomas},
	month = mar,
	year = {2023},
	keywords = {AE detection, Adaptation models, Adversarial example (AE), Detectors, Generative adversarial networks, Manifolds, Perturbation methods, Social networking (online), Task analysis, intrusion detection system},
	pages = {1139--1153},
}
Applying Behavioral Finance to Influence Consumer Decision-Making and Behavior Via Human-Automation Interaction.
Zeng, L.; and Chen, Y.
In Duffy, V. G.; Lehto, M.; Yih, Y.; and Proctor, R. W., editors, Human-Automation Interaction: Manufacturing, Services and User Experience, Automation, Collaboration, & E-Services, pages 597–611. Springer International Publishing, Cham, 2023.
@incollection{zeng_applying_2023,
	address = {Cham},
	series = {Automation, {Collaboration}, \& {E}-{Services}},
	title = {Applying {Behavioral} {Finance} to {Influence} {Consumer} {Decision}-{Making} and {Behavior} {Via} {Human}-{Automation} {Interaction}},
	isbn = {9783031107801},
	url = {https://doi.org/10.1007/978-3-031-10780-1_33},
	abstract = {This chapter focuses on addressing what drives human decision making and behavior and how to influence such in the context of real-world human-automation interaction. Humans do not think or behave like robots/computers, and automation technologies that do not incorporate Human Factors are likely to be sub-optimal at promoting the desired behavioral outcome. The ever-increasing body of knowledge in Behavioral Finance not only provides deep insights into the “Why” behind human decision making, but also sheds light on “How” to facilitate the desired behavior change and create win–win situations for shareholders all around. In this chapter, we discuss in detail why human decision making often seems irrational and is subject to the influence of heuristics/biases. Then, we provide a viable business framework to help practitioners drive real-world impact through a Behavioral Finance approach. To illustrate how such a framework is deployed, we analyze several applications by organizations across industry sectors. At the end of the chapter, we discuss the ethics standards needed when applying Behavioral Finance and provide a set of practical guidelines for researchers and practitioners to apply Behavioral Finance in an appropriate manner to drive real impact.},
	language = {en},
	urldate = {2024-02-08},
	booktitle = {Human-{Automation} {Interaction}: {Manufacturing}, {Services} and {User} {Experience}},
	publisher = {Springer International Publishing},
	author = {Zeng, Leon and Chen, Yimin},
	editor = {Duffy, Vincent G. and Lehto, Mark and Yih, Yuehwern and Proctor, Robert W.},
	year = {2023},
	doi = {10.1007/978-3-031-10780-1_33},
	keywords = {Behavioral finance, Decision making, Implicit cognition},
	pages = {597--611},
}
DUO: Stealthy Adversarial Example Attack on Video Retrieval Systems via Frame-Pixel Search.
Yao, X.; Zhan, Y.; Chen, Y.; Tang, F.; Zhao, M.; Li, E.; and Zhang, Y.
In 2023 IEEE 43rd International Conference on Distributed Computing Systems (ICDCS), pages 1–11, July 2023. ISSN: 2575-8411.
@inproceedings{yao_duo_2023,
	title = {{DUO}: {Stealthy} {Adversarial} {Example} {Attack} on {Video} {Retrieval} {Systems} via {Frame}-{Pixel} {Search}},
	shorttitle = {{DUO}},
	url = {https://ieeexplore.ieee.org/abstract/document/10272552},
	doi = {10.1109/ICDCS57875.2023.00044},
	abstract = {Massive videos are released every day particularly through video-focused social media apps such as TikTok. This trend has fostered the quick emergence of video retrieval systems, which provide cloud-based services to retrieve similar videos using machine learning techniques. Adversarial example (AE) attacks have been shown to be effective on such systems by perturbing an unaltered video subtly to induce false retrieval results. Such AE attacks can be easily detected because the adversarial perturbations are all over pixels and frames. In this paper, we propose DUO, a stealthy targeted black-box AE attack which uses DUal search Over frame-pixel to generate sparse perturbations and improve stealthiness. DUO is motivated by two observations: only “key frames” in a video decide model predictions, and different pixels and frames contribute far differently to AEs. We implement DUO into a sequential attack pipeline consisting of two components (i.e., SparseTransfer and SparseQuery) built upon such intuitions. In particular, DUO uses SparseTransfer to generate initial perturbations and then SparseQuery to further rectify them. Extensive evaluations on two popular datasets confirm the higher efficacy and stealthiness of DUO over existing AE attacks on video retrieval systems. In particular, we show that DUO achieves higher precision while significantly reducing adversarial perturbations by more than ×100 than the state-of-the-art AE attack.},
	urldate = {2024-02-08},
	booktitle = {2023 {IEEE} 43rd {International} {Conference} on {Distributed} {Computing} {Systems} ({ICDCS})},
	author = {Yao, Xin and Zhan, Yu and Chen, Yimin and Tang, Fengxiao and Zhao, Ming and Li, Enlang and Zhang, Yanchao},
	month = jul,
	year = {2023},
	note = {ISSN: 2575-8411},
	keywords = {Closed box, Machine learning, Perturbation methods, Pipelines, Predictive models, Rendering (computer graphics), Social networking (online), black-box attack, sparse targeted adversarial example attack, stealthiness, video retrieval system},
	pages = {1--11},
}
mmLock: User Leaving Detection Against Data Theft via High-Quality mmWave Radar Imaging.
Xu, J.; Bi, Z.; Singha, A.; Li, T.; Chen, Y.; and Zhang, Y.
In 2023 32nd International Conference on Computer Communications and Networks (ICCCN), pages 1–10, July 2023. ISSN: 2637-9430.
@inproceedings{xu_mmlock_2023,
	title = {{mmLock}: {User} {Leaving} {Detection} {Against} {Data} {Theft} via {High}-{Quality} {mmWave} {Radar} {Imaging}},
	shorttitle = {{mmLock}},
	url = {https://ieeexplore.ieee.org/abstract/document/10230151},
	doi = {10.1109/ICCCN58024.2023.10230151},
	abstract = {The use of smart devices such as smartphones, tablets, and laptops skyrocketed in the last decade. These devices enable ubiquitous applications for entertainment, communication, productivity, and healthcare but also introduce big concern about user privacy and data security. In addition to various authentication techniques, automatic and immediate device locking based on user leaving detection is an indispensable way to secure the devices. Current user leaving detection techniques mainly rely on acoustic ranging and do not work well in environments with multiple moving objects. In this paper, we present mmLock, a system that enables faster and more accurate user leaving detection in dynamic environments. mmLock uses a mmWave FMCW radar to capture the user's 3D mesh and detects the leaving gesture from the 3D human mesh data with a hybrid PointNet-LSTM model. Based on explainable user point clouds, mmLock is more robust than existing gesture recognition systems which can only identify the raw signal patterns. We implement and evaluate mmLock with a commercial off-the-shelf (COTS) TI mmWave radar in multiple environments and scenarios. We train the PointNet-LSTM model out of over 1 TB mmWave signal data and achieve 100\% true-positive rate in most scenarios.},
	urldate = {2024-02-08},
	booktitle = {2023 32nd {International} {Conference} on {Computer} {Communications} and {Networks} ({ICCCN})},
	author = {Xu, Jiawei and Bi, Ziqian and Singha, Amit and Li, Tao and Chen, Yimin and Zhang, Yanchao},
	month = jul,
	year = {2023},
	note = {ISSN: 2637-9430},
	keywords = {Distance measurement, Point cloud compression, Radar, Radar detection, Radar imaging, Target recognition, Three-dimensional displays},
	pages = {1--10},
}
Evaluating the Impact of Noisy Point Clouds on Wireless Gesture Recognition Systems.
Jiang, P.; Fassman, E.; Singha, A.; Chen, Y.; and Li, T.
In Proceedings of the Twenty-fourth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (MobiHoc '23), pages 480–485, New York, NY, USA, October 2023. Association for Computing Machinery.
@inproceedings{jiang_evaluating_2023,
	address = {New York, NY, USA},
	series = {{MobiHoc} '23},
	title = {Evaluating the {Impact} of {Noisy} {Point} {Clouds} on {Wireless} {Gesture} {Recognition} {Systems}},
	isbn = {9781450399265},
	url = {https://dl.acm.org/doi/10.1145/3565287.3617626},
	doi = {10.1145/3565287.3617626},
	abstract = {Point cloud data gathered through wireless sensors has garnered increasing attention for its critical applications, including automotive radars, security systems, and notably, gesture recognition. It provides a non-intrusive and robust approach towards human-computer interactions. However, its reliance on real-time data makes resilience of paramount concern and attacks on or imperfections with these sensors can have catastrophic effects. From real-time spoofing to data poisoning attacks or even just faulty data, systems based on 2D and 3D point cloud machine learning models can be extremely vulnerable. Despite this, there exist few studies prioritizing evaluations on the robustness of these systems over noisy time-sensitive point clouds. This study presents an in-depth examination on the effects of noisy data being used in training various millimeter wave based gesture recognition systems. Noisy point clouds can be introduced during the training stage where imperfect data is fed to a model, causing the model to misclassify test-time samples and lowering its overall accuracy. We stage and evaluate the impact of four different, simple data noising scenarios to observe potential vulnerabilities within these systems. Our findings reveal the respective susceptibilities and resiliencies of transformer, long-short term memory, and convolutional models, highlighting the importance to not only dedicate time and research towards innovations in wireless gesture recognition, but also towards optimizing these systems in order to proactively prevent undesirable effects.},
	urldate = {2024-02-08},
	booktitle = {Proceedings of the {Twenty}-fourth {International} {Symposium} on {Theory}, {Algorithmic} {Foundations}, and {Protocol} {Design} for {Mobile} {Networks} and {Mobile} {Computing}},
	publisher = {Association for Computing Machinery},
	author = {Jiang, Paul and Fassman, Ellie and Singha, Amit and Chen, Yimin and Li, Tao},
	month = oct,
	year = {2023},
	keywords = {classification of point clouds, cybersecurity, gesture recognition, machine learning, millimeter waves, noisy data, time-sensitive point clouds, wireless},
	pages = {480--485},
}
2022 (6)
NOSnoop: An Effective Collaborative Meta-Learning Scheme Against Property Inference Attack.
Ma, X.; Li, B.; Jiang, Q.; Chen, Y.; Gao, S.; and Ma, J.
IEEE Internet of Things Journal, 9(9): 6778–6789. May 2022.
@article{ma_nosnoop_2022,
	title = {{NOSnoop}: {An} {Effective} {Collaborative} {Meta}-{Learning} {Scheme} {Against} {Property} {Inference} {Attack}},
	volume = {9},
	issn = {2327-4662},
	shorttitle = {{NOSnoop}},
	url = {https://ieeexplore.ieee.org/abstract/document/9538829},
	doi = {10.1109/JIOT.2021.3112737},
	abstract = {Collaborative learning has been used to train a joint model on geographically diverse data through periodically sharing knowledge. Although participants keep the data locally in collaborative learning, the adversary can still launch inference attacks through participants’ shared information. In this article, we focus on the property inference attack during model training and design a novel defense mechanism, namely, NOSnoop, to defend such an attack. We propose a collaborative meta-learning architecture to learn the common knowledge over all participants and utilize the natural advantage of meta-learning to hide the sensitive property data. We consider both irrelevant property and relevant property preservation in NOSnoop. For irrelevant property preservation, we utilize the inherent advantage of meta-learning to hide the sensitive property data in meta-training support data set. Thus, the adversary cannot capture the key information related to the sensitive properties and cannot infer victim’s private property successfully. For relevant property preservation, an adversarial game is further proposed to reduce the inference success rate of the adversary. We conduct comprehensive experiments to evaluate the effectiveness of NOSnoop. When hiding the sensitive property data in meta-training support data set, NOSnoop achieves an inference AUC score as low as 0.4984 for irrelevant property preservation, meaning the adversary cannot distinguish whether the training batch has the sensitive property data or not. When preserving the relevant property, NOSnoop is able to achieve an inference AUC score of 0.5091 without compromising model utility.},
	number = {9},
	urldate = {2024-02-08},
	journal = {IEEE Internet of Things Journal},
	author = {Ma, Xindi and Li, Baopu and Jiang, Qi and Chen, Yimin and Gao, Sheng and Ma, Jianfeng},
	month = may,
	year = {2022},
	keywords = {Collaborative work, Computational modeling, Data models, Data privacy, Inference attack, Internet of Things, Privacy, Training, machine learning, meta-learning, privacy preservation},
	pages = {6778--6789},
}
FeCo: Boosting Intrusion Detection Capability in IoT Networks via Contrastive Learning.
Wang, N.; Chen, Y.; Hu, Y.; Lou, W.; and Hou, Y. T.
In IEEE INFOCOM 2022 - IEEE Conference on Computer Communications, pages 1409–1418, May 2022. ISSN: 2641-9874.
@inproceedings{wang_feco_2022,
	title = {{FeCo}: {Boosting} {Intrusion} {Detection} {Capability} in {IoT} {Networks} via {Contrastive} {Learning}},
	shorttitle = {{FeCo}},
	url = {https://ieeexplore.ieee.org/abstract/document/9796926},
	doi = {10.1109/INFOCOM48880.2022.9796926},
	abstract = {Over the last decade, Internet of Things (IoT) has permeated our daily life with a broad range of applications. However, a lack of sufficient security features in IoT devices renders IoT ecosystems vulnerable to various network intrusion attacks, potentially causing severe damage. Previous works have explored using machine learning to build anomaly detection models for defending against such attacks. In this paper, we propose FeCo, a federated-contrastive-learning framework that coordinates in-network IoT devices to jointly learn intrusion detection models. FeCo utilizes federated learning to alleviate users’ privacy concerns as participating devices only submit their model parameters rather than local data. Compared to previous works, we develop a novel representation learning method based on contrastive learning that is able to learn a more accurate model for the benign class. FeCo significantly improves the intrusion detection accuracy compared to previous works. Besides, we implement a two-step feature selection scheme to avoid overfitting and reduce computation time. Through extensive experiments on the NSL-KDD dataset, we demonstrate that FeCo achieves as high as 8\% accuracy improvement compared to the state-of-the-art and is robust to non-IID data. Evaluations on convergence, computation overhead, and scalability further confirm the suitability of FeCo for IoT intrusion detection.},
	urldate = {2024-02-08},
	booktitle = {{IEEE} {INFOCOM} 2022 - {IEEE} {Conference} on {Computer} {Communications}},
	author = {Wang, Ning and Chen, Yimin and Hu, Yang and Lou, Wenjing and Hou, Y. Thomas},
	month = may,
	year = {2022},
	note = {ISSN: 2641-9874},
	keywords = {Biological system modeling, Data privacy, Feature extraction, Intrusion detection, Representation learning, Scalability, Telecommunication traffic},
	pages = {1409--1418},
}
\n\n\n
\n \n\n \n \n \n \n \n \n FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations.\n \n \n \n \n\n\n \n Wang, N.; Xiao, Y.; Chen, Y.; Hu, Y.; Lou, W.; and Hou, Y. T.\n\n\n \n\n\n\n In Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security, of ASIA CCS '22, pages 946–958, New York, NY, USA, May 2022. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"FLARE:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{wang_flare_2022,\n\taddress = {New York, NY, USA},\n\tseries = {{ASIA} {CCS} '22},\n\ttitle = {{FLARE}: {Defending} {Federated} {Learning} against {Model} {Poisoning} {Attacks} via {Latent} {Space} {Representations}},\n\tisbn = {9781450391405},\n\tshorttitle = {{FLARE}},\n\turl = {https://dl.acm.org/doi/10.1145/3488932.3517395},\n\tdoi = {10.1145/3488932.3517395},\n\tabstract = {Federated learning (FL) has been shown vulnerable to a new class of adversarial attacks, known as model poisoning attacks (MPA), where one or more malicious clients try to poison the global model by sending carefully crafted local model updates to the central parameter server. Existing defenses that have been fixated on analyzing model parameters show limited effectiveness in detecting such carefully crafted poisonous models. In this work, we propose FLARE, a robust model aggregation mechanism for FL, which is resilient against state-of-the-art MPAs. Instead of solely depending on model parameters, FLARE leverages the penultimate layer representations (PLRs) of the model for characterizing the adversarial influence on each local model update. PLRs demonstrate a better capability to differentiate malicious models from benign ones than model parameter-based solutions. We further propose a trust evaluation method that estimates a trust score for each model update based on pairwise PLR discrepancies among all model updates. Under the assumption that honest clients make up the majority, FLARE assigns a trust score to each model update in a way that those far from the benign cluster are assigned low scores. FLARE then aggregates the model updates weighted by their trust scores and finally updates the global model. Extensive experimental results demonstrate the effectiveness of FLARE in defending FL against various MPAs, including semantic backdoor attacks, trojan backdoor attacks, and untargeted attacks, and safeguarding the accuracy of FL.},\n\turldate = {2024-02-08},\n\tbooktitle = {Proceedings of the 2022 {ACM} on {Asia} {Conference} on {Computer} and {Communications} {Security}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Wang, Ning and Xiao, Yang and Chen, Yimin and Hu, Yang and Lou, Wenjing and Hou, Y. Thomas},\n\tmonth = may,\n\tyear = {2022},\n\tkeywords = {defense, federated learning, model poisoning attack},\n\tpages = {946--958},\n}\n\n
\n
\n\n\n
\n Federated learning (FL) has been shown vulnerable to a new class of adversarial attacks, known as model poisoning attacks (MPA), where one or more malicious clients try to poison the global model by sending carefully crafted local model updates to the central parameter server. Existing defenses that have been fixated on analyzing model parameters show limited effectiveness in detecting such carefully crafted poisonous models. In this work, we propose FLARE, a robust model aggregation mechanism for FL, which is resilient against state-of-the-art MPAs. Instead of solely depending on model parameters, FLARE leverages the penultimate layer representations (PLRs) of the model for characterizing the adversarial influence on each local model update. PLRs demonstrate a better capability to differentiate malicious models from benign ones than model parameter-based solutions. We further propose a trust evaluation method that estimates a trust score for each model update based on pairwise PLR discrepancies among all model updates. Under the assumption that honest clients make up the majority, FLARE assigns a trust score to each model update in a way that those far from the benign cluster are assigned low scores. FLARE then aggregates the model updates weighted by their trust scores and finally updates the global model. Extensive experimental results demonstrate the effectiveness of FLARE in defending FL against various MPAs, including semantic backdoor attacks, trojan backdoor attacks, and untargeted attacks, and safeguarding the accuracy of FL.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Transferability of Adversarial Examples in Machine Learning-based Malware Detection.\n \n \n \n \n\n\n \n Hu, Y.; Wang, N.; Chen, Y.; Lou, W.; and Hou, Y. T.\n\n\n \n\n\n\n In 2022 IEEE Conference on Communications and Network Security (CNS), pages 28–36, October 2022. \n \n\n\n\n
\n\n\n\n \n \n \"TransferabilityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{hu_transferability_2022,\n\ttitle = {Transferability of {Adversarial} {Examples} in {Machine} {Learning}-based {Malware} {Detection}},\n\turl = {https://ieeexplore.ieee.org/abstract/document/9947226},\n\tdoi = {10.1109/CNS56114.2022.9947226},\n\tabstract = {Machine Learning (ML) has been increasingly applied to malware detection in recent years. Adversarial example (AE) attack, a well-known attack against ML working across different mediums, is effective in evading or misleading ML-based malware detection systems. Such an attack could be made more effective if the generated AEs are transferable so that the AEs can evade different types of malware detection models. To better understand AE transferability in the malware domain, in this paper, we study AE transferability enhancement techniques and how they impact AE generation and Android malware detection. Firstly, we adapt the current image-based AE transferability enhancement techniques (i.e., ensemble sample (ES) and ensemble model (EM)) to malware. In the adapted ES and EM methods, we maintain malware functionality and executability while adding perturbations. Further, we develop a new transfer-based AE generation method, BATE, using a novel feature evenness metric. The idea is to spread perturbations more evenly among perturbed features by incorporating an evenness score in the objective function. We compare our proposed methods with EM and ES on a real Android dataset. The extensive evaluations demonstrate the effectiveness of our method in increasing the upper bound of AE transferability. We also confirm the effectiveness of our evenness-score-based method by showing quantitative correlations between AE transferability and feature evenness score.},\n\turldate = {2024-02-08},\n\tbooktitle = {2022 {IEEE} {Conference} on {Communications} and {Network} {Security} ({CNS})},\n\tauthor = {Hu, Yang and Wang, Ning and Chen, Yimin and Lou, Wenjing and Hou, Y. Thomas},\n\tmonth = oct,\n\tyear = {2022},\n\tkeywords = {AE attack, Adaptation models, Correlation, Machine learning, Measurement, Network security, Perturbation methods, Upper bound, malware detection, transferability},\n\tpages = {28--36},\n}\n\n
\n
\n\n\n
\n Machine Learning (ML) has been increasingly applied to malware detection in recent years. Adversarial example (AE) attack, a well-known attack against ML working across different mediums, is effective in evading or misleading ML-based malware detection systems. Such an attack could be made more effective if the generated AEs are transferable so that the AEs can evade different types of malware detection models. To better understand AE transferability in the malware domain, in this paper, we study AE transferability enhancement techniques and how they impact AE generation and Android malware detection. Firstly, we adapt the current image-based AE transferability enhancement techniques (i.e., ensemble sample (ES) and ensemble model (EM)) to malware. In the adapted ES and EM methods, we maintain malware functionality and executability while adding perturbations. Further, we develop a new transfer-based AE generation method, BATE, using a novel feature evenness metric. The idea is to spread perturbations more evenly among perturbed features by incorporating an evenness score in the objective function. We compare our proposed methods with EM and ES on a real Android dataset. The extensive evaluations demonstrate the effectiveness of our method in increasing the upper bound of AE transferability. We also confirm the effectiveness of our evenness-score-based method by showing quantitative correlations between AE transferability and feature evenness score.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Clang __usercall: towards native support for user defined calling conventions.\n \n \n \n \n\n\n \n Widberg, J.; Narain, S.; and Chen, Y.\n\n\n \n\n\n\n In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, of ESEC/FSE 2022, pages 1746–1750, New York, NY, USA, November 2022. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"ClangPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{widberg_clang_2022,\n\taddress = {New York, NY, USA},\n\tseries = {{ESEC}/{FSE} 2022},\n\ttitle = {Clang \\_\\_usercall: towards native support for user defined calling conventions},\n\tisbn = {9781450394130},\n\tshorttitle = {Clang \\_\\_usercall},\n\turl = {https://doi.org/10.1145/3540250.3558921},\n\tdoi = {10.1145/3540250.3558921},\n\tabstract = {In reverse engineering, interfacing with C/C++ functions is of great interest because it provides much more flexibility for product development and security purposes. However, it has been a great challenge to interface with functions that use user defined calling conventions, due to the lack of sufficient and user-friendly tooling. In this work, we design and implement Clang \\_\\_usercall, which aims to provide programmers with an elegant and familiar syntax to specify user defined calling conventions on functions in C/C++ source code. Our key novelties lie in mimicking the most popular syntax and adapting Clang for interfacing purposes. Our preliminary user study shows that our solution outperforms the existing ones in multiple key aspects including user experience and required lines of code. Clang \\_\\_usercall is already added to the Compiler Explorer website as well.},\n\turldate = {2024-02-08},\n\tbooktitle = {Proceedings of the 30th {ACM} {Joint} {European} {Software} {Engineering} {Conference} and {Symposium} on the {Foundations} of {Software} {Engineering}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Widberg, Jared and Narain, Sashank and Chen, Yimin},\n\tmonth = nov,\n\tyear = {2022},\n\tkeywords = {Calling Convention, Clang, Compiler, LLVM, Reverse engineering},\n\tpages = {1746--1750},\n}\n\n
\n
\n\n\n
\n In reverse engineering, interfacing with C/C++ functions is of great interest because it provides much more flexibility for product development and security purposes. However, it has been a great challenge to interface with functions that use user defined calling conventions, due to the lack of sufficient and user-friendly tooling. In this work, we design and implement Clang __usercall, which aims to provide programmers with an elegant and familiar syntax to specify user defined calling conventions on functions in C/C++ source code. Our key novelties lie in mimicking the most popular syntax and adapting Clang for interfacing purposes. Our preliminary user study shows that our solution outperforms the existing ones in multiple key aspects including user experience and required lines of code. Clang __usercall is already added to the Compiler Explorer website as well.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Squeezing More Utility via Adaptive Clipping on Differentially Private Gradients in Federated Meta-Learning.\n \n \n \n \n\n\n \n Wang, N.; Xiao, Y.; Chen, Y.; Zhang, N.; Lou, W.; and Hou, Y. T.\n\n\n \n\n\n\n In Proceedings of the 38th Annual Computer Security Applications Conference, of ACSAC '22, pages 647–657, New York, NY, USA, December 2022. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"SqueezingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{wang_squeezing_2022,\n\taddress = {New York, NY, USA},\n\tseries = {{ACSAC} '22},\n\ttitle = {Squeezing {More} {Utility} via {Adaptive} {Clipping} on {Differentially} {Private} {Gradients} in {Federated} {Meta}-{Learning}},\n\tisbn = {9781450397599},\n\turl = {https://dl.acm.org/doi/10.1145/3564625.3564652},\n\tdoi = {10.1145/3564625.3564652},\n\tabstract = {Federated meta-learning has emerged as a promising AI framework for today’s mobile computing scenes involving distributed clients. It enables collaborative model training using the data located at distributed mobile clients and accommodates clients that need fast model customization with limited new data. However, federated meta-learning solutions are susceptible to inference-based privacy attacks since the global model encoded with clients’ training data is open to all clients and the central server. Meanwhile, differential privacy (DP) has been widely used as a countermeasure against privacy inference attacks in federated learning. The adoption of DP in federated meta-learning is complicated by the model accuracy-privacy trade-off and the model hierarchy attributed to the meta-learning component. In this paper, we introduce DP-FedMeta, a new differentially private federated meta-learning architecture that addresses such data privacy challenges. DP-FedMeta features an adaptive gradient clipping method and a one-pass meta-training process to improve the model utility-privacy trade-off. At the core of DP-FedMeta are two DP mechanisms, namely DP-AGR and DP-AGRLR, to provide two notions of privacy protection for the hierarchical models. Extensive experiments in an emulated federated meta-learning scenario on well-known datasets (Omniglot, CIFAR-FS, and Mini-ImageNet) demonstrate that DP-FedMeta accomplishes better privacy protection while maintaining comparable model accuracy compared to the state-of-the-art solution that directly applies DP-based meta-learning to the federated setting.},\n\turldate = {2024-02-08},\n\tbooktitle = {Proceedings of the 38th {Annual} {Computer} {Security} {Applications} {Conference}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Wang, Ning and Xiao, Yang and Chen, Yimin and Zhang, Ning and Lou, Wenjing and Hou, Y. Thomas},\n\tmonth = dec,\n\tyear = {2022},\n\tkeywords = {adaptive clipping, differential privacy, federated meta-learning, privacy utility trade-off},\n\tpages = {647--657},\n}\n\n
\n
\n\n\n
\n Federated meta-learning has emerged as a promising AI framework for today’s mobile computing scenes involving distributed clients. It enables collaborative model training using the data located at distributed mobile clients and accommodates clients that need fast model customization with limited new data. However, federated meta-learning solutions are susceptible to inference-based privacy attacks since the global model encoded with clients’ training data is open to all clients and the central server. Meanwhile, differential privacy (DP) has been widely used as a countermeasure against privacy inference attacks in federated learning. The adoption of DP in federated meta-learning is complicated by the model accuracy-privacy trade-off and the model hierarchy attributed to the meta-learning component. In this paper, we introduce DP-FedMeta, a new differentially private federated meta-learning architecture that addresses such data privacy challenges. DP-FedMeta features an adaptive gradient clipping method and a one-pass meta-training process to improve the model utility-privacy trade-off. At the core of DP-FedMeta are two DP mechanisms, namely DP-AGR and DP-AGRLR, to provide two notions of privacy protection for the hierarchical models. Extensive experiments in an emulated federated meta-learning scenario on well-known datasets (Omniglot, CIFAR-FS, and Mini-ImageNet) demonstrate that DP-FedMeta accomplishes better privacy protection while maintaining comparable model accuracy compared to the state-of-the-art solution that directly applies DP-based meta-learning to the federated setting.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2021\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n MANDA: On Adversarial Example Detection for Network Intrusion Detection System.\n \n \n \n \n\n\n \n Wang, N.; Chen, Y.; Hu, Y.; Lou, W.; and Hou, Y. T.\n\n\n \n\n\n\n In IEEE INFOCOM 2021 - IEEE Conference on Computer Communications, pages 1–10, May 2021. \n ISSN: 2641-9874\n\n\n\n
\n\n\n\n \n \n \"MANDA:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{wang_manda_2021,\n\ttitle = {{MANDA}: {On} {Adversarial} {Example} {Detection} for {Network} {Intrusion} {Detection} {System}},\n\tshorttitle = {{MANDA}},\n\turl = {https://ieeexplore.ieee.org/document/9488874},\n\tdoi = {10.1109/INFOCOM42981.2021.9488874},\n\tabstract = {With the rapid advancement in machine learning (ML), ML-based Intrusion Detection Systems (IDSs) are widely deployed to protect networks from various attacks. Yet one of the biggest challenges is that ML-based IDSs suffer from adversarial example (AE) attacks. By applying small perturbations (e.g. slightly increasing packet inter-arrival time) to the intrusion traffic, an AE attack can flip the prediction of a well-trained IDS. We address this challenge by proposing MANDA, a MANifold and Decision boundary-based AE detection system. Through analyzing AE attacks, we notice that 1) an AE tends to be close to its original manifold (i.e., the cluster of samples in its original class) regardless which class it is misclassified into; and 2) AEs tend to be close to the decision boundary so as to minimize the perturbation scale. Based on the two observations, we design MANDA for accurate AE detection by exploiting inconsistency between manifold evaluation and IDS model inference and evaluating model uncertainty on small perturbations. We evaluate MANDA on NSL-KDD under three state-of-the-art AE attacks. Our experimental results show that MANDA achieves as high as 98.41\\% true-positive rate with 5\\% false-positive rate and can be applied to other problem spaces such as image recognition.},\n\turldate = {2024-02-08},\n\tbooktitle = {{IEEE} {INFOCOM} 2021 - {IEEE} {Conference} on {Computer} {Communications}},\n\tauthor = {Wang, Ning and Chen, Yimin and Hu, Yang and Lou, Wenjing and Hou, Y. Thomas},\n\tmonth = may,\n\tyear = {2021},\n\tnote = {ISSN: 2641-9874},\n\tkeywords = {Conferences, Detectors, Image recognition, Manifolds, Network intrusion detection, Perturbation methods, Uncertainty},\n\tpages = {1--10},\n}\n\n
\n
\n\n\n
\n With the rapid advancement in machine learning (ML), ML-based Intrusion Detection Systems (IDSs) are widely deployed to protect networks from various attacks. Yet one of the biggest challenges is that ML-based IDSs suffer from adversarial example (AE) attacks. By applying small perturbations (e.g. slightly increasing packet inter-arrival time) to the intrusion traffic, an AE attack can flip the prediction of a well-trained IDS. We address this challenge by proposing MANDA, a MANifold and Decision boundary-based AE detection system. Through analyzing AE attacks, we notice that 1) an AE tends to be close to its original manifold (i.e., the cluster of samples in its original class) regardless which class it is misclassified into; and 2) AEs tend to be close to the decision boundary so as to minimize the perturbation scale. Based on the two observations, we design MANDA for accurate AE detection by exploiting inconsistency between manifold evaluation and IDS model inference and evaluating model uncertainty on small perturbations. We evaluate MANDA on NSL-KDD under three state-of-the-art AE attacks. Our experimental results show that MANDA achieves as high as 98.41% true-positive rate with 5% false-positive rate and can be applied to other problem spaces such as image recognition.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2020\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n IndoorWaze: A Crowdsourcing-Based Context-Aware Indoor Navigation System.\n \n \n \n \n\n\n \n Li, T.; Han, D.; Chen, Y.; Zhang, R.; Zhang, Y.; and Hedgpeth, T.\n\n\n \n\n\n\n IEEE Transactions on Wireless Communications, 19(8): 5461–5472. August 2020.\n \n\n\n\n
\n\n\n\n \n \n \"IndoorWaze:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{li_indoorwaze_2020,\n\ttitle = {{IndoorWaze}: {A} {Crowdsourcing}-{Based} {Context}-{Aware} {Indoor} {Navigation} {System}},\n\tvolume = {19},\n\tissn = {1558-2248},\n\tshorttitle = {{IndoorWaze}},\n\turl = {https://ieeexplore.ieee.org/abstract/document/9094357},\n\tdoi = {10.1109/TWC.2020.2993545},\n\tabstract = {Indoor navigation systems are very useful in large complex indoor environments such as shopping malls. Current systems focus on improving indoor localization accuracy and must be combined with an accurate labeled floor plan to provide usable indoor navigation services. Such labeled floor plans are often unavailable or involve a prohibitive cost to manually obtain. In this paper, we present IndoorWaze, a novel crowdsourcing-based context-aware indoor navigation system that can automatically generate an accurate context-aware floor plan with labeled indoor POIs for the first time in literature. IndoorWaze combines the Wi-Fi fingerprints of indoor walkers with the Wi-Fi fingerprints and POI labels provided by POI employees to produce a high-fidelity labeled floor plan. As a lightweight crowdsourcing-based system, IndoorWaze involves very little effort from indoor walkers and POI employees. We prototype IndoorWaze on Android smartphones and evaluate it in a large shopping mall. Our results show that IndoorWaze can generate a high-fidelity labeled floor plan, in which all the stores are correctly labeled and arranged, all the pathways and crossings are correctly shown, and the median estimation error for the store dimension is below 12\\%.},\n\tnumber = {8},\n\turldate = {2024-02-08},\n\tjournal = {IEEE Transactions on Wireless Communications},\n\tauthor = {Li, Tao and Han, Dianqi and Chen, Yimin and Zhang, Rui and Zhang, Yanchao and Hedgpeth, Terri},\n\tmonth = aug,\n\tyear = {2020},\n\tkeywords = {Crowdsourcing, Indoor navigation, Legged locomotion, Smart phones, Wireless communication, Wireless fidelity, context-aware, crowdsourcing, labeling},\n\tpages = {5461--5472},\n}\n\n
\n
\n\n\n
\n Indoor navigation systems are very useful in large complex indoor environments such as shopping malls. Current systems focus on improving indoor localization accuracy and must be combined with an accurate labeled floor plan to provide usable indoor navigation services. Such labeled floor plans are often unavailable or involve a prohibitive cost to manually obtain. In this paper, we present IndoorWaze, a novel crowdsourcing-based context-aware indoor navigation system that can automatically generate an accurate context-aware floor plan with labeled indoor POIs for the first time in literature. IndoorWaze combines the Wi-Fi fingerprints of indoor walkers with the Wi-Fi fingerprints and POI labels provided by POI employees to produce a high-fidelity labeled floor plan. As a lightweight crowdsourcing-based system, IndoorWaze involves very little effort from indoor walkers and POI employees. We prototype IndoorWaze on Android smartphones and evaluate it in a large shopping mall. Our results show that IndoorWaze can generate a high-fidelity labeled floor plan, in which all the stores are correctly labeled and arranged, all the pathways and crossings are correctly shown, and the median estimation error for the store dimension is below 12%.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2018\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Secure Crowdsourced Indoor Positioning Systems.\n \n \n \n \n\n\n \n Li, T.; Chen, Y.; Zhang, R.; Zhang, Y.; and Hedgpeth, T.\n\n\n \n\n\n\n In IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, pages 1034–1042, April 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SecurePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{li_secure_2018,\n\ttitle = {Secure {Crowdsourced} {Indoor} {Positioning} {Systems}},\n\turl = {https://ieeexplore.ieee.org/abstract/document/8486398},\n\tdoi = {10.1109/INFOCOM.2018.8486398},\n\tabstract = {Indoor positioning systems (IPSes) can enable many location-based services in large indoor environments where GPS is not available or reliable. Mobile crowdsourcing is widely advocated as an effective way to construct IPS maps. This paper presents the first systematic study of security issues in crowd-sourced WiFi-based IPSes to promote security considerations in designing and deploying crowdsourced IPSes. We identify three attacks on crowdsourced WiFi-based IPSes and propose the corresponding countermeasures. The efficacy of the attacks and also our countermeasures are experimentally validated on a prototype system. The attacks and countermeasures can be easily extended to other crowdsourced IPSes.},\n\turldate = {2024-02-08},\n\tbooktitle = {{IEEE} {INFOCOM} 2018 - {IEEE} {Conference} on {Computer} {Communications}},\n\tauthor = {Li, Tao and Chen, Yimin and Zhang, Rui and Zhang, Yanchao and Hedgpeth, Terri},\n\tmonth = apr,\n\tyear = {2018},\n\tkeywords = {Crowdsourcing, Databases, IP networks, Indoor environments, Prototypes, Security, Wireless fidelity},\n\tpages = {1034--1042},\n}\n\n
\n
\n\n\n
\n Indoor positioning systems (IPSes) can enable many location-based services in large indoor environments where GPS is not available or reliable. Mobile crowdsourcing is widely advocated as an effective way to construct IPS maps. This paper presents the first systematic study of security issues in crowd-sourced WiFi-based IPSes to promote security considerations in designing and deploying crowdsourced IPSes. We identify three attacks on crowdsourced WiFi-based IPSes and propose the corresponding countermeasures. The efficacy of the attacks and also our countermeasures are experimentally validated on a prototype system. The attacks and countermeasures can be easily extended to other crowdsourced IPSes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n EyeTell: Video-Assisted Touchscreen Keystroke Inference from Eye Movements.\n \n \n \n \n\n\n \n Chen, Y.; Li, T.; Zhang, R.; Zhang, Y.; and Hedgpeth, T.\n\n\n \n\n\n\n In 2018 IEEE Symposium on Security and Privacy (SP), pages 144–160, May 2018. \n ISSN: 2375-1207\n\n\n\n
\n\n\n\n \n \n \"EyeTell:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{chen_eyetell_2018,\n\ttitle = {{EyeTell}: {Video}-{Assisted} {Touchscreen} {Keystroke} {Inference} from {Eye} {Movements}},\n\tshorttitle = {{EyeTell}},\n\turl = {https://ieeexplore.ieee.org/abstract/document/8418601},\n\tdoi = {10.1109/SP.2018.00010},\n\tabstract = {Keystroke inference attacks pose an increasing threat to ubiquitous mobile devices. This paper presents EyeTell, a novel video-assisted attack that can infer a victim's keystrokes on his touchscreen device from a video capturing his eye movements. EyeTell explores the observation that human eyes naturally focus on and follow the keys they type, so a typing sequence on a soft keyboard results in a unique gaze trace of continuous eye movements. In contrast to prior work, EyeTell requires neither the attacker to visually observe the victim's inputting process nor the victim device to be placed on a static holder. Comprehensive experiments on iOS and Android devices confirm the high efficacy of EyeTell for inferring PINs, lock patterns, and English words under various environmental conditions.},\n\turldate = {2024-02-08},\n\tbooktitle = {2018 {IEEE} {Symposium} on {Security} and {Privacy} ({SP})},\n\tauthor = {Chen, Yimin and Li, Tao and Zhang, Rui and Zhang, Yanchao and Hedgpeth, Terri},\n\tmonth = may,\n\tyear = {2018},\n\tnote = {ISSN: 2375-1207},\n\tkeywords = {Authentication, Gaze tracking, Keyboards, Mobile handsets, Password, Pins, keystroke inference, mobile devices, security, video analysis},\n\tpages = {144--160},\n}\n\n
\n
\n\n\n
\n Keystroke inference attacks pose an increasing threat to ubiquitous mobile devices. This paper presents EyeTell, a novel video-assisted attack that can infer a victim's keystrokes on his touchscreen device from a video capturing his eye movements. EyeTell explores the observation that human eyes naturally focus on and follow the keys they type, so a typing sequence on a soft keyboard results in a unique gaze trace of continuous eye movements. In contrast to prior work, EyeTell requires neither the attacker to visually observe the victim's inputting process nor the victim device to be placed on a static holder. Comprehensive experiments on iOS and Android devices confirm the high efficacy of EyeTell for inferring PINs, lock patterns, and English words under various environmental conditions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Proximity-Proof: Secure and Usable Mobile Two-Factor Authentication.\n \n \n \n \n\n\n \n Han, D.; Chen, Y.; Li, T.; Zhang, R.; Zhang, Y.; and Hedgpeth, T.\n\n\n \n\n\n\n In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, of MobiCom '18, pages 401–415, New York, NY, USA, October 2018. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"Proximity-Proof:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{han_proximity-proof_2018,\n\taddress = {New York, NY, USA},\n\tseries = {{MobiCom} '18},\n\ttitle = {Proximity-{Proof}: {Secure} and {Usable} {Mobile} {Two}-{Factor} {Authentication}},\n\tisbn = {9781450359030},\n\tshorttitle = {Proximity-{Proof}},\n\turl = {https://dl.acm.org/doi/10.1145/3241539.3241574},\n\tdoi = {10.1145/3241539.3241574},\n\tabstract = {Mobile two-factor authentication (2FA) has become commonplace along with the popularity of mobile devices. Current mobile 2FA solutions all require some form of user effort which may seriously affect the experience of mobile users, especially senior citizens or those with disability such as visually impaired users. In this paper, we propose Proximity-Proof, a secure and usable mobile 2FA system without involving user interactions. Proximity-Proof automatically transmits a user's 2FA response via inaudible OFDM-modulated acoustic signals to the login browser. We propose a novel technique to extract individual speaker and microphone fingerprints of a mobile device to defend against the powerful man-in-the-middle (MiM) attack. In addition, Proximity-Proof explores two-way acoustic ranging to thwart the co-located attack. To the best of our knowledge, Proximity-Proof is the first mobile 2FA scheme resilient to the MiM and co-located attacks. We empirically analyze that Proximity-Proof is at least as secure as existing mobile 2FA solutions while being highly usable. We also prototype Proximity-Proof and confirm its high security, usability, and efficiency through comprehensive user experiments.},\n\turldate = {2024-02-08},\n\tbooktitle = {Proceedings of the 24th {Annual} {International} {Conference} on {Mobile} {Computing} and {Networking}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Han, Dianqi and Chen, Yimin and Li, Tao and Zhang, Rui and Zhang, Yanchao and Hedgpeth, Terri},\n\tmonth = oct,\n\tyear = {2018},\n\tkeywords = {mobile security, speaker and microphone fingerprinting, two-factor authentication, usability},\n\tpages = {401--415},\n}\n\n
\n
\n\n\n
\n Mobile two-factor authentication (2FA) has become commonplace along with the popularity of mobile devices. Current mobile 2FA solutions all require some form of user effort which may seriously affect the experience of mobile users, especially senior citizens or those with disability such as visually impaired users. In this paper, we propose Proximity-Proof, a secure and usable mobile 2FA system without involving user interactions. Proximity-Proof automatically transmits a user's 2FA response via inaudible OFDM-modulated acoustic signals to the login browser. We propose a novel technique to extract individual speaker and microphone fingerprints of a mobile device to defend against the powerful man-in-the-middle (MiM) attack. In addition, Proximity-Proof explores two-way acoustic ranging to thwart the co-located attack. To the best of our knowledge, Proximity-Proof is the first mobile 2FA scheme resilient to the MiM and co-located attacks. We empirically analyze that Proximity-Proof is at least as secure as existing mobile 2FA solutions while being highly usable. We also prototype Proximity-Proof and confirm its high security, usability, and efficiency through comprehensive user experiments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Beware of What You Share: Inferring User Locations in Venmo.\n \n \n \n \n\n\n \n Yao, X.; Chen, Y.; Zhang, R.; Zhang, Y.; and Lin, Y.\n\n\n \n\n\n\n IEEE Internet of Things Journal, 5(6): 5109–5118. December 2018.\n \n\n\n\n
\n\n\n\n \n \n \"BewarePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{yao_beware_2018,\n\ttitle = {Beware of {What} {You} {Share}: {Inferring} {User} {Locations} in {Venmo}},\n\tvolume = {5},\n\tissn = {2327-4662},\n\tshorttitle = {Beware of {What} {You} {Share}},\n\turl = {https://ieeexplore.ieee.org/abstract/document/8372910},\n\tdoi = {10.1109/JIOT.2018.2844218},\n\tabstract = {Mobile payment apps are seeing explosive usage worldwide. This paper focuses on Venmo, a very popular mobile person-to-person payment service owned by PayPal. Venmo allows money transfers between users with a mandatory transaction note. More than half of transaction records in Venmo are public information. In this paper, we propose a multilayer location inference (MLLI) technique to infer user locations from public transaction records in Venmo. MLLI explores two observations. First, many Venmo transaction notes contain implicit location cues. Second, the types and temporal patterns of user transactions have strong ties to their location closeness. With a large dataset of 2.12M users and 20.23M Venmo transaction records, we show that MLLI can identify the top-1, top-3, and top-5 possible locations for a Venmo user with accuracy up to 50\\%, 80\\%, and 90\\%, respectively. Our results highlight the danger of sharing transaction notes on Venmo or similar mobile payment apps.},\n\tnumber = {6},\n\turldate = {2024-02-08},\n\tjournal = {IEEE Internet of Things Journal},\n\tauthor = {Yao, Xin and Chen, Yimin and Zhang, Rui and Zhang, Yanchao and Lin, Yaping},\n\tmonth = dec,\n\tyear = {2018},\n\tkeywords = {Electronic commerce, Facebook, Internet of Things, Location inference, Location interference, Online banking, Privacy, Text mining, Urban areas, mobile payment, privacy, security},\n\tpages = {5109--5118},\n}\n\n
\n
\n\n\n
\n Mobile payment apps are seeing explosive usage worldwide. This paper focuses on Venmo, a very popular mobile person-to-person payment service owned by PayPal. Venmo allows money transfers between users with a mandatory transaction note. More than half of transaction records in Venmo are public information. In this paper, we propose a multilayer location inference (MLLI) technique to infer user locations from public transaction records in Venmo. MLLI explores two observations. First, many Venmo transaction notes contain implicit location cues. Second, the types and temporal patterns of user transactions have strong ties to their location closeness. With a large dataset of 2.12M users and 20.23M Venmo transaction records, we show that MLLI can identify the top-1, top-3, and top-5 possible locations for a Venmo user with accuracy up to 50%, 80%, and 90%, respectively. Our results highlight the danger of sharing transaction notes on Venmo or similar mobile payment apps.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2017\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Your face your heart: Secure mobile face authentication with photoplethysmograms.\n \n \n \n \n\n\n \n Chen, Y.; Sun, J.; Jin, X.; Li, T.; Zhang, R.; and Zhang, Y.\n\n\n \n\n\n\n In IEEE INFOCOM 2017 - IEEE Conference on Computer Communications, pages 1–9, May 2017. \n \n\n\n\n
\n\n\n\n \n \n \"YourPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{chen_your_2017,\n\ttitle = {Your face your heart: {Secure} mobile face authentication with photoplethysmograms},\n\tshorttitle = {Your face your heart},\n\turl = {https://ieeexplore.ieee.org/abstract/document/8057220},\n\tdoi = {10.1109/INFOCOM.2017.8057220},\n\tabstract = {Face authentication emerges as a powerful method for preventing unauthorized access to mobile devices. It is, however, vulnerable to photo-based forgery attacks (PFA) and video-based forgery attacks (VFA), in which the adversary exploits a photo or video containing the user's frontal face. Effective defenses against PFA and VFA often rely on liveness detection, which seeks to find a live indicator that the submitted face photo or video of the legitimate user is indeed captured in real time. In this paper, we propose FaceHeart, a novel and practical face authentication system for mobile devices. FaceHeart simultaneously takes a face video with the front camera and a fingertip video with the rear camera on COTS mobile devices. It then achieves liveness detection by comparing the two photoplethysmograms independently extracted from the face and fingertip videos, which should be highly consistent if the two videos are for the same live person and taken at the same time. As photoplethysmograms are closely tied to human cardiac activity and almost impossible to forge or control, FaceHeart is strongly resilient to PFA and VFA. Extensive user experiments on Samsung Galaxy S5 have confirmed the high efficacy and efficiency of FaceHeart.},\n\turldate = {2024-02-08},\n\tbooktitle = {{IEEE} {INFOCOM} 2017 - {IEEE} {Conference} on {Computer} {Communications}},\n\tauthor = {Chen, Yimin and Sun, Jingchao and Jin, Xiaocong and Li, Tao and Zhang, Rui and Zhang, Yanchao},\n\tmonth = may,\n\tyear = {2017},\n\tkeywords = {Authentication, Cameras, Face, Feature extraction, Mobile communication, Mobile handsets, Streaming media},\n\tpages = {1--9},\n}\n\n
\n
\n\n\n
\n Face authentication emerges as a powerful method for preventing unauthorized access to mobile devices. It is, however, vulnerable to photo-based forgery attacks (PFA) and video-based forgery attacks (VFA), in which the adversary exploits a photo or video containing the user's frontal face. Effective defenses against PFA and VFA often rely on liveness detection, which seeks to find a live indicator that the submitted face photo or video of the legitimate user is indeed captured in real time. In this paper, we propose FaceHeart, a novel and practical face authentication system for mobile devices. FaceHeart simultaneously takes a face video with the front camera and a fingertip video with the rear camera on COTS mobile devices. It then achieves liveness detection by comparing the two photoplethysmograms independently extracted from the face and fingertip videos, which should be highly consistent if the two videos are for the same live person and taken at the same time. As photoplethysmograms are closely tied to human cardiac activity and almost impossible to forge or control, FaceHeart is strongly resilient to PFA and VFA. Extensive user experiments on Samsung Galaxy S5 have confirmed the high efficacy and efficiency of FaceHeart.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n POWERFUL: Mobile app fingerprinting via power analysis.\n \n \n \n \n\n\n \n Chen, Y.; Jin, X.; Sun, J.; Zhang, R.; and Zhang, Y.\n\n\n \n\n\n\n In IEEE INFOCOM 2017 - IEEE Conference on Computer Communications, pages 1–9, May 2017. \n \n\n\n\n
\n\n\n\n \n \n \"POWERFUL:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{chen_powerful_2017,\n\ttitle = {{POWERFUL}: {Mobile} app fingerprinting via power analysis},\n\tshorttitle = {{POWERFUL}},\n\turl = {https://ieeexplore.ieee.org/abstract/document/8057232},\n\tdoi = {10.1109/INFOCOM.2017.8057232},\n\tabstract = {Which apps a mobile user has and how they are used can disclose significant private information about the user. In this paper, we present the design and evaluation of POWERFUL, a new attack which can fingerprint sensitive mobile apps (or infer sensitive app usage) by analyzing the power consumption profiles on Android devices. POWERFUL works on the observation that distinct apps and their different usage patterns all lead to distinguishable power consumption profiles. Since the power profiles on Android devices require no permission to access, POWERFUL is very difficult to detect and can pose a serious threat against user privacy. Extensive experiments involving popular and sensitive apps in Google Play Store show that POWERFUL can identify the app used at any particular time with accuracy up to 92.9\\%, demonstrating the feasibility of POWERFUL.},\n\turldate = {2024-02-08},\n\tbooktitle = {{IEEE} {INFOCOM} 2017 - {IEEE} {Conference} on {Computer} {Communications}},\n\tauthor = {Chen, Yimin and Jin, Xiaocong and Sun, Jingchao and Zhang, Rui and Zhang, Yanchao},\n\tmonth = may,\n\tyear = {2017},\n\tkeywords = {Androids, Feature extraction, Google, Humanoid robots, Mobile communication, Mobile handsets, Power demand},\n\tpages = {1--9},\n}\n\n
\n
\n\n\n
\n Which apps a mobile user has and how they are used can disclose significant private information about the user. In this paper, we present the design and evaluation of POWERFUL, a new attack which can fingerprint sensitive mobile apps (or infer sensitive app usage) by analyzing the power consumption profiles on Android devices. POWERFUL works on the observation that distinct apps and their different usage patterns all lead to distinguishable power consumption profiles. Since the power profiles on Android devices require no permission to access, POWERFUL is very difficult to detect and can pose a serious threat against user privacy. Extensive experiments involving popular and sensitive apps in Google Play Store show that POWERFUL can identify the app used at any particular time with accuracy up to 92.9%, demonstrating the feasibility of POWERFUL.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2016\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n VISIBLE: Video-Assisted Keystroke Inference from Tablet Backside Motion.\n \n \n \n \n\n\n \n Sun, J.; Jin, X.; Chen, Y.; Zhang, J.; Zhang, R.; and Zhang, Y.\n\n\n \n\n\n\n In Proceedings 2016 Network and Distributed System Security Symposium, San Diego, CA, 2016. Internet Society\n \n\n\n\n
\n\n\n\n \n \n \"VISIBLE:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{sun_visible_2016,\n\taddress = {San Diego, CA},\n\ttitle = {{VISIBLE}: {Video}-{Assisted} {Keystroke} {Inference} from {Tablet} {Backside} {Motion}},\n\tisbn = {9781891562419},\n\tshorttitle = {{VISIBLE}},\n\turl = {https://www.ndss-symposium.org/wp-content/uploads/2017/09/visible-video-assisted-keystroke-inference-tablet-backside-motion.pdf},\n\tdoi = {10.14722/ndss.2016.23060},\n\tlanguage = {en},\n\turldate = {2024-02-08},\n\tbooktitle = {Proceedings 2016 {Network} and {Distributed} {System} {Security} {Symposium}},\n\tpublisher = {Internet Society},\n\tauthor = {Sun, Jingchao and Jin, Xiaocong and Chen, Yimin and Zhang, Jinxue and Zhang, Rui and Zhang, Yanchao},\n\tyear = {2016},\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n iLock: Immediate and Automatic Locking of Mobile Devices against Data Theft.\n \n \n \n \n\n\n \n Li, T.; Chen, Y.; Sun, J.; Jin, X.; and Zhang, Y.\n\n\n \n\n\n\n In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, of CCS '16, pages 933–944, New York, NY, USA, October 2016. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"iLock:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{li_ilock_2016,\n\taddress = {New York, NY, USA},\n\tseries = {{CCS} '16},\n\ttitle = {{iLock}: {Immediate} and {Automatic} {Locking} of {Mobile} {Devices} against {Data} {Theft}},\n\tisbn = {9781450341394},\n\tshorttitle = {{iLock}},\n\turl = {https://dl.acm.org/doi/10.1145/2976749.2978294},\n\tdoi = {10.1145/2976749.2978294},\n\tabstract = {Mobile device losses and thefts are skyrocketing. The sensitive data hosted on a lost/stolen device are fully exposed to the adversary. Although password-based authentication mechanisms are available on mobile devices, many users reportedly do not use them, and a device may be lost/stolen while in the unlocked mode. This paper presents the design and evaluation of iLock, a secure and usable defense against data theft on a lost/stolen mobile device. iLock automatically, quickly, and accurately recognizes the user's physical separation from his/her device by detecting and analyzing the changes in wireless signals. Once significant physical separation is detected, the device is immediately locked to prevent data theft. iLock relies on acoustic signals and requires at least one speaker and one microphone that are available on most COTS (commodity-off-the-shelf) mobile devices. Extensive experiments on Samsung Galaxy S5 show that iLock can lock the device with negligible false positives and negatives.},\n\turldate = {2024-02-08},\n\tbooktitle = {Proceedings of the 2016 {ACM} {SIGSAC} {Conference} on {Computer} and {Communications} {Security}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Li, Tao and Chen, Yimin and Sun, Jingchao and Jin, Xiaocong and Zhang, Yanchao},\n\tmonth = oct,\n\tyear = {2016},\n\tkeywords = {FMCW, audio ranging, device locking, smartphone security},\n\tpages = {933--944},\n}\n\n
\n
\n\n\n
\n Mobile device losses and thefts are skyrocketing. The sensitive data hosted on a lost/stolen device are fully exposed to the adversary. Although password-based authentication mechanisms are available on mobile devices, many users reportedly do not use them, and a device may be lost/stolen while in the unlocked mode. This paper presents the design and evaluation of iLock, a secure and usable defense against data theft on a lost/stolen mobile device. iLock automatically, quickly, and accurately recognizes the user's physical separation from his/her device by detecting and analyzing the changes in wireless signals. Once significant physical separation is detected, the device is immediately locked to prevent data theft. iLock relies on acoustic signals and requires at least one speaker and one microphone that are available on most COTS (commodity-off-the-shelf) mobile devices. Extensive experiments on Samsung Galaxy S5 show that iLock can lock the device with negligible false positives and negatives.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n DPSense: Differentially Private Crowdsourced Spectrum Sensing.\n \n \n \n \n\n\n \n Jin, X.; Zhang, R.; Chen, Y.; Li, T.; and Zhang, Y.\n\n\n \n\n\n\n In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, of CCS '16, pages 296–307, New York, NY, USA, October 2016. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"DPSense:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{jin_dpsense_2016,\n\taddress = {New York, NY, USA},\n\tseries = {{CCS} '16},\n\ttitle = {{DPSense}: {Differentially} {Private} {Crowdsourced} {Spectrum} {Sensing}},\n\tisbn = {9781450341394},\n\tshorttitle = {{DPSense}},\n\turl = {https://dl.acm.org/doi/10.1145/2976749.2978426},\n\tdoi = {10.1145/2976749.2978426},\n\tabstract = {Dynamic spectrum access (DSA) has great potential to address worldwide spectrum shortage by enhancing spectrum efficiency. It allows unlicensed secondary users to access the underutilized licensed spectrum when the licensed primary users are not transmitting. As a key enabler for DSA systems, crowdsourced spectrum sensing (CSS) allows a spectrum sensing provider (SSP) to outsource the sensing of spectrum occupancy to distributed mobile users. In this paper, we propose DPSense, a novel framework that allows the SSP to select mobile users for executing spatiotemporal spectrum-sensing tasks without violating the location privacy of mobile users. Detailed evaluations on real location traces confirm that DPSense can provide differential location privacy to mobile users while ensuring that the SSP can accomplish spectrum-sensing tasks with overwhelming probability and also the minimal cost.},\n\turldate = {2024-02-08},\n\tbooktitle = {Proceedings of the 2016 {ACM} {SIGSAC} {Conference} on {Computer} and {Communications} {Security}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Jin, Xiaocong and Zhang, Rui and Chen, Yimin and Li, Tao and Zhang, Yanchao},\n\tmonth = oct,\n\tyear = {2016},\n\tkeywords = {crowdsourced spectrum sensing, differential privacy, dynamic spectrum access, location privacy},\n\tpages = {296--307},\n}\n\n
\n
\n\n\n
\n Dynamic spectrum access (DSA) has great potential to address worldwide spectrum shortage by enhancing spectrum efficiency. It allows unlicensed secondary users to access the underutilized licensed spectrum when the licensed primary users are not transmitting. As a key enabler for DSA systems, crowdsourced spectrum sensing (CSS) allows a spectrum sensing provider (SSP) to outsource the sensing of spectrum occupancy to distributed mobile users. In this paper, we propose DPSense, a novel framework that allows the SSP to select mobile users for executing spatiotemporal spectrum-sensing tasks without violating the location privacy of mobile users. Detailed evaluations on real location traces confirm that DPSense can provide differential location privacy to mobile users while ensuring that the SSP can accomplish spectrum-sensing tasks with overwhelming probability and also the minimal cost.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2015\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Your song your way: Rhythm-based two-factor authentication for multi-touch mobile devices.\n \n \n \n \n\n\n \n Chen, Y.; Sun, J.; Zhang, R.; and Zhang, Y.\n\n\n \n\n\n\n In 2015 IEEE Conference on Computer Communications (INFOCOM), pages 2686–2694, April 2015. \n ISSN: 0743-166X\n\n\n\n
\n\n\n\n \n \n \"YourPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{chen_your_2015,\n\ttitle = {Your song your way: {Rhythm}-based two-factor authentication for multi-touch mobile devices},\n\tshorttitle = {Your song your way},\n\turl = {https://ieeexplore.ieee.org/abstract/document/7218660},\n\tdoi = {10.1109/INFOCOM.2015.7218660},\n\tabstract = {Multi-touch mobile devices have penetrated into everyday life to support personal and business communications. Secure and usable authentication techniques are indispensable for preventing illegitimate access to mobile devices. This paper presents RhyAuth, a novel two-factor rhythm-based authentication scheme for multi-touch mobile devices. RhyAuth requires a user to perform a sequence of rhythmic taps/slides on a device screen to unlock the device. The user is authenticated and admitted only when the features extracted from her rhythmic taps/slides match those stored on the device. RhyAuth is a two-factor authentication scheme that depends on a user-chosen rhythm and also the behavioral metrics for inputting the rhythm. Through a 32-user experiment on Android devices, we show that RhyAuth is highly secure against various attacks and also very usable for both sighted and visually impaired people.},\n\turldate = {2024-02-08},\n\tbooktitle = {2015 {IEEE} {Conference} on {Computer} {Communications} ({INFOCOM})},\n\tauthor = {Chen, Yimin and Sun, Jingchao and Zhang, Rui and Zhang, Yanchao},\n\tmonth = apr,\n\tyear = {2015},\n\tnote = {ISSN: 0743-166X},\n\tkeywords = {Authentication, Computers, Data processing, Feature extraction, Measurement, Mobile handsets, Rhythm},\n\tpages = {2686--2694},\n}\n\n
\n
\n\n\n
\n Multi-touch mobile devices have penetrated into everyday life to support personal and business communications. Secure and usable authentication techniques are indispensable for preventing illegitimate access to mobile devices. This paper presents RhyAuth, a novel two-factor rhythm-based authentication scheme for multi-touch mobile devices. RhyAuth requires a user to perform a sequence of rhythmic taps/slides on a device screen to unlock the device. The user is authenticated and admitted only when the features extracted from her rhythmic taps/slides match those stored on the device. RhyAuth is a two-factor authentication scheme that depends on a user-chosen rhythm and also the behavioral metrics for inputting the rhythm. Through a 32-user experiment on Android devices, we show that RhyAuth is highly secure against various attacks and also very usable for both sighted and visually impaired people.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);