generated by bibbase.org
2024 (2)
HeROcache: Storage-Aware Scheduling in Heterogeneous Serverless Edge – The Case of IDS.
Lannurien, V.; Slimani, C.; d’Orazio, L.; Barais, O.; Paquelet, S.; and Boukhobza, J.
In 2024 IEEE 24th International Symposium on Cluster, Cloud and Internet Computing (CCGrid), pages 587–597, Philadelphia, PA, USA, May 2024. IEEE.
@inproceedings{lannurien_herocache:_2024,
  address = {Philadelphia, PA, USA},
  title = {{HeROcache}: {Storage}-{Aware} {Scheduling} in {Heterogeneous} {Serverless} {Edge} – {The} {Case} of {IDS}},
  copyright = {All rights reserved},
  isbn = {979-8-3503-9566-2},
  shorttitle = {{HeROcache}},
  url = {https://ieeexplore.ieee.org/document/10701415/},
  doi = {10.1109/CCGrid59990.2024.00071},
  abstract = {Intrusion Detection Systems (IDS) are time-sensitive applications that aim to classify potentially malicious network traffic. IDSs are part of a class of applications that rely on short-lived functions that can be run reactively and, as such, could be deployed on edge resources, to offload processing from energy-constrained battery-backed devices. The serverless service model could fit the needs of such applications, given that the platform allows adequate levels of Quality of Service (QoS) for a variety of users, since the criticality of IDS applications depends on several parameters. Deploying serverless functions on unreserved edge resources requires to pay particular attention to (1) initialization delays that could be significant on low resources platforms, (2) inter-function communication between edge nodes, and (3) heterogeneous devices. In this paper, we propose both a storage-aware allocation and scheduling policy that seek to minimize task placement costs for service providers on edge devices while optimizing QoS for IDS users. To do so, we propose a caching and consolidation strategy that minimizes cold starts and inter-function communication delays while satisfying QoS by leveraging heterogeneous edge resources. We evaluated our platform in a simulation environment using characterization data from real-world IDS tasks and execution platforms and compared it with a vanilla Knative orchestrator and a storage-agnostic policy. Our strategy achieves 18\% fewer QoS penalties while consolidating applications across 80\% fewer edge nodes.},
  language = {en},
  urldate = {2024-12-11},
  booktitle = {2024 {IEEE} 24th {International} {Symposium} on {Cluster}, {Cloud} and {Internet} {Computing} ({CCGrid})},
  publisher = {IEEE},
  author = {Lannurien, Vincent and Slimani, Camélia and d’Orazio, Laurent and Barais, Olivier and Paquelet, Stéphane and Boukhobza, Jalil},
  month = may,
  year = {2024},
  pages = {587--597},
}
HeROsim: An Allocation and Scheduling Simulator for Evaluating Serverless Orchestration Policies.
Lannurien, V.; d’Orazio, L.; Barais, O.; Paquelet, S.; and Boukhobza, J.
IEEE Internet Computing, 1–9, 2024.
@article{lannurien_herosim:_2024,
  title = {{HeROsim}: {An} {Allocation} and {Scheduling} {Simulator} for {Evaluating} {Serverless} {Orchestration} {Policies}},
  copyright = {https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html},
  issn = {1089-7801, 1941-0131},
  shorttitle = {{HeROsim}},
  url = {https://ieeexplore.ieee.org/document/10777556/},
  doi = {10.1109/MIC.2024.3511332},
  language = {en},
  urldate = {2024-12-11},
  journal = {IEEE Internet Computing},
  author = {Lannurien, Vincent and d’Orazio, Laurent and Barais, Olivier and Paquelet, Stephane and Boukhobza, Jalil},
  year = {2024},
  pages = {1--9},
}
2023 (2)
Serverless Cloud Computing: State of the Art and Challenges.
Lannurien, V.; D’Orazio, L.; Barais, O.; and Boukhobza, J.
In Krishnamurthi, R.; Kumar, A.; Gill, S. S.; and Buyya, R., editors, Serverless Computing: Principles and Paradigms, volume 162, pages 275–316. Springer International Publishing, Cham, 2023. Series Title: Lecture Notes on Data Engineering and Communications Technologies.
@incollection{krishnamurthi_serverless_2023,
  address = {Cham},
  title = {Serverless {Cloud} {Computing}: {State} of the {Art} and {Challenges}},
  volume = {162},
  copyright = {All rights reserved},
  isbn = {978-3-031-26632-4 978-3-031-26633-1},
  shorttitle = {Serverless {Cloud} {Computing}},
  url = {https://link.springer.com/10.1007/978-3-031-26633-1_11},
  abstract = {The serverless model represents a paradigm shift in the cloud: as opposed to traditional cloud computing service models, serverless customers do not reserve hardware resources. The execution of their code is event-driven (HTTP requests, cron jobs, etc.) and billing is based on actual resource usage. In return, the responsibility of resource allocation and task placement lies on the provider. While serverless in the wild is mainly advertised as a public cloud offering, solutions are actively developed and backed by solid actors in the industry to allow the development of private cloud serverless platforms. The first generation of serverless offerings, ”Function as a Service” (FaaS), has severe shortcomings that can offset the potential benefits for both customers and providers – in terms of spendings and reliability on the customer side, and in terms of resources multiplexing on the provider side. Circumventing these flaws would allow considerable savings in money and energy for both providers and tenants. This chapter aims at establishing a comprehensive tour of these limitations, and presenting state-of-the-art studies to mitigate weaknesses that are currently holding serverless back from becoming the de facto cloud computing model. The main challenges related to the deployment of such a cloud platform are discussed and some perspectives for future directions in research are given.},
  language = {en},
  urldate = {2024-06-19},
  booktitle = {Serverless {Computing}: {Principles} and {Paradigms}},
  publisher = {Springer International Publishing},
  author = {Lannurien, Vincent and D’Orazio, Laurent and Barais, Olivier and Boukhobza, Jalil},
  editor = {Krishnamurthi, Rajalakshmi and Kumar, Adarsh and Gill, Sukhpal Singh and Buyya, Rajkumar},
  year = {2023},
  doi = {10.1007/978-3-031-26633-1_11},
  note = {Series Title: Lecture Notes on Data Engineering and Communications Technologies},
  pages = {275--316},
}
HeROfake: Heterogeneous Resources Orchestration in a Serverless Cloud – An Application to Deepfake Detection.
Lannurien, V.; D'Orazio, L.; Barais, O.; Bernard, E.; Weppe, O.; Beaulieu, L.; Kacete, A.; Paquelet, S.; and Boukhobza, J.
In 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), pages 154–165, Bangalore, India, May 2023. IEEE.
@inproceedings{lannurien_herofake:_2023,
  address = {Bangalore, India},
  title = {{HeROfake}: {Heterogeneous} {Resources} {Orchestration} in a {Serverless} {Cloud} – {An} {Application} to {Deepfake} {Detection}},
  copyright = {https://doi.org/10.15223/policy-029},
  isbn = {9798350301199},
  shorttitle = {{HeROfake}},
  url = {https://ieeexplore.ieee.org/document/10171518/},
  doi = {10.1109/CCGrid57682.2023.00024},
  abstract = {Serverless is a trending service model for cloud computing. It shifts a lot of the complexity from customers to service providers. However, current serverless platforms mostly consider the provider’s infrastructure as homogeneous, as well as the users’ requests. This limits possibilities for the provider to leverage heterogeneity in their infrastructure to improve function response time and reduce energy consumption. We propose a heterogeneity-aware serverless orchestrator for private clouds that consists of two components: the autoscaler allocates heterogeneous hardware resources (CPUs, GPUs, FPGAs) for function replicas, while the scheduler maps function executions to these replicas. Our objective is to guarantee function response time, while enabling the provider to reduce resource usage and energy consumption. This work considers a case study for a deepfake detection application relying on CNN inference. We devised a simulation environment that implements our model and a baseline Knative orchestrator, and evaluated both policies with regard to consolidation of tasks, energy consumption and SLA penalties. Experimental results show that our platform yields substantial gains for all those metrics, with an average of 35\% less energy consumed for function executions while consolidating tasks on less than 40\% of the infrastructure’s nodes, and more than 60\% less SLA violations.},
  language = {en},
  urldate = {2024-06-19},
  booktitle = {2023 {IEEE}/{ACM} 23rd {International} {Symposium} on {Cluster}, {Cloud} and {Internet} {Computing} ({CCGrid})},
  publisher = {IEEE},
  author = {Lannurien, Vincent and D'Orazio, Laurent and Barais, Olivier and Bernard, Esther and Weppe, Olivier and Beaulieu, Laurent and Kacete, Amine and Paquelet, Stéphane and Boukhobza, Jalil},
  month = may,
  year = {2023},
  pages = {154--165},
}