Predictive Modeling and Analysis of Energy Consumption in EV Charging Stations Using Machine Learning Techniques

Mostafa Jabari, Mohammad Ghoreishi, Tommaso Bragatto, Francesca Santori, Marco Maccioni, Francesco Bellesini

The rapid expansion of electric vehicle (EV) adoption has introduced significant challenges in managing energy demand and infrastructure planning for charging stations. Unpredictable usage patterns and limited real-time control hinder the efficiency and scalability of EV charging networks. Existing forecasting methods often struggle to capture the nonlinear and time-dependent behavior of charging sessions. Recent advancements in machine learning have demonstrated potential for improving prediction accuracy by leveraging historical session data. In this study, we propose a data-driven machine-learning framework to forecast energy consumption at EV charging stations using session-level features from real-world operational data. We compare three regression models: Linear Regression, Random Forest, and Extreme Gradient Boosting (XGBoost), evaluating their ability to capture complex consumption dynamics. Experimental results reveal that XGBoost significantly outperforms the others, achieving the lowest Mean Absolute Error (1.08 kWh), the lowest Root Mean Squared Error (3.69 kWh), and the highest R² score (0.85). These findings provide actionable insights for optimizing station management, enhancing energy efficiency, and guiding infrastructure expansion.
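As a rough illustration of the comparison the abstract describes, the sketch below fits the three regressors on synthetic session-level data and reports MAE, RMSE, and R². The feature names and data are invented placeholders, not the authors' dataset, and the hyperparameters are untuned assumptions.

```python
# Minimal sketch of the three-model comparison on synthetic session-level
# features; column semantics and the target are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0.2, 8.0, n),   # hypothetical plug-in duration (h)
    rng.integers(0, 24, n),     # hour of day
    rng.integers(0, 7, n),      # day of week
])
y = 3.0 * X[:, 0] + rng.normal(0, 2.0, n)  # toy energy target (kWh)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "LinearRegression": LinearRegression(),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.1, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.2f} kWh, "
          f"RMSE={np.sqrt(mean_squared_error(y_te, pred)):.2f} kWh, "
          f"R2={r2_score(y_te, pred):.2f}")
```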

 

SARIMA-Based Forecasting for Camera-Driven Smart Parking Systems

Mohammad Ghoreishi, Mostafa Jabari, Tommaso Bragatto, Francesca Santori, Massimo Cresta

Effective parking management is essential for reducing congestion and enhancing urban mobility in smart cities. However, accurately forecasting parking space availability remains challenging due to seasonal and temporal variability. This study proposes a cost-effective forecasting framework based on the Seasonal Autoregressive Integrated Moving Average (SARIMA) model, applied to vehicle inflow and outflow data extracted from camera-based detection systems. By capturing periodic patterns through seasonal differencing and autoregressive terms, SARIMA achieves high forecasting accuracy, evidenced by a Mean Absolute Error (MAE) of 6.38 and Root Mean Square Error (RMSE) of 7.24 across a 24-hour horizon. The model outperforms neural network and regression baselines, particularly during peak periods. The approach is deployed via a real-time dashboard integrated with ASM Terni’s infrastructure, demonstrating its scalability and practical utility. Future extensions include hybrid SARIMA-deep learning models for enhanced generalization.
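A minimal sketch of such a SARIMA fit, assuming hourly data with a 24-step seasonal period; the synthetic series and the (p,d,q)(P,D,Q,s) orders below are illustrative assumptions, not the paper's tuned configuration.

```python
# Illustrative SARIMA fit over a synthetic hourly occupancy series,
# followed by a 24-hour-ahead forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
idx = pd.date_range("2024-01-01", periods=24 * 21, freq="h")
daily = 40 + 25 * np.sin(2 * np.pi * idx.hour / 24)        # daily cycle
series = pd.Series(daily + rng.normal(0, 3, len(idx)), index=idx)

model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
result = model.fit(disp=False)
forecast = result.forecast(steps=24)   # next 24-hour horizon
print(forecast.round(1))
```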

 

IoT Sensor Application for Transformer Health Monitoring, Real-time Alarms, and Grid Modernization: A Case Study for Electrical Grids

Mohammad Ghoreishi, Jose Velasquez Spadaro, Gonzalo Murillo, Francesca Santori, Èric Nieto Rodriguez, Juan Carlos Cidoncha Secilla

The increasing complexity of modern electrical grids and the integration of renewable energy sources require asset monitoring and novel management strategies. This paper presents the design, development, and deployment of self-powered IoT-based monitoring devices for transformer health assessment within ASM Terni’s secondary substations. The devices enable real-time monitoring and predictive maintenance by integrating piezoelectric energy harvesting, cloud-based data visualization, and Health Index Factor computation. Initial challenges, including communication continuity and validation against existing infrastructure, were successfully resolved during deployment. Field results demonstrate the system’s operational resilience, scalability, and potential for sub-metering applications. This work highlights the significant potential of IoT in modernizing grid operations and supporting sustainability goals.
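The abstract does not publish the Health Index Factor formula, so the sketch below is a purely hypothetical weighted index: the inputs, normalization bands, and weights are all invented for illustration.

```python
# Hypothetical transformer health index: combine normalized stress terms
# into a 0-100 score (100 = healthy). Not the paper's actual formula.
def health_index(top_oil_temp_c: float, load_pct: float, vibration_mm_s: float,
                 weights=(0.4, 0.4, 0.2)) -> float:
    temp_stress = min(max((top_oil_temp_c - 60) / 40, 0.0), 1.0)  # 60-100 C band
    load_stress = min(max((load_pct - 80) / 40, 0.0), 1.0)        # >80% loading
    vib_stress = min(vibration_mm_s / 10.0, 1.0)                  # cap at 10 mm/s
    stress = sum(w * s for w, s in zip(weights,
                                       (temp_stress, load_stress, vib_stress)))
    return round(100 * (1 - stress), 1)

print(health_index(top_oil_temp_c=85, load_pct=95, vibration_mm_s=4))
```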

 

From Browser to Kernel: Exploring a Lightweight Sandboxed Approach for Unikernel Extensions

Martin Kröning, Stefan Lankes, Jonathan Klimt, Antonello Monti

Library Operating Systems (libOS) are highly efficient because the entire software stack, from the kernel to the application, is compiled, optimized, and linked together. However, in certain scenarios, such as code injection for network packet analysis or adding custom drivers, it is necessary to extend the kernel as needed. The traditional approach of modifying and recompiling the kernel source code can be time-consuming and error-prone. This paper analyzes the possibility of using WebAssembly (Wasm) to extend an operating system kernel at runtime. Wasm is a portable bytecode format that enables fast execution of language-independent code while prioritizing security and portability. Its type system and bounded memory regions effectively prevent unauthorized data access. A prototype module for analyzing network traffic demonstrates the potential, while performance is evaluated using standard benchmarks. The kernel sandbox proved to be about 20% slower than running the Wasm code in state-of-the-art runtimes on Linux, which is acceptable for a first proof-of-concept.
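A userspace sketch of the core idea, assuming the wasmtime Python bindings: untrusted Wasm is compiled and invoked inside a sandbox instantiated with no imports, so it has no ambient authority. The paper's prototype embeds a runtime in the kernel itself; this only mirrors the isolation mechanism.

```python
# Run untrusted Wasm in a sandbox via the wasmtime Python bindings.
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile the portable bytecode
instance = Instance(store, module, [])  # no imports: no ambient authority
add = instance.exports(store)["add"]
print(add(store, 2, 3))                 # 5, executed inside the sandbox
```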

 

NExt generation Meta Operating systems (NEMO) and Data Space: envisioning the future

Olga Segou, Dimitris Skias, Terpsichori-Helen Velivassaki, Theodore Zahariadis, Enric Pages, Rubén Ramiro, Rosaria Rossini, Panagiotis Karkazis, Alejandro Muniz, Luis Contreras

Data Spaces are the vehicle for data-driven innovation and a great opportunity for European industries. Data Spaces will establish a framework for the responsible and ethical use of data, with a focus on data privacy, security, and sovereignty, ensuring the seamless exchange of data across different sectors, organizations, and member states. Effective delivery of services within Data Spaces mandates efficient and effective use of available resources across the IoT, edge, and cloud continuum, while enforcing security and privacy policies and providing the monitoring infrastructure and transaction mechanisms required to ensure data sovereignty, observability, and accountability of data exchanges. Herein, we present the NExt generation Meta Operating systems (NEMO, https://meta-os.eu/) architecture approach embracing Data Spaces, hosting Connectors and services within Data Spaces, delivered as an extensible element of the metaOS architecture. We also propose a FIWARE-based Industrial Data Space Connector for NEMO that enables organising and valorising large pools of data, as a significant cornerstone of data-driven innovation.

 

Brief Announcement: A Self-* and Persistent Hub Sampling Service

Mohamed Amine LEGHERABA, Maria Potop-Butucaru, Sébastien Tixeuil

We present Elevator, a novel algorithm for hub sampling in peer-to-peer networks. Elevator constructs overlays whose topology lies between a random graph and a star network. Our approach makes use of preferential attachment, forming hubs spontaneously and offering a decentralized solution for use cases that require networks with both low diameter and resilience to failures.
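The sketch below illustrates degree-proportional (preferential attachment) neighbor selection, the mechanism by which hubs emerge; it is a deliberately simplified stand-in, not Elevator's actual protocol.

```python
# Toy preferential attachment: new nodes link to existing nodes with
# probability proportional to degree, so hubs emerge spontaneously.
import random

def preferential_pick(degrees: dict, rng: random.Random):
    """Pick a node with probability proportional to its current degree."""
    nodes = list(degrees)
    weights = [degrees[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

rng = random.Random(42)
degrees = {"a": 1, "b": 1, "c": 1}
for newcomer in ["d", "e", "f", "g"]:
    hub = preferential_pick(degrees, rng)  # high-degree nodes attract links
    degrees[hub] += 1
    degrees[newcomer] = 1
print(degrees)  # degree skew emerges, forming hub candidates
```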

 

Machine Learning-Based Network Anomaly Detection: Design, Implementation, and Evaluation

Pilar Schummer, Alberto del Río, Javier Serrano, David Jimenez, Guillermo Sánchez, Álvaro Llorente

In the last decade, numerous methods have been proposed to define and detect outliers, particularly in complex environments like networks, where anomalies significantly deviate from normal patterns. Although defining a clear standard is challenging, anomaly detection systems have become essential for network administrators to efficiently identify and resolve irregularities. Methods: This study develops and evaluates a machine learning-based system for network anomaly detection, focusing on point anomalies within network traffic. It employs both unsupervised and supervised learning techniques, including change point detection, clustering, and classification models, to identify anomalies. SHAP values are utilized to enhance model interpretability. Results: Unsupervised models effectively captured temporal patterns, while supervised models, particularly Random Forest (94.3%), demonstrated high accuracy in classifying anomalies, closely approximating the actual anomaly rate. Conclusions: Experimental results indicate that the system can accurately predict network anomalies in advance. Congestion and packet loss were identified as key factors in anomaly detection. This study demonstrates the anomaly detection system’s potential for real-world deployment, where its scalability can be further validated.
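A sketch of the supervised stage, assuming scikit-learn and the shap package: a Random Forest is trained on synthetic traffic features (names such as congestion and packet_loss are illustrative) and explained with a TreeExplainer.

```python
# Supervised anomaly classification on synthetic traffic features,
# explained with SHAP attributions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
congestion = rng.uniform(0, 1, n)
packet_loss = rng.uniform(0, 0.1, n)
latency_ms = rng.normal(30, 10, n)
X = np.column_stack([congestion, packet_loss, latency_ms])
y = ((congestion > 0.8) | (packet_loss > 0.07)).astype(int)  # toy anomaly rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

explainer = shap.TreeExplainer(clf)               # per-feature attribution
sv = np.asarray(explainer.shap_values(X_te))      # layout varies by shap version
print("SHAP value array shape:", sv.shape)
```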

 

Seamless User-Generated Content Processing for Smart Media: Delivering QoE-Aware Live Media with YOLO-Based Bib Number Recognition

Alberto del Rio, Álvaro Llorente, Sofia Ortiz-Arce, Maria Belesioti, George Pappas, Alejandro Muñiz, Luis Contreras, Dimitris Christopoulos

The increasing availability of User-Generated Content during large-scale events is transforming spectators into active co-creators of live narratives while simultaneously introducing challenges in managing heterogeneous sources, ensuring content quality, and orchestrating distributed infrastructures. A trial was conducted to evaluate automated orchestration, media enrichment, and real-time quality assessment in a live sporting scenario. A key innovation of this work is the use of a cloud-native architecture based on Kubernetes, enabling dynamic and scalable integration of smartphone streams and remote production tools into a unified workflow. The system also included advanced cognitive services, such as a Video Quality Probe for estimating perceived visual quality and an AI Engine based on YOLO models for detection and recognition of runners and bib numbers. Together, these components enable a fully automated workflow for live production, combining real-time analysis and quality monitoring, capabilities that previously required manual or offline processing. The results demonstrated consistently high Mean Opinion Score (MOS) values above 3 for 72.92% of the time, confirming acceptable perceived quality under real network conditions, while the AI Engine achieved strong performance with a Precision of 93.6% and Recall of 80.4%.

 

An Enhanced Method for Objective QoE Analysis in Adaptive Streaming Services

Sofía Ortiz-Arce, Álvaro Llorente, Alberto del Rio, Federico Alvarez

Evaluating Quality of Experience (QoE) is crucial for multimedia services, as it measures user satisfaction with content delivery. This paper presents an objective method for evaluating QoE in adaptive streaming services, using the commercial tool Video-MOS, which allows real-time monitoring and analysis of multimedia content quality across various platforms and networks. The method aims to provide a precise evaluation that incorporates both technical and subjective factors. This approach integrates multiple factors that influence the overall user perception of adaptive streaming quality, offering greater flexibility and performance, which could contribute to more comprehensive future assessments. The method is validated by a test plan that incorporates a variety of content and scenarios to simulate various network conditions. The results demonstrate the method’s effectiveness in predicting QoE, highlighting rebuffering frequency as a significant factor. In optimal conditions, specific content types can achieve a QoE score as high as 3.36. Conversely, under unfavorable conditions, the QoE may decrease by up to 1.42 Mean Opinion Score (MOS) points, which represents an 80% reduction from its optimal level. Although rebuffering frequency has a substantial influence, a long initial buffering time can have an even more negative effect on QoE, particularly under adverse conditions. Furthermore, adaptive streaming technologies, such as Dynamic Adaptive Streaming over HTTP (MPEG-DASH) and Adaptive Bitrate Streaming (ABR), are integral to the assessment process.
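To make the role of rebuffering concrete, here is a hypothetical penalty-based MOS model; it is not Video-MOS's actual formula, and the coefficients are invented purely for illustration.

```python
# Hypothetical QoE model: start from a content-dependent base MOS and
# subtract penalties for stall frequency and startup delay, clamped to [1, 5].
def estimate_mos(base_mos: float, rebuffer_events_per_min: float,
                 startup_delay_s: float) -> float:
    penalty = 0.5 * rebuffer_events_per_min + 0.05 * startup_delay_s
    return max(1.0, min(5.0, base_mos - penalty))

print(estimate_mos(base_mos=3.36, rebuffer_events_per_min=2, startup_delay_s=8))
```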

 

Performance Evaluation of YOLOv8-Based Bib Number Detection in Media Streaming Race

Rafael Martínez, Álvaro Llorente, Alberto del Rio, Javier Serrano, David Jimenez

The evolution of telecommunication networks unlocks new possibilities for multimedia services, including enriched and personalized experiences. However, ensuring high Quality of Service and Quality of Experience requires intelligent solutions at the edge. This study investigates the real-time detection of race bib numbers using YOLOv8, a state-of-the-art object detection framework, within the context of 5G/6G edge computing. We train on the BDBD and SVHN datasets and analyze various YOLOv8 models (nano to extreme) across two diverse racing datasets (TGCRBNW and RBNR), encompassing varied environmental conditions (daytime and nighttime). Our assessment focuses on key performance metrics, including processing time, efficiency, and accuracy. For instance, on the TGCRBNW dataset, the extreme-sized model shows a noticeable reduction in prediction time when the more powerful GPU is used, with times decreasing from 1,161 to 54 seconds on a desktop computer. Similarly, on the RBNR dataset, the extreme-sized model exhibits a significant reduction in prediction time from 373 to 15 seconds when using the more powerful GPU. In terms of accuracy, we found varying performance across scenarios and datasets. For example, accuracy is insufficient in most scenarios on the TGCRBNW dataset (below 50% across all sets and models), while YOLOv8m achieves the highest accuracy in several scenarios on the RBNR dataset (almost 80% on the best set). Variability in prediction times was observed between different computer architectures, highlighting the importance of selecting appropriate hardware for specific tasks. These results emphasize the importance of aligning computational resources with the demands of real-world tasks to achieve timely and accurate predictions.
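A timing-comparison sketch using the ultralytics package that ships YOLOv8; the image path is a placeholder, and wall-clock times depend heavily on the hardware, which is the paper's point.

```python
# Compare inference time across YOLOv8 model sizes on a single image.
import time
from ultralytics import YOLO

for size in ("n", "s", "m", "l", "x"):                 # nano through extra-large
    model = YOLO(f"yolov8{size}.pt")                   # downloads weights if absent
    start = time.perf_counter()
    results = model("race_photo.jpg", verbose=False)   # hypothetical test image
    elapsed = time.perf_counter() - start
    print(f"yolov8{size}: {len(results[0].boxes)} boxes in {elapsed:.3f}s")
```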

 

Multisite gaming streaming optimization over virtualized 5G environment using Deep Reinforcement Learning techniques

Alberto del Río, Javier Serrano , David Jiménez, Luis M. Contreras, Federico Alvarez

The massive growth of live streaming, especially gaming-focused content, has led to an overall increase in global bandwidth consumption. Certain services see their quality diminished at times of peak consumption, degrading the quality of the content. This trend generates new research related to optimizing image quality according to network and service conditions. In this work we present a gaming streaming use case optimization on a real multisite 5G environment. The paper outlines the virtualized workflow of the use case and provides a detailed description of the applications and resources deployed for the simulation. This simulation tests the optimization of the service based on the addition of Artificial Intelligence (AI) algorithms, assuring the delivery of content with good Quality of Experience (QoE) under different working conditions. The AI introduced is based on Deep Reinforcement Learning (DRL) algorithms that can adapt, in a flexible way, to the different conditions that the multimedia workflow may face; that is, the agent applies corrective actions to the streaming bitrate in order to optimize the QoE of the content in a real-time multisite scenario. The results of this work demonstrate that we were able to minimize content losses and to obtain high audiovisual quality at higher bitrates, compared to a service without an integrated optimizer. In a multisite environment, we achieved an improvement of 20 percentage points in blockiness efficiency and 15 percentage points in block loss.
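Schematically, the corrective-action loop can look like the sketch below, where a stub stands in for the trained DRL policy; the bitrate ladder and thresholds are illustrative assumptions.

```python
# Schematic control loop: a policy maps observed QoE/network state to a
# corrective bitrate action. The policy here is a stub, not a trained agent.
BITRATE_STEPS_KBPS = [1500, 3000, 4500, 6000, 8000]

def policy(observation: dict) -> int:
    """Stub standing in for the DRL policy's action selection."""
    if observation["packet_loss"] > 0.02 or observation["qoe_mos"] < 3.0:
        return -1   # step bitrate down
    return +1       # step bitrate up

level = 2
for observation in [{"qoe_mos": 4.1, "packet_loss": 0.00},
                    {"qoe_mos": 2.7, "packet_loss": 0.05},
                    {"qoe_mos": 3.4, "packet_loss": 0.01}]:
    level = max(0, min(len(BITRATE_STEPS_KBPS) - 1, level + policy(observation)))
    print(f"encode at {BITRATE_STEPS_KBPS[level]} kbps")
```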

 

Open Source approach in two metaOS projects

Rosaria Rossini, Marco Jahn, Anastasios Zafeiropoulos, Nikos Filinis, Dimitrios Spatharakis, Ioannis Dimolitsas, Eleni Fotopoulou, Constantinos Vassilakis, Symeon Papavassiliou, Theodore Zahariadis, Ilias Nektarios Seitanidis, Enric Pages, Guillermo Gomez, Sergiy Remezov Grynchenko

Transparency, collaboration, security, and digital sovereignty are the open source values that are also important to many European countries and institutions. Open source software offers practical advantages, cost savings, and opportunities for innovation that contribute to the region’s technological advancement and competitiveness, while fostering collaboration among developers and encouraging innovation. On top of these values, several European projects aim to build a sustainable and innovative future. In this paper, the authors present two examples of open source meta-operating system (metaOS) applications. The first, NExt generation Meta Operating systems (NEMO), builds the future of the Artificial Intelligence of Things (AIoT)-edge-cloud continuum by introducing an open source, modular, and cybersecure metaOS. The second covers the open source solutions provided by the NEPHELE project to assist the orchestration of distributed applications across resources in the computing continuum. On this basis, the open source components of the two metaOS projects are presented for each functional layer in the architecture, demonstrating the value of open source for metaOS sustainability in a specific use case.

 

Comparative Analysis of A3C and PPO Algorithms in Reinforcement Learning: A Survey on General Environments

Albert del Rio, David Jimenez, Javier Serrano

This research article presents a comparison between two mainstream Deep Reinforcement Learning (DRL) algorithms, Asynchronous Advantage Actor-Critic (A3C) and Proximal Policy Optimization (PPO), in the context of two diverse environments: CartPole and Lunar Lander. DRL algorithms are widely known for their effectiveness in training agents to navigate complex environments and achieve optimal policies. Nevertheless, a methodical assessment of their effectiveness in various settings is crucial for comprehending their advantages and disadvantages. In this study, we conduct experiments on the CartPole and Lunar Lander environments using both A3C and PPO algorithms. We compare their performance in terms of convergence speed and stability. Our results indicate that A3C typically achieves quicker training times but exhibits greater instability in reward values. Conversely, PPO demonstrates a more stable training process at the expense of longer execution times. Algorithm selection should therefore weigh specific application needs, balancing training time against stability: A3C is ideal for applications requiring rapid training, while PPO is better suited for those prioritizing training stability.
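A side-by-side training sketch with stable-baselines3 and Gymnasium; note that stable-baselines3 ships A2C, the synchronous variant of A3C, rather than A3C itself, so this only approximates the paper's setup.

```python
# Train A2C (synchronous A3C variant) and PPO on CartPole and compare
# wall-clock training time.
import time
import gymnasium as gym
from stable_baselines3 import A2C, PPO

for algo in (A2C, PPO):
    env = gym.make("CartPole-v1")
    model = algo("MlpPolicy", env, seed=0, verbose=0)
    start = time.perf_counter()
    model.learn(total_timesteps=50_000)
    print(f"{algo.__name__}: trained in {time.perf_counter() - start:.1f}s")
```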

Olive Tree Segmentation from UAV Imagery

Konstantinos Prousalidis, Stavroula Bourou, Terpsichori-Helen Velivassaki, Artemis Voulkidis, Aikaterini Zachariadi, Vassilios Zachariadis

This paper addresses the challenge of olive tree segmentation using drone imagery, which is crucial for precision agriculture applications. We tackle the data scarcity issue by augmenting existing detection datasets. Additionally, lightweight model variations of state-of-the-art models like YOLOv8n, RepViT-SAM, and EdgeSAM are combined into two proposed pipelines to meet computational constraints while maintaining segmentation accuracy. Our multifaceted approach successfully achieves an equilibrium among model size, inference time, and accuracy, thereby facilitating efficient olive tree segmentation in precision agriculture scenarios with constrained datasets. Following comprehensive evaluations, YOLOv8n appears to surpass the other models in terms of inference time and accuracy, albeit necessitating a more intricate fine-tuning procedure. Conversely, SAM-based pipelines provide a significantly more streamlined fine-tuning process, compatible with existing detection datasets for olive trees. However, this convenience incurs the disadvantages of a more elaborate inference architecture that relies on dual models, consequently yielding lower performance metrics and prolonged inference durations.
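A sketch of the detect-then-segment pipeline, shown with Meta's original segment-anything bindings as a stand-in for the RepViT-SAM/EdgeSAM variants used in the paper; the weight files and image path are hypothetical.

```python
# Two-stage pipeline: YOLOv8n proposes tree boxes, a SAM predictor
# segments inside each box prompt.
import cv2
from ultralytics import YOLO
from segment_anything import SamPredictor, sam_model_registry

image = cv2.cvtColor(cv2.imread("orchard.jpg"), cv2.COLOR_BGR2RGB)  # placeholder

detector = YOLO("olive_yolov8n.pt")          # hypothetical fine-tuned weights
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)

for box in detector(image, verbose=False)[0].boxes.xyxy.cpu().numpy():
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    print("tree mask pixels:", int(masks[0].sum()))
```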

A software-defined connectivity service for multi-cluster cloud native applications

Raul Martin, Ivan Vidal, Francisco Valera

Containerization technologies have risen in popularity for deploying microservices applications in cloud-native environments, offering the benefits of traditional virtualization with reduced overhead. However, existing container networking solutions lack support for applications requiring isolated link-layer communications among containers in different clusters. These communications are fundamental to enable the seamless integration of cloud-native solutions in 5G and beyond networks. Accordingly, we present an SDN-enabled networking solution that supports the creation of isolated link-layer virtual networks between containers across different Kubernetes clusters by building virtual circuits that dynamically adapt to changes in the topology. In this article, we introduce our solution, highlighting its advantages over existing alternatives, and provide a comprehensive design overview. Additionally, we validate it through an experiment, offering a deeper understanding of its functionality. Our work fills an existing gap for applications with inter-cluster link-layer networking access requirements in the cloud-native ecosystem.

Open Source in NExt generation Meta Operating systems (NEMO)

Rosaria Rossini, Terpsichori-Helen Velivassaki, Theodore Zahariadis, Panagiotis Karkazis, Dimitrios Skias, Enric Pere Pages Montanera, Artemis Voulkidis

The open source approach aligns with the values of transparency, collaboration, security, and digital sovereignty that are important to many European countries and institutions. It offers practical advantages, cost savings, and opportunities for innovation that contribute to the region’s technological advancement and competitiveness. Open source software also fosters collaboration among developers and encourages innovation. On top of these values, NExt generation Meta Operating systems (NEMO) builds the future of the AIoT-edge-cloud continuum by introducing an open source, modular and cyber-secure meta-operating system (metaOS). In this paper, the open source components of the NEMO metaOS are presented for each functional layer in the NEMO architecture, demonstrating the value of open source for metaOS sustainability.

Challenger: Blockchain-based Massively Multiplayer Online Game Architecture

Boris Chan Yip Hon, Bilel Zaghdoudi, Maria Potop-Butucaru, Sébastien Tixeuil, Serge Fdida

We propose Challenger, a peer-to-peer blockchain-based middleware architecture for narrative games, and discuss its resilience to cheating attacks. Our architecture orchestrates nine services in a fully decentralized manner, where nodes are aware of neither the entire composition of the system nor its size. All these components are orchestrated together to obtain (strong) resilience to cheaters. The main contribution of the paper is to provide, for the first time, an architecture for narrative games agnostic of a particular blockchain that brings together several distinct research areas, namely distributed ledgers, peer-to-peer networks, multiplayer online games, and resilience to attacks.

FREDY: Federated Resilience Enhanced with Differential Privacy

Anastasakis Z, Velivassaki T-H, Voulkidis A, Bourou S, Psychogyios K, Skias D, Zahariadis T.

Federated Learning (FL) is identified as a reliable technique for distributed training of ML models. Specifically, a set of dispersed nodes may collaborate through a federation in producing a jointly trained ML model without disclosing their data to each other. Each node performs local model training and then shares its trained model weights with a server node, usually called the Aggregator in federated learning, which aggregates the trained weights and then sends them back to its clients for another round of local training. Despite the data protection and security that FL provides to each client, there are still well-studied attacks, such as membership inference attacks, that can exploit potential vulnerabilities of the FL system and thus expose sensitive data. In this paper, in order to prevent this kind of attack and address private data leakage, we introduce FREDY, a differentially private federated learning framework that enables knowledge transfer from private data. Particularly, our approach follows a teachers–student scheme. Each teacher model is trained on sensitive, disjoint data in a federated manner, and the student model is trained on the most voted predictions of the teachers on public unlabeled data, which are noisily aggregated in order to guarantee the privacy of each teacher’s sensitive data. Only the student model is publicly accessible, as the teacher models contain sensitive information. We show that our proposed approach guarantees the privacy of sensitive data against membership inference attacks while combining the federated learning settings for the model training procedures.
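A sketch of the noisy teacher-vote aggregation described above, in the spirit of PATE; the Laplace mechanism and the epsilon value are illustrative assumptions, not FREDY's exact privacy accounting.

```python
# Aggregate teacher votes on one public sample with Laplace noise, so the
# released label reveals little about any single teacher's private data.
import numpy as np

def noisy_label(teacher_preds: np.ndarray, num_classes: int,
                epsilon: float, rng: np.random.Generator) -> int:
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    votes += rng.laplace(0.0, 1.0 / epsilon, size=num_classes)  # DP noise
    return int(np.argmax(votes))

rng = np.random.default_rng(0)
teacher_preds = np.array([2, 2, 2, 1, 2, 0, 2, 2])  # eight federated teachers
print(noisy_label(teacher_preds, num_classes=3, epsilon=0.5, rng=rng))
```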


Putting Intelligence into Things: An Overview of Current Architectures

Maria Belesioti, Ioannis Chochliouros, Panagiotis Dimas, Manolis Sofianopoulos

In the era of the Internet of Things (IoT), billions of sensors collect data from their environment and process it to enable intelligent decisions at the right time. However, transferring massive amounts of disparate data in complex environments is a challenging issue. The convergence of Artificial Intelligence (AI) and the Internet of Things has breathed new life into IoT operations and human-machine interaction. Resource-constrained IoT devices typically lack the data storage and processing capacity needed to build modern AI models. The intuitive solution integrates cloud computing technology with AIoT and leverages the powerful and flexible processing and storage capacity of cloud-side servers. This paper briefly introduces IoT and AIoT architectures in the context of cloud computing, fog computing, and related paradigms. Finally, an overview of the NEMO [1] concept is presented. The NEMO project aims to establish itself as the “game changer” of the AIoT-Edge-Cloud Continuum by bringing intelligence closer to data, making AI-as-a-Service an integral part of self-organizing networks orchestrating micro-service execution.

NEMO: Building the Next Generation Meta Operating System

Ioannis P. Chochliouros, Enric Pages-Montanera, Aitor Alcázar-Fernández, Theodore Zahariadis, Terpsichori-Helen Velivassaki, Charalabos Skianis, Rosaria Rossini, Maria Belesioti, Nikolaos Drosos, Emmanouil Bakiris, Prashanth Kumar Pedholla, Panagiotis Karkazis, Astik Kumar Samal, Luis Miguel Contreras Murillo, Alberto Del Río, Javier Serrano, Dimitrios Skias, Olga E. Segou, and Sonja Waechter

Artificial Intelligence of Things (AIoT) is one of the next big concepts to support societal changes and economic growth, being one of the fastest growing ICT segments. A specific challenge is to leverage existing technology strengths to develop solutions that sustain the European industry and values. The ongoing NEMO (“Next Generation Meta-Operating System”) EU-funded project intends to establish itself as the “game changer” of the AIoT-Edge-Cloud continuum by introducing an open source, modular and cybersecure meta-operating system, leveraging on existing technologies and introducing novel concepts, methods, tools, testing and engagement campaigns. NEMO will bring intelligence closer to the data and make AI-as-a-Service an integral part of network self-organisation and micro-services execution orchestration. Its widespread penetration and massive acceptance will be achieved via new technology, pre-commercial exploitation components and liaison with open-source communities.

By defining a modular and adaptable mOS (meta-OS) architecture together with building blocks and plugins, the project will address current and future technological and business needs.

On the Challenge of Sound Code for Operating Systems

Jonathan Klimt, Martin Kröning, Stefan Lankes, Antonello Monti

The memory-safe systems programming language Rust is gaining more and more attention in the operating system development communities, as it provides memory safety without sacrificing performance or control. However, these safety guarantees only apply to the safe subset of Rust, while bare-metal programming requires some parts of the program to be written in unsafe Rust. Writing abstractions for these parts of the software that are sound, meaning that they guarantee the absence of undefined behavior and thus uphold the invariants of safe Rust, can be challenging. Producing sound code, however, is essential to avoid breakage when the code is used in new ways or the compiler behavior changes.

In this paper, we present common patterns of unsound abstractions derived from the experience of reworking soundness in our kernel. During this process, we were able to remove over 400 unsafe expressions while discovering and fixing several hard-to-spot concurrency bugs along the way.

GPU Acceleration in Unikernels Using Cricket GPU Virtualization

Niklas Eiling, Martin Kröning, Jonathan Klimt, Philipp Fensch, Stefan Lankes, Antonello Monti

Today, large compute clusters increasingly move towards heterogeneous architectures by employing accelerators, such as GPUs, to realize ever-increasing performance. To achieve maximum performance on these architectures, applications have to be tailored to the available hardware by using special APIs to interact with the hardware resources, such as the CUDA APIs for NVIDIA GPUs. Simultaneously, unikernels emerge as a solution for the increasing overhead introduced by the complexity of modern operating systems and their inability to optimize for specific application profiles. Unikernels allow for better static code checking and enable optimizations impossible with monolithic kernels, yielding more robust and faster programs. Despite this, there is a lack of support for using GPUs in unikernels. Due to the proprietary nature of the CUDA APIs, direct support for interacting with NVIDIA GPUs from unikernels is infeasible, resulting in applications requiring GPUs being unsuitable for deployment in unikernels.

We propose using Cricket GPU virtualization to introduce GPU support to the unikernels RustyHermit and Unikraft. To interface with Cricket, we implement a generic library for using ONC RPCs in Rust. With Cricket and our RPC library, unikernels become able to use GPU resources, even when they are installed in remote machines. This way, we enable the use of unikernels for applications that require the high parallel performance of GPUs to achieve manageable execution times.

Data Poisoning Attacks in Gossip Learning

Alexandre Pham, Maria Potop-Butucaru, Sébastien Tixeuil, Serge Fdida

Traditional machine learning systems were designed in a centralized manner. In such designs, the central entity maintains both the machine learning model and the data used to adjust the model’s parameters. As data centralization yields privacy issues, Federated Learning was introduced to reduce data sharing and have a central server coordinate the learning of multiple devices. While Federated Learning is more decentralized, it still relies on a central entity that may fail or be subject to attacks, provoking the failure of the whole system. Decentralized Federated Learning therefore removes the need for a central server entirely, letting participating processes handle the coordination of the model construction. This distributed control calls for studying the possibility of malicious attacks by the participants themselves. While poisoning attacks on Federated Learning have been extensively studied, their effects in Decentralized Federated Learning have not received the same level of attention. Our work is the first to propose a methodology to assess poisoning attacks in Decentralized Federated Learning in both churn-free and churn-prone scenarios. Furthermore, in order to evaluate our methodology on a case study representative of gossip learning, we extended the gossipy simulator with an attack injector module.
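As a concrete example of the kind of attack such an injector module introduces, here is a minimal label-flipping poisoner; the flip rule and fraction are illustrative, not the paper's exact attack configuration.

```python
# Minimal label-flipping data poisoning: a malicious participant rotates a
# fraction of its local labels before training.
import numpy as np

def flip_labels(y: np.ndarray, num_classes: int, fraction: float,
                rng: np.random.Generator) -> np.ndarray:
    """Return a copy of y with a random fraction of labels rotated by one."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = (y_poisoned[idx] + 1) % num_classes  # deterministic flip
    return y_poisoned

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=100)
y_bad = flip_labels(y, num_classes=10, fraction=0.3, rng=rng)
print("flipped:", int((y != y_bad).sum()), "of", len(y))
```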