FREDY: Federated Resilience Enhanced with Differential Privacy

Anastasakis Z, Velivassaki T-H, Voulkidis A, Bourou S, Psychogyios K, Skias D, Zahariadis T.

Federated Learning (FL) is identified as a reliable technique for the distributed training of ML models. Specifically, a set of dispersed nodes may collaborate through a federation in producing a jointly trained ML model without disclosing their data to each other. Each node performs local model training and then shares its trained model weights with a server node, usually called the Aggregator in federated learning, as it aggregates the trained weights and then sends them back to its clients for another round of local training. Despite the data protection and security that FL provides to each client, there are still well-studied attacks, such as membership inference attacks, that can detect potential vulnerabilities of the FL system and thus expose sensitive data. In this paper, in order to prevent this kind of attack and address private data leakage, we introduce FREDY, a differentially private federated learning framework that enables knowledge transfer from private data. In particular, our approach follows a teachers–student scheme: each teacher model is trained on sensitive, disjoint data in a federated manner, and the student model is trained on the teachers’ most-voted predictions on public unlabeled data, which are noisily aggregated in order to guarantee the privacy of each teacher’s sensitive data. Only the student model is publicly accessible, as the teacher models contain sensitive information. We show that our proposed approach guarantees the privacy of sensitive data against membership inference attacks while combining federated learning settings for the model training procedures.
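
To make the noisy vote aggregation concrete, the sketch below is a PATE-style noisy-argmax over teacher predictions: per-class vote counts are perturbed with Laplace noise scaled by 1/ε before the winning label is released to the student. This is purely illustrative, not the FREDY implementation; the function names, the ε value and the use of the rand crate are assumptions for the example.

```rust
use rand::Rng; // external crate: rand = "0.8" (assumed for the sketch)

/// Draw a Laplace(0, b) sample via inverse CDF from a uniform draw.
fn laplace(b: f64, rng: &mut impl Rng) -> f64 {
    let u: f64 = rng.gen::<f64>() - 0.5;
    -b * u.signum() * (1.0 - 2.0 * u.abs()).ln()
}

/// Noisy-max aggregation of teacher votes: count votes per class,
/// add Laplace(1/epsilon) noise to each count, return the arg-max label.
fn noisy_vote(teacher_preds: &[usize], num_classes: usize, epsilon: f64) -> usize {
    let mut rng = rand::thread_rng();
    let mut counts = vec![0.0f64; num_classes];
    for &label in teacher_preds {
        counts[label] += 1.0;
    }
    counts
        .iter()
        .map(|&c| c + laplace(1.0 / epsilon, &mut rng))
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(label, _)| label)
        .unwrap()
}

fn main() {
    // Ten teachers voting over three classes; the student only ever sees the noisy winner.
    let votes = [0, 0, 1, 0, 2, 0, 1, 0, 0, 2];
    println!("student label: {}", noisy_vote(&votes, 3, 0.5));
}
```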

Multisite gaming streaming optimization over virtualized 5G environment using Deep Reinforcement Learning techniques

A. del Rio, J. Serrano, D. Jimenez, Luis M. Contreras, F. Alvarez

The massive growth of live streaming, especially gaming-focused content, has led to an overall increase in global bandwidth consumption. Certain services see their quality diminished at times of peak consumption, degrading the delivered content. This trend generates new research related to optimizing image quality according to network and service conditions. In this work we present the optimization of a gaming streaming use case in a real multisite 5G environment. The paper outlines the virtualized workflow of the use case and provides a detailed description of the applications and resources deployed for the simulation. This simulation tests the optimization of the service based on the addition of Artificial Intelligence (AI) algorithms, ensuring the delivery of content with good Quality of Experience (QoE) under different working conditions. The AI introduced is based on Deep Reinforcement Learning (DRL) algorithms that can adapt, in a flexible way, to the different conditions that the multimedia workflow may face: that is, it adapts the streaming bitrate through corrective actions in order to optimize the QoE of the content in a real-time multisite scenario. The results of this work demonstrate that we are able to minimize content losses and to obtain high audiovisual quality at higher bitrates, compared with a service without an optimizer integrated into the system. In a multisite environment, we have achieved an improvement of 20 percentage points in blockiness efficiency and 15 percentage points in block loss.
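
As a rough illustration of such a control loop, the sketch below uses a toy epsilon-greedy bandit over a discrete bitrate ladder: at each step it picks a bitrate, observes a simulated QoE reward that penalizes losses under constrained bandwidth, and updates its value estimates. This is a simplified stand-in for the paper's DRL agent, not the actual algorithm; the bitrate ladder, the reward model and all names are assumptions for the example.

```rust
use rand::Rng; // external crate: rand = "0.8" (assumed for the sketch)

/// Candidate streaming bitrates in Mbps (hypothetical ladder).
const BITRATES: [f64; 4] = [2.0, 4.0, 8.0, 16.0];

/// Toy QoE model: reward grows with bitrate but collapses when the
/// chosen bitrate exceeds the currently available bandwidth.
fn qoe_reward(bitrate: f64, available_bw: f64) -> f64 {
    if bitrate <= available_bw {
        bitrate.ln()
    } else {
        -1.0 // stand-in for blockiness / block loss under congestion
    }
}

fn main() {
    let mut rng = rand::thread_rng();
    let (mut values, mut counts) = ([0.0f64; 4], [0u32; 4]);
    let epsilon = 0.1;

    for step in 0..10_000 {
        // Simulated network condition: bandwidth oscillates between off-peak and peak periods.
        let available_bw = if step % 100 < 50 { 10.0 } else { 5.0 };

        // Epsilon-greedy action selection over the bitrate ladder.
        let action = if rng.gen::<f64>() < epsilon {
            rng.gen_range(0..BITRATES.len())
        } else {
            (0..BITRATES.len())
                .max_by(|&a, &b| values[a].partial_cmp(&values[b]).unwrap())
                .unwrap()
        };

        // Incremental mean update of the action-value estimate.
        let r = qoe_reward(BITRATES[action], available_bw);
        counts[action] += 1;
        values[action] += (r - values[action]) / counts[action] as f64;
    }
    println!("learned bitrate preferences: {:?}", values);
}
```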

Putting Intelligence into Things: An Overview of Current Architectures

Maria Belesioti, Ioannis Chochliouros, Panagiotis Dimas, Manolis Sofianopoulos

In the era of the Internet of Things (IoT), billions of sensors collect data from their environment and process it to enable intelligent decisions at the right time. However, transferring massive amounts of disparate data in complex environments is a challenging issue. The convergence of Artificial Intelligence (AI) and the Internet of Things has breathed new life into IoT operations and human-machine interaction. Resource-constrained IoT devices typically lack the data storage and processing capacity required to build modern AI models. The intuitive solution is to integrate cloud computing technology with AIoT and leverage the powerful and flexible processing and storage capacity of cloud-side servers. This paper briefly introduces IoT and AIoT architectures in the context of cloud computing, fog computing and related paradigms. Finally, an overview of the NEMO [1] concept is presented. The NEMO project aims to establish itself as the “game changer” of the AIoT-Edge-Cloud Continuum by bringing intelligence closer to the data, making AI-as-a-Service an integral part of self-organizing networks orchestrating micro-service execution.

NEMO: Building the Next Generation Meta Operating System

Ioannis P. Chochliouros, Enric Pages-Montanera, Aitor Alcázar-Fernández, Theodore Zahariadis, Terpsichori-Helen Velivassaki, Charalabos Skianis, Rosaria Rossini, Maria Belesioti, Nikolaos Drosos, Emmanouil Bakiris, Prashanth Kumar Pedholla, Panagiotis Karkazis, Astik Kumar Samal, Luis Miguel Contreras Murillo, Alberto Del Río, Javier Serrano, Dimitrios Skias, Olga E. Segou, and Sonja Waechter

Artificial Intelligence of Things (AIoT) is one of the next big concepts to support societal changes and economic growth, being one of the fastest-growing ICT segments. A specific challenge is to leverage existing technology strengths to develop solutions that sustain the European industry and values. The ongoing NEMO (“Next Generation Meta-Operating System”) EU-funded project intends to establish itself as the “game changer” of the AIoT-Edge-Cloud continuum by introducing an open-source, modular and cybersecure meta-operating system, leveraging existing technologies and introducing novel concepts, methods, tools, testing and engagement campaigns. NEMO will bring intelligence closer to the data and make AI-as-a-Service an integral part of network self-organisation and micro-services execution orchestration. Its widespread penetration and massive acceptance will be achieved via new technology, pre-commercial exploitation components and liaison with open-source communities.

By defining a modular and adaptable mOS (meta-OS) architecture, together with building blocks and plugins, the project will address current and future technological and business needs.

On the Challenge of Sound Code for Operating Systems

Jonathan Klimt, Martin Kröning, Stefan Lankes, Antonello Monti

The memory-safe systems programming language Rust is gaining more and more attention in the operating system development communities, as it provides memory safety without sacrificing performance or control. However, these safety guarantees only apply to the safe subset of Rust, while bare-metal programming requires some parts of the program to be written in unsafe Rust. Writing abstractions for these parts of the software that are sound, meaning that they guarantee the absence of undefined behavior and thus uphold the invariants of safe Rust, can be challenging. Producing sound code, however, is essential to avoid breakage when the code is used in new ways or the compiler behavior changes.

In this paper, we present common patterns of unsound abstractions derived from our experience of reworking the soundness of our kernel. During this process, we were able to remove over 400 unsafe expressions while discovering and fixing several hard-to-spot concurrency bugs along the way.
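
To illustrate the kind of pattern at stake (a generic example, not one taken from the kernel in question), the sketch below shows a safe-looking function that hands out aliasing mutable references to a mutable static, which is unsound, next to a sound alternative based on atomic interior mutability.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// UNSOUND: the function is callable from safe code, yet two calls yield
// aliasing `&'static mut` references to the same global, which is undefined
// behavior even if no data race happens to occur at runtime.
static mut TICKS: u64 = 0;

#[allow(static_mut_refs)] // recent toolchains lint against this pattern
pub fn ticks_mut() -> &'static mut u64 {
    unsafe { &mut TICKS }
}

// SOUND: interior mutability through an atomic keeps the invariants of
// safe Rust intact; no `unsafe` is needed and concurrent use is well defined.
static TICKS_ATOMIC: AtomicU64 = AtomicU64::new(0);

pub fn tick() -> u64 {
    TICKS_ATOMIC.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    *ticks_mut() += 1; // looks safe at the call site, but the API itself is unsound
    tick();
    println!("atomic ticks: {}", TICKS_ATOMIC.load(Ordering::Relaxed));
}
```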

GPU Acceleration in Unikernels Using Cricket GPU Virtualization

Niklas Eiling, Martin Kröning, Jonathan Klimt, Philipp Fensch, Stefan Lankes, Antonello Monti

Today, large compute clusters increasingly move towards heterogeneous architectures by employing accelerators, such as GPUs, to realize ever-increasing performance. To achieve maximum performance on these architectures, applications have to be tailored to the available hardware by using special APIs to interact with the hardware resources, such as the CUDA APIs for NVIDIA GPUs. Simultaneously, unikernels emerge as a solution to the increasing overhead introduced by the complexity of modern operating systems and their inability to optimize for specific application profiles. Unikernels allow for better static code checking and enable optimizations impossible with monolithic kernels, yielding more robust and faster programs. Despite this, there is a lack of support for using GPUs in unikernels. Due to the proprietary nature of the CUDA APIs, direct support for interacting with NVIDIA GPUs from unikernels is infeasible, leaving applications that require GPUs unsuitable for deployment in unikernels.

We propose using Cricket GPU virtualization to introduce GPU support to the unikernels RustyHermit and Unikraft. To interface with Cricket, we implement a generic library for using ONC RPCs in Rust. With Cricket and our RPC library, unikernels become able to use GPU resources, even when the GPUs are installed in remote machines. In this way, we enable the use of unikernels for applications that require the high parallel performance of GPUs to achieve manageable execution times.
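
The general pattern here is API remoting: the unikernel links against a thin client stub that serializes each GPU call and forwards it to a host-side executor that owns the real device. The sketch below is a purely hypothetical, in-process illustration of such a stub; it does not reflect Cricket's or the authors' RPC library's actual interfaces, and every type, function and wire format is invented for the example.

```rust
// Hypothetical API-remoting sketch: a guest-side stub serializes GPU requests,
// a host-side executor decodes them and returns a reply. In the real setup the
// transport would be ONC RPC over the network; here it is an in-process closure.

#[derive(Debug)]
enum GpuRequest {
    MemAlloc { len: u64 },
    Launch { kernel: String, grid: u32 },
}

#[derive(Debug)]
enum GpuReply {
    Handle(u64),
    Done,
}

/// Guest-side stub: every call is turned into a request and shipped through
/// the transport instead of touching GPU hardware directly.
struct RemoteGpu<F: FnMut(GpuRequest) -> GpuReply> {
    transport: F,
}

impl<F: FnMut(GpuRequest) -> GpuReply> RemoteGpu<F> {
    fn mem_alloc(&mut self, len: u64) -> u64 {
        match (self.transport)(GpuRequest::MemAlloc { len }) {
            GpuReply::Handle(h) => h,
            other => panic!("unexpected reply: {other:?}"),
        }
    }
    fn launch(&mut self, kernel: &str, grid: u32) {
        (self.transport)(GpuRequest::Launch { kernel: kernel.to_string(), grid });
    }
}

fn main() {
    // Mock host-side executor: hands out fake buffer handles and "runs" kernels.
    let mut next_handle = 1u64;
    let mut gpu = RemoteGpu {
        transport: move |req| match req {
            GpuRequest::MemAlloc { len } => {
                println!("host: allocating {len} bytes");
                let h = next_handle;
                next_handle += 1;
                GpuReply::Handle(h)
            }
            GpuRequest::Launch { kernel, grid } => {
                println!("host: launching {kernel} on {grid} blocks");
                GpuReply::Done
            }
        },
    };

    let buf = gpu.mem_alloc(1 << 20);
    gpu.launch("vector_add", 256);
    println!("guest: got buffer handle {buf}");
}
```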

Data Poisoning Attacks in Gossip Learning

Alexandre Pham, Maria Potop-Butucaru, Sébastien Tixeuil, Serge Fdida

Traditional machine learning systems were designed in a centralized manner. In such designs, the central entity maintains both the machine learning model and the data used to adjust the model’s parameters. As data centralization raises privacy issues, Federated Learning was introduced to reduce data sharing and have a central server coordinate the learning of multiple devices. While Federated Learning is more decentralized, it still relies on a central entity that may fail or be subject to attacks, provoking the failure of the whole system. Decentralized Federated Learning, in turn, removes the need for a central server entirely, letting participating processes handle the coordination of model construction. This distributed control makes it pressing to study the possibility of malicious attacks by the participants themselves. While poisoning attacks on Federated Learning have been extensively studied, their effects in Decentralized Federated Learning have not received the same level of attention. Our work is the first to propose a methodology to assess poisoning attacks in Decentralized Federated Learning in both churn-free and churn-prone scenarios. Furthermore, in order to evaluate our methodology on a case study representative of gossip learning, we extended the gossipy simulator with an attack injector module.
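
As a minimal illustration of the setting (a toy sketch, not the gossipy-based methodology of the paper), the code below simulates gossip averaging of model parameters among honest nodes while a single poisoning node keeps injecting an inverted, boosted model. The peer-sampling scheme, the attack model and all names are assumptions for the example.

```rust
use rand::Rng; // external crate: rand = "0.8" (assumed for the sketch)

const DIM: usize = 4;

/// What a node sends when gossiped with: honest nodes share their current
/// model, the poisoner shares an inverted, boosted model (model poisoning).
fn outgoing(model: &[f64; DIM], malicious: bool) -> [f64; DIM] {
    if malicious {
        model.map(|w| -5.0 * w)
    } else {
        *model
    }
}

fn main() {
    let mut rng = rand::thread_rng();
    let num_nodes = 10;
    let poisoner = 0; // node 0 is the attacker

    // All nodes start from the same reference model for simplicity.
    let mut models = vec![[1.0f64; DIM]; num_nodes];

    for _round in 0..100 {
        for i in 0..num_nodes {
            // Pick a random gossip partner (uniform peer sampling).
            let j = rng.gen_range(0..num_nodes);
            if i == j {
                continue;
            }
            let received = outgoing(&models[j], j == poisoner);
            // Pairwise averaging step, as in gossip averaging.
            for d in 0..DIM {
                models[i][d] = 0.5 * (models[i][d] + received[d]);
            }
        }
    }

    println!("honest node 1 after gossip: {:?}", models[1]);
}
```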