Mostafa Jabari, Mohammad Ghoreishi, Tommaso Bragatto, Francesca Santori, Marco Maccioni, Francesco Bellesini
The rapid expansion of electric vehicle (EV) adoption has introduced significant challenges in managing energy demand and infrastructure planning for charging stations. Unpredictable usage patterns and limited real-time control hinder the efficiency and scalability of EV charging networks. Existing forecasting methods often struggle to capture the nonlinear and time-dependent behavior of charging sessions. Recent advancements in machine learning have demonstrated potential for improving prediction accuracy by leveraging historical session data. In this study, we propose a data-driven machine-learning framework to forecast energy consumption at EV charging stations using session-level features from real-world operational data. We compare three regression models, Linear Regression, Random Forest, and Extreme Gradient Boosting (XGBoost), to evaluate their ability to capture complex consumption dynamics. Experimental results reveal that XGBoost significantly outperforms the others, achieving the lowest Mean Absolute Error (1.08 kWh) and Root Mean Squared Error (3.69 kWh) and the highest R² score (0.85). These findings provide actionable insights for optimizing station management, enhancing energy efficiency, and guiding infrastructure expansion.
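A minimal sketch of the model comparison described in this abstract, using scikit-learn and XGBoost. The file name and feature columns (start_hour, weekday, duration_min) are hypothetical stand-ins for the paper's session-level features, not the authors' actual schema.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from xgboost import XGBRegressor

df = pd.read_csv("sessions.csv")  # one row per charging session (hypothetical file)
X = df[["start_hour", "weekday", "duration_min"]]
y = df["energy_kwh"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Linear Regression": LinearRegression(),
    "Random Forest": RandomForestRegressor(n_estimators=300, random_state=42),
    "XGBoost": XGBRegressor(n_estimators=500, learning_rate=0.05, random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.2f} kWh, "
          f"RMSE={rmse:.2f} kWh, R2={r2_score(y_te, pred):.2f}")
```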
Mohammad Ghoreishi, Mostafa Jabari, Tommaso Bragatto, Francesca Santori, Massimo Cresta
Effective parking management is essential for reducing congestion and enhancing urban mobility in smart cities. However, accurately forecasting parking space availability remains challenging due to seasonal and temporal variability. This study proposes a cost-effective forecasting framework based on the Seasonal Autoregressive Integrated Moving Average (SARIMA) model, applied to vehicle inflow and outflow data extracted from camera-based detection systems. By capturing periodic patterns through seasonal differencing and autoregressive terms, SARIMA achieves high forecasting accuracy, evidenced by a Mean Absolute Error (MAE) of 6.38 and Root Mean Square Error (RMSE) of 7.24 across a 24-hour horizon. The model outperforms neural network and regression baselines, particularly during peak periods. The approach is deployed via a real-time dashboard integrated with ASM Terni’s infrastructure, demonstrating its scalability and practical utility. Future extensions include hybrid SARIMA-deep learning models for enhanced generalization.
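A minimal sketch of the SARIMA forecasting step with statsmodels. The (p,d,q)(P,D,Q,s) orders below are illustrative rather than the orders identified in the paper; the daily seasonality s=24 matches the 24-hour horizon the abstract reports.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hourly count of occupied parking spaces derived from camera inflow/outflow
# counts (hypothetical file and column names).
series = pd.read_csv("occupancy.csv", index_col="timestamp",
                     parse_dates=True)["occupied"]

model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=24)  # 24-hour horizon, as in the study
print(forecast)
```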
Mohammad Ghoreishi, Jose Velasquez Spadaro, Gonzalo Murillo, Francesca Santori, Èric Nieto Rodriguez, Juan Carlos Cidoncha Secilla
The increasing complexity of modern electrical grids and the integration of renewable energy sources require asset monitoring and novel management strategies. This paper presents the design, development, and deployment of self-powered IoT-based monitoring devices for transformer health assessment within ASM Terni’s secondary substations. The devices enable real-time monitoring and predictive maintenance by integrating piezoelectric energy harvesting, cloud-based data visualization, and Health Index Factor computation. Initial challenges, including communication continuity and validation against existing infrastructure, were successfully resolved during deployment. Field results demonstrate the system’s operational resilience, scalability, and potential for sub-metering applications. This work highlights the potential of IoT in modernizing grid operations and supporting sustainability goals.
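The abstract does not disclose the Health Index Factor formula, so the following is only a generic weighted-score health index over hypothetical transformer condition indicators, to illustrate the kind of computation involved.

```python
# Illustrative only: indicator names, weights, and the aggregation rule are
# assumptions, not the paper's actual Health Index Factor.
def health_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """scores: condition indicators normalized to [0, 1] (1 = healthy)."""
    total_w = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total_w

indicators = {"oil_temp": 0.9, "load_ratio": 0.7, "vibration": 0.8}
weights = {"oil_temp": 0.5, "load_ratio": 0.3, "vibration": 0.2}
print(f"Health Index: {health_index(indicators, weights):.2f}")
```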
Martin Kröning, Stefan Lankes, Jonathan Klimt, Antonello Monti
Library Operating Systems (libOS) are highly efficient because the entire software stack, from the kernel to the application, is compiled, optimized, and linked together. However, in certain scenarios, such as code injection for network packet analysis or adding custom drivers, it is necessary to extend the kernel as needed. The traditional approach of modifying and recompiling the kernel source code can be time-consuming and error-prone. This paper analyzes the possibility of using WebAssembly (Wasm) to extend an operating system kernel at runtime. Wasm is a portable bytecode format that enables fast execution of language-independent code while prioritizing security and portability. Its type system and bounded memory regions effectively prevent unauthorized data access. A prototype module for analyzing network traffic demonstrates the potential, while performance is evaluated using standard benchmarks. The kernel sandbox proved to be about 20% slower than running the Wasm code in state-of-the-art runtimes on Linux, which is acceptable for a first proof-of-concept.
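A userspace analogue of the idea, assuming the wasmtime Python bindings: a host loads a sandboxed Wasm module and calls an exported function. The paper's prototype embeds a runtime directly inside the libOS kernel instead; this sketch only illustrates the sandboxing model.

```python
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile the portable bytecode
instance = Instance(store, module, [])  # no imports: module cannot touch host state
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # -> 5, executed inside the Wasm sandbox
```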
Olga Segou, Dimitris Skias, Terpsichori-Helen Velivassaki, Theodore Zahariadis, Enric Pages, Rubén Ramiro, Rosaria Rossini, Panagiotis Karkazis, Alejandro Muniz, Luis Contreras
Data Spaces are the vehicle for data-driven innovation and a great opportunity for European industries. Data Spaces will establish a framework for the responsible and ethical use of data, with a focus on data privacy, security, and sovereignty, ensuring seamless exchange of data across different sectors, organizations, and member states. Effective delivery of services within Data Spaces mandates the efficient and effective use of available resources across the IoT, edge, and cloud continuum, while enforcing security and privacy policies and providing the monitoring infrastructure and transaction mechanisms required to ensure data sovereignty, observability, and accountability of data exchanges. Herein, we present the NExt generation Meta Operating systems (NEMO, https://meta-os.eu/) architecture approach embracing Data Spaces, hosting Connectors and services within Data Spaces, delivered as an extensible element of the metaOS architecture. We also propose a Fiware-based Industrial Data Space Connector for NEMO that enables organising and valorising large pools of data, as a significant cornerstone of data-driven innovation.
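A minimal sketch of the kind of interface a Fiware-based connector builds on: publishing a context entity to a FIWARE Orion Context Broker over NGSI-v2. The endpoint and entity schema are illustrative, not NEMO's actual connector API.

```python
import requests

# Hypothetical industrial context entity; NGSI-v2 entity format.
entity = {
    "id": "urn:ngsi-ld:Machine:001",
    "type": "Machine",
    "temperature": {"value": 71.5, "type": "Number"},
}
resp = requests.post("http://localhost:1026/v2/entities", json=entity)
resp.raise_for_status()  # 201 Created on success
```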
Mohamed Amine LEGHERABA, Maria Potop-Butucaru, Sébastien Tixeuil
We present Elevator, a novel algorithm for hub sampling in peer-to-peer networks. Elevator constructs overlays whose topology lies between a random graph and a star network. Our approach uses preferential attachment to form hubs spontaneously, offering a decentralized solution for use cases that require networks with both low diameter and resilience to failures.
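An illustrative sketch of the preferential-attachment mechanism behind hub formation: a joining peer picks a neighbor with probability proportional to its degree, so well-connected peers spontaneously become hubs. The actual protocol is decentralized and message-based; this is a centralized toy model.

```python
import random

def preferential_neighbor(degrees: dict[int, int]) -> int:
    """Pick a node with probability proportional to its current degree."""
    nodes = list(degrees)
    return random.choices(nodes, weights=[degrees[n] for n in nodes])[0]

degrees = {0: 1, 1: 1}  # start with two connected peers
for new_node in range(2, 100):
    hub = preferential_neighbor(degrees)  # rich get richer
    degrees[hub] += 1
    degrees[new_node] = 1

print(max(degrees.items(), key=lambda kv: kv[1]))  # the emergent hub
```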
Pilar Schummer, Alberto del Río, Javier Serrano, David Jimenez, Guillermo Sánchez, Álvaro Llorente
In the last decade, numerous methods have been proposed to define and detect outliers, particularly in complex environments like networks, where anomalies significantly deviate from normal patterns. Although defining a clear standard is challenging, anomaly detection systems have become essential for network administrators to efficiently identify and resolve irregularities. Methods: This study develops and evaluates a machine learning-based system for network anomaly detection, focusing on point anomalies within network traffic. It employs both unsupervised and supervised learning techniques, including change point detection, clustering, and classification models, to identify anomalies. SHAP values are utilized to enhance model interpretability. Results: Unsupervised models effectively captured temporal patterns, while supervised models, particularly Random Forest (94.3%), demonstrated high accuracy in classifying anomalies, closely approximating the actual anomaly rate. Conclusions: Experimental results indicate that the system can accurately predict network anomalies in advance. Congestion and packet loss were identified as key factors in anomaly detection. This study demonstrates the system's potential for real-world deployment, where its scalability can be further validated.
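A sketch of the supervised stage described above: a Random Forest classifier over traffic features, with SHAP values for interpretability. The feature names are hypothetical placeholders (chosen to echo the congestion and packet-loss findings), and the per-class SHAP indexing assumes shap's list-per-class TreeExplainer output.

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("traffic.csv")  # hypothetical labeled traffic dataset
X = df[["congestion", "packet_loss", "latency", "throughput"]]
y = df["anomaly"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)
# shap_values is indexed per class in this API; plot the anomaly class to see
# which features (e.g. congestion, packet loss) drive each prediction.
shap.summary_plot(shap_values[1], X_te)
```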
Alberto del Rio, Álvaro Llorente, Sofia Ortiz-Arce, Maria Belesioti, George Pappas, Alejandro Muñiz, Luis Contreras, Dimitris Christopoulos
The increasing availability of User-Generated Content during large-scale events is transforming spectators into active co-creators of live narratives while simultaneously introducing challenges in managing heterogeneous sources, ensuring content quality, and orchestrating distributed infrastructures. A trial was conducted to evaluate automated orchestration, media enrichment, and real-time quality assessment in a live sporting scenario. A key innovation of this work is the use of a cloud-native architecture based on Kubernetes, enabling dynamic and scalable integration of smartphone streams and remote production tools into a unified workflow. The system also included advanced cognitive services, such as a Video Quality Probe for estimating perceived visual quality and an AI Engine based on YOLO models for detection and recognition of runners and bib numbers. Together, these components enable a fully automated workflow for live production, combining real-time analysis and quality monitoring, capabilities that previously required manual or offline processing. The results demonstrated consistently high Mean Opinion Score (MOS) values, above 3 for 72.92% of the time, confirming acceptable perceived quality under real network conditions, while the AI Engine achieved strong performance with a Precision of 93.6% and a Recall of 80.4%.
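A minimal sketch of the AI Engine's detection step using the ultralytics YOLO API. The checkpoint name "bib_detector.pt" is a hypothetical fine-tuned model, not an artifact released with the trial.

```python
from ultralytics import YOLO

model = YOLO("bib_detector.pt")          # YOLO fine-tuned on runner/bib classes
results = model("frame.jpg", conf=0.5)   # one decoded frame from a UGC stream
for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    print(label, box.conf.item(), box.xyxy.tolist())  # class, score, bounding box
```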
Sofía Ortiz-Arce, Álvaro Llorente, Alberto del Rio, Federico Alvarez
Evaluating Quality of Experience (QoE) is crucial for multimedia services, as it measures user satisfaction with content delivery. This paper presents an objective method for evaluating QoE in adaptive streaming services, using the commercial tool Video-MOS, which allows real-time monitoring and analysis of multimedia content quality across various platforms and networks. The method aims to provide a precise evaluation that incorporates both technical and subjective factors. This approach integrates multiple factors that influence the overall user perception of adaptive streaming quality, offering greater flexibility and performance, which could contribute to more comprehensive future assessments. The method is validated by a test plan that incorporates a variety of content and scenarios to simulate various network conditions. The results demonstrate the method’s effectiveness in predicting QoE, highlighting the rebuffering frequency as a significant factor. In optimal conditions, specific content types can achieve a QoE score as high as 3.36. Conversely, under unfavorable conditions, the QoE may decrease by up to 1.42 Mean Opinion Score (MOS) points, which represents an 80% reduction from its optimal level. Although the rebuffering frequency has a substantial influence, a long initial buffer can have an even more negative effect on QoE, particularly under adverse conditions. Furthermore, adaptive streaming technologies, such as Dynamic Adaptive Streaming over HTTP (MPEG-DASH) and Adaptive Bitrate Streaming (ABR), are integral to the assessment process.
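An illustrative parametric QoE model reflecting the factors the study found influential (rebuffering frequency and initial buffering). The coefficients are made up for illustration and are not Video-MOS's internal model.

```python
def qoe_mos(base_mos: float, rebuffer_events: int,
            initial_buffer_s: float) -> float:
    """Penalize a base MOS for stalls and startup delay; clamp to the 1-5 scale."""
    mos = base_mos - 0.4 * rebuffer_events - 0.05 * initial_buffer_s
    return max(1.0, min(5.0, mos))

print(qoe_mos(3.36, rebuffer_events=0, initial_buffer_s=2))   # near-optimal playback
print(qoe_mos(3.36, rebuffer_events=3, initial_buffer_s=10))  # degraded conditions
```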
Rafael Martínez, Álvaro Llorente, Alberto del Rio, Javier Serrano, David Jimenez
The evolution of telecommunication networks unlocks new possibilities for multimedia services, including enriched and personalized experiences. However, ensuring high Quality of Service and Quality of Experience requires intelligent solutions at the edge. This study investigates the real-time detection of race bib numbers using YOLOv8, a state-of-the-art object detection framework, within the context of 5G/6G edge computing. We train (on the BDBD and SVHN datasets) and analyze various YOLOv8 models (nano to extra-large) across two diverse racing datasets (TGCRBNW and RBNR), encompassing varied environmental conditions (daytime and nighttime). Our assessment focuses on key performance metrics, including processing time, efficiency, and accuracy. For instance, on the TGCRBNW dataset, the extra-large model shows a noticeable reduction in prediction time when the more powerful GPU is used, with times decreasing from 1,161 to 54 seconds on a desktop computer. Similarly, on the RBNR dataset, the extra-large model exhibits a significant reduction in prediction time, from 373 to 15 seconds, when using the more powerful GPU. In terms of accuracy, performance varies across scenarios and datasets. For example, accuracy is insufficient in most scenarios on the TGCRBNW dataset (below 50% in all sets and models), while YOLOv8m achieves high accuracy in several scenarios on the RBNR dataset (almost 80% in the best set). Variability in prediction times was observed between different computer architectures, highlighting the importance of selecting appropriate hardware for specific tasks. These results emphasize the importance of aligning computational resources with the demands of real-world tasks to achieve timely and accurate predictions.
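A sketch of the timing comparison across YOLOv8 model sizes using the ultralytics API. The image folder is a placeholder; the paper reports total prediction time per dataset and GPU (e.g. 1,161 s dropping to 54 s for the largest model on TGCRBNW).

```python
import time
from ultralytics import YOLO

for size in ["n", "s", "m", "l", "x"]:       # nano ... extra-large
    model = YOLO(f"yolov8{size}.pt")
    start = time.perf_counter()
    model.predict("images/", verbose=False)  # batch over a test set directory
    print(f"yolov8{size}: {time.perf_counter() - start:.1f} s")
```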
Alberto del Río, Javier Serrano, David Jiménez, Luis M. Contreras, Federico Alvarez
The massive growth of live streaming, especially gaming-focused content, has led to an overall increase in global bandwidth consumption. Certain services see their quality diminished at times of peak consumption, degrading the delivered content. This trend generates new research related to optimizing image quality according to network and service conditions. In this work, we present the optimization of a gaming streaming use case in a real multisite 5G environment. The paper outlines the virtualized workflow of the use case and provides a detailed description of the applications and resources deployed for the simulation. This simulation tests the optimization of the service based on the addition of Artificial Intelligence (AI) algorithms, assuring the delivery of content with good Quality of Experience (QoE) under different working conditions. The AI introduced is based on Deep Reinforcement Learning (DRL) algorithms that can adapt, in a flexible way, to the different conditions that the multimedia workflow could face. That is, it adapts the streaming bitrate through corrective actions in order to optimize the QoE of the content in a real-time multisite scenario. The results of this work demonstrate how we have been able to minimize content losses and obtain high audiovisual quality at higher bitrates, compared to a service without an optimizer integrated in the system. In a multi-site environment, we have achieved an improvement of 20 percentage points in blockiness efficiency and 15 percentage points in block loss.
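A toy sketch of the reinforcement-learning idea: an agent picks a streaming bitrate and is rewarded for quality delivered minus a penalty for exceeding network capacity. The project uses full DRL algorithms over real QoE metrics; the states, reward, and tabular Q-learning update here are deliberately simplified for illustration.

```python
import random

BITRATES = [2, 4, 8, 16]  # Mbps, the discrete actions
Q = {(s, a): 0.0 for s in ("low_bw", "high_bw") for a in range(len(BITRATES))}
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(state: str, action: int) -> float:
    rate = BITRATES[action]
    capacity = 5 if state == "low_bw" else 20  # Mbps, simulated network state
    # Reward the delivered rate; penalize content losses above capacity.
    return rate if rate <= capacity else -2 * (rate - capacity)

state = "high_bw"
for _ in range(5000):
    a = (random.randrange(len(BITRATES)) if random.random() < eps
         else max(range(len(BITRATES)), key=lambda x: Q[(state, x)]))
    r = reward(state, a)
    nxt = random.choice(["low_bw", "high_bw"])  # network conditions fluctuate
    best_next = max(Q[(nxt, x)] for x in range(len(BITRATES)))
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

for s in ("low_bw", "high_bw"):  # learned bitrate per network state
    print(s, BITRATES[max(range(len(BITRATES)), key=lambda x: Q[(s, x)])], "Mbps")
```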
Rosaria Rossini, Marco Jahn, Anastasios Zafeiropoulos, Nikos Filinis, Dimitrios Spatharakis, Ioannis Dimolitsas, Eleni Fotopoulou, Constantinos Vassilakis, Symeon Papavassiliou, Theodore Zahariadis, Ilias Nektarios Seitanidis, Enric Pages, Guillermo Gomez, Sergiy Remezov Grynchenko
Transparency, collaboration, security, and digital sovereignty are the open source values that are also important to many European countries and institutions. Open source software offers practical advantages, cost savings, and opportunities for innovation that contribute to the region’s technological advancement and competitiveness, and it fosters collaboration among developers and encourages innovation. On top of these values, several European projects try to build a sustainable and innovative future. In this paper the authors present two examples of open source applications of the meta-operating system (metaOS) concept. The first, NExt generation Meta Operating systems (NEMO), builds the future of the Artificial Intelligence of Things (AIoT)-edge-cloud continuum by introducing an open source, modular, and cybersecure metaOS. The second concerns the open source solutions provided by the NEPHELE project to assist the orchestration of distributed applications across resources in the computing continuum. The open source components of the metaOS projects are presented for each functional layer of the architecture, demonstrating the value of open source for metaOS sustainability in a specific use case.