Edge and Fog Computing for the Internet of Things
1. Introduction
Over the last few years, the number of interconnected devices within the context of the Internet of Things (IoT) has grown rapidly; some estimates place the total number of IoT-connected devices in 2023 at roughly 17 billion. While this figure may appear substantial, it is still limited when compared with the total number of people on Earth, and it is thus expected to rise significantly in the next few years. Several new application sectors are expected to emerge alongside the rapid increase in the performance of all the technological segments of IoT infrastructures.
Embedded systems in particular have experienced rapid growth in their computational capabilities: the clock frequency of microcontrollers and microprocessors now approaches that of common desktop computers, while their cost remains low and their power consumption continues to decrease. This trend goes hand in hand with the availability of IoT-devoted data transmission technologies, which are being extensively exploited to move several data processing tasks from the cloud towards the edge of the network. Alongside the technologies traditionally employed for data transmission, such as WiFi and Bluetooth, new ones have been developed that focus specifically on the requirements of the IoT: these support the transmission of small quantities of data (a few kB or even less) with greatly reduced power consumption and negligible cost.
While the availability of low-cost hardware platforms enables the implementation of artificial intelligence (AI) algorithms at the edge, in the cloud, and throughout the network, these data transmission technologies allow pervasive data exchange among IoT nodes, as well as the distribution of the computational load according to the fog computing paradigm. Such architectural flexibility is opening the way to a plethora of applications that fully exploit the computing power available at all levels. The definition of the ideal computing model is no longer tied only to the computational performance of individual processing nodes, but also to the best trade-off between this performance and other requirements, such as reducing cost and power consumption, easing system deployment, and minimizing the latency between computation and actuation.
2. Contributions
The four contributions included in this Special Issue tackle different aspects of distributed computing architectures within the IoT domain. While some focus on specific technical aspects, others describe novel applications and innovative approaches to specific problems.
Funding
This research received no external funding.
Acknowledgments
The Editor wishes to thank all the authors who, with their knowledge and research outcomes, contributed to this Special Issue. Additional thanks to the Reviewers who, through their crucial work, allowed the improvement and publication of high-profile contributions.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Fic, P.; Czornik, A.; Rosikowski, P. Anomaly Detection for Hydraulic Power Units—A Case Study. Future Internet 2023, 15, 206. [Google Scholar] [CrossRef]
- Tang, X.; Chen, F.; He, Y. Intelligent Video Streaming at Network Edge: An Attention-Based Multiagent Reinforcement Learning Solution. Future Internet 2023, 15, 234. [Google Scholar] [CrossRef]
- Spiekermann, D.; Keller, J. Challenges of Network Forensic Investigation in Fog and Edge Computing. Future Internet 2023, 15, 342. [Google Scholar] [CrossRef]
- Poltronieri, F.; Stefanelli, C.; Tortonesi, M.; Zaccarini, M. Reinforcement Learning vs. Computational Intelligence: Comparing Service Management Approaches for the Cloud Continuum. Future Internet 2023, 15, 359. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.