Cloud Computing Beyond

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 42254

Special Issue Editor


Guest Editor
Department of Computer Science & Engineering, College of Software, Kyung Hee University, Seoul 02447, Republic of Korea
Interests: cloud computing; the Internet of Things; future internet; distributed real-time systems; mobile computing

Special Issue Information

Dear Colleagues,

Cloud computing has become an essential infrastructure in the ICT industry. SaaS, PaaS, and IaaS are now widely used in both enterprise and personal computing. Many cloud services require interoperability in order to extend their service capabilities and business markets. In addition, AI (artificial intelligence)-based applications are emerging across many industries, and cloud computing provides the computational capacity and fast response times needed to train them. Furthermore, cloud services are migrating to edge nodes to support real-time services as well as AI applications. Conventional virtual-machine-based cloud services are challenged by many emerging issues. Thus, the distributed cloud (that is, the distribution of cloud capabilities to the edge of the network) can be considered a new paradigm that integrates with the edge cloud, where resources are virtualized and shared among CSPs (cloud service providers) over high-performance 5G networks. Future computing built on cloud infrastructures, here called "cloud computing beyond", must address the technical challenges listed in the keywords below. Other challenging topics are also welcome in this Special Issue.

Prof. Dr. Eui-Nam Huh
Guest Editor

Keywords

  • real-time cloud services
  • cloud infrastructure for AI
  • distributed cloud with 5G
  • parallel and distributed deep learning
  • edge cloud resource provisioning
  • load balancing in edge cloud
  • micro-services-based services and systems
  • container management
  • offloading
  • security
  • trust and forensics

Published Papers (13 papers)


Research

21 pages, 4708 KiB  
Article
Containerized Microservices Orchestration and Provisioning in Cloud Computing: A Conceptual Framework and Future Perspectives
by Abdul Saboor, Mohd Fadzil Hassan, Rehan Akbar, Syed Nasir Mehmood Shah, Farrukh Hassan, Saeed Ahmed Magsi and Muhammad Aadil Siddiqui
Appl. Sci. 2022, 12(12), 5793; https://doi.org/10.3390/app12125793 - 7 Jun 2022
Cited by 13 | Viewed by 4860
Abstract
Cloud computing is a rapidly growing paradigm that has evolved from a monolithic to a microservices architecture. The importance of cloud data centers has expanded dramatically over the past decade, and they are now regarded as the backbone of the modern economy. Cloud-based microservices architectures have been adopted by firms such as Netflix, Twitter, eBay, Amazon, Hailo, Groupon, and Zalando. Such cloud computing arrangements handle the parallel deployment of data-intensive workloads in real time. Moreover, commonly used cloud services such as the web and email require continuous operation without interruption. For that purpose, cloud service providers must optimize resource management, energy efficiency, and carbon footprint reduction. This study presents a conceptual framework for managing large volumes of microservice executions while reducing response time, energy consumption, and execution costs. The proposed framework suggests four key agent services: (1) intelligent partitioning, responsible for microservice classification; (2) dynamic allocation, which distributes microservices among containers before execution and then makes runtime allocation decisions; (3) resource optimization, in charge of shifting workloads and ensuring optimal resource use; and (4) mutation actions, procedures that mutate microservices based on cloud data center workloads. The suggested framework was partially evaluated using a custom-built simulation environment, which demonstrated its efficiency and potential for implementation in a cloud computing context. The findings show that the integration of the suggested services can lead to fewer network calls, lower energy consumption, and relatively reduced carbon dioxide emissions.
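As a rough illustration of how the four agent services could interact, the following Python sketch wires hypothetical partitioning, allocation, and optimization agents around toy microservice and container types. The mutation agent is omitted, and all names, fields, and thresholds are invented rather than taken from the paper:

```python
# A minimal sketch, with entirely hypothetical names, of the agent pipeline:
# partitioning classifies a microservice, allocation places it in a container,
# and optimization rebalances skewed containers.
from dataclasses import dataclass, field

@dataclass
class Microservice:
    name: str
    cpu: float                     # requested cores
    latency_class: str = "batch"   # set by the partitioning agent

@dataclass
class Container:
    cid: int
    capacity: float
    services: list = field(default_factory=list)

    def load(self):
        return sum(s.cpu for s in self.services)

def intelligent_partitioning(ms):
    # Invented rule: small services are treated as latency-sensitive.
    ms.latency_class = "realtime" if ms.cpu < 0.5 else "batch"

def dynamic_allocation(ms, containers):
    # Pre-execution placement: first container with enough spare capacity.
    for c in containers:
        if c.capacity - c.load() >= ms.cpu:
            c.services.append(ms)
            return c
    raise RuntimeError("no capacity; resource optimization should rebalance")

def resource_optimization(containers):
    # Shift work from the most- to the least-loaded container when skewed.
    containers.sort(key=Container.load)
    if containers[-1].load() - containers[0].load() > 1.0 and containers[-1].services:
        containers[0].services.append(containers[-1].services.pop())

containers = [Container(0, 4.0), Container(1, 4.0)]
for ms in [Microservice("auth", 0.4), Microservice("report", 2.5)]:
    intelligent_partitioning(ms)
    dynamic_allocation(ms, containers)
resource_optimization(containers)
```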

13 pages, 419 KiB  
Article
CLAP-PRE: Certificateless Autonomous Path Proxy Re-Encryption for Data Sharing in the Cloud
by Chengdong Ren, Xiaolei Dong, Jiachen Shen, Zhenfu Cao and Yuanjian Zhou
Appl. Sci. 2022, 12(9), 4353; https://doi.org/10.3390/app12094353 - 25 Apr 2022
Cited by 2 | Viewed by 1379
Abstract
In e-health systems, patients encrypt their personal health data for privacy purposes and upload them to the cloud. Patients often need to share their health data with doctors for healing purposes, in an order of their own choosing. To achieve this fine-grained access control over delegation paths, researchers have designed a new proxy re-encryption (PRE) scheme called autonomous path proxy re-encryption (AP-PRE), in which the delegator controls the whole delegation path in a multi-hop delegation process. In this paper, we introduce a certificateless autonomous path proxy re-encryption (CLAP-PRE) scheme using multilinear maps, which holds the properties of both certificateless encryption and autonomous path proxy re-encryption. In the proposed scheme, (a) each user has two public keys (the user's identity and a traditional public key) with corresponding private keys, and (b) each ciphertext is first re-encrypted from a public key encryption (PKE) scheme to an identity-based encryption (IBE) scheme and then transformed within the IBE scheme. Our scheme is an IND-CPA secure CLAP-PRE scheme under the k-multilinear decisional Diffie–Hellman (k-MDDH) assumption in the random oracle model.

24 pages, 2794 KiB  
Article
CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework
by Juncal Alonso, Leire Orue-Echevarria and Maider Huarte
Appl. Sci. 2022, 12(9), 4347; https://doi.org/10.3390/app12094347 - 25 Apr 2022
Cited by 6 | Viewed by 2609
Abstract
The current trend of developing highly distributed, context aware, heterogeneous computing intense and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and with flexible edge devices available, an ecosystem of a combination of resources, ranging from high density compute and storage to very lightweight embedded computers running on batteries or solar power, is available for DevOps teams from what is known as the Cloud Continuum. In this dynamic context, manageability is key, as well as controlled operations and resources monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud and network services) is complex and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications where, however, they must ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow (extending the traditional DevOps pipeline), proposing techniques and methods for applications’ operators to fully embrace the possibilities of the Cloud Continuum. Our approach will support DevOps teams in the operationalization of the Cloud Continuum. Secondly, we provide an extensive explanation of the scope, possibilities and future of the CloudOps. Full article
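The operational loop implied by CloudOps can be pictured with a small sketch: monitor a metric, detect an SLA anomaly, and autonomously re-deploy across Cloud Continuum layers. Everything below (layer names, thresholds, the random latency probe) is an illustrative assumption, not the paper's framework:

```python
# A compact, hypothetical closed operational loop: monitor a deployment,
# detect an SLA violation, and trigger an autonomous re-deployment to the
# next Continuum layer (edge -> fog -> cloud) while the service keeps running.
import random
import time

LAYERS = ["edge", "fog", "cloud"]

def read_latency_ms(layer):
    return random.uniform(5, 120)            # stand-in for real monitoring

def redeploy(app, src, dst):
    print(f"re-deploying {app}: {src} -> {dst}")

def cloudops_loop(app, layer="edge", sla_ms=50, cycles=5):
    for _ in range(cycles):
        latency = read_latency_ms(layer)
        if latency > sla_ms and layer != "cloud":
            nxt = LAYERS[LAYERS.index(layer) + 1]
            redeploy(app, layer, nxt)        # autonomous (re-)deployment
            layer = nxt
        time.sleep(0.1)                      # monitoring interval
    return layer

cloudops_loop("sensor-analytics")
```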

20 pages, 3293 KiB  
Article
Analysis of Complexity and Performance for Automated Deployment of a Software Environment into the Cloud
by Marian Lăcătușu, Anca Daniela Ionita, Florin Daniel Anton and Florin Lăcătușu
Appl. Sci. 2022, 12(9), 4183; https://doi.org/10.3390/app12094183 - 21 Apr 2022
Cited by 5 | Viewed by 2063
Abstract
Moving to the cloud is a topic present in virtually all enterprises that have digitalized their activities. This includes the need to work with software environments specific to various business domains, accessed as services supported by various cloud providers. Besides provisioning, other important issues to consider for cloud services are complexity and performance. This paper evaluates the processes to be followed for deploying such a software environment in the cloud and compares the manual and automated methods in terms of complexity. We consider several metrics that address multiple concerns: the multitude of independent paths, the capability to distinguish small changes in the process structure, and the complexity of the human tasks, for which specific metrics are proposed. We show that the manual deployment process is two to seven times more complex than the automated one, depending on the metrics applied. This demonstrates the importance of automation for making such a service more accessible to enterprises, regardless of their level of technical know-how in cloud computing. In addition, performance is tested for an example environment, and possibilities for extending to multicloud are discussed.
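One of the concerns the abstract names, the multitude of independent paths, is commonly captured by McCabe's cyclomatic complexity, V(G) = E - N + 2P for a process graph with E edges, N nodes, and P connected components. The node and edge counts in the sketch below are invented purely to show the computation; they are not the paper's measured deployment processes:

```python
# A toy illustration of the "multitude of independent paths" metric using
# McCabe's cyclomatic complexity V(G) = E - N + 2P on a process graph.
# Node/edge counts are invented for illustration only.
def cyclomatic(edges, nodes, components=1):
    return edges - nodes + 2 * components

manual    = {"nodes": 24, "edges": 31}   # many manual decision points
automated = {"nodes": 10, "edges": 11}   # mostly linear pipeline

vm = cyclomatic(manual["edges"], manual["nodes"])
va = cyclomatic(automated["edges"], automated["nodes"])
print(f"manual V(G)={vm}, automated V(G)={va}, ratio={vm / va:.1f}x")
# -> manual V(G)=9, automated V(G)=3, ratio=3.0x (within the 2-7x range reported)
```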

15 pages, 3407 KiB  
Article
A Resource Utilization Prediction Model for Cloud Data Centers Using Evolutionary Algorithms and Machine Learning Techniques
by Sania Malik, Muhammad Tahir, Muhammad Sardaraz and Abdullah Alourani
Appl. Sci. 2022, 12(4), 2160; https://doi.org/10.3390/app12042160 - 18 Feb 2022
Cited by 28 | Viewed by 5389
Abstract
Cloud computing has revolutionized the modes of computing. Alongside its huge success and diverse benefits, the paradigm also faces several challenges, including power consumption, dynamic resource scaling, and over- and under-provisioning. Research on resource utilization prediction has been carried out to overcome over- and under-provisioning issues. Over-provisioning of resources consumes more energy and leads to high costs, while under-provisioning induces Service Level Agreement (SLA) violations and Quality of Service (QoS) degradation. Most existing mechanisms focus on predicting the utilization of a single resource, such as memory, CPU, storage, network, or servers allocated to cloud applications, but overlook the correlation among resources. This research focuses on multi-resource utilization prediction using a Functional Link Neural Network (FLNN) with a hybrid Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The proposed technique is evaluated on Google cluster trace data. Experimental results show that the proposed model yields better accuracy than traditional techniques.
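To make the FLNN idea concrete, the sketch below expands each input feature with trigonometric basis functions and fits the linear readout weights with a plain PSO. The paper's actual model uses a hybrid GA/PSO and real Google cluster traces; the data, expansion choice, and hyperparameters here are illustrative assumptions:

```python
# A minimal FLNN sketch: trigonometric functional-link expansion plus a
# linear readout, with the weights fitted by a simple PSO (the paper uses
# a GA/PSO hybrid). All data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def expand(X):
    """Trigonometric functional-link expansion of each input feature."""
    return np.hstack([X, np.sin(np.pi * X), np.cos(np.pi * X)])

def predict(W, X):
    return expand(X) @ W        # linear readout over the expanded features

def mse(W, X, y):
    return float(np.mean((predict(W, X) - y) ** 2))

def pso_fit(X, y, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = expand(X).shape[1]
    pos = rng.normal(size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([mse(p, X, y) for p in pos])
    gbest = pbest[np.argmin(pbest_err)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        err = np.array([mse(p, X, y) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[np.argmin(pbest_err)].copy()
    return gbest

# Toy stand-in for CPU/memory utilization history (e.g., cluster traces).
X = rng.random((200, 2))                             # [cpu_t, mem_t]
y = 0.6 * X[:, 0] + 0.3 * np.sin(np.pi * X[:, 1])    # synthetic next-step CPU
W = pso_fit(X, y)
print("training MSE:", mse(W, X, y))
```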

32 pages, 9760 KiB  
Article
Machine Learning Based on Resampling Approaches and Deep Reinforcement Learning for Credit Card Fraud Detection Systems
by Tran Khanh Dang, Thanh Cong Tran, Luc Minh Tuan and Mai Viet Tiep
Appl. Sci. 2021, 11(21), 10004; https://doi.org/10.3390/app112110004 - 26 Oct 2021
Cited by 17 | Viewed by 4133
Abstract
The problem of imbalanced datasets is a significant concern when creating reliable credit card fraud (CCF) detection systems. In this work, we study and evaluate recent advances in machine learning (ML) algorithms and deep reinforcement learning (DRL) for CCF detection systems with fraud and non-fraud labels. Two resampling approaches, SMOTE and ADASYN, are used to resample the imbalanced CCF dataset. ML algorithms are then applied to this balanced dataset to build CCF detection systems. Next, DRL is employed to create detection systems based on the imbalanced CCF dataset. Diverse classification metrics are used to thoroughly evaluate the performance of these ML and DRL models. Through empirical experiments, we identify how reliable the ML models based on the two resampling approaches and the DRL models are for CCF detection. When SMOTE and ADASYN are used to resample the original CCF dataset before the training/test split, the ML models show very high outcomes of above 99% accuracy. However, when these techniques are employed to resample only the training portion of the CCF dataset, the ML models show lower results; in particular, logistic regression achieves 1.81% precision and a 3.55% F1 score when using ADASYN. Our work reveals that the DRL model is ineffective and achieves low performance, with only 34.8% accuracy.
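The headline result, that resampling before the train/test split inflates scores while resampling only the training set gives realistic (lower) numbers, is easy to reproduce in miniature. The sketch below uses scikit-learn with the imbalanced-learn package on a synthetic dataset; it mirrors the evaluation pitfall rather than the paper's exact pipeline:

```python
# A minimal sketch of the evaluation pitfall: SMOTE *before* the split leaks
# synthetic neighbors of test points into training and inflates the scores,
# while resampling only the training set gives a realistic estimate.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE   # assumes imbalanced-learn installed

X, y = make_classification(n_samples=5000, weights=[0.99], random_state=0)

# Wrong: oversample first, then split (test points resemble training data).
Xr, yr = SMOTE(random_state=0).fit_resample(X, y)
Xtr, Xte, ytr, yte = train_test_split(Xr, yr, random_state=0)
leaky = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("resample-then-split F1:", f1_score(yte, leaky.predict(Xte)))

# Right: split first, oversample the training portion only.
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
Xtr_r, ytr_r = SMOTE(random_state=0).fit_resample(Xtr, ytr)
honest = LogisticRegression(max_iter=1000).fit(Xtr_r, ytr_r)
print("split-then-resample F1:", f1_score(yte, honest.predict(Xte)))
```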

17 pages, 529 KiB  
Article
Energy-Efficient Load Balancing Algorithm for Workflow Scheduling in Cloud Data Centers Using Queuing and Thresholds
by Nimra Malik, Muhammad Sardaraz, Muhammad Tahir, Babar Shah, Gohar Ali and Fernando Moreira
Appl. Sci. 2021, 11(13), 5849; https://doi.org/10.3390/app11135849 - 23 Jun 2021
Cited by 22 | Viewed by 3541
Abstract
Cloud computing is a rapidly growing technology that has been adopted in various fields in recent years, such as business, research, industry, and computing. Cloud computing provides different services over the internet, thus eliminating the need for personalized hardware and other resources. Cloud computing environments face challenges in terms of resource utilization, energy efficiency, heterogeneous resources, etc. Task scheduling and virtual machine (VM) consolidation techniques are used to tackle these issues. Task scheduling has been extensively studied in the literature, with different parameters and objectives. In this article, we address the problem of energy consumption and efficient resource utilization in virtualized cloud data centers. The proposed algorithm is based on task classification and thresholds for efficient scheduling and better resource utilization. In the first phase, workflow tasks are pre-processed to avoid bottlenecks by placing tasks with more dependencies and long execution times in separate queues. In the next step, tasks are classified based on the intensity of the required resources. Finally, Particle Swarm Optimization (PSO) is used to select the best schedules. Experiments were performed to validate the proposed technique. Comparative results obtained on benchmark datasets show the effectiveness of the proposed algorithm over the algorithms to which it was compared in terms of energy consumption, makespan, and load balancing.
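A minimal sketch of the first two phases, queueing by dependency and execution-time thresholds and then classifying by resource intensity, might look as follows; the thresholds, task fields, and two-queue layout are assumptions, and the PSO scheduling stage is omitted:

```python
# Pre-processing and classification sketch: tasks with many dependencies or
# long execution times go to a separate "hot" queue to avoid bottlenecks,
# then each task is tagged by its dominant resource intensity.
from dataclasses import dataclass
from collections import deque

DEP_THRESHOLD, TIME_THRESHOLD = 5, 100.0   # assumed tuning knobs

@dataclass
class Task:
    name: str
    deps: int
    exec_time: float
    cpu: float
    io: float

def preprocess(tasks):
    hot, normal = deque(), deque()
    for t in tasks:
        q = hot if (t.deps > DEP_THRESHOLD or t.exec_time > TIME_THRESHOLD) else normal
        q.append(t)
    return hot, normal

def classify(task):
    return "cpu-intensive" if task.cpu >= task.io else "io-intensive"

hot, normal = preprocess([Task("etl", 8, 240.0, 0.9, 0.4),
                          Task("thumb", 1, 3.0, 0.2, 0.7)])
for t in list(hot) + list(normal):
    print(t.name, classify(t))
```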

20 pages, 1227 KiB  
Article
Brainware Computing: Concepts, Scopes and Challenges
by Eui-Nam Huh and Md Imtiaz Hossain
Appl. Sci. 2021, 11(11), 5303; https://doi.org/10.3390/app11115303 - 7 Jun 2021
Cited by 6 | Viewed by 2898
Abstract
Over the decades, robotics technology has advanced substantially through the progression of 5G Internet, Artificial Intelligence (AI), Internet of Things (IoT), Cloud, and Edge Computing. Although cobots and Service-Oriented Architecture (SOA)-supported robots with edge computing paradigms have achieved remarkable performance in diverse applications, existing SOA robotics technology cannot turn a single high-performing robot into a multi-domain expert. It calls for an improved Service-Oriented Brain (SOB), comprising an AI model, a driving service application, and metadata, that enables a robot to deploy a brain, as well as a new computing model with more scalability and flexibility. In this paper, instead of focusing on SOA and the Robot as a Service (RaaS) model, we propose a novel computing architecture, which we call Brainware Computing, for driving multiple domain-specific brains one at a time in a single hardware robot according to the required service, delivered as Brain as a Service (BaaS). In Brainware Computing, each robot can install and remove a virtual machine containing the SOB and operating applications from the nearest edge cloud. We then provide an extensive explanation of the scope and possibilities of Brainware Computing. Finally, we discuss several challenges and opportunities and conclude with future research directions in the field of Brainware Computing.

18 pages, 1376 KiB  
Article
HP-SFC: Hybrid Protection Mechanism Using Source Routing for Service Function Chaining
by Syed M. Raza, Haekwon Jeong, Moonseong Kim and Hyunseung Choo
Appl. Sci. 2021, 11(11), 5245; https://doi.org/10.3390/app11115245 - 4 Jun 2021
Cited by 1 | Viewed by 2017
Abstract
Service Function Chaining (SFC) is an emerging paradigm that aims to provide flexible service deployment, lifecycle management, and scaling in a micro-service architecture. An SFC is defined as a logically connected list of ordered Service Functions (SFs) that require high availability to maintain the user experience. The SFC protection mechanism is one way to ensure high availability, achieved by proactively deploying backup SFs and installing backup paths in the network. Recent studies have focused on ensuring the availability of backup SFs but have overlooked SFC unavailability due to network failures. This paper extends our previous work to propose a Hybrid Protection mechanism for SFC (HP-SFC) that divides an SFC into segments and combines the merits of local and global failure recovery approaches to define an installation policy for backup paths. A novel labeling technique labels SFs instead of SFCs; the labels are stacked in the order of the SFs within a particular SFC before being inserted into the packet header for traffic steering through segment routing. Emulation results showed that HP-SFC recovered an SFC from failure within 20–25 ms depending on the topology, and reduced the flow entries for backup paths by at least 8.9% and at most 64.5%. Moreover, the results confirmed that the segmentation approach made HP-SFC less susceptible to changes in network topology than other protection schemes.
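The labeling idea can be shown in a few lines: each SF gets a label, and a chain's forwarding state is just those labels stacked in SF order so that each hop pops the next one. Label values and packet layout below are invented for illustration:

```python
# A toy sketch of per-SF labeling: the chain's path is expressed by stacking
# SF labels (first SF on top) into the packet header for segment routing.
SF_LABELS = {"firewall": 101, "nat": 102, "ids": 103, "lb": 104}

def build_label_stack(chain):
    # Push in reverse so the first SF's label ends up on top of the stack.
    return [SF_LABELS[sf] for sf in reversed(chain)]

packet = {"payload": b"...",
          "labels": build_label_stack(["firewall", "ids", "lb"])}

def next_hop(packet):
    # Each hop pops the top label and forwards toward that SF instance.
    return packet["labels"].pop()

print(next_hop(packet))   # 101 -> steer to "firewall" first
print(next_hop(packet))   # 103 -> then "ids"
```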

16 pages, 3455 KiB  
Article
AAAA: SSO and MFA Implementation in Multi-Cloud to Mitigate Rising Threats and Concerns Related to User Metadata
by Muhammad Iftikhar Hussain, Jingsha He, Nafei Zhu, Fahad Sabah, Zulfiqar Ali Zardari, Saqib Hussain and Fahad Razque
Appl. Sci. 2021, 11(7), 3012; https://doi.org/10.3390/app11073012 - 27 Mar 2021
Cited by 4 | Viewed by 3468
Abstract
In the modern digital era, almost everyone is partially or fully integrated with cloud computing, accessing numerous cloud models, services, and applications. A multi-cloud blends several well-known cloud models under a single umbrella to meet requirements of a distinct nature and realm under one service level agreement (SLA). As the flood of services, applications, and data access over the Internet rises, the confidentiality of end users' credentials is at increasing risk. Users typically need to authenticate multiple times to obtain authority and access the desired services or applications. In this research, we propose a secure scheme to mitigate the multiple authentications usually required from a particular user. In the proposed model, a federated trust is created between two different domains: consumer and provider. All traffic arriving at the service provider is divided into three phases based on the risk to the user's data. Single sign-on (SSO) and multifactor authentication (MFA) are deployed to provide authentication, authorization, accountability, and availability (AAAA) and to ensure the security and confidentiality of end users' credentials. The proposed solution exploits the finding that MFA achieves a better AAAA pattern than SSO.
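A compact sketch of the three-phase idea: traffic is bucketed by a risk score, and each phase demands progressively more authentication factors on top of the SSO token. The phase boundaries and factor names are illustrative assumptions, not the paper's exact policy:

```python
# Hypothetical risk-phase routing: low-risk traffic passes with the SSO
# token alone, higher-risk phases require additional MFA factors.
def risk_phase(score):
    if score < 0.3:
        return 1
    return 2 if score < 0.7 else 3

def required_factors(phase):
    return {1: ["sso_token"],
            2: ["sso_token", "otp"],
            3: ["sso_token", "otp", "hardware_key"]}[phase]

def authorize(presented, score):
    needed = required_factors(risk_phase(score))
    return all(f in presented for f in needed)

print(authorize({"sso_token"}, 0.2))                         # True: phase 1
print(authorize({"sso_token"}, 0.8))                         # False: needs MFA
print(authorize({"sso_token", "otp", "hardware_key"}, 0.8))  # True: phase 3
```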

18 pages, 5569 KiB  
Article
Providing Predictable Quality of Service in a Cloud-Based Web System
by Krzysztof Zatwarnicki
Appl. Sci. 2021, 11(7), 2896; https://doi.org/10.3390/app11072896 - 24 Mar 2021
Cited by 6 | Viewed by 1694
Abstract
Cloud-computing web systems and services have revolutionized the web; nowadays, they are the most important part of the Internet. Cloud-computing systems give businesses the opportunity to undergo digital transformation in order to improve efficiency and reduce costs. The sudden shutdown of schools and offices during the COVID-19 pandemic significantly increased the demand for cloud solutions. Load balancing and sharing mechanisms are implemented to reduce costs and increase the quality of web services. Using those methods with adaptive intelligent algorithms can deliver the highest, and a predictable, quality of service. In this article, a new HTTP request-distribution method for a two-layer architecture of a cluster-based web system is presented. The method provides efficient processing and predictable quality by servicing requests within adopted time constraints. The proposed decision algorithms utilize fuzzy-neural models to estimate service times. This article describes the new solution and presents the results of experiments in which the proposed method is compared with other intelligent approaches, such as Fuzzy-Neural Request Distribution, and with distribution methods often used in production systems.
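The core decision, sending a request to a server whose estimated service time still satisfies the time constraint, can be sketched as below. The paper estimates service times with fuzzy-neural models; here a fabricated load-scaled estimate stands in for that model, and the pick-the-slowest-feasible-server rule is one plausible reading of deadline-aware distribution, not the paper's exact algorithm:

```python
# Deadline-aware distribution sketch: estimate each server's service time
# for the request class and pick the slowest server that still meets the
# deadline, keeping faster servers free for tighter constraints.
def estimate_service_time(server, req_class, load):
    base = {"static": 5.0, "dynamic": 40.0}[req_class]   # ms, invented
    return base * (1.0 + load[server])                   # grows with load

def pick_server(servers, load, req_class, deadline_ms):
    estimates = [(estimate_service_time(s, req_class, load), s)
                 for s in servers]
    feasible = [(t, s) for t, s in estimates if t <= deadline_ms]
    if not feasible:
        return None                    # constraint cannot be met anywhere
    return max(feasible)[1]            # slowest server that still fits

load = {"s1": 0.2, "s2": 1.5, "s3": 3.0}
print(pick_server(load.keys(), load, "dynamic", deadline_ms=120.0))  # -> s2
```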

17 pages, 4558 KiB  
Article
A Cloud-Based UTOPIA Smart Video Surveillance System for Smart Cities
by Chel-Sang Yoon, Hae-Sun Jung, Jong-Won Park, Hak-Geun Lee, Chang-Ho Yun and Yong Woo Lee
Appl. Sci. 2020, 10(18), 6572; https://doi.org/10.3390/app10186572 - 20 Sep 2020
Cited by 10 | Viewed by 2968
Abstract
A smart city is a future city that enables citizens to enjoy Information and Communication Technology (ICT)-based smart services with any device, anytime, anywhere. It heavily utilizes the Internet of Things and includes many video cameras that provide various kinds of services for smart cities. Video cameras continuously feed big video data to the smart city system, which needs to process that data as quickly as possible. This is a very challenging task because substantial computational power is required to shorten the processing time. This paper introduces UTOPIA Smart Video Surveillance, which analyzes big video images using MapReduce, for smart cities. We implemented the smart video surveillance in our middleware platform. This paper explains its mechanism, implementation, and operation, and presents performance evaluation results confirming that the system works well and is scalable, efficient, reliable, and flexible.
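The MapReduce formulation can be pictured in miniature: the map step analyzes individual frames independently (and thus in parallel), and the reduce step aggregates per-camera results. The detector and data below are toy stand-ins; the real system runs on a MapReduce cluster inside the UTOPIA middleware:

```python
# Toy MapReduce-style frame analysis: map detects per frame in parallel,
# reduce aggregates detection counts per camera.
from collections import defaultdict
from multiprocessing import Pool

def map_frame(frame):
    cam, idx, pixels = frame
    detected = sum(pixels) > 10          # stand-in for real object detection
    return (cam, 1 if detected else 0)

def reduce_counts(mapped):
    totals = defaultdict(int)
    for cam, hit in mapped:
        totals[cam] += hit
    return dict(totals)

frames = [("cam1", i, [i % 7, 9]) for i in range(100)] + \
         [("cam2", i, [0, 1]) for i in range(100)]

if __name__ == "__main__":
    with Pool(4) as pool:
        mapped = pool.map(map_frame, frames)   # parallel "map" step
    print(reduce_counts(mapped))               # detections per camera
```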

23 pages, 3055 KiB  
Article
Fuzzy Based Collaborative Task Offloading Scheme in the Densely Deployed Small-Cell Networks with Multi-Access Edge Computing
by Md Delowar Hossain, Tangina Sultana, VanDung Nguyen, Waqas ur Rahman, Tri D. T. Nguyen, Luan N. T. Huynh and Eui-Nam Huh
Appl. Sci. 2020, 10(9), 3115; https://doi.org/10.3390/app10093115 - 29 Apr 2020
Cited by 18 | Viewed by 2996
Abstract
With the accelerating development of 5G networks and Internet of Things (IoT) applications, multi-access edge computing (MEC) in a small-cell network (SCN) is designed to serve computation-intensive and latency-sensitive applications through task offloading. However, without collaboration, the resources of a single MEC server are wasted or overloaded by different service requests and applications, which increases the user's task failure rate and task duration. Meanwhile, an individual MEC server faces challenges in determining where an offloaded task should be processed, because the system can hardly predict the demand of end users in advance. As a result, the quality of service (QoS) deteriorates because of service interruptions and long execution and waiting times. To improve the QoS, we propose a novel fuzzy logic-based collaborative task offloading (FCTO) scheme in MEC-enabled, densely deployed small-cell networks. In FCTO, the delay sensitivity of the QoS is used as a fuzzy input parameter to decide where offloading a task is beneficial. The key is that MEC servers share computation resources with one another, using a fuzzy logic approach to select the target MEC server for task offloading. As a result, the MEC system can accommodate a greater computation workload and reduce reliance on the remote cloud. Simulation results show that the proposed scheme provides the best performance in all scenarios compared with other baseline algorithms in terms of average task failure rate, task completion time, and server utilization.
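A minimal sketch of the fuzzy decision step: triangular membership functions grade a task's delay sensitivity and a candidate server's load, and a couple of rules score whether offloading to that server is beneficial. The membership shapes, rules, and weights are illustrative assumptions rather than FCTO's actual rule base:

```python
# Fuzzy offloading sketch: grade delay sensitivity and neighbor load with
# triangular memberships, score each candidate, and pick the best target.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_score(delay_sensitivity, server_load):
    low_sens  = tri(delay_sensitivity, -0.5, 0.0, 0.6)
    high_sens = tri(delay_sensitivity, 0.4, 1.0, 1.5)
    light     = tri(server_load, -0.5, 0.0, 0.7)
    heavy     = tri(server_load, 0.3, 1.0, 1.5)
    # Rule 1: lightly loaded neighbor + low sensitivity -> offload freely.
    # Rule 2: highly sensitive task + heavy neighbor -> keep it local.
    return (min(low_sens, light) + 0.5 * min(high_sens, light)
            - min(high_sens, heavy))

def pick_target(task_sensitivity, neighbor_loads):
    best = max(neighbor_loads,
               key=lambda s: offload_score(task_sensitivity, neighbor_loads[s]))
    if offload_score(task_sensitivity, neighbor_loads[best]) > 0:
        return best
    return "local"   # no neighbor offers a beneficial offload

print(pick_target(0.9, {"mec-a": 0.2, "mec-b": 0.9}))  # lightly loaded wins
```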
