1 Introduction
Recent studies by Ericsson (Ericsson Mobility Report, n.d.) indicate that the number of mobile users will reach 25 billion by 2025. The rise of internet connectivity and low-cost mobile devices are some of the reasons for this growth. Still, battery capacity remains one of the mobile device's key limitations. Today's mobile applications drain the device battery quickly and demand ever higher computational resources. These problems can be addressed by computation offloading in mobile edge computing.
Mobile edge computing is an amalgamation of mobile computing and edge computing. Its infrastructure sits closer to the user device than cloud computing does. Properties such as small-scale data centers, placement near LTE or Wi-Fi access points, low latency, dense deployment by telecom vendors, and lower congestion make it a better option for mobile task-offloading applications.
Computation offloading is the technique within mobile edge computing by which an application is partitioned between local and remote execution based on some criteria. Fig. 1 depicts the offloading process, in which an application is partitioned and, based on certain measures, a decision is taken either to offload a task or to execute it locally. Tasks identified for remote execution are offloaded to an edge server.
These jobs reach the cloud server and are scheduled by some scheduling technique. Task scheduling on the cloud server is one of the prime tasks in mobile cloud computing.
Virtual machines (VMs) need to be allocated by the cloud service provider for the execution of these tasks. Considerable impetus has been given to research in the field of mobile computing by frameworks such as Chroma (RPC) (Balan et al., 2003), Cuckoo (RMI) (Kemp et al., 2012), Spectra (Flinn et al., n.d.), MAUI (Cuervo et al., 2010), MobiCloud (Huang et al., 2010), and CloneCloud (Chun et al., 2011). These popular frameworks in the cloud computing domain realize the concept of offloading tasks to the cloud server, either by partitioning the application or by offloading it as a whole.
Various studies in the past have tried to optimize different objective functions such as makespan, energy, quality of service (QoS), load balancing, and cost. The task scheduling problem leaves much scope for optimization owing to its NP-hard nature.
A mobile application consists of many computational tasks represented as nodes, and the dependencies among these nodes are represented as edges. Resources are required on the cloud servers for the execution of these offloaded computational tasks. The availability of these resources needs to be assured by the cloud service providers, and the pricing of services may vary from country to country.
This work proposes a hybrid scheduling technique based on Gaussian-based multi-objective particle swarm optimization (GMOPSO) and bacterial foraging optimization (BFO). GMOPSO provides the global best solution, whereas BFO is used to refine the local best solution. The contributions can be summarized as follows.
a) Minimize the energy consumption and makespan of the scheduling process.
b) Simulation and performance evaluations of the proposed algorithm with existing approaches.
Section 2 reviews the related literature. Section 3 describes the methodology of the work. The detailed design approach of the proposed system is presented in Section 4. Section 5 offers the evaluation results compared with existing works, and the conclusion and future directions are presented in Section 6.
2 Literature Review
This section surveys the work done so far in the field of scheduling in mobile cloud and cloud computing. Once a task has been offloaded to a virtual machine, deciding its execution plan or schedule is a further challenge.
The scheduling algorithm must be designed optimally so that timely execution of tasks can be achieved and starvation or deadlock-like conditions can be avoided. Eom et al. (2013) focused on offloading scheduling and applied machine-learning-based techniques to optimize the offloading process.
Their study covered nineteen different machine learning algorithms and four workloads. Zhang et al. (2016) proposed joint resource scheduling and code partitioning for effectively allocating cloudlets to multiple cloud users, with a code partitioning algorithm based on the call tree. Hsu Mon Kyi & Thinn Thu Naing (2011) proposed an algorithm for scheduling and allocating virtual resources and virtual machines, named the Efficient Virtual Machines Scheduling Algorithm (EVMSA). A stochastic Markov model is used to analyze the performance of the scheduling algorithm, the Eucalyptus architecture is introduced as the system model, and the resource allocation decision model is based on a continuous Markov chain. Jagannathan & Modiano (2013) presented a mathematical model of buffer overflow in parallel queues.
Their study shows that the longest-queue-first scheduling policy has superior queue-overflow performance compared with queue-blind policies, and several lemmas are presented in support of the theory. The authors assume a system of N parallel queues served by a single server; time is slotted, and the server serves only one queue at a time. Wang et al. (2013) presented a weighted round-robin scheduling algorithm for task scheduling in the Hadoop framework.
Table 1 presents various task scheduling schemes, specifically in the mobile cloud computing framework. In Wei et al. (2013), the authors proposed an extended cloudlet approach for supporting a local mobile cloud.
Table 1. Task scheduling schemes in mobile cloud computing (MCC) and mobile edge computing (MEC).
Technique and Work Done | Year | Type of Problem | Objective Function | Framework | Environment |
HACAS (Wei et al., 2013) | 2013 | Application scheduling | Profit and Energy consumption | MCC | Simulation |
TSPCCE (Nir et al., 2014) | 2014 | Task scheduling | Energy | MCC | IBM’s linear programming solver |
MCC task scheduling algorithm (X. Lin et al., 2015) | 2014 | Task scheduling with DVFS | Energy and time | MCC | MATLAB |
LARAC algorithm (W. Zhang et al., 2015) | 2015 | Task scheduling with DVFS | Energy, time, and deadline | MCC | Simulation |
eDors (Guo et al., 2016) | 2016 | Dynamic scheduling and energy-efficient offloading | Energy and completion time | MCC | Simulation |
MCF-DF (Lin Wang et al., 2016) | 2016 | Task admission and scheduling | Admission rate and execution cost | MEC | Python |
HCOA (T. Wang et al., 2018) | 2017 | Task offloading and scheduling | Energy | MCC | Simulation |
CMSACO (Shah-Mansouri et al., 2017) | 2017 | Multi-task offloading | Profit and completion time | MCC | Simulation |
TSRA (Zhao et al., 2017) | 2017 | Resource allocation and scheduling | Delay | MEC | Simulation |
COPE (J. Zhang et al., 2018) | 2017 | Task scheduling | Energy, price of cloud service provider, delay | MCC | ThinkAir-based simulation |
DAA (L. Lin et al., 2018) | 2018 | Task scheduling | Makespan | MEC | Simulation |
GABTS (Tang et al., 2018) | 2018 | Task offloading and scheduling | Energy, response time, deadline, and cost | MCC | C++ |
OAOA (Jiang et al., 2019) | 2019 | Stochastic approach for task scheduling | Energy and QoS | MCC | Simulation |
Application-aware (Oo & Ko, 2019) | 2019 | Task Scheduling | Latency | MEC | iFogSim |
MWSM (Tian et al., 2019) | 2019 | Workflow scheduling | Latency, Energy, and Cost | MCC | Simulation |
RCTSPO (Chen et al., 2020) | 2020 | Task scheduling | Makespan, Reliability, and Load | MEC | Cloudsim |
EBCO-TS (Arun & Prabu, 2020) | 2020 | Task scheduling | Makespan and energy | MCC | Cloudsim |
ADO-MTS (Garg & Nath, 2020) | 2020 | Task scheduling | Makespan, Resource utilization, and Energy | MCC | Cloudsim |
The authors presented a hybrid PSO approach that optimizes profit and energy consumption during scheduling. Nir et al. (2014) presented a task scheduler model that optimizes the energy function in mobile cloud computing. Lin et al. (2015) proposed a scheduling scheme based on dynamic voltage and frequency scaling that optimizes the application makespan and reduces energy consumption.
3 Task Offloading & Scheduling in Mobile Cloud Computing
The mobile task offloading model offers two ways to execute a task: offload it to the cloud server or execute it locally on the mobile phone. After the initial task partitioning phase, the offloading decision is made by a decision engine that gathers various device and network parameters through a profiling process. The task then reaches the cloud server through a cellular or Wi-Fi network. The objective of offloading is to transfer computation to a resourceful remote server in order to improve the device's performance and save energy. Offloading to a remote server is not always mandatory; it depends on the various parameters affecting the device's performance.
In some scenarios, partial offloading is also performed: one part of the application is processed on the mobile device, and the other is offloaded to a surrogate or cloud server. A task's computation time depends on the amount of computation required and the mobile device's processing speed. As an example, assume the job is divided into two partitions, where the first partition executes locally and the second runs on a remote server.
For local execution, let CT_LOCAL be the computation time required on the local device, CA_LOCAL be the computation amount, and PS_LOCAL be the mobile device’s processing speed. The relationship among these values will be:
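Based on these definitions (computation time as the computation amount divided by the processing speed), a plausible form of the omitted equation is:

$$ CT_{LOCAL} = \frac{CA_{LOCAL}}{PS_{LOCAL}} $$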
For remote execution, the second partition is executed on the cloud/edge server. Let B_AVAILABLE be the bandwidth available to the device and D_AMOUNT_OF_DATA be the amount of data to be transferred. The time CT_REMOTE taken to transfer the data to/from the server will be:
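Assuming, as stated above, that CT_REMOTE is dominated by the data transfer over the available bandwidth, a plausible form of the omitted equation is:

$$ CT_{REMOTE} = \frac{D_{AMOUNT\_OF\_DATA}}{B_{AVAILABLE}} $$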
The total time CT_TOTAL taken to execute the application both locally and remotely will be a summation of the above two equations, which is:
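Summing the two quantities defined above gives:

$$ CT_{TOTAL} = CT_{LOCAL} + CT_{REMOTE} = \frac{CA_{LOCAL}}{PS_{LOCAL}} + \frac{D_{AMOUNT\_OF\_DATA}}{B_{AVAILABLE}} $$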
3.1 Cloud Model
When a task is offloaded from the mobile device, it reaches the cloud service provider's server. The cloud service provider manages all information about the tasks that arrive for processing.
The datacenter broker policy (Singh & Chana, 2016) helps assign virtual machines to the cloudlets (tasks).
The datacenter policy must be chosen appropriately so that the cloudlets achieve minimum execution time. Like a web application, a mobile application consists of different tasks.
These tasks can be represented as a directed acyclic graph (DAG). While independent tasks of the application can be executed simultaneously on multiple virtual machines, dependent tasks need to be synchronized according to their precedence order.
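As an illustration only (not part of the original paper), an application DAG can be stored as an adjacency list and processed in precedence order; the task names and structure below are hypothetical:

```python
from collections import deque

# Hypothetical application DAG: each task maps to the tasks that depend on it.
dag = {"T1": ["T2", "T3"], "T2": ["T4"], "T3": ["T4"], "T4": []}

def precedence_order(dag):
    """Return tasks in a valid execution (topological) order."""
    indegree = {t: 0 for t in dag}
    for succs in dag.values():
        for s in succs:
            indegree[s] += 1
    ready = deque(t for t, d in indegree.items() if d == 0)  # independent tasks
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in dag[t]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return order

print(precedence_order(dag))  # e.g. ['T1', 'T2', 'T3', 'T4']
```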
3.2 Scheduling of Offloaded Task
When speaking about task scheduling, achieving minimum makespan is an NP-hard problem. Most recent studies have focused on allocating cloud resources to the various cloudlets so as to optimize energy and execution-time parameters.
In this work, a task's execution time depends on the task size and the virtual machine's properties. The basic definitions regarding mobile task scheduling are as follows:
a) Consider a set of n virtual machines as V = {V1, V2, V3…, Vn}
b) A set of application tasks T = {T1, T2, …, Tx}
c) E is the set of connections between any two tasks, Ti and Tj.
d) Collection of physical machines (PMs) in the data center PM = {PM1, PM2, PM3, …, PMn}
It is assumed in this work that the cloud service provider has a sufficient number of computational resources. The virtual machines in V are deployed on the physical machines, and different virtual machines have different processing units (CPUs), random access memory (RAM), and networking capabilities.
The data center broker monitors all available resources and assigns a machine to a task when it is approached. All jobs requiring processing resources wait in a queue, and based on the task scheduling scheme, tasks are planned for execution on the machines.
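As a simplified illustration only (not the paper's broker policy), a queue-based assignment of tasks to virtual machines might look like the following; the task sizes and VM speeds are hypothetical:

```python
from collections import deque

# Hypothetical VM speeds (MIPS) and task sizes (million instructions).
vm_speeds = {"VM1": 500, "VM2": 1000, "VM3": 1500}
task_queue = deque([("T1", 4000), ("T2", 1000), ("T3", 2500)])

# Track when each VM next becomes free (in seconds).
vm_free_at = {vm: 0.0 for vm in vm_speeds}

schedule = []
while task_queue:
    task, size = task_queue.popleft()
    # Greedy choice: the VM that would finish this task earliest.
    vm = min(vm_speeds, key=lambda v: vm_free_at[v] + size / vm_speeds[v])
    finish = vm_free_at[vm] + size / vm_speeds[vm]
    vm_free_at[vm] = finish
    schedule.append((task, vm, round(finish, 2)))

print(schedule)                   # task-to-VM assignment
print(max(vm_free_at.values()))   # makespan of this simple schedule
```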
3.3 Framework for Task Scheduling
An approach is proposed based on a Gaussian multi-objective hybrid scheduling scheme combining particle swarm optimization (PSO) and bacterial foraging optimization (BFO). Energy and makespan are the objective functions considered in this study. The task offloading scheme is based on optimizing the multi-objective function, where minimizing both functions is the actual goal of the approach.
Makespan is defined as the time required for processing the task on the CPU plus its transmission time. The makespan of a task on a virtual machine is calculated from the computing power of the VM and the size of the task. It can be defined by the following equation:
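A plausible reconstruction of the omitted equation, using hypothetical symbols Size(T_i) for the task size, CP(V_j) for the computing power of the VM, and T_trans(T_i) for the transmission time, is:

$$ Makespan(T_i, V_j) = \frac{Size(T_i)}{CP(V_j)} + T_{trans}(T_i) $$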
The energy cost is calculated from two factors: the virtual machine usage charges, which usually differ across cloud service providers and are computed on a per-second basis, and the execution time of the task. It can be defined by the following equation:
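A plausible form consistent with this description, using hypothetical symbols C_j for the per-second usage charge of VM_j and ET(T_i, V_j) for the execution time of task T_i on VM_j, is:

$$ Energy(T_i, V_j) = C_j \times ET(T_i, V_j) $$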
4 Proposed Approach for Task Scheduling
The proposed approach (GMOPSO-BFO) is a hybrid of particle swarm optimization (PSO) and bacterial foraging optimization (BFO). PSO performs excellently in global search, whereas BFO works well in local search.
The combination of these two techniques generates a solution that is strong in both global and local search capability, with improved convergence.
4.1 Bacteria Foraging Optimization
The bacterial foraging method is a natural-selection-inspired method in which microorganisms such as E. coli bacteria forage for food in order to survive in the intestine of the human body (Passino, 2002). The primary survival strategy of the bacteria is to locate nutrients, handle them, and ingest the food to obtain the energy to live and reproduce. Bacteria that do not forage nutrients successfully are typically eliminated from the system, following the concept of survival of the fittest.
This evolutionary concept fascinated scientists and motivated them to use it as an optimization process; many optimization problems can be approached with such an evolutionary technique. The main aim of a bacterium is to maximize the energy gained from foraging per unit of time. This depends on factors such as prey density and the characteristics of the environment, as well as on the sensing and cognitive capabilities of the bacterium.
E. coli bacteria have a cell structure with various biological features such as a nucleoid, ribosomes, cytoplasm, pili, and a plasma membrane. While these components carry out normal cell processes, another critical feature, the flagellum, helps the bacterium propel itself in different directions. Chemotaxis is the movement of the organism from its position in response to chemical attractants and repellents.
With the help of the flagella, two movements are possible: rotating them clockwise makes the bacterium tumble, while rotating them counterclockwise makes it swim. Fig. 2 depicts the tumbling and swimming movements of E. coli.
In a favorable environment, where sufficient nutrients are available and the medium is neither acidic nor alkaline, the bacterium swims; in the opposite situation it typically tumbles, i.e., changes the direction of its swim.
The other significant process is swarming, in which bacteria release attractants so that they swarm together while searching for food. If the attractants are released strongly and widely, different bacteria are likely to explore food together; otherwise, they forage alone.
During reproduction, a bacterium splits into two to increase the population. Bacteria reproduce based on the nutrients they have gathered, i.e., their fitness value. Bacteria also go through an elimination and dispersal phase in their lifetime due to changes in their local environment.
Sometimes the conditions for survival deteriorate, for example when there is a sudden rise in heat or the nutrients are exhausted. In computational terms, the elimination and dispersal process is used to avoid becoming trapped in local optima.
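For illustration, the following is a minimal sketch of a standard BFO chemotaxis step (a tumble followed by swims while the fitness keeps improving), not necessarily the exact variant used in this paper; the fitness function and step size here are placeholders:

```python
import numpy as np

def chemotaxis_step(position, fitness, step_size=1.45, swim_length=4):
    """One chemotactic move: tumble in a random direction, then keep
    swimming in that direction while the fitness keeps improving."""
    direction = np.random.uniform(-1, 1, size=position.shape)
    direction /= np.linalg.norm(direction)          # random unit vector (tumble)
    best_pos, best_fit = position, fitness(position)
    for _ in range(swim_length):
        candidate = best_pos + step_size * direction
        cand_fit = fitness(candidate)
        if cand_fit < best_fit:                     # minimization: keep swimming
            best_pos, best_fit = candidate, cand_fit
        else:
            break
    return best_pos, best_fit

# Example with a placeholder fitness (sphere function).
pos, fit = chemotaxis_step(np.zeros(5) + 3.0, lambda x: float(np.sum(x**2)))
print(pos, fit)
```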
4.2 Particle Swarm Optimization
Particle swarm optimization (PSO) is a nature-inspired algorithm (Coello Coello & Lechuga, n.d.; Krohling, n.d.) based on the social behavior and dynamic movement of a flock of birds. A group of birds, known as a swarm, moves together searching for food, in a particular direction and at different velocities. Each bird, or particle, looks for food and is usually followed by the other birds.
These birds communicate with each other during their search and typically follow the bird closest to the food. The closeness to the food is calculated as a fitness value at periodic intervals. Each bird in the swarm is represented as a particle in a multidimensional space with a certain velocity and position.
Each particle keeps two things in its memory: its own best position, pbest, and the global best position of the group, gbest. In standard PSO, the velocity of a particle is updated with the equation:
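The standard PSO velocity and position updates, not reproduced in the source, commonly take the form:

$$ v_i(t+1) = w\, v_i(t) + c_1 r_1 \big(pbest_i - x_i(t)\big) + c_2 r_2 \big(gbest - x_i(t)\big) $$
$$ x_i(t+1) = x_i(t) + v_i(t+1) $$

where w is the inertia weight, c1 and c2 are the self-recognition and social coefficients (cf. Table 2), and r1, r2 are uniform random numbers in [0, 1].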
An updated version of PSO that improves the convergence rate introduces a constriction factor, where the velocity vector becomes:
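A widely used constriction-factor formulation (Clerc–Kennedy), given here as a plausible reconstruction of the omitted equations, is:

$$ v_i(t+1) = \chi \Big[ v_i(t) + c_1 r_1 \big(pbest_i - x_i(t)\big) + c_2 r_2 \big(gbest - x_i(t)\big) \Big] $$

where

$$ \chi = \frac{2}{\big|\,2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\,\big|}, \qquad \varphi = c_1 + c_2,\ \varphi > 4. $$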
In the Gaussian variant, the coefficients randn and Randn are based on the absolute value of the Gaussian density function.
The Gaussian random density function is represented by:
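A plausible reconstruction of the omitted expressions, following the Gaussian swarm formulation of Krohling in which the stochastic coefficients are replaced by absolute values of standard normal variates, is:

$$ v_i(t+1) = \big|randn\big|\,\big(pbest_i - x_i(t)\big) + \big|Randn\big|\,\big(gbest - x_i(t)\big) $$

with randn, Randn drawn from N(0, 1), whose density is

$$ f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}. $$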
Pseudocode of the GMOPSO-BFO approach for task scheduling:
Initialize the Bacterial Foraging Optimization (BFO) and Gaussian multi-objective PSO (GMOPSO) parameters:
Np, Nc, Sl, Nr, Ne, C, Pdispersal, dattract, wattract, hrepellant, wrepellant, and, for each particle i, its position pi and velocity vi
Input: a collection of bacteria, where each bacterium represents a candidate task-to-virtual-machine schedule
Output: the schedule of the bacterium that has collected the most nutrients (best fitness)
begin: Let the loop counters be l = 0 (elimination–dispersal), k = 0 (reproduction), and j = 0 (chemotaxis)
Initialize the velocity vi and position pi of the ith bacterium
for all bacteria in the list:
    Loop over elimination–dispersal steps (l = 1, …, Ne)
        Loop over reproduction steps (k = 1, …, Nr)
            Loop over chemotaxis steps (j = 1, …, Nc)
                Perform the chemotactic move using (a) and (b), respectively
                (a) Compute the tumbling step: pick a random direction and move the bacterium by step size C
                (b) Compute the swim step: while the fitness improves, and at most Sl times, keep moving in the same direction
                Set Jlast = J(i, j, k, l), the current fitness of the ith bacterium
                If J(i, j+1, k, l) < Jlast
                    Update Jlast and continue swimming; otherwise stop swimming
            For the reproduction phase: calculate the health of each bacterium as the sum of its fitness values over its chemotactic lifetime
            Sort the bacteria in ascending order of health; the healthier half reproduces, and the other half is discarded
            If (k < Nr), perform the reproduction step again till k = Nr
        For elimination and dispersal:
            for each bacterium,
                if (ped < Pdispersal),
                    do elimination and dispersal till l = Ne
Do mutation of the remaining bacteria (particles) using the GMOPSO scheme:
    Update pi,best and gbest whenever a better fitness value is found
    Update the velocity of each bacterium (particle) after every iteration using the Gaussian-based velocity update
    Update the position of each bacterium (particle) after every iteration as pi = pi + vi
    Check that pi remains within the allowed range
Repeat the reproduction and PSO steps until convergence is achieved
After the stopping criteria are met, record the value of gbest and f(gbest). End.
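The following is a minimal Python sketch of the Gaussian-based velocity and position update used in the mutation step, assuming the Krohling-style update with |N(0,1)| coefficients; the fitness function, bounds, and dimensions are placeholders, not the paper's exact implementation:

```python
import numpy as np

def gaussian_pso_update(positions, pbest, gbest, lower, upper, fitness):
    """One Gaussian PSO iteration over all particles (bacteria)."""
    n, dim = positions.shape
    # Gaussian-based velocity: |randn| and |Randn| replace c1*r1 and c2*r2.
    randn = np.abs(np.random.randn(n, dim))
    Randn = np.abs(np.random.randn(n, dim))
    velocities = randn * (pbest - positions) + Randn * (gbest - positions)
    positions = np.clip(positions + velocities, lower, upper)  # keep pi in range
    # Update personal and global bests.
    for i in range(n):
        if fitness(positions[i]) < fitness(pbest[i]):
            pbest[i] = positions[i]
    gbest = min(pbest, key=fitness).copy()
    return positions, pbest, gbest

# Placeholder usage: 20 particles, 5-dimensional search space, sphere fitness.
f = lambda x: float(np.sum(x**2))
pos = np.random.uniform(-5, 5, size=(20, 5))
pb = pos.copy()
gb = min(pb, key=f).copy()
pos, pb, gb = gaussian_pso_update(pos, pb, gb, -5, 5, f)
print(gb, f(gb))
```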
5 Results and Discussion
The proposed approach was developed in Python in a Windows 10 environment on an Intel(R) Core(TM) i5 CPU at 1.80 GHz with 8 GB of RAM. The various parameters considered during the simulation of the proposed technique are presented in Table 2.
Table 2. Simulation parameters for BFO and PSO.
Parameters for BFO and PSO | Value Used |
No_of_bacteria (Np) | 20 |
No_of_chemotactics (Nc) | 10 |
swim_length (Sl) | 4 |
No_of_reproductions (Nr) | 4 |
No_of_dispersals (Ne) | 2 |
step_size (C) | 1.45 |
probability_dispersal (Pdispersal) | 0.25 |
d_attractant (dattract) | 0.1 |
w_attractant (wattract) | 0.2 |
h_repellant (hrepellant) | 0.1 |
w_repellant (wrepellant) | 10 |
PSO Swarm size | 20 |
Self-recognition coefficient | 1 |
Social coefficient | 2 |
Inertial weight | 0.5 |
For evaluating the proposed method, five virtual machines are considered, and the number of tasks is varied between 100 and 1000. The results are compared with the existing MOPSO (Alkayal et al., 2016) and BFO (Rajni & Chana, 2013) approaches with respect to the energy efficiency and makespan of the task execution.
The proposed scheme is based on a Gaussian swarm approach implemented in MOPSO along with BFO. The experiment was performed with the number of bacteria (Np) set to 20 and the number of chemotactic steps (Nc) set to 10; similarly, the initial PSO swarm size was set to 20. Each experiment was run about ten times to obtain average makespan and energy values, with m random tasks assigned to n virtual machines.
The task sizes and required execution times are uniformly distributed. It was found that the Gaussian scheme outperforms standard PSO and improves its convergence ability.
Since our problem is multi-objective, implementing the Gaussian scheme in MOPSO along with BFO gives better results in terms of energy efficiency and reduced makespan, both of which matter for the offloading problem in mobile cloud computing. Table 3 presents the execution times for various numbers of tasks; the GMOPSO-BFO approach performs better than the other algorithms.
Table 3. Makespan for varying numbers of tasks.
No. of tasks | MOPSO | BFO | MOPSO-BFO | GMOPSO-BFO |
100 | 41.47 | 38.66 | 37.65 | 37.18 | |
200 | 155 | 151.81 | 155.05 | 145.25 | |
300 | 345.4 | 335.53 | 335 | 327.38 | |
400 | 594.85 | 597.26 | 594.85 | 567.8 | |
500 | 926.43 | 916.66 | 913.98 | 878.55 | |
600 | 1332.22 | 1333.36 | 1324.33 | 1284.5 | |
700 | 1813.15 | 1885.31 | 1803.56 | 1750.51 | |
800 | 2349.87 | 2390.75 | 2310.6 | 2316.66 | |
900 | 2934.87 | 3034.93 | 2924.65 | 2916.73 | |
1000 | 3743.78 | 3692.85 | 3655.96 | 3618.93 |
As the number of tasks on the virtual machines increases, the proposed scheme maintains the lowest makespan. Across the range of 100 to 1000 tasks, the proposed scheme has a smaller makespan than MOPSO, BFO, and MOPSO-BFO.
In this work, the energy consumption of the proposed GMOPSO-BFO technique is calculated and compared with MOPSO, BFO, and MOPSO-BFO. In this experiment, the number of virtual machines is set to 5, and the number of tasks ranges from 100 to 1000.
The experiment aimed to determine the energy consumption of the various techniques on the virtual machines.
Energy consumption is measured in joules per minute. Table 4 reports the energy consumed for various numbers of tasks on the virtual machines; the GMOPSO-BFO approach consumes less energy than the other algorithms. It is also observed that as the number of tasks increases from 100 to 1000, the machines' energy consumption increases as well.
Table 4. Energy consumption for varying numbers of tasks.
No. of tasks | MOPSO | BFO | MOPSO-BFO | GMOPSO-BFO |
100 | 1.05 | 1.05 | 1.03 | 1.03 | |
200 | 2.15 | 2.15 | 2.15 | 2.14 | |
300 | 3.14 | 3.12 | 3.16 | 3.11 | |
400 | 4.15 | 4.13 | 4.15 | 4.12 | |
500 | 5.18 | 5.19 | 5.17 | 5.18 | |
600 | 6.33 | 6.33 | 6.3 | 6.18 | |
700 | 7.45 | 7.46 | 7.5 | 7.42 | |
800 | 8.49 | 8.47 | 8.46 | 8.45 | |
900 | 9.6 | 9.62 | 9.61 | 9.59 | |
1000 | 10.72 | 10.75 | 10.71 | 10.66 |
The proposed scheme performs better than the other algorithms and can reduce energy consumption on the virtual machines. It is clear from the experimental results that the proposed GMOPSO-BFO scheme performs better in both completion time and energy consumption.
6 Conclusions
This paper presents a hybrid scheduling approach based on Gaussian multi-objective particle swarm optimization and bacterial foraging optimization. Both makespan and energy consumption are essential factors in the offloading process in MCC, and the proposed scheme performs better on both.
The results are compared with MOPSO, BFO, and the hybrid MOPSO-BFO. The scheme leverages the global search of GMOPSO and the local search of BFO. In the future, a scheduling scheme will be developed that considers other optimization parameters such as server load, scalability, latency, and resource utilization.