A simple version of a Swarm Intelligence algorithm called bacterial foraging optimization algorithm with mutation and dynamic stepsize (BFOAM-DS) is proposed. The bacterial foraging algorithm has the ability to explore and exploit the search space through its chemotactic operator; however, premature convergence is a disadvantage. This proposal uses a mutation operator in a swim, similar to evolutionary algorithms, combined with a dynamic stepsize operator to improve its performance, allowing a better balance between exploration and exploitation of the search space. BFOAM-DS was tested in three well-known engineering design optimization problems. Results were analyzed with basic statistics and common measures for nature-inspired constrained optimization problems to evaluate the behavior of the swim with a mutation operator and of the dynamic stepsize operator. Results were compared against a previous version of the algorithm, showing that BFOAM-DS is competitive and outperforms its predecessor.
Keywords:
Metaheuristic, mutation operator, dynamic stepsize, engineering problem, performance measures
Nature-inspired metaheuristics have gained popularity over mathematical programming for solving constrained numerical optimization problems (CNOP) due to their easy implementation and fast execution. Moreover, metaheuristics generally provide a set of feasible solutions from which the user can choose according to their preferences. A CNOP can be defined as:
Minimize f(x), x = (x1, …, xn) ∈ R^n
subject to: g_i(x) ≤ 0, i = 1, …, m, and h_j(x) = 0, j = 1, …, p,
where each decision variable x_k is bounded by L_k ≤ x_k ≤ U_k, k = 1, …, n; f(x) is the objective function, g_i(x) are the inequality constraints, and h_j(x) are the equality constraints.
Nature-inspired metaheuristic algorithms are classified into two classes:
Evolutionary Algorithms (EA): emulate the evolution of species and the survival of the fittest. Some well-known EA are genetic algorithms (GA) (Eiben & Smith, 2003), evolution strategies (ES) (Schwefel, 1993), evolutionary programming (Fogel, 1999), genetic programming (GP) (Koza et al., 2003), and differential evolution (DE) (Price, Storn & Lampinen, 2005), which have been successfully applied to CNOP such as mechanical design (Calva-Yáñez, Niño-Suárez, Villarreal-Cervantes, Sepúlveda-Cervantes & Portilla-Flores, 2013).
Swarm Intelligence Algorithms (SIA): emulate the collaborative behavior of simple species when searching for food or shelter (Engelbrecht, 2007). Some SIA are particle swarm optimization (PSO) (Eberhart, Shi & Kennedy, 2001) and ant colony optimization (ACO) (Dorigo, Maniezzo & Colorni, 1996).
Both PSO and ACO have gained popularity because of their great performance in solving CNOP. In 2002, another SIA-type algorithm called bacterial foraging optimization algorithm (BFOA) was introduced by Passino (2002), emulating the behavior of the E. coli bacterium in its search for nutrients in the environment. This behavior is summarized in four processes: (1) chemotaxis (swim-tumble movements), (2) swarming (communication between bacteria), (3) reproduction (cloning of the best bacteria), and (4) elimination-dispersal (replacement of the worst bacteria). In BFOA, each bacterium tries to maximize the energy obtained per unit of time spent on the foraging process while avoiding noxious substances. BFOA was used initially to solve unconstrained optimization problems; however, recent approaches add a constraint-handling technique to solve CNOP, where the penalty function is the most used technique (Hernández-Ocaña, Mezura-Montes & Pozos-Parra, 2013).
Further investigations have addressed the fact that BFOA is particularly sensitive to the stepsize parameter, which is used in the chemotaxis process with the swim-tumble movement to determine the distance that a bacterium can move. In specialized literature, there are different ways to control the stepsize: (1) by keeping it static during the search process (as in the original BFOA) (Hernández-Ocaña, Pozos-Parra & Mezura-Montes, 2014; Huang, Chen & Abraham, 2010; Mezura-Montes & Hernández-Ocaña, 2009), (2) by using random values (Hernández-Ocaña et al., 2014; Praveena, Vaisakh & Mohana Rao, 2010), (3) by using a dynamic variation (Hernández-Ocaña et al., 2014; Niu, Fan, Xiao & Xue, 2012; Pandit, Tripathi, Tapaswi & Pandit, 2012), or (4) by adopting an adaptive mechanism (Hernández-Ocaña et al., 2014; Mezura-Montes & López-Davila, 2012; Saber, 2012). However, such approaches were stated mainly for specific optimization problems. In a recent study of the stepsize by Hernández-Ocaña et al. (2014), different mechanisms were compared, and the dynamic control mechanism was found to be slightly superior to static, random, and adaptive versions.
Mezura-Montes & Hernández-Ocaña (2009) adapted BFOA in a proposal called modified bacterial foraging optimization algorithm (MBFOA) to solve CNOP. This approach inherits the four processes of BFOA, treating them as individual and independent processes that interact sequentially. Therefore, the parameters that determine the number of swim movements, the number of tumbles, and the swarming loop were eliminated. Moreover, the feasibility rules proposed by Deb (2000) are used as a constraint-handling technique. Unlike BFOA, where the stepsize is static, in MBFOA the stepsize used in the swim movements was adapted according to the boundaries of the decision variables.
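To make the constraint-handling mechanism concrete, below is a minimal sketch of the feasibility rules as they are commonly implemented (the function names are illustrative, not taken from the MBFOA code): a feasible solution is preferred over an infeasible one, two feasible solutions are compared by objective value, and two infeasible solutions are compared by their sum of constraint violation.

```python
def violation(g_values, h_values=(), eps=1e-4):
    """Sum of constraint violation for inequalities g(x) <= 0 and equalities |h(x)| <= eps."""
    total = sum(max(0.0, g) for g in g_values)
    total += sum(max(0.0, abs(h) - eps) for h in h_values)
    return total

def is_better(f_a, viol_a, f_b, viol_b):
    """Return True if solution A is preferred over solution B under the feasibility rules."""
    a_feasible, b_feasible = viol_a == 0.0, viol_b == 0.0
    if a_feasible and not b_feasible:   # rule 1: feasible beats infeasible
        return True
    if b_feasible and not a_feasible:
        return False
    if a_feasible and b_feasible:       # rule 2: both feasible, compare objective values
        return f_a < f_b
    return viol_a < viol_b              # rule 3: both infeasible, compare violations
```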
BFOA has been combined with other algorithms, particularly with EA, to improve its performance, e.g., with a GA in Kim, Abraham & Cho (2007), Kushwaha, Bisht & Shah (2012), Luo & Chen (2010) and DE in Biswas, Dasgupta, Das & Abraham (2007). Mutation operators have been added to BFOA in Nouri & Hong (2012); nevertheless, no proposal using mutation within the chemotaxis process was found.
Recently, a new version based on MBFOA was proposed in Hernández-Ocaña et al. (2016), where two swims and a random stepsize are used in the chemotaxis process in order to improve the foraging performance. This proposal was called TS-MBFOA and was successfully applied to real-world problems: the optimal synthesis of four-bar mechanisms and an instance of the menu planning problem (Hernández-Ocaña, Chávez-Bosquez, Hernández-Torruco, Canul-Reich & Pozos-Parra, 2018). However, the authors mention that the stepsize is a sensitive parameter that requires further study. In most cases, the stepsize is randomly updated.
This paper aims to test TS-MBFOA using mutation mechanisms and a simple dynamic stepsize. The results obtained are compared against state-of-the-art algorithms and validated with common performance measures found in the literature. It is worth mentioning that this approach uses fewer parameters than the original MBFOA due to both the dynamic stepsize implemented and the reproduction process applied to half of the swarm bacteria, as initially introduced in BFOA. To the best of the authors' knowledge, this is the first time that mutation is used as a swim operator.
This paper is organized as follows: the Materials and methods section outlines the MBFOA, describes the three test problems to be solved, and presents the performance measures used to evaluate the results obtained by this study's proposal; the Results section details the experiments conducted in order to analyze the behavior of this approach against one of the best state-of-the-art algorithms; finally, the general conclusions and future work are presented.
BFOAM-DS is an algorithm derived from MBFOA, designed to improve its performance in constrained search spaces. Two modifications to MBFOA have been made:
Two different swim operators are applied in the chemotaxis process: (1) an exploration swim that uses a mutation operator, and (2) an exploitation swim that uses an easy-to-implement dynamic stepsize, in order to improve the balance between exploration and exploitation of the bacteria.
The reproduction process is applied to half of the swarm bacteria.
In BFOA, MBFOA, TS-MBFOA, and BFOAM-DS, a bacterium i represents a candidate solution to the CNOP, denoted θ^i(j, G), where j is the chemotaxis step and G is the generation.
In this process, two swims are interleaved in each generation: either the exploitation swim or the exploration swim is performed. The process starts with the exploitation swim (the classical swim). However, a bacterium will not necessarily alternate between exploration and exploitation swims, because if the new position of a given swim θ^i(j+1, G) has a better fitness (based on the feasibility rules) than the original position θ^i(j, G), another similar swim in the same direction will be carried out in the next generation; otherwise, a new tumble for the other swim will be computed. The process stops after Nc attempts.
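The interleaving logic can be sketched as follows (an illustrative sketch, not the authors' code; the swim operators are passed in as functions and the comparison uses the feasibility rules): a bacterium keeps using the same kind of swim while it keeps improving, and switches to the other kind after a failed attempt, for at most Nc chemotaxis steps. For simplicity, the switching is shown inside a single chemotaxis loop.

```python
def chemotaxis(bacterium, Nc, exploitation_swim, exploration_swim, is_better):
    """Interleave exploitation and exploration swims for at most Nc attempts (sketch)."""
    use_exploitation = True                 # the process starts with the classical swim
    for _ in range(Nc):
        swim = exploitation_swim if use_exploitation else exploration_swim
        candidate = swim(bacterium)
        if is_better(candidate, bacterium):          # feasibility rules decide which is better
            bacterium = candidate                    # success: repeat the same kind of swim
        else:
            use_exploitation = not use_exploitation  # failure: tumble and switch swim type
    return bacterium
```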
The exploration swim applies mutation between randomly selected bacteria of the swarm and is computed with Equation 1.
The exploitation swim (Equation 2) is computed as:
θ^i(j+1, G) = θ^i(j, G) + C(i, G) φ(i)
where φ(i) is calculated with the original BFOA Equation 3, which determines the direction of the swim or tumble:
φ(i) = Δ(i) / √(Δ(i)^T Δ(i))
where Δ(i) is a uniformly distributed random vector of size n with each element in the range [-1, 1].
Equation 4 determines the distance of the movement of a bacterium, where C(i, G) is the dynamic stepsize of each bacterium, updated in each generation; Θ(i) is a randomly generated vector of size n with elements within the range of each decision variable, [Lower_k, Upper_k], k = 1, …, n; and Gmax is the maximum number of generations of the algorithm.
It is important to remark that the exploration swim (Equation 1) performs larger movements, due to the mutation operator that uses randomly selected bacteria, whereas the exploitation swim (Equation 2) generates small movements using the dynamic stepsize in the search process.
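Equations 1 and 4 are described above only in words, so the sketch below merely illustrates one way the two movements can be realized, under explicit assumptions: the tumble direction follows Equation 3, the dynamic stepsize is assumed to be a fraction of each variable range that shrinks linearly with the generation counter, and the exploration move is assumed to be a difference-based (DE-style) mutation between two randomly chosen bacteria. The exact update rules of BFOAM-DS may differ; the names and scale factors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng()

def tumble_direction(n):
    """Random unit direction phi(i) from a uniform vector in [-1, 1]^n (Equation 3)."""
    delta = rng.uniform(-1.0, 1.0, size=n)
    return delta / np.sqrt(delta @ delta)

def dynamic_stepsize(lower, upper, G, Gmax, fraction=0.1):
    """Assumed stepsize C(i, G): a fraction of each variable range, decaying with G."""
    return fraction * (upper - lower) * (1.0 - G / Gmax)

def exploitation_swim(theta, C, phi):
    """Small move along the tumble direction, scaled by the dynamic stepsize (Equation 2)."""
    return theta + C * phi

def exploration_swim(theta, theta_r1, theta_r2, scale):
    """Assumed large move: DE-style mutation using two randomly selected bacteria."""
    return theta + scale * (theta_r1 - theta_r2)
```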
Halfway through the chemotaxis loop, the swarming operator is applied with Equation 5, where β is a user-defined positive parameter in (0, 1). In this proposal, unlike MBFOA, if a solution violates the boundaries of the decision variables, a new solution x_i is randomly generated within the lower and upper limits L_i ≤ x_i ≤ U_i.
In Equation 5, θ^i(j+1, G) is the new position of bacterium i, θ^i(j, G) is its current position, and θ^B(G) is the current position of the best bacterium in the swarm at generation G. If there are feasible solutions in the population, the best bacterium is the one with the best objective function (fitness) value; otherwise, it is the one with the smallest amount of constraint violation. The swarming movement is applied twice in a chemotaxis loop, while in the remaining steps the tumble-swim movement is carried out.
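Equation 5 is not reproduced above, so the following is only a minimal sketch of the swarming step, assuming the MBFOA-style attraction form θ + β(θ^B − θ) toward the best bacterium, together with the random re-initialization applied when a solution leaves the variable bounds; names are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def swarming(theta, theta_best, beta):
    """Assumed attraction toward the best bacterium: theta + beta * (theta_best - theta)."""
    return theta + beta * (theta_best - theta)

def repair(theta, lower, upper):
    """If any variable violates its bounds, regenerate the whole solution at random."""
    if np.any(theta < lower) or np.any(theta > upper):
        return rng.uniform(lower, upper)
    return theta
```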
In this process, the Sr worst bacteria (half of the swarm) are replaced, and the remaining ones are duplicated.
The worst bacteria are eliminated, and new bacteria are randomly generated with a uniform distribution within the ranges of the decision variables.
The corresponding BFOAM-DS pseudocode is presented in Algorithm 1. The user-defined parameters are summarized in the caption.
Three engineering design problems are used to test the performance of this study's proposal, called BFOAM-DS. It was coded in MATLAB R2009b and executed on a PC with a 3.5 GHz Core 2 Duo processor, 4 GB of RAM, and a 64-bit Windows 7 operating system.
It minimizes the weight of a tension/compression spring, subject to constraints of minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables (Arora, 2012). There are three design variables: the wire diameter (x1), the mean coil diameter (x2), and the number of active coils (x3). The mathematical model has the form:
Minimize:
f(x) = (x3 + 2) x2 x1^2
subject to:
g1(x) = 1 − (x2^3 x3) / (71785 x1^4) ≤ 0
g2(x) = (4 x2^2 − x1 x2) / (12566 (x2 x1^3 − x1^4)) + 1 / (5108 x1^2) − 1 ≤ 0
g3(x) = 1 − (140.45 x1) / (x2^2 x3) ≤ 0
g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0
where 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, and 2 ≤ x3 ≤ 15. Best solution: x* = (0.051690, 0.356750, 11.287126), where f(x*) = 0.012665.
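As a quick check, the sketch below evaluates the model above at the reported best solution; the function name is illustrative. The objective value is approximately 0.012665, and the largest constraint value is approximately zero because g1 and g2 are active at the optimum.

```python
def spring(x):
    """Tension/compression spring design: objective and constraints g_i(x) <= 0."""
    x1, x2, x3 = x
    f = (x3 + 2.0) * x2 * x1**2
    g = [
        1.0 - (x2**3 * x3) / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4)) + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
    return f, g

f_best, g_best = spring((0.051690, 0.356750, 11.287126))
print(round(f_best, 6))  # approximately 0.012665
print(max(g_best))       # close to zero: g1 and g2 are active at the reported optimum
```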
It involves the minimization of the total cost, consisting of material, welding, and forming costs (Sandgren, 1990). The mathematical model has the form:
Minimize:
f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3
subject to:
g1(x) = −x1 + 0.0193 x3 ≤ 0
g2(x) = −x2 + 0.00954 x3 ≤ 0
g3(x) = −π x3^2 x4 − (4/3) π x3^3 + 1 296 000 ≤ 0
g4(x) = x4 − 240 ≤ 0
where x1 and x2 are integer multiples of 0.0625 (from 1 × 0.0625 to 99 × 0.0625) and 10 ≤ x3, x4 ≤ 200. Best solution: x* = (0.8125, 0.4375, 42.098446, 176.636596), where f(x*) = 6059.714335.
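Similarly, a short sketch that evaluates the model above at the reported best solution (the function name and the variable meanings in the comment follow the usual statement of the problem): the objective is approximately 6059.71 and the largest constraint value is approximately zero, since g1 and g3 are active.

```python
import math

def pressure_vessel(x):
    """Pressure vessel design: objective and constraints g_i(x) <= 0."""
    # Assumed meanings: x1 shell thickness, x2 head thickness, x3 inner radius, x4 cylinder length.
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,
        x4 - 240.0,
    ]
    return f, g

f_best, g_best = pressure_vessel((0.8125, 0.4375, 42.098446, 176.636596))
print(round(f_best, 2))  # approximately 6059.71
print(max(g_best))       # close to zero: g1 and g3 are active at the reported optimum
```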
It finds the minimum fabrication cost, considering four design variables, x1, x2, x3, x4, and constraints on the shear stress (τ), the bending stress (σ), the buckling load (Pc), and the end deflection (δ). The mathematical model has the form:
Minimize:
f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x2)
subject to:
seven inequality constraints, g1(x), …, g7(x) ≤ 0, involving the shear stress τ(x), the bending stress σ(x), the buckling load Pc(x), the end deflection δ(x), and side constraints on the design variables.
Best solution: x* = (0.244369, 6.217520, 8.291471, 0.244369), with constraint values (-5741.176933, -0.000001, 0.000000, -3.022955, -0.119369, -0.234241, -0.000309), where f(x*) = 2.380957.
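As a quick sanity check of the reported optimum, the sketch below evaluates the fabrication cost function given above at x*; it returns a value close to 2.380957 (the full constraint set involving τ, σ, Pc, and δ is omitted here).

```python
def welded_beam_cost(x):
    """Welded beam fabrication cost: weld material plus bar material."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

print(round(welded_beam_cost((0.244369, 6.217520, 8.291471, 0.244369)), 6))  # about 2.380957
```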
To evaluate the behavior of the compared algorithms, the following performance measures for nature-inspired constrained optimization problems, taken from Liang et al. (2006), were computed (a short computational sketch follows the list):
Feasible run: a run where at least one feasible solution is found within the maximum number of evaluations allowed for each problem (Maxevals).
Feasible rate = (number of feasible runs) / (total runs).
Successful run: a run where a feasible solution x satisfying f(x) − f(x*) ≤ ε is found within Maxevals, with x* the best-known solution and ε a small tolerance.
Success rate = (number of successful runs) / (total runs).
Success performance = the mean of (function evaluations for successful runs) × (number of total runs) / (number of successful runs).
Successful swim = A swim movement where the new position is better (based on the feasibility rules) than the original position.
Successful swim rate = (number of successful swims) / (total swims), where (total swims) = Sb × Nc × Gmax.
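The measures above can be computed directly from the raw results of a set of runs, as in the following sketch (the run record fields and the tolerance ε are illustrative assumptions):

```python
def performance_measures(runs, f_best_known, eps, Sb, Nc, Gmax):
    """Compute feasible rate, success rate, success performance, and successful swim rate.

    `runs` is a list of dicts with keys: 'feasible' (bool), 'f' (best objective found),
    'evals' (evaluations needed to reach it), and 'successful_swims' (count for that run).
    """
    total = len(runs)
    feasible = [r for r in runs if r["feasible"]]
    successful = [r for r in feasible if r["f"] - f_best_known <= eps]

    feasible_rate = len(feasible) / total
    success_rate = len(successful) / total
    if successful:
        mean_evals = sum(r["evals"] for r in successful) / len(successful)
        success_performance = mean_evals * total / len(successful)
    else:
        success_performance = float("inf")

    total_swims = Sb * Nc * Gmax
    swim_rate = sum(r["successful_swims"] for r in runs) / (total * total_swims)
    return feasible_rate, success_rate, success_performance, swim_rate
```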
The user-defined parameters of BFOAM-DS are shown in Table 1. These values were fine-tuned using the irace tool (López-Ibáñez, Dubois-Lacoste, Pérez-Cáceres, Birattari & Stützle, 2011), except for Gmax, whose value was fixed to adjust the termination condition to the Maxevals value for each problem. The parameter values for MBFOA were taken from Hernández-Ocaña et al. (2014).
Parameter | MBFOA | BFOAM-DS
---|---|---
Sb | 50 | 40
Nc | 12 | 20
Sr | Sb/2 | Sb/2
β | 1.76 | 0.68
R | 1.62E-2 | -
Gmax | value to reach Maxevals | value to reach Maxevals
The proposed BFOAM-DS was tested by solving the three problems with Maxevals = 15 000 in a set of 30 independent runs. Results are presented in Table 2 and discussed based on three measures: (1) quality, i.e., the best solution found so far; (2) consistency, i.e., the mean value closest to the best-known solution x* and the lowest standard deviation value; and (3) computational cost, i.e., the number of evaluations required for each given problem.
Problem | Criteria | MBFOA | BFOAM-DS
---|---|---|---
P01 | Best | 0.012671 | 0.012665233
P01 | Average | 0.012759 | 0.012681938
P01 | Std. | 1.36E-04 | 5.08E-05
P02 | Best | 6060.46 | 6059.701609
P02 | Average | 6074.625 | 6173.535938
P02 | Std. | 1.56E+01 | 2.01E+02
P03 | Best | 2.386 | 2.380952906
P03 | Average | 2.404 | 2.380957824
P03 | Std. | 1.6E-02 | 1.19E-05
 | Maxevals | 48 000 | 14 400
For the Tension/compression spring design optimization problem (P01), the proposed BFOAM-DS found the best result when compared to MBFOA, and this solution is similar to the best-known solution x*. In addition to these results, BFOAM-DS has better consistency than MBFOA.
For the pressure vessel optimum design optimization problem (P02), BFOAM-DS found the best result, which was similar to x*. However, the result across 30 runs of MBFOA showed more consistency.
Finally, for the welded beam design optimization problem (P03), BFOAM-DS showed better results and higher consistency than those of MBFOA.
It is important to mention that MBFOA found these results at a computational cost of 48 000 evaluations, in contrast to BFOAM-DS, which required only 14 400 evaluations. The number of evaluations is calculated as Sb × Nc × Gmax; for example, 40 × 20 × 18 = 14 400 evaluations for BFOAM-DS according to the values shown in Table 1, where Gmax is the value needed to reach Maxevals, in this case Gmax = ⌊15 000 / (40 × 20)⌋ = 18.
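The relationship between the evaluation budget and Gmax can be written as a one-line computation (a small sketch using the Table 1 values):

```python
Sb, Nc, max_evals = 40, 20, 15_000
Gmax = max_evals // (Sb * Nc)   # 15 000 // 800 = 18 generations
print(Gmax, Sb * Nc * Gmax)     # 18 generations -> 14 400 evaluations actually used
```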
Figures 2, 3, and 4 show the convergence graphs of MBFOA and BFOAM-DS for each of the engineering problems. The graphs depict the convergence of the median run per problem. BFOAM-DS shows a similar behavior in the three problems; it reaches the optimum within the first ten generations. On the other hand, MBFOA converges prematurely to local optima in all problems and also requires more generations, with 48 000 evaluations as the stop condition.
Effectiveness results of BFOAM-DS in solving CNOP are shown in Table 3. According to the feasible rate measure, the proposed algorithm found feasible solutions within the maximum number of evaluations allowed in all independent runs for the three problems. However, only in problem P03 did the algorithm reach a 100% success rate, i.e., solutions similar to the best-known solution in every run; it obtained 96.66% for problem P01 and 63.33% for problem P02. According to the success performance measure, the computational cost to obtain a feasible solution similar to the best-known solution is 4860, 13 977, and 10 120 evaluations, respectively, which makes P02 the most complex problem, followed by P03 and P01.
Criteria | P01 | P02 | P03 |
---|---|---|---|
Feasible rate | 100% | 100 % | 100% |
Success rate | 96.66% | 63.33% | 100% |
Success performance | 4.86E+03 | 1.39E+04 | 1.01E+04 |
The effectiveness of the stepsize was measured using the performance measures successful swim and successful swim rate, taken from Hernández-Ocaña et al. (2014). The goal of these measures is to obtain the number of successful swims in a run. The BFOAM-DS algorithm carried out 13 600 swims per run (Sb × Nc × Gmax). According to the results in Table 4, around 14% of the swims are successful in each run of BFOAM-DS, which is similar to the results reported in Hernández-Ocaña et al. (2014) for other CNOP. Nevertheless, the version of BFOAM-DS without mutation obtained only 3.18% of successful swims on average.
Problem | Successful swim | Successful swim rate |
---|---|---|
BFOAM-DS without mutation | ||
P01 | 735 | 5.40% |
P02 | 324 | 2.38% |
P03 | 241 | 1.77% |
BFOAM-DS with mutation | ||
P01 | 2166 | 15.92% |
P02 | 1998 | 14.69% |
P03 | 1899 | 13.96% |
Figures 5, 6, and 7 show the behavior of the BFOAM-DS algorithm with and without the mutation swim for each of the engineering problems. The graphs depict the successful swims of the median run per problem. The version with the mutation swim generates more successful swims throughout all generations of the algorithm, on average 110 successful swims in the three problems. Concerning the algorithm without the mutation swim, in problems P01 and P03 the successful swims are few in the first generation (40 on average) and gradually decrease until the sixth generation, after which the algorithm stays at around five successful swims per generation. In problem P02, the swims without mutation remain stable and, on average, 38 successful swims per generation are generated until the execution of the algorithm is completed. In general, swims with mutation improve the performance of the algorithm, allowing better solutions at a lower computational cost.
In this new version of the bacterial foraging optimization algorithm, the mutation power of evolutionary algorithms was added to the chemotaxis process of bacterial foraging in order to improve the performance of the algorithm. Moreover, a simple dynamic stepsize was used, which decreased the number of parameters to be defined by the user.
The statistical tests and performance measures applied to the results of 30 independent runs of each of the three optimization problems showed that BFOAM-DS outperforms the previous version of the algorithm. Moreover, BFOAM-DS requires 33 600 fewer evaluations than the previous version (48 000 − 14 400) to achieve these competitive results. The stepsize, a user-defined parameter, has been replaced in the algorithm by the proposed dynamic stepsize. The exploration and exploitation capacity of the algorithm was improved with the swim with a mutation operator, which, on average, increased the effectiveness of the swims by around 10%.
In general, the proposed BFOAM-DS performed better than the original MBFOA, obtaining results with less computational cost. Moreover, the consistency was competitive, and the quality of results was similar to the best-known solution in each problem. Another advantage is that BFOAM-DS requires less tuning of parameters due to the use of a dynamic stepsize.
Bacterial foraging optimization is a metaheuristic included in the group of swarm intelligence algorithms used to solve complex problems. This algorithm is younger and less known than evolutionary algorithms. A proposal based on bacterial foraging has been made and tested by solving three constrained numerical optimization problems. This version has been called the bacterial foraging optimization algorithm with mutation and dynamic stepsize (BFOAM-DS).
BFOAM-DS was tested in three engineering design optimization problems. Results were analyzed with basic statistics (best, average and standard deviation). In addition, common measures for nature-inspired constrained optimization problems are used to evaluate the behavior of the swim with a mutation operator (feasible run, feasible rate, successful run, success rate, success performance, successful swim, and successful swim rate) and the dynamic stepsize operator. Then, results were compared against a previous version of the algorithm (MBFOA) to observe the effectiveness of the proposed improvements.
BFOAM-DS effectively solved all test problems with fewer evaluations than the previous version of the algorithm. This new version requires fewer parameters to calibrate, making it easier for the end user to tune. The proposed operators improve the overall performance of the algorithm, as demonstrated by the performance tests.
As future work, this study's proposal will be tested on other complex problems, and an analysis of the frequency of the reproduction process will be carried out.
To Consejo Nacional de Ciencia y Tecnología (Conacyt) for supporting the joint doctoral program in Computer Science at the Universidad Juárez Autónoma de Tabasco and Universidad Veracruzana.