Movement Particle Swarm Optimization Algorithm

Particle Swarm Optimization (PSO) is a well-known meta-heuristic that has been applied to many optimization problems, but it suffers from entrapment in local minima. This paper proposes an optimization algorithm called Movement Particle Swarm Optimization (MPSO) that enhances the behavior of PSO by using a random movement function to search more points in the search space. The meta-heuristic was evaluated on 23 benchmark functions and compared with state-of-the-art algorithms: Moth-Flame Optimization (MFO), the Sine Cosine Algorithm (SCA), the Grey Wolf Optimizer (GWO), and Particle Swarm Optimization (PSO). The results show that the proposed algorithm improves on PSO over the tested benchmark functions.


Introduction
Particle Swarm Optimization is an evolutionary computing technique derived from a simplified simulation of a social model. The algorithm was developed by Kennedy and Eberhart (1995); "Swarm" comes from the particle swarm, in line with the five basic principles of group intelligence in models applied to artificial life. "Particle" is a compromise: the members of the group are described as having no mass and no volume, while their velocity and acceleration states are still described (Vijayalakshmi, 2014).
Particle Swarm Optimization (PSO) is a numerical optimization method that does not require the exact gradient of the optimized function. It belongs to the family of swarm-intelligence algorithms, which describe the collective behavior of decentralized, self-organizing systems (Shi & Eberhart, 1998). Swarm-intelligence systems, as a rule, consist of many agents interacting locally with one another and with the environment. The agents themselves are usually rather simple, but together, through local interactions, they create so-called swarm intelligence. The method was first developed to simulate the social behavior of flocks of birds and schools of fish; as it evolved, it was successfully applied to problems of finding the extreme points of a function (Kessentini & Barchiesi, 2015).
The PSO algorithm was originally designed for graphical simulation of the beautiful and unpredictable movement of birds. Through observation of animal social behavior, it was found that social sharing of information within a group provides an evolutionary advantage, and this became the basis for the development of the algorithm. The initial version of PSO was formed by adding velocity matching with neighbors, taking into account multidimensional search and acceleration by distance. An inertia weight was then introduced to better control exploitation and exploration, forming the standard version (Eberhart, Shi, & Kennedy, 2001).
The PSO algorithm uses the following psychological assumptions: in seeking a consistent cognitive process, individuals tend to remember their own beliefs while taking into account the beliefs of their colleagues; when an individual becomes better aware of the beliefs of its colleagues, it adapts its own accordingly. In this paper, the proposed MPSO is compared with the optimization algorithms GWO, SCA, and MFO, and also with PSO.
The rest of this paper is organized as follows. Section 2 presents the related work. Section 3 describes the proposed MPSO algorithm. Section 4 provides experimental results and a comparison of MPSO with other meta-heuristics. Finally, the conclusions and future work are presented in Section 5.
1.1 Related Work
(Bergh and Engelbrecht, 2004) proposed a cooperative PSO (Cooperative PSO) algorithm. The idea is to exploit cooperative behavior: multiple groups search different dimensions of the target search space, so that an optimized solution is completed collaboratively by a number of independent groups, each responsible for optimizing only some components of the solution vector. (Baskar and Suganthan, 2004) proposed a similar collaborative PSO, known as concurrent PSO (CONPSO), which uses two groups to optimize one solution vector. More recently, (El Abd et al., 2006) combined these ideas into a hierarchical cooperative PSO, building on (Bergh and Engelbrecht, 2004) and (Baskar and Suganthan, 2004). Whether a single swarm searches the full D-dimensional space or multiple swarms collaborate over different dimensions, the purpose is to find for each particle a learning object that favors fast convergence to the global optimal solution. (Liang et al., 2004) proposed a new learning strategy for selecting learning objects, which can search the full D-dimensional space while choosing exemplars per dimension, called the Comprehensive Learning Particle Swarm Optimizer (CLPSO).
Stretching PSO (SPSO): SPSO applies the so-called stretching technique (Parsopoulos et al., 2001), together with deflection and repulsion techniques, to PSO. By transforming the objective function, particles are restrained from moving toward already-found local minima, which gives them more opportunity to find the global optimal solution (Parsopoulos and Vrahatis, 2004).
Chaotic particle swarm optimization: chaos is a common nonlinear phenomenon that appears cluttered but in fact has an implicit inherent regularity, combining randomness and regularity. Chaotic motion is used to generate chaotic sequences based on the historical best positions of the swarm, and chaos PSO (CPSO) was proposed to replace particle positions with chaotically perturbed optimal positions. In addition, (Wang et al., 2007) proposed an adaptive PSO in which the inertia weight adapts to the objective function value: the global search is carried out by PSO, while the optimal location is refined by a chaotic local search.
A chaotic PSO combining PSO with chaos search uses chaotic sequences to determine the PSO parameters (inertia weight and acceleration constants). A particle swarm model based on a deterministic chaotic Hopfield neural network has also been proposed. Kalman swarm: (Monson and Seppi, 2004) used a Kalman filter to update the particle positions. Principal component PSO: (Mark et al., 2005) combined PSO with principal component analysis, so that particles fly not only in the n-dimensional x space according to the traditional algorithm, but also synchronously in an m-dimensional z space (m < n).
The PSO algorithm is population-based: individuals in the group move toward good areas according to their fitness in the environment. However, it does not apply evolutionary operators to individuals; rather, it treats each individual as a volume-less particle (a point) in the D-dimensional search space that travels at a certain velocity, dynamically adjusted according to its own flight experience and that of its companions (Suganthan, 1999). The i-th particle is expressed as Xi = (xi1, xi2, ..., xiD); the best position it has experienced (with the best fitness) is Pi = (pi1, pi2, ..., piD), also called pbest. The index of the best position traced by all particles of the group is denoted by g, so Pg is also known as gbest. The velocity of particle i is Vi = (vi1, vi2, ..., viD). For each generation, its d-th dimension (1 ≤ d ≤ D) varies according to the following equations:

vid = w * vid + c1 * rand() * (pid - xid) + c2 * rand() * (pgd - xid)    (1)
xid = xid + vid    (2)

where vid is the particle velocity, xid is the current particle position (solution), pbest and gbest are defined as stated before, rand() is a random number in (0, 1), c1 and c2 are learning (acceleration) factors, usually c1 = c2 = 2, and w is the inertia weight.
In addition, the velocity Vi of the particles is limited by a maximum velocity Vmax: if the current acceleration causes a particle's velocity vid in some dimension to exceed that dimension's maximum velocity vmax,d, the velocity in that dimension is clamped to vmax,d. In equation (1), the first part is the inertia of the particle's previous behavior; the second part is the "cognitive" part, the particle's reflection on itself; the third part is the "social" part, representing information sharing and mutual cooperation (Junying et al., 2005).
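The update rule and the velocity clamp described above can be written compactly. The following Python (NumPy) sketch follows the standard PSO formulation; the function name and default parameter values (w = 0.7 inertia, c1 = c2 = 2 as mentioned above) are illustrative choices, not taken from the paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, vmax=None, rng=None):
    """One PSO iteration: inertia + cognitive + social terms (Eq. 1),
    then the position update (Eq. 2), with each velocity component
    optionally clamped to [-vmax, vmax]."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # fresh random factor per particle and dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if vmax is not None:
        v = np.clip(v, -vmax, vmax)  # limit speed per dimension
    return x + v, v
```

Here `x`, `v`, and `pbest` are `(N, D)` arrays for N particles in D dimensions, and `gbest` is the `(D,)` global best position.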
The "cognitive" part can be explained by Thorndike's law of effect: a reinforced random behavior is more likely to occur in the future. The behavior here is "cognition", and the model assumes that correct knowledge is reinforced and that the particles are motivated to reduce error (Kata et al., 2004).
The "social" part can be explained by Bandura's vicarious reinforcement. According to this theory, when an observer sees a model being reinforced for a behavior, the chances of the observer performing that behavior increase; here, a particle's behavior is imitated by the other particles (Shuquan and Kongguo, 2008).

Movement Particle Swarm Optimization Algorithm (MPSO)
In this section, the inspiration of the proposed algorithm is first discussed. Then the mathematical model is provided.

Inspiration of MPSO
Viruses have both similarities and differences with other living organisms, as shown in Figure 3. One feature of viruses that indicates their belonging to living matter is their need for replication and the creation of offspring; but, unlike living organisms, a virus cannot survive on its own. It is activated only when it replicates in a host cell using the host's resources and nutrients. When a virus enters a cell, its sole purpose is to create multiple copies of itself to infect other cells. Everything it does is aimed at increasing its fitness and the number of its offspring.
In order to proliferate and thus cause infection, the virus must penetrate the cells of the host organism and begin using cellular material. To penetrate the cell, proteins on the surface of the virus bind to specific surface proteins of the cell. Attachment, or adsorption, occurs between the virus particle and the cell membrane. A hole forms in the membrane, and the virus particle, or only its genetic material, gets inside the cell, where the virus will multiply. The virus must then take control of the cellular replication mechanism. At this stage, the distinction between susceptibility and tolerance arises in the host cell: tolerance prevents the infection from proceeding. Once control of the cell is established and the environment is suitable for the virus to start creating its own copies, replication occurs quickly, giving rise to millions of new viruses. After the virus has created many copies of itself, the cell is exhausted from the use of its resources. More viruses are not needed, so the cell often dies, and the newborn viruses have to look for a new host. This represents the final stage of the life cycle of the virus.
Thus, the virus depends entirely on the host cell. Most viruses are species-specific and affect only a narrow range of hosts: plants, animals, fungi, or bacteria. Some viruses can "hide" inside the cell, either to evade the host's defense reactions and immune system, or simply because continued replication is not in the virus's interest. This hiding is called latency. During this time, the virus produces no offspring and remains inactive until an external stimulus, for example light or stress, activates it.

Mathematical Model of MPSO
In this subsection, the mathematical model of the algorithm is described, based on the main operations of viral particles, also known as virions, which consist of two or three parts: the genetic material, made from either DNA or RNA; a protein coat surrounding it; and, in some cases, an envelope that protects them when they are outside a cell.
Each virus position and velocity is initialized; if a virus position falls below the lower bound or above the upper bound, it is re-initialized. The virus position, velocity, and way of movement are initialized as described in equations (1) and (2):

VP = lb + (ub - lb) .* rand(N, dim)    (1)
VV = lb + (ub - lb) .* rand(N, dim)    (2)

where VV is the virus velocity, VP is the virus position, rand() is a random function that generates numbers between 0 and 1, ub is the upper bound of the space dimension, lb is the lower bound of the space dimension, N is the number of generated viruses, and dim is the dimension of the search space. A local search is then performed around the overall global best position, and genetic material (RNA) is communicated between viruses; this is mathematically represented by equation (3). If the obtained value (OBV) is less than the global value of the virus particles (GP), then each virus takes a local value after being evaluated by the fitness function, as shown in equation (4).
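A minimal Python (NumPy) sketch of this initialization step, mirroring the MATLAB-style expressions above; the function name is illustrative:

```python
import numpy as np

def init_viruses(N, dim, lb, ub, rng=None):
    """Initialize virus positions (Eq. 1) and velocities (Eq. 2)
    uniformly at random inside the bounds [lb, ub]."""
    rng = np.random.default_rng() if rng is None else rng
    VP = lb + (ub - lb) * rng.random((N, dim))   # positions
    VV = lb + (ub - lb) * rng.random((N, dim))   # velocities
    return VP, VV
```

Out-of-bounds positions never arise here because sampling is confined to [lb, ub]; a re-initialization check, as described in the text, would only be needed after later movement steps.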

OBV = fitness(VP)    (3)

The minimum in the neighborhood is then found, as shown in equation (5), by first sorting the particles' fitness values; the neighborhood of a particle consists of the particles adjacent to it in the sorted index, ordered so that the minimum value appears at the top of the index.

VMIN = min(OBV)    (5)

Each virus then performs a random movement (RW) and a local search (LS) for new cells to invade, which is represented by a new fitness value; it moves within a region of the space defined by C, which is larger than 1:

VNP = rand(N, 1) - C    (6)

where VNP is the virus's new position, the random movement spans a space larger than 1 as defined by C, and N is the size of the virus particle population. The pseudocode describing the steps applied to each virus particle is shown in Figure 5.

Figure 5. MPSO Pseudo code
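Since the authoritative pseudocode is given in Figure 5, the following Python (NumPy) sketch is only one hedged interpretation of how the described steps could fit together: initialization (Eqs. 1-2), fitness evaluation (Eq. 3), a sorted global best (Eq. 5), and a random movement whose extent is scaled by C > 1 (Eq. 6), layered on a standard PSO update. The parameter values and the exact form of the random movement are assumptions, not the authors' implementation:

```python
import numpy as np

def mpso(fitness, N, dim, lb, ub, C=2.0, w=0.7, c1=2.0, c2=2.0,
         iters=100, seed=0):
    """Illustrative MPSO loop: PSO update plus a random-movement step."""
    rng = np.random.default_rng(seed)
    VP = lb + (ub - lb) * rng.random((N, dim))   # positions (Eq. 1)
    VV = lb + (ub - lb) * rng.random((N, dim))   # velocities (Eq. 2)
    OBV = np.array([fitness(p) for p in VP])     # fitness values (Eq. 3)
    order = np.argsort(OBV)                      # neighborhood sort: best first
    pbest, pval = VP.copy(), OBV.copy()          # personal bests
    gbest, gval = VP[order[0]].copy(), OBV[order[0]]  # VMIN / global best (Eq. 5)
    for _ in range(iters):
        r1, r2 = rng.random((N, dim)), rng.random((N, dim))
        VV = w * VV + c1 * r1 * (pbest - VP) + c2 * r2 * (gbest - VP)
        # Random movement for new cells to invade (Eq. 6), interpreted here
        # as a uniform perturbation scaled by C > 1, clipped to the bounds.
        VP = np.clip(VP + VV + (rng.random((N, dim)) - 0.5) * 2 * C, lb, ub)
        OBV = np.array([fitness(p) for p in VP])
        better = OBV < pval                      # keep improved cells only
        pbest[better], pval[better] = VP[better], OBV[better]
        if pval.min() < gval:
            i = np.argmin(pval)
            gbest, gval = pbest[i].copy(), pval[i]
    return gbest, gval
```

On a simple minimization target such as the sphere function, this sketch converges toward the origin, which is all it is meant to demonstrate.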

Results and Discussion
In this section, the MPSO algorithm is tested on 23 benchmark functions used by many researchers (16, 48-51) from the CEC 2005 special session benchmark functions (52). We have chosen these test functions so that our results can be compared with those of current meta-heuristics. These benchmark functions are listed in Table 1, where Dim indicates the dimension of the function, Range is the boundary of the function's search space, and fmin is the optimum. Some of these functions are the shifted, rotated, expanded, and combined variants of the classical functions, which offer the greatest complexity among current benchmark functions. Generally speaking, the benchmark functions used are minimization functions and can be divided into four groups: unimodal, multimodal, fixed-dimension multimodal, and composite functions. Detailed descriptions of the composite benchmark functions are available in the CEC 2005 technical report.
The second set of benchmark test functions (F6 to F23) has multiple local optima in addition to the global optimum. This set is used to test the local-optima avoidance and explorative ability of an algorithm.
The third set of benchmark test functions consists of the composite test functions, which are rotated, shifted, biased, and combined versions of several unimodal and multimodal test functions.

Exploitation and Exploration Analysis
The results in Table 6 show that MPSO achieved competitive results against the compared optimizers. In particular, the MPSO algorithm shows better results on the unimodal test functions. Given the characteristics of unimodal test functions, these results strongly indicate that the MPSO algorithm has high exploitation ability and fast convergence.

Local Minima Avoidance
MPSO shows a good balance between exploration and exploitation, which results in high local-optima avoidance.

Stability of the MPSO
We calculated the standard deviation for all functions from F1 to F23 to test the stability of MPSO, as shown in Table 7. MPSO was also compared with PSO, GWO, GA (Coello), GA (Deb), and GSA in solving the welded beam design problem, and it was able to find the optimum cost for the equations related to this problem; the optimum value obtained is 1.6895.

Conclusion
This work proposed an enhanced PSO optimization algorithm. Twenty-three functions were used to benchmark the performance of the proposed meta-heuristic in terms of exploration, exploitation, and local-optima avoidance. The findings show that MPSO is able to provide good results in comparison with well-known heuristics including GWO, MFO, and SCA. For the first set of functions (unimodal functions), the MPSO algorithm showed good exploitation; its exploration ability was competitive as well over the multimodal functions. Finally, the composite functions demonstrated its local-optima avoidance.

Figure 3. Virus particles

Table 1. The benchmark functions' mathematical formulation

Table 7. Standard deviation of MPSO, SCA, PSO, GWO and MFO

Convergence curves were produced to test the speed of MPSO in comparison with the other algorithms (GWO, SCA, MFO, and PSO). It can be noticed from Figure 7 that the MPSO optimizer outperformed all four optimization algorithms and was the fastest to find the best solution, as shown by the convergence curve at each iteration for the benchmark functions F1 to F7. These represent the unimodal functions: the Shifted Sphere Function, Shifted Schwefel's Problem 1.2, the Shifted Rotated High Conditioned Elliptic Function, Shifted Schwefel's Problem 1.2 with Noise in Fitness, Schwefel's Problem 2.6 with Global Optimum on Bounds, the Shifted Rosenbrock's Function, and F7, the Shifted Rotated Griewank's Function without Bounds; this testifies that the proposed algorithm has a high exploitation ability.

However, on F8 the MFO algorithm converged better than MPSO; on F9 and F23 the GWO algorithm converged better; and on F14 and F17 the PSO algorithm converged better. On F17 all of the compared algorithms (PSO, GWO, SCA, MFO) were better than MPSO, and on F16 all algorithms performed similarly. Here F14 is the Shifted Rotated Expanded Scaffer's Function, F15 is a Hybrid Composition Function, F16 is a Rotated Hybrid Composition Function, and F17 is a Rotated Hybrid Composition Function with Noise in Fitness.

It can also be observed from Figure 7 that the MPSO optimizer succeeded in being the fastest and most effective of the compared optimizers (GWO, SCA, MFO, and PSO) on the benchmark functions F10-F13, F15, F18, F20-F21, and F23, which are drawn from the multimodal functions and hybrid compositions: F10 is the Shifted Rotated Rastrigin's Function, F11 the Shifted Rotated Weierstrass Function, F12 Schwefel's Problem, F13 the Expanded Extended Griewank's plus Rosenbrock's Function, F15 a Hybrid Composition Function, F18 a Rotated Hybrid Composition Function, F20 a Rotated Hybrid Composition Function with the Global Optimum on the Bounds, and F21 a Rotated Hybrid Composition Function.

Table 8. Comparison results

It can be noticed from Table 8 that the MPSO optimizer gave good results in comparison with the other algorithms (PSO, GWO, GA (Coello), GA (Deb), and GSA).