Supernova Optimizer: A Novel Nature-Inspired Meta-Heuristic

Algorithms and meta-heuristics inspired by biological and natural phenomena provide solutions to optimization and premature-convergence problems, and they have had a wide impact across many scientific fields. This justifies the continued development of applications that rely on optimization algorithms, which aim to find the best solution in the shortest possible time, and motivates the design of new swarm intelligence optimizers. This paper proposes a novel optimization algorithm called the Supernova Optimizer (SO), inspired by the supernova phenomenon in nature. SO mimics this phenomenon with the aim of improving the three main features of optimization: exploration, exploitation, and local-minima avoidance. The proposed meta-heuristic optimizer has been tested on 20 well-known benchmark functions, and the results have been verified by a comparative study with the state-of-the-art optimization algorithms Grey Wolf Optimizer (GWO), Sine Cosine Algorithm (SCA), Multi-Verse Optimizer (MVO), Moth-Flame Optimization (MFO), Whale Optimization Algorithm (WOA), Polar Particle Swarm Optimizer (POLARPSO), and Particle Swarm Optimizer (PSO). The results show that SO provides very competitive and effective results, outperforming the compared state-of-the-art algorithms on most of the tested benchmark functions.


Introduction
The true beauty of algorithms inspired by nature is that they transfer the best solutions created by nature to technology. Because of this, we are able to describe and solve complex problems starting from very simple initial conditions and rules that carry little knowledge about the nature of the search space (Langdon, 2007) (Lugmayr et al., 2013). Careful study of natural phenomena and of the complex interactions between organisms, from microorganisms to human beings, shows that nature consistently finds optimal strategies: balancing the ecosystem, maintaining diversity, adaptation, and physical phenomena such as the formation of rivers, the movement of clouds, and the movement of bird flocks. The underlying processes are simple, and the results are remarkable. Problems in computer science have much in common with problems in nature; therefore, a mapping between nature and technology is possible. Nature-inspired computation covers a wide range of applications, including computer networks, security, robotics, biomedical equipment, control and parallel processing systems, data mining, energy systems, production technology, and much more (Lugmayr et al., 2013).
Classical problem-solving methods fall into two branches: mathematical and heuristic. Heuristic approaches are used to solve complex optimization tasks, especially where traditional methods fail. Heuristics imitate the strategies of nature and rely on many random solutions, which places them in a special class of randomized algorithms.
Meta-heuristic algorithms involve (Zhang, 2015): a) choosing the correct representation of the problem; b) assessing the quality of a solution using a fitness function; and c) defining operators in such a way as to obtain a new set of solutions. The most prevalent and successful classes of this type are evolutionary algorithms and swarm-based algorithms, inspired by natural evolution and by the collective behavior of animals, respectively. Particle swarm optimization is traditionally used for finding the global optimum of a multidimensional function, where it has shown very good results (Bonyadi and Michalewicz, 2017). Optimization problems can be solved by swarm intelligence (SI) based algorithms, bio-inspired algorithms, nature-inspired algorithms, physics- and chemistry-inspired algorithms, and mathematical methods, as shown in Figure 1 (Clerc, 2017). In general, meta-heuristic algorithms aim to find the extremum of the optimized problem, with the value of the objective function consistently improving until the extremum point is found. Depending on whether the search is local or global, they are divided into local-search and global-search algorithms. Local extremum search algorithms are designed to find one of the extrema on the set of admissible solutions at which the objective function attains its maximum or minimum value. To find the extremum when the form of the optimized function is not fully known, or when its structure is too complex, swarm intelligence methods are used. The efficiency of the search procedure lies in its ability to find the optimal solution to the problem and in its convergence (Zhang, 2015).
A common mathematical problem across all fields of science and engineering is optimization: finding the best solutions. Optimization algorithms can be deterministic or stochastic. Existing methods for solving optimization problems can require huge computational costs, which motivates approaches such as stochastic optimization (Parkinson et al., 2013). A better way to solve such problems is to use meta-heuristics based on iterative improvement of a population of solutions, as in both evolutionary and swarm algorithms (Kees, 2012). This paper proposes a novel algorithm for solving optimization problems inspired by the random explosion of a star and the birth of new stars, known as a supernova. The applicability of SO has been tested by implementing the algorithm in MATLAB to demonstrate its performance and effectiveness in solving optimization problems. The algorithm has been tested on 20 well-known benchmark functions from the CEC2005 suite, and the results have been verified by a comparative study with the state-of-the-art optimization algorithms GWO, SCA, MVO, MFO, and WOA, as well as with the first swarm optimization algorithm, PSO. The experimental results and the comparisons with other algorithms show the effectiveness of the proposed optimizer.
The rest of this paper is organized as follows. Section 2 presents related work on swarm intelligence and bio-inspired algorithms and their achievements in solving optimization problems. Section 3 describes the proposed SO algorithm. Section 4 provides experimental results and a comparison of SO with other meta-heuristics. Section 5 describes the stability of SO. Section 6 presents the convergence analysis. Finally, conclusions and future work are presented in Section 7.

Related Work
Many algorithms have been inspired by nature; Table 1 summarizes the most common nature-inspired optimizers. A brief description of some of these algorithms is given in this section. Ant Colony Optimization (ACO) (Dorigo, 1992), also known as the ant algorithm, is a probabilistic technique for finding optimal paths in graphs. It was introduced by Marco Dorigo in 1992 in his doctoral dissertation, "Ant system: optimization by a colony of cooperating agents", inspired by the behavior of ants searching for paths to food.
Simulated Annealing (SA) (Khachaturyan et al., 1979) is a global optimization method that traverses the search space by generating neighbors of the current solution. An improving neighbor is always accepted, while worse neighbors may be accepted with a probability that depends on the quality difference and on a temperature parameter. The temperature parameter is modified as the algorithm progresses to change the nature of the search. Tabu Search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. However, while simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the best of them. To prevent cycling and to promote greater progress through the solution space, a tabu list of partial or complete solutions is maintained; moves to elements on the tabu list are forbidden, and the list is continually updated as the search proceeds. The Hungarian algorithm is a combinatorial optimization algorithm that solves the assignment problem in polynomial time and anticipated later primal-dual methods. The American mathematician Harold Kuhn proposed the algorithm in 1955; it is called the Hungarian algorithm because it builds largely on the earlier work of the Hungarian mathematicians Dénes Kőnig and Jenő Egerváry (Burkard, 2012) (Martello, 2010). The Differential Evolution (DE) algorithm is a meta-heuristic for optimization problems. Essentially, it is a greedy, real-coded evolutionary algorithm with an elitist strategy (Das, 2011). DE is similar to the genetic algorithm in that it includes mutation, crossover, and selection; it differs in that its mutation is derived from the difference between two members of the population, added to the current solution, so DE does not need a probability distribution to produce the next-generation solutions. Yang (2009) developed the Cuckoo Search Algorithm (CSA), a meta-heuristic that imitates the brood parasitism of some cuckoo species during egg laying. Some cuckoo species lay eggs in communal nests together with other cuckoos, and may throw out competitors' eggs to increase the hatching probability of their own chicks. A number of cuckoo species practice nest parasitism, laying eggs in the nests of other birds, often of other species. Host birds may resist such an invasion: if the host discovers eggs of another species in its nest, it either throws them out or abandons the nest entirely and builds a new one elsewhere. Janson et al. (2005) proposed the Hierarchical Particle Swarm Optimizer (HPSO), which uses a dynamic hierarchical tree as the neighborhood structure: better particles occupy the upper layers, the velocity of each particle is updated using its own historical best position and the best position of its parent in the tree, and particle positions are determined accordingly. Sequential Minimal Optimization (SMO) is an algorithm for solving the quadratic programming problem that arises in training support vector machines. SMO was invented by John Platt at Microsoft Research in 1998; it is widely used in SVM training and is implemented in the popular SVM library LIBSVM. Its publication in 1998 caused a stir in the field of SVM research, since previously available SVM training methods required sophisticated and expensive third-party quadratic programming tools; the SMO algorithm avoids this problem.
The Grey Wolf Optimizer (GWO) is a meta-heuristic inspired by grey wolves. It mimics the leadership hierarchy and hunting mechanism of grey wolves in nature: four types of grey wolves, alpha, beta, delta, and omega, are employed to simulate the leadership hierarchy. Grey wolves mostly search according to the positions of the alpha, beta, and delta; they diverge from each other to search for prey and converge to attack it. To model divergence mathematically, coefficients with random values greater than 1 or less than -1 oblige the search agents to diverge from the prey; this emphasizes exploration and allows GWO to search globally. The Whale Optimization Algorithm (WOA) mimics the social behavior of humpback whales and is inspired by their bubble-net hunting strategy (Mirjalili and Lewis, 2016). The Sine Cosine Algorithm (SCA) is a recent population-based optimization algorithm: it creates multiple initial random candidate solutions and makes them fluctuate outwards or towards the best solution using a mathematical model based on sine and cosine functions (Mirjalili, 2016). The Multi-Verse Optimizer (MVO) is a nature-inspired algorithm for global optimization whose main inspirations are three concepts in cosmology: the white hole, the black hole, and the wormhole. Mathematical models of these three concepts perform exploration, exploitation, and local search, respectively (Mirjalili et al., 2015). Khanesar et al. (2007) proposed a novel binary Particle Swarm Optimization with a new interpretation of the velocity of binary PSO as the rate of change in the bits of particles; a number of benchmark optimization problems were solved using this concept with quite satisfactory results.
Al-Sayyed and Fakhouri (2017) introduced a novel optimization algorithm called POLARPSO that enhances the behavior of PSO and avoids the local-minima problem by using a polar function to search more points in the search space.
Particle Swarm Optimization (PSO) is a method of numerical optimization that does not require the gradient of the optimized function. It belongs to the family of swarm intelligence algorithms, which describe the collective behavior of decentralized, self-organizing systems. Swarm intelligence systems, as a rule, consist of many agents interacting locally with each other and with the environment. The agents themselves are usually quite simple, but together, through local interactions, they create so-called swarm intelligence. The method was first developed to simulate the social behavior of flocks of birds and schools of fish; as it developed, it was successfully applied to finding the extrema of functions. Niu et al. (2007) used a master-slave model containing one master group and several slave groups, in which the master group searches on the basis of the best positions provided by the slaves. Thi Thanh Binh (2013) proposed a new hybrid particle swarm optimization algorithm, called HGAPSO, for solving the MAEDP.

Supernova Optimizer (SO)
In this section we explain the inspiration for the proposed algorithm, its mathematical model, and its pseudocode.

Inspiration
A supernova is a transient astronomical event that occurs during the last stellar evolutionary stages of a massive star's life, whose dramatic and catastrophic destruction is marked by one final titanic explosion. This causes the sudden appearance of a "new" bright star, which slowly fades from sight over several weeks or months. Supernovae may expel much, if not all, of a star's material at velocities up to 30,000 km/s, or 10% of the speed of light.
A supernova is triggered by one of two basic mechanisms: the sudden re-ignition of nuclear fusion in a degenerate star, or the sudden gravitational collapse of a massive star's core. In the first case, a degenerate white dwarf may accumulate sufficient material from a binary companion, either through accretion or via a merger, to raise its core temperature enough to trigger runaway nuclear fusion, completely disrupting the star.
In the second case, the core of a massive star may undergo sudden gravitational collapse, releasing gravitational potential energy as a supernova.While some observed supernovae are more complex than these two simplified theories, the astrophysical collapse mechanics have been established and accepted by most astronomers for some time.
Due to the wide range of astrophysical consequences of these events, astronomers now regard supernova research, across the fields of stellar and galactic evolution, as an especially important area of investigation. This inspired us to represent supernova theory as a mathematical model for particle movement in the search space of an optimization problem and for the generation of new particles, as described in Section 3.2.

Mathematical Model of Supernova
We chose three main operations from supernova theory as the inspiration for the supernova algorithm. First, the core of a massive star may undergo sudden gravitational collapse, releasing gravitational potential energy as a supernova; this process is applied to the swarm of particles as shown in Equation 1, where the particles represent stars and the operations describe the random explosion of stars and the random birth of new stars. Second, a degenerate white dwarf may accumulate sufficient material from a binary companion. Third, its core temperature may rise enough to trigger runaway nuclear fusion, completely disrupting the star, as shown in Equation 2.
GC = GCI(noP, dim, ub, lb, SP)    (Equation 1)

where GC is the gravitational collapse that results in the birth of new stars, GCI is the generation of new stars, noP is the number of generated stars, dim is the dimension of the space into which the stars are scattered, ub is the upper bound of the space, lb is the lower bound of the space, and SP is the random position of the generated stars described in Equation 2.

SP = rand() * (ub - lb) + lb + GSP    (Equation 2)

where SP is the new particle position, which represents the local best value at each iteration of the supernova algorithm; GSP is the global supernova value; rand() is a random function that generates numbers between 0 and 1; and ub and lb are the upper and lower bounds of the space.

SO increases the ability of exploitation and exploration by providing more exploring points. To improve the optimization and search ability, we enhanced the search of supernova at each iteration, inspired by the explosion idea and by randomness: each particle's best position may have a nearby position with a better value, and this new value becomes the start of a new particle (star). When exploited, it yields new particles that enlarge the search space and enhance the exploration feature, as described in Equation 2. This extends the search to more points and directions of particle movement, which increases the possibility of finding the global minimum and avoids getting stuck at a single local minimum. After the initialization phase, each particle has a velocity vector and a position; once the control parameter has been assigned to each individual, a new position is given to each particle according to the best solutions gathered from the previous generation, according to Equation 3.

NP = rand() * Pi(G)    (Equation 3)

where NP is the best new position, rand() is a random value in [0, 1], and Pi(G) is the best value obtained by particle i in the previous generation G. After calculating the best position, all particle positions are updated relative to the new position; this is performed by re-initializing all the particles according to Equation 2. Each individual is given a random initial value and position; the best position assembles the best features transferred from one generation to another. The value rand() ∈ [0, 1] controls the magnitude of the random offset added to the best solution. If a generated value falls outside [0, 1], it is replaced by the limit value (0 or 1) closest to the generated value: when rand() > 1 the value is truncated to 1, and when rand() ≤ 0 generation is repeatedly applied until a valid value is produced.
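As a concrete illustration, the star-generation step of Equations 1 and 2 can be sketched in Python as follows. This is a minimal NumPy sketch under our reading of the equations; beyond the symbols taken from the equations (noP, dim, ub, lb, SP, GSP), the function name and the clamping details are our assumptions, not the paper's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_stars(noP, dim, ub, lb, GSP):
    """Gravitational-collapse step (Equation 1): generate noP new stars.

    Each star's position follows Equation 2:
        SP = rand() * (ub - lb) + lb + GSP
    where GSP is the global supernova (best) position found so far.
    """
    SP = rng.random((noP, dim)) * (ub - lb) + lb + GSP
    # Positions pushed outside the feasible box by the GSP offset are
    # truncated back to the nearest bound, as described in the text.
    return np.clip(SP, lb, ub)
```

With GSP at the origin this reduces to uniform sampling in [lb, ub]; a nonzero GSP biases the newly born stars toward the current best region.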

Iteration: Generating New Particles by Using Best Solution (Star Explosion)
In generation G, at iteration i, the values stored in memory are updated to start from a new position near the previously obtained best value NP. At the beginning of the search each particle is initialized randomly, but through Equation 2 the particles are subsequently re-initialized so as to benefit from the best value (GSP) found so far at each iteration. Note that all particles (stars) are always regenerated to new random positions near the exploding star, i.e., near the best value; this avoids the local-optima problem and gives the particles new values for exploitation. At each iteration the fitness value is calculated according to the tested benchmark function. The pseudocode of the SO algorithm is shown in Figure 4.
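The iteration phase described above can be sketched end to end as follows. This is a hedged Python sketch of our reading of Equations 2 and 3, not the authors' MATLAB code; the function name, the seed handling, and the decision to apply Equation 3 only on improvement are our assumptions.

```python
import numpy as np

def supernova_optimize(fitness, dim, lb, ub, noP=30, max_iter=200, seed=0):
    """Minimal sketch of the SO iteration loop (Equations 2 and 3).

    At every iteration the whole swarm of stars is regenerated randomly,
    biased toward the global supernova position GSP (Equation 2); the
    memory of the best position is then updated with a randomly scaled
    copy of the best star found, NP = rand() * Pi(G) (Equation 3).
    """
    rng = np.random.default_rng(seed)
    GSP = np.zeros(dim)          # global supernova position (best so far)
    best_fit = np.inf
    for _ in range(max_iter):
        # Equation 2: regenerate all stars near the current best value
        stars = rng.random((noP, dim)) * (ub - lb) + lb + GSP
        stars = np.clip(stars, lb, ub)       # truncate to the bounds
        fits = np.array([fitness(s) for s in stars])
        i = int(np.argmin(fits))
        if fits[i] < best_fit:
            best_fit = float(fits[i])
            # Equation 3: randomly scaled copy of the best position found
            GSP = rng.random(dim) * stars[i]
    return GSP, best_fit
```

On the sphere function f(x) = Σ x², whose minimum lies at the origin, this sketch steadily shrinks the best fitness toward zero as iterations accumulate.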

Experimental Results and Discussion
The performance of SO has been evaluated on the CEC2005 Special Session on Real-Parameter Optimization benchmark suite, and SO was then compared to state-of-the-art algorithms including GWO, SCA, MFO, MVO, WOA, and PSO. We performed our evaluation on 20 benchmark functions, with dimension D = 30 for F1 to F13, and the maximum number of objective-function calls per run was D × 1,000 (i.e., 30,000). The number of runs per problem was 30, and the average performance and standard deviation over these runs were evaluated. The benchmark set consists of 20 test functions described in Table 1. Functions F1 to F5 are unimodal functions, shown in Table 2; F6 to F12 are multimodal functions, also shown in Table 2; F13 to F14 are expanded functions, shown in Table 3; finally, F15 to F20, shown in Table 5, are hybrid composition functions, which combine multiple test problems into a complex landscape.
Our evaluation focuses on the exploitation feature, the exploration feature, and local-minima avoidance. To test exploitation we use the unimodal functions, which are suitable for benchmarking this feature. The results show that SO is better than the compared algorithms in terms of exploitation: it can be inferred from Table 10 that the SO algorithm achieved very competitive results on functions F1, F2, F3, F4, and F5 against all other algorithms (GWO, SCA, PSO, MFO, MVO, WOA). For the exploration feature, the multimodal functions F8 to F20 have many local optima, with the number increasing exponentially with dimension, which makes them suitable for testing the exploration behavior of an algorithm. The results in Tables 11 to 13 show that SO performed very strongly against the other algorithms on the basic multimodal benchmark functions and gave fairly competitive results on the expanded multimodal benchmark functions; it can be inferred that SO has a very good exploration feature.

Experimental Parameters
We performed our evaluation on the 20 benchmark functions. The maximum number of iterations was 1000. The number of runs per problem was 30, and the average performance and standard deviation of these runs were collected for evaluation purposes, as shown in Table 3, which describes the experimental parameters.
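This evaluation protocol, 30 independent runs per function with MEAN/STD/MIN/MAX statistics collected over the runs, can be sketched as follows. The helper names and the toy random-search objective are ours, for illustration only; any stochastic optimizer returning a best fitness per run could be plugged in.

```python
import numpy as np

def evaluate_runs(optimizer, runs=30):
    """Collect MEAN/STD/MIN/MAX statistics over independent runs,
    mirroring the experimental protocol described above."""
    bests = np.array([optimizer(seed) for seed in range(runs)])
    return {"mean": float(bests.mean()), "std": float(bests.std()),
            "min": float(bests.min()), "max": float(bests.max())}

def toy_optimizer(seed, dim=30, evals=1000):
    """Illustrative random-search stand-in for a stochastic optimizer:
    best sphere-function value over `evals` uniform samples in [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-5.0, 5.0, size=(evals, dim))
    return float(np.min(np.sum(pts * pts, axis=1)))
```

Seeding each run differently keeps the runs independent while making the whole table reproducible.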


Comparison with Other Algorithms
We compared SO with GWO, SCA, PSO, and MFO on the CEC2005 benchmark set, especially the new hybrid functions. The results for dimension D = 30 on the tested functions, and the overall results on all 20 functions, are shown in Table 4.

Unimodal Test functions and Exploitation
The results in Table 7 show that the proposed algorithm provides very competitive results on the unimodal test functions. This testifies that the proposed algorithm has a high exploitation ability.

Multi-Modal Test Functions and Exploration Analysis
Having discussed the exploitation feature of SO, we now turn to its exploration feature, referring to the results obtained on the multimodal test functions in Tables 8 and 9. These results show that the proposed algorithm exhibits a very good exploration behavior; it may be observed that SO is better than the other algorithms on F9, F10, and F11. The results indicate that SO is competitive in exploration. Exploration always comes with local-optima avoidance; in fact, stagnation in local solutions can be resolved by promoting exploration. In addition to the discussion in the preceding paragraph, the local-optima avoidance of SO is also competitive.

Composite Test Functions
This subsection considers the balance between exploration and exploitation, given the difficulty of this set of test functions. The results of the supernova algorithm were not very competitive on this set of functions.

Stability of SO
The standard deviation values for all functions from F1 to F20 are shown in Table 11. Except for F8, they are all near zero, which means that SO is stable over the 30 experiments performed. The reason for the high standard deviation on F8 is the very large values in its search space; see the corresponding figure.

Convergence Analysis
To confirm the convergence of the proposed algorithm, we provide the convergence curves in Figure 6. They show that the SO algorithm successfully improves the fitness and finds better solutions as the iterations increase; this is because it searches for the best global minimum generated by the particles that simulate the supernova phenomenon. It can also be observed from Figure 2 that the supernova optimizer is the fastest and most effective of the compared algorithms (GWO, SCA, MVO, MFO, WOA, and PSO), except for the POLARPSO algorithm, when tested on the benchmark functions F9 to F11, which belong to the multimodal set (Shifted Rastrigin's Function, Shifted Rotated Rastrigin's Function, and Shifted Rotated Weierstrass Function, respectively). The analysis of the results also shows that the supernova optimizer achieves a competitive and effective performance, almost similar to the compared algorithms, on functions F14, F16, and F17, which represent more complex problems from the sets of multimodal expanded functions and hybrid composition functions (Shifted Rotated Expanded Scaffer's Function, Rotated Hybrid Composition Function with the Global Optimum on the Bounds, Rotated Hybrid Composition Function with High Condition Number Matrix, and Non-Continuous Rotated Hybrid Composition Function).

Conclusion
This paper proposed a novel meta-heuristic optimizer for function optimization inspired by supernova theory. The proposed Supernova Optimizer (SO) was evaluated on 20 mathematical benchmark functions from the CEC2005 suite, because they provide a good test suite for the exploration, exploitation, and local-optima avoidance of optimization algorithms, and we compared our results to the state-of-the-art GWO, SCA, MFO, MVO, WOA, POLARPSO, and PSO meta-heuristic algorithms. The experimental results showed that SO optimizes the functions in a powerful and competitive way compared to the other algorithms, as illustrated in Tables 3 to 6. It significantly improves optimization performance, mostly in the exploitation feature, as shown in the convergence analysis; hence it outperformed most of the compared algorithms.

Figure 2. Star Explosion Expected to Create Spectacular Light (space.com, 2017)

The proposed algorithm consists of two phases: the initialization phase and the iteration phase.

3.2.1.1 Initialization
Similar to other swarm optimization algorithms for numerical optimization, the population is represented as a set of real-parameter vectors xi = (x1, ..., xD), i = 1, ..., N, where D is the dimensionality of the target problem and N is the population size. At the beginning of the search, the individual vectors xi in the population are initialized randomly. Then, vector generation and selection are repeated until a termination criterion is met.
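A minimal sketch of this initialization phase, assuming uniform sampling within the box bounds (the function name and signature are our own illustration):

```python
import numpy as np

def init_population(N, D, lb, ub, seed=0):
    """Randomly initialize N real-parameter vectors x_i in [lb, ub]^D."""
    rng = np.random.default_rng(seed)
    return lb + rng.random((N, D)) * (ub - lb)
```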
Figure 4. Pseudocode of the algorithm

F17: Rotated Hybrid Composition Function with the Global Optimum on the Bounds
F18: Rotated Hybrid Composition Function
F19: Rotated Hybrid Composition Function with High Condition Number Matrix
F20: Non-Continuous Rotated Hybrid Composition Function

Table 1. Most common nature-inspired optimizers.

It all began with the study of the behavior of real ants. Experiments with Argentine ants, conducted by Goss in 1989 and Deneubourg in 1990, served as a starting point for further study of swarm intelligence. The idea is due to Marco Dorigo of the University of Brussels, Belgium: he was the first to formalize the behavior of ants and apply their strategy to the solution of the shortest-path problem, yielding Ant Colony Optimization (ACO).

Table 2 .
Bench Mark Functions

Table 4 to Table 6. Due to space limitations, the tables show only the aggregate results (MEAN, STD, MIN, and MAX values over 30 runs) of comparing each algorithm (GWO, SCA, etc.) against SO.

Table 7 .
Results of unimodal benchmark functions F1 to F7.

Table 8 .
Results of basic multimodal benchmark functions F8 to F12.

Table 10 .
Results of Hybrid Composition benchmark functions F15 to F20

Table 11 .
Standard deviation values for all functions from F1 to F20.