Moth Flame Optimization Based on Golden Section Search and its Application for Link Prediction Problem

Moth Flame Optimization (MFO) is a recently proposed meta-heuristic algorithm. MFO is inspired by the navigation method of moths in the natural world, which is called transverse orientation. This paper presents an improvement of the MFO algorithm based on the Golden Section Search method (GSS), named GMFO. GSS is a search method that locates the best maximum or minimum point in the problem search space by iteratively narrowing the interval containing this point until a given accuracy is reached. In this paper, the GMFO algorithm is tested on fifteen benchmark functions. GMFO is then applied to the link prediction problem on five datasets and compared with other well-regarded meta-heuristic algorithms. The link prediction problem is concerned with predicting how likely a connection between two nodes of a network is to appear, given that no connection exists between these nodes in the present state of the network. Based on the experimental results, the GMFO algorithm significantly improves on the original MFO in solving most of the benchmark functions, and provides more accurate prediction results for the link prediction problem on the majority of the datasets.


Introduction
In order to improve the performance of the Moth Flame Optimization (MFO) algorithm, the golden section search (GSS) strategy is utilized to develop a new version of MFO, called Golden Moth Flame Optimization (GMFO). GMFO focuses on enhancing the convergence rate of MFO by supporting the exploration mechanism through increased diversification of the search space (population), which in turn allows stronger intensification toward the best solution obtained so far in each iteration.
As noted in Mirjalili (2015), MFO's search agents spend a large number of iterations exploring the search space to keep away from local solutions. In this way, the exploitation of the MFO algorithm slows down, preventing the algorithm from locating a much better approximation of the global solution. GMFO is therefore proposed to provide a mechanism that concentrates on directing the search agents toward the most promising region of the search space, where the best solution can be found. Accordingly, the number of iterations needed to reach the optimal solution is reduced significantly and convergence is improved.
To achieve this improvement, the GSS features are incorporated into MFO to produce the proposed GMFO algorithm. The use of GSS in MFO focuses on updating the current best search space generated so far, by applying additional exploration to it alongside the local search performed by the original MFO, such as the logarithmic spiral. Performing this additional exploration of the search space makes returning to the same solution much less likely. Moreover, GSS ensures high diversification; thus it prevents solutions from getting trapped in local optima and keeps the search directed toward the global optimum.

Moth Flame Optimization (MFO)
MFO, as presented in Mirjalili (2015), is a nature-inspired meta-heuristic optimization paradigm. The MFO optimizer is inspired by the navigation method of moths in the natural world. Moths use two navigation mechanisms; the first is called transverse orientation. In this mode, a moth moves by trying to keep a fixed angle with respect to the light source (moths are known to be attracted to light sources). Given that the light source is distant from the moth, keeping the same angle with respect to the light guarantees that the moth flies in a straight line. In addition to moving in a straight line, a moth usually flies spirally around the region of the light, which constitutes the second navigation mechanism. Therefore, in the long run, the moth converges on its way toward the light. Mirjalili (2015) utilized these two navigation methods, transverse and spiral navigation, to develop the Moth Flame Optimization algorithm. A conceptual model of these two navigation methods is illustrated in Figure 1.
Figure 1. The transverse movement in a straight line (the red arrow) and the spiral movement of a moth (Mirjalili, 2015)
The general framework of MFO has three main stages. The first is the initialization stage, where MFO randomly produces a population of moths and computes their fitness values. The second is the iteration stage, where the main procedure is carried out and the moths move within the search space. In the final stage, the stop criterion is checked: the algorithm terminates when it is met and continues iterating otherwise.
In the MFO algorithm (Mirjalili, 2015), the main method for updating the moths' positions in the search space is the logarithmic spiral. The movement along this spiral is the core of MFO, since it states how the moths update their locations in the region of the flames. Moving along this spiral lets a moth fly around a flame and not necessarily along the straight gap between them. For that reason, both the exploration and exploitation of the search space remain reliable. The logarithmic spiral is defined as in formula (1) (Mirjalili, 2015):

M(i, j) = D(i, j) · e^(b·t) · cos(2π·t) + F(i, j)     (1)
Here M(i, j) indicates the jth position of the ith moth, F(i, j) indicates the jth position of the ith flame, and D represents the distance of the ith moth from the jth flame. b is a constant defining the shape of the logarithmic spiral, and t is a random number in [-1, 1]. D is calculated as in formula (2):

D(i, j) = |F(i, j) − M(i, j)|     (2)

Figure 2 shows the pseudo code of the MFO algorithm.
Figure 2. The pseudo code of the MFO algorithm (Mirjalili, 2015)
The MFO algorithm was compared with other well-known nature-inspired algorithms on 29 benchmark functions and 7 real engineering problems: the welded beam, gear train, three-bar truss, pressure vessel, cantilever beam, and I-beam design problems, and the tension/compression spring design (Mirjalili, 2015). The statistical results on the benchmark functions demonstrate that MFO is capable of offering competitive results. Likewise, the results on the real problems show the advantages of this algorithm in solving difficult problems with constrained search spaces.
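For illustration, the spiral update of formulas (1) and (2) can be sketched in Python as follows. This is a minimal NumPy sketch under our own naming; the vectorized form and the default b = 1 are assumptions, not part of the paper's code.

```python
import numpy as np

def spiral_update(moth, flame, b=1.0, rng=None):
    """One logarithmic-spiral move of a moth around a flame, per formulas (1)-(2)."""
    rng = rng if rng is not None else np.random.default_rng()
    D = np.abs(flame - moth)                      # formula (2): distance to the flame
    t = rng.uniform(-1.0, 1.0, size=moth.shape)   # random number in [-1, 1]
    return D * np.exp(b * t) * np.cos(2 * np.pi * t) + flame  # formula (1)
```

Note that when the moth coincides with the flame, D is zero and the update leaves the moth at the flame, which matches the convergence behavior described above.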

Golden Section Search (GSS)
GSS is a search method that locates the best maximum or minimum point of a one-dimensional function by iteratively narrowing the interval containing this point until a given accuracy is reached (Press et al., 2007).
To understand the concept of the GSS method and how it can be applied (Press et al., 2007), suppose there is a function f whose domain is represented by x, where x ∈ [x_min, x_max], with x_min the lower bound of the domain and x_max the upper bound. The interval [x_min, x_max] is assumed to enclose the best maximum or minimum point of f. This interval is divided into three regions by adding two points from the function domain located within the interval: the internal points x1 and x2, where x1, x2 ∈ [x_min, x_max] and x1 < x2. See Figures 3.a and 3.b.
After that, the function is evaluated at these two internal points (Press et al., 2007). In this case, we are looking for the promising interval in which the best minimum point of f is believed to lie. The evaluation is carried out as follows. If f(x1) < f(x2), the minimum lies between x_min and x2; so x_min stays the same while x2 becomes the new x_max. If instead f(x1) > f(x2), the minimum lies between x1 and x_max; so x1 becomes the new x_min while x_max remains the same. The new, smaller interval resulting from this evaluation is again divided into three sections. The most important question that arises here is where to divide the interval, i.e., where to place the intermediate points x1 and x2. The answer is given by the golden ratio.

Golden Ratio (GR)
Golden section search has this name because it depends on a special ratio, namely the golden ratio. The golden ratio is used to place the intermediate points x1 and x2, which split the given interval into a smaller one, narrowing the search space and determining the next promising interval in which the best solution is expected to lie (Press et al., 2007).
To understand the concept of the Golden Ratio, set the following conditions, based on Figure 3.a, as in formulas ( 3) and (4) (Press et al., 2007).
Notice that the two ratios in formula (4) are equal. They share a special value, called the golden ratio GR. To determine its value, formula (3) can be substituted into formula (4) to give formula (5). Solving the quadratic equation in formula (5) for GR and taking the positive root gives the value of GR as in formula (6) (Press et al., 2007):

GR = (√5 − 1) / 2 ≈ 0.61803     (6)

Consequently, the intermediate points x1 and x2 are placed so that the ratio of the distance from these points to the ends of the given interval (which represents the search region) equals the golden ratio. Thus x1 and x2 can be computed as in formulas (7) and (8), which means each new interval is GR times the length of the previous one.
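Formulas (6) through (8) and the interval update of Figure 3 can be combined into a one-dimensional minimizer. The following is a minimal sketch under our own naming; the tolerance parameter is an assumption.

```python
import math

GR = (math.sqrt(5) - 1) / 2  # golden ratio, formula (6): ~0.61803

def golden_section_min(f, x_min, x_max, tol=1e-6):
    """Narrow [x_min, x_max] around the minimum of a unimodal function f."""
    while x_max - x_min > tol:
        x1 = x_max - GR * (x_max - x_min)  # formula (7)
        x2 = x_min + GR * (x_max - x_min)  # formula (8)
        if f(x1) < f(x2):
            x_max = x2   # minimum lies in [x_min, x2]
        else:
            x_min = x1   # minimum lies in [x1, x_max]
    return 0.5 * (x_min + x_max)
```

Because each step keeps GR times the previous interval, the bracket shrinks geometrically until the requested accuracy is reached.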
Accordingly, this study presents, illustrates, and tests the Golden Moth Flame Optimization algorithm (GMFO), an improvement of the well-known MFO algorithm (Mirjalili, 2015). The performance of the two algorithms is investigated when solving the link prediction problem.

Link prediction Problem
Generally, graphs provide a natural abstraction for representing interactions between different entities in a network (Srinivas and Mitra, 2016; Sharieh et al., 2008; Barham et al., 2016). Any real or synthetic network can be represented as a graph of nodes and edges. In a social network, users are represented as nodes, while the interactions between these users, whether associations, collaborations, or influences, are represented as edges between those nodes (Liben-Nowell and Kleinberg, 2007). For instance, given a snapshot of a social network at time t0, link prediction seeks to accurately predict the edges that will be added to the network during the interval from time t0 to a given future time t (Liben-Nowell and Kleinberg, 2007). The link prediction problem is thus related to the problem of inferring missing links from an observed network (Liben-Nowell and Kleinberg, 2007).
The importance of link prediction lies in its wide variety of applications. Graphs are used to represent social networks, transportation networks, and disease networks (Srinivas and Mitra, 2016; Barham et al., in press). Link prediction can be applied to such networks to analyze and solve interesting problems such as predicting the occurrence of a disease, managing privacy in networks, detecting spam emails, and suggesting alternative routes for navigation based on current traffic models (Srinivas and Mitra, 2016).
More formally, the link prediction task can be stated as follows (Liben-Nowell and Kleinberg, 2007). An unweighted, undirected graph G = (V, E) represents the topological structure of a network, such as a social network, where V is the set of nodes in G and E is the set of existing edges in G. Each edge e = (u, v) ∈ E, with nodes u, v ∈ V, represents an interaction between u and v that took place at a particular time t(e). For two time instances t and t′ with t′ > t, let G[t, t′] denote the sub-graph of G consisting of all edges with timestamps between t and t′. Let t0, t′0, t1, and t′1 be four time instances with t0 < t′0 ≤ t1 < t′1. Then, for a given network G[t0, t′0], the output is a list of edges not present in G[t0, t′0] that are predicted to appear in the network G[t1, t′1].
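This temporal formulation can be sketched in Python as follows. The helper name and the half-open interval convention are our own assumptions; the paper only gives the formal definition.

```python
def temporal_split(edges, t0, t0p, t1, t1p):
    """Split timestamped edges (u, v, t) into the observed graph G[t0, t0')
    and the edges of G[t1, t1') that a link predictor should recover."""
    observed = {(u, v) for (u, v, t) in edges if t0 <= t < t0p}
    future = {(u, v) for (u, v, t) in edges if t1 <= t < t1p}
    # The prediction targets are future edges not already present when observed.
    return observed, future - observed
```

For example, with edges [("a", "b", 1), ("b", "c", 2), ("a", "c", 5)] and the split t0 = 0, t′0 = 3, t1 = 3, t′1 = 6, the observed graph contains ("a", "b") and ("b", "c"), and the edge to be predicted is ("a", "c").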
The rest of the paper is organized as follows. Section 2 reviews related works that benefit from the golden section search method. Section 3 illustrates the GMFO methodology. Section 4 presents and discusses the experimental results of GMFO and MFO on 15 benchmark functions. Conclusions and future work are given in Section 5.

Related Works
Due to the importance of the golden section search and the golden ratio in mathematics and optimization, and their applications in different fields, various approaches have been proposed that utilize them. This section presents some of these works to demonstrate the success of GSS across a variety of research areas, along with works that deploy MFO to solve particular problems.
Patel et al. (2013) applied GSS to maximum power point tracking (MPPT) for solar photovoltaic (PV) systems. Their approach benefits from GSS in its fast response, robust performance, and guaranteed convergence. They conclude that GSS can be a competitive method for PV generation systems because of its good performance.
Djeriou et al. (2018) applied a maximum power point tracking procedure, based on the golden section search optimization method, to a stand-alone solar water pumping system to improve its overall operating efficiency. The GSS method provides two advantages, speed and freedom from perturbation, both of which affect the overall and direct effectiveness of the solar water pumping system.
Liang et al. (2016) examined the distinctive elements of gait with and without the effect of clothing. They proposed a golden-ratio-based segmentation method to decrease the influence of clothing, and their experimental results showed that it outperforms other segmentation approaches. They found that the key problem is to identify the exact clothed part and discard it. Considering that the human body conforms to the golden ratio, and that clothing is designed according to this ratio, the golden ratio segmentation technique is used to remove the effect of clothing.
Hassaballah et al. (2013) proposed a new approach for face detection evaluation based on the golden ratio. The new evaluation measure is more practical and precise than the existing one for face detection. They conclude that "the golden ratio helps in estimating the face size according to the distance between the centers of eyes".
Böcker (2012) proposed an algorithm based on the golden ratio for the NP-complete cluster editing problem, presenting a search tree algorithm that repeatedly branches on vertices that can be isolated, which improves the running time.
Tsai et al. (2010) employed the golden section search algorithm to determine a good shape parameter of multiquadrics for the solution of partial differential equations. Experimental results show that the golden section search method is valuable and gives a reasonable shape parameter along with satisfactory precision of the solution.
Koupaei et al. (2016) proposed a practical version of the golden section search algorithm for optimizing objective functions. Their work presented an algorithm that takes advantage of the capabilities of both chaotic maps and the golden section search method to solve nonlinear optimization problems. In practice, the proposed algorithm reduces the search space using chaotic maps and then solves the optimization problem on the new promising space using the golden section search technique.
Due to the success achieved by GSS in most approaches proposed in the literature, this work aims to improve the MFO algorithm by applying the golden section search, and to use the result for solving the link prediction problem.

Methodology: Golden Moth Flame Optimization (GMFO)
As discussed in the introduction, MFO's search agents spend a large number of iterations exploring the search space to keep away from local solutions, which slows down exploitation and prevents the algorithm from locating a much better approximation of the global solution (Mirjalili, 2015). GMFO addresses this by directing the search agents toward the most promising region of the search space, significantly reducing the number of iterations needed to reach the best solution and improving convergence.
To achieve this, the golden section search (GSS) features are incorporated into MFO to produce the proposed GMFO algorithm. GSS updates the current best search space generated so far by applying additional exploration to it, alongside the local search performed by the original MFO (the logarithmic spiral). This extra exploration makes returning to the same solution much less likely, while the high diversification ensured by GSS prevents solutions from getting trapped in local optima and keeps the search directed toward the global optimum.
In other words, applying the golden section search within MFO can be viewed as a process for updating the search space, where the search space represents the generated population. While searching, GSS tries to find the promising region of the search space in which the best solution is expected to lie, and so helps to generate a more promising population. Once the promising region is reached, the search for a solution better than the current best is performed there. This accelerates the process of finding the best solution by considering the most promising population in each iteration, which translates into improved convergence behavior of the MFO algorithm. Table 1 summarizes the main concepts of applying GSS within GMFO.
Table 1.The main concepts of the GMFO derived from the Golden Section Search

Golden Section Search concept → GMFO counterpart
Function domain → The initial search space (population)
Divide the domain into sections → Explore and update the search space
Find the smaller interval from the domain → Get the best region of the search space (promising population)

A good search-space exploration mechanism is provided by the golden section search, which ensures movement toward the global optimum, so the search does not return to the same solutions again. Golden section search also ensures high diversity, preventing solutions from getting trapped in local optima (Press et al., 2007). The pseudo code of the Golden Section Search function adapted for GMFO and of the GMFO algorithm are demonstrated in Figures 4 and 5, respectively.

Step 1: Consider x_min, x_max as the current search-space limits
Step 2: Compute the intermediate points x1 and x2 using formulas (7) and (8)  // create new search-space limits
Step 3: Evaluate the cost function at x1 and x2 to obtain f(x1) and f(x2)
Step 4: If f(x_best) < f(x1) then x1 = x_best; if f(x_best) < f(x2) then x2 = x_best
Step 5: Update the promising interval according to x1 and x2:
        if f(x1) < f(x2) then x_min* = x_min; x_max* = x2
        else x_min* = x1; x_max* = x_max

Figure 5. The pseudo code of the GMFO algorithm
The GMFO algorithm starts by initializing a number of parameters, such as the number of search agents, the maximum number of iterations, and the number of problem variables, and by setting the golden ratio to 0.61803 (Figure 5, line 1). After that, the upper and lower limits of each variable of the underlying dataset are specified (Figure 5, line 2). These limits form the initial domain, or search space, in which the search agents try to find the best solution. Based on the defined interval, a population X is generated; its number of candidate solutions is given by the searchAgents_no parameter, and each candidate solution is represented by a vector of size dim (the number of problem variables or features); see Figure 5, line 3. Then the main loop begins, with reaching the maximum number of iterations as its stop criterion (Figure 5, lines 4 and 24). After that, the flame number is computed. There is no need to call the golden section search function in the first iteration, because the initial domain limits x_min and x_max are available. From the second iteration onward, the if statement (Figure 5, line 6) becomes true, and the golden section search function of Figure 4 is called.
The golden section search function uses x_min and x_max (the current domain limits), GR (the golden ratio, 0.61803), and f(x_best), the fitness value of the best solution x_best obtained so far. The goal is to update the current interval [x_min, x_max] to a smaller one, [x_min*, x_max*], which can be considered the promising interval because the best solution is expected to lie within it (see line 7, Figure 5).
As shown in Figure 4, the golden section search, which represents the core of GMFO, proceeds as follows. Based on the current search-space limits x_min and x_max and on GR, the intermediate points x1 and x2 are computed using formulas (7) and (8), respectively (steps 1 and 2, Figure 4). Then the fitness values of x1 and x2 are calculated with the cost function, giving f(x1) and f(x2).
The intermediate points x1 and x2 must then be verified to make sure they lead to the most promising interval, where the best solution is expected to lie, and to avoid getting trapped in local minima. Thus, the fitness values f(x1) and f(x2) are compared with the best fitness obtained so far, f(x_best). Because this work focuses on minimization problems, lower is better. Therefore, if f(x_best) < f(x1), then x1 is set to x_best; otherwise x1 remains unchanged. Likewise, if f(x_best) < f(x2), then x2 is set to x_best; otherwise x2 remains unchanged (step 4, Figure 4).
After updating x1 and x2 against x_best, the promising interval is determined by comparing f(x1) and f(x2). If f(x1) < f(x2), the interval becomes [x_min, x2]; otherwise, it becomes [x1, x_max]. At the end of the function, the updated values of x_min and x_max are returned (step 5, Figure 4). Returning to the GMFO steps of Figure 5, line 11 represents the most important step of the algorithm after applying the GSS function: the population is updated according to the promising interval by normalizing each candidate solution within the promising interval's limits. This update makes the population more promising and increases the probability that it contains the best solution. As a result, the number of iterations needed to reach the best solution decreases and the algorithm converges faster.
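The normalization step of line 11 of Figure 5 can be sketched in Python as follows. The paper does not spell out the normalization formula, so the min-max rescaling below, and the function name, are our assumptions.

```python
import numpy as np

def rescale_population(X, new_min, new_max):
    """Map each candidate solution (rows of X) into the promising interval
    [new_min, new_max] returned by the GSS function (assumed min-max rescaling)."""
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return new_min + (X - lo) / span * (new_max - new_min)
```

After this update every candidate lies inside the promising interval, which is what makes the population "more promising" in the sense described above.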
Next, the fitness value of each member of the updated population is calculated, the population is sorted according to fitness, and the best solution is identified (lines 12-15, Figure 5).
The next population is then generated using formulas (1) and (2). This new generation obtains its own promising interval limits via the GSS function and is updated accordingly. Lines 5-20 of Figure 5 are repeated until the maximum number of iterations is reached, and finally the overall best solution is returned (line 25, Figure 5).

Experimental Results and Discussions
In this section, the experimental results of the proposed GMFO algorithm and the original MFO algorithm on 15 benchmark functions are presented, evaluated, and discussed.

Experiments Platform
All experiments are implemented in Matlab R2017a and conducted on an Intel® Core™ i7-5500 CPU @ 2.40 GHz processor with 8.00 GB RAM, running the 64-bit Microsoft Windows 8 operating system.
Each experiment is repeated 30 times independently. Reaching the maximum number of iterations is adopted as the stopping criterion. The maximum number of iterations is 1000 and the population size is 50.

Benchmark Functions
Optimization benchmark functions are commonly used to measure the performance of algorithms designed for optimization. These functions are a set of well-known mathematical functions with known global optima (BenchmarkFcns, 2018). As in most of the literature, such as Molga and Smutnicki (2005), the proposed GMFO is evaluated on 15 benchmark functions and compared with the original MFO. These test functions are classified into three sets: unimodal, multimodal, and composite.
The unimodal functions (F1-F7), with their numbers of variables (dimensions), ranges, and minimum return values, are listed in Table 2. Unimodal test functions are the appropriate way to benchmark the exploitation of an algorithm, because they contain one global optimum and no local optima. Multimodal functions (F8-F13), listed in Table 3, contain a considerable number of local optima; they are useful for benchmarking the exploration process and the ability to avoid getting trapped in a local optimum.
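The exact functions used are listed in Tables 2 through 4. As an illustration of the two main families, two standard members of this kind of suite, the sphere (unimodal) and Rastrigin (multimodal) functions, can be written as follows; whether they match the paper's F1 and F9 exactly is an assumption based on the common benchmark suite.

```python
import math

def sphere(x):
    """Unimodal: a single global optimum f(0, ..., 0) = 0 and no local optima."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Multimodal: many local optima; global optimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)
```

The sphere function rewards pure exploitation toward its single basin, while Rastrigin's grid of local minima punishes any algorithm whose exploration is too weak.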
Finally, composite functions are mixtures of various rotated, shifted, and biased multimodal benchmark functions. This work employs the composite functions F14 and F15, described in Table 4. Composite benchmark functions are helpful for benchmarking the balance between exploration and exploitation.

Experimental Results of Benchmark Functions and Discussions
Tables 5 through 7 show the results of applying GMFO and MFO to the benchmark functions. Each experiment is performed for 30 independent runs, and the average (Ave) and standard deviation (STD) of the best fitness over these runs are reported. Bold values indicate that the GMFO algorithm is better, while underlined values indicate that the MFO algorithm is better.
A comparison between GMFO and MFO on the unimodal functions (F1-F7) is given in Table 5. The results indicate that GMFO performs better than MFO on most unimodal functions, namely F1-F3 and F5-F7. This indicates that basing MFO on the golden section search improves the exploitation of the algorithm, since unimodal functions are known to contain one global optimum and no local optima.
Table 8 supports this indication by presenting the p-values produced by Wilcoxon's rank-sum test, computed for GMFO against MFO. p-values less than or equal to 0.05 indicate a significant difference between the results of GMFO and MFO, and thus a tangible improvement. Bold values indicate that GMFO is better, while underlining indicates that MFO is better. In addition, a comparison between GMFO and MFO on the multimodal functions (F8-F13) is given in Table 6. The results indicate that GMFO performs better than MFO on most multimodal functions, namely F8-F12. This indicates that GMFO, based on the golden section search, improves the exploration of the search space, since multimodal functions contain a considerable number of local optima and are useful for benchmarking the exploration process and the avoidance of local optima.
Table 8 likewise supports the multimodal results with the p-values of Wilcoxon's rank-sum test computed for GMFO against MFO. For the composite functions (F14 and F15, compared in Table 7), the p-values are also less than or equal to 0.05, indicating significant differences between the results of GMFO and MFO and thus a tangible improvement. This suggests that GMFO, based on the golden section search, provides a balance between the exploration and exploitation of the search space, converging faster while avoiding local optima. In Figures 6(a) through 6(o), the convergence curves of the average fitness value for GMFO and MFO over 1000 iterations are plotted. These figures show that GMFO converges faster than the original MFO algorithm; GMFO is able to search the space faster and find the region where the best solution is expected to lie. This observation supports GMFO being a significant improvement over MFO. Based on the experimental results overall, GMFO performs better than MFO on most unimodal, multimodal, and composite functions. This indicates that basing MFO on the golden section search improves the exploration of the search space and hence the convergence toward the best solution, so GMFO searches the space and finds the best solution more efficiently than MFO.
Figure 6. The convergence curves of the average fitness value over 30 independent runs

Experimental Results for Link Prediction Problem and Discussions
To test the performance of GMFO on the link prediction problem, GMFO and other well-known meta-heuristic algorithms, such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), are run on five datasets and compared in terms of prediction accuracy, measured by the Area Under the Curve (AUC).
To solve the link prediction problem with an optimization algorithm, designs for both the candidate solutions and the fitness function are required. A group of candidate solutions forms the population. In this study, each candidate solution is represented by an adjacency matrix of zeros and ones, where zeros represent missing links and ones represent existing links (Barham and Aljarah, 2017).
These matrices are generated randomly and compared with the actual representation of the underlying network. This comparison measures the prediction accuracy using the AUC metric, which serves as the fitness function for prediction. Computing the AUC requires the confusion matrix shown in Table 9, where TP is the number of correct classifications of positive instances (true positives), FN the number of incorrect classifications of positive instances (false negatives), FP the number of incorrect classifications of negative instances (false positives), and TN the number of correct classifications of negative instances (true negatives).
AUC is commonly used to assess classification results in two-class classification problems. The classifier ranks the test results according to their likelihood of belonging to the positive class, with the most likely positive instances ranked at the top (Lui, 2011). The true positive rate (TPR), which represents recall, is defined as in formula (10), while the false positive rate (FPR) is defined as in formula (11) (Lui, 2011). A greater AUC means a better prediction model. The AUC can be computed using formula (9).
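The quantities above can be sketched in Python as follows. TPR and FPR follow formulas (10) and (11) directly; for formula (9) we assume the standard rank-based AUC used in link prediction, which counts, over all (positive, negative) score pairs, wins as 1 and ties as 0.5.

```python
def tpr(tp, fn):
    return tp / (tp + fn)  # true positive rate (recall), formula (10)

def fpr(fp, tn):
    return fp / (fp + tn)  # false positive rate, formula (11)

def auc_score(pos_scores, neg_scores):
    """Rank-based AUC over all (positive, negative) score pairs:
    a win counts 1, a tie counts 0.5 (assumed concrete form of formula (9))."""
    wins = ties = 0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))
```

A perfect predictor scores every true link above every non-link and obtains an AUC of 1.0, while random scoring yields 0.5.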

Datasets
Table 10 lists the underlying datasets with their numbers of nodes and edges. These datasets are (Link Prediction Group (LPG), 2016): the US airport network (USAir), the network of US political blogs (Political blogs), the co-authorship network between scientists (NetScience), a network of protein interactions (Yeast), and a network built from the King James Bible together with information about occurrences (King James) (Lü et al., 2016). To evaluate the quality of GMFO for link prediction, the AUC performance metric is computed for each algorithm (GMFO, MFO, PSO, and GA) over the five datasets. The best, average, and standard deviation of the AUC results over 30 independent runs are reported in Table 11.
A greater AUC means a better prediction model; the algorithm with the highest AUC therefore has the highest prediction quality among the compared algorithms.
As shown in Table 11, GMFO has the highest average AUC score among all algorithms. Even for the King James dataset, the largest in number of edges, GMFO is still the best with 0.6966, while the other algorithms achieve average AUC scores from 0.5746 to 0.6013.

Conclusion and Future Work
In order to improve the performance of the Moth Flame Optimization algorithm, the golden section search strategy was utilized to develop a new version of MFO, called Golden Moth Flame Optimization (GMFO). GMFO focuses on enhancing the convergence rate of MFO by supporting the exploration mechanism through increased diversification of the search space (population), which in turn allows stronger intensification toward the best solution obtained so far in each iteration.
Accordingly, this paper presents, illustrates and tests the Golden Moth Flame optimization algorithm GMFO, which is considered as an improvement of the well-known MFO algorithm.
Golden section search (GSS) is a search method that locates the best maximum or minimum point of a one-dimensional function by iteratively narrowing the interval containing this point until a given accuracy is reached. The experimental results of the proposed GMFO algorithm and the original MFO algorithm on 15 benchmark functions were presented, evaluated, and discussed.
The experimental results indicate that GMFO performs better than MFO on most unimodal, multimodal, and composite functions. This indicates that basing MFO on the golden section search improves the exploration of the search space and hence the convergence, so GMFO searches the space and finds the best solution more efficiently than MFO.
In addition, to evaluate link prediction quality, the AUC performance metric was computed for GMFO and compared with the MFO, PSO, and GA algorithms over five datasets, reporting the best, average, and standard deviation of the AUC over 30 independent runs. Among all algorithms, GMFO achieved the highest average AUC score for the prediction results. Future work will focus on applying the proposed GMFO to other significant applications and problems.

Figure 3 .
How the golden section search evaluates the function f within the interval [x_min, x_max] and divides this interval into three sections. The red arrow represents the new, smaller interval. (a) represents the case f(x1) < f(x2), while (b) represents the case f(x1) > f(x2).

Figure 4 .
Figure 4. The pseudo code of the Golden Section Search function for GMFO

Table 2 .
The Uni-modal benchmark functions

Table 3 .
The multi-modal benchmark functions

Table 4 .
Description of composite benchmark functions

Table 5 .
The average (Ave) and standard deviation (STD) of best fitness value over 30 runs of uni-modal benchmark functions.

Table 6 .
The average (Ave) and standard deviation (STD) of the best fitness value over 30 runs of the multimodal benchmark functions
Furthermore, a comparison between GMFO and MFO on the composite functions (F14 and F15) is given in Table 7. The results in that table indicate that GMFO performs better than MFO on the composite function F15.

Table 7 .
The average (Ave) and standard deviation (STD) of best fitness value over 30 runs of composite benchmark functions

Table 8 .
P-values of the Wilcoxon rank-sum test over all 30 runs (p ≥ 0.05 underlined)

Table 10 .
The datasets details, number of nodes and edges

Table 11 .
AUC results for each algorithm: The best, average, and standard deviation of the AUC over 30 independent runs.