Performance Evaluation of MLP and RBF Neural Networks to Estimate the Soil Saturated Hydraulic Conductivity

Soil saturated hydraulic conductivity is one of the physical soil properties that is very important in modeling water movement and in environmental studies. This study aimed to compare the performance of Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) neural networks in estimating the soil saturated hydraulic conductivity. For this purpose, data from 27 drilled cased borehole permeameters, with three kinds of water-flow geometry through the soil, together with the soil texture properties, were used as the input parameters for the models. The effectiveness of the neural networks in estimating the soil saturated hydraulic conductivity was calculated and compared based on the mean squared error (MSE), root mean squared error (RMSE) and coefficient of determination (R²). According to these indicators, for all three types of drilled cased borehole permeameter surveyed in this study, the MLP neural networks performed better than the RBF neural networks in estimating the soil saturated hydraulic conductivity; for wells with horizontal, vertical and horizontal-vertical flow, the coefficients of determination were 0.94, 0.97 and 0.85, respectively.


Introduction
Water can move through soil as saturated flow, unsaturated flow, or vapor flow. Saturated flow takes place when the soil pores are completely filled (saturated) with water. Saturated hydraulic conductivity describes the speed of water movement through saturated soil; it is widely used and very important in evaluating and modeling the transfer of water, salts and pollutants to groundwater and in environmental studies. Information about this parameter is necessary for understanding the unsaturated zone and for developing scientific management that maintains agricultural productivity and reduces negative environmental impacts (Nosrati Karizak et al., 1391; Kurvar et al., 1983; Rinoldzo and Top, 2008). The falling-head cased borehole permeameter method is one method of measuring the soil saturated hydraulic conductivity above the water table; it was proposed by Philip and modified by Reynolds (Philip, 1993; Reynolds, 2010). To estimate the saturated hydraulic conductivity, however, the numerical solution of the equations presented by Philip and modified by Reynolds is difficult and time consuming (Asadollahzadeh, 2013).
Therefore, artificial neural networks, which are powerful machine-learning tools, can be used as an alternative to the numerical solution.
A neural network model is a conceptual model and, in fact, a simplified image of a mathematical model. The biggest problem faced by users and suppliers of mathematical models is that these models require exact and varied input data. Artificial neural networks (ANNs), which are inspired by biological neural networks, can help in solving such problems. These networks are part of the family of intelligent systems and have been developed with various structures (Chitsazan et al., 2013). Thus, it is possible to process experimental data to identify relationships among them and to estimate the saturated hydraulic conductivity (Shakeri Abdolmaleki et al., 2013; Menhaj, 2005). In this respect, the most commonly used neural networks are the Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks (Sarani et al., 2012; Shop and Liege, 1998). Reynolds (2010) developed and evaluated Philip's falling-head cased borehole analysis in the unsaturated zone. In that study, three different forms of water flow were tested: vertical, horizontal and horizontal-vertical. The results showed that the developed solutions increased the accuracy of the falling-head cased borehole permeameter. Shop and Liege (1998) studied the use of MLP networks to estimate the saturated hydraulic conductivity. Four artificial neural network models were used, with the following input parameters: the first model used the soil textural class; the second, the percentages of sand, silt and clay; the third, the previous model's parameters plus the bulk density; and the fourth, the previous model's parameters plus the moisture content percentage at 0.3 suction. Based on the results of this research, the lowest RMSE in forecasting the saturated hydraulic conductivity was reported for the fourth model. Navabian et al. (2004), in another study, estimated the saturated hydraulic conductivity from soil parameters including the bulk density, effective porosity, and the geometric mean and standard deviation of the particle diameters, using a Multi-Layer Perceptron (MLP). The results showed that MLP neural networks are able to estimate the saturated hydraulic conductivity with high accuracy.
Rezaie Arshad et al. (2012) used MLP neural networks and regression models to estimate the soil saturated hydraulic conductivity from the bulk density, total porosity and soil particle-size distribution percentages. The comparison showed that the MLP network can estimate the saturated hydraulic conductivity more accurately. ShamsEmamzadeh et al. (2014) used MLP and RBF networks to estimate the soil saturated hydraulic conductivity; comparing the two sets of results showed that, in general, the MLP neural network has better accuracy. The findings of previous researchers demonstrate that the saturated hydraulic conductivity has been estimated with Multi-Layer Perceptrons (MLP) using easily accessible soil properties such as pore-size and particle-size (grain-size) distributions, soil texture, pH and EC. The Radial Basis Function neural network, however, has not been used to estimate this parameter.
Therefore, the first aim of the present study was to compare the performance of Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) neural networks in estimating the soil saturated hydraulic conductivity; the second aim was to explore the possibility of replacing the numerical solution of the equations of the falling-head cased borehole permeameter method with these networks. For this purpose, data from 27 drilled cased borehole permeameters, with three kinds of water-flow geometry through the soil, together with the soil texture properties, were used as input parameters for the models. The effectiveness of the neural networks in estimating the soil saturated hydraulic conductivity was calculated and compared based on the mean squared error (MSE), root mean squared error (RMSE) and coefficient of determination (R²).
This study used data from the cased boreholes of previous research, a case study at the farm of the University of Tehran, Aboureyhan campus. The soil characteristics served as the network inputs, and the soil saturated hydraulic conductivity values estimated from the numerical solution of the Reynolds equation served as the expected values of this parameter in the analysis (Asadollahzadeh, 2013). The ANN models were simulated in MATLAB R2010b.

Study Area
In this study, field experiments were carried out at the Aboureyhan campus research farm, University of Tehran, located in Pakdasht, 25 km southeast of Tehran, Iran. The soil texture is loamy in the upper 90 cm. The characteristics and recorded data of the 27 cased borehole permeameters, with diameters of 4, 6 and 8 cm and various depths, dug into the ground in a regular network with 1 m spacing, were used as the input parameters of the models.
The saturated hydraulic conductivity of water in soil (or the intrinsic permeability of the soil) can be measured by both field and laboratory experiments. Either way, the experimental measurement of K (or k) consists of determining the numerical value of the coefficient in Darcy's equation.
In this regard, according to the parameters required by the solution developed by Reynolds, the physical properties of the cased boreholes, including the radius and height of the borehole (cm), the water level in the wells, the soil infiltration, the percentages of the constituent particles of the soil texture, and the initial and saturated soil moisture, were considered as the input parameters for the models.
To case the borehole walls, the permeameter device consists of a circular plastic tube, following the modified design of Reynolds (2010).

Artificial Neural Networks
Artificial neural networks are simplified models of the way biological neural networks process knowledge. The number of layers, the number of neurons in each hidden layer and the type of transfer function in each layer are normally determined by the network designer through trial and error (Menhaj, 1384; Abdolmaleki, 2013).
In this study, the supervised learning method was used to train the artificial neural networks on the input data.
The Multi-Layer Perceptron (MLP) networks were trained using the backpropagation (BP) algorithm (Rumelhart et al., 1986). Backpropagation is the most commonly used method for training multi-layer feedforward networks, and there are various methods for training with it, such as conjugate gradient, quasi-Newton and Levenberg-Marquardt.
The basic backpropagation algorithm adjusts the weights in the steepest descent direction (the negative of the gradient). This is the direction in which the performance function decreases most rapidly.
Although the function decreases most rapidly along the negative of the gradient, this does not necessarily produce the fastest convergence.
In the conjugate gradient algorithms, a search is performed along conjugate directions, which produces generally faster convergence than steepest descent directions.
In most conjugate gradient algorithms, the step size is adjusted in each iteration. For faster optimization of minimization problems, Newton-type methods can also be used; the quasi-Newton method is one such approach.
This method uses second-order derivatives of the error with respect to the weights and biases, and it often converges faster than the conjugate gradient method. The Levenberg-Marquardt (LM) algorithm, like Newton's method, is also designed around the use of second-order derivatives.
An important characteristic of this algorithm is its very fast convergence; it is among the fastest training algorithms known.
Another advantage of this approach is that the learning rate does not need to be set initially; the algorithm can change the learning rate adaptively during training (Zoonemat Kermani and Bai, 1392; Demos and Bill, 2002).
In the present work, the Levenberg-Marquardt algorithm was used to train the MLP neural networks.
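As an illustration of this training scheme, the sketch below fits a small one-hidden-layer network by Levenberg-Marquardt, applying SciPy's `least_squares` routine to the prediction residuals. The data, network size and initialization are hypothetical stand-ins, not the study's boreholes, and this Python sketch is not the study's MATLAB `trainlm` implementation:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical synthetic data standing in for the normalized borehole inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (40, 3))   # 40 samples, 3 inputs in [-1, 1]
y = np.sin(X.sum(axis=1))             # synthetic target

n_in, n_hid = X.shape[1], 5           # one hidden layer of 5 tansig neurons

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = p[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid]; i += n_hid
    b2 = p[i]
    return W1, b1, W2, b2

def forward(p, X):
    """Tangent-sigmoid hidden layer followed by a linear output layer."""
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1.T + b1)
    return h @ W2 + b2

def residuals(p):
    """Per-sample prediction errors, minimized in least-squares sense by LM."""
    return forward(p, X) - y

n_par = n_hid * n_in + 2 * n_hid + 1  # 26 parameters (fewer than 40 residuals)
p0 = rng.normal(scale=0.5, size=n_par)
fit = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
```

Note that SciPy's `method='lm'` requires at least as many residuals as free parameters, which the 40-sample, 26-parameter setup above satisfies.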
Each unit in a neural network transforms its net input through a scalar-to-scalar function called the activation function (also the threshold or transfer function); the resulting value is called the unit's activation.
In this research, three of the most widely used transfer functions were used in the neuron structure of the MLP networks: the linear transfer function, Eq. (1); the sigmoid function, Eq. (2); and the tangent-sigmoid function, Eq. (3) (Kia, 1393):

f(x) = x (1)

f(x) = 1 / (1 + e^(−x)) (2)

f(x) = (e^x − e^(−x)) / (e^x + e^(−x)) (3)
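For reference, these three transfer functions can be written as a short sketch. This is a Python illustration only; the study itself used MATLAB, where the conventional names of these functions are purelin, logsig and tansig:

```python
import numpy as np

def purelin(x):
    """Linear transfer function, Eq. (1): output equals input."""
    return x

def logsig(x):
    """Log-sigmoid transfer function, Eq. (2): maps input to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    """Tangent-sigmoid transfer function, Eq. (3): maps input to (-1, 1)."""
    return np.tanh(x)
```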
in which x is the net input to the neuron. The major concern of the designer of an ANN structure is determining the appropriate number of hidden layers and the number of neurons in each layer.
To achieve the best network structure in the MLP architecture, a trial-and-error procedure was used during training to determine the optimal transfer functions and the number of neurons in the hidden layers. Eberhart and Dobbins (1990) suggested starting with a number of hidden nodes equal to half the number of input nodes, while Tingsanchali and Gautam (2000) found that starting with a number of hidden nodes equal to or slightly greater than the number of input nodes is adequate (Wen Wang, 2000).
In this regard, networks with one and two hidden layers were considered, with the number of neurons varied between 2 and 50.
In general, each neuron in a layer is linked to all neurons in the adjacent layers by directional connections, and data are transmitted between neurons through these connections. Each connection has its own specific weight, by which the information transmitted from one neuron to another is multiplied.
An artificial neural network must first be trained before it can perform a task. Generally, an ANN consists of nodes in different layers: an input layer, intermediate hidden layer(s) and an output layer. The connections between nodes of adjacent layers have "weights" associated with them, and the goal of learning is to assign correct values to these weights. In other words, training a network is the process of determining the connection weights, which are in fact the key elements of a neural network: given an input vector, these weights determine what the output vector is.
The process of training and determining the network weights and biases was performed on a randomly selected 70% of the measured data; the model was then validated with about 10% of the records. Finally, about 20% of the samples were used to test the trained model and evaluate its ability to generalize to unseen data (Menhaj, 1384; Soltani et al., 2012).
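The 70/10/20 random partition described above can be sketched as follows. This is an illustrative Python sketch (the study's own splitting was done in MATLAB); the 27 samples correspond to the 27 boreholes, and the seed is an arbitrary assumption:

```python
import numpy as np

def split_indices(n, frac_train=0.70, frac_val=0.10, seed=0):
    """Randomly partition n sample indices into train/validation/test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(round(frac_train * n))
    n_val = int(round(frac_val * n))
    # Remaining samples (about 20%) are held out for testing.
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(27)   # 27 boreholes -> 19 / 3 / 5 samples
```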

Radial Basis Function Network (RBFN)
Radial Basis Function Networks (RBFNs) correspond to a particular class of function approximators that can be trained using a set of samples. RBFNs have received a growing amount of attention since their initial proposal (Broomhead and Lowe, 1988; Moody and Darken, 1988), and a great deal of theoretical and empirical results is now available. RBFNs are widely used to estimate non-parametric multi-dimensional functions from a limited set of training data.
RBF networks have three layers: an input layer, a hidden layer and an output layer. One neuron in the input layer corresponds to each predictor variable (for categorical variables, n − 1 neurons are used, where n is the number of categories). The hidden layer has a variable number of neurons, each consisting of a radial basis function centered on a point with the same dimensions as the predictor variables. The output layer computes a weighted sum of the hidden-layer outputs to form the network outputs.
These networks are largely inspired by statistical classification techniques; RBFNs and such statistical techniques have been used in a large number of areas.
These networks are a type of feedforward neural networks and structurally similar to MLP networks.
MLPs and RBFNs are the most widely used neural network models in practical applications. A comparison between them shows that the MLP belongs to a group of "classical" neural networks (whose weighted sums are loosely inspired by biology), whereas the RBFN is based on an analogy to regression theory (Broomhead and Lowe, 1988).
The main differences between RBFNs and MLP networks are the number of hidden layers, the type of input passed to each neuron, and the transfer function.
RBFNs have only one middle (hidden) layer, and the activation functions of their neurons are radial functions with a specific center and width. Moreover, in contrast to the error-backpropagation network, in which the weighted sum of the inputs received by a hidden-layer neuron is passed to its activation function, here the distance between each input pattern and the center vector of each middle-layer neuron is used as the input of the radial activation function (Dehghani et al., 1389; Vaziri, 1385). In this method, the activation function is a radial function in the middle layer and a linear function in the output layer.
In this network, input signals are directly entered into the hidden layer neurons.
Unlike MLP networks, whose activation functions are global, the activation functions in these networks are local.
The number of hidden layer neurons is obtained from trial and error method.
The output layer contains only a linear combiner whose inputs are the outputs of the hidden-layer neurons.
The number of neurons in the output layer are equal to the number of outputs.
Owing to its many applications, this network has become one of the most famous neural networks and the most important competitor of the MLP neural network (Bayat and Najafi, 1392; Vaziri, 1385).
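The distinctive forward pass described above, with distance-based radial hidden units followed by a linear output layer, can be sketched as follows. This is an illustrative Python sketch assuming a Gaussian basis function; the centers, width and weights shown are hypothetical, not values from the study:

```python
import numpy as np

def rbf_forward(X, centers, width, weights, bias):
    """One forward pass through a Gaussian RBF network.

    Hidden activations depend on the Euclidean distance between each input
    pattern and each center (not on a weighted sum of the inputs).
    """
    # Pairwise distances, shape (n_samples, n_centers).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    h = np.exp(-(d / width) ** 2)   # local, radial (Gaussian) activation
    return h @ weights + bias       # linear output layer

# Hypothetical centers and output weights for a 2-input, 2-center network.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
y_hat = rbf_forward(np.array([[0.0, 0.0]]), centers, 1.0,
                    np.array([2.0, 1.0]), 0.5)
```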

Data Preprocessing
In the preprocessing stage, before the input data were used, all data were mapped into the interval [−1, 1] by the linear normalization method, in order to integrate the impact of the inputs on the model output and improve model performance:

Xn = 2 (X − Xmin) / (Xmax − Xmin) − 1 (4)

in which X is the observational datum, Xmax and Xmin are the maximum and minimum observational data for the desired parameter, and Xn is the normalized datum. It should be mentioned that in this research the units used for all length and time parameters are meters and seconds, respectively.
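The linear normalization of Eq. (4) can be sketched as follows (a Python illustration; the study's own preprocessing was done in MATLAB):

```python
import numpy as np

def normalize(x):
    """Linearly map observational data into the interval [-1, 1], Eq. (4)."""
    xmin, xmax = x.min(), x.max()
    return 2.0 * (x - xmin) / (xmax - xmin) - 1.0
```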

Accuracy Evaluation Criteria
In this study, the coefficient of determination (R²), mean squared error (MSE) and root mean squared error (RMSE) were used to assess the accuracy and efficiency of the models in estimating the soil saturated hydraulic conductivity.

R² = [ Σᵢ (Oᵢ − Ō)(Pᵢ − P̄) ]² / [ Σᵢ (Oᵢ − Ō)² · Σᵢ (Pᵢ − P̄)² ] (5)

in which Oᵢ is the expected value of the soil saturated hydraulic conductivity for the i-th sample, Pᵢ is the value of the saturated hydraulic conductivity predicted by the neural network for the i-th sample, Ō is the average of the expected values of the soil saturated hydraulic conductivity, and P̄ is the average of the values predicted by the artificial neural network.
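These evaluation criteria can be computed as in the following sketch. This is a Python illustration; R² is taken here as the squared correlation between the expected and predicted values, which is one common reading of Eq. (5):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between expected and predicted values."""
    return np.mean((y_true - y_pred) ** 2)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(mse(y_true, y_pred))

def r2(y_true, y_pred):
    """Coefficient of determination as the squared Pearson correlation."""
    o = y_true - y_true.mean()
    p = y_pred - y_pred.mean()
    num = np.sum(o * p) ** 2
    den = np.sum(o ** 2) * np.sum(p ** 2)
    return num / den
```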

Results and Discussion
For each category of data, that is, for the cased borehole permeameters with horizontal, vertical and horizontal-vertical flow, MLP and RBF neural network structures were formed.
For the MLP networks, the number of hidden layers, the number of neurons per layer and the transfer functions were selected by trial and error.
Among the runs performed, the best results for each of the three flow types were recorded and are presented in Table 1.
In these networks, for the data of all three flow types, the error function was the MSE (mean squared error) and the training function was trainlm (the Levenberg-Marquardt function).

Conclusion
This study was conducted to investigate and evaluate, by means of error indices, the performance of MLP and RBF networks on the data of each of the three cased borehole permeameters with horizontal, vertical and horizontal-vertical water flow.
The results, based on the determination coefficients and error indices computed for the test samples, show that both types of network acted with reasonable accuracy to some extent and can serve as a good alternative to the numerical solution developed by Reynolds. In general, however, for the test samples of each of the three cased borehole permeameters, the MLP network performed better than the RBF network in estimating the soil saturated hydraulic conductivity.
The MLP networks were able to estimate the saturated hydraulic conductivity with lower MSE and RMSE values and higher determination coefficients.
Based on the resulting coefficients of determination (R²), for all three types of drilled cased borehole permeameter surveyed in this study, the MLP neural networks performed better than the RBF neural networks in estimating the soil saturated hydraulic conductivity; for wells with horizontal, vertical and horizontal-vertical flow, the coefficients of determination were 0.94, 0.97 and 0.85, respectively.
Among the results it is observed that, for the MLP networks estimating the saturated hydraulic conductivity of the three types of cased borehole, the best performance was for the wells with vertical flow, with a determination coefficient of about 0.97, followed by the wells with horizontal flow and combined horizontal-vertical flow, with coefficients of determination (R²) of 0.94 and 0.85, respectively.
In addition, in this study, for the same number of input parameters, the MLP network structures used fewer neurons than the RBF networks while achieving higher accuracy. The need for fewer neurons (processing units) in the network can be regarded as another indication of the superiority and power of MLP networks in estimating the soil saturated hydraulic conductivity from the desired inputs.
