A Spectral Gradient Projection Method for Sparse Signal Reconstruction in Compressive Sensing

In this paper, a new spectral gradient direction is proposed to solve the ℓ₁-regularized convex minimization problem. The spectral parameter of the proposed method is computed as a convex combination of two existing spectral parameters from conjugate gradient methods. Moreover, the spectral gradient method is applied to the resulting problem at each iteration without requiring the Jacobian matrix. Furthermore, the proposed method is shown to converge globally under suitable assumptions. Numerically, the proposed method is efficient and robust compared to existing methods, both in the quality of the reconstructed sparse signal and in computational cost.


Introduction
Consider the following unconstrained minimization problem for sparse signal recovery:

min_{x ∈ ℝⁿ} (1/2)‖Ax − b‖₂² + τ‖x‖₁,   (1)

where A ∈ ℝ^{m×n} (m << n), b ∈ ℝ^m, ‖⋅‖₁ is the ℓ₁-norm of a vector x ∈ ℝⁿ, usually called the regularizer, and τ > 0 is a regularization parameter that can be interpreted as a trade-off between sparsity and residual error.
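As a concrete reading of model (1), the short sketch below evaluates the ℓ₁-regularized least-squares objective. It is written in Python rather than the paper's Matlab, and the function name and problem sizes are our own illustrative choices, not the paper's.

```python
import numpy as np

def l1_ls_objective(A, b, x, tau):
    """Objective of model (1): 0.5 * ||A x - b||_2^2 + tau * ||x||_1."""
    residual = A @ x - b
    return 0.5 * residual @ residual + tau * np.abs(x).sum()

rng = np.random.default_rng(0)
m, n, tau = 8, 32, 0.1            # illustrative sizes with m << n
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# At x = 0 the data-fit term is 0.5 * ||b||^2 and the regularizer vanishes.
assert np.isclose(l1_ls_objective(A, b, np.zeros(n), tau), 0.5 * b @ b)
```

For τ = 0 the model reduces to ordinary least squares; larger τ trades residual error for sparsity of x.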
Several solvers have been proposed for model (1). Due to their simplicity and efficiency, the iterative shrinkage thresholding (IST) algorithm and the fast iterative shrinkage thresholding algorithm (FISTA) are among the methods used (Beck & Teboulle, 2009; Khoramian, 2012). Additionally, a fixed-point continuation method was introduced in (Hale et al., 2007), and a Barzilai-Borwein-type method accelerated by a nonmonotone line search was implemented in (Huang & Wan, 2017). Gradient descent based methods are another alternative for solving model (1). Figueiredo et al. (2007) initiated a gradient based method combined with projection to solve (1). Motivated by Figueiredo's method, Xiao and Zhu then suggested alternative ways of solving model (1) using the spectral gradient method and the projected conjugate gradient method (Xiao et al., 2011; Xiao & Zhu, 2013). Unlike IST and FISTA, model (1) was first reformulated into a monotone system of equations. This reformulation procedure can be found in many works (Ibrahim, Kumam, Abubakar, Jirakitpuwapat, et al., 2020). Afterwards, an algorithm to solve the problem is constructed (Abubakar, Rilwan, et al., 2020; Abubakar & Kumam, 2018; Ibrahim et al., 2019). It should be noted that, with this reformulation, model (1) becomes equivalent to the following nonlinear convex constrained equation:

F(x) = 0,  x ∈ Ω,   (2)

where F : ℝⁿ → ℝⁿ is a continuous mapping and Ω ⊆ ℝⁿ is a convex set. Thus, solving (2) is equivalent to solving (1).
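For completeness, one common route for this reformulation (following Figueiredo et al., 2007 and Xiao & Zhu, 2013) splits x = u − v with u, v ≥ 0, so that (1) becomes a bound-constrained quadratic program in z = [u; v] with H = [AᵀA, −AᵀA; −AᵀA, AᵀA] and c = τ·1 + [−Aᵀb; Aᵀb], whose optimality conditions can be expressed through the residual map F(z) = min(z, Hz + c). The Python sketch below (variable names ours) builds this mapping and numerically checks a Lipschitz bound that follows from the componentwise minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, tau = 6, 16, 0.05
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# H = [A'A, -A'A; -A'A, A'A],  c = tau*1 + [-A'b; A'b]  (z = [u; v], x = u - v)
AtA, Atb = A.T @ A, A.T @ b
H = np.block([[AtA, -AtA], [-AtA, AtA]])
c = tau * np.ones(2 * n) + np.concatenate([-Atb, Atb])

def F(z):
    """Componentwise residual map of the bound-constrained QP: min(z, Hz + c)."""
    return np.minimum(z, H @ z + c)

# Since |min(a, s) - min(a', s')| <= max(|a - a'|, |s - s'|) componentwise,
# F is Lipschitz with constant at most sqrt(1 + ||H||_2^2).
L = np.sqrt(1.0 + np.linalg.norm(H, 2) ** 2)
z1, z2 = rng.standard_normal(2 * n), rng.standard_normal(2 * n)
assert np.linalg.norm(F(z1) - F(z2)) <= L * np.linalg.norm(z1 - z2)
```

Once a zero z* of the reformulated system is found, the recovered signal is simply x* = u* − v* = z*[:n] − z*[n:].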
In this paper, we present an iterative approach for solving the ℓ₁-norm regularization problem in compressive sensing. Motivated by the spectral parameters in the work of (Yuan et al., 2020) and (Amini et al., 2019), we propose a spectral gradient approach for solving the ℓ₁-regularized convex minimization problem (1) using the hyperplane projection technique. The spectral parameter is determined as a convex combination of the spectral parameters proposed in (Yuan et al., 2020) and (Amini et al., 2019), respectively. Under suitable conditions, the proposed method converges globally. Numerical results show that the proposed approach is effective and reliable compared to existing methods, reconstructing sparse signals at low computational cost.
This paper is structured as follows: Section 2 introduces an algorithm to solve model (2), which is equivalent to solving (1). In Section 3, we establish the global convergence of the proposed algorithm. In Section 4, we illustrate the good practical behaviour of our algorithm in reconstructing sparse signals. Finally, the last section gives the conclusion.

Algorithm
This section begins by defining the projection map together with some appropriate assumptions. Finally, a description of the proposed algorithm is given with some remarks.
Suppose Ω is a nonempty, closed and convex subset of ℝⁿ. Then for any x ∈ ℝⁿ, its projection onto Ω, denoted by P_Ω(x), is defined by

P_Ω(x) = argmin{‖x − y‖ : y ∈ Ω}.

The projection map is nonexpansive, that is,

‖P_Ω(x) − P_Ω(y)‖ ≤ ‖x − y‖ for all x, y ∈ ℝⁿ.   (3)

Throughout, we make the following assumptions:

(H₁) The solution set of (2), denoted by Ω*, is nonempty.
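The nonexpansiveness property of the projection is easy to check numerically for a concrete Ω. The Python sketch below (function name ours) uses the nonnegative orthant, the constraint set that arises in the compressive sensing reformulation, where the projection is just a componentwise maximum with zero.

```python
import numpy as np

def project_nonneg(x):
    """P_Omega for Omega = {x : x >= 0}: componentwise max with zero."""
    return np.maximum(x, 0.0)

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    # Nonexpansiveness: ||P(x) - P(y)|| <= ||x - y||.
    assert np.linalg.norm(project_nonneg(x) - project_nonneg(y)) \
        <= np.linalg.norm(x - y) + 1e-12
```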
(H₂) F is monotone, that is,

(F(x) − F(y))ᵀ(x − y) ≥ 0 for all x, y ∈ ℝⁿ.

(H₃) F is Lipschitz continuous, that is, there exists L > 0 such that

‖F(x) − F(y)‖ ≤ L‖x − y‖ for all x, y ∈ ℝⁿ.

Motivated by the work of (Yuan et al., 2020) and (Amini et al., 2019), we propose the search direction

d_k = −θ_k F_k,

where F_k = F(x_k) and θ_k is a convex combination of the spectral parameters θ_k* and θ_k** given in (Yuan et al., 2020) and (Amini et al., 2019), respectively. The step by step implementation of our algorithm is illustrated below.
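The exact formulas for θ_k* and θ_k** did not survive in this version of the text, so the sketch below substitutes a safeguarded Barzilai-Borwein value as a stand-in spectral parameter; the rest follows the generic derivative-free projection framework shared by methods of this family (a line search along d_k = −θ_k F(x_k), then a hyperplane projection step). It is a Python illustration on a simple monotone mapping, not the authors' exact algorithm.

```python
import numpy as np

def spectral_projection_solve(F, project, x0, sigma=1e-4, beta=1.0, rho=0.5,
                              tol=1e-8, max_iter=500):
    """Sketch of a derivative-free spectral gradient projection method for
    F(x) = 0, x in Omega, with F monotone and `project` = P_Omega."""
    x, Fx, theta = x0.astype(float), F(x0), 1.0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        d = -theta * Fx                        # spectral gradient direction
        alpha = beta                           # backtracking line search:
        while True:                            # find z = x + alpha*d with
            z = x + alpha * d                  # -F(z)'d >= sigma*alpha*||d||^2
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * (d @ d) or alpha < 1e-12:
                break
            alpha *= rho
        if np.linalg.norm(Fz) <= tol:
            x, Fx = z, Fz
            break
        # hyperplane projection step (Solodov-Svaiter type)
        xi = Fz @ (x - z) / (Fz @ Fz)
        x_new = project(x - xi * Fz)
        Fx_new = F(x_new)
        s, y = x_new - x, Fx_new - Fx
        theta = min(max((s @ s) / (s @ y + 1e-16), 1e-5), 1e5)  # safeguarded BB stand-in
        x, Fx = x_new, Fx_new
    return x

# Demo on a strongly monotone mapping with Omega = nonnegative orthant.
F = lambda x: 2.0 * x + np.sin(x)              # unique zero at x = 0
P = lambda x: np.maximum(x, 0.0)
x_star = spectral_projection_solve(F, P, np.ones(5))
assert np.linalg.norm(F(x_star)) < 1e-6
```

The hyperplane projection step is what gives global convergence for merely monotone F: once the line search finds z with F(z)ᵀ(x − z) > 0, the solution set lies on one side of the hyperplane {w : F(z)ᵀ(w − z) = 0}, and projecting onto it cannot move the iterate away from any solution.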

Global Convergence
In this section, we use the following lemmas to prove Theorem 3.5.

Lemma
Suppose that assumptions (H₁)–(H₃) hold and the sequence {x_k} is generated by Algorithm 2.1. Then {d_k} satisfies the sufficient descent condition and is bounded.

Proof. Since d_k = −θ_k F_k, we have F_kᵀd_k = −θ_k‖F_k‖² and ‖d_k‖ = θ_k‖F_k‖, so both claims follow once θ_k is bounded below and above by positive constants. For k = 0, this holds by the initialization of the algorithm. For k ≥ 1, the required bounds on θ_k follow from the definition of θ_k as a convex combination of θ_k* and θ_k**, together with Remark 2.2 and Remark 2.3.

mas.ccsenet.org  Modern Applied Science  Vol. 14, No. 5; 2020

Proof. Let ε > 0 be such that (8) does not hold for every non-negative integer k, that is, ‖F(x_k)‖ ≥ ε for all k ≥ 0. Since F is continuous, allowing k → ∞ in the above inequality yields the corresponding limiting relation. Likewise, by (13), we obtain the opposite relation. This contradicts (14). ∎

Numerical Experiment
This section illustrates the performance of Algorithm 2.1 on the reconstruction of sparse signals in compressive sensing. To demonstrate the efficiency of the proposed method in signal reconstruction, we compared Algorithm 2.1 with the following methods from the literature: SGCS (Xiao et al., 2011), CGD (Xiao & Zhu, 2013), and PSGM. The four algorithms were coded in Matlab R2019b and run on a PC with a 2.40 GHz CPU and 8.00 GB of RAM.
In this experiment, our aim is to recover a sparse signal x̄ of length n from a Gaussian-noise-contaminated sampling measurement b, where the number of samples m is typically much smaller than the length of the original signal. The quality of the restored signal is measured by the mean squared error (MSE),

MSE = (1/n)‖x̄ − x*‖²,

where x̄ is the original signal and x* is the restored signal. The signal sizes n and m are set as powers of two, and the original signal contains randomly placed nonzero elements. During the experiment, a random matrix A is generated using the Matlab command rand(n,m). In addition, the observed data is computed by b = Ax̄ + ω, where ω is zero-mean Gaussian noise.

Figure 1 shows the original signal x̄, the observed data b, and the signal x* reconstructed by the four algorithms. From Figure 1, it is clear that all tested methods were able to reconstruct the signal. However, Algo.1 proves more efficient in reconstructing the sparse signal, as reflected by a smaller MSE, fewer iterations, and less CPU time. Figure 2 illustrates the trend of the MSE and objective function values against the number of iterations and the corresponding CPU time.

To further highlight the efficiency of Algo.1, we repeated the experiment ten more times; in every repetition, Algo.1 outperformed the compared methods. Table 1 below reports the numerical results for the ten experiments. On average, Algo.1 requires 76.9 iterations to recover the sparse signal, while SGCS, CGD and PSGM require averages of 128.7, 122 and 107.9 iterations, respectively. In terms of CPU time, Algo.1 was the fastest, reconstructing the sparse signal in an average of 2.148 seconds, while SGCS, CGD and PSGM required averages of 3.639, 3.505 and 3.092 seconds, respectively. Finally, with respect to reconstruction quality, Algo.1 recorded a smaller mean squared error than SGCS, CGD and PSGM.
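The experimental setup above can be sketched as follows. This is a Python stand-in for the Matlab experiment; the sizes n, m and the sparsity level k below are small illustrative values of our own, not the paper's (whose exact powers of two were lost in this version of the text).

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error between the original and restored signals."""
    return np.mean((x_true - x_rec) ** 2)

rng = np.random.default_rng(2)
n, m, k = 256, 64, 8                      # signal length, samples, nonzeros (illustrative)

x_true = np.zeros(n)                      # k-sparse original signal
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix (Gaussian stand-in)
omega = 1e-3 * rng.standard_normal(m)          # zero-mean Gaussian noise
b = A @ x_true + omega                         # observed data b = A x + omega

assert mse(x_true, x_true) == 0.0
```

Any of the compared solvers then takes (A, b, τ) as input and returns a restored signal x*, whose quality is scored by mse(x_true, x_star).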

Conclusion
This paper presents a spectral gradient projection algorithm for solving ℓ₁-norm regularized problems arising in sparse signal reconstruction in compressive sensing. The method combines a line search with a spectral parameter computed as a convex combination of two different spectral parameters from conjugate gradient methods. Furthermore, we have shown that the proposed method converges globally. To highlight our contribution, we have presented numerical experiments on sparse signal recovery. These experiments clearly illustrate the effectiveness of our approach in reconstructing sparse signals in compressive sensing.