SIAM J. IMAGING SCIENCES, Vol. 3, No. 4, pp. 765–790
© 2010 Society for Industrial and Applied Mathematics

Smoothing Nonlinear Conjugate Gradient Method for Image Restoration Using Nonsmooth Nonconvex Minimization

Xiaojun Chen and Weijun Zhou

Abstract. Image restoration problems are often converted into large-scale, nonsmooth, and nonconvex optimization problems. Most existing minimization methods are not efficient for solving such problems. It is well known that nonlinear conjugate gradient methods are preferred to solve large-scale smooth optimization problems due to their simplicity, low storage, practical computation efficiency, and nice convergence properties. In this paper, we propose a smoothing nonlinear conjugate gradient method where an intelligent scheme is used to update the smoothing parameter at each iteration and guarantees that any accumulation point of a sequence generated by this method is a Clarke stationary point of the nonsmooth and nonconvex optimization problem. Moreover, we present a class of smoothing functions and show their approximation properties. This method is easy to implement without adding any new variables. Three image restoration problems with different pixels and different regularization terms are used in numerical tests. Experimental results and comparison with the continuation method in [M. Nikolova et al., SIAM J. Imaging Sci., 1 (2008), pp. 2–25] show the efficiency of the proposed method.

Key words. image restoration, regularization, nonsmooth and nonconvex optimization, nonlinear conjugate gradient method, smooth approximation, potential function

AMS subject classifications. 65F22, 65F10, 65K05

DOI. 10.1137/080740167

1. Introduction. The image restoration problem is that of reconstructing an image of an unknown scene from an observed image. This problem plays an important role in medical sciences, biological engineering, and other areas of science and engineering [1, 4, 31]. The most common image degradation model can be represented by the following system:

(1.1) $b = Ax + \eta$,

where $\eta \in R^m$ represents the noise, $A$ is an $m \times n$ blurring matrix, and $x \in R^n$ and $b \in R^m$ are the underlying and observed images, respectively. In many cases, $A$ is a matrix of block Toeplitz with Toeplitz blocks (BTTB) when zero boundary conditions are applied and block Toeplitz-plus-Hankel with Toeplitz-plus-Hankel blocks (BTHTHB) when Neumann boundary conditions are used [22].

Received by the editors November 7, 2008; accepted for publication (in revised form) July 26, 2010; published electronically October 14, 2010.
http://www.siam.org/journals/siims/3-4/74016.html

Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China (maxjchen@polyu.edu.hk). This author's work was supported in part by the Hong Kong Research Grants Council (PolyU5003/08P).

Department of Mathematics, Changsha University of Science and Technology, Changsha 410004, China. This author's work was supported by the Hong Kong Polytechnic University Postdoctoral Fellowship Scheme, the NSF of China (10901026, 10771057, and 10701018), and a project (09B001) of the Scientific Research Fund of Hunan Provincial Education Department.


Technically we are not solving (1.1) since $\eta$ is unknown. We are instead solving $\min_{x\in R^n} \|b - Ax\|$. Solving this problem alone will not get a satisfactory solution since the system is very sensitive to noise and lack of information. The smooth least squares problem
$$\min_{x\in R^n} \|b - Ax\|_2^2 + \beta\|Dx\|_2^2$$
is often used, where $D$ is an operator and $\beta$ is the regularization parameter that controls the trade-off between the data-fitting term and the regularization term. For the regularization term, there has been a growing interest in using the $l_1$ norm [6, 15, 23]. The $l_1$ solution tends to have better statistical properties than the $l_2$ solution. In [13], Fu et al. considered the mixed $l_2$–$l_1$ norm form

(1.2) $\min_{x\in R^n} \|b - Ax\|_2^2 + \beta\|Dx\|_1$

and the $l_1$–$l_1$ norm form

(1.3) $\min_{x\in R^n} \|b - Ax\|_1 + \beta\|Dx\|_1$.

These two minimization problems are convex but nonsmooth. In [24], Nikolova et al. considered the more general form

(1.4) $\min_{x\in R^n} \Theta(b - Ax) + \beta\Phi(x)$,

where $\Theta$ forces closeness to data and $\Phi$ embodies the priors. The mixed $l_2$–$l_1$ norm form (1.2) and the $l_1$–$l_1$ norm form (1.3) are special forms of (1.4). Minimization methods for these forms were proposed in [13]. However, (1.4) can be nonconvex and nonsmooth. A class of regularization functions is of the form
$$\Phi(x) = \sum_{i=1}^r \varphi(d_i^T x),$$
where $\varphi$ is called a potential function and $\{d_1,\dots,d_r\}$ is a set of vectors of $R^n$. The role of $\Phi$ is to push the solution to exhibit some a priori expected features, such as the presence of edges, smooth regions, and textures. As proven in [23], although convex potential functions such as $\varphi(t) = |t|$ are often used for the regularization term, nonconvex regularization functions such as $\varphi(t) = \frac{\alpha|t|}{1+\alpha|t|}$ with $\alpha > 0$ provide better possibilities for restoring images with neat edges. For this reason, many image restoration problems are often converted into nonsmooth, nonconvex optimization problems. Moreover, the optimization problems are large-scale because the discretized scenes usually have a large number $n = l \times l$ of pixels. Several efficient algorithms for image restoration problems which use linear or quadratic programming reformulation and interior point methods are proposed in [13, 24]. Fu et al. [13] considered nonsmooth and convex problems, and Nikolova et al. [24] considered a continuation method for nonsmooth


and nonconvex problems for arbitrary $A$ in (1.4). The purpose of the continuation method proposed in [24] is to approximate the minimizer of the objective function. However, there is no guarantee for the convergence of the continuation method. A drawback of these methods is the use of $4n + 2m$ additional variables, which makes these methods impractical for solving large-scale problems. The aim of this paper is to present an efficient optimization method for large-scale, nonsmooth, nonconvex image restoration problems. This method ensures that from any starting point in $R^n$, any accumulation point of the sequence generated by the method is a Clarke stationary point. Moreover, this method does not increase the dimension, which is important for large-scale problems. For convenience, in this paper we first consider the following nonsmooth and nonconvex optimization problem in an abstract form:

(1.5) $\min_{x\in R^n} f(x)$.

Although large-scale nonsmooth and nonconvex optimization problems occur frequently in practice [3, 20, 32], efficient existing methods are rare. Burke, Lewis, and Overton [3] introduced a robust gradient sampling algorithm for solving nonsmooth, nonconvex, unconstrained minimization problems. Kiwiel [19] slightly revised this algorithm and showed that any accumulation point generated by the algorithm is a Clarke stationary point with probability one. Encouraging numerical results for some small problems are reported in [3, 19].

It is well known that nonlinear conjugate gradient methods such as the Polak–Ribière–Polyak (PRP) method [26, 27] are very efficient for large-scale smooth optimization problems due to their simplicity and low storage. Moreover, nonlinear conjugate gradient methods, such as the PRP+ method [17], and conjugate gradient methods with suitable line search [16, 35, 36] are proposed for nonconvex minimization problems which ensure that any accumulation point generated by the algorithm is a stationary point. However, nonlinear conjugate gradient methods for solving nonsmooth optimization have not been studied. Moreover, we notice that most models of image restoration have some symmetric character in the Hessian matrix at points where the objective function is differentiable. The methods in [16, 35, 36] do not have the symmetric feature. To develop an efficient optimization method for nonsmooth and nonconvex minimization problems arising from image restoration, we first present a globally convergent nonlinear conjugate gradient method for smooth nonconvex minimization problems where the search direction can be presented by the gradient with a symmetric and uniformly positive definite matrix. Next, we extend the method to solve nonsmooth and nonconvex optimization by adopting smoothing functions.

This paper is organized as follows. In the next section, we present a globally convergent smoothing method for nonsmooth and nonconvex minimization problems. To present our approach clearly, we first give a smooth version of this method and prove the convergence for the case where $f$ is differentiable. In section 3, we present a class of smoothing functions and show their nice approximation properties for image restoration. Moreover, we show that all assumptions for the convergence of the smoothing conjugate gradient (SCG) method hold and any accumulation point generated by the method is a Clarke stationary point when the method is applied to several common models of image restoration. In section 4, we present numerical results for three images with $n \times n$ ($n = 128\times 128$ to $256\times 256$) blurring matrices to


show the effectiveness and the efficiency of the proposed method. Moreover, comparison with the continuation method in [24] is reported. Numerical results show that the continuation method in [24] can find smaller function values, and our method can obtain better peak signal-to-noise ratio (psnr) values with less CPU time.

Throughout the paper, $\|\cdot\|$ denotes the $l_2$ norm and $\|\cdot\|_1$ denotes the $l_1$ norm. $R_+$ denotes the set of all nonnegative real numbers. $R_{++}$ denotes the set of all positive real numbers.

2. Algorithms' description. In this paper, we consider the general iterative scheme for solving (1.5),

(2.1) $x_{k+1} = x_k + \alpha_k d_k, \qquad k = 0, 1, \dots,$

where the stepsize $\alpha_k$ is a positive scalar and $d_k$ is a search direction given by some formula. In order to describe our algorithms conveniently, we divide this section into two subsections for two cases: (a) $f$ is smooth and nonconvex; (b) $f$ is nonsmooth and nonconvex.

2.1. Smooth case. Based on the CG-Descent method in [16], the three-term descent method in [35], the symmetry of the limited memory BFGS method (L-BFGS) [25, 30], and the modified BFGS method [21], we propose a new symmetric descent nonlinear conjugate gradient method which converges to a stationary point from any starting point for nonconvex minimization problems. We consider the search direction

(2.2) $d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1} + \theta_k z_{k-1} & \text{if } k > 0, \end{cases}$

where $g_k = \nabla f(x_k)$ is the gradient of $f$ at $x_k$ and

(2.3) $\beta_k = \dfrac{g_k^T z_{k-1}}{d_{k-1}^T z_{k-1}} - 2\dfrac{\|z_{k-1}\|^2\, g_k^T d_{k-1}}{(d_{k-1}^T z_{k-1})^2}, \qquad \theta_k = \dfrac{g_k^T d_{k-1}}{d_{k-1}^T z_{k-1}},$

(2.4) $z_{k-1} = y_{k-1} + t_k s_{k-1}, \qquad t_k = \varepsilon_0\|g_k\|^r + \max\Big\{0,\ \dfrac{-s_{k-1}^T y_{k-1}}{\|s_{k-1}\|^2}\Big\},$

with $y_{k-1} = g_k - g_{k-1}$, $s_{k-1} = x_k - x_{k-1} = \alpha_{k-1} d_{k-1}$, and for some constants $\varepsilon_0 > 0$ and $r \ge 0$.

We can claim that (2.2) is well defined; that is, $d_k$ is finite-valued for $g_{k-1} \ne 0$ and $\alpha_{k-1} > 0$. Indeed, since we have from (2.4) that

(2.5) $\dfrac{s_{k-1}^T z_{k-1}}{\|s_{k-1}\|^2} = \varepsilon_0\|g_k\|^r + \dfrac{s_{k-1}^T y_{k-1}}{\|s_{k-1}\|^2} + \max\Big\{0,\ \dfrac{-s_{k-1}^T y_{k-1}}{\|s_{k-1}\|^2}\Big\} \ge \varepsilon_0\|g_k\|^r > 0,$

which implies $d_{k-1}^T z_{k-1} = s_{k-1}^T z_{k-1}/\alpha_{k-1} > 0$, $\beta_k$ and $\theta_k$ are finite-valued and thus $d_k$ is finite-valued.
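For readers who prefer to see the update spelled out in code, the following Python/NumPy fragment computes the search direction (2.2)–(2.4). It is an illustrative sketch only; the function name, argument names, and default parameter values are our own choices, not prescribed by the paper.

```python
import numpy as np

def search_direction(g_k, g_prev, d_prev, x_k, x_prev, eps0=1e-6, r=1.0):
    """Compute d_k by (2.2)-(2.4); pass None for the previous quantities at k = 0."""
    if d_prev is None:                      # k = 0: steepest descent direction
        return -g_k
    s = x_k - x_prev                        # s_{k-1} = x_k - x_{k-1}
    y = g_k - g_prev                        # y_{k-1} = g_k - g_{k-1}
    t_k = eps0 * np.linalg.norm(g_k)**r + max(0.0, -(s @ y) / (s @ s))
    z = y + t_k * s                         # z_{k-1} in (2.4)
    dz = d_prev @ z                         # d_{k-1}^T z_{k-1} > 0 by (2.5)
    beta_k = (g_k @ z) / dz - 2.0 * (z @ z) * (g_k @ d_prev) / dz**2   # (2.3)
    theta_k = (g_k @ d_prev) / dz                                      # (2.3)
    return -g_k + beta_k * d_prev + theta_k * z                        # (2.2)
```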


It is worth noticing that (2.1)–(2.2) can be written as a Newton-like method,
$$x_{k+1} = x_k - \alpha_k H_k g_k,$$
where $H_0 = I$ and for $k \ge 1$

(2.6) $H_k = I - \dfrac{d_{k-1} z_{k-1}^T + z_{k-1} d_{k-1}^T}{d_{k-1}^T z_{k-1}} + 2\dfrac{\|z_{k-1}\|^2}{(d_{k-1}^T z_{k-1})^2}\, d_{k-1} d_{k-1}^T.$

This means that the search direction $d_k$ defined in (2.2) can be given as

(2.7) $d_k = -H_k g_k.$

This can be verified as follows. It is obviously true for $k = 0$. For $k > 0$, we have
$$-H_k g_k = -g_k + \frac{d_{k-1} z_{k-1}^T g_k + z_{k-1} d_{k-1}^T g_k}{d_{k-1}^T z_{k-1}} - 2\frac{\|z_{k-1}\|^2}{(d_{k-1}^T z_{k-1})^2}\, d_{k-1} d_{k-1}^T g_k = -g_k + \Big(\frac{z_{k-1}^T g_k}{d_{k-1}^T z_{k-1}} - 2\frac{\|z_{k-1}\|^2\, d_{k-1}^T g_k}{(d_{k-1}^T z_{k-1})^2}\Big) d_{k-1} + \frac{d_{k-1}^T g_k}{d_{k-1}^T z_{k-1}}\, z_{k-1} = -g_k + \beta_k d_{k-1} + \theta_k z_{k-1} = d_k = \frac{1}{\alpha_k}(x_{k+1} - x_k).$$

Remark 2.1. If $f$ is strictly convex, we can choose $\varepsilon_0 = 0$. Then from $s_{k-1}^T y_{k-1} = (x_k - x_{k-1})^T (g_k - g_{k-1}) > 0$, we have that $t_k \equiv 0$, and $H_k$ has the following well-defined version:

(2.8) $H_k = I - \dfrac{d_{k-1} y_{k-1}^T + y_{k-1} d_{k-1}^T}{d_{k-1}^T y_{k-1}} + 2\dfrac{\|y_{k-1}\|^2}{(d_{k-1}^T y_{k-1})^2}\, d_{k-1} d_{k-1}^T.$

Moreover, the following equality holds:
$$-H_k g_k = -g_k + \Big(\frac{y_{k-1}^T g_k}{d_{k-1}^T y_{k-1}} - 2\frac{\|y_{k-1}\|^2\, d_{k-1}^T g_k}{(d_{k-1}^T y_{k-1})^2}\Big) d_{k-1} + \frac{d_{k-1}^T g_k}{d_{k-1}^T y_{k-1}}\, y_{k-1}.$$
In this case, if the exact line search ($\alpha_{k-1} = \arg\min_{\alpha>0} f(x_{k-1} + \alpha d_{k-1})$) is used, then we have $\nabla_\alpha f(x_{k-1} + \alpha d_{k-1}) = g_k^T d_{k-1} = 0$, and thus
$$-H_k g_k = -g_k + \frac{y_{k-1}^T g_k}{d_{k-1}^T y_{k-1}}\, d_{k-1}.$$
This is the well-known Hestenes–Stiefel conjugate gradient method and satisfies the conjugacy condition $d_k^T y_{k-1} = (-H_k g_k)^T y_{k-1} = 0$; see [17]. Therefore, we call this method (2.1)–(2.2) a nonlinear conjugate gradient method as it reduces to the standard conjugate gradient method when $f$ is strictly convex and the exact line search is used. However, for nonconvex functions, (2.8) is not well defined because $d_{k-1}^T y_{k-1} = 0$ can happen. It is significant to introduce the


additional term $t_k s_{k-1}$ in $z_{k-1}$ and use $d_{k-1}^T z_{k-1}$ instead of $d_{k-1}^T y_{k-1}$, which not only ensures that $H_k$ is well defined but also possesses the property (2.5). The next lemma lists an important property of $H_k$ defined by (2.6).

Lemma 2.1. Matrices $H_k$ are symmetric positive definite and satisfy

(2.9) $\|H_k^{-1}\| \le 2, \qquad k \ge 1.$

Proof. Obviously $H_k$ is symmetric. Moreover, for any $k \ge 1$ and any $x \in R^n$, we have
$$x^T H_k x = x^T x - \frac{2(x^T d_{k-1})(z_{k-1}^T x)}{d_{k-1}^T z_{k-1}} + \frac{2\|z_{k-1}\|^2 (x^T d_{k-1})^2}{(d_{k-1}^T z_{k-1})^2} = x^T x - 2\Big(\frac{\sqrt{2}\, x^T d_{k-1}}{d_{k-1}^T z_{k-1}}\, z_{k-1}\Big)^T\Big(\frac{1}{\sqrt{2}}\, x\Big) + \frac{2\|z_{k-1}\|^2 (x^T d_{k-1})^2}{(d_{k-1}^T z_{k-1})^2} \ge x^T x - \Big(\Big\|\frac{\sqrt{2}\, x^T d_{k-1}}{d_{k-1}^T z_{k-1}}\, z_{k-1}\Big\|^2 + \Big\|\frac{1}{\sqrt{2}}\, x\Big\|^2\Big) + \frac{2\|z_{k-1}\|^2 (x^T d_{k-1})^2}{(d_{k-1}^T z_{k-1})^2} = x^T x - \frac{1}{2} x^T x = \frac{1}{2}\|x\|^2.$$
Hence $H_k$ is positive definite and its smallest eigenvalue is at least $1/2$, which gives (2.9).

Lemma 2.1 ensures that $H_k$ are symmetric positive definite and satisfy $\|H_k g_k\| \ge \frac{1}{2}\|g_k\|$. Existing nonlinear conjugate gradient methods [16, 21, 25, 30, 35] have no such nice property. For instance, in the conjugate gradient method proposed in [16], the search direction is given by $d_k = -H_k^{HZ} g_k$, where
$$H_k^{HZ} = I - \frac{d_{k-1} y_{k-1}^T}{d_{k-1}^T y_{k-1}} + 2\frac{\|y_{k-1}\|^2}{(d_{k-1}^T y_{k-1})^2}\, d_{k-1} d_{k-1}^T.$$
Obviously, $H_k^{HZ}$ is not symmetric.

Following directly from the above lemma and (2.7), we find that the search direction is a descent direction.

Lemma 2.2. Let $\{x_k\}$ and $\{d_k\}$ be generated by the method (2.1) and (2.2). We have

(2.10) $d_k^T g_k \le -\frac{1}{2}\|g_k\|^2, \qquad k \ge 0.$

Now we present an algorithm for smooth and nonconvex minimization problems.

Algorithm 2.1.
Step 0. Choose constants $\varepsilon_0 > 0$, $r \ge 0$. Choose $\delta \in (0,\sigma)$, $\sigma \in (\delta,1)$, $\rho \in (0,1)$, and initial point $x_0 \in R^n$. Let $k := 0$.
Step 1. Compute $d_k$ by (2.2)–(2.4) with the chosen parameters $\varepsilon_0 > 0$, $r \ge 0$.
Step 2. Determine $\alpha_k$ by the Armijo line search, that is, $\alpha_k = \max\{\rho^0, \rho^1, \dots\}$ satisfying

(2.11) $f(x_k + \rho^i d_k) \le f(x_k) + \delta\rho^i g_k^T d_k;$


or the Wolfe line search, that is, $\alpha_k$ satisfying

(2.12) $f(x_k + \alpha_k d_k) \le f(x_k) + \delta\alpha_k g_k^T d_k, \qquad d_k^T g(x_k + \alpha_k d_k) \ge \sigma d_k^T g_k.$

Step 3. Set $x_{k+1} = x_k + \alpha_k d_k$.
Step 4. Set $k := k+1$. Go to Step 1.

Remark 2.2. The Armijo line search and the Wolfe line search in Algorithm 2.1 are well defined, since the search directions are descent directions from Lemma 2.2.
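A minimal Python sketch of the Armijo backtracking rule (2.11) is given below. It is illustrative only; the callable f, the default parameter values, and the backtracking cap are our own choices and are not prescribed by the paper.

```python
def armijo_step(f, x_k, g_k, d_k, delta=1e-4, rho=0.5, max_backtracks=50):
    """Return alpha_k = max{rho^0, rho^1, ...} satisfying (2.11)."""
    alpha = 1.0                       # start from rho^0 = 1
    slope = delta * (g_k @ d_k)       # g_k^T d_k < 0 by Lemma 2.2, so this is negative
    f_k = f(x_k)
    for _ in range(max_backtracks):
        if f(x_k + alpha * d_k) <= f_k + alpha * slope:
            return alpha
        alpha *= rho                  # try the next power of rho
    return alpha                      # fallback if no step was accepted
```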

To ensure the convergence of Algorithm 2.1, we need the following standard assumption.

Assumption A.
(i) For any $\bar{x} \in R^n$, the level set $S(\bar{x}) = \{x \in R^n \mid f(x) \le f(\bar{x})\}$ is bounded.
(ii) $f$ is continuously differentiable and there exists a constant $L > 0$ such that for any $\bar{x} \in R^n$, the gradient of $f$ satisfies

(2.13) $\|g(x) - g(y)\| \le L\|x - y\|, \qquad x, y \in S(\bar{x}).$

Assumption A is often used in the analysis of convergence of conjugate gradient methods. In the next section, we give some functions which satisfy Assumption A. Throughout this subsection, we always suppose that Assumption A holds. We can get from Assumption A that there exists a constant $\gamma > 0$ such that all $x$ in the level set $S(x_0)$ satisfy

(2.14) $\|g(x)\| \le \gamma.$

Hence, we have

(2.15) $\|z_{k-1}\| \le \|y_{k-1}\| + t_k\|s_{k-1}\| \le 2\|y_{k-1}\| + \varepsilon_0\|g_k\|^r\|s_{k-1}\| \le (2L + \varepsilon_0\|g_k\|^r)\|s_{k-1}\| \le (2L + \varepsilon_0\gamma^r)\|s_{k-1}\|,$

where the first and second inequalities use (2.4), the third inequality uses (2.13), and the last inequality uses (2.14).

The next result shows that $d_k$ is bounded.

Lemma 2.3. Let $\{x_k\}$ be generated by Algorithm 2.1. If there exists a positive constant $\varepsilon$ such that for all $k \ge 0$

(2.16) $\|g_k\| \ge \varepsilon,$

then there exists a constant $c_0 > 0$ such that

(2.17) $\|d_k\| \le c_0\|g_k\|, \qquad k \ge 0.$


Proof. By the definition (2.6) for $H_k$, we have
$$\|H_k\| = \Big\|I - \frac{d_{k-1} z_{k-1}^T + z_{k-1} d_{k-1}^T}{d_{k-1}^T z_{k-1}} + 2\frac{\|z_{k-1}\|^2}{(d_{k-1}^T z_{k-1})^2}\, d_{k-1} d_{k-1}^T\Big\| \le \|I\| + \frac{2\|d_{k-1}\|\|z_{k-1}\|}{d_{k-1}^T z_{k-1}} + \frac{2\|z_{k-1}\|^2\|d_{k-1}\|^2}{(d_{k-1}^T z_{k-1})^2} \le \|I\| + \frac{2(2L + \varepsilon_0\gamma^r)\|d_{k-1}\|\|s_{k-1}\|}{\varepsilon_0\|g_k\|^r\, d_{k-1}^T s_{k-1}} + \frac{2(2L + \varepsilon_0\gamma^r)^2\|s_{k-1}\|^2\|d_{k-1}\|^2}{(\varepsilon_0\|g_k\|^r\, d_{k-1}^T s_{k-1})^2} \le \|I\| + \frac{2(2L + \varepsilon_0\gamma^r)}{\varepsilon_0\varepsilon^r} + \frac{2(2L + \varepsilon_0\gamma^r)^2}{(\varepsilon_0\varepsilon^r)^2} =: c_0,$$
where the second inequality uses (2.15), (2.5), and $s_{k-1} = \alpha_{k-1} d_{k-1}$, and the third inequality uses $d_{k-1}^T s_{k-1} = \alpha_{k-1}\|d_{k-1}\|^2$. It follows from the above inequality and (2.7) that $\|d_k\| \le c_0\|g_k\|$.

The following theorem shows the convergence of Algorithm 2.1 when the objective function $f$ is smooth and nonconvex.

Theorem 2.4. Let $\{x_k\}$ be generated by Algorithm 2.1. If all stepsizes $\alpha_k$ are computed by a single-type line search, either the Armijo line search (2.11) or the Wolfe line search (2.12), then we have

(2.18) $\liminf_{k\to\infty}\|\nabla f(x_k)\| = 0.$

Proof. (i) If all stepsizes $\alpha_k$ are computed by the Armijo line search (2.11), then the conclusion follows directly from Theorem 3.1 of [14] and Lemma 2.2.
(ii) If all stepsizes $\alpha_k$ are computed by the Wolfe line search, then we have from Assumption A and the first inequalities in (2.12) and (2.10) that

(2.19) $\lim_{k\to\infty}\alpha_k\|g_k\|^2 = 0.$

Now if (2.18) does not hold, then there exists a positive constant $\varepsilon$ such that (2.16) holds for all $k \ge 0$. It follows from the second inequality in (2.12) that
$$L\alpha_k\|d_k\|^2 \ge (g_{k+1} - g_k)^T d_k \ge (1-\sigma)(-g_k^T d_k).$$
This together with (2.10), (2.14), and (2.17) implies that
$$\alpha_k \ge \frac{(1-\sigma)(-g_k^T d_k)}{L\|d_k\|^2} \ge \frac{(1-\sigma)\|g_k\|^2}{2L\|d_k\|^2} \ge \frac{(1-\sigma)\varepsilon^2}{2Lc_0^2\gamma^2} > 0.$$
Hence from (2.19), we obtain $\lim_{k\to\infty}\|g_k\| = 0$, which leads to a contradiction. This shows that (2.18) holds.

Remark 2.3. Theorem 2.4 shows that from any initial point $x_0 \in R^n$, Algorithm 2.1 converges to a stationary point $x^*$ of $f$. If $f$ is strongly pseudoconvex at $x^*$, then $x^*$ is a local


minimizer of $f$. A function $f$ is said to be strongly pseudoconvex at $x^*$ if there is a neighborhood $\Omega$ of $x^*$ such that for every $\xi \in \partial f(x^*)$ and every $y \in \Omega$,
$$\xi^T(y - x^*) \ge 0 \;\Longrightarrow\; f(y) \ge f(x^*).$$
Pseudoconvexity is weaker than convexity. For example, $\varphi_1(t) = \frac{\alpha|t|}{1+\alpha|t|}$ is strongly pseudoconvex at all $t \in R$, but it is not convex.

2.2. Nonsmooth case. From now on, throughout the paper, we assume that $f$ is locally Lipschitz continuous but not necessarily differentiable. According to the Rademacher theorem [11], $f$ is differentiable almost everywhere in $R^n$. The subdifferential $\partial f(x)$, called the generalized gradient of $f$ at $x$, is defined by
$$\partial f(x) = \operatorname{conv}\Big\{\lim_{x_i\to x,\ x_i\in D_f}\nabla f(x_i)\Big\},$$
where "conv" denotes the convex hull of a set and $D_f$ is the set of points at which $f$ is differentiable [10]. Based on the idea of the smoothing Newton method for nonsmooth equations in [7, 8], Zhang and Chen [34] proposed a smoothing projected gradient (SPG) method for nonsmooth and nonconvex optimization problems on a closed convex set. The SPG method is very simple and suitable for large-scale problems. To accelerate the convergence rate, in this subsection, we extend Algorithm 2.1 to the nonsmooth case by using the conjugate gradient of the smoothing function as the search direction. First we define a class of smoothing functions of $f$, which are more general than those used for nonsmooth equations in [5, 9, 7, 8]. A smoothing function can be considered as a special smooth approximation of $f$, which uses a scalar smoothing parameter to play a key role in convergence analysis of the smoothing method.

Definition 2.5. Let $f: R^n \to R$ be a locally Lipschitz continuous function. We call $\tilde{f}: R^n \times R_+ \to R$ a smoothing function of $f$ if $\tilde{f}(\cdot,\mu)$ is continuously differentiable in $R^n$ for any fixed $\mu \in R_{++}$ and

$\lim_{\mu\downarrow 0}\tilde{f}(x,\mu) = f(x)$ for any fixed $x \in R^n$.

Now we denote $\nabla\tilde{f}(x,\mu) = \nabla_x\tilde{f}(x,\mu)$ and $\tilde{g}_k = \nabla\tilde{f}(x_k,\mu_k)$; then we can present the following SCG method for nonsmooth and nonconvex optimization.

Algorithm 2.2 (smoothing conjugate gradient (SCG) method).
Step 0. Choose constants $\varepsilon_0 > 0$, $r \ge 0$. Choose $\delta \in (0,1)$, $\rho, \gamma_1 \in (0,1)$, $\mu_0 > 0$, $\gamma > 0$, and initial point $x_0 \in R^n$. Let $d_0 = -\tilde{g}_0$. Set $k := 0$.
Step 1. Compute the stepsize $\alpha_k$ by the Armijo line search, $\alpha_k = \max\{\rho^0, \rho^1, \dots\}$ satisfying
$$\tilde{f}(x_k + \rho^m d_k, \mu_k) \le \tilde{f}(x_k,\mu_k) + \delta\rho^m\tilde{g}_k^T d_k;$$
set $x_{k+1} = x_k + \alpha_k d_k$.
Step 2. If $\|\nabla\tilde{f}(x_{k+1},\mu_k)\| \ge \gamma\mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise, choose $\mu_{k+1} = \gamma_1\mu_k$.
Step 3. Compute $d_{k+1}$ by the formula
$$d_{k+1} = -\tilde{g}_{k+1} + \Big(\frac{\tilde{g}_{k+1}^T\tilde{z}_k}{d_k^T\tilde{z}_k} - 2\frac{\|\tilde{z}_k\|^2\,\tilde{g}_{k+1}^T d_k}{(d_k^T\tilde{z}_k)^2}\Big) d_k + \frac{\tilde{g}_{k+1}^T d_k}{d_k^T\tilde{z}_k}\,\tilde{z}_k,$$
where $\tilde{z}_k = \tilde{y}_k + \big(\varepsilon_0\|\tilde{g}_{k+1}\|^r + \max\{0,\ -s_k^T\tilde{y}_k/s_k^T s_k\}\big)s_k$, $\tilde{y}_k = \tilde{g}_{k+1} - \tilde{g}_k$, and $s_k = x_{k+1} - x_k$.
Step 4. Set $k := k+1$. Go to Step 1.
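As a rough illustration of how Steps 1–4 fit together, the following Python outline wires the pieces up around a user-supplied smoothing function. It is our own sketch under stated assumptions: f_smooth(x, mu) and grad_smooth(x, mu) are hypothetical callables, the stopping rule (a fixed iteration budget) is ours, and the helpers search_direction and armijo_step are the sketches given earlier.

```python
import numpy as np

def scg(f_smooth, grad_smooth, x0, mu0=1.0, gamma=1.0, gamma1=0.5,
        eps0=1e-6, r=1.0, max_iter=500):
    """Sketch of Algorithm 2.2 (SCG) for min f, using a smoothing function f~."""
    x, mu = x0, mu0
    g = grad_smooth(x, mu)
    d = -g                                                    # Step 0: d_0 = -g~_0
    for _ in range(max_iter):
        alpha = armijo_step(lambda z: f_smooth(z, mu), x, g, d)   # Step 1
        x_new = x + alpha * d
        if np.linalg.norm(grad_smooth(x_new, mu)) < gamma * mu:   # Step 2
            mu = gamma1 * mu                                  # shrink smoothing parameter
        g_new = grad_smooth(x_new, mu)       # gradient at (x_{k+1}, mu_{k+1}) for Step 3
        d = search_direction(g_new, g, d, x_new, x, eps0, r)  # Step 3
        x, g = x_new, g_new                                   # Step 4
    return x, mu
```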

Theorem 2.6. Suppose that $\tilde{f}(\cdot,\mu)$ is a smoothing function of $f$. If, for every fixed $\mu > 0$, $\tilde{f}(\cdot,\mu)$ satisfies Assumption A, then a sequence $\{x_k\}$ generated by Algorithm 2.2 satisfies
$$\lim_{k\to\infty}\mu_k = 0 \qquad\text{and}\qquad \liminf_{k\to\infty}\|\nabla\tilde{f}(x_k,\mu_{k-1})\| = 0.$$
Proof. Denote $K = \{k \mid \mu_{k+1} = \gamma_1\mu_k\}$. If $K$ is finite, then there exists an integer $\bar{k}$ such that for all $k > \bar{k}$

(2.20) $\|\nabla\tilde{f}(x_k,\mu_{k-1})\| \ge \gamma\mu_{k-1}$

and $\mu_k = \mu_{\bar{k}} =: \bar{\mu}$ in Step 2 of Algorithm 2.2. Since $\tilde{f}(\cdot,\bar{\mu})$ is a smooth function, Algorithm 2.2 reduces to Algorithm 2.1 for solving $\min_{x\in R^n}\tilde{f}(x,\bar{\mu})$. Hence, by Assumption A on $\tilde{f}(\cdot,\bar{\mu})$, we have from Theorem 2.4 that $\liminf_{k\to\infty}\|\nabla\tilde{f}(x_k,\bar{\mu})\| = 0$, which contradicts (2.20). This shows that $K$ must be infinite and $\lim_{k\to\infty}\mu_k = 0$.

Since $K$ is infinite, we can assume that $K = \{k_0, k_1, \dots\}$ with $k_0 < k_1 < \cdots$. Then we have
$$\lim_{i\to\infty}\|\nabla\tilde{f}(x_{k_i+1},\mu_{k_i})\| \le \gamma\lim_{i\to\infty}\mu_{k_i} = 0.$$
Remark 2.4. We can replace the Armijo line search in Algorithm 2.2 by the Wolfe line

search to have Theorem 2.6. In the next section, we give a class of smoothing functions which satisfy Assumption A for every fixed $\mu > 0$. Note that Theorem 2.6 does not need that the limit of the Lipschitz constant for $\nabla\tilde{f}(\cdot,\mu)$ exists. This is the novelty of the smoothing nonlinear conjugate gradient method.

Smoothing functions of $f$ can be defined in many ways. We present a class of smoothing functions for image restoration in the next section. In general, we can use a kernel function $\rho: R^n \to R_+$ to define a sequence of mollifiers which are bounded and continuous and satisfy
$$\rho_\mu(x) = \mu^{-n}\rho(x/\mu), \qquad \mu > 0.$$
Using it, we define a smoothing function of $f$,

(2.21) $\tilde{f}(x,\mu) = \displaystyle\int_{R^n} f(x - y)\rho_\mu(y)\,dy = \displaystyle\int_{R^n} f(y)\rho_\mu(x - y)\,dy.$


(See [28] and Example 7.19 in [29].) By Theorem 9.67 in [29], we have $G(x^*) \subseteq \partial f(x^*)$ with
$$G(x^*) = \Big\{v \;\Big|\; v = \lim_{i\to\infty,\ i\in K}\nabla_x\tilde{f}(x_i,\mu_i),\ x_i \to x^*,\ \mu_i \downarrow 0\Big\},$$
where $K$ is a subset of the set of all natural numbers. By Theorem 2.6, any accumulation point $x^*$ of $\{x_k\}$ generated by Algorithm 2.2 with a smoothing function defined by mollifiers satisfies $0 \in \partial f(x^*)$; that is, $x^*$ is a Clarke stationary

point of $f$ [10].

3. Smoothing functions. Many potential functions $\varphi(t)$ in image restoration are continuously differentiable on $R$ except at the origin; for instance, the following potential functions [12, 24] with $\alpha > 0$ and $p \in (0,1)$:

(3.1) $\varphi_1(t) = \dfrac{\alpha|t|}{1+\alpha|t|}, \qquad \varphi_2(t) = \log(1+\alpha|t|), \qquad \varphi_3(t) = (|t|+\alpha)^p.$

It is clear that these functions are nonsmooth and nonconvex.
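To make the shape of these penalties concrete, here is a small illustrative Python snippet evaluating the three potentials in (3.1); it is only a sketch, and the default parameter values are our own choices.

```python
import numpy as np

def phi1(t, alpha=1.0):
    """phi_1(t) = alpha|t| / (1 + alpha|t|): bounded, nonconvex, with a kink at 0."""
    return alpha * np.abs(t) / (1.0 + alpha * np.abs(t))

def phi2(t, alpha=1.0):
    """phi_2(t) = log(1 + alpha|t|)."""
    return np.log(1.0 + alpha * np.abs(t))

def phi3(t, alpha=0.5, p=0.5):
    """phi_3(t) = (|t| + alpha)^p with 0 < p < 1."""
    return (np.abs(t) + alpha) ** p
```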

In this paper, we consider the following class of nonsmooth potential functions.

Assumptions on $\varphi$. We assume that $\varphi: R \to R_+$ is a continuous function and satisfies the following:
(i) $\varphi(t) \ge 0$, $\varphi(t) = \varphi(-t)$; that is, $\varphi$ is nonnegative and symmetric.
(ii) $\varphi$ is continuously differentiable on $R$ except at $0$, and

(3.2) $\lim_{t\downarrow 0}\varphi'(t) = \varphi'(0^+) = -\lim_{t\uparrow 0}\varphi'(t) = -\varphi'(0^-) \in (0,\infty).$

(iii) $\varphi$ is twice differentiable on $R$ except at $0$, and there is a constant $\nu_0$ such that $|\varphi''(t)| \le \nu_0$ for $t \ne 0$.

It is easy to see that $\varphi_i$, $i = 1, 2, 3$, satisfy our assumptions on $\varphi$. Now we present a class of smoothing potential functions which combines the splitting of $\varphi$ in [24] and the integration convolution smoothing technique [5, 28, 29]. Nikolova et al. [24] split the function $\varphi$ into a smooth term and a nonsmooth convex term as

(3.3) $\varphi(t) = \psi(t) + \varphi'(0^+)|t|.$

Consider the function

(3.4) $\psi(t) = \varphi(t) - \varphi'(0^+)|t|.$

Since the derivative of $\psi$ satisfies $\psi'(0^+) = 0 = \psi'(0^-)$,


by (iii) of Assumptions on $\varphi$, $\psi$ is continuously differentiable at $0$ and thus on $R$.

We can build a smoothing function of $|t|$ by integration convolution. Let $\rho: R \to [0,\infty)$ be a piecewise continuous density function with a finite number of pieces satisfying

(3.5) $\rho(\tau) = \rho(-\tau)$ and $\kappa := \displaystyle\int_{-\infty}^{\infty}|\tau|\rho(\tau)\,d\tau < \infty.$

The integration convolution smoothing function of $|t|$ is defined as

(3.6) $s_\mu(t) = \displaystyle\int_{-\infty}^{\infty}|t - \mu\tau|\rho(\tau)\,d\tau.$

For instance, if we choose the density function
$$\rho(\tau) = \begin{cases} 0 & \text{if } |\tau| > 0.5, \\ 1 & \text{otherwise}, \end{cases}$$
then we obtain a smoothing function of $|t|$:

(3.7) $s_\mu(t) = \begin{cases} |t| & \text{if } |t| > \dfrac{\mu}{2}, \\[4pt] \dfrac{t^2}{\mu} + \dfrac{\mu}{4} & \text{if } |t| \le \dfrac{\mu}{2}. \end{cases}$

This function is also known as the Huber potential function [18, 2], which is very often used in robust statistics. It is worth noticing that $\rho$ is not necessarily continuous. From our numerical experiment, a piecewise continuous density function performs better than a continuous one. However, smoothing functions defined by (2.21) require the mollifier to be continuous in [28, 29].

Combining (3.3) and (3.6), we define a class of smoothing functions for a potential function $\varphi$ as

(3.8) $\varphi_\mu(t) = \psi(t) + \varphi'(0^+)s_\mu(t).$

The following proposition shows that $\varphi_\mu$ has nice smooth approximation properties.
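The construction (3.7)–(3.8) is easy to write down explicitly. The Python sketch below is our own illustration, specialized to $\varphi_1$ from (3.1), whose one-sided derivative at $0$ is $\varphi_1'(0^+) = \alpha$; it builds the Huber smoother $s_\mu$, the smooth part $\psi$, and the smoothed potential $\varphi_\mu$ together with its derivative from (3.9). Names and defaults are ours.

```python
import numpy as np

def s_mu(t, mu):
    """Huber smoothing of |t| from (3.7)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) > mu / 2, np.abs(t), t**2 / mu + mu / 4)

def ds_mu(t, mu):
    """Derivative of s_mu: sign(t) outside [-mu/2, mu/2], 2t/mu inside."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) > mu / 2, np.sign(t), 2 * t / mu)

def phi_mu(t, mu, alpha=1.0):
    """phi_mu = psi + phi'(0+) * s_mu for phi_1(t) = alpha|t|/(1+alpha|t|)."""
    t = np.asarray(t, dtype=float)
    psi = alpha * np.abs(t) / (1 + alpha * np.abs(t)) - alpha * np.abs(t)
    return psi + alpha * s_mu(t, mu)

def dphi_mu(t, mu, alpha=1.0):
    """Derivative (3.9) specialized to phi_1: psi'(t) + alpha * s_mu'(t)."""
    t = np.asarray(t, dtype=float)
    dpsi = alpha * np.sign(t) / (1 + alpha * np.abs(t))**2 - alpha * np.sign(t)
    return dpsi + alpha * ds_mu(t, mu)
```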

Proposition 3.1. A function $\varphi_\mu$ defined by (3.8) with $s_\mu$ defined by (3.5) and (3.6) has the following properties.
(i) $\varphi_\mu(t) = \varphi_\mu(-t)$ for $t \in R$; that is, $\varphi_\mu$ is symmetric.
(ii) $\varphi_\mu$ is continuously differentiable on $R$, and its derivative can be given as

(3.9) $\varphi_\mu'(t) = \psi'(t) + 2\varphi'(0^+)\displaystyle\int_0^{t/\mu}\rho(\tau)\,d\tau.$

(iii) $\varphi_\mu$ converges uniformly to $\varphi$ on $R$ with $|\varphi_\mu(t) - \varphi(t)| \le \varphi'(0^+)\kappa\mu$.
(iv) The set of limits of the gradient $\varphi_\mu'(t)$ coincides with the Clarke generalized gradient; that is,

(3.10) $\Big\{\lim_{\mu\downarrow 0,\,t\to 0}\varphi_\mu'(t)\Big\} = \partial\varphi(0), \qquad\text{and}\qquad \lim_{\mu\downarrow 0,\,t\to t^*}\varphi_\mu'(t) = \varphi'(t^*), \quad t^* \ne 0.$

Moreover, we have

(3.11) $\lim_{\mu\downarrow 0}\varphi_\mu'(t) = \begin{cases} \varphi'(t) & \text{if } t \ne 0, \\ 0 & \text{if } t = 0. \end{cases}$

(v) For any fixed $\mu > 0$, $\varphi_\mu'$ is Lipschitz continuous on $R$; that is, there is a constant $\nu_\mu > 0$ such that

(3.12) $|\varphi_\mu'(t_1) - \varphi_\mu'(t_2)| \le \nu_\mu|t_1 - t_2|.$

Proof. (i) We have from (3.8) that
$$\varphi_\mu(t) = \psi(t) + \varphi'(0^+)s_\mu(t) = \varphi(t) + \varphi'(0^+)\big(s_\mu(t) - |t|\big).$$
By the variable transformation $u = -\tau$, we have from $\rho(\tau) = \rho(-\tau)$ that

(3.13) $s_\mu(-t) = \displaystyle\int_{-\infty}^{\infty}|{-t} - \mu\tau|\rho(\tau)\,d\tau = \displaystyle\int_{-\infty}^{\infty}|t - \mu u|\rho(u)\,du = s_\mu(t),$

which shows that $s_\mu$ is symmetric in $R$. This, together with (i) of Assumptions on $\varphi$, implies that $\varphi_\mu$ is symmetric.
(ii) From the definition of $s_\mu(t)$, we have
$$s_\mu(t) = \int_{-\infty}^{+\infty}|t - \mu\tau|\rho(\tau)\,d\tau = \int_{-\infty}^{t/\mu}(t - \mu\tau)\rho(\tau)\,d\tau - \int_{t/\mu}^{+\infty}(t - \mu\tau)\rho(\tau)\,d\tau = t\Big(\int_{-\infty}^{t/\mu}\rho(\tau)\,d\tau - \int_{t/\mu}^{+\infty}\rho(\tau)\,d\tau\Big) + \mu\Big(\int_{t/\mu}^{+\infty}\tau\rho(\tau)\,d\tau - \int_{-\infty}^{t/\mu}\tau\rho(\tau)\,d\tau\Big) = 2t\int_0^{t/\mu}\rho(\tau)\,d\tau + \mu\Big(\int_{t/\mu}^{+\infty}\tau\rho(\tau)\,d\tau - \int_{-\infty}^{t/\mu}\tau\rho(\tau)\,d\tau\Big),$$


where the last equality uses $\rho(\tau) = \rho(-\tau)$. Note that $\rho(\tau) \ge 0$ and $\int_{-\infty}^{\infty}|\tau|\rho(\tau)\,d\tau < \infty$. By the integral mean value theorem, we obtain

(3.14) $s_\mu'(t) = \lim_{\Delta t\to 0}\dfrac{s_\mu(t+\Delta t) - s_\mu(t)}{\Delta t} = \lim_{\Delta t\to 0}2\displaystyle\int_0^{(t+\Delta t)/\mu}\rho(\tau)\,d\tau + \lim_{\Delta t\to 0}\dfrac{2t\displaystyle\int_{t/\mu}^{(t+\Delta t)/\mu}\rho(\tau)\,d\tau - 2\mu\displaystyle\int_{t/\mu}^{(t+\Delta t)/\mu}\tau\rho(\tau)\,d\tau}{\Delta t} = 2\displaystyle\int_0^{t/\mu}\rho(\tau)\,d\tau + \lim_{\Delta t\to 0}\dfrac{2\mu\displaystyle\int_{t/\mu}^{(t+\Delta t)/\mu}\big(\tfrac{t}{\mu} - \tau\big)\rho(\tau)\,d\tau}{\Delta t} = 2\displaystyle\int_0^{t/\mu}\rho(\tau)\,d\tau + \lim_{\Delta t\to 0}\dfrac{2\mu\big(\tfrac{t}{\mu} - \xi\big)}{\Delta t}\displaystyle\int_{t/\mu}^{(t+\Delta t)/\mu}\rho(\tau)\,d\tau = 2\displaystyle\int_0^{t/\mu}\rho(\tau)\,d\tau,$

where $\xi \in [t/\mu, (t+\Delta t)/\mu]$. This gives (3.9).

(iii) By (3.8), we have
$$|\varphi_\mu(t) - \varphi(t)| = \varphi'(0^+)\big|s_\mu(t) - |t|\big| = \varphi'(0^+)\Big|\int_{-\infty}^{+\infty}\big(|t - \mu\tau| - |t|\big)\rho(\tau)\,d\tau\Big| \le \varphi'(0^+)\int_{-\infty}^{+\infty}|t - \mu\tau - t|\rho(\tau)\,d\tau = \varphi'(0^+)\int_{-\infty}^{+\infty}\mu|\tau|\rho(\tau)\,d\tau = \varphi'(0^+)\kappa\mu,$$
where $\kappa$ is specified by (3.5).
(iv) It follows from (3.14) and $\rho(\tau) = \rho(-\tau)$ that
$$\lim_{\mu\downarrow 0,\,t\to t^*}s_\mu'(t) = \lim_{\mu\downarrow 0,\,t\to t^*}2\int_0^{t/\mu}\rho(\tau)\,d\tau = \lim_{\mu\downarrow 0,\,t\to t^*}\int_{-t/\mu}^{t/\mu}\rho(\tau)\,d\tau.$$
Then, for $|t^*| \ne 0$, we have
$$\lim_{\mu\downarrow 0,\,t\to t^*}s_\mu'(t) = \begin{cases} 1 & \text{if } t^* > 0, \\ -1 & \text{if } t^* < 0. \end{cases}$$
For $t^* = 0$, we have
$$\lim_{\mu\downarrow 0,\,t\to 0}s_\mu'(t) \in \begin{cases} \{1\} & \text{if } t/\mu \to +\infty, \\ \{\alpha\} & \text{if } \lim_{\mu\downarrow 0,\,t\to 0}|t/\mu| < +\infty, \\ \{-1\} & \text{if } t/\mu \to -\infty, \end{cases}$$


where $\alpha \in [-1,1]$. This shows that $\lim_{\mu\downarrow 0,\,t\to 0}s_\mu'(t) \subseteq [-1,1]$. Define $\alpha(\lambda) = 2\int_0^\lambda\rho(\tau)\,d\tau$; then $\alpha(\lambda) \in [-1,1]$ is continuous in $R$ since $\rho$ is piecewise continuous. Therefore, for any $\alpha_0 \in (-1,1)$, there exists $\lambda_0$ such that $\alpha_0 = \alpha(\lambda_0)$. If we choose $t_k/\mu_k = \lambda_0$ and $\mu_k \downarrow 0$, then we have
$$\alpha_0 = \lim_{\mu_k\downarrow 0,\,t_k\to 0}s_{\mu_k}'(t_k).$$
This shows that $[-1,1] \subseteq \lim_{\mu\downarrow 0,\,t\to 0}s_\mu'(t)$. Therefore, we have $\lim_{\mu\downarrow 0,\,t\to 0}s_\mu'(t) = [-1,1]$. By the continuity of $\psi'(t)$, (3.14), and (3.9), we obtain (3.10). From $\varphi_\mu'(0) = 0$, we get (3.11).
(v) We first show that $s_\mu'$ is Lipschitz continuous. Since $\rho$ is piecewise continuous with a finite number of pieces, there exists a constant $\kappa_0$ such that $\rho(t) \le \kappa_0$ for any $t \in R$. For any $t_1, t_2 \in R$, we have
$$|s_\mu'(t_1) - s_\mu'(t_2)| = 2\Big|\int_0^{t_1/\mu}\rho(\tau)\,d\tau - \int_0^{t_2/\mu}\rho(\tau)\,d\tau\Big| = 2\Big|\int_{t_2/\mu}^{t_1/\mu}\rho(\tau)\,d\tau\Big| \le \frac{2\kappa_0}{\mu}|t_1 - t_2|.$$
Now we show that $\psi'$ is Lipschitz continuous. Since $\psi(t) = \varphi(t) - \varphi'(0^+)|t|$, we have from (iii) in our assumptions on $\varphi$ that for $t_1 t_2 > 0$,
$$|\psi'(t_1) - \psi'(t_2)| = |\varphi'(t_1) - \varphi'(t_2)| \le \nu_0|t_1 - t_2|.$$
If $t_1 \ne 0$ and $t_2 = 0$, then we have from $\psi'(0) = 0$ that
$$|\psi'(t_1) - \psi'(0)| = |\varphi'(t_1) - \varphi'(0^+)| \le \nu_0|t_1 - 0|.$$
If $t_1 t_2 < 0$, we may assume $t_2 < 0 < t_1$; then
$$|\psi'(t_1) - \psi'(t_2)| = |(\varphi'(t_1) - \varphi'(0^+)) - (\varphi'(t_2) + \varphi'(0^+))| = |(\varphi'(t_1) - \varphi'(0^+)) + (\varphi'(-t_2) - \varphi'(0^+))| \le |\varphi'(t_1) - \varphi'(0^+)| + |\varphi'(-t_2) - \varphi'(0^+)| \le \nu_0 t_1 + \nu_0(-t_2) = \nu_0|t_1 - t_2|.$$
Letting $\nu_\mu = \nu_0 + \dfrac{2\varphi'(0^+)\kappa_0}{\mu}$, we obtain (3.12).

Now we are ready to define a class of smoothing functions and use the SCG method (Algorithm 2.2) for nonsmooth and nonconvex image restoration problems. In the remainder of this section as well as in the next section, we consider the minimization problem (1.5) with the objective function

(3.15) $f(x) = \|Ax - b\|^2 + \beta\displaystyle\sum_{i=1}^r\varphi(d_i^T x),$


where $A \in R^{m\times n}$ is a blurring matrix, $b \in R^m$ is a vector containing observed data, $\varphi$ is a potential function satisfying our assumptions on $\varphi$, $\beta > 0$ is a constant, and $d_i \in R^n$, $i = 1,\dots,r$, are the row vectors of a difference matrix.

Using the smoothing function $\varphi_\mu$ for $\varphi$, we define a class of smoothing functions for $f$ as follows:

(3.16) $\tilde{f}(x,\mu) = \|b - Ax\|^2 + \beta\displaystyle\sum_{i=1}^r\varphi_\mu(d_i^T x).$

To show that the smoothing function $\tilde{f}$

has nice approximation properties and that any accumulation point of a sequence generated by Algorithm 2.2 with $\tilde{f}$ is a Clarke stationary point, we need the definition of a regular function [10, Definition 2.3.4] and some results from Proposition 2.3.3, Proposition 2.3.6, Theorem 2.3.9, and Corollary 3 in [10, Chapter 2]. We give the definition of a regular function and summarize these results as follows.

Definition 3.2 (see [10]). A function $f$ is said to be regular at $x$ provided the following hold:
(i) For all $v$, the usual one-sided directional derivative $f'(x;v)$ exists.
(ii) For all $v$, $f'(x;v) = \limsup_{y\to x,\ t\downarrow 0}\dfrac{f(y+tv) - f(y)}{t}$.

Lemma 3.3 (see [10]).
(i) Suppose that $g_i: R^n \to R$, $i = 1,\dots,m$, are Lipschitz continuous near $x$. Then their sum $g = \sum_{i=1}^m g_i$ is also Lipschitz continuous near $x$ and
$$\partial g(x) = \partial\Big(\sum_{i=1}^m g_i\Big)(x) \subseteq \sum_{i=1}^m\partial g_i(x).$$
If each $g_i$ is regular at $x$, then equality holds.
(ii) Let $g(x) = h(F(x))$, where $F: R^n \to R^m$ is Lipschitz continuous near $x$ and where $h: R^m \to R$ is Lipschitz continuous near $F(x)$. Then $g$ is Lipschitz continuous near $x$ and
$$\partial g(x) \subseteq \operatorname{conv}\Big\{\sum_i\alpha_i\zeta_i \;\Big|\; \zeta_i \in \partial F_i(x),\ \alpha = (\alpha_1,\dots,\alpha_m)^T \in \partial h(F(x))\Big\}.$$
If $h$ is regular at $F(x)$ and $F$ is continuously differentiable at $x$ (in this case the conv is superfluous), then equality holds.
(iii) A Lipschitz continuous function $g: R^n \to R$ is regular at $x$ if $g$ is convex or $g$ is continuously differentiable at $x$.

Theorem 3.4. Let $f$ and $\tilde{f}(\cdot,\mu)$ be defined by (3.15) and (3.16), respectively. Then the following hold:
(i) $\tilde{f}(\cdot,\mu)$ is continuously differentiable for any fixed $\mu > 0$, and there exists a constant $\kappa_1 > 0$ satisfying $|\tilde{f}(x,\mu) - f(x)| \le \kappa_1\mu$.
(ii) $\tilde{f}(\cdot,\mu)$ satisfies the gradient consistent property; that is,
$$\Big\{\lim_{\mu\downarrow 0,\,x\to x^*}\nabla\tilde{f}(x,\mu)\Big\} = \partial f(x^*).$$


(iii) If $A$ has full column rank, then for any $\bar{x} \in R^n$, the level set $S_\mu(\bar{x}) = \{x \in R^n \mid \tilde{f}(x,\mu) \le \tilde{f}(\bar{x},\mu)\}$ is bounded.
(iv) For any fixed $\mu > 0$, the gradient of $\tilde{f}(x,\mu)$ is Lipschitz continuous; that is, for any $x, y \in S_\mu(\bar{x})$, there exists a constant $L_\mu$ such that
$$\|\nabla\tilde{f}(x,\mu) - \nabla\tilde{f}(y,\mu)\| \le L_\mu\|x - y\|.$$
Proof. From the definitions of $\varphi$ and $\varphi_\mu$, we can write $f$ and $\tilde{f}$ as

(3.17) $f(x) = \|b - Ax\|^2 + \beta\displaystyle\sum_{i=1}^r\psi(d_i^T x) + \beta\varphi'(0^+)\displaystyle\sum_{i=1}^r|d_i^T x|$

and

(3.18) $\tilde{f}(x,\mu) = \|b - Ax\|^2 + \beta\displaystyle\sum_{i=1}^r\psi(d_i^T x) + \beta\varphi'(0^+)\displaystyle\sum_{i=1}^r s_\mu(d_i^T x).$

(i) It follows from (3.17), (3.18), and Proposition 3.1 that $\tilde{f}(\cdot,\mu)$ is continuously differentiable for any fixed $\mu > 0$, and $|\tilde{f}(x,\mu) - f(x)| \le \beta r\varphi'(0^+)\kappa\mu$. Set $\kappa_1 = \beta r\varphi'(0^+)\kappa$; then (i) holds.
(ii) Set $h(t) = |t|$ and $F(x) = d^T x$. Then by (ii) in Lemma 3.3, $g(x) = |d^T x|$ is regular at $x$ and $\partial g(x) = \partial|d^T x|\, d$. Using Lemma 3.3 again, we have
$$\partial f(x) = 2A^T(Ax - b) + \beta\sum_{i=1}^r\psi'(d_i^T x)\,d_i + \beta\varphi'(0^+)\sum_{i=1}^r\partial\big(|d_i^T x|\big)\,d_i.$$
The gradient of the smoothing function $\tilde{f}(\cdot,\mu)$ is given by
$$\nabla\tilde{f}(x,\mu) = 2A^T(Ax - b) + \beta\sum_{i=1}^r\psi'(d_i^T x)\,d_i + \beta\varphi'(0^+)\sum_{i=1}^r s_\mu'(d_i^T x)\,d_i.$$
By (iv) of Proposition 3.1 and Lemma 3.3, we have
$$\Big\{\lim_{\mu\downarrow 0,\,x\to x^*}\nabla\tilde{f}(x,\mu)\Big\} = \Big\{\lim_{x\to x^*}\Big(2A^T(Ax - b) + \beta\sum_{i=1}^r\psi'(d_i^T x)\,d_i\Big) + \beta\varphi'(0^+)\sum_{i=1}^r\lim_{\mu\downarrow 0,\,x\to x^*}s_\mu'(d_i^T x)\,d_i\Big\} = 2A^T(Ax^* - b) + \beta\sum_{i=1}^r\psi'(d_i^T x^*)\,d_i + \beta\varphi'(0^+)\sum_{i=1}^r\partial|d_i^T x^*|\,d_i = \partial f(x^*).$$
This shows that the gradient consistent property holds for the smoothing function $\tilde{f}$.
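Putting the pieces together, a rough Python sketch of the smoothed objective (3.16) and of the gradient expression derived in the proof of (ii) follows. It is our own illustration: it reuses phi_mu and dphi_mu from the earlier sketch (so it is specialized to $\varphi_1$), and the matrix D whose rows are the vectors $d_i$ is an assumption of ours. Wrapping these with functools.partial (fixing A, b, D, beta) would produce the two-argument callables expected by the SCG sketch given after Algorithm 2.2.

```python
import numpy as np

def f_smooth(x, mu, A, b, D, beta, alpha=1.0):
    """Smoothed objective (3.16): ||Ax - b||^2 + beta * sum_i phi_mu(d_i^T x)."""
    res = A @ x - b
    return res @ res + beta * np.sum(phi_mu(D @ x, mu, alpha))

def grad_smooth(x, mu, A, b, D, beta, alpha=1.0):
    """Gradient of (3.16): 2 A^T (Ax - b) + beta * D^T phi_mu'(D x)."""
    return 2.0 * A.T @ (A @ x - b) + beta * D.T @ dphi_mu(D @ x, mu, alpha)
```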


Table 1
Test results with $\beta = 0.001$, $\alpha = 1$, and $f(x_{\mathrm{orig}}) = 1.0849$. We stopped the iteration if $\|x_k - x_{\mathrm{orig}}\|/\|x_k\| \le 0.06$ or the number of iterations exceeded 120.

                        CM                            SCG
Initial point   Time      psnr     f(x_k)     Time      psnr     f(x_k)
0.0001          120.2273  28.4635  0.6734     64.6418   29.8679  0.7551
0.001           119.2405  28.4635  0.6734     69.2970   29.8726  0.7529
0.01            120.0629  28.4635  0.6734     64.7127   29.8564  0.7517
0.1             118.6640  28.4635  0.6734     62.0299   29.8333  0.7513
0.2             119.1084  28.4635  0.6734     87.2522   29.8317  0.7545
0.4             119.6038  28.4635  0.6734     52.1802   29.8424  0.7544
0.6             123.0495  28.4635  0.6734     94.9425   29.8630  0.7500
0.8             119.5355  28.4635  0.6734     89.9128   29.8450  0.7547
observed        121.2054  28.4635  0.6734     49.4008   29.8413  0.7583
random          120.4385  28.4635  0.6734     109.0128  29.8350  0.7517
average         120.1135                      74.3383

Table 2
Test results with $\beta = 0.02$, $\alpha = 0.5$, and $f(x_{\mathrm{orig}}) = 11.22$.

                        CM                         SCG
Initial point   Time     psnr    f(x_k)     Time     psnr    f(x_k)
0.0001          371.95   26.53   6.05       272.23   27.01   6.27
0.001           383.56   26.53   6.05       273.68   27.03   6.28
0.01            379.69   26.53   6.05       287.85   27.06   6.28
0.1             377.07   26.53   6.05       325.01   27.04   6.28
0.2             380.21   26.53   6.05       312.39   27.17   6.30
0.4             372.64   26.53   6.05       291.19   27.04   6.28
0.6             397.50   26.53   6.05       354.22   27.11   6.29
0.8             398.92   26.53   6.05       451.35   27.04   6.28
observed        426.33   26.53   6.05       271.22   27.11   6.29
random          400.08   26.53   6.05       462.94   27.07   6.28
average         388.80                      330.21

(iii) If $S_\mu(\bar{x})$ is unbounded, then there exists a sequence $\{x_k\} \subseteq S_\mu(\bar{x})$ such that $\|x_k\| \to \infty$.

We have from (i) in our assumptions on $\varphi$ and (iii) in Proposition 3.1 that

(3.19) $\tilde{f}(x,\mu) = x^T A^T A x - 2(Ax)^T b + \|b\|^2 + \beta\displaystyle\sum_{i=1}^r\varphi_\mu(d_i^T x) \ge x^T A^T A x - 2(Ax)^T b + \|b\|^2 + \beta\displaystyle\sum_{i=1}^r\big(\varphi(d_i^T x) - |\varphi(d_i^T x) - \varphi_\mu(d_i^T x)|\big) \ge x^T A^T A x - 2(Ax)^T b + \|b\|^2 + \beta\displaystyle\sum_{i=1}^r\big(\varphi(d_i^T x) - \varphi'(0^+)\kappa\mu\big).$

Since $A^T A$ is symmetric positive definite, $\|x_k\| \to \infty$ implies $\tilde{f}(x_k,\mu) \to +\infty$. Hence (iii) is true.


Figure 1. The left and the right figures are the original Lena image and the observed Lena image, respectively.

(iv) Using the expression
$$\nabla\tilde{f}(x,\mu) = 2A^T(Ax - b) + \beta\sum_{i=1}^r\varphi_\mu'(d_i^T x)\,d_i,$$
we have from (v) of Proposition 3.1 that
$$\|\nabla\tilde{f}(x,\mu) - \nabla\tilde{f}(y,\mu)\| \le 2\|A^T A\|\|x - y\| + \beta\sum_{i=1}^r\|d_i\|\,|\varphi_\mu'(d_i^T x) - \varphi_\mu'(d_i^T y)| \le 2\|A^T A\|\|x - y\| + \beta\sum_{i=1}^r\|d_i\|\,\nu_\mu|d_i^T x - d_i^T y| \le \Big(2\|A^T A\| + \beta\sum_{i=1}^r\|d_i\|^2\nu_\mu\Big)\|x - y\|.$$
Set $L_\mu = 2\|A^T A\| + \beta\sum_{i=1}^r\|d_i\|^2\nu_\mu$; then (iv) holds.

Remark 3.1. Theorem 3.4 shows that the smoothing function $\tilde{f}$ has very nice approximation properties and satisfies all assumptions of the convergence theorem (Theorem 2.6) for the SCG method (Algorithm 2.2). The most significant one is the gradient consistent property, which ensures that any accumulation point of a sequence generated by Algorithm 2.2 is a Clarke stationary point. Using smooth approximations to solve nonsmooth optimization problems has been studied in many papers [23, 24]. However, there is no guarantee for convergence to a generalized stationary point of the nonsmooth optimization problems in [23, 24].

Remark 3.2. The smoothing functions studied in this section can be applied to other nonsmooth image restoration models, for example, approximating $|t|$ by $s_\mu(t)$ in the $l_1$–$l_1$ model (1.3) and potential functions (3.1). From (3.19), it is not difficult to see that the assumption that $A$ has full column rank in (iii) of Theorem 3.4 can be replaced by $N(A)\cap N(D) = \{0\}$ and that $\varphi$ is coercive, that is, $t \to \infty \Rightarrow \varphi(t) \to \infty$, where $N(A)$ and $N(D)$ are the null spaces of $A$ and $D$, respectively. See Assumption 1 in [33].


Figure 2. Lena image: (a) and (b) are the restoration images by SCG and CM methods with $\beta = 0.001$ and $\alpha = 1$ and the observed image as the initial point; (c) and (d) are the restoration images by SCG and CM methods with $\beta = 0.02$ and $\alpha = 0.5$ and the observed image as the initial point.

4. Numerical results. In this section, we present numerical results to show the efficiency of the SCG method (Algorithm 2.2). The numerical testing was carried out on a Lenovo PC (3.00 GHz, 2.00 GB of RAM) with the use of MATLAB 7.4.

In our numerical experiment, the objective function has the form (3.15), and its smoothing function is defined by (3.16) and (3.7). We test three often-used images: Lena, Cameraman, and Phantom, which are all gray level images with intensity values ranging from 0 to 1. The Lena and Cameraman images are of size $n = 128\times 128$. The Phantom image is of size from $n = 128\times 128$ to $n = 256\times 256$. The pixels of the observed images are contaminated by Gaussian white noise with signal-to-noise ratios of 45 dB for Table 1 and 60 dB for the other tables, with blurring. The blurring function is chosen to be a two-dimensional Gaussian,
$$a(i,j) = e^{-2(i/3)^2 - 2(j/3)^2},$$
truncated such that the function has a support of $7\times 7$.
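For illustration, a short Python sketch of this truncated Gaussian point spread function follows; the normalization of the kernel is our own addition and may differ from the authors' setup.

```python
import numpy as np

def gaussian_psf(support=7, scale=3.0):
    """Build the truncated Gaussian blur kernel a(i,j) = exp(-2(i/3)^2 - 2(j/3)^2) on a 7x7 support."""
    half = support // 2
    idx = np.arange(-half, half + 1)
    i, j = np.meshgrid(idx, idx, indexing="ij")
    a = np.exp(-2.0 * (i / scale) ** 2 - 2.0 * (j / scale) ** 2)
    return a / a.sum()   # normalize so the kernel preserves mean intensity (our choice)
```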

For the regularization term, we use different potential functions $\varphi_i$ for $i = 1, 2, 3$. We choose $D$ to be the identity matrix $D_0 = I$, or a matrix of a first-order difference operator,
$$D_1 = \begin{pmatrix} L_1 \otimes I \\ I \otimes L_1 \end{pmatrix} \qquad\text{with}\qquad L_1 = \begin{pmatrix} 1 & -1 & & \\ & 1 & -1 & \\ & & \ddots & \ddots \\ & & & 1 & -1 \end{pmatrix},$$
