




On Local and Global Convergence of a Nonsmooth Newton-type Method for Nonlinear Semidefinite Programs

Chengjin Li and Wenyu Sun

Abstract. In this paper, we present a nonsmooth Newton-type method for the solution of nonlinear semidefinite programs. At each iteration, the method solves a system of conventional linear equations built from special elements of the B-subdifferential. The method is shown to be locally quadratically convergent under certain assumptions and globally convergent with a line search and a special B-subgradient.

Key words. Nonlinear semidefinite programs, nonsmooth Newton-type method, local convergence, global convergence.

(This work is supported by the National Natural Science Foundation of China, the Specialized Research Fund of the Doctoral Program of Higher Education of China, and the Natural Science Fund of Jiangsu Province of China. Both authors: School of Mathematics and Computer Science, Nanjing Normal University, Nanjing 210097, China; Wenyu Sun is the corresponding author.)

1. Introduction

In this paper, we consider the nonlinear semidefinite program (in brief, nonlinear SDP):

    min_{X ∈ S^n}  f(X)
    s.t.  g(X) = 0,                                               (1.1)
          X ⪰ 0,

where f : S^n → R and g : S^n → R^m are twice differentiable functions, S^n denotes the subspace of all symmetric matrices in R^{n×n}, and X ⪰ 0 means that X is a symmetric positive semidefinite matrix. If f and g are all linear (affine) functions, the nonlinear SDP problem (1.1) reduces to an ordinary linear SDP problem, which has been extensively studied during the last decade; see, e.g., [8, 37, 36, 33]. Research on the nonlinear SDP, however, is much more recent and still in a preliminary phase; for details, please refer to [2, 7, 11, 15, 5].

Using some results from the corresponding duality theory, it is not difficult to see that, under mild assumptions, the semidefinite program (1.1) has a solution if and only if the following optimality conditions (KKT conditions) hold:

    Df(X) + Σ_{r=1}^m λ_r Dg_r(X) − S = 0,
    g_r(X) = 0,  r = 1, ..., m,                                   (1.2)
    X ⪰ 0,  S ⪰ 0,  ⟨X, S⟩ = 0,

where λ = (λ_1, λ_2, ..., λ_m)^T ∈ R^m, Df(X) and Dg_r(X) are the F(réchet)-derivatives of f and g_r at the point X, respectively, and ⟨A, B⟩ := trace(AB) denotes the inner product of A, B ∈ R^{n×n}; ‖X‖_F and ‖b‖_2 denote ⟨X, X⟩^{1/2} for X ∈ S^n and (Σ_{r=1}^m b_r^2)^{1/2} for b ∈ R^m, respectively. The last equation in (1.2), i.e. ⟨X, S⟩ = 0, is equivalent to XS = 0 for all X ⪰ 0, S ⪰ 0, where the latter three 0's denote zero matrices of the proper dimension.

There are many standard and efficient methods based on the KKT conditions for the solution of semidefinite programs, such as interior-point methods [1, 9, 14, 19, 39, 32, 18], (non)smooth Newton methods, etc. By introducing the smoothed Fischer-Burmeister function, the modified minimum function and the squared smoothing function, smoothing Newton methods for the solution of linear SDP and nonlinear SDP were introduced in [11, 30], and Kanzow et al. [12] showed the locally quadratic convergence of a nonsmooth Newton-type method for linear SDP without strict complementarity.
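To make the KKT system (1.2) concrete, the following small NumPy sketch evaluates its three residuals for a given triple (X, λ, S). It is an illustration only, not part of the method developed below; the function name kkt_residual and the convention of passing Df(X), the Dg_r(X) and g(X) as precomputed arrays are assumptions made purely for the example.

```python
import numpy as np

def kkt_residual(Df, Dg_list, g_vals, X, lam, S):
    """Residual norms of the KKT system (1.2):
    stationarity     Df(X) + sum_r lam_r Dg_r(X) - S = 0,
    feasibility      g(X) = 0,
    complementarity  <X, S> = trace(XS) = 0   (with X, S assumed PSD)."""
    stat = Df + sum(l * G for l, G in zip(lam, Dg_list)) - S
    feas = np.asarray(g_vals, dtype=float)
    compl = np.trace(X @ S)
    return np.linalg.norm(stat, 'fro'), np.linalg.norm(feas), abs(compl)

# Made-up data: f(X) = <C, X> and one affine constraint g_1(X) = trace(X) - 0.5,
# so Df(X) = C and Dg_1(X) = I.  The triple below is a KKT point of this toy
# problem, hence all three residuals are (numerically) zero.
n = 3
C = np.diag([1.0, 2.0, 3.0])
X = np.diag([0.5, 0.0, 0.0]); S = np.diag([0.0, 1.0, 2.0]); lam = np.array([-1.0])
print(kkt_residual(C, [np.eye(n)], [np.trace(X) - 0.5], X, lam, S))
```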
All of the above Newton methods are based on an equation formed with the functions just introduced, which is equivalent to one of the following conditions:

    X ⪰ 0,  S ⪰ 0,  XS = 0,                                       (1.3)

or

    X ⪰ 0,  S ⪰ 0,  XS = μI,                                      (1.4)

where μ ≥ 0 is a real number and I denotes the identity matrix of appropriate size.

It is well known that for all X, S ∈ S^n the following equation is equivalent to (1.3) (please refer to [28, 38]):

    P_{S^n_+}(X − S) − X = 0    (or  P_{S^n_+}(S − X) − S = 0),

where S^n_+ is the cone of all n×n symmetric positive semidefinite matrices and P_{S^n_+}(·) is the orthogonal projection onto S^n_+. Then the KKT conditions can be reformulated as

                 [ Df(X) + Σ_{r=1}^m λ_r Dg_r(X) − S ]
    Φ(X, λ, S) = [ g(X)                              ] = 0,       (1.5)
                 [ P_{S^n_+}(X − S) − X              ]

where g(X) = (g_1(X), g_2(X), ..., g_m(X))^T. It is unknown whether a better theory or better computational results would be obtained if the last term in (1.5) were replaced by P_{S^n_+}(S − X) − S.

Unlike the Fischer-Burmeister or minimum function, the orthogonal projection P_{S^n_+}(·) cannot easily be modified so as to possess an equivalence property with (1.4), through which equation (1.5) could be solved by a smooth Newton method. The aim of this paper is to introduce a new, special nonsmooth Newton method for the solution of the nonlinear SDP (1.1) by solving the equation Φ(X, λ, S) = 0; we also establish the locally quadratic convergence of this method under different conditions and the global convergence of the method with a line search.

Notation. Let y := (X, λ, S), Δy := (ΔX, Δλ, ΔS), Q := X − S, H := ΔX − ΔS, and let y* be a KKT point of problem (1.1). By A we denote the linear operator

    A(·) := ( ⟨Dg_1(X), ·⟩, ..., ⟨Dg_m(X), ·⟩ )^T,

and we let A* be the adjoint of A, so that A*(λ) = Σ_{i=1}^m λ_i Dg_i(X), where λ = (λ_1, ..., λ_m)^T.

Assume that Q has the spectral decomposition

    Q = U Λ U^T,

where Λ is the diagonal matrix with the eigenvalues λ_1(Q) ≥ ... ≥ λ_n(Q) of Q as diagonal elements, and U is a corresponding orthogonal matrix of orthonormal eigenvectors. Then

    P_{S^n_+}(Q) = U Λ_+ U^T,                                     (1.6)

where Λ_+ is the diagonal matrix whose diagonal entries are the nonnegative parts of the respective diagonal entries of Λ. For details, please refer to [26, 10, 35]. Define three index sets of positive, zero, and negative eigenvalues of Q, respectively, as

    α := {i : λ_i(Q) > 0},   β := {i : λ_i(Q) = 0},   γ := {i : λ_i(Q) < 0}.

[...] svec(X) ∈ R^{n̄} for all X ∈ S^n (n̄ := n(n+1)/2). Furthermore, let

    A^k := ( svec(Dg_1(X^k)), svec(Dg_2(X^k)), ..., svec(Dg_m(X^k)) )^T ∈ R^{m×n̄}.

It is easy to see that

    L^k svec(ΔX) = svec( D²f(X^k)(ΔX) + Σ_{r=1}^m λ_r^k D²g_r(X^k)(ΔX) ),

and

    A^k svec(ΔX) = A^k(ΔX),   (A^k)^T Δλ = (A^k)*(Δλ).

So, the first and second equations in (2.4) can be reformulated as

    L^k svec(ΔX) + (A^k)^T Δλ − svec(ΔS) = svec(r_1^k)            (2.5)

and

    A^k svec(ΔX) = g^k.                                           (2.6)

Before reformulating the last equation in (2.4), we need to introduce an important proposition.

Proposition 2.1 [21, 28] For any V^k ∈ ∂_B P_{S^n_+}(Q^k) (∂_C P_{S^n_+}(Q^k), respectively), there exists V^k_{ββ} ∈ ∂_B P_{S^{|β_k|}_+}(0) (∂_C P_{S^{|β_k|}_+}(0), respectively) such that

                 [ Ã_{α_k α_k}             Ã_{α_k β_k}             Σ_k ∘ Ã_{α_k γ_k} ]
    V^k(A) = U^k [ (Ã_{α_k β_k})^T         V^k_{ββ}(Ã_{β_k β_k})   0                 ] (U^k)^T,     (2.7)
                 [ (Σ_k ∘ Ã_{α_k γ_k})^T   0                       0                 ]

for all A ∈ S^n, where Ã := (U^k)^T A U^k and Σ_k is the |α_k| × |γ_k| matrix with entries (Σ_k)_{ij} = λ_i(Q^k) / (λ_i(Q^k) − λ_j(Q^k)), i ∈ α_k, j ∈ γ_k. Conversely, for any V^k_{ββ} ∈ ∂_B P_{S^{|β_k|}_+}(0) (∂_C P_{S^{|β_k|}_+}(0), respectively), there exists V^k ∈ ∂_B P_{S^n_+}(Q^k) (∂_C P_{S^n_+}(Q^k), respectively) such that (2.7) holds, where ∘ denotes the Hadamard product.

Moreover, the directional derivative P'_{S^n_+}(Q^k; A) [28, 3] of P_{S^n_+} at Q^k in the direction A ∈ S^n is given by

                             [ Ã_{α_k α_k}             Ã_{α_k β_k}                   Σ_k ∘ Ã_{α_k γ_k} ]
    P'_{S^n_+}(Q^k; A) = U^k [ (Ã_{α_k β_k})^T         P_{S^{|β_k|}_+}(Ã_{β_k β_k})  0                 ] (U^k)^T.     (2.8)
                             [ (Σ_k ∘ Ã_{α_k γ_k})^T   0                             0                 ]

Just as in [16], we use D(m) to denote the finite set of m×m symmetric matrices with entries from the set {0, 1} such that the entries in each row (considered from left to right) or column (from top to bottom) form a non-increasing sequence.
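The projection formula (1.6), the index sets α, β, γ, and the residual map in (1.5) translate directly into a few lines of NumPy. The sketch below is illustrative only; the function names and the tolerance used to decide which eigenvalues count as "zero" are implementation choices, not taken from the paper.

```python
import numpy as np

def proj_psd(Q, tol=1e-12):
    """P_{S^n_+}(Q) = U diag(max(lambda_i, 0)) U^T as in (1.6), together with the
    index sets (alpha, beta, gamma) of positive / (numerically) zero / negative
    eigenvalues, ordered so that lambda_1 >= ... >= lambda_n."""
    lam, U = np.linalg.eigh(Q)                 # ascending eigenvalue order
    lam, U = lam[::-1], U[:, ::-1]             # reorder to lambda_1 >= ... >= lambda_n
    alpha = [i for i, l in enumerate(lam) if l > tol]
    beta  = [i for i, l in enumerate(lam) if abs(l) <= tol]
    gamma = [i for i, l in enumerate(lam) if l < -tol]
    P = (U * np.maximum(lam, 0.0)) @ U.T       # U diag(lam_+) U^T
    return P, (alpha, beta, gamma)

def compl_residual(X, S):
    """Third block of Phi in (1.5): P_{S^n_+}(X - S) - X."""
    P, _ = proj_psd(X - S)
    return P - X

# For the PSD pair X = diag(1, 0), S = diag(0, 2) we have XS = 0, so by the
# equivalence with (1.3) the residual vanishes.
X, S = np.diag([1.0, 0.0]), np.diag([0.0, 2.0])
print(np.linalg.norm(compl_residual(X, S)))    # ~ 0.0
```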
Define a set of linear operators

    Q(m) := { V : S^m → S^m | V(A) = B ∘ A, B ∈ D(m) }.

From [16] and Proposition 2.1 we know that Q(m) ⊆ ∂_B P_{S^m_+}(0); it follows that

    M^k := { V^k : S^n → S^n given by (2.7) with V^k_{ββ} ∈ Q(|β_k|) ⊆ ∂_B P_{S^{|β_k|}_+}(0) } ⊆ ∂_B P_{S^n_+}(Q^k).

It is obvious that the zero mapping V^0 ≡ 0 and the identity mapping V^I ≡ I from S^{|β_k|} to S^{|β_k|} belong to Q(|β_k|) ⊆ ∂_B P_{S^{|β_k|}_+}(0). Let V^k_o and V^k_i be the operator V^k in (2.7) with V^k_{ββ} replaced by V^0 and V^I respectively, and by W^k_o and W^k_i we denote the elements of ∂_B Φ(y^k) whose V^k ∈ ∂_B P_{S^n_+}(Q^k) is V^k_o and V^k_i respectively.

From the definition of the B-subdifferential of the orthogonal projection P_{S^n_+}, we have that V^k(X) is a symmetric matrix for all V^k ∈ ∂_B P_{S^n_+}(Q), Q ∈ S^n and X ∈ S^n. So we can define T^k ∈ R^{n̄×n̄} as follows:

    T^k := ( svec(V^{11}), svec(V^{21}), ..., svec(V^{n1}), svec(V^{22}), svec(V^{32}), ..., svec(V^{n2}), ..., svec(V^{nn}) )^T,     (2.9)

where V^{ij} := V^k(Ĩ^{ij}) for a certain V^k ∈ M^k.

Under the above assumptions, the third equation in (2.4) can be reformulated as

    T^k svec(H) − svec(ΔX) = T^k svec(ΔX) − T^k svec(ΔS) − svec(ΔX) = svec(r_2^k).     (2.10)

From equations (2.5), (2.6) and (2.10), the equation (2.4) can be reformulated as

    [ L^k      (A^k)^T   −I   ] [ svec(ΔX) ]   [ svec(r_1^k) ]
    [ A^k       0         0   ] [ Δλ       ] = [ g^k         ]                         (2.11)
    [ T^k − I   0        −T^k ] [ svec(ΔS) ]   [ svec(r_2^k) ]

where the coefficient matrix, belonging to R^{(2n̄+m)×(2n̄+m)}, can be regarded as a linear operator acting on the vector variables (svec(ΔX), Δλ, svec(ΔS)), so equation (2.11) can be solved by several efficient methods for linear equations.

Remark 2.1 a) In fact, we can deduce the last equation in (2.11) directly by using the tensor computation of [27, 16], i.e., the 4-tensor T̃ with entries

    T̃_{t_1 t_2 t_3 t_4} = Σ_{i_1=1}^{|α|} Σ_{i_2=1}^{|α|+|β|} u_{t_1 i_1} u_{t_2 i_2} u_{t_3 i_2} u_{t_4 i_1} + ⋯
                          + Σ_{(i_1,i_2)∈ζ} u_{t_1 i_1} u_{t_2 i_2} u_{t_3 i_2} u_{t_4 i_1}
                          + Σ_{i_1=1}^{|α|} Σ_{i_2=|α|+|β|+1}^{n} σ_{i_1 i_2} ( u_{t_1 i_1} u_{t_2 i_2} u_{t_3 i_2} u_{t_4 i_1} + u_{t_1 i_2} u_{t_2 i_1} u_{t_3 i_1} u_{t_4 i_2} ),

where we omit the superscript k, u_{ij} is an element of the orthogonal matrix U, and ζ denotes the index set of 1-entries of a certain matrix in D(|β|). From the above formula we can also deduce the symmetry of V(Q).

b) If we replace the variables in (2.11) by the vector (nvec(ΔX), Δλ, nvec(ΔS)), where nvec(X) := (x_{11}, ..., x_{n1}, x_{22}, ..., x_{n2}, ..., x_{nn})^T ∈ R^{n̄}, then equation (2.11) still holds after adjusting the coefficient matrix and the right-hand side of (2.11). Furthermore, all theorems in this paper still hold in this case.

By N^k we denote the set of matrices T^k constructed from (2.9). We are now in a position to state our algorithm in the following form.

Algorithm 2.1
(S.0) Choose y^0 ∈ S^n × R^m × S^n and ε ≥ 0, and set k := 0.
(S.1) If ‖Φ(y^k)‖_N ≤ ε, stop. (Here ‖(X, λ, S)‖_N := (‖X‖_F^2 + ‖λ‖_2^2 + ‖S‖_F^2)^{1/2}.)
(S.2) Choose T^k ∈ N^k, and find Δy by solving equation (2.11).
(S.3) Set y^{k+1} := y^k + Δy and k := k + 1, and go to (S.1).

It is obvious that, by (2.11), we obtain svec(ΔX), svec(ΔS) and Δλ. To get the step Δy, it is enough to apply the inverse operation smat(svec(ΔX)) = ΔX and smat(svec(ΔS)) = ΔS. This ensures that the matrices ΔX and ΔS are symmetric, and so are X^k and S^k for all k = 0, 1, ....
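The sketch below implements one pass of steps (S.2)-(S.3) for given data: it assembles the coefficient matrix of (2.11) from L^k, A^k, T^k (assumed to be already available as ordinary arrays), solves the linear system, and recovers ΔX and ΔS by smat. The svec/smat pair used here is a plain stacking of the lower triangle without off-diagonal scaling, and the right-hand side simply follows (2.11) as written above; both are assumptions made for the sketch, and the construction of T^k from (2.9) via a chosen V^k ∈ M^k is not shown.

```python
import numpy as np

def svec(X):
    """Stack the columns of the lower triangle of a symmetric matrix
    (a simplified svec; no sqrt(2) scaling of off-diagonal entries)."""
    n = X.shape[0]
    return np.concatenate([X[i:, i] for i in range(n)])

def smat(v, n):
    """Inverse of the svec above: rebuild the symmetric matrix."""
    X = np.zeros((n, n)); k = 0
    for i in range(n):
        X[i:, i] = v[k:k + n - i]
        X[i, i:] = v[k:k + n - i]
        k += n - i
    return X

def newton_step(Lk, Ak, Tk, r1, gk, r2, n):
    """Solve (2.11) for (svec(dX), dlam, svec(dS)) and return (dX, dlam, dS).

    Lk, Tk : nbar x nbar arrays,  Ak : m x nbar array (nbar = n(n+1)/2),
    r1, r2 : symmetric n x n matrices,  gk : vector of length m."""
    nbar, m = Lk.shape[0], Ak.shape[0]
    I = np.eye(nbar)
    M = np.block([[Lk,      Ak.T,                -I],
                  [Ak,      np.zeros((m, m)),    np.zeros((m, nbar))],
                  [Tk - I,  np.zeros((nbar, m)), -Tk]])
    rhs = np.concatenate([svec(r1), np.asarray(gk, dtype=float), svec(r2)])
    sol = np.linalg.solve(M, rhs)
    dX = smat(sol[:nbar], n)
    dlam = sol[nbar:nbar + m]
    dS = smat(sol[nbar + m:], n)
    return dX, dlam, dS
```

In a full implementation of Algorithm 2.1, this step would be wrapped in the loop (S.1)-(S.3), with T^k rebuilt at every iterate from the spectral decomposition of Q^k = X^k − S^k.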
3. Local Convergence Analysis

In this section, we deduce the locally quadratic convergence of Algorithm 2.1 in different cases. First, under the strict complementarity condition, which is a rather strong condition, we give a locally quadratic convergence theorem (Theorem 3.2). Then we discuss the case without the strict complementarity condition. To this end, we generalize an equivalence result for linear SDP in [3] to nonlinear SDP. With the help of this equivalence result we establish the locally quadratic convergence of Algorithm 2.1 without the strict complementarity condition (Theorem 3.3 and Corollary 3.3).

Let y* be a solution of equation (1.5) and let L*, A*, V*_o, V*_i, W*_o, W*_i, N* and T* be defined analogously to L^k, A^k, V^k_o, V^k_i, W^k_o, W^k_i, N^k and T^k of the preceding sections, respectively. First, let us recall a convergence theorem from [11, 22], in which the invertibility of all elements of ∂_C Φ(y*), a conventional condition for the locally quadratic convergence of nonsmooth Newton methods, is replaced by the weaker condition that all elements of ∂_B Φ(y*) are invertible.

Theorem 3.1 [11, 22] Let y* be a solution of the optimality conditions (1.5), and assume that all elements W ∈ ∂_B Φ(y*) are invertible. Then Algorithm 2.1 is locally quadratically convergent.

Because of the strong semismoothness of P_{S^n_+}(·), Theorem 3.1 also holds in this paper. We use T*_0 and T*_1 to denote the matrices constructed from (2.9) with the linear operators V*_o and V*_i, respectively, when |β*| ≠ 0. If the optimal solution y* satisfies the strict complementarity condition, i.e., X* + S* ≻ 0, then Q* must be nonsingular. So Φ is well-defined and differentiable at the solution y*, and V*(ΔX) = V(ΔX) for all ΔX ∈ S^n, where V = DP_{S^n_+}(Q*). In the following, we prove the theorem.

Theorem 3.2 Suppose that strict complementarity holds at y* and that the matrix

    [ 0           A*               ]
    [ T*(A*)^T    T*(I − L*) − I   ]                              (3.1)

is nonsingular. Then Algorithm 2.1 is locally quadratically convergent.

Proof. From the remark above and the strict complementarity, we know that the following (2n̄+m)×(2n̄+m) matrix

    [ L*       (A*)^T   −I   ]
    [ A*        0        0   ]                                    (3.2)
    [ T* − I    0       −T*  ]

is well-defined. By means of Theorem 3.1, it is enough to prove the nonsingularity of the matrix (3.2). Obviously, by elementary transformations, we have

    [ L*       (A*)^T   −I   ]      [ 0                 (A*)^T   −I   ]
    [ A*        0        0   ]  →   [ A*                 0        0   ]   ⋯
    [ T* − I    0       −T*  ]      [ T*(I − L*) − I     0       −T*  ]
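As a small numerical illustration of the remark preceding Theorem 3.2 (not taken from the paper): when Q* is nonsingular, as it is under strict complementarity, the projection P_{S^n_+} is differentiable at Q*, so its symmetric finite differences behave like a linear map of the direction. The script below checks this additivity on a made-up nonsingular Q; all helper names are assumptions for the example.

```python
import numpy as np

def proj(M):
    """P_{S^n_+}(M) via the eigenvalue decomposition, as in (1.6)."""
    lam, U = np.linalg.eigh(M)
    return (U * np.maximum(lam, 0.0)) @ U.T

def sym(n, seed):
    """A random symmetric test direction."""
    B = np.random.default_rng(seed).standard_normal((n, n))
    return (B + B.T) / 2

def fin_diff(Q, A, t=1e-6):
    """Symmetric finite difference of the projection at Q in direction A."""
    return (proj(Q + t * A) - proj(Q - t * A)) / (2 * t)

# Q nonsingular (no zero eigenvalues): the situation of strict complementarity.
Q = np.diag([2.0, 1.0, -1.5, -3.0])
A1, A2 = sym(4, 0), sym(4, 1)

# Differentiability at Q means the finite differences are (nearly) additive:
gap = np.linalg.norm(fin_diff(Q, A1 + A2) - fin_diff(Q, A1) - fin_diff(Q, A2))
print(gap)   # close to zero; at a singular Q this additivity can fail
```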