Projection type neural network and its convergence analysis
- Journal: Journal of Control Theory and Applications (English edition)
- File size: 197 KB
- Authors: Youmei LI, Feilong CAO
- Affiliation: Department of Computer Science, China Jiliang University
- Updated: 2020-12-06
Journal of Control Theory and Applications 3 (2006) 286 - 290

Projection type neural network and its convergence analysis

Youmei LI (1), Feilong CAO (2)
(1. Department of Computer Science, Shaoxing College of Arts and Sciences, Shaoxing Zhejiang 312000, China; 2. China Jiliang University, Hangzhou Zhejiang 310018, China)

Abstract: The projection type neural network for optimization problems has advantages over other networks: fewer parameters, a lower-dimensional search space, and a simple structure. In this paper, by properly constructing a Lyapunov energy function, we prove the global convergence of this network when it is used to optimize a continuously differentiable convex function defined on a closed convex set. The result establishes the broad applicability of the network. Several numerical examples are given to verify the efficiency of the network.

Keywords: Neural network; Convex programming; Global convergence; Equilibrium points

1 Introduction

Recurrent neural networks are preferred optimization solvers due to their swiftness and validity, and have been extensively investigated in past decades. A few neural networks exist for convex programming or its special cases [1~3]. However, their applications are limited by some drawbacks, such as the difficulty of parameter setting and the stringent conditions under which the convergence of the network can be guaranteed.

Recently in [4], a projection-type neural network was proposed to solve nonlinear optimization problems with a continuously differentiable objective function and bound constraints. This model has some advantages over others in network computation and implementation, since the number of variable parameters is smaller and the equilibrium points of the network correspond to the solutions of the optimization problem under suitable conditions.

But regrettably, the model has some limitations when applied to convex optimization problems, for its global convergence requires the objective function to be a strictly convex quadratic function and the feasible region to be bounded on two sides. In this paper, we further generalize the projection-type neural network of [4] in two aspects: 1) the feasible region is any closed nonempty convex set, and need not be a bound-constrained region; 2) the objective function is only assumed to be convex. On this basis, the model is globally convergent for convex programming. Obviously the model then covers the special cases, such as linear or convex quadratic programming, and strictly convex programming.

2 Projection-type neural network model

Consider the following convex programming problem

min{E(x), x ∈ Ω ⊂ R^n},  (1)

where Ω is a closed, nonempty convex subset of R^n, and E(x) is a continuously differentiable convex function of x ∈ Ω. Let ∇E(x) denote the gradient vector of E(x) at the point x. In this paper, we always assume that the minimizer set Ω* of problem (1) satisfies Ω* ≠ ∅.

In [4], a projection-type neural network is presented, which can be described by the following differential equation in terms of the state vector:

τ dx/dt = -x + P_Ω(x - α∇E(x)),  (2)

where α, τ are positive constants, and the activation function P_Ω is defined by

P_Ω(x) = argmin_{v ∈ Ω} ||x - v||,

where ||·|| is a vector norm. That is, P_Ω(x) is the nearest-point projection operator of R^n onto Ω. Let F(x) = -x + P_Ω(x - α∇E(x)), and denote the set of all equilibrium points of model (2) by Ω^e.

For the convex optimization problem (1), it is well known [5] that x* is an optimizer if and only if

(x - x*)^T ∇E(x*) ≥ 0, ∀x ∈ Ω.

Based on monotone variational inequality theory, this can be stated equivalently as: x* is a fixed point of the projection operator P_Ω(x - α∇E(x)), that is,

x* = P_Ω(x* - α∇E(x*)).

Thus the minimizer set of problem (1) exactly equals the equilibrium set of model (2).

Furthermore, because Ω is a closed convex set, the projection operator satisfies

(u - P_Ω(u))^T (P_Ω(u) - v) ≥ 0, ∀u ∈ R^n, v ∈ Ω,
||P_Ω(x) - P_Ω(y)||^2 ≤ (P_Ω(x) - P_Ω(y))^T (x - y) ≤ ||x - y||^2.  (3)

Letting y = x - α∇E(x) in (3) and noting that P_Ω(x) = x for any x ∈ Ω, it immediately follows that F(x)^T ∇E(x) ≤ 0. By the definition of a feasible direction, we know that F(x) = -x + P_Ω(x - α∇E(x)) is a feasible descent direction. It should be pointed out that P_Ω(x) is Lipschitz continuous (in particular, with Lipschitz constant 1).

From a mathematical point of view, the characteristics of an optimization neural network are related to the optimization techniques employed. Prevalent and conventional methods are gradient methods, which involve constructing an appropriate computational energy function for the optimization problem and then designing a neural network model which performs the gradient descent process of the energy function.

In our convergence result, the strict convexity condition on the objective function is dropped, and the feasible region is extended to any closed convex set.

Remark 1  We can see from the theorem that the trajectory of model (2) always converges to an optimal solution, whether Ω* is finite or not. Starting from different initial points, the trajectory may converge to different optimal solutions. Simulation examples 1 and 2 will illustrate this clearly.

Remark 2  Under certain simple constraints, the projection operator P_Ω can be described explicitly, which makes model (2) readily implementable in hardware, as in the following instances.

Case 1  Ω = {x ∈ R^n | a ≤ x ≤ b}. Here P_Ω(x) = (g_1(x_1), g_2(x_2), ..., g_n(x_n))^T can be described by

g_i(x_i) = a_i, if x_i < a_i;  x_i, if a_i ≤ x_i ≤ b_i;  b_i, if b_i < x_i.

Received 25 January 2005; revised 14 April 2005. This work was supported by the National Natural Science Foundation of China (No. 60473034).
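As an illustration only (not part of the paper), the dynamics of model (2) under the box constraint of Case 1 can be sketched numerically with forward-Euler integration. The quadratic objective E(x) = 0.5 x^T Q x - c^T x, the matrices Q and c, the bounds, and the parameter values α, τ, and the step size below are our own hypothetical choices.

```python
import numpy as np

def proj_box(x, a, b):
    # Case 1 projection: clip each coordinate x_i into [a_i, b_i].
    return np.clip(x, a, b)

# Hypothetical convex quadratic E(x) = 0.5 x^T Q x - c^T x, so grad E(x) = Q x - c.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([3.0, -2.0])
grad_E = lambda x: Q @ x - c

a = np.array([0.0, 0.0])   # lower bounds of the box
b = np.array([1.0, 1.0])   # upper bounds of the box

# Forward-Euler integration of tau * dx/dt = -x + P(x - alpha * grad E(x)).
tau, alpha, h = 1.0, 0.5, 0.01
x = np.array([5.0, 5.0])   # arbitrary initial state outside the box
for _ in range(20000):
    F = -x + proj_box(x - alpha * grad_E(x), a, b)
    x = x + (h / tau) * F

# At an equilibrium, x satisfies the fixed-point equation
# x = P(x - alpha * grad E(x)), i.e. x is a minimizer of problem (1).
residual = np.linalg.norm(proj_box(x - alpha * grad_E(x), a, b) - x)
```

For this separable example the constrained minimizer can be read off coordinate-wise, and the trajectory settles at it with a vanishing fixed-point residual, consistent with the equivalence between equilibria of (2) and minimizers of (1).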





