A Two-Layer Recurrent Neural Network for Nonsmooth Convex Optimization Problems. Qin, S. & Xue, X. IEEE Transactions on Neural Networks and Learning Systems, 26(6):1149–1160, June, 2015.
@article{qin_two-layer_2015,
	title = {A {Two}-{Layer} {Recurrent} {Neural} {Network} for {Nonsmooth} {Convex} {Optimization} {Problems}},
	volume = {26},
	issn = {2162-237X, 2162-2388},
	url = {https://ieeexplore.ieee.org/document/6856218/},
	doi = {10.1109/TNNLS.2014.2334364},
	abstract = {In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush–Kuhn–Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.},
	language = {en},
	number = {6},
	urldate = {2022-01-20},
	journal = {IEEE Transactions on Neural Networks and Learning Systems},
	author = {Qin, Sitian and Xue, Xiaoping},
	month = jun,
	year = {2015},
	pages = {1149--1160},
}