Non-linear Optimization
Seyed Hamzeh Mirzaei; Ali Ashrafi
Abstract
Purpose: One of the most effective methods for solving unconstrained optimization problems is the trust region method. The strategy for determining the trust region radius has a significant effect on the efficiency of this method. On the other hand, imposing a monotonicity condition decreases the convergence speed of the method. Therefore, improving the efficiency of this method is an important issue that has attracted researchers' attention.
Methodology: A new adaptive trust region radius is established, and the trust region method is combined with a non-monotone strategy to avoid the adverse effects of monotonicity.
Findings: A new adaptive trust region radius that converges to zero is proposed, and the trust region method is then combined with a non-monotone strategy. Running the algorithm on a set of test functions shows that the new adaptive radius, together with the non-monotone strategy, significantly improves the efficiency of the trust region method.
Originality/Value: The presented non-monotone adaptive algorithm has a second-order convergence rate. In addition, it significantly reduces computational costs compared to traditional algorithms. Moreover, the new adaptive radius avoids the ineffectiveness of the trust region close to the solution.
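The combination described above can be illustrated with a minimal sketch: a trust region iteration on a convex quadratic where the radius is tied to the gradient norm (so it shrinks to zero at the solution) and the acceptance ratio compares against the maximum of recent objective values rather than only the current one. This is an illustrative scheme in the spirit of the abstract, not the authors' exact radius formula; the test function, the scaling constant `c`, and the memory length `M` are assumptions.

```python
import numpy as np

# Illustrative convex quadratic test problem (not from the paper):
# f(x) = 0.5 x^T A x - b^T x, minimizer x* = A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def cauchy_step(g, B, radius):
    """Cauchy point: minimize the quadratic model along -g inside the region."""
    t = radius / np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg > 0:
        t = min(t, (g @ g) / gBg)
    return -t * g

x = np.array([5.0, -3.0])
memory = [f(x)]   # window of recent objective values (nonmonotone reference)
M = 5             # memory length (assumed parameter)
c = 1.0           # scaling of the adaptive radius (assumed parameter)

for _ in range(100):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    radius = c * np.linalg.norm(g)      # adaptive radius: -> 0 at a solution
    s = cauchy_step(g, A, radius)
    pred = -(g @ s + 0.5 * s @ A @ s)   # predicted reduction of the model
    f_ref = max(memory)                  # nonmonotone reference value
    rho = (f_ref - f(x + s)) / pred      # nonmonotone acceptance ratio
    if rho > 0.1:                        # accept the step
        x = x + s
        memory.append(f(x))
        memory = memory[-M:]
```

Because the reference value is a maximum over a window, occasional increases in `f` are tolerated, which is what prevents the monotonicity condition from slowing the iteration down.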
Non-linear Optimization
Zohreh Akbari
Abstract
In this paper, we present a new trust region method for unconstrained optimization problems with locally Lipschitz continuous, nonconvex functions. In this method, the current objective function value in the ratio test is replaced with the maximum of some objective function values from previous iterations. The new method has nonmonotone properties and prevents the iterates from falling into narrow valleys. Proving global convergence requires only two conditions: (1) the solution of the trust region subproblem yields a sufficient reduction in the approximate model, and (2) the approximate Hessian matrix is bounded. The convergence properties of the method are then investigated. Finally, the presented method is implemented on some nonconvex problems in the MATLAB environment, and the numerical results are compared with those of the nonsmooth trust region method.
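The modified ratio test described above is commonly written as follows (a standard nonmonotone form; the paper's exact definition may differ):

```latex
r_k = \frac{f_{l(k)} - f(x_k + d_k)}{q_k(0) - q_k(d_k)},
\qquad
f_{l(k)} = \max_{0 \le j \le m(k)} f(x_{k-j}),
```

where $q_k$ is the approximate model at iteration $k$, $d_k$ solves the trust region subproblem, and $m(k)$ bounds the number of previous iterations entering the maximum.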
Non-linear Optimization
Narges Araboljadidi
Abstract
In this paper, we present a method for characterizing the solution set of nonconvex optimization problems via their dual problems. The constrained optimization problem considered has pseudoconvex and locally Lipschitz functions, which are not necessarily convex or smooth and include a wide class of nonconvex nonsmooth functions. In the proposed method, a dual problem of mixed Wolfe and Mond-Weir type is formulated to characterize the solution set of the primal problem. First, we introduce some properties of the Lagrangian functions associated with these problems, and then we prove the characterization of their solution sets.
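For the primal problem $\min f(x)$ subject to $g_i(x) \le 0$, $i = 1, \dots, m$, a mixed Wolfe and Mond-Weir dual typically takes the following template form (a standard construction; the paper's exact index partition and conditions may differ):

```latex
\max_{(u,\lambda)} \; f(u) + \sum_{i \in I_1} \lambda_i g_i(u)
\quad \text{s.t.} \quad
0 \in \partial f(u) + \sum_{i=1}^{m} \lambda_i \, \partial g_i(u),
\quad
\lambda_i g_i(u) \ge 0 \ (i \in I_2),
\quad
\lambda \ge 0,
```

where $I_1 \cup I_2 = \{1, \dots, m\}$ partitions the constraints between the Wolfe part (in the objective) and the Mond-Weir part (as constraints), and $\partial$ denotes the Clarke subdifferential, since the functions are locally Lipschitz rather than smooth.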
Non-linear Optimization
Azhdar Soleymanpour Bakefayat
Abstract
In this paper, an innovative method is designed for solving nonlinear optimization problems with a convex objective function and constraints. In this method, we define a cost function and find the variables that minimize it. To construct a proper cost function, we use the Karush-Kuhn-Tucker (KKT) optimality conditions. We use the derivative-free Nelder-Mead optimization method to minimize the cost function. When the dimension of the problem is about 10, applications show that the efficiency of the Nelder-Mead method exceeds that of the other methods. The new method is also easier to use than similar methods. The efficiency of the new method is verified through several examples.
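The idea of turning the KKT conditions into a cost function and minimizing it with Nelder-Mead can be sketched as follows. The example problem, the penalty weights, and the starting point are illustrative assumptions, not the paper's test cases: we minimize $(x_1-1)^2 + (x_2-2)^2$ subject to $x_1 + x_2 \le 2$, whose KKT point is $x^* = (0.5, 1.5)$ with multiplier $\lambda^* = 1$.

```python
import numpy as np
from scipy.optimize import minimize

def kkt_cost(z):
    """Sum-of-squares residual of the KKT conditions for the
    illustrative problem min (x1-1)^2 + (x2-2)^2 s.t. x1 + x2 - 2 <= 0."""
    x, lam = z[:2], z[2]
    grad_f = 2.0 * (x - np.array([1.0, 2.0]))
    grad_g = np.array([1.0, 1.0])
    g = x[0] + x[1] - 2.0
    stationarity = grad_f + lam * grad_g
    return (stationarity @ stationarity   # stationarity of the Lagrangian
            + (lam * g) ** 2              # complementary slackness
            + max(0.0, g) ** 2            # primal feasibility
            + max(0.0, -lam) ** 2)        # dual feasibility (lambda >= 0)

# Derivative-free minimization of the KKT cost over (x1, x2, lambda).
res = minimize(kkt_cost, np.array([0.0, 0.0, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 5000})
```

A cost of (numerically) zero certifies that the KKT conditions hold at `res.x[:2]`; since the objective is convex and the constraint is linear, that point is the global solution.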