[1] Andrei N. An adaptive conjugate gradient algorithm for large-scale unconstrained optimization. J Comput Appl Math, 2016, 292:83-91
[2] Yuan G. Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems. Optim Lett, 2009, 3:11-21
[3] Yuan G, Lu X. A modified PRP conjugate gradient method. Ann Oper Res, 2009, 166:73-90
[4] Yuan G, Lu X, Wei Z. A conjugate gradient method with descent direction for unconstrained optimization. J Comput Appl Math, 2009, 233:519-530
[5] Yuan G, Wei Z, Zhao Q. A modified Polak-Ribière-Polyak conjugate gradient algorithm for large-scale optimization problems. IIE Trans, 2014, 46:397-413
[6] Yuan G, Meng Z, Li Y. A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations. J Optim Theory Appl, 2016, 168:129-152
[7] Yuan G, Zhang M. A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations. J Comput Appl Math, 2015, 286:186-195
[8] Yuan G, Zhang M. A modified Hestenes-Stiefel conjugate gradient algorithm for large-scale optimization. Numer Func Anal Opt, 2013, 34:914-937
[9] Zhang H, Ni Q. A new regularized quasi-Newton algorithm for unconstrained optimization. Appl Math Comput, 2015, 259:460-469
[10] Lu X, Ni Q. A quasi-Newton trust region method with a new conic model for the unconstrained optimization. Appl Math Comput, 2008, 204:373-384
[11] Wei Z, Li G, Qi L. New quasi-Newton methods for unconstrained optimization problems. Appl Math Comput, 2006, 175:1156-1188
[12] Yuan G, Wei Z. The superlinear convergence analysis of a nonmonotone BFGS algorithm on convex objective functions. Acta Math Sci, 2008, 24B:35-42
[13] Yuan G, Wei Z, Lu X. Global convergence of the BFGS method and the PRP method for general functions under a modified weak Wolfe-Powell line search. Appl Math Model, 2017, 47:811-825
[14] Yuan G, Sheng Z, Wang B, et al. The global convergence of a modified BFGS method for nonconvex functions. J Comput Appl Math, 2018, 327:274-294
[15] Fan J, Yuan Y. A new trust region algorithm with trust region radius converging to zero//Li D. Proceedings of the 5th International Conference on Optimization:Techniques and Applications (December 2001, Hong Kong), 2001:786-794
[16] Hei L. A self-adaptive trust region algorithm. J Comput Math, 2003, 21:229-236
[17] Shi Z, Guo J. A new trust region method for unconstrained optimization. J Comput Appl Math, 2008, 213:509-520
[18] Shi Z, Wang S. Nonmonotone adaptive trust region method. Eur J Oper Res, 2011, 208:28-36
[19] Zhang X, Zhang J, Liao L. An adaptive trust region method and its convergence. Sci China, 2002, 45A:620-631
[20] Zhou Q, Zhang Y, Xu F, et al. An improved trust region method for unconstrained optimization. Sci China Math, 2013, 56:425-434
[21] Ahookhosh M, Amini K. A nonmonotone trust region method with adaptive radius for unconstrained optimization problems. Comput Math Appl, 2010, 60:411-422
[22] Amini K, Ahookhosh M. A hybrid of adjustable trust-region and nonmonotone algorithms for unconstrained optimization. Appl Math Model, 2014, 38:2601-2612
[23] Sang Z, Sun Q. A new non-monotone self-adaptive trust region method for unconstrained optimization. J Appl Math Comput, 2011, 35:53-62
[24] Cui Z, Wu B. A new modified nonmonotone adaptive trust region method for unconstrained optimization. Comput Optim Appl, 2012, 53:795-806
[25] Zhang X. Neural network models for general nonlinear programming//Neural Networks in Optimization. Dordrecht/Boston/London:Kluwer Academic Publishers, 2000
[26] Li G. A trust region method with automatic determination of the trust region radius. Chinese J Eng Math (in Chinese), 2006, 23:843-848
[27] Yuan G, Wei Z. A trust region algorithm with conjugate gradient technique for optimization problems. Numer Func Anal Opt, 2011, 32:212-232
[28] Yuan Y. Recent advances in trust region algorithms. Math Program, 2015, 151:249-281
[29] Powell M J D. A new algorithm for unconstrained optimization//Rosen J B, Mangasarian O L, Ritter K. Nonlinear Programming. New York:Academic Press, 1970:31-65
[30] Schnabel R B, Eskow E. A new modified Cholesky factorization. SIAM J Sci Comput, 1990, 11:1136-1158
[31] Zhang J, Wang Y. A new trust region method for nonlinear equations. Math Method Oper Res, 2003, 58:283-298
[32] Yuan G, Wei Z, Lu X. A BFGS trust-region method for nonlinear equations. Computing, 2011, 92:317-333
[33] Fan J, Pan J. An improved trust region algorithm for nonlinear equations. Comput Optim Appl, 2011, 48:59-70
[34] Yuan G, Wei Z, Li G. A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs. J Comput Appl Math, 2014, 255:86-96
[35] Yuan G, Wei Z. Convergence analysis of a modified BFGS method on convex minimizations. Comput Optim Appl, 2010,47:237-255
[36] Xiao Y, Wei Z, Wang Z. A limited memory BFGS-type method for large-scale unconstrained optimization. Comput Math Appl, 2008, 56:1001-1009
[37] Yuan Y, Sun W. Optimization Theory and Methods. Beijing:Science Press, 1997
[38] Powell M J D. Convergence properties of a class of minimization algorithms//Mangasarian O L, Meyer R R, Robinson S M. Nonlinear Programming. Vol 2. New York:Academic Press, 1975
[39] Steihaug T. The conjugate gradient method and trust regions in large scale optimization. SIAM J Numer Anal, 1983, 20:626-637
[40] Moré J J, Garbow B S, Hillstrom K H. Testing unconstrained optimization software. ACM Trans Math Software, 1981, 7:17-41
[41] Dolan E D, Moré J J. Benchmarking optimization software with performance profiles. Math Program, 2002, 91:201-213
[42] Andrei N. An unconstrained optimization test function collection. Advan Model Optim, 2008, 10:147-161