
Algorithms for solving optimization problems arising from deep neural net models: nonsmooth problems (2018-06-30)
Machine learning models incorporating multiple layered learning networks have been shown to provide effective models for various classification problems. The resulting optimization problem, that of finding the vector minimizing the empirical risk, is, however, highly nonconvex. This alone presents a challenge to the application and development of appropriate optimization algorithms for solving the problem. In addition, there are a number of interesting problems for which the objective function is nonsmooth and nonseparable. In this paper, we summarize the primary challenges involved and the state of the art, and present numerical results on an interesting and representative class of problems.
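
To make the setting concrete, the sketch below minimizes a nonsmooth empirical risk with a plain subgradient method; it uses a linear hinge-loss classifier rather than a deep network so that it stays short, and all data and step sizes are synthetic assumptions rather than anything from the paper.

```python
import numpy as np

# Illustrative nonsmooth empirical risk: hinge loss of a linear classifier.
# A subgradient step is used because the hinge loss is not differentiable
# at the kink where the margin equals one.
def hinge_risk_subgradient(w, X, y):
    margins = y * (X @ w)
    risk = np.mean(np.maximum(0.0, 1.0 - margins))
    # Subgradient: -y_i * x_i wherever the margin is below one, zero otherwise.
    active = (margins < 1.0).astype(float)
    subgrad = -(X * (y * active)[:, None]).mean(axis=0)
    return risk, subgrad

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X @ rng.normal(size=10) + 0.1 * rng.normal(size=200))

w = np.zeros(10)
for t in range(1, 201):
    risk, g = hinge_risk_subgradient(w, X, y)
    w -= (1.0 / np.sqrt(t)) * g      # diminishing step size, standard for subgradient methods
print("final empirical risk:", hinge_risk_subgradient(w, X, y)[0])
```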
 
Algorithms for solving optimization problems arising from deep neural net models: smooth problems (2018-06-30)
Machine learning models incorporating multiple layered learning networks have been shown to provide effective models for various classification problems. The resulting optimization problem, that of finding the vector minimizing the empirical risk, is, however, highly nonlinear. This presents a challenge to the application and development of appropriate optimization algorithms for solving the problem. In this paper, we summarize the primary challenges involved and present the case for a Newton-based method incorporating directions of negative curvature, including promising numerical results on data arising from security anomaly detection.
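
A minimal sketch of the kind of Newton-type step with negative-curvature directions the abstract refers to, applied to a toy two-dimensional nonconvex function; the objective, regularization, and backtracking line search are illustrative assumptions of this sketch, not the authors' method.

```python
import numpy as np

# Toy nonconvex objective with analytic gradient and Hessian.
def f(x):    return 0.25 * x[0]**4 - x[0]**2 + 0.5 * x[1]**2 + x[0] * x[1]
def grad(x): return np.array([x[0]**3 - 2 * x[0] + x[1], x[1] + x[0]])
def hess(x): return np.array([[3 * x[0]**2 - 2, 1.0], [1.0, 1.0]])

x = np.array([0.1, 0.0])
for _ in range(50):
    g, H = grad(x), hess(x)
    eigval, eigvec = np.linalg.eigh(H)
    if eigval[0] < -1e-8:
        # Negative curvature detected: move along the corresponding eigenvector,
        # with the sign chosen so that the step is a descent direction.
        d = eigvec[:, 0]
        d = -np.sign(g @ d) * d if abs(g @ d) > 1e-12 else d
    else:
        # Locally convex: take a (regularized) Newton direction.
        d = -np.linalg.solve(H + 1e-8 * np.eye(2), g)
    # Simple backtracking line search on the step length.
    t = 1.0
    while f(x + t * d) > f(x) - 1e-4 * t * abs(g @ d) and t > 1e-8:
        t *= 0.5
    x = x + t * d
print("approximate stationary point:", x, "f =", f(x))
```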
 
A Survey of Optimization Methods from a Machine Learning Perspective (2019-10-23)
Machine learning has developed rapidly, producing many theoretical breakthroughs and finding wide application in various fields. Optimization, as an important part of machine learning, has attracted much attention from researchers. With the exponential growth of data volumes and increasing model complexity, optimization methods in machine learning face more and more challenges. A great deal of work on solving optimization problems or improving optimization methods in machine learning has appeared in succession. A systematic retrospective and summary of optimization methods from the perspective of machine learning is therefore of great significance, as it can offer guidance for the development of both optimization and machine learning research. In this paper, we first describe the optimization problems in machine learning. Then, we introduce the principles of and progress on commonly used optimization methods. Next, we summarize the applications and developments of optimization methods in some popular machine learning fields. Finally, we discuss some challenges and open problems for optimization in machine learning.
 
Randomized Smoothing SVRG for Large-scale Nonsmooth Convex Optimization (2018-05-11)
In this paper, we consider the problem of minimizing the average of a large number of nonsmooth convex functions. Such problems often arise in typical machine learning settings as empirical risk minimization, but are computationally very challenging. We develop and analyze a new algorithm that achieves a robust linear convergence rate, and both its time complexity and gradient complexity are superior to state-of-the-art nonsmooth algorithms and subgradient-based schemes. Moreover, our algorithm requires neither extra error-bound conditions on the objective function nor the common strong-convexity condition. We show that our algorithm has wide applications in optimization and machine learning problems, and demonstrate experimentally that it performs well on a large-scale ranking problem.
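
As one illustration of the two ingredients in the title, the sketch below combines a randomized-smoothing (sub)gradient estimate of a nonsmooth absolute-loss objective with an SVRG-style variance-reduced update; the loss, smoothing radius, step size, and epoch length are assumptions of this sketch, not the authors' algorithm or parameter choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def smoothed_grad(i, w, u, mu):
    # Subgradient of the (nonsmooth) absolute loss evaluated at a Gaussian-
    # perturbed point; averaging such evaluations estimates the gradient of
    # the randomized-smoothing approximation f_{i,mu}(w) = E_u[f_i(w + mu*u)].
    z = w + mu * u
    return np.sign(X[i] @ z - y[i]) * X[i]

mu, eta, epochs, m = 0.01, 0.05, 10, n
w = np.zeros(d)
for _ in range(epochs):
    w_snap = w.copy()
    U = rng.normal(size=(n, d))   # one smoothing sample per component function
    full_grad = np.mean([smoothed_grad(i, w_snap, U[i], mu) for i in range(n)], axis=0)
    for _ in range(m):
        i = rng.integers(n)
        u = rng.normal(size=d)    # shared smoothing noise -> correlated control variate
        v = smoothed_grad(i, w, u, mu) - smoothed_grad(i, w_snap, u, mu) + full_grad
        w -= eta * v
print("mean absolute residual:", np.mean(np.abs(X @ w - y)))
```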
 
Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances (2020-08-18)
The min-max optimization problem, also known as the saddle point problem, is a classical optimization problem that is also studied in the context of zero-sum games. Given a class of objective functions, the goal is to find a value for the argument that leads to a small objective value even for the worst-case function in the given class. Min-max optimization problems have recently become very popular in a wide range of signal and data processing applications such as fair beamforming, training generative adversarial networks (GANs), and robust machine learning, to name just a few. The overarching goal of this article is to provide a survey of recent advances for an important subclass of min-max problems in which the minimization and maximization problems can be non-convex and/or non-concave. In particular, we first present a number of applications to showcase the importance of such min-max problems; we then discuss key theoretical challenges and provide a selective review of some exciting recent theoretical and algorithmic advances in tackling non-convex min-max problems. Finally, we point out open questions and future research directions.
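
As a minimal illustration of the min-max setting, the sketch below runs simultaneous gradient descent-ascent (GDA) on a toy convex-concave saddle-point problem; for the non-convex and/or non-concave problems surveyed in the article, plain GDA can cycle or diverge, which is precisely what motivates the more sophisticated algorithms discussed there.

```python
import numpy as np

# Simultaneous gradient descent-ascent (GDA) on the toy saddle-point problem
#   min_x max_y  f(x, y) = 0.5*x^2 + 2*x*y - 0.5*y^2,
# whose unique saddle point is (0, 0).
def grad_x(x, y): return x + 2 * y
def grad_y(x, y): return 2 * x - y

x, y, eta = 1.0, -1.0, 0.05
for _ in range(500):
    gx, gy = grad_x(x, y), grad_y(x, y)
    x -= eta * gx          # descent on the minimization variable
    y += eta * gy          # ascent on the maximization variable
print(f"approximate saddle point: x={x:.4f}, y={y:.4f}")
```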
 
Scalarizing Functions in Bayesian Multiobjective Optimization (2019-04-11)
Scalarizing functions have been widely used to convert a multiobjective optimization problem into a single-objective optimization problem. However, their use in solving (computationally) expensive multi- and many-objective optimization problems within Bayesian multiobjective optimization is scarce. Scalarizing functions can play a crucial role in the quality of the optimization and the number of evaluations it requires. In this article, we study and review 15 different scalarizing functions in the framework of Bayesian multiobjective optimization and build Gaussian process models (as surrogates, metamodels, or emulators) on them. We use expected improvement as the infill criterion (or acquisition function) to update the models. In particular, we compare the scalarizing functions and analyze their performance on several benchmark problems with different numbers of objectives. The review and experiments provide useful insights for selecting a scalarizing function when using a Bayesian multiobjective optimization method.
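
For concreteness, here is a small sketch of two common scalarizing functions, the weighted sum and the augmented Tchebycheff function; the weights, reference point, and rho value are illustrative assumptions. In a Bayesian multiobjective loop, the Gaussian process surrogate would be fit to such scalarized values and expected improvement computed on them, as the abstract describes.

```python
import numpy as np

# Two common scalarizing functions that turn an objective vector f(x) into a
# single value to be modeled by a Gaussian process surrogate.
def weighted_sum(f, w):
    return float(np.dot(w, f))

def augmented_tchebycheff(f, w, z_ref, rho=0.05):
    # max_i w_i * (f_i - z_i^ref)  +  rho * sum_i w_i * (f_i - z_i^ref)
    diff = w * (np.asarray(f) - np.asarray(z_ref))
    return float(np.max(diff) + rho * np.sum(diff))

f_values = np.array([0.8, 0.3])   # objective vector of one candidate solution
weights  = np.array([0.5, 0.5])
z_ref    = np.array([0.0, 0.0])   # ideal (reference) point

print(weighted_sum(f_values, weights))
print(augmented_tchebycheff(f_values, weights, z_ref))
```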
 
Randomized Stochastic Variance-Reduced Methods for Stochastic Bilevel Optimization (2021-05-05)
In this paper, we consider non-convex stochastic bilevel optimization (SBO) problems, which have many applications in machine learning. Although numerous studies have proposed stochastic algorithms for solving these problems, they are limited in two respects: (i) their sample complexities are high and do not match the state-of-the-art result for non-convex stochastic optimization; (ii) their algorithms are tailored to problems with only one lower-level problem. When there are many lower-level problems, it can be prohibitive to process all of them at each iteration. To address these limitations, this paper proposes fast randomized stochastic algorithms for non-convex SBO problems. First, we present a stochastic method for non-convex SBO with a single lower-level problem and establish its sample complexity of $O(1/\epsilon^3)$ for finding an $\epsilon$-stationary point under appropriate conditions, matching the lower bound for stochastic smooth non-convex optimization. Second, we present a randomized stochastic method for non-convex SBO with $m>1$ lower-level problems that processes only one lower-level problem at each iteration, and establish a sample complexity no worse than $O(m/\epsilon^3)$, which can improve on simply processing all $m$ lower-level problems at each iteration. To the best of our knowledge, this is the first work to consider SBO with many lower-level problems and to establish state-of-the-art sample complexity.
 
Non-convex Optimization for Machine Learning (2017-12-21)
A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and nevertheless presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these heuristics are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. It leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.
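
One of the simple heuristics mentioned above, projected gradient descent, is sketched below for sparse linear regression, where projection onto the non-convex constraint ||w||_0 <= k reduces to keeping the k largest-magnitude entries (iterative hard thresholding); the data and step size are synthetic assumptions for illustration.

```python
import numpy as np

# Projected gradient descent onto the sparsity constraint ||w||_0 <= k
# (iterative hard thresholding) for sparse linear regression.
def hard_threshold(w, k):
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]   # keep the k largest-magnitude entries
    out[idx] = w[idx]
    return out

rng = np.random.default_rng(0)
n, d, k = 100, 50, 5
A = rng.normal(size=(n, d)) / np.sqrt(n)
w_star = np.zeros(d); w_star[:k] = rng.normal(size=k)
b = A @ w_star

w, eta = np.zeros(d), 0.3
for _ in range(200):
    grad = A.T @ (A @ w - b)                 # gradient of the smooth least-squares loss
    w = hard_threshold(w - eta * grad, k)    # projection = keep the top-k entries
print("recovery error:", np.linalg.norm(w - w_star))
```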
 
Black Box Algorithm Selection by Convolutional Neural Network (2019-12-22)
Although a large number of optimization algorithms have been proposed for black-box optimization problems, the no-free-lunch theorems tell us that no algorithm can beat all others on every type of problem. Different types of optimization problems need different optimization algorithms. To deal with this issue, researchers have proposed algorithm selection, which suggests the best optimization algorithm from an algorithm set for a given unknown optimization problem. Usually, algorithm selection is treated as a classification or regression task. Deep learning, which has been shown to perform well on various classification and regression tasks, is applied to the algorithm selection problem in this paper. Our deep learning architecture is based on a convolutional neural network and follows the main architecture of the Visual Geometry Group (VGG) network, which has been applied to many different types of 2-D data. Moreover, we propose a novel method to extract landscape information from the optimization problems and save the information as 2-D images. In the experimental section, we conduct three experiments to investigate the classification and optimization capability of our approach on the BBOB functions. The results indicate that our new approach can effectively solve the algorithm selection problem.
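
A rough, purely illustrative sketch of how landscape samples of a black-box function might be turned into a 2-D image that a CNN classifier could consume; the projection onto two coordinates and the binning scheme are assumptions of this sketch, not the extraction method proposed in the paper.

```python
import numpy as np

def landscape_image(func, dim, bounds, n_samples=2000, res=32, rng=None):
    # Sample the search space, then bin the normalized fitness values into a
    # res x res grid over the first two coordinates to form an "image".
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_samples, dim))
    y = np.array([func(x) for x in X])
    img = np.zeros((res, res)); counts = np.zeros((res, res))
    ij = np.clip(((X[:, :2] - lo) / (hi - lo) * res).astype(int), 0, res - 1)
    for (i, j), v in zip(ij, (y - y.min()) / (np.ptp(y) + 1e-12)):
        img[i, j] += v; counts[i, j] += 1
    return img / np.maximum(counts, 1)   # mean normalized fitness per cell

sphere = lambda x: float(np.sum(x ** 2))
image = landscape_image(sphere, dim=5, bounds=(-5.0, 5.0))
print(image.shape)   # a (32, 32) array that could be stacked into a CNN training set
```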
 
Joint Continuous and Discrete Model Selection via Submodularity (2021-02-17)
In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem. In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult nonconvex optimization problems. In this paper, we relate the model selection problem with structure-promoting regularizers to submodular function minimization defined with continuous and discrete arguments. In particular, we leverage submodularity theory to identify a class of these problems that can be solved exactly and efficiently with an agnostic combination of discrete and continuous optimization routines. We show how simple continuous or discrete constraints can also be handled for certain problem classes, motivated by robust optimization. Finally, we numerically validate our theoretical results with several proof-of-concept examples, comparing against state-of-the-art algorithms.
 
Funneled Bayesian Optimization for Design, Tuning and Control of Autonomous Systems (2019-02-05)
Bayesian optimization has become a fundamental global optimization algorithm in many problems where sample efficiency is of paramount importance. Recently, a large number of new applications have been proposed in fields such as robotics, machine learning, experimental design, and simulation. In this paper, we focus on several problems that appear in robotics and autonomous systems: algorithm tuning, automatic control, and intelligent design. All of those problems can be mapped to global optimization problems; however, they become hard optimization problems. Bayesian optimization internally uses a probabilistic surrogate model (e.g., a Gaussian process) to learn from the process and reduce the number of samples required. In order to generalize to unknown functions in a black-box fashion, the common assumption is that the underlying function can be modeled with a stationary process. Nonstationary Gaussian process regression does not generalize easily and typically requires prior knowledge of the function. Some works have designed techniques to generalize Bayesian optimization to nonstationary functions indirectly, but they rely on techniques originally designed for regression, where the objective is to improve the quality of the surrogate model everywhere. Instead, optimization should focus on improving the surrogate model near the optimum. In this paper, we present a novel kernel function specially designed for Bayesian optimization that allows nonstationary behavior of the surrogate model in an adaptive local region. In our experiments, we found that this new kernel results in improved local search (exploitation) without penalizing the global search (exploration). We provide results on well-known benchmarks and real applications. The new method outperforms the state of the art in Bayesian optimization on both stationary and nonstationary problems.
 
Smart Predict-and-Optimize for Hard Combinatorial Optimization Problems (2019-11-22)
Combinatorial optimization assumes that all parameters of the optimization problem, e.g., the weights in the objective function, are fixed. Often, these weights are mere estimates, and machine learning techniques are increasingly used for their estimation. Recently, Smart Predict and Optimize (SPO) has been proposed for problems with a linear objective function over the predictions, more specifically linear programming problems. It takes the regret of the predictions on the linear problem into account by repeatedly solving it during learning. We investigate the use of SPO to solve more realistic discrete optimization problems. The main challenge is the repeated solving of the optimization problem. To this end, we investigate ways to relax the problem as well as to warm-start the learning and the solving. Our results show that even for discrete problems it often suffices to train by solving the relaxation in the SPO loss. Furthermore, this approach outperforms, for most instances, the state-of-the-art approach of Wilder, Dilkina, and Tambe. We experiment with weighted knapsack problems as well as complex scheduling problems, and show for the first time that a predict-and-optimize approach can successfully be used on large-scale combinatorial optimization problems.
 
Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications (2020-01-19)
The problem of finding the sparsest vector (direction) in a low-dimensional subspace can be considered a homogeneous variant of the sparse recovery problem, and it finds applications in robust subspace recovery, dictionary learning, sparse blind deconvolution, and many other problems in signal processing and machine learning. However, in contrast to the classical sparse recovery problem, the most natural formulation for finding the sparsest vector in a subspace is usually nonconvex. In this paper, we overview recent advances in global nonconvex optimization theory for solving this problem, ranging from geometric analysis of its optimization landscapes, to efficient optimization algorithms for solving the associated nonconvex optimization problem, to applications in machine intelligence, representation learning, and imaging sciences. Finally, we conclude this review by pointing out several interesting open problems for future research.
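
The natural nonconvex formulation mentioned above, min ||Qz||_1 subject to ||z||_2 = 1 over an orthonormal basis Q of the subspace, can be attacked with a simple projected subgradient method on the sphere, as in the sketch below; the planted sparse direction and step-size schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 5
# Planted model: the subspace contains one sparse direction plus random ones.
sparse_dir = np.zeros(n); sparse_dir[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
basis = np.column_stack([sparse_dir, rng.normal(size=(n, k - 1))])
Q, _ = np.linalg.qr(basis)                     # orthonormal basis of the subspace

z = rng.normal(size=k); z /= np.linalg.norm(z)
for t in range(1, 301):
    g = Q.T @ np.sign(Q @ z)                   # subgradient of ||Q z||_1
    g -= (g @ z) * z                           # project onto the tangent space of the sphere
    z -= (0.1 / np.sqrt(t)) * g
    z /= np.linalg.norm(z)                     # retract back onto the unit sphere
x = Q @ z
print("fraction of near-zero entries:", np.mean(np.abs(x) < 1e-2))
```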
 
Randomized Iterative Methods for Linear Systems: Momentum, Inexactness and Gossip (2019-09-26)
In the era of big data, one of the key challenges is the development of novel optimization algorithms that can accommodate vast amounts of data while at the same time satisfying the constraints and limitations of the problem under study. The need to solve optimization problems is ubiquitous in essentially all quantitative areas of human endeavor, including industry and science. In the last decade there has been a surge in demand from practitioners, in fields such as machine learning, computer vision, artificial intelligence, signal processing and data science, for new methods able to cope with these large-scale problems. In this thesis we focus on the design, complexity analysis and efficient implementation of such algorithms. In particular, we are interested in the development of randomized iterative methods for solving large-scale linear systems, stochastic quadratic optimization problems, the best approximation problem and quadratic optimization problems. A large part of the thesis is also devoted to the development of efficient methods for obtaining average consensus on large-scale networks.
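
A representative member of this family of methods is the randomized Kaczmarz iteration with a heavy-ball momentum term, sketched below on a synthetic consistent linear system; the momentum parameter and iteration count are illustrative assumptions, not values prescribed by the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 50
A = rng.normal(size=(m, n))
x_star = rng.normal(size=n)
b = A @ x_star                                  # consistent system A x = b

row_norms2 = np.sum(A ** 2, axis=1)
probs = row_norms2 / row_norms2.sum()           # sample rows proportionally to their norm

x, x_prev, beta = np.zeros(n), np.zeros(n), 0.4
for _ in range(5000):
    i = rng.choice(m, p=probs)
    residual = A[i] @ x - b[i]
    kaczmarz_step = (residual / row_norms2[i]) * A[i]   # projection onto {z : A_i z = b_i}
    x_next = x - kaczmarz_step + beta * (x - x_prev)    # heavy-ball momentum term
    x_prev, x = x, x_next
print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))
```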
 
Structured Convex Optimization under Submodular Constraints (2013-09-26)
A number of discrete and continuous optimization problems in machine learning are related to convex minimization problems under submodular constraints. In this paper, we deal with a submodular function with a directed graph structure, and we show that a wide range of convex optimization problems under submodular constraints can be solved much more efficiently than with general submodular optimization methods, via a reduction to a maximum flow problem. Furthermore, we give some applications, including sparse optimization methods, in which the proposed methods are effective. Additionally, we evaluate the performance of the proposed method through computational experiments.
 
Principal Component Analysis Applied to Gradient Fields in Band Gap Optimization Problems for Metamaterials (2021-04-04)
A promising technique for the spectral design of acoustic metamaterials is based on the formulation of suitable constrained nonlinear optimization problems. Unfortunately, the straightforward application of classical gradient-based iterative optimization algorithms to the numerical solution of such problems is typically highly demanding, due to the complexity of the underlying physical models. Nevertheless, supervised machine learning techniques can reduce such a computational effort, e.g., by replacing the original objective functions of such optimization problems with more-easily computable approximations. In this framework, the present article describes the application of a related unsupervised machine learning technique, namely, principal component analysis, to approximate the gradient of the objective function of a band gap optimization problem for an acoustic metamaterial, with the aim of making the successive application of a gradient-based iterative optimization algorithm faster. Numerical results show the effectiveness of the proposed method.
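
A sketch of the general idea, under the assumption of a cheap stand-in objective: sample exact gradients at a set of design points, run principal component analysis on this gradient field, and retain only the leading components as a low-dimensional description of the gradient; the stand-in gradient function and the number of retained components are assumptions of this sketch, not the paper's physics-based setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples, r = 40, 60, 5

def model_gradient(x):                        # placeholder for the costly model gradient
    return np.cos(x) + 0.05 * x

X = rng.uniform(-2.0, 2.0, size=(n_samples, d))
G = np.array([model_gradient(x) for x in X])  # sampled gradient field

# PCA on the centered gradient field, keeping the top r components.
G_mean = G.mean(axis=0)
_, S, Vt = np.linalg.svd(G - G_mean, full_matrices=False)
basis = Vt[:r]                                # r principal gradient directions

# Reconstruct the sampled gradients from the r-dimensional PCA coordinates and
# check how much information the compression loses.
G_hat = G_mean + (G - G_mean) @ basis.T @ basis
rel_err = np.linalg.norm(G - G_hat) / np.linalg.norm(G)
print(f"kept {r}/{d} components, relative reconstruction error {rel_err:.3f}")
```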
 
Learning Combinatorial Optimization on Graphs: A Survey with Applications to Networking (2020-07-13)
Existing approaches to solving combinatorial optimization problems on graphs suffer from the need to engineer an algorithm for each problem, even though practical problems recur across many instances. The practical side of theoretical computer science, such as computational complexity, must then be addressed for each of them. Relevant developments in machine learning research on graphs are surveyed for this purpose. We organize and compare the structures involved in learning to solve combinatorial optimization problems, with a special eye on the telecommunications domain and its continuous development of live and research networks.
 
First-Order Methods for Convex Optimization (2021-01-04)
First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by the success stories reported in various applications, including most importantly machine learning, signal processing, imaging and control theory. First-order methods have the potential to provide low-accuracy solutions at low computational complexity, which makes them an attractive set of tools in large-scale optimization problems. In this survey we cover a number of key developments in gradient-based optimization methods. This includes non-Euclidean extensions of the classical proximal gradient method, and its accelerated versions. Additionally, we survey recent developments within the class of projection-free methods, and proximal versions of primal-dual schemes. We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
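
One of the classical first-order schemes covered by such surveys is the proximal gradient method; the sketch below applies it (as ISTA) to an l1-regularized least-squares problem, where the proximal operator of the l1 term is soft thresholding. The data and regularization weight are synthetic assumptions.

```python
import numpy as np

# Proximal gradient (ISTA) for  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
m, n, lam = 80, 200, 0.1
A = rng.normal(size=(m, n))
x_true = np.zeros(n); x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=m)

L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x = soft_threshold(x - grad / L, lam / L)   # gradient step + proximal (shrinkage) step
print("nonzeros found:", np.count_nonzero(np.abs(x) > 1e-3))
```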
 
Learning to Optimize Under Constraints with Unsupervised Deep Neural Networks (2021-01-03)
In this paper, we propose a machine learning (ML) method that learns how to solve a generic constrained continuous optimization problem. To the best of our knowledge, generic methods that learn to optimize focus on unconstrained optimization problems, and those dealing with constrained problems do not generalize easily. This approach is quite useful in optimization tasks where the problem's parameters constantly change, requiring the optimization task to be re-solved with every parameter update. In such problems, the computational complexity of optimization algorithms such as gradient descent or interior-point methods precludes near-optimal designs in real-time applications. In this paper, we propose an unsupervised deep learning (DL) solution for solving constrained optimization problems in real time by relegating the main computational load to an offline training phase. This paper's main contribution is a method for enforcing equality and inequality constraints on the DL-generated solutions for generic optimization tasks.
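
A minimal unsupervised "learning to optimize" sketch in PyTorch: a small network maps problem parameters to a candidate solution and is trained on the objective plus constraint penalties, so no pre-computed optimal solutions are needed as labels. The toy problem, penalty weights, and network size are assumptions; the quadratic-penalty treatment of the constraints is a stand-in, not the enforcement mechanism proposed in the paper.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def objective(x, p):                       # minimize ||x - p||^2
    return ((x - p) ** 2).sum(dim=1)

def eq_constraint(x):                      # x1 + x2 = 1
    return x.sum(dim=1) - 1.0

def ineq_constraint(x):                    # x1 >= 0, written as -x1 <= 0
    return -x[:, 0]

for step in range(3000):
    p = torch.rand(64, 2)                  # fresh problem parameters each step
    x = net(p)
    loss = (objective(x, p)
            + 10.0 * eq_constraint(x) ** 2                    # equality penalty
            + 10.0 * torch.relu(ineq_constraint(x)) ** 2      # inequality penalty
            ).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# At deployment time a new instance is "solved" by a single forward pass.
print(net(torch.tensor([[0.3, 0.9]])))
```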
 
Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning (2020-01-03)
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems subject to specific constraints, which can be formulated as variable or functional optimization. If the objective and constraint functions of a variable optimization problem can be derived, standard numerical algorithms can be applied to find the optimal solution; however, they incur high computational cost when the dimension of the variable is high. To reduce the online computational complexity, learning the optimal solution as a function of the environment's status with deep neural networks (DNNs) is an effective approach. DNNs can be trained under the supervision of optimal solutions, which, however, is not applicable to scenarios without models or to functional optimization where the optimal solutions are hard to obtain. If the objective and constraint functions are unavailable, reinforcement learning can be applied to find the solution of a functional optimization problem, but it is not tailored to optimization problems in wireless networks. In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems without the supervision of optimal solutions. When the mathematical model of the environment is completely known and the distribution of the environment's status is known or unknown, we can invoke unsupervised learning algorithms. When the mathematical model of the environment is incomplete, we introduce reinforced-unsupervised learning algorithms that learn the model by interacting with the environment. Our simulation results confirm the applicability of these learning frameworks by taking a user association problem as an example.