Laboratory for Control, Optimization, and Power 

Brief Summary: My main expertise is in the broad areas of control systems, networks, power systems, and optimization. Over the past two years, my research has focused on optimization techniques specialized for energy systems, with the aim of solving hard nonconvex problems such as the 50-year-old optimal power flow problem. In the four years before that, my research applied control and optimization theory to interdisciplinary applications in communications, circuits, networks, and computer science. Specific results include: (i) convexification of the optimal power flow problem, (ii) smart antenna design for wireless communication, (iii) optimal design of very large-scale circuits, (iv) design of optimal transmission protocols for internetworks, (v) distributed averaging (consensus) under communication constraints, and (vi) a variety of fundamental control problems such as decentralized control of interconnected systems and biologically motivated controller design.


Smart Grid: Over the past few years, the idea of upgrading today's transmission grid into a Smart Grid has been seriously considered by the electric power industry, state and federal regulators, government agencies, and academics. At a high level, the goal is to modernize the electricity transmission and distribution system to maintain a reliable and secure electricity infrastructure that can meet future demand growth. In particular, a smart grid must have many properties, including: (i) optimized efficiency, reliability, and flexibility, (ii) the ability to integrate distributed (renewable) resources and distributed generation, and (iii) the capability to interact with plug-in hybrid electric vehicles. From a systems point of view, the design of a smart grid requires several tools and theories from various areas of mathematics and engineering.

All of the areas that I have worked on fit within a multi-faceted, interdisciplinary approach to the problem of designing a smart grid:

  • Optimization: According to a National Science Foundation report, one of the three important actions needed for designing a smart grid is the creation of new optimization methods that can solve the existing nonconvex problems associated with power grids.

  • Control: The controllers used in existing conventional grids are mainly based on centralized control theory. However, a truly decentralized and robust controller is needed for a smart grid with extensive use of small distributed generators and active load control.

  • Communications: Meters, appliances, consumer devices and plug-in hybrid electric vehicles must be made "smart," meaning that they should be able to communicate with each other and with the grid. Wireless communication theory plays an important role in deciding which communication medium best suits the needs of a smart grid.

  • Networks: More generally, internetworking plays a fundamental role in devising a communication protocol (e.g. TCP/IP and extensions) that specifies how much information each smart device should exchange with the rest of the grid and how the required amount of information can be shared in the network while minimizing channel usage.

  • Computer Science: The signals transmitted to the local controllers of the grid by smart devices are noisy, lossy and asynchronous. To deal with this issue, ideas from computer science, such as consensus, need to complement those from control and optimization.


Electric Power Networks: There are several important optimization problems for a power network, ranging from resource allocation to state estimation and various scheduling problems. Since the behavior of a power network is governed by nonlinear physical laws, these optimization problems are nonconvex and hard to solve. I have carried out a series of projects to show how the physics of a power network can be exploited to design efficient optimization algorithms.

Project 1: Optimal Power Flow Problem

A fundamental optimization problem for a power network is the resource allocation problem, known as optimal power flow (OPF). This problem aims to find an optimal operating point of a power network that satisfies certain constraints on power and voltage parameters and, in addition, minimizes an appropriate cost function such as generation cost or transmission loss. Since 1962, the OPF problem has been extensively studied in the literature, and numerous gradient-based and nonlinear algorithms have been proposed to find a near-optimal solution of this highly nonconvex problem. Due to this nonconvexity, these algorithms are not robust, lack performance guarantees, and may not find a global optimum. These issues become more critical for smart grids consisting of many (distributed) generators, where the problem may need to be solved frequently due to the fast time-varying nature of renewable resources. Among the many optimization methods that have been adapted for the OPF problem are linear programming, quadratic programming, nonlinear programming, Lagrangian relaxation, interior point methods, artificial intelligence, artificial neural networks, fuzzy logic, genetic algorithms, evolutionary programming and particle swarm optimization.
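
For concreteness, a generic formulation of the problem (written in one common notation; line-flow limits and several other operational constraints are omitted, so this is only a sketch of the structure) is

    \begin{aligned}
    \min_{V,\;P^g,\;Q^g} \quad & \sum_{k \in \mathcal{G}} f_k(P^g_k) \\
    \text{subject to} \quad & P^g_k - P^d_k + j\,(Q^g_k - Q^d_k) \;=\; V_k \sum_{l} Y_{kl}^{*} V_l^{*} \qquad \forall k, \\
    & V^{\min}_k \le |V_k| \le V^{\max}_k, \quad P^{g,\min}_k \le P^g_k \le P^{g,\max}_k, \quad Q^{g,\min}_k \le Q^g_k \le Q^{g,\max}_k,
    \end{aligned}

where $V_k$ is the complex voltage at bus $k$, $Y$ is the bus admittance matrix, $P^d_k + jQ^d_k$ is the demand at bus $k$, and $f_k$ is the generation cost. The quadratic dependence of the power injections on the complex bus voltages is precisely what makes the problem nonconvex.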

Our goal in this project was to find a globally optimal solution of the OPF problem in polynomial time using convex optimization. To this end, we first derived the dual of a variant of the OPF problem and expressed it in the form of a semidefinite program. Then, we proposed an easy-to-check condition under which the duality gap is zero, so that the solution of the OPF problem can be recovered from that of its convex dual problem. We verified that this zero-duality-gap condition is satisfied for the IEEE test systems with 14, 30, 57, 118 and 300 buses, as well as for many randomly generated systems, and our further investigation indicates that it holds widely in practice. The main underlying reason for the successful convexification of the OPF problem can be traced back to the passivity of transformers and transmission lines.
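
The sketch below illustrates the flavor of the resulting semidefinite relaxation on a made-up 3-bus example. It is only a toy version under simplifying assumptions: reactive-power and line-flow limits are dropped, and the line data, loads, voltage limits and cost are hypothetical; it is not the exact program from our papers.

    import numpy as np
    import cvxpy as cp

    # Hypothetical 3-bus ring network: series admittance of each line (per unit).
    n = 3
    lines = {(0, 1): 1 - 5j, (1, 2): 1 - 4j, (0, 2): 0.5 - 3j}
    Y = np.zeros((n, n), dtype=complex)              # bus admittance matrix
    for (i, k), y in lines.items():
        Y[i, i] += y; Y[k, k] += y
        Y[i, k] -= y; Y[k, i] -= y

    def Phi(k):
        # Hermitian matrix with Tr(Phi(k) W) = real power injection at bus k
        # when W = v v^H (v = vector of complex bus voltages).
        e = np.zeros((n, 1)); e[k] = 1.0
        M = e @ e.T @ Y
        return (M + M.conj().T) / 2

    Pd = np.array([0.0, 0.5, 0.3])                   # made-up loads; bus 0 is the generator
    W = cp.Variable((n, n), hermitian=True)          # relaxation of the rank-1 matrix v v^H
    constraints = [W >> 0]
    for k in (1, 2):                                 # load buses: fixed real injections
        constraints.append(cp.real(cp.trace(Phi(k) @ W)) == -Pd[k])
    for k in range(n):                               # voltage magnitude limits
        constraints += [cp.real(W[k, k]) >= 0.9**2, cp.real(W[k, k]) <= 1.1**2]

    # Minimize the real power produced at the generator bus (total load plus losses).
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(Phi(0) @ W))), constraints)
    prob.solve()

    # If the relaxation is exact, W is (numerically) rank one and the voltages
    # can be recovered from its dominant eigenvector.
    eigvals, eigvecs = np.linalg.eigh(W.value)
    print("generation at bus 0:", prob.value)
    print("two largest eigenvalues of W:", eigvals[-2:])
    v = np.sqrt(eigvals[-1]) * eigvecs[:, -1]
    print("recovered voltage magnitudes:", np.abs(v))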

Project 2: Security-Constrained Optimal Power Flow Problem

The security-constrained OPF (SCOPF) problem aims to find an optimal operating point of the grid that does not shift the operation of the system to a critical state if a fault happens. In order to design a reliable smart grid, the SCOPF problem must be solved in real time, yet it is known to be (much) harder than the OPF problem. We have shown that the earlier OPF result generalizes to this case as well.

Project 3: Network Topology Design

Given a power network, the objective of this project was to study how the topology of the network can be manipulated by nonlinear power electronic devices in such a way that every OPF problem defined over the resulting circuit becomes solvable in polynomial time. In this work, we proved that adding multiple controllable phase shifters to certain lines of the power network simplifies the verification of the zero-duality-gap condition. In particular, if a sufficient number of phase shifters are added to the network, the duality gap becomes zero for the OPF problem augmented with these phase shifters. This result implies that every network topology can be modified by the integration of phase shifters to guarantee the solvability of OPF in polynomial time for all possible values of loads and physical limits.

Project 4: Electricity Market

This project was concerned with the existence of competitive equilibria in electricity markets with nonconvex network constraints and nonlinear cost/utility functions. We derived a necessary and sufficient condition for the existence of a competitive equilibrium in the context of economic dispatch. We showed that a competitive equilibrium may exist even when the duality gap is nonzero for the optimal power flow (OPF) problem; when there is no duality gap, however, the Lagrange multipliers for the power balance equations in the OPF problem are indeed a correct set of market-clearing prices. Moreover, we proved that every power network can be manipulated by phase shifters to guarantee the existence of a competitive equilibrium.

Project 5: Feasible Injection Regions of Power Networks

The goal of this project was to understand the geometrical properties of the injection region of a tree-shaped power network. We have characterized the faces of the feasible injection region and have shown that this highly nonconvex region preserves important properties of a convex set, to the extent that optimizations over this region can be cast as convex programs. More precisely, we have focused on the Pareto front of the injection region, i.e., the set of those points in the injection region that are eligible to be a solution to a typical OPF problem. Although the injection region is nonconvex, we have proved that the Pareto fronts of this set and of its convex hull are identical under some common constraints, provided that a practical angle condition is satisfied. This implies that the injection region can be replaced by its convex hull in the OPF problem without changing the global solution.
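
In symbols (generic notation, with $\mathcal{P}$ denoting the feasible injection region and the inequality understood componentwise), the Pareto front is

    \mathcal{O}(\mathcal{P}) \;=\; \bigl\{\, p \in \mathcal{P} \;:\; \nexists\, q \in \mathcal{P} \text{ such that } q \le p,\ q \ne p \,\bigr\},

and the result above states that, under the stated angle condition, $\mathcal{O}(\mathcal{P}) = \mathcal{O}(\mathrm{conv}(\mathcal{P}))$; hence minimizing any cost that is increasing in the injections over $\mathcal{P}$ or over $\mathrm{conv}(\mathcal{P})$ yields the same global solution.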


Communications and Circuits:

Project 1: Synthesis of Large-Scale Linear Circuits using Convex Optimization

Several important problems in circuits, electromagnetics, and optics can be reduced to the analysis and synthesis of linear systems in the frequency domain. In circuit theory, these systems consist of passive elements including resistors, inductors, capacitors, ideal transformers, and ideal gyrators. Many methods have been proposed in the automated circuit synthesis literature to design the topology and/or the component sizes of a circuit; they can be broadly classified as (i) knowledge-based methods, (ii) sizing-based methods, and (iii) sized-topology generation methods. These general-purpose techniques mainly rely on heuristic or local search algorithms.

In this project, the objective was to show that advanced techniques in convex optimization theory can be exploited to globally optimize a special, nonetheless important, class of circuits efficiently using a deterministic polynomial-time algorithm. We considered a linear multi-port network for which certain specifications must be satisfied at a desired frequency by designing some elements of the circuit. These unknown elements are collected together in a control unit. We showed that designing a control unit in the form of a switching network that makes the circuit meet the design specifications is an NP-complete problem. In contrast, the design of a passive network for the control unit can be cast as a semidefinite optimization. Since a passive network may require many components for its implementation, the design of a sparse (decoupled) passive controller was also studied.
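
Loosely speaking (this is only the single-frequency convexity ingredient, not the full derivation), the key point is that passivity of the control unit at the design frequency $\omega$ is a linear matrix inequality in its admittance matrix $Y_c(j\omega)$:

    Y_c(j\omega) + Y_c(j\omega)^{*} \;\succeq\; 0,

so, when the remaining design specifications can also be written as convex constraints on $Y_c(j\omega)$ (the construction used in the corresponding paper), the synthesis becomes a semidefinite program, whereas restricting the control unit to an on/off switching network makes the same feasibility question combinatorial, hence the NP-completeness mentioned above.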

We elucidated the efficacy of our work by designing optimal radiating circuits corresponding to antenna systems. As an example, for the first time, we designed a wavelength-size on-chip passive antenna system with only one active (antenna) element that can place simultaneous nulls in many directions. This antenna design can be computed in a few seconds, rather than the weeks required by previous on-chip antenna design algorithms. The radiation pattern of this antenna is comparable to that of an array smart antenna whose size is more than 15 times the signal wavelength.

Project 2: Design of Passively Controllable Smart Antennas

Conventional antennas for wireless transmission, e.g. omni-directional antennas, radiate in almost all directions. To avoid co-channel interference, save power and improve security, much attention has been paid to designing smart transmitting/receiving antenna systems that can steer the beam towards a desired direction and place nulls in undesired directions. The array antenna is the first type of smart antenna; its size can be several multiples of the signal wavelength. An array antenna is easy to program, but cannot be easily implemented on a chip, partly due to its size. The on-chip smart antenna is the second type of smart antenna; its size is comparable to the wavelength, but its programming is a hard nonconvex problem.

Using the results developed in Project 1, we designed a secure, power-efficient, beam-steerable, on-chip transmission system for wireless networks. Specifically, we proposed a passively controllable smart (PCS) antenna system that can be programmed to generate different far-field radiation patterns by adjusting its variable passive controller at every signal transmission. By combining theories from convex optimization and algebraic geometry, we showed that a PCS antenna has excellent features. Unlike existing smart antennas, whose programming leads to an NP-hard problem or which are made of many active elements, our PCS antenna can be programmed with low complexity and consists of only one active element.

Communication Networks: There has been growing interest in studying Internet congestion control ever since the first congestion collapse occurred. The main idea behind the existing congestion control algorithms is more or less the same: each user measures some feedback signal, such as packet loss or queueing delay, and adapts its transmission rate accordingly. Among the current transmission control protocols (TCP) for congestion control are TCP-Tahoe, Reno, New Reno, and Vegas. From a mathematical standpoint, the available resource allocation algorithms, such as the primal, dual and primal/dual algorithms, are designed to solve the underlying problem in a distributed way asymptotically. Nonetheless, it is not clear how well the network operates during the transient time. For instance, the link capacity constraints can be violated during this period.
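
To fix ideas, in the standard network-utility-maximization setting (this is the usual Kelly-style formulation from the literature, stated here only as background), each source $s$ has a utility $U_s$, the network solves $\max \sum_s U_s(x_s)$ subject to the link constraints $\sum_{s:\, l \in s} x_s \le c_l$, and the primal and dual algorithms take the forms

    \dot{x}_s = \kappa_s \Bigl( U_s'(x_s) - \sum_{l \in s} f_l(y_l) \Bigr) \qquad \text{(primal)},
    \qquad
    x_s = (U_s')^{-1}\Bigl(\sum_{l \in s} p_l\Bigr), \quad
    \dot{p}_l = \gamma_l \bigl( y_l - c_l \bigr)^{+}_{p_l} \qquad \text{(dual)},

where $x_s$ is the rate of source $s$, $y_l = \sum_{s:\, l \in s} x_s$ is the load on link $l$, $f_l$ is a static penalty (price) function, $p_l$ is the dynamic price of link $l$, and $(\cdot)^{+}_{p_l}$ is a projection keeping the prices nonnegative. These dynamics reach the optimum only asymptotically; along the way, nothing prevents $y_l$ from exceeding $c_l$, which is exactly the transient issue mentioned above.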

Project 1: Network Congestion Control Design via Optimal Control Theory

The main motivation for this project was that the available resource allocation controllers are mainly devised to drive the state of the system to a desired equilibrium point and are therefore oblivious to the real-time behavior of the closed-loop system. In this work, we studied what utility functionals (as opposed to utility functions) the existing congestion control algorithms maximize. In particular, we showed that there exist meaningful utility functionals whose maximization leads to the celebrated primal, dual and primal/dual algorithms. This idea opens up the possibility of designing optimal transmission protocols that guarantee the real-time performance of the network, something which cannot be realized by the existing congestion controllers. Another major advantage of our technique is that the designed protocol is automatically stable, implying that the stability of the algorithm is no longer an issue.

Project 2: Design of Congestion Controllers with Satisfactory Real-Time Performance

Motivated by the poor performance of congestion control protocols that rely only on implicit feedback in high-speed networks, we studied how to design a high-performance congestion controller when it is allowed to receive explicit information from the network. The existing congestion control algorithms using explicit feedback, such as the primal/dual controller, cannot guarantee desirable network performance at all times and are concerned only with the steady-state properties of the system. To remedy this issue, we introduced a new congestion control protocol with a tuning parameter whose adjustment controls four important aspects of the real-time performance of the network.


Biologically Motivated Controller Design: Many theories have been developed for the analysis and synthesis of time-delay control systems due to the ubiquity of communication, computation or propagation delays in both embedded systems and biological circuits. Most of the existing controller design methods for time-delay systems regard delay as a nuisance and design a controller for the undelayed model of the system in such a way that it is sufficiently robust to the underlying delay. Nevertheless, it is known that the deliberate introduction of delay in the control of an undelayed system can benefit the control process.

Transmission lines and cavity delay lines (trombone delay lines) play the role of delay blocks in electronics/communications and optics, respectively. Moreover, neurons and gene regulatory networks are two sources of delays in biology; time delays appear in genetic networks due to transcription, translation, and translocation processes. Recently, there has been considerable interest in synthetic biology, whose goal is to build artificial biological systems for engineering applications. By regarding a biological system composed of several interacting components as a distributed control system, two easy-to-manipulate design parameters are (i) the topology of the distributed system (interaction topology) and (ii) the time delays in the interactions. Hence, the interaction graph, together with the amount of delay in the interactions, plays the role of the controller in a biological system.

Project 1:  Delay-Based Controller Design for Continuous-Time Systems

Motivated by the availability of different types of delays in embedded systems and biological circuits, in this project we studied the benefits that delay can provide in simplifying the implementation of controllers for continuous-time systems.  Given a continuous-time linear time-invariant (LTI) controller, we showed that the controller can be approximated arbitrarily precisely by a simple delay-based controller. This controller is composed of some delay blocks, a few integrators and possibly a unity feedback. This result implies that every high-order LTI controller has a simple delay-based implementation, which uses delay blocks rather than several integrators.  Different problems associated with the approximation procedures, such as finding the optimal number of delay blocks or studying the robustness of the designed controller with respect to delay values, were then investigated. We also studied the design of an LTI continuous-time controller satisfying given control objectives whose delay-based implementation needs the least number of delay blocks.
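
One rough way to see the idea (a toy illustration only, not the construction from our papers; the example controller, the spacing T and the number of delay blocks below are arbitrary) is that a single integrator driven through N delay blocks produces a staircase impulse response, which can track the impulse response of a given LTI controller:

    import numpy as np

    # Toy sketch: approximate the impulse response of K(s) = 1/((s+1)(s+2))
    # by a staircase generated with one integrator and N delay blocks of
    # spacing T (parameters are arbitrary).
    T, N = 0.25, 24
    t = np.linspace(0.0, 8.0, 1601)
    h = np.exp(-t) - np.exp(-2.0 * t)                    # exact impulse response of K(s)

    # An impulse of weight a_k routed through a delay of k*T and then an
    # integrator contributes a_k * 1{t >= k*T}; summing over k gives a staircase.
    k = np.arange(N + 1)
    samples = np.exp(-T * k) - np.exp(-2.0 * T * k)      # h evaluated on the grid k*T
    a = np.diff(np.concatenate(([0.0], samples)))        # step heights so the staircase matches h(k*T)
    h_hat = sum(ak * (t >= kk * T) for kk, ak in enumerate(a))

    print("max approximation error:", np.max(np.abs(h - h_hat)))

Shrinking T (and increasing N accordingly) drives this error to zero, which is the sense of "arbitrarily precise approximation" referred to above.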

To demonstrate the importance of our results, we showed that the slow dynamics of a gene regulatory network with 20 states (or equivalently 20 integrator blocks) can be approximated well using only 20 delay blocks. In addition, an 80th-order control system requiring at least 30 integrators for a satisfactory approximation can be implemented using 2 integrators and 9 delay blocks.

Project 2:  Delay-Based Controller Design for Hybrid Systems

In engineered control systems, a common source of delays is the discrete delay in clocked systems. Computer-controlled systems have been widely used in a broad range of applications, from robotics, autopilots and radar to anti-lock braking systems. Current silicon technology has enabled the design of embedded systems operating at very high frequencies. However, the conventional methods for the synthesis of sampled-data control systems require high processing power to cope with numerical issues if the sampling rate is relatively fast. More precisely, increasing the sampling frequency makes the digital controller extremely sensitive to measurement noise and computational round-off errors. The situation becomes worse if the sampling is subject to jitter and irregularities.

Using our results from Project 1, we proposed a robust digital-control scheme for continuous-time systems that can be used in two important scenarios: (i) a high sampling frequency with limited computational power, and (ii) a slow processor with jitter and irregular sampling times. We showed that every continuous-time stabilizing (LTI) controller can be implemented in a hybrid form consisting of a sampler, a digital processor, some so-called "modified second-order holds" and possibly a unity feedback from the hold circuit to the sampler. This hybrid controller benefits from the fact that increasing the sampling frequency directly affects only the memory size of the controller, as opposed to its parameters. This property makes the parameters of the controller robust to the sampling rate and its irregularities.


Consensus: During the past few decades, there has been particular interest in distributed computation. The distributed averaging problem, as a particular case, is concerned with computing the average of numbers owned by the agents of a group. This problem has been investigated through the notion of consensus in several papers, motivated by different applications such as load-balancing, synchronization of coupled oscillators, multi-agent coordination and flocking, and state estimation in wireless sensor networks.

Consider the distributed average consensus problem, in which the values associated with a set of agents are to be averaged in a distributed fashion. Since in many applications the agents cannot all update their values synchronously, gossip algorithms have been widely exploited to handle the averaging problem asynchronously. Moreover, due to communication constraints, the data exchanged between a pair of agents is normally quantized. This has given rise to the emergence of quantized gossip algorithms.

Project 1:  Quantized Consensus by Means of Stochastic Gossip Algorithm

In this work, we considered a digital-communication-based consensus problem, where the initial values of the agents are real numbers (as opposed to integers) while only quantized data can be exchanged among the agents. A weighted connected graph was considered, where the weight of each edge represents the probability of establishing communication between its endpoints during the updating procedure. First, we showed that a quantized consensus is reached under the stochastic gossip algorithm proposed in a recent paper, for a wide range of updating parameters and any arbitrary quantizer. We then studied the convergence time of the gossip algorithm.
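
A toy version of such an iteration (a simplified sketch, not the exact protocol or parameter choices analyzed in the paper; the graph, the quantizer and the tuning factor below are arbitrary) looks as follows:

    import numpy as np

    rng = np.random.default_rng(0)
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a small connected graph
    x = rng.uniform(0.0, 10.0, size=4)                 # real-valued initial data
    target = x.mean()
    quantize = np.round                                # any quantizer can be plugged in here
    eps = 0.4                                          # fixed tuning factor (arbitrary value)

    for _ in range(2000):
        i, j = edges[rng.integers(len(edges))]         # activate a random edge
        qi, qj = quantize(x[i]), quantize(x[j])        # only quantized data is exchanged
        x[i] += eps * (qj - qi)                        # the sum x[i] + x[j] is preserved,
        x[j] += eps * (qi - qj)                        # so the network average never changes

    print("average preserved:", np.isclose(x.mean(), target))
    print("final values:", x, "  true average:", target)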

Project 2:  Quantized Consensus by Means of Adaptive Stochastic Gossip Algorithm

We showed in Project 1 that quantized consensus is reached under a simple updating protocol that deploys a fixed tuning factor. In this work, we considered the case where the tuning factor is allowed to be time-dependent, in order to achieve two goals. First, this makes it possible to study the numerical stability of the fixed-tuning-factor protocol under a small perturbation of this parameter. Second, exploiting a time-varying tuning factor facilitates the implementation of the consensus protocol and pushes the steady state of the system towards an equilibrium point, as opposed to making it oscillatory. Making the stochastic gossip algorithm of Project 1 time-varying is challenging because it converts the underlying finite-dimensional problem into an infinite-dimensional one for which many results in Markov chain theory fail to hold.


Old Projects (accomplished before 2008): Before 2008, I worked on several control-related problems, some of which are listed below:

  • Design of optimal decentralized/distributed controllers for interconnected systems
  • Optimal cooperative control of unmanned aerial vehicles
  • Different stability-related problems for distributed control systems
  • Robust stability and stabilizability of polynomially uncertain systems
  • Different problems arising in sampled-data control systems