Controlling Pendula: A Tribute to Mark Spong - Karl Åström
Pendula have been one of Mark's long-standing interests. He has used them in his courses, and he has even made popular products, like the Pendubot and the Reaction Wheel Pendulum, a remarkable achievement for a person who started his career in applied mathematics. It is therefore appropriate to talk about pendula in this celebration of Mark. Pendula can be used to illustrate interesting control problems such as stabilization, large transitions, control of chaotic systems, and safe manual control of intrinsically unstable systems. In this presentation I will focus on strategies for swinging up a pendulum. A variety of strategies, all of which have been analyzed and implemented, will be presented. Energy management is a central theme. A simple strategy is hybrid control: energy is fed into the system until the upright position is reached, and the pendulum is then grabbed by switching to a linear stabilizing strategy. It is also possible to devise strategies without switching that have smooth feedback laws. Feedback is then applied to shape the energy function so that it has a local minimum at the desired equilibrium. There may, however, be many local minima. Damping or energy pumping can then be used to make the desired equilibrium stable and all other equilibria unstable. The result is a two-parameter family of simple smooth strategies. The control laws obtained have two terms: one can be interpreted as a nonlinear spring and the other as a nonlinear damping. One parameter controls the spring term and the other the damping term. Conditions are given that guarantee that all solutions, except those starting at local equilibria or on the separatrices, converge to the equilibrium where the pendulum is in the upright position.
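A minimal numerical sketch of the energy-pumping phase of the hybrid strategy described above (the gain k, the saturation level u_max, and all physical parameters below are illustrative assumptions, not Åström's published values): the pivot acceleration u drives the pendulum energy E toward E0 = 0, the value of the upright rest position.

```python
import math

def swing_up(k=2.0, u_max=5.0, dt=1e-3, steps=20000, m=1.0, l=1.0, g=9.81):
    """Energy-pumping swing-up for a pendulum driven by pivot acceleration u.

    theta is measured from the upright position; the target energy E0 = 0
    corresponds to the pendulum at rest upright. This sketches only the
    first phase of the hybrid strategy: feed energy in until E reaches E0.
    """
    theta, omega = math.pi, 0.1          # start hanging down, with a tiny push
    E0 = 0.0
    for _ in range(steps):
        E = 0.5 * m * l**2 * omega**2 + m * g * l * (math.cos(theta) - 1.0)
        s = omega * math.cos(theta)
        sgn = (s > 0) - (s < 0)
        # u = sat(k (E0 - E) sign(omega cos theta)) gives
        # dE/dt = m l u omega cos(theta) >= 0 while E < E0
        u = max(-u_max, min(u_max, k * (E0 - E) * sgn))
        # semi-implicit Euler keeps the uncontrolled energy from drifting
        omega += dt * ((g / l) * math.sin(theta) + (u / l) * math.cos(theta))
        theta += dt * omega
    return E

E_final = swing_up()
```

In a full implementation the controller would switch to a linear stabilizer once the pendulum is near upright with near-zero energy; here we only verify that the energy is pumped up to (approximately) the target value.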
Feedback Control of Bipedal Locomotion - Jessy Grizzle
Though bipedal robots will be teleoperated when they are initially deployed in the real world, the onboard control system will have to ensure upright stability when the robot encounters unseen or unexpected obstacles, such as stepping in a hole or tripping over an obstruction. In this regard, we have been developing feedback controllers to allow MABEL, a planar (2D) bipedal robot that is roughly human sized, to traverse ground with unseen structured perturbations, including a 20 cm step decrease in ground height and an abrupt 12.5 cm increase in ground height. We are also expecting a new 3D robot to be commissioned soon in our laboratory and hence have been pushing ahead in the development of feedback control designs for agile, robust 3D locomotion. We'll discuss the best results we have at the time of the celebration!
Interacting with Multi-Robot Networks - Magnus Egerstedt
By now, we have gotten pretty good at designing decentralized control algorithms for teams of mobile robots. But, significant work remains for understanding how human operators should interact with the robot teams. This talk will present an Eulerian approach to the human-swarm interaction problem, where the robots are acting as particles suspended in a fluid, and the interactions take on the form of manipulating, e.g., stirring, the fluid. Applications to air-traffic control, mobile sensor networks, and multi-robot surveillance will be discussed.
Managing Uncertainty in Robotics: From Control to Planning - Seth Hutchinson
Robots never know exactly where they are, what they see, or what they're doing. They live in dynamic environments, and must coexist with other, sometimes adversarial agents. All of these factors contribute to the uncertainty that is inherent in any real-world robotic task. In this talk, I will describe a range of methods that can be used to cope effectively with these uncertainties, from robust sensor-based controllers, to game theoretic motion strategies, to general models such as partially observable Markov decision processes (POMDPs). In each case, it is important to choose a solution strategy that is sufficiently powerful to cope with the level of uncertainty inherent in the task, while employing the minimal acceptable level of generality. This is particularly important in the face of real-time performance demands (e.g., for sensor-based manipulation tasks), or when fully general solutions may be intractable (e.g., finding optimal policies for POMDPs). Algorithms and experimental verification will be presented.
Nonprehensile Robotic Manipulation: Progress and Prospects - Kevin Lynch
Nonprehensile manipulation primitives such as rolling, sliding, pushing, pivoting, tipping, tapping, jiggling, juggling, batting, and throwing and catching are commonly used by humans, animals, and even industrial automation processes. These graspless manipulation modes exploit dynamics to create object motions that would otherwise be impossible to achieve. Despite this, many robots seem to prefer grasping and avoid nonprehensile manipulation. This limited repertoire artificially constrains the set of manipulation tasks the robot can achieve.
I will give my perspective on progress and challenges in planning and control of nonprehensile robotic manipulation and touch on its relationship to robot locomotion.
Experiments with Rijke Tubes: Investigating Thermoacoustic Dynamics and Control - Bassam Bamieh
Rijke tube experiments are relatively simple to build and conduct in a typical university controls laboratory. Despite this simplicity, the platform can be used to study a rich array of thermoacoustic phenomena. These include thermoacoustic and combustion instabilities, as well as the dynamics and optimization of thermoacoustic energy conversion processes. Although the underlying dynamics can seem complex, involving the interaction of acoustics with convective heat transfer, we will show how the standard concepts of root loci, Nyquist and Bode plots, and limit cycles, as well as empirical identification techniques, can be used to provide a thorough understanding of the experiments. In more advanced experiments, optimal periodic control techniques illustrate the potential of active control for increasing efficiency in future smart thermoacoustic energy conversion devices.
What Can Quantum Control Do for Us? - T.J. Tarn
Due to the rapid advances made in Nano-Bio-technology, optical control of molecular dynamics and quantum computation, there is an increasing need to understand the fundamental structure, from the systems theoretical point of view, of the control and observation of quantum mechanical systems for designing advanced sensors and actuators.
In this presentation we start with a discussion of the differences between the two kinds of feedback control used for quantum mechanical systems: classical (measurement-based) feedback control and quantum (coherent) feedback control. We then proceed to study two important control design problems for quantum mechanical control systems and show that classical feedback control cannot achieve the design goals, while quantum feedback control can fulfill the desired objectives.
Our study has discovered that, in contrast to the important concept of feedback linearization used in classical nonlinear control design, feedback nonlinearization is very useful in the field of nonlinear optics. It has also been proved that it is impossible to completely decouple quantum noises from a quantum system by applying classical feedback controls alone. Our investigation opens up a rich field of research problems in control.
Analysis for a Class of Stochastic Hybrid Systems with Non-Unique Solutions - Andrew Teel
We consider a class of stochastic hybrid systems, emphasizing models where solutions are not necessarily unique. This generality allows systems that exhibit an interaction of stochastic and worst-case (adversarial) effects. Through examples, we emphasize the role that causality in the solution definition plays in guaranteeing the validity of Lyapunov conditions for stochastic stability. Our modeling and analysis approach builds upon a particular framework for non-stochastic hybrid systems.
PDE Control in Texas: Oil Drilling and Riser Flows - Miroslav Krstic
There is hardly a better locale than Texas for conducting research in application of state-of-the-art PDE control designs to drilling and production of oil and gas, in both continental and off-shore settings. A problem of "unilateral" teleoperation arises in drilling at large depths, where the destabilizing friction-dominated dynamics of a drill bit are separated from the actuator on the surface by a "drillstring" that may be up to several kilometers long, and whose torsional motion is governed by the wave PDE. In addition to a stabilizing design for this problem, I will present a design for suppression of the slugging instabilities in multiphase (gas-oil-water) flows in long risers, where the dynamics are governed by coupled first-order hyperbolic PDEs.
The Challenges of Cyberphysical Systems - PR Kumar
We present a historical account of paths leading to the present interest in cyberphysical systems. We outline several foundational research topics that underlie this area. These include issues in data fusion, real-time communication, security, middleware, hybrid systems, and proofs of correctness.
Bayesian and Non-Bayesian Social learning in a Network Setting - Ali Jadbabaie
In this talk, I will present a dynamic model of opinion formation in social networks when the information required for learning a parameter may not be at the disposal of any single agent and where individuals engage in communication with their neighbors in order to learn from their experiences. I will first consider the case when the agents incorporate their neighbors' beliefs and their private signals in a Bayesian way and show conditions under which learning occurs.
Motivated by the practical difficulties of Bayesian updating of beliefs in a network setting, I will present a simple update mechanism in which instead of incorporating the views of their neighbors in a fully Bayesian manner, agents use a simple updating rule which linearly combines their personal experience and the views of their neighbors. I will show that, as long as individuals take their personal signals into account in a Bayesian way, repeated interactions will lead them to successfully aggregate information and learn the true parameter. This result holds in spite of the apparent naïveté of agents' updating rule, the agents' need for information from sources the existence of which they may not be aware of, worst prior views, and the assumption that no agent can tell whether her own views or those of her neighbors are more accurate.
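The linear update rule described above can be sketched numerically. In this toy version (the three-agent ring, the weights, and the signal likelihoods are invented for illustration), only agent 0 receives informative private signals, yet all agents' beliefs concentrate on the true state:

```python
import random

def social_learning(rounds=2000, seed=0):
    """Non-Bayesian learning sketch: each agent mixes a Bayesian update of its
    own belief (using its private signal) with its neighbors' current beliefs.

    Two states {0, 1}; the true state is 1. Only agent 0 receives informative
    signals; agents 1 and 2 can still learn through the network.
    """
    random.seed(seed)
    truth = 1
    # lik[i][theta] = P(signal = 1 | state theta) for agent i;
    # 0.5/0.5 means the agent's private signal is uninformative
    lik = [{0: 0.3, 1: 0.7}, {0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}]
    # row-stochastic weights on a ring: self weight 0.5, neighbors 0.25 each
    W = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
    belief = [[0.5, 0.5] for _ in range(3)]   # belief[i][theta]
    for _ in range(rounds):
        bayes = []
        for i in range(3):
            s = 1 if random.random() < lik[i][truth] else 0
            # likelihood of the observed signal under each candidate state
            l0 = lik[i][0] if s == 1 else 1 - lik[i][0]
            l1 = lik[i][1] if s == 1 else 1 - lik[i][1]
            z = belief[i][0] * l0 + belief[i][1] * l1
            bayes.append([belief[i][0] * l0 / z, belief[i][1] * l1 / z])
        # linear combination: own Bayesian update plus neighbors' raw beliefs
        belief = [[W[i][i] * bayes[i][t]
                   + sum(W[i][j] * belief[j][t] for j in range(3) if j != i)
                   for t in range(2)] for i in range(3)]
    return [b[1] for b in belief]

beliefs_on_truth = social_learning()
```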
In the second part of the talk, I will characterize upper and lower bounds on the rate by which agents performing such an update learn the realized state, and show that the bounds can be tight. These bounds enable us to compare efficiency of different networks in aggregating dispersed information.
Joint work with Pooya Molavi (Penn ESE), Alireza Tahbaz-Salehi (Columbia GSB), and Alvaro Sandroni (Northwestern, Kellogg)
Distributed Convergence to Nash Equilibria in Networked Zero-Sum Games - Jorge Cortes
Recent years have seen an increasing interest in networked strategic scenarios where individual agents may cooperate or compete with each other, interact across different layers and with dynamically changing neighbors, and have access to limited information. This talk is a contribution to this growing body of work. We consider a class of strategic scenarios in which two networks of agents have opposing objectives with regard to the optimization of a common objective function. In the resulting zero-sum game, individual agents collaborate with neighbors in their respective network and have only partial knowledge of the state of the agents in the other network. For the case when the interaction topology of each network is undirected, we synthesize a distributed saddle-point strategy and establish its convergence to the Nash equilibrium for the class of strictly concave-convex and locally Lipschitz objective functions. Somewhat surprisingly, we also show that these dynamics do not converge in general if the topologies are directed. This justifies the introduction, in the directed case, of a generalization of the distributed dynamics, which we show converges to the Nash equilibrium for the class of strictly concave-convex differentiable functions with globally Lipschitz gradients. The technical approach combines concepts from algebraic graph theory, nonsmooth analysis, consensus algorithms, set-valued dynamical systems, and game theory.
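The saddle-point mechanism at the core of such strategies can be illustrated on a centralized scalar example (the function f below is an invented strictly concave-convex test case; the talk's distributed two-network dynamics are not reproduced here). The maximizer ascends and the minimizer descends its own gradient, and the trajectory spirals into the saddle (Nash) point:

```python
def saddle_point_dynamics(x0=3.0, y0=-2.0, dt=0.01, steps=3000):
    """Gradient ascent-descent sketch on f(x, y) = -x^2 + 2xy + y^2, which is
    strictly concave in the maximizer x and strictly convex in the minimizer
    y, with unique saddle point (0, 0).
    """
    x, y = x0, y0
    for _ in range(steps):
        dx = -2 * x + 2 * y      # ascend in x:  dx/dt =  df/dx
        dy = -(2 * x + 2 * y)    # descend in y: dy/dt = -df/dy
        x += dt * dx
        y += dt * dy
    return x, y

x_star, y_star = saddle_point_dynamics()
```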
Randomized Methods for Network Security Games - Joao Hespanha
This talk addresses the solution of large zero-sum matrix games using randomized methods. We formalize a procedure -- termed the Sampled Security Policy (SSP) algorithm -- by which a player can compute policies that, with high probability, guarantee a certain level of performance against an adversary engaged in a random exploration of the game's decision tree.
The SSP algorithm has applications to numerous combinatorial games in which decision makers are faced with a number of possible options that increases exponentially with the size of the problem. In this talk we focus on an application in the area of network security, where system administrators need to consider multi-stage, multi-host attacks that may consist of long sequences of actions by an attacker attempting to circumvent the system defenses. In practice, this leads to policy spaces that grow exponentially with the number of stages involved in an attack.
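A toy sketch of the sampling idea (the game size, sample count, and uniform random payoffs below are illustrative assumptions; the actual SSP algorithm operates on a game's decision tree with probabilistic guarantees). A player evaluates only a random subset of its pure policies and picks the best worst-case one; by construction the sampled security level can never exceed the exact one:

```python
import random

def security_levels(n_rows=200, n_cols=200, n_samples=30, seed=1):
    """Instead of searching all of a large policy space, the row player
    samples a subset of its pure policies and picks the best worst-case
    (maxmin) one among the samples.
    """
    rng = random.Random(seed)
    # random zero-sum game payoff matrix (row player maximizes)
    A = [[rng.uniform(-1, 1) for _ in range(n_cols)] for _ in range(n_rows)]
    full = max(min(row) for row in A)                  # exact pure security level
    sampled_rows = rng.sample(range(n_rows), n_samples)
    sampled = max(min(A[i]) for i in sampled_rows)     # sampled estimate
    return sampled, full

sampled, full = security_levels()
```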
New Results on Passivity-based Pose Synchronization - Masayuki Fujita
This talk is on passivity-based pose synchronization, work that has been carried out in collaboration with Mark Spong during the last few years. We first introduce our early work, which was strongly inspired by his talk at Tokyo Tech in 2005. Then, we show new results on the topic, where pose synchronization is achieved in a fully autonomous setting without any help from absolute information. The idea to meet the objective is to apply stability theory of perturbed systems to the present framework while viewing couplings between position and orientation evolution as perturbations. Finally, we present our future perspective on the work, namely to integrate it with a vision-based observer that was also developed in our collaboration.
Synchronization and Pattern Formation in Diffusively Coupled Systems - Murat Arcak
We discuss spatially distributed networks that exhibit a diffusive coupling structure, common in biomolecular networks and multi-agent systems. We first review conditions that guarantee spatial homogeneity of the solutions of these systems, referred to as "synchrony." We next point to structural system properties that allow diffusion-driven instability -- a phenomenon critical to pattern formation in biology -- and show that an analogous instability mechanism exists in multi-agent systems. The results reviewed in the talk also demonstrate the role played by the Laplacian eigenvalues in determining the dynamical properties of diffusively coupled systems. We conclude with a discussion of how these eigenvalues can be assigned with a design of node and edge weights of a graph, and present a formation control example.
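The role of the Laplacian eigenvalues can be sketched with diffusive coupling on a path graph (the graph, initial condition, and step size below are illustrative choices). Under x' = -Lx the node states converge to the synchronous consensus value at a rate set by the second-smallest Laplacian eigenvalue, while the average is conserved:

```python
def diffusive_consensus(n=6, dt=0.1, steps=400):
    """Diffusive coupling x' = -L x on a path graph of n nodes: solutions
    converge to the synchronous (consensus) state at a rate governed by the
    algebraic connectivity (second-smallest Laplacian eigenvalue).
    """
    x = [float(i * i) for i in range(n)]      # arbitrary initial condition
    mean0 = sum(x) / n
    for _ in range(steps):
        # (L x)_i = sum over neighbors j of (x_i - x_j); path-graph neighbors
        Lx = [sum(x[i] - x[j] for j in (i - 1, i + 1) if 0 <= j < n)
              for i in range(n)]
        x = [x[i] - dt * Lx[i] for i in range(n)]
    spread = max(x) - min(x)
    return spread, sum(x) / n, mean0

spread, mean_final, mean0 = diffusive_consensus()
```

The step size must satisfy dt < 2 / lambda_max for stability of the explicit scheme; for a path graph lambda_max < 4, so dt = 0.1 is safely inside that bound.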
Synchronization in Oscillator Networks and Smart Grids - Francesco Bullo
The emergence of synchronization in complex networks of coupled oscillators is a pervasive topic in numerous scientific disciplines including biology, physics, chemistry, and engineering. A coupled-oscillator network is characterized by a population of heterogeneous oscillators and a graph describing the interaction among the oscillators. These two ingredients give rise to rich dynamic behaviors that have fascinated the scientific community for decades. Nikhil Chopra and Mark Spong wrote a wonderful article on this subject a few years ago establishing the exponential convergence properties of the so-called Kuramoto model for coupled oscillators.
In this talk I will present joint work with Florian Dörfler and John Simpson on novel algebraic conditions for synchronization. The results exploit elegant connections among the theory of coupled oscillators, the graph-theoretical properties of electric circuits, and multiagent dynamical systems. Our results are relevant in the context of future power grids subject to renewable stochastic power sources: assessing the existence, stability, optimality, and robustness of synchronous states is a pervasive topic in the study and operation of power networks.
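The Kuramoto model mentioned above is easy to simulate (the population size, frequency spread, and coupling gain below are illustrative choices). When the coupling K is large relative to the spread of natural frequencies, the phases lock and the order parameter r approaches 1:

```python
import cmath
import math
import random

def kuramoto(n=10, K=4.0, dt=0.01, steps=5000, seed=2):
    """All-to-all Kuramoto model: n phase oscillators with heterogeneous
    natural frequencies and coupling strength K. For K large relative to
    the frequency spread, the phases lock and coherence r -> ~1.
    """
    rng = random.Random(seed)
    omega = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = [omega[i] + (K / n) * sum(math.sin(theta[j] - theta[i])
                                           for j in range(n))
                  for i in range(n)]
        theta = [theta[i] + dt * dtheta[i] for i in range(n)]
    # order parameter r = |(1/n) sum_j exp(i theta_j)| measures coherence
    r = abs(sum(cmath.exp(1j * t) for t in theta)) / n
    return r

r = kuramoto()
```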
An Internal Model Principle for Synchronization in Heterogeneous Multi-Agent Systems - Frank Allgower
Distributed control and coordination in groups of dynamical systems has evolved into one of the major areas of modern control theory and application. In a group of physical systems, there will hardly be two individuals that are exactly identical. Systems may be structurally different, e.g., due to different types of actuators, or they may have different parameter values, such as friction or damping coefficients. Despite this fact, synchronization and consensus can be achieved in many cases. Therefore, we ask what requirements the individual dynamical systems in a group have to fulfill in order to be able to synchronize in a meaningful way. It turns out that an internal model principle for synchronization can be derived that relates the problem of output synchronization to the theory of output regulation. Output synchronization among non-identical systems using diffusive couplings is possible only if all individual systems, together with their local controllers, contain an internal model of some common virtual exosystem. Necessary conditions are found that are expressed in terms of linear matrix equations, known as the Francis equations, in the linear case, and nonlinear partial differential equations, known as the FBI equations, in the nonlinear case. Furthermore, based on these necessary conditions, constructive controller design methods are proposed for the linear case.
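In the linear case, the Francis equations Pi S = A Pi + B Gamma, C Pi = Q form a linear system in the entries of Pi and Gamma and can be solved directly. A sketch for an invented two-state agent tracking a harmonic exosystem (all matrices below are illustrative choices, not from the talk); for this particular data the solution works out to Pi = I and Gamma = [-1, 1]:

```python
def solve_linear(M, b):
    """Gaussian elimination with partial pivoting (stdlib-only helper)."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def solve_francis():
    """Solve the Francis equations  Pi S = A Pi + B Gamma,  C Pi = Q  for a
    two-state agent x' = A x + B u, e = C x - Q w, tracking the harmonic
    exosystem w' = S w (the internal model the agent must embed).
    """
    A = [[0.0, 1.0], [0.0, -1.0]]
    B = [0.0, 1.0]
    C = [1.0, 0.0]
    S = [[0.0, 1.0], [-1.0, 0.0]]
    Q = [1.0, 0.0]
    # unknowns z = [Pi00, Pi01, Pi10, Pi11, Gamma0, Gamma1]
    def p(i, k):
        return 2 * i + k
    M = [[0.0] * 6 for _ in range(6)]
    rhs = [0.0] * 6
    row = 0
    for i in range(2):          # entries of Pi S - A Pi - B Gamma = 0
        for j in range(2):
            for k in range(2):
                M[row][p(i, k)] += S[k][j]
                M[row][p(k, j)] -= A[i][k]
            M[row][4 + j] -= B[i]
            row += 1
    for j in range(2):          # entries of C Pi = Q
        for k in range(2):
            M[row][p(k, j)] += C[k]
        rhs[row] = Q[j]
        row += 1
    return solve_linear(M, rhs)

z = solve_francis()   # [Pi00, Pi01, Pi10, Pi11, Gamma0, Gamma1]
```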
Better feedback control of bipedal locomotion - Aaron Ames
This talk presents the process of formally achieving bipedal robotic walking through controller synthesis inspired by human locomotion, and demonstrates these methods through experimental realization on multiple bipedal robots. Motivated by the hierarchical control present in humans, we claim that the essential information needed to understand walking is encoded by a simple class of functions canonical to human walking. In other words, we view the human as a complex system, or "black box," and outputs of this system (as computed from human locomotion data) are presented that appear to characterize its behavior, thus yielding a low-dimensional characterization of human walking. By considering the equivalent outputs for the bipedal robot, a nonlinear controller can be constructed that drives the outputs of the robot to the outputs of the human; moreover, the parameters of this controller can be optimized so that stable robotic walking is provably achieved while simultaneously producing outputs of the robot that are as close as possible to those of a human. The end result is the automatic generation of bipedal robotic walking that is remarkably human-like and is experimentally realizable, as will be evidenced by the demonstration of the resulting controllers on multiple robotic platforms.
Ten Years of Interconnection and Damping Assignment PBC of Mechanical Systems - Romeo Ortega
The IDA-PBC design technique for mechanical systems was introduced in a paper with Mark Spong in 2002. To date the paper has more than 400 citations, witnessing its wide acceptance by the control community. In this talk we review the recent results reported on this technique.
Collision Avoidance for Multi-Vehicle Systems - Dusan Stipanovic
In this talk I will present a number of results and contributions to safe, that is, collision-free, coordination and control of multiple-vehicle systems, which were developed in collaboration with Professor Mark W. Spong. I will emphasize his insights, guidance, and support, which resulted in a number of theoretical contributions as well as practical implementations on testbeds with both ground and aerial unmanned vehicles.
Adaptive Learning Structures for Real-Time Optimal Control and Differential Games - Frank Lewis
This talk will discuss some new adaptive control structures for learning online the solutions to optimal control problems and multi-player differential games. Techniques from reinforcement learning are used to design a new family of adaptive controllers based on actor-critic learning mechanisms that converge in real time to optimal control and game theoretic solutions. Continuous-time systems are considered.
Optimal feedback control design has been responsible for much of the successful performance of engineered systems in aerospace, industrial processes, vehicles, ships, robotics, and elsewhere since the 1960s. H-infinity control has been used for robust stabilization of systems with disturbances. Optimal feedback control design is performed offline by solving optimal design equations including the algebraic Riccati equation and the Game ARE. It is difficult to perform optimal designs for nonlinear systems since they rely on solutions to complicated Hamilton-Jacobi-Bellman or HJI equations. Finally, optimal design generally requires that the full system dynamics be known.
Optimal Adaptive Control. Adaptive control has provided powerful techniques for online learning of effective controllers for unknown nonlinear systems. In this talk we discuss online adaptive algorithms for learning optimal control solutions for continuous-time linear and nonlinear systems. This is a novel class of adaptive control algorithms that converge to optimal control solutions by online learning in real time. In the linear quadratic (LQ) case, the algorithms learn the solution to the ARE by adaptation along the system motion trajectories. In the case of nonlinear systems with general performance measures, the algorithms learn the (approximate smooth local) solutions of HJ or HJI equations. The algorithms are based on actor-critic reinforcement learning techniques. Methods are given that adapt to optimal control solutions without knowing the full system dynamics. Application of reinforcement learning to continuous-time (CT) systems has been hampered because the system Hamiltonian contains the full system dynamics. Using a technique known as Integral Reinforcement Learning (IRL), we will develop reinforcement learning methods that do not require knowledge of the system drift dynamics.
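The fixed point that such online algorithms converge to can be illustrated with the classical offline, model-based Kleinman policy iteration on a scalar LQ problem (a standard baseline, not the talk's adaptive IRL algorithm; all parameters below are illustrative). Policy evaluation solves a scalar Lyapunov equation, policy improvement updates the gain, and the iterates converge to the ARE solution:

```python
import math

def kleinman_scalar(a=1.0, b=1.0, q=1.0, r=1.0, K0=2.0, iters=10):
    """Model-based policy iteration (Kleinman) for scalar x' = a x + b u with
    cost integral of (q x^2 + r u^2): alternate policy evaluation (a scalar
    Lyapunov equation) and policy improvement, converging to the positive
    solution of the ARE  2 a P - b^2 P^2 / r + q = 0.
    """
    K = K0                                   # must be stabilizing: a - b*K < 0
    for _ in range(iters):
        # policy evaluation: 2 (a - b K) P + q + r K^2 = 0
        P = (q + r * K * K) / (2.0 * (b * K - a))
        # policy improvement: K = b P / r
        K = b * P / r
    return P, K

P, K = kleinman_scalar()
# for a = b = q = r = 1 the ARE reads 2P - P^2 + 1 = 0, so P* = 1 + sqrt(2)
P_exact = 1.0 + math.sqrt(2.0)
```

The adaptive algorithms discussed in the talk reach this same fixed point by learning along system trajectories, without requiring the drift dynamics a.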
Online Algorithms for Zero-Sum Games. We will develop new adaptive control algorithms for solving zero-sum games online for continuous-time dynamical systems. Methods based on reinforcement learning policy iteration will be used to design adaptive controllers that converge to the H-infinity control solution in real-time. An algorithm will be given for partially known systems where the drift dynamics is not known.
Cooperative/Non-Cooperative Multi-Player Differential Games. New algorithms will be presented for solving online non zero-sum multi-player games for continuous-time systems. We use an adaptive control structure motivated by reinforcement learning policy iteration. Each player maintains two adaptive learning structures, a critic network and an actor network. The parameters of these two networks are tuned based on the actions of the other players in the team. The result is an adaptive control system that learns based on the interplay of agents in a game, to deliver true online gaming behavior.
On Quadratic Convergence and the (Non-)Existence of Minimizing Trajectories - John Hauser
There are many trajectory optimization problems including those seeking minimum time solutions that we expect to have a minimizing trajectory. This intuition is often "confirmed" in calculations. What evidence is appropriate here and what conclusions and hints may we draw from it? We argue that the second derivative (or differential) of a trajectory functional (incorporating the system dynamics and constraints) provides strong evidence for (and against) the existence of a (locally) minimizing trajectory. This story will be illustrated by the search for minimum-time race-line trajectories through a chicane for two simplified vehicle models. Surprisingly, there is strong evidence that there is NO minimizing trajectory for the vehicle that includes a model of steering forces. This situation is abstracted to a very simple dynamics/objective setting where it can be shown that no minimizing trajectory exists.
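A classical Bolza-type example (not the talk's chicane problem; chosen here as a textbook illustration of the phenomenon) shows how a cost can be driven toward an infimum that no trajectory attains: for J(x) = integral over [0,1] of ((x'^2 - 1)^2 + x^2) dt, sawtooth trajectories with slope +/-1 make the first term vanish and give J = 1/(12 n^2) for n teeth, so J -> 0, while the limiting trajectory x = 0 has J = 1:

```python
def sawtooth_cost(n_teeth, samples_per_tooth=200):
    """Cost J(x) = integral of (x'(t)^2 - 1)^2 + x(t)^2 over [0,1] for a
    sawtooth x with n_teeth teeth of slope +/-1 (so the first term is zero).
    Analytically J = 1/(12 n^2): refining the sawtooth drives J toward the
    infimum 0, which no admissible trajectory attains.
    """
    half = 1.0 / (2 * n_teeth)            # rise time of each tooth
    total, m = 0.0, samples_per_tooth
    for _ in range(n_teeth):
        # one tooth: x rises 0 -> half, then falls back symmetrically;
        # integrate x^2 on the rising half by the midpoint rule, double it
        for s in range(m):
            t = (s + 0.5) / m * half
            total += 2 * t * t * (half / m)
    return total

costs = [sawtooth_cost(n) for n in (1, 4, 16)]
```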
Updated: February 28, 2013
Copyright © 2011 The University of Texas at Dallas