Collaborative Intelligence

The research in Alelab is articulated around the notion of collaborative intelligence. A collaborative intelligence system is an autonomous team that collaborates without central coordination in a manner that we can call intelligent. Somewhat fortuitously, this problem specification is well aligned with the three components of my research program. To enable collaborative intelligence we need to sort out the problem of how to learn in distributed systems — at least a part of which is about distributed optimization. For the members of the team to collaborate, they need wireless communication support — one, however, that is not predesigned but is autonomously self-configured in support of the task at hand, as in my work on communications for control and robotics. As the team moves through the environment it acquires data, but the structure of that data is more accurately described by a graph than by the uniform grids that describe time signals and images — as in my work on graph signal processing.

When looking at the present this is a somewhat meaningless distinction. It is simply an alternative taxonomy that is equally successful at describing the components of my research program. When looking towards the future, however, the distinction does become relevant. A research program driven by the tension between global and local properties, actions, and behaviors is a program centered around the exploitation of my technical expertise. A research program on collaborative intelligence is one driven by a technological application, which refocuses the objective of my future research efforts. In this light, my efforts on communications for control and robotics evolve towards the development of wireless autonomous systems. My work on optimization evolves towards the development of algorithms for collaborative learning. And my work on graph signal processing refocuses on the analysis of signals and information supported on unconventional structures. These are the three components of my future research agenda, which I preview in the following.

Wireless Autonomous Systems

The goal of this project is to advance an integrated approach to the joint design of wireless interfaces and autonomous systems. I have long advocated joint design and have obtained significant preliminary results in control systems and mobile autonomous systems. Building on this preliminary work, I will proceed along the following two research vectors:

Co-design. A wireless autonomous system is not an autonomous system that runs on a wireless network; the network is the autonomous system. The wireless network must be able to configure itself autonomously by determining transmit opportunities, selecting suitable transmit modes, and negotiating appropriate packet routes. These autonomous networks must make decisions that are conducive to accomplishing the task assigned to the autonomous system (controlling an industrial robot, supporting a group of first responders, or configuring a highway platoon). We formulate autonomy as the joint selection of control actions and wireless communication actions subservient to task accomplishment, as sketched schematically after these two items.

Learning. Autonomous systems must adapt to randomly varying conditions in complex environments. We will perform this adaptation through the development of advanced learning techniques for the adaptive control of wireless autonomous systems. These include offline model-based approaches, online data-driven learning, and reinforcement learning using sparse kernels and convolutional neural networks.
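One schematic way to write the joint selection referenced above, offered only as an illustration (the state x_t, control action u_t, communication action alpha_t, disturbance w_t, cost c, dynamics f, delivery probability q, channel state h_t, and resource budget P are placeholder symbols rather than quantities defined in this statement), is a single constrained problem in which control and communication variables appear together:

\begin{align}
\min_{\{u_t\},\, \{\alpha_t\}} \quad & \mathbb{E}\Big[ \sum_{t} c(x_t, u_t) \Big]
    && \text{(task cost, e.g., control performance)} \\
\text{subject to} \quad & x_{t+1} = f(x_t, u_t) + w_t
    && \text{(system or team dynamics)} \\
& u_t \text{ delivered with probability } q(\alpha_t, h_t)
    && \text{(wireless link under random fading)} \\
& \mathbb{E}\Big[ \sum_{t} \alpha_t \Big] \le P
    && \text{(shared communication resources)}
\end{align}

The only point of the sketch is that the communication actions alpha_t (transmit opportunities, transmit modes, packet routes) enter the same problem as the control actions u_t, so the network configures itself in service of the task rather than for a generic throughput objective. The second research vector concerns learning policies that approximate solutions to problems of this form when the models f, q, and h_t are unavailable or only partially known.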

Put together, these innovations mean that we should stop thinking of autonomous systems that run on top of wireless networks. Rather, the network itself is the autonomous system and must identify communication and routing policies of its own accord. These research ideas are the core of the Intel Science and Technology Center on Wireless Autonomous Systems. They are also an important component of the Army Research Lab Collaborative Research Alliance on Distributed Collaborative Intelligent Systems Technology.

Distributed Collaborative Learning

Learning is a process in which we transform one representation of a dataset — a collection of training samples — into a representation that is adapted for a particular task — a neural network trained for object recognition, or a network trained for obstacle avoidance. In multiagent systems different agents have access to different data and need not have identical individual goals. The fact that they collaborate means they want to accomplish the same group goal, but this most likely entails different specific goals for each individual. Such variation in the availability of data and in the purpose of the representation suggests that distributed collaborative learning is characterized by the coexistence of multiple representations. My research is concerned with finding representations that are composable, hierarchical, and sufficient:

Composable. We want learning to be composable so that different agents can maintain different local representations, yet are able to aggregate them into common representations when the need and the opportunity arise; a minimal sketch of one such aggregation follows this list.

Hierarchical. Storing different representations for different tasks can be costly. We want our representations to be part of nested hierarchies so that adaptation to different tasks is tantamount to moving up and down a tree of hierarchical representations.

Sufficient. We typically think of learning as finding a representation of a dataset that is optimal for accomplishing some task. We can alternatively think of finding a representation that is sufficient for solving the task at hand. The latter thinking is not only more in tune with the time, computation, communication, and energy restrictions typical of multiagent systems but is also seen as an enabler of composable and hierarchical representations.
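As a minimal sketch of what composability can look like, and only as a sketch, the code below averages the parameters of local models held by different agents into a common model, in the spirit of federated averaging. The function name, the weighting rule, and the random local models are assumptions made for this example; they are not the representations this program will develop.

import numpy as np

def compose(local_params, weights=None):
    """Aggregate local parameter vectors into one common representation."""
    # local_params: list of 1-D arrays of equal length, one per agent.
    # weights: optional per-agent weights, e.g., proportional to local data size.
    params = np.stack(local_params)              # shape (num_agents, dim)
    if weights is None:
        weights = np.ones(len(local_params))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # convex combination over agents
    return weights @ params                      # weighted average of parameters

# Three agents with different local data hold different local representations.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=8) for _ in range(3)]

# When the need and the opportunity arise, they compose a common representation,
# weighting agents by (hypothetical) local sample counts.
common_model = compose(local_models, weights=[100, 50, 25])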

My current effort is in developing representations that have the properties listed above. Over time, the design of algorithms to compose representations and to adapt to the hierarchy tree is also of interest. These ideas are part of the Army Research Lab Collaborative Research Alliance on Distributed Collaborative Intelligent Systems Technology.

Machine Learning on Network Data

Signal structure is a fundamental enabler for the characterization and extraction of information. For the specific case of time signals and images we have well-developed tools in statistics, signal processing, and information theory. These tools do not generalize well to data sources that will become pervasive in distributed intelligent systems. These are signals whose structure is more accurately described by graphs or homological properties, motivating research along the following two directions:

Graph structured signals. These arise when data is collected over an irregular domain that renders the conventional notions of smoothness and proximity inapplicable. We resort to endowing the signal with a support graph that defines proximity between different elements and search for representations over these irregular domains. Linear processing of graph structured signals is well understood by now, although important questions are still being investigated. Work on nonlinear processing is minimal. An important focus of my group is the generalization of convolutional neural networks to signals supported on graphs; a minimal sketch of the underlying graph filtering operation follows this list.

Homological structure. We build on graph descriptions by lifting graphs to higher-order structures (simplicial and cell complexes) and enriching linear-algebraic techniques with tools from homological algebra. Homological features such as connected components and holes give formal handles on abstract structures that can be manipulated to characterize and extract information; a small worked example closes this section.
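To make the first of these directions concrete, the sketch below implements a polynomial graph filter, the linear operation that underlies many convolutional architectures for graph signals: the signal is repeatedly diffused by a graph shift operator S and the diffused copies are mixed by filter coefficients. The particular graph, signal, and coefficients are chosen purely for illustration.

import numpy as np

def graph_filter(S, x, h):
    """Apply the polynomial graph filter y = sum_k h[k] * S^k x."""
    # S: (n, n) graph shift operator (e.g., adjacency or Laplacian matrix).
    # x: (n,) graph signal, one value per node.
    # h: sequence of K+1 filter coefficients.
    y = np.zeros(len(x))
    Skx = np.asarray(x, dtype=float)     # S^0 x
    for hk in h:
        y += hk * Skx                    # accumulate h_k S^k x
        Skx = S @ Skx                    # one more diffusion step: S^(k+1) x
    return y

# Illustrative 4-node cycle graph and an impulse signal on its first node.
S = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
x = np.array([1., 0., 0., 0.])

# A graph convolutional layer composes such filters with a pointwise nonlinearity.
y = np.tanh(graph_filter(S, x, h=[0.5, 0.3, 0.2]))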

Put together, the two directions above delineate what I deem to be the next frontier in Information Processing: Signals with unconventional structure. These research ideas are the core of my Army Research Office project on Geometric and Graph Structures in Information Characterization and Extraction, as well as of my National Science Foundation project on Metric Representations of Network Data.
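As the small worked example of the second direction promised above, the code below computes Betti numbers over the reals from boundary matrices: beta_0 counts connected components and beta_1 counts holes, so a four-node cycle yields one component and one hole. The complex and the helper function are constructed only for this illustration.

import numpy as np

def betti_numbers(boundary_maps, num_simplices):
    """Betti numbers over the reals: beta_k = dim C_k - rank d_k - rank d_(k+1)."""
    # boundary_maps: dict mapping k to the matrix of d_k, the boundary map
    #                from k-chains to (k-1)-chains (d_0 is the zero map).
    # num_simplices: dict mapping k to the number of k-simplices.
    betti = {}
    for k, n_k in num_simplices.items():
        rank_k = np.linalg.matrix_rank(boundary_maps[k]) if k in boundary_maps else 0
        rank_up = np.linalg.matrix_rank(boundary_maps[k + 1]) if k + 1 in boundary_maps else 0
        betti[k] = n_k - rank_k - rank_up
    return betti

# Four nodes joined in a cycle by edges (0,1), (1,2), (2,3), (3,0). Rows index
# nodes, columns index oriented edges: -1 at the tail, +1 at the head.
d1 = np.array([[-1.,  0.,  0.,  1.],
               [ 1., -1.,  0.,  0.],
               [ 0.,  1., -1.,  0.],
               [ 0.,  0.,  1., -1.]])

print(betti_numbers({1: d1}, {0: 4, 1: 4}))   # {0: 1, 1: 1}: one component, one hole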