Almost all projects in our lab fall within the realm of decision intelligence: the art of building intelligence hierarchies capable of making decisions with limited information. At times, these hierarchies may choose to collect more data (explore), generalize from current understanding (exploit), or augment decisions with prediction (infer). The following projects are foundational to decision intelligence.
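The explore/exploit tension above can be made concrete with a classic toy problem: a multi-armed bandit. The sketch below (a hypothetical illustration, not lab code; all names and parameters are our own) uses an epsilon-greedy rule, occasionally sampling a random arm (explore) and otherwise pulling the arm with the best reward estimate so far (exploit).

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Balance exploration and exploitation on a simple multi-armed bandit."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n          # pulls per arm
    estimates = [0.0] * n     # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore: random arm
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit: best estimate
        reward = true_means[arm] + rng.gauss(0, 0.1)         # noisy observation
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
    return estimates

est = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

After enough pulls, the estimates concentrate near the true means and the agent settles on the best arm, while the occasional random pull guards against locking onto a bad early guess.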
Reinforcement learning is a machine learning paradigm in which intelligent agents learn for themselves. (Agents can be computer programs, robots, humans, etc.) Agents take actions in an environment (real or virtual) and are rewarded based on their performance. At first, an agent makes many random (and frankly silly) choices, but over time it is "reinforced" through rewards for good decisions. Our research seeks to improve the state of the art in modern reinforcement learning.
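A minimal sketch of this learn-by-reward loop is tabular Q-learning on a toy "chain" environment (our own illustrative example, not the lab's method): the agent starts at one end, can step left or right, and is rewarded only at the far end. Early behavior is random; repeated reward updates eventually reinforce the good policy of always moving right.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a chain: actions 0 (left) / 1 (right), reward 1 at the right end."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action] value estimates
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            a = rng.randrange(2) if rng.random() < epsilon else max((0, 1), key=lambda x: Q[s][x])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # standard Q-learning update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning_chain()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]  # greedy action per state
```

After training, the greedy policy chooses "right" in every non-terminal state, even though the agent was never told the rule explicitly; the reward signal alone shaped it.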
Learning Heuristics for Motion Planning
Searching a space is often exhaustive; in other words, a computer must consider each and every possibility of where something can be. Often, a guiding function, or heuristic, can speed up the search; however, a poorly chosen heuristic can be detrimental, yielding a less-than-ideal solution. This research uses machine learning and modern AI tools to build heuristic functions suitable for high-speed search without losing optimality.
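To make the trade-off concrete, here is a small A* search on a grid (a generic textbook sketch, not this project's planner). With the heuristic set to zero the search is uninformed and exhaustive; with a heuristic that never overestimates the remaining cost (admissible, here Manhattan distance), A* expands far fewer nodes yet still returns an optimal path. A heuristic that overestimates would be faster still, but could return a worse path.

```python
import heapq

def astar(grid, start, goal, h):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    h estimates cost-to-go; if h never overestimates, the returned cost is optimal.
    Returns (path cost, number of nodes expanded)."""
    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, node)
    best_g = {start: 0}
    expanded = 0
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g, expanded
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry; a cheaper route was found already
        expanded += 1
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None, expanded

grid = [[0] * 10 for _ in range(10)]
goal = (0, 9)
manhattan = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
cost_astar, exp_astar = astar(grid, (0, 0), goal, manhattan)
cost_blind, exp_blind = astar(grid, (0, 0), goal, lambda n: 0)  # h = 0: uninformed
```

Both calls find the same optimal cost of 9, but the informed search expands only the cells along the direct route while the uninformed one fans out over most of the grid. A learned heuristic aims to get this kind of speedup in spaces where no clean hand-written heuristic exists.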
Decisionomics (economic decisioning)
Economic policy-making is complicated. One must consider social impacts: health, incomes, equity disparities, products, capital, labor, and almost anything else you can think of in a complex human system. Information often arrives slowly, fluctuates randomly, and new data can quickly disrupt old models and old thinking. This research addresses core concepts of decision-making under uncertainty for financial risk and social systems.
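One core concept of decision-making under uncertainty is updating a belief as noisy data arrives, then choosing the option with the best expected outcome. The sketch below is a hypothetical illustration (the policy names, numbers, and noise level are invented for the example): a conjugate Gaussian update combines a prior belief about each policy's payoff with a few noisy observations, and the decision follows the posterior means.

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update: blend a prior belief with noisy observations,
    weighting each by its precision (inverse variance)."""
    n = len(obs)
    sample_mean = sum(obs) / n
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / obs_var)
    return post_mean, post_var

# Two candidate policies with uncertain payoff; beliefs start identical,
# and a handful of noisy observations shifts the decision.
prior = (0.0, 1.0)              # (prior mean, prior variance) for both policies
data_a = [0.1, -0.2, 0.0]       # observed outcomes under policy A
data_b = [0.6, 0.8, 0.7]        # observed outcomes under policy B
mean_a, _ = posterior_mean(*prior, data_a, obs_var=0.25)
mean_b, _ = posterior_mean(*prior, data_b, obs_var=0.25)
choice = "B" if mean_b > mean_a else "A"
```

With only three observations each, the posterior means stay pulled toward the shared prior; as more data arrives, the data term dominates and the decision becomes more confident. That graceful handling of slow, noisy information is exactly the regime this project studies.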