Placeholder image

Machine learning (ML) has already made significant progress in modeling the world around us, for instance by detecting physical objects and understanding natural language. Following these developments, we are now attempting to use ML to make decisions for us in application areas such as robotics. However, when we try to integrate these black-box ML models into such high-stakes physical systems, doubts arise among end users: can we trust the decisions made by autonomous systems without understanding them? Can we fix these systems when they behave in unintended ways, or design legal guidelines for them, if we do not understand how they work? The focus of LENS Lab is to make learning-based automation explainable to end users, engineers, and legislative bodies alike. We develop tools and techniques at three different stages of automation to address the following questions.

  1. Before deployment: Can we explain how a learning-based autonomous system will fail?
  2. During deployment: Can we explain what actions can be taken with certainty during deployment?
  3. After deployment: Can we explain why a system made a particular decision after that decision has been made?


Placeholder image




Before Deployment: Ex Ante Analysis for Finding Possible Failure Modes

We know that real-world systems equipped with machine learning sometimes do not work as we expect. We cannot simply discard them because they fail "sometimes"; after all, they work most of the time. What we can do is characterize what that "sometimes" is. Knowing how and when (i.e., under which conditions) they fail 1) helps us build better models and 2) lets us use them strategically to suit the application.

Auditing Large Language Models for Discrimination

Large language models trained on data scraped from the internet can exhibit undesirable behaviors such as discrimination. In this work, we develop an interactive AI auditing tool that evaluates such models for discrimination in a human-in-the-loop fashion. Given the task of generating reviews from a few words, we can evaluate how much toxicity the model exhibits with respect to gender, race, etc. The tool outputs a summary report.

Placeholder image
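
As a rough illustration of the auditing workflow described above, the sketch below generates completions for prompts that differ only in a demographic term and aggregates a toxicity score per group. The `generate` and `toxicity_score` functions are stand-ins (assumptions) for the language model under audit and a toxicity classifier; the template and group list are illustrative only.

```python
# Minimal sketch of a group-wise toxicity audit (not the tool itself).
import random
from statistics import mean

def generate(prompt: str) -> str:
    # Placeholder: in practice, call the language model under audit.
    return prompt + " ... generated review text"

def toxicity_score(text: str) -> float:
    # Placeholder: in practice, call a toxicity classifier (score in [0, 1]).
    return random.random()

TEMPLATE = "Write a short review about the {group} chef's new restaurant:"
GROUPS = ["female", "male", "Black", "white", "Asian"]  # illustrative only
N_SAMPLES = 50

report = {}
for group in GROUPS:
    prompt = TEMPLATE.format(group=group)
    scores = [toxicity_score(generate(prompt)) for _ in range(N_SAMPLES)]
    report[group] = mean(scores)

# Summary report: groups ranked by average toxicity of generated text.
for group, score in sorted(report.items(), key=lambda kv: -kv[1]):
    print(f"{group:>8s}: mean toxicity = {score:.3f}")
```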

Finding How and When the Deep Learning Models Fail Using Reinforcement Learning

Autonomous cars fail in adverse weather. The perception modules of an autonomous car use more complex deep learning models than the control modules and are therefore more susceptible to failures. Because of this complexity, exhaustively finding all failure modes is not practical. Hence, we formulate a method to find the most likely failure modes. To this end, we construct a reward function that favors failures with high probabilistic likelihood and use reinforcement learning to search for them; a minimal sketch of this reward shaping is given below.

  • Demonstrating how the object detection, object tracking, and trajectory prediction modules of an autonomous car fail under different patterns and levels of rain [IROS’22]
Representative paper: How Do We Fail? Stress Testing Perception in Autonomous Vehicles
Placeholder image
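
The sketch below illustrates the reward-shaping idea under stated assumptions: an RL agent proposes environment disturbances (e.g., a rain rate), and the reward prefers disturbances that are both likely under a disturbance model and lead to perception failures. The disturbance model, constants, and function names are illustrative, not the exact formulation in the paper.

```python
# Minimal sketch of likelihood-aware failure search (assumed reward form).
import numpy as np

def reward(disturbance, failed: bool, log_prob_fn, failure_bonus=100.0):
    """Reward = log-likelihood of the disturbance + bonus if the system failed."""
    r = log_prob_fn(disturbance)       # likely disturbances score higher
    if failed:
        r += failure_bonus             # strong incentive to expose failures
    return r

# Example disturbance model: rain rate ~ Exponential(scale = 5 mm/h).
log_prob = lambda rain_rate: -rain_rate / 5.0 - np.log(5.0)

# A hypothetical rollout: the RL policy proposes a rain rate, and the
# simulator reports whether the perception stack missed an obstacle.
print(reward(disturbance=12.0, failed=True, log_prob_fn=log_prob))
```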

Analyzing if Deep Learning Models Are Robust Against Adversaries and Unfamiliar Inputs

What happens if we deploy our model in a domain that it has never seen? For instance, if you train an object detection neural network on data collected in the US and deploy it in Australia, what will the neural network do when it sees a kangaroo? Or, if we develop a mobile app for skin cancer detection, how will different lighting conditions and skin colors affect the results? We need ways to characterize these effects before we deploy a neural network in the real world.

  • Out of distribution detection (OOD) in autonomous driving [ITSC’21]
  • Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis [ICML-AdvML’21]
  • Adapting implicit environment representations for new domains [RSS’20]
Representative paper: Out-of-Distribution Detection for Automotive Perception
Placeholder image
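
As a simple illustration of score-based OOD detection, the sketch below thresholds the maximum softmax probability of a classifier. This is a common baseline rather than the specific method in the representative paper; the logits and threshold are illustrative assumptions.

```python
# Minimal sketch of a confidence-based OOD check on classifier logits.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def is_ood(logits, threshold=0.7):
    """Flag an input as out-of-distribution if the classifier is unconfident."""
    msp = softmax(logits).max(axis=-1)   # maximum softmax probability
    return msp < threshold

in_dist_logits  = np.array([[6.0, 1.0, 0.5]])   # one class clearly fits
kangaroo_logits = np.array([[1.1, 1.0, 0.9]])   # nothing fits well
print(is_ood(in_dist_logits), is_ood(kangaroo_logits))  # the unfamiliar input is flagged
```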

During Deployment: Quantifying the Epistemic Uncertainty ("Unknown Unknowns")

Temporal variations of spatial processes exhibit highly nonlinear patterns, and modeling them is vital in many disciplines. For instance, robots operating in dynamic environments demand richer information for safer and more robust path planning. To model these spatiotemporal phenomena, I develop and utilize theory from deep learning, reproducing kernel Hilbert spaces (RKHS), approximate Bayesian inference such as stochastic variational inference, scalable Gaussian processes, and directional statistics to capture nonlinear patterns and uncertainty. I quantify both model and data uncertainty (i.e., epistemic and aleatoric uncertainty) in small and big data settings.


Uncertainty in Occupancy

Modeling the uncertainty of the occupancy of an environment is important for operating mobile robots. I have developed scalable continuous occupancy maps that can quantify the epistemic uncertainty in both static and dynamic environments.

Representative paper: Bayesian Hilbert Maps for Dynamic Continuous Occupancy Mapping
Placeholder image
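
The sketch below conveys the flavor of a continuous occupancy map with epistemic uncertainty: project range data onto RBF "hinge" features and fit a Bayesian logistic regression. It uses a Laplace approximation as a rough stand-in for the variational inference used in Bayesian Hilbert Maps; the toy data, hinge spacing, kernel width, and prior precision are all assumptions.

```python
# Minimal sketch: continuous occupancy with epistemic uncertainty (stand-in).
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(X, hinges, gamma=0.5):
    """RBF features of 2D points w.r.t. fixed hinge points, plus a bias term."""
    d2 = ((X[:, None, :] - hinges[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.exp(-gamma * d2), np.ones((len(X), 1))])

# Toy laser data: points labeled occupied (1) or free (0); a "wall" at x = 6.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(400, 2))
y = (X[:, 0] > 6).astype(int)

hx, hy = np.meshgrid(np.linspace(0, 10, 8), np.linspace(0, 10, 8))
hinges = np.c_[hx.ravel(), hy.ravel()]
Phi = features(X, hinges)

alpha = 1.0                                        # Gaussian prior precision
clf = LogisticRegression(C=1.0 / alpha, fit_intercept=False, max_iter=1000).fit(Phi, y)
w = clf.coef_.ravel()

# Laplace approximation: posterior covariance of the feature weights.
p = 1.0 / (1.0 + np.exp(-Phi @ w))
H = alpha * np.eye(Phi.shape[1]) + Phi.T @ (Phi * (p * (1 - p))[:, None])
Sigma = np.linalg.inv(H)

def predict(Xq):
    """Mean occupancy probability and epistemic variance at query points."""
    Pq = features(Xq, hinges)
    mu, var = Pq @ w, np.einsum("ij,jk,ik->i", Pq, Sigma, Pq)
    prob = 1.0 / (1.0 + np.exp(-mu / np.sqrt(1.0 + np.pi * var / 8.0)))
    return prob, var

print(predict(np.array([[2.0, 5.0], [8.0, 5.0]])))   # free side vs. occupied side
```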


Uncertainty in Directions

Estimating the directions of moving objects or flow is useful for many applications. I am mainly interested in modeling the multimodal aleatoric uncertainty associated with directions. Summary:

  • Developing a multimodal directional mapping framework [IROS’18]
  • Extending it to model spatiotemporal directional changes [RA-L'19]
  • Developing the Directional Primitive framework for incorporating prior information [ITSC'20]
Representative paper: Directional Primitives for Uncertainty-Aware Motion Estimation in Urban Environments
Placeholder image
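
One concrete way to represent multimodal directional uncertainty is a mixture of von Mises distributions over heading angles. The sketch below evaluates such a mixture; the weights, means, and concentrations are illustrative placeholders for quantities that would be learned from data.

```python
# Minimal sketch of a multimodal distribution over directions.
import numpy as np
from scipy.stats import vonmises

# Two plausible travel directions at an intersection: straight or right turn.
weights = np.array([0.7, 0.3])
means   = np.array([0.0, -np.pi / 2])    # mean headings [rad]
kappas  = np.array([8.0, 4.0])           # concentration (higher = less spread)

def direction_pdf(theta):
    """Mixture density over heading angle theta (radians)."""
    comps = [w * vonmises.pdf(theta, k, loc=m)
             for w, m, k in zip(weights, means, kappas)]
    return np.sum(comps, axis=0)

thetas = np.linspace(-np.pi, np.pi, 9)
for t, p in zip(thetas, direction_pdf(thetas)):
    print(f"theta = {t:+.2f} rad  density = {p:.3f}")
```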


Uncertainty in Velocity

With the advancement of efficient and intelligent vehicles, future urban transportation systems will be filled with both autonomous and human-driven vehicles. Not only will we have driverless cars on the roads, but we will also have delivery drones and urban air mobility systems with vertical take-off and landing capability. How can we model the epistemic uncertainty associated with the velocity and acceleration of vehicles in large-scale 3D transportation systems?

  • Modeling global and local environment dynamics in large-scale transportation systems
Placeholder image


Predicting the Future in Space and Time

Humans subconsciously predict how the space around them evolves in order to make reliable decisions. How can robots predict what will happen around them in the next few seconds? Summary:

  • Proposing kernel methods for propagating uncertainty into the future, especially for static ego-agents [NeurIPS’16, ICRA’19]
  • Predicting the future occupancy maps using ConvLSTMs for moving vehicles in urban settings [arXiv’20]
Representative paper: Double-Prong ConvLSTM for Spatiotemporal Occupancy Prediction in Dynamic Environments
Placeholder image
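
To give a sense of the recurrent spatial models used for occupancy prediction, the sketch below implements a single ConvLSTM cell and rolls it over a short history of occupancy grids. It is a minimal cell only, not the double-prong architecture of the representative paper; channel sizes, kernel size, and the untrained output head are assumptions for illustration.

```python
# Minimal sketch of a ConvLSTM cell for occupancy-grid prediction.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g                 # update cell memory
        h = o * torch.tanh(c)             # new hidden state (spatial feature map)
        return h, c

# Roll the cell over a short history of occupancy grids (B, T, 1, H, W).
cell = ConvLSTMCell(in_ch=1, hid_ch=16)
grids = torch.rand(2, 5, 1, 64, 64)
h = torch.zeros(2, 16, 64, 64)
c = torch.zeros_like(h)
for t in range(grids.shape[1]):
    h, c = cell(grids[:, t], (h, c))
pred = torch.sigmoid(nn.Conv2d(16, 1, 1)(h))  # untrained head, shows output shape only
print(pred.shape)                             # torch.Size([2, 1, 64, 64])
```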

During Deployment: Safe Decision-Making Under Uncertainty

Because we cannot have an ideal model of the environment, robots should take uncertainty into account when making decisions in order to operate safely and robustly. They should consider multiple, if not all, hypotheses when making decisions. For this purpose, I work on propagating uncertainty from perception into decision-making while taking into account the uncertainty of states, dynamics models, etc. For decision-making under uncertainty, I make use of imitation policy learning algorithms, partially observable Markov decision process (POMDP) solvers, model predictive control algorithms, and Bayesian optimization.


Epistemic Uncertainty in Deep Reinforcement Learning

Summary:

  • Disentangling Epistemic and Aleatoric Uncertainty in Reinforcement Learning [arXiv'22]
Placeholder image
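
One common way to separate the two kinds of uncertainty in value estimation is an ensemble: each member predicts a mean return and an (aleatoric) variance, and disagreement between members gives the epistemic part. The sketch below shows only this decomposition; the numbers stand in for network outputs, and this is not necessarily the exact construction used in the paper.

```python
# Minimal sketch of ensemble-based epistemic/aleatoric decomposition.
import numpy as np

# Ensemble predictions for one state-action pair: (mean, variance) per member.
member_means = np.array([10.2, 9.8, 10.5, 7.1, 10.0])
member_vars  = np.array([1.0, 1.2, 0.9, 1.1, 1.0])

aleatoric = member_vars.mean()     # noise inherent to the environment
epistemic = member_means.var()     # disagreement = what the agent does not know
total     = aleatoric + epistemic  # law of total variance (mixture form)

print(f"aleatoric={aleatoric:.2f}  epistemic={epistemic:.2f}  total={total:.2f}")
# A cautious agent might explore where `epistemic` is high and act
# conservatively where `aleatoric` is high.
```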

Decision-Making While Accounting for Diversity and Stochasticity of Human Policies

When humans operate in environments in which they have to follow rules, as in driving, we cannot expect them to adhere to those rules perfectly. Therefore, it is important to take this intrinsic stochasticity into account when making decisions. We specifically focus on developing uncertainty-aware intelligent driver models, which are invaluable both for planning in autonomous vehicles and for validating their safety in simulation. Summary:

  • Modeling human driving behavior through generative adversarial imitation learning [T-ITS'22]
  • Augmenting rule-based intelligent driver behavior models with data-driven stochastic parameter estimation for driving on highways [ACC'20]
  • Improving the efficiency of controllers through adaptive importance sampling [arXiv'22]
Placeholder image
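
The sketch below illustrates the idea of a stochastic rule-based driver model: the standard Intelligent Driver Model (IDM) acceleration is evaluated under sampled driver parameters, yielding a distribution over accelerations instead of a single value. The parameter distributions are illustrative assumptions, not fitted values from the papers above.

```python
# Minimal sketch of an IDM with sampled (stochastic) driver parameters.
import numpy as np

def idm_accel(v, dv, s, v0, T, a_max, b, s0, delta=4.0):
    """Standard IDM acceleration for ego speed v, closing speed dv, and gap s."""
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

rng = np.random.default_rng(0)
n = 1000
# Sample plausible driver parameters (desired speed, headway, comfort limits).
v0    = rng.normal(30.0, 2.0, n)    # desired speed [m/s]
T     = rng.normal(1.5, 0.3, n)     # desired time headway [s]
a_max = rng.normal(1.5, 0.3, n)     # maximum acceleration [m/s^2]
b     = rng.normal(2.0, 0.4, n)     # comfortable braking [m/s^2]
s0    = rng.normal(2.0, 0.5, n)     # minimum gap [m]

# One traffic situation: following a slower car 25 m ahead while doing 28 m/s.
acc = idm_accel(v=28.0, dv=3.0, s=25.0, v0=v0, T=T, a_max=a_max, b=b, s0=s0)
print(f"mean acceleration {acc.mean():.2f} m/s^2, std {acc.std():.2f} m/s^2")
```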


Decision-Making Under State and Observation Uncertainty in Wirelessly Connected Systems

When robots operate in real-world environments, they do not have access to all the information they require to make safe decisions. Some information might be only partially observable or highly uncertain. How can agents account for this uncertainty? How can agents share information with each other to complete their individual tasks safely and efficiently? Summary:

  • Partially observable Markov decision processes (POMDPs) for uncertainty-aware driving [ITSC'22]
  • Sharing what each robot sees is helpful, but communication is expensive. Should we share data, models, or decisions? [ICRA'22]
  • Unifying the robot dynamics with network-level message routing for a team of robots to achieve a common task [RA-L'22]
Placeholder image
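
At the heart of POMDP-based driving is a belief over quantities the agent cannot observe directly, such as another driver's intent, updated from noisy observations. The sketch below shows one such Bayes-filter belief update; the states, transition, and observation probabilities are illustrative assumptions.

```python
# Minimal sketch of a POMDP-style belief update over a latent driver intent.
import numpy as np

states = ["yielding", "aggressive"]
belief = np.array([0.5, 0.5])                      # start fully uncertain

# P(s' | s): intents are mostly persistent from one step to the next.
T = np.array([[0.95, 0.05],
              [0.05, 0.95]])
# P(observation | s'): observations are "decelerating" (0) or "accelerating" (1).
Z = np.array([[0.8, 0.2],     # yielding drivers usually decelerate
              [0.3, 0.7]])    # aggressive drivers usually accelerate

def belief_update(b, obs_idx):
    """One Bayes-filter step: predict through T, then weight by the likelihood."""
    predicted = T.T @ b
    updated = Z[:, obs_idx] * predicted
    return updated / updated.sum()

for obs in [1, 1, 0]:          # accelerating, accelerating, decelerating
    belief = belief_update(belief, obs)
    print(dict(zip(states, belief.round(3))))
```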


Exploring Unknown Environments

When a robot enters a new environment, it needs to explore that environment and build a map for future use. Just think about what a Roomba does when we operate it in our home for the first time. Robots can also be used for task-specific information gathering, such as in search and rescue, subject to constraints. Summary:

  • Using perception uncertainty for exploring environments by avoiding hazards
Placeholder image
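
A simple way to picture uncertainty-aware exploration is scoring candidate viewpoints by how much map uncertainty (entropy) they would resolve, minus a penalty for hazards flagged by perception. The sketch below does exactly that; the candidate viewpoints, occupancy probabilities, hazard costs, and weighting are illustrative assumptions.

```python
# Minimal sketch of information-gain-minus-risk viewpoint selection.
import numpy as np

def entropy(p):
    """Binary entropy of occupancy probabilities (clipped to avoid log(0))."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Occupancy probabilities of the cells each candidate viewpoint would observe.
candidates = {
    "doorway":  np.array([0.5, 0.5, 0.6, 0.4]),    # mostly unknown -> informative
    "corridor": np.array([0.1, 0.9, 0.05, 0.95]),  # already well mapped
    "stairs":   np.array([0.5, 0.5, 0.5, 0.5]),    # unknown but risky
}
hazard_cost = {"doorway": 0.2, "corridor": 0.1, "stairs": 5.0}
risk_weight = 1.0

scores = {name: entropy(p).sum() - risk_weight * hazard_cost[name]
          for name, p in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)      # the doorway wins: informative and low risk
```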

After Deployment: Time Series Explanation
Placeholder image


An Interactive Dashboard for Clinicians

Once a system has made a decision, we need to explain why it made that particular decision. Such explanations help end users (e.g., clinicians, human drivers) understand the system.
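
As one simple example of a post-hoc explanation for a time-series decision, the sketch below occludes one window of the signal at a time and measures how much the model's score drops. This is a generic occlusion-based attribution, not necessarily the method behind the dashboard; the `model` and the synthetic heart-rate-like trace are stand-ins (assumptions) for a trained clinical classifier and real patient data.

```python
# Minimal sketch of occlusion-based attribution for a time-series decision.
import numpy as np

def model(x):
    # Placeholder classifier: "abnormal" score grows with late-signal variability.
    return float(np.std(x[80:]) / 15.0)

rng = np.random.default_rng(1)
# Synthetic trace: steady early segment, erratic late segment.
signal = np.concatenate([np.full(80, 70.0),
                         np.full(40, 70.0) + rng.normal(0, 15, 40)])

base_score = model(signal)
window = 20
importance = []
for start in range(0, len(signal), window):
    occluded = signal.copy()
    occluded[start:start + window] = signal.mean()   # blank out one window
    importance.append(base_score - model(occluded))  # score drop = importance

for start, imp in zip(range(0, len(signal), window), importance):
    print(f"t = {start:3d}-{start + window - 1:3d}  importance = {imp:+.3f}")
```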