In computer science, and machine learning in particular, the primary purpose of many systems is to help humans make decisions. Simultaneously, many of these systems also stand to benefit from having a human in the loop, whether it is to reinforce good decisions, warn against bad decisions, or simply to provide expert advice in areas of uncertainty.
A simple, concrete example can be seen in recommender systems. Whether it is through explicit feedback (such as rating a movie on Netflix) or implicit feedback (such as clicking/not clicking on an advertisement), the vast majority of successful, real-world recommender systems are constantly interacting with and adapting to each user.
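The adaptation loop described above can be sketched with a minimal epsilon-greedy bandit, a standard (and deliberately simplified) model of this interaction: each ad is an arm, a click (1) or no-click (0) is the implicit feedback, and the system updates its estimates after every interaction. The class and parameter names here are illustrative, not from any particular system.

```python
import random

class EpsilonGreedyRecommender:
    """Toy adaptive recommender: each ad is a bandit arm, and a
    click (1) / no-click (0) is the implicit feedback signal."""

    def __init__(self, n_ads, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = [0] * n_ads    # times each ad was shown
        self.values = [0.0] * n_ads  # running mean click rate per ad

    def select(self):
        # Explore a random ad with probability epsilon, else exploit
        # the ad with the highest estimated click rate.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, ad, clicked):
        # Incremental mean update from the observed feedback.
        self.counts[ad] += 1
        self.values[ad] += (clicked - self.values[ad]) / self.counts[ad]
```

After enough interactions, the estimated click rates concentrate on the true ones, so the system increasingly shows the ad users actually prefer; this explore/exploit trade-off is exactly the kind of interactive decision making that the theoretical work aims to formalize.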
One important research direction for our group is to mathematically formalize this idea of adaptivity and develop algorithms for interactive decision making that not only boast strong theoretical guarantees, but also perform well in practice.