Abstract: Classification algorithms such as Support Vector Machines (SVMs) output real-valued scores or probabilities, which are then thresholded to produce binary predictions. The prevalence of these machine learning methods in commercial use has raised concerns about their ability to codify and amplify biases present in data. Previous techniques for addressing this problem rely mainly on pre- or post-processing steps; they are often blind to the score functions underlying the predictions, so their fairness guarantees are not robust to changes in thresholding. Furthermore, pre- and post-processing are inherently greedy and may therefore penalize accuracy excessively. In response, we propose the framework of Fair Optimization, which enforces robust fairness at training time. The goal of this approach is to capture independence through constraints that are tractable in an optimization framework, effectively controlling the discriminatory ability of a learner within the learning stage itself. We show that our method finds fair classifiers that retain accuracy on a number of real datasets, and we present generalizations of the method to the unsupervised setting as well.

Bio: Matt Olfat is a PhD candidate in IEOR at UC Berkeley. He received his B.S. in Systems Engineering and Mathematics from the University of Virginia in 2014, and his M.S. in Industrial Engineering and Operations Research from UC Berkeley in 2016. His research interests include fairness in machine learning, applications of machine learning in public policy, and decompositions of high-dimensional datasets.